The Observatory #4: The American Regulatory State’s Murky Future
Plus the implications of India's election, military use of AI on the rise, and more
In a now-famous 1984 case, Chevron v. Natural Resources Defense Council, the US Supreme Court held that when a piece of federal legislation is ambiguous, courts should defer to the reasonable interpretation of the federal agency charged with administering it, rather than substituting their own. But in June, a 6-3 majority of justices divided along ideological lines struck down the Chevron precedent with rulings on two cases—Relentless v. Department of Commerce and Loper Bright Enterprises v. Raimondo, together known as Loper Bright—that characterized the earlier decision as “fundamentally misguided.”
Conservatives have long been itching to see Chevron overturned, arguing that its doctrine vastly expanded the power of the federal bureaucracy. From this perspective, Chevron violated the separation of powers by giving the executive the power to make law, and was fundamentally unworkable because whether a given piece of legislation was ambiguous in the first place was often itself a matter of dispute. Meanwhile, the three dissenting justices claimed that the decision would destabilize government’s role in a variety of areas—from environmental regulation to the provision of public goods—and would give the courts “control over matters they know nothing about.” Both sides agree that Loper Bright shifts power back into the hands of the judiciary; they part ways on whether that is a good thing.
Already, House committees have begun scheduling hearings to discuss the Congressional response, and several U.S. law firms have begun preparing for a new wave of administrative law challenges. The uncertainty around the ruling’s effects seems likely to hinder the creation of new regulations: agencies will become more hesitant in rule-making and look to Congress for more guidance, making it harder for the government to respond to emerging problems and potentially discouraging investment as companies await the fallout of the decision. Agencies are likely to respond to Loper Bright by leaning more heavily on their legal departments, adding commentary to the rule-making record to help ensure that decisions survive judicial review. Lobbying groups and others will now need to deepen their legal expertise in order to supply Congress with more detailed and specific legislative text, and to file more amicus briefs supporting their favored interpretations when lawsuits are inevitably brought.
Just how much Loper Bright will change things is a matter of significant debate. Because Loper Bright makes it more likely that court challenges to rules will succeed, there may be an increase in ideologically motivated lawsuits against federal agencies, especially those involved in environmental, financial, and health regulation. In his majority opinion, Chief Justice Roberts indicated that there would be no need to revisit previous decisions (over 18,000 in all) made under Chevron’s forty-year precedent, but the liberal justices expressed doubts about that in their dissents to both Loper Bright and a separate case decided the following week, Corner Post v. Board of Governors of the Federal Reserve System, which greatly expands plaintiffs’ ability to sue the federal government over agency actions taken long in the past. If the court system becomes flooded with lawsuits that succeed in overturning one longstanding regulation after another, that—in combination with ongoing Congressional gridlock—would create the potential for a massive destabilization of the administrative state. With that said, the Chevron doctrine already had a “major questions” exception, which allowed courts to supersede agencies on matters of “vast economic and political significance.” Since it is possible to argue that a wide variety of rules have such significance, Loper Bright may not expand judicial oversight as much as liberals fear; indeed, the Supreme Court itself has not relied on Chevron since 2016. In addition, a number of agencies are already somewhat insulated from judicial oversight by Congress; for instance, certain key activities of the Department of Health and Human Services (HHS) and the Food and Drug Administration (FDA) fall within an express Congressional delegation of authority (other important activities, however, do not).
Given the forward-looking emphasis of the majority opinion, Loper Bright appears most salient for agency regulation of novel, complex, and fast-moving technologies like AI. A legal challenge to President Biden’s 2023 executive order on AI seems virtually certain, and the ruling could make an already uphill climb for comprehensive AI legislation in Congress even steeper. Congressional policy knowledge and capacity are already stretched thin in general, to say nothing of technical AI expertise, and the loss of Chevron forces lawmakers to decide whether to explicitly delegate the sticky details of legislation to federal agencies or trust themselves and their staff to take on the challenge. These procedural questions throw up yet another hurdle for Democrats and Republicans, whose perspectives on the value of AI regulation are increasingly diverging, to overcome in search of compromise (and offer an easy lever for anti-regulation lobbyists to pull in their quest to defeat a bill). It will be telling to see whether language explicitly delegating authority to agencies starts to appear in any of the more than 80 AI-related bills under consideration this session.
Loper Bright is only the latest in a string of Supreme Court rulings that invite further legal challenges to federal agency powers. If the trend continues, we could see even more radical changes such as a revision of the nondelegation doctrine, which could prevent Congress from delegating any rule-making authority to agencies whatsoever. In the meantime, one thing’s for sure: the decision will almost certainly be a boon for lawyers’ job security.
A vote for stability shows the resilience of the world’s largest democracy
Earlier this year, India conducted the single largest election in human history, returning incumbent prime minister Narendra Modi for a third consecutive term. No other prime minister has won such a mandate since Jawaharlal Nehru, India’s first. Despite this feat, Modi’s third term will afford him less power than before: his Bharatiya Janata Party (BJP) won only 240 of the Lok Sabha’s 543 seats, down from 303 and far short of the supermajority that Modi himself had predicted. To establish a governing majority, Modi has formed a coalition of parties under the National Democratic Alliance (NDA), which holds a narrow majority of 293 seats.
This unexpected drop in BJP support could lead to a more restrained, less nationalistic government line in Modi’s third term. Most of the BJP’s coalition partners are regional federalist parties focused on advocating for the rights and interests of their particular constituencies. The most influential among these are the Telugu Desam Party (TDP) and the Janata Dal (United) (JD(U)), which won 16 and 12 seats in the Lok Sabha and have bases in Andhra Pradesh and Bihar, respectively. Both the TDP and JD(U) are secular parties with substantial Muslim and ethnic minority vote bases, which could lower the likelihood of Hindu nationalist policies from the BJP in Modi’s third term. Furthermore, both parties only joined the NDA in 2024 after periods of alternating between support for and opposition to the BJP. (Chandrababu Naidu, the leader of the TDP, claimed as recently as 2019 that “all leaders are better than Narendra Modi.”) Although India enjoyed relatively stable coalition governments in the three terms before Modi’s rule, maintaining that track record in the current term will likely require a more accommodating BJP willing to devolve power to regional interests.
The diminishment of the BJP’s influence is unlikely to change the Modi government’s posture on the world stage. The cabinet’s four most influential portfolios (home, defense, external affairs, and finance) have retained the same ministers as in Modi’s last term, sending a clear signal of continuity. India’s current government prioritizes mitigating perceived threats from Pakistan and China, and most parties in the NDA do not hold strong anti-Western views. Domestically, however, governance will likely change in two key ways. Most notably, the local interests of the NDA’s coalition parties will significantly slow Modi’s efforts to centralize the Indian government, as those parties seek to retain their autonomy in serving their specific constituencies. Similarly, the new need to accommodate a range of opinions within the government could slow or stall many of Modi’s major national projects across the infrastructure, payments, and energy sectors.
Most of all, the election is a reaffirmation of the world’s largest democracy. Even as the BJP earned accolades for driving Indian and global growth, India had in recent years been classified by the V-Dem index as an “electoral autocracy,” owing to moves like suspending opposition lawmakers from Parliament during important votes and targeted crackdowns on NGOs. It remains to be seen whether a more inclusive and collaborative spirit will prevail in the new term, but with voters making clear that they remain the ultimate authority in India, the kinds of sweeping structural changes to centralize power in the executive that have been the hallmark of past failed democracies now seem unlikely to occur in the next five years.
Prospects dim for comprehensive AI legislation in Congress
After convening a series of nine “Insight Forums” last fall and winter, the Bipartisan Senate Artificial Intelligence Working Group has unveiled its much-anticipated policy roadmap, “Driving U.S. Innovation in Artificial Intelligence.” The roadmap outlines eight priorities intended to guide future AI legislation, running the gamut from innovation to elections, data privacy, national security, and global competitiveness.
Despite the working group’s mandate to consider the risks and benefits of AI holistically, the initial reaction to the roadmap was fairly polarized along industry vs. civil society lines. Industry groups have largely welcomed the roadmap’s emphasis on innovation; TechNet, a lobbying group which represents nearly all of the major AI firms, has praised the report’s funding recommendations—$32 billion in non-defense funding by 2026—and has begun lobbying for associated legislation including the CREATE AI Act. However, prominent civil society and AI ethics organizations including the ACLU and the Leadership Conference on Civil and Human Rights voiced concerns, with some going so far as to release a shadow report arguing that the roadmap shows a bias towards industry interests, lacks enforceable safeguards for civil rights, and fails to provide sufficient protection for marginalized groups. Likewise, AI safety advocates have noted the absence of attention to existential risks, lamenting in particular that the roadmap “encourages committees to undertake additional consideration toward developing frameworks from which legislation could then be derived, rather than contributing to those actionable frameworks directly.”
The roadmap’s priorities allow for flexibility and continued input from various stakeholders, which could still conceivably improve the resulting legislation. However, in floor remarks following the report’s release, Senate Majority Leader Chuck Schumer emphasized that the report would serve as a foundation for numerous smaller pieces of legislation rather than a single comprehensive bill. With just a few months left in the current session and more than 80 pieces of AI legislation in the pipeline, it could take a while to build up anything like a comprehensive and coherent US policy on AI. It also means that hopes of codifying last year’s ambitious Biden executive order in law (thereby preventing it from being overturned by a new administration) will not be realized anytime soon, though some key elements may yet make it through.
While far from the only action taking place in Congress on AI, the Senate working group has been that body’s highest-profile effort to date, given its initiation by Schumer and its bipartisan imprimatur. Unfortunately, there are indications that its House counterpart, the House Task Force on Artificial Intelligence, is splitting along partisan lines. House Majority Leader Steve Scalise has expressed opposition to new AI regulations, with sources reporting that House Republicans would not support legislation that establishes new agencies, creates licensing requirements, allocates funds for research and development, or favors specific technologies. The divide was further highlighted in a recent meeting of Republicans from the House Task Force, where members voiced disapproval of both the White House’s AI executive order and the Senate’s AI roadmap. Since any legislation requires passage in both the Senate and the House, a lack of agreement between House Republicans and Democrats could delay or prevent new regulations and funding.
If the federal government is unable to pass meaningful AI regulation, efforts to do so will be left to the states. In California, several key AI bills are under consideration in the current legislative session. Arguably the most significant of these, SB 1047, proposed by Senator Scott Wiener (D-San Francisco), would require safety testing and make developers liable for large-scale destruction caused by their frontier models; it has become the focus of intraparty drama as industry interests mount a ferocious counter-campaign. Other states have begun setting up their own AI task forces to study possible legislation as well. Some of these bills are certainly better than nothing, but without Congressional leadership on AI, a patchwork of state-level regulation in the United States will struggle to meaningfully contain risks and guide the industry in prosocial directions.
Use of AI in the military is starting to ramp up
Earlier this year, an investigation by +972 Magazine and Local Call revealed the Israeli military’s use of Lavender, an AI-powered program that generates targets for assassination in Gaza. Lavender uses demographic and behavioral features of known militants (such as sharing a WhatsApp group with a known militant or changing phones or addresses frequently) to score each Gaza resident on a 1-100 scale indicating the likelihood that they are a Hamas operative; it reportedly misclassifies a civilian as an operative, or the other way around, about 10% of the time. The system has classed an estimated 37,000 Gazans as suspected Hamas operatives since the beginning of the war, marking them for potential air strikes. In theory, a human soldier makes the final decision on any given target. However, +972 reports that humans were essentially a “rubber stamp” for the machine, which spat out dozens more targets than they could reasonably process in a day. Interviewees from within the Israeli army confessed that they devoted only about 20 seconds to each target (usually just enough time to ensure it was a man) before authorizing a bombing. The report has only fueled continued controversy around Israel’s war in Gaza, which culminated in the International Criminal Court prosecutor seeking arrest warrants in May for both Hamas and Israeli leaders over alleged war crimes.
AI is also becoming core to Ukraine’s war with Russia as the country ramps up its use of partially autonomous, AI-powered drones on the battlefield. Partially autonomous weapons are not new, and drone technology has been crucial for Ukraine since the war’s outset. Now, though, AI is helping drones navigate, avoid enemy soldiers, and carry out targeted bombings even when Russian jamming prevents manual control. It has been particularly impactful in Ukraine’s attempts to bomb key fuel infrastructure, crippling the industries funding Russia’s wartime economy. Ukraine is attempting to reframe itself as a military tech hub with Brave1, a government initiative modeled on the U.S. Department of Defense’s Defense Innovation Unit that focuses on developing AI-powered drones. It also launched the Unmanned Systems Forces, a separate military branch focused exclusively on unmanned warfare, in February 2024.
Although Ukraine’s use of AI has generally been met with less controversy than the Israeli Lavender program, the country’s open discussion of increasingly autonomous weapons has stoked worries of a “suicidal arms race” in which mass killings become ever easier and more commonplace. Currently, in both Israel and Ukraine, human officers still get the final word on an attack or strike. However, as competition ramps up and the need for ever quicker, more robust technology becomes apparent, some fear human control will wane, as it reportedly already has in Israel, or disappear altogether. AI systems could make for more obedient and, as such, more destructive soldiers than humans, with devastating consequences for how war is fought.
To be clear, not everybody thinks AI will have transformative or even largely negative consequences for the military. Some military experts point out that Ukraine’s drones haven’t lived up to the hype. Others argue that more robots and drones in the army might mean fewer human casualties. Whatever happens next, expect it to be steeped in controversy and secrecy as the powers that be battle out a whole new kind of arms race.
The World Health Organization tries to fix its biggest structural challenge
Observers of the World Health Organization (WHO) have long expressed concerns that the organization’s financing is broken. For the 2024-2025 biennium, the WHO’s operating budget is a mere $6.8 billion, from which it must fund a number of ambitious programs: advancing the health-related Sustainable Development Goals, protecting the world against health emergencies, and implementing evidence-based standards for health. For perspective, that budget is smaller than those of several major hospitals in the US. The WHO’s biggest financing challenge, though, relates to how it is funded: a lack of flexible and dependable revenue has become a long-term impediment to institutional effectiveness, and there are active efforts underway to modernize the funding structure.
The WHO is funded through two primary mechanisms: assessed and voluntary contributions. Assessed contributions can be thought of as membership dues; all 196 WHO Members and Associate Members are required to contribute to the WHO’s budget in line with the size of their populations and economies. While assessed contributions were intended to be the primary source of funding for the WHO (and indeed were the main source of financing in the organization’s early years), these membership dues made up just under 13% of the WHO’s 2021 budget. Since the 1980s, the World Health Assembly (the WHO’s decision-making body, which sets its budgetary policies) has enacted a series of measures to curtail increases in assessed contributions, with the result that they ceased keeping pace with inflation. The gap left by shrinking membership dues has had to be filled by voluntary contributions from member states and other partners, such as the Gates Foundation and the private sector. The vast majority of voluntary contributions are tied to narrowly specified programmatic areas and/or geographies for limited timespans, impeding long-term planning and imposing significant overhead costs to ensure that grant reporting requirements are met.
This broken financing structure has motivated the World Health Assembly to reform its approach to assessed contributions. In 2022, the Assembly agreed to recommendations put forth by the Working Group on Sustainable Finance to gradually increase assessed contributions, with the aim of having 50% of the core budget covered by assessed contributions by 2030-31. While this is a promising move toward more dependable and flexible funding, implementation will require the World Health Assembly to approve biennial budgets consistent with that goal. The approved 2024-25 biennium budget does so, but it remains to be seen whether member states will maintain their resolve in the face of nationalistic sentiments and ongoing cost-of-living crises across much of the world.
The World Health Assembly also agreed to organize a first-ever Investment Round to help the WHO deliver on its 14th General Programme of Work for 2025-2028, its guiding mission for the next four years. The Investment Round builds on the recently published investment case, which articulates the organization’s contributions to global health and seeks to convince donors that flexible funding for the WHO represents good value for money. The round will consist of a series of pledging events co-hosted by member states, culminating in a high-level event in November hosted by Brazil alongside the G20 Leaders’ Summit. Launched this past May, the Investment Round aims to raise a total of $7.1 billion in voluntary contributions by the end of 2024. To do so, the WHO is courting new voluntary donors and seeking to loosen the strings attached to existing voluntary contributions. There are some early indications that momentum may be building: at the launch event, Singapore, Ireland, and the EU all announced contributions. There has been little evidence so far that private philanthropists are getting on board with flexible voluntary funding, but we will continue to monitor how they respond to the WHO’s pitch.
The Observatory is our way of answering the question “what’s going on in the world?” through the lens of institutions. This edition features our take on a selection of important and underrated news stories from spring and summer 2024. Please don’t hesitate to get in touch if you’d like to learn more about our work or get involved.