AI Governance: An EIP Primer
The decisions labs and governments make in the coming years might cast a long shadow over the future. Here's what to watch for and how to help.
Last update: June 2024
Seemingly overnight, AI has moved from the realm of science fiction and boring industry market analyses to the top of the news cycle, promising massive changes to society in its wake. And while the hype is intense, it’s not just hype: in the past couple of years, AI has enabled a fundamental breakthrough in biological science, spawned entirely new categories of business, passed the bar exam, helped spur the biggest strike in Hollywood in more than half a century, made a music video, and turbocharged cybercrime with increasingly believable and personalized deepfakes.
This wild pace of progress doesn’t appear likely to let up anytime soon. Recent developments suggest that so-called agents are the next frontier for general purpose AI. Unlike ChatGPT or Copilot, which rely on specific, step-by-step instructions from human users, agents can make decisions and work towards a preset goal with minimal oversight. Perhaps the best-known example is Devin, a coding agent released in March 2024 able to “build and deploy apps end to end.” As these systems begin to interact with our economy and daily lives in more and more salient ways, it’s becoming increasingly challenging to project what things will look like even by the end of this decade, to say nothing of the generations to come.
EIP has treated AI as one of its core focus areas since the inception of the organization. Anticipating the current explosion of interest, our landscape analysis published in March 2022 identified several major tech companies and leading AI research labs, such as Alphabet (including its subsidiaries Google and DeepMind), Amazon, Meta, and OpenAI, as among the most important institutions in the world. What we’ve learned during that time has convinced us of the importance of establishing strong governance norms for this emerging and potentially transformative technology while the window for thoughtful oversight remains open. In this primer, we share the initial conclusions we’re drawing about how best to navigate the thorny path ahead.
Why we think AI governance is important
AI seems to clearly qualify as one of the most important issues in the world for institutions to be focused on right now. There is massive uncertainty about the potential impacts of AI on society, with a huge range of possibilities when it comes to both the degree and the nature of change that may be coming. In the short term, AI models could transform critical public services such as education and policing, reshape labor markets, increase inequalities, and help influence the outcomes of elections and policy debates. The implications become even more drastic once the technology becomes more powerful and bound up with our daily lives, as an advanced AI system deployed without sufficient safeguards could be misused to harm others at massive scale or even escape the control of its creators entirely.
None of us knows if the future has anything like the above in store for us, of course. But from EIP’s perspective, a few factors push in favor of taking that possibility seriously:
The ambitions of the developers. The creators of the most advanced AI models seem dead serious about their potential to usher in a radically different future, with some explicitly working towards building machines with superhuman-level capabilities across a range of practically useful domains.
The relationship between text and code. The most striking achievements of the past few years have been concentrated in large language models (LLMs), which can now write halfway decent poetry and consulting strategy memos. The same technology is allowing AI agents to author increasingly sophisticated computer programs and execute them, which could eventually lead to wild possibilities like recursive self-improvement or the ability for AI systems to make copies of themselves at will.
The uncertainty itself. Of course there are no guarantees that AI will play out in any of the directions that people are predicting or fearing. But if it did, it could easily become one of the most important developments in human history. The very fact that the trajectory seems so murky even just a few years out is a strong reason to pay attention: as a society, it pays to be prepared for a wide range of outcomes.
So that’s why EIP cares about AI. But why focus on AI governance in particular? As long as AI capabilities continue to accelerate faster than society can respond and adapt to them, already tough questions will become increasingly difficult to answer and impossible to avoid. Among the most important of these: how much automation is too much? Some bounds on the answer are already clear: we readily accept our phones’ suggestions of nearby restaurants to try in an unfamiliar place, and virtually all of us would agree that placing AI in charge of nuclear missile launches is a bridge too far. In between, however, lies a vast and uncertain playing field that is only now beginning to be explored. There is a plausible case that we are already close to the optimal level of AI capabilities for society as it exists today, given that the technology’s benefits have to be weighed against an array of harms and disruptions to current ways of life. As a society, we need time for collective sensemaking and agreement about exactly how many, and which, of our decisions we are willing to turn over to a machine – time that AI’s rapid advancement might not allow us.
Urgency is, therefore, one reason to focus on the issue. Another is that the timing is opportune. Evidence suggests that the public is paying attention and concerned: a 2023 poll conducted by Rethink Priorities indicates that 67% of Americans believe that AI is likely to achieve greater than human-level intelligence and 70% believe AI should be regulated. Many policymakers, journalists, and other civic leaders around the globe are forming their first impressions about advanced AI right now, casting about for expertise and guidance as they seek to understand the implications of what is unfolding in the labs and on the ground. It’s an environment in which ideas can quickly gain traction and find their way into corporate practices, draft legislation, formative agreements, and institutional structures that could in turn have long-lasting impacts. At EIP, we expect that many of the foundational laws, regulations, and norms governing the development and use of the technology for decades, if not centuries, to come will be forged over the next few years.
Yet this window of opportunity may close soon. Spurred forward by competitive market dynamics, corporations are racing to hire the best machine learning talent, build the most powerful technology they can, and bring it to as many people as possible in the shortest time they can manage. And as the race in the private sector intensifies, it spills over into geopolitics, putting pressure on governments to accelerate their own efforts in turn. If we wait, it might not be long before many previously attractive-looking strategies or regulations become politically or technologically infeasible. For all these reasons, we think that now is the critical time for institutions to act on AI.
The top institutions for AI governance
One reason why AI is particularly relevant from an institutional perspective is that, at the moment, most of the frontier development of the technology is being pushed forward by a handful of leading research labs housed within or funded by multinational corporations. Decisions by these corporations and the governments that regulate them, therefore, can have incredibly wide-reaching consequences. This is a strong argument in favor of trying to influence those decisions in positive directions, especially since many if not most of those institutions are still figuring out exactly how they want to handle the opportunities and challenges in front of them.
In our estimation, here are the most important institutions for global AI governance as of mid-2024:
The US government holds direct legal jurisdiction over the large majority of the world’s leading AI research labs, meaning that however the US decides to regulate (or not regulate) AI will be tremendously influential, if not decisive, for the trajectory of AI regulation worldwide. Furthermore, as the country boasting the largest economy and most powerful military in the world, anything the US government does to make direct use of an advanced technology like AI could have enormous downstream impacts for better or for worse. Within the US government, we see Congress and the White House as the most consequential actors, and indeed both have been ramping up engagement. In October 2023, the Biden Administration published an executive order detailing the White House’s plan for supporting displaced workers, regulating the safe development of increasingly advanced AI, and more. Half a year later, Senate Majority Leader Chuck Schumer unveiled a roadmap for Congressional regulation of and investment in AI based on months-long discussions with industry and other experts. Critics weren’t impressed, but discussion continues.
The Chinese government is arguably the biggest wildcard in global AI governance. Although China is home to some of the world’s best software engineering talent and many leading technology companies, the US enacted a set of export controls in October 2022 aimed at slowing the country’s progress on AI by targeting the semiconductor supply chain needed to develop and deploy frontier AI models. Among the reasons the US government gave for the measure was the fear that the technology would be used to fuel China’s military and extensive domestic surveillance apparatus. So far, there has been little concrete evidence to suggest that Chinese leaders are all-in on an AI race with the US amid the myriad other challenges the country is facing. Still, everything we’re seeing suggests that the trajectory of the US-China bilateral relationship in the coming years will be a major driver of how much the West invests in military and security-related applications of AI, and, more generally, of the kinds of competitive dynamics that are behind some of the most serious risks from the technology.
Across the private sector, several players stand out. OpenAI, Google DeepMind, and Anthropic are widely considered to have access to the world’s most advanced ML technology and talent, some believe by a significant margin. OpenAI is a household name following the release of ChatGPT, while DeepMind is well known for earlier accomplishments including AlphaGo and AlphaFold. Anthropic is a newer entrant to the field founded by former leaders at OpenAI. All three labs have invested heavily in studying and preparing for potential long-term risks of the technology they’re developing, but they are also the institutions aiming most explicitly at the development of superintelligence.
Among publicly traded companies, Alphabet (which owns and shares its CEO with Google), Microsoft, and Amazon are the most significant, given their status as the largest investors in Google DeepMind, OpenAI (source), and Anthropic (source, source), respectively, and their ability to offer significant deployment platforms and massive user bases for those technologies. The three companies are also the three largest providers of the compute clusters used to train AI systems worldwide. Many of these clusters use chips from Nvidia, which has rocketed to prominence as the world’s second-most valuable company on the strength of its cutting-edge semiconductors. Finally, Meta operates what is widely believed to be the most advanced AI research lab whose leadership is openly dismissive of safety concerns about frontier technologies.
Although it’s not home to any of the frontier labs, the European Union is nevertheless emerging as a significant player due to the Artificial Intelligence Act, the first major binding legislation regulating AI in the Western world. The AI Act uses a risk-based regulatory framework, which can be modified as needed in the future by a new Artificial Intelligence Board attached to the European Commission. EU legislation has a track record of influencing corporate practices and legislation in other jurisdictions due to its large market size and first-mover status, a phenomenon known as the Brussels effect.
What kinds of changes we’d like to see, and who’s working on them
While shaping the trajectory of AI is a complex global challenge, there are pathways for doing so that, at least for now, seem both viable and promising. Here are some of the interventions we’re currently most excited about.
Buying time
In general, we think it’s a pretty safe assumption that a rapid push toward powerful AI systems that we don’t fully understand and whose impacts on the world are impossible to anticipate is very dangerous for society. It probably isn’t realistic or desirable to hold back AI progress entirely, but slowing down advancement on the most powerful general-purpose models, at least temporarily, could allow society an opportunity to catch up to and prepare for the disruptive challenges that current and future systems are likely to create for the world. Enacting smart and well-targeted regulations in consequential settings like the EU and Congress is one way to make this outcome more likely.
The Future Society focuses on operationalizing AI governance for AI system regulators, developers, and deployers. The Future Society was one of the leading advocates for including general-purpose AI systems, or so-called “foundation models,” in the EU AI Act.
Investing in safeguards
Simultaneously, we need to be investing in the technical capacity to understand, evaluate, and hold accountable advanced AI systems and the people and institutions that deploy them. Today, not even experienced researchers understand precisely how the most advanced AI models make decisions, let alone how to anticipate every unintended behavior they might exhibit. That makes research into how models work internally (a field known as mechanistic interpretability), along with rigorous evaluation of new models, crucial for ensuring that AI systems’ goals and behavior are both effective and safe. Transparency is key here: researchers, evaluators, and governments need access to key models and need to know what the companies building them are working on. This approach goes hand in hand with advocating for smart regulations, since regulations cannot be enforced if the mechanisms to enforce them don’t exist.
For the time being, companies developing AI have a significant degree of autonomy over the testing of their models, meaning that it’s up to them to build their own internal guardrails. Many leading firms are attempting to collaborate to set expectations and norms through the Frontier Model Forum, founded by Anthropic, Google, Microsoft and OpenAI, and subsequently joined by Amazon and Meta. Nonetheless, questions remain over how committed leading developers are to safety goals and social responsibility, especially in light of controversies like the various high-profile resignations from OpenAI specifically citing safety as a concern.
Informing policymakers
Governments will be involved not only in designing the legislation that sets the rules of the road for AI developers, but just as importantly in the ongoing enforcement of those rules. To do that well, we need people in government who both deeply understand cutting-edge technology and deeply care about the welfare of the people impacted by their decisions. And those leaders will need an expansive toolkit to understand broad societal preferences and the likely impacts of various policies on both constituents and the world at large.
Organizations like TechCongress, the Center for the Governance of AI (GovAI), and the Centre for Long-Term Resilience all help to bridge the gap between policymakers and domain experts via interventions such as workshops, reports, and talent placements.
Cooling tensions
Because the more advanced AI gets, the harder it could become to control, there are good reasons to think that engaging in an AI capabilities race isn’t in anyone’s strategic interest. Since so much of the incentive to race is driven (or excused) by the geopolitical rivalry between the West and China, strategies that avoid further escalation of those tensions and clear a pathway toward improved multilateral cooperation could be strong investments.
IDAIS (International Dialogues on AI Safety), a program of SAIF (Safe AI Forum), has convened two distinctly international summits of AI leaders in the UK and Beijing, with a third event planned for fall 2024. The program has helped forge links between AI safety experts in the West and China.
Key developments to watch
Will open-source AI models keep pace with advances from the top labs? Devin, the coding agent you read about earlier, was overtaken by an open-source competitor called SWE-Agent less than a month after its release. While the three leading labs have kept their models under wraps and are indeed increasing cybersecurity protections, other players like Hugging Face and Meta (whose Llama 3 model is open source) are out to prove that open-source models can keep up. Open-source models come with the benefit of increased transparency and opportunities for collaboration, but they also increase the risk that the technology will be misused, since criminals and rogue states like North Korea can use them just like anyone else.
How will Congress regulate increasingly advanced AI? The Biden administration’s executive order on AI is comprehensive and well-regarded by civil society groups, but it is vulnerable to repeal at any time by a future administration (and indeed, Donald Trump has promised to strike it down on day one if elected). Congressional action is the most durable path forward for AI regulation in the US, but it’s unclear if the political will exists to stand up to the industry lobbyists rapidly descending upon Washington.
At what point will OpenAI release GPT-5? Following a spate of media coverage in 2023 focusing on the potential for extreme risks from superintelligent AI, including an explicit call to pause development on all models “more powerful than GPT-4,” OpenAI CEO Sam Altman disclosed in June 2023 that training for GPT-5 hadn't yet begun. However, OpenAI filed an application for "GPT-5" with the US Patent and Trademark Office on July 18, 2023. The application indicates an intention to integrate audio into the model’s capabilities, pushing the frontier of generative AI further into multimodality. At the time of writing, Metaculus forecasters are predicting that OpenAI will announce GPT-5 in October 2024. Monitoring GPT-5’s progress and capabilities should serve as a useful benchmark for tracking the rate of development of the most advanced AI systems.
What will be China’s approach to AI regulation? In early April 2023, the Cyberspace Administration of China released for public comment a draft of its “Administrative Measures for Generative Artificial Intelligence Services,” which attracted notice for proposing unexpectedly tough restrictions on general-purpose language models like Baidu’s Ernie. Since then, the government has released “Interim Measures for the Management of Generative Artificial Intelligence Services,” which are substantially less prescriptive and contain a clause exempting organizations “that develop and apply generative AI technologies without providing Generative AI Services to the domestic general public” from the legislation, meaning that research groups and corporations (not to mention the state) can study and develop models with minimal restriction so long as those models aren’t released to the public. Although the language of the interim measures speaks mostly to issues of economic growth and wellbeing, the regulations have ramifications for geopolitics as well.
Will momentum build toward a binding international agreement on AI? Japan put AI regulation on the international radar with the Hiroshima Process, a regulatory framework endorsed by G7 leaders at a May 2023 summit. A larger gathering of AI leaders took place in the UK in November 2023 and resulted in the Bletchley Declaration, signed by 28 attending countries, including the US and China, as well as the EU. In September 2024, the UN will hold the Summit of the Future, where AI, sustainability, peace and security, and global governance will all be on the agenda. The shared commitments thus far are directionally promising, but not legally binding; in the meantime, the question of whether and when the world will get serious about global governance of AI looms large over the future.
What you can do to help
If you're a funder: With the help of colleagues across dozens of organizations, Effective Institutions Project has put together a funder’s guide to AI governance and strategy. We hope this will be a useful resource for funders beginning their learning journey on AI as well as those looking to expand into new areas of the ecosystem.
If you work at one of the key institutions mentioned above: We’d love to talk! The specifics of what you can do to help will depend greatly on your role, strengths, and networks, so for best results we encourage you to talk with us 1-on-1. To express interest, please fill out the form here.
If you're looking for a more impactful career: Several of our talent partners are actively coaching people who are transitioning into AI-related work from studies or a previous career focus. If you’d like to join them, feel free to reach out for advising with 80,000 Hours or take this course in AI safety fundamentals at BlueDot Impact. You could also check out this guide to AI policy and strategy careers.
If you'd just like to stay in the loop: Besides continuing to read The Observatory, to get a level deeper on AI strategy and governance we suggest following Zvi Mowshowitz’s regular news roundups, Ethan Mollick’s newsletter, and the Center for AI Safety newsletter, among others.
The impacts of AI on the world could be miraculous, disastrous, or anything in between (including a combination of disasters and miracles!). No one knows exactly how it will play out, but building smart foundational governance systems now will help us guard against the very worst outcomes while enabling leaders to react more quickly and wisely to novel developments. In the coming months, we’ll continue to monitor developments at the intersection of AI and institutions and share any updates to our assumptions and premises as they occur.
If you are interested in collaborating with us to improve global institutional decision-making on AI and other issues, please contact us. You can also join our Slack channel and find us on Twitter/X.