AI Governance: An EIP Primer
The decisions labs and governments make over the next 12 months might cast a long shadow over the future. Here's what to watch for and how to help.
Seemingly overnight, AI has moved from the realm of science fiction and boring industry market analyses to the top of the news cycle, promising massive changes to society in its wake. And while the hype is intense, it’s not just hype: in the past couple of years, AI has enabled a fundamental breakthrough in biological science, spawned entirely new categories of business, disrupted the classroom experience for millions of students and their teachers, helped spur the biggest strike in Hollywood in more than half a century, and turbocharged cybercrime with increasingly believable and personalized deepfakes.
This wild pace of progress doesn’t appear likely to let up anytime soon. Reporting and public documents suggest that the next frontier for general-purpose AI models will involve pulling in text, images, audio, and eventually video into one multimodal system, with the first salvo likely to come from Google’s Gemini project in a few months. As these systems begin to interact with our economy and daily lives in more and more salient ways, it’s becoming increasingly challenging to project what things will look like even by the end of this decade, to say nothing of the generations to come.
EIP has treated AI as one of its core focus areas since the inception of the organization more than two years ago. Anticipating the current explosion of interest, our landscape analysis published in March 2022 identified several major tech companies and leading AI research labs, such as Alphabet (including its subsidiaries Google and DeepMind), Amazon, Meta, and OpenAI, as among the most important institutions in the world. What we’ve learned during that time has convinced us of the importance of establishing strong governance norms for this emerging and potentially transformative technology while the window for thoughtful oversight remains open. In this primer, we share the initial conclusions we’re drawing about how best to navigate the thorny path ahead.
Why we think AI governance is important
AI clearly qualifies as one of the most important issues in the world for institutions to focus on right now. There is massive uncertainty about the potential impacts of AI on society, with a huge range of possibilities when it comes to both the degree and the nature of change that may be coming. In the short term, AI models could transform critical public services such as education and policing, reshape labor markets, increase inequalities, and help influence the outcomes of elections and policy debates. The implications become even more drastic once the technology becomes more powerful and bound up with our daily lives, as an advanced AI system deployed without sufficient safeguards could be misused to harm others at massive scale or even escape the control of its creators entirely.
None of us knows if the future has anything like the above in store for us, of course. But from EIP’s perspective, a few factors push in favor of taking that possibility seriously:
The ambitions of the developers. The creators of the most advanced AI models seem dead serious about their potential to usher in a radically different future, with some explicitly working towards building machines with superhuman-level capabilities across a range of practically useful domains.
The relationship between text and code. The most striking achievements of the past few years have been concentrated in large language models (LLMs), which can now write halfway decent poetry and consulting strategy memos. The same technology is allowing AI agents to author increasingly sophisticated computer programs and execute them, which could eventually lead to wild possibilities like recursive self-improvement or the ability for AI systems to make copies of themselves at will.
The uncertainty itself. Of course, there are no guarantees that AI will develop in any of the directions people are predicting or fearing. But if it did, it could easily become one of the most important developments in human history. The very fact that the trajectory seems so murky even a few years out is itself a strong reason to pay attention: as a society, it pays to be prepared for a wide range of outcomes.
So that’s why EIP cares about AI. But why focus on AI governance in particular? As long as AI capabilities continue to accelerate faster than society can respond and adapt, already tough questions will become increasingly difficult to answer and impossible to avoid. Among the most important of these: how much automation is too much? Some bounds on the answer are already clear: we readily accept our phones’ suggestions of nearby restaurants to try in an unfamiliar place, and virtually all of us would agree that placing AI in charge of nuclear missile launches is a bridge too far. In between, however, lies a vast and uncertain playing field that is only now beginning to be explored. There is arguably a case that we are already close to the optimal level of AI capabilities for society as it exists today, given that the technology’s benefits must be weighed against an array of harms and disruptions to current ways of life. As a society, we need time to reach collective agreement about exactly how many, and which, of our decisions we are willing to turn over to a machine – time that AI’s rapid advancement might not allow us.
Urgency is, therefore, one reason to focus on the issue. Another is that the timing is opportune. Evidence suggests that the public is paying attention and concerned: a 2023 poll conducted by Rethink Priorities indicates that 67% of Americans believe that AI is likely to achieve greater than human-level intelligence and 70% believe AI should be regulated. Many policymakers, journalists, and other civic leaders around the globe are forming their first impressions about advanced AI right now, casting about for expertise and guidance as they seek to understand the implications of what is unfolding in the labs and on the ground. It’s an environment in which ideas can quickly get traction and find their way into corporate practices, draft legislation, formative agreements, and institutional structures that could in turn have long-lasting impacts. At EIP, we expect that many of the foundational laws, regulations, norms, etc. governing the development and use of the technology for decades if not centuries to come will likely be forged over the next few years.
Yet this window of opportunity may close soon. Spurred on by competitive market dynamics, corporations are racing to hire the best machine learning talent, build the most powerful technology they can, and bring it to as many people as possible in the shortest possible time. As the race in the private sector intensifies, it also reshapes geopolitical dynamics, putting mounting pressure on governments to accelerate in turn. If we wait, it might not be long before many previously attractive strategies or regulations become politically or technologically infeasible. For all these reasons, we think now is the critical time for institutions to act on AI.
The top institutions for AI governance
One reason why AI is particularly relevant from an institutional perspective is that, at the moment, most of the frontier development of the technology is being pushed forward by a handful of leading research labs housed within or funded by multinational corporations. Decisions by these corporations and the governments that regulate them, therefore, can have incredibly wide-reaching consequences. This is a strong argument in favor of trying to influence those decisions in positive directions, especially since many if not most of those institutions are still figuring out exactly how they want to handle the opportunities and challenges in front of them.
In our estimation, here are the most important institutions for global AI governance as of fall 2023:
The US government holds direct legal jurisdiction over the large majority of the world’s leading AI research labs, meaning that however the US decides to regulate (or not regulate) AI will be tremendously influential, if not decisive, for the trajectory of AI regulation worldwide. Furthermore, as the country boasting the largest economy and most powerful military in the world, anything the US government does to make direct use of an advanced technology like AI could have enormous downstream impacts for better or for worse. Within the US government, we see Congress and the White House as the most consequential actors, and indeed both have been ramping up engagement lately: Senate Majority Leader Chuck Schumer is convening a series of AI Insight Forums this fall seemingly in preparation for a bigger bipartisan legislative push, while the White House has worked to secure voluntary commitments on responsible development from leading AI companies and is preparing an executive order addressing risks of the technology and offering guidelines to federal agencies on how to use it.
The Chinese government is arguably the biggest wildcard in global AI governance. Although China is home to some of the world’s best software engineering talent and many leading technology companies, the US enacted a set of export controls in October 2022 aimed at slowing the country’s progress on AI by targeting the semiconductor supply chain needed to develop and deploy frontier AI models. Among the reasons given by the US government for the policy measure was the fear that the technology would be used to fuel China’s military and domestic surveillance apparatus, which is extensive. So far, there has been little concrete evidence that Chinese leaders are all-in on an AI race with the US amid the myriad other challenges the country faces. Everything we’re seeing suggests that the trajectory of the US-China bilateral relationship in the coming years will be a major driver of how much the West invests in military and security-related applications of AI – and, more generally, of the competitive dynamics behind some of the most serious risks the technology poses.
Across the private sector, several players stand out. OpenAI, Google DeepMind, and Anthropic are widely considered to have access to the world’s most advanced ML tech and talent, some believe by a significant margin. OpenAI is a household name following the release of ChatGPT, while DeepMind is well known for earlier accomplishments including AlphaGo and AlphaFold. Anthropic is a newer entrant to the field founded by former leaders at OpenAI. All three labs are active in research and policy conversations regarding potential long-term risks of the technology they’re developing, and OpenAI and Anthropic have unusual governance systems that ostensibly limit the control that financial investors have over the company and/or cap the profits that it can return to investors. Notably, these labs are the institutional entities that are right now most explicitly aiming toward the development of superintelligence.
Among publicly-traded companies, Alphabet (which owns and shares its CEO with Google), Microsoft, and Amazon are the most significant given their status as the largest investors in Google DeepMind, OpenAI (source), and Anthropic (source, source) and ability to offer significant deployment platforms and massive user bases for those technologies. The three companies, not coincidentally, are also the three largest providers of compute clusters used to train AI systems worldwide. Meta, meanwhile, has what is widely believed to be the most advanced AI research lab working under leadership that is openly dismissive of safety concerns from frontier technologies.
Although it’s not home to any of the frontier labs, the European Union is nevertheless emerging as a major player because it’s about to become the first major Western governmental body to explicitly regulate the use of AI. The EU’s Artificial Intelligence Act is currently in the last phase of negotiations before becoming law, known as a “trilogue” in EU parlance. The AI Act uses a risk-based regulatory framework, which can be modified as needed in the future by a new Artificial Intelligence Board attached to the European Commission. EU legislation has a track record of influencing corporate practices and legislation in other jurisdictions due to its large market size and first-mover status, a phenomenon known as the Brussels effect.
What kinds of changes we’d like to see, and who’s working on them
While shaping the trajectory of AI is a complex global challenge, there are pathways for doing so that, at least for now, seem both viable and promising. Here are some of the interventions we’re currently most excited about.
Buying time
In general, we think it’s a pretty safe assumption that a rapid push toward powerful AI systems that we don’t fully understand and whose impacts on the world are impossible to anticipate is very dangerous for society. It probably isn’t realistic or desirable to hold back AI progress entirely, but slowing down advancement on the most powerful general-purpose models, at least temporarily, could allow society an opportunity to catch up to and prepare for the disruptive challenges that current and future systems are likely to create for the world. Enacting smart and well-targeted regulations in consequential settings like the EU and Congress is one way to make this outcome more likely.
The Future Society, one of EIP’s current grant recommendations, focuses on operationalizing AI governance for AI system regulators, developers, and deployers. The Future Society has been one of the leading advocates for including general-purpose AI systems, or so-called “foundation models,” in the proposed EU AI Act.
Investing in safeguards
Simultaneously, we need to be investing in the technical capacity to understand, evaluate, and hold accountable advanced AI systems and the people and institutions that deploy them. This approach goes hand-in-hand with advocating for smart regulations, as regulations cannot be enforced if the mechanisms to enforce them don’t exist.
For the time being, companies developing AI have a significant degree of autonomy over the testing of their models, meaning that it’s up to them to build their own internal guardrails. Many leading firms are attempting to collaborate to set expectations and norms through the Frontier Model Forum.
Educating policymakers
Governments will be involved not only in designing the legislation that sets the rules of the road for AI developers, but just as importantly in the ongoing enforcement of those rules. To do that well, we need people in government who both deeply understand cutting-edge technology and deeply care about the welfare of the people impacted by their decisions.
Organizations like the Horizon Institute for Public Service, the Center for the Governance of AI (GovAI), and the Centre for Long-Term Resilience all help to bridge the gap between policymakers and domain experts via interventions such as workshops, reports, and talent placements.
Cooling tensions
Because AI could become harder to control as it grows more advanced, there are good reasons to think that engaging in an AI capabilities race isn’t in anyone’s strategic interest. Since so much of the incentive to race is driven (or excused) by the geopolitical rivalry between the West and China, strategies to avoid further escalation of those tensions and clear a pathway toward improved multilateral cooperation could be strong investments. (This is a very complicated topic that deserves its own newsletter at the very least, but we figured we’d give you our topline view now.)
Concordia AI is a Beijing-based social enterprise focused on AI safety and governance. The group has helped forge links between AI safety experts in the West and Chinese tech companies and think tanks such as Tsinghua University’s Institute for AI International Governance.
Key developments to watch
Will open source AI models keep pace with advances from the top labs? While the three leading labs have kept their models under wraps – and are indeed increasing cybersecurity protections – other players have taken a different tack. Meta has released its AI model, Llama 2, as open source; Hugging Face hosts a broad ecosystem of open models; and Amazon’s approach to AI bets on the increased dominance of third-party open-source models, which the company plans to offer customers in conjunction with Amazon SageMaker JumpStart. Although open source models come with the benefit of increased transparency and opportunities for collaboration, they raise the likelihood of misuse of the technology, since criminals and rogue states like North Korea can use them just like anyone else.
Will the EU AI Act keep in place restrictions on general purpose AI? The AI Act is currently in the final stage of negotiations, which involves reconciling three different versions endorsed by the European Commission, the Council of the European Union, and the European Parliament. Importantly, only the European Parliament’s version explicitly regulates foundation models (i.e., large-scale general purpose AI systems like GPT-4). Tech rivals like Google and Microsoft are joining forces to lobby the EU not to impose requirements like pre-deployment safety testing on general purpose systems, arguing that the legislation should only apply to those deploying their models in “risky” ways. Whether these provisions survive into the final legislation is likely to have important implications for future regulation in the US and elsewhere.
At what point will OpenAI release GPT-5? Following a spate of media coverage earlier this year focusing on potential for extreme risks from superintelligent AI, including an explicit call to pause development on all models “more powerful than GPT-4,” OpenAI CEO Sam Altman disclosed in June that training for GPT-5 hadn’t yet begun. However, OpenAI filed an application for “GPT-5” with the US Patent and Trademark Office on July 18. The application indicates an intention to integrate audio into the model’s capabilities, pushing the frontier of generative AI further into multimodality. At the time of writing, Metaculus forecasters are predicting that OpenAI will announce GPT-5 in October 2024. Monitoring GPT-5’s progress and capabilities should serve as a useful benchmark for tracking the rate of development of the most advanced AI systems.
Will there be concrete commitments coming out of the UK AI Safety Summit? The UK AI Safety Summit was announced in early June as a global nexus for increasing coordination on AI safety efforts. The United Kingdom has allocated £100 million to build an expert working group to help the country contribute to AI capabilities and safety development. Of note, the UK government has taken a uniquely strong lead on addressing AI safety, with Prime Minister Rishi Sunak calling a meeting with the CEOs of OpenAI, DeepMind, and Anthropic on the topic earlier this year. Although few details about the event have been made public, the summit could serve a key role in building international consensus on AI safety policy, offering an early opportunity to set foundational regulatory norms.
What will be China’s approach to AI regulation? In early April 2023, the Cyberspace Administration of China released a call for comment on its document outlining “Administrative Measures for Generative Artificial Intelligence Services,” which attracted notice for proposing unexpectedly tough restrictions on general-purpose language models like Baidu’s Ernie. Since then, the government has released “Interim Measures for the Management of Generative Artificial Intelligence Services,” which are substantially less prescriptive and contain a clause exempting organizations “that develop and apply generative AI technologies without providing Generative AI Services to the domestic general public” from the legislation. This means that research groups and corporations (not to mention the state) can study and develop models with minimal restriction so long as they aren’t released to the public. Although the language of the interim measures speaks mostly to issues of economic growth and wellbeing, the regulations have ramifications for geopolitics as well.
What you can do to help
If you're a funder: With the help of colleagues across dozens of organizations, Effective Institutions Project has put together a funder’s guide to AI governance and strategy. We hope this will be a useful resource for funders beginning their learning journey on AI as well as those looking to expand into new areas of the ecosystem.
If you work at one of the government institutions or tech companies mentioned above: We’d love to talk! The specifics of what you can do to help will depend greatly on your role, strengths, and networks, so for best results we encourage you to talk with us 1-on-1. To express interest, please fill out the form here.
If you're looking for a more impactful career: Several of our talent partners are actively coaching people who are transitioning into AI-related work from studies or a previous career focus. If you’d like to join them, feel free to reach out for advising with 80,000 Hours or Successif (for mid-career talent). You could also check out this guide to AI policy and strategy careers.
If you'd just like to stay in the loop: Besides continuing to read The Observatory, to get a level deeper on AI strategy and governance we suggest following Zvi Mowshowitz’s regular news roundups, Ethan Mollick’s newsletter, the EU AI Act Newsletter, and the Center for AI Safety newsletter, among others.
The impacts of AI on the world could be miraculous, disastrous, or anything in between (including a combination of disasters and miracles!). No one knows exactly how it will play out, but building smart foundational governance systems now will help us guard against the very worst outcomes while enabling leaders to react more quickly and wisely to novel developments. In the coming months, we’ll continue to monitor developments at the intersection of AI and institutions and share any updates to our assumptions and premises as they occur.
If you are interested in collaborating with us to improve global institutional decision-making on AI and other issues, please contact us. You can also join our Slack channel and find us on Twitter/X.