The Observatory #2: So, AI Had Kind of a Big Week…
Plus reading the tea leaves on a second Trump administration, decoding the governance of Anthropic, checking in on global efforts to head off the next pandemic, and more.
No no, not that one! We’re talking about roughly the week between October 26 and November 2, when, among other things, the UK hosted the world’s first government-led AI safety summit, the US issued an executive order regulating advanced AI, six leading AI research labs published AI safety policies, the US and UK each announced government-sponsored AI safety research institutes, the US Vice President’s office announced a $200 million collaboration with private funders on trustworthy AI, leading Chinese scientists joined a statement calling for tighter international AI regulation, the UN introduced the members of its High-Level Advisory Body on AI, and the G7 announced a new code of conduct for AI developers. It was so much so fast that a leading AI governance grantmaker jokingly called for a “complete and total shutdown of AI policy news until we can catch up on reading and figure out what the hell is going on.”
So what the hell was going on? In our read, there were two particularly important developments over the course of the week. First, President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence is the first regulatory action by any major Western government that explicitly targets the most advanced (“frontier” or “foundation”) AI models. The executive order takes a multifaceted and holistic approach to AI regulation, foregrounding issues of AI safety and security, privacy, civil rights, consumer and labor rights, innovation and competition, and American leadership. Most importantly, the executive order invokes the Defense Production Act to impose reporting requirements on the most advanced models. Companies training models above a compute threshold of 10^26 floating-point operations (FLOP) must inform the federal government and formally report the results of red-teaming exercises and physical and cybersecurity protections. The choice to index on compute is significant because it is measurable and instrumentally useful as a predictor of model capabilities. Although today’s most cutting-edge models do not yet meet the threshold for the new requirements, there is a lower threshold (10^23 FLOP) for models trained primarily on biological sequence data due to biosecurity concerns.
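For a rough sense of where those thresholds sit, the training compute of a dense model is often approximated with a rule of thumb of about six floating-point operations per parameter per training token. The sketch below applies that heuristic to a purely hypothetical model size and dataset; it is only a back-of-the-envelope illustration, not the measurement methodology the EO prescribes.

```python
# Back-of-the-envelope check against the EO's compute thresholds,
# using the common ~6 * parameters * tokens approximation for
# dense transformer training. The model size and token count below
# are purely illustrative assumptions, not any real training run.

EO_GENERAL_THRESHOLD = 1e26  # FLOP threshold for general-purpose models
EO_BIO_THRESHOLD = 1e23      # lower threshold for biological sequence models

def training_flop(parameters: float, tokens: float) -> float:
    """Approximate training compute: ~6 FLOP per parameter per token."""
    return 6 * parameters * tokens

# Hypothetical 500-billion-parameter model trained on 10 trillion tokens
flop = training_flop(500e9, 10e12)
print(f"Estimated training compute: {flop:.1e} FLOP")
print(f"Exceeds 10^26 reporting threshold? {flop > EO_GENERAL_THRESHOLD}")
# Estimated training compute: 3.0e+25 FLOP
# Exceeds 10^26 reporting threshold? False
```

On that rough arithmetic, even a model of that hypothetical scale would still fall under the 10^26 line, consistent with the observation that the reporting requirements do not yet bind on today’s publicly known models.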
Under the EO, the Department of Commerce is responsible for devising and updating the technical conditions that define frontier models, and large computing clusters must be reported to department officials. Additionally, a know-your-customer regime is to be implemented for foreign users of US-based cloud compute services. The EO also establishes a White House AI Council, requires agencies to appoint Chief AI Officers, and creates an interagency council of those officers to facilitate policy coordination.
The success of an executive order depends greatly on how well it is implemented, and the EO’s ambitious deadlines may pose a challenge for the federal bureaucracy. It’s also important to remember that executive orders do not come with funding and can always be overturned by the next president. Even so, the EO represents a major step forward in light of the US government’s importance to any global solution for AI governance.
Second, on November 1-2, the UK hosted the first global summit on AI safety, bringing together senior government officials from nations with leading AI labs, notably including both the US and China, alongside AI technology companies, researchers, and civil society actors. The event generated an absolute blizzard of announcements and commitments. Among the most significant:
The UK announced the transition of the Frontier AI Taskforce, the organizing entity behind the summit, into a permanent UK AI Safety Institute with the mandate to “fully test new types of frontier AI before and after they are released to address the potentially harmful capabilities of AI models.” Not to be outdone, the US announced its own AI Safety Institute housed within the National Institute of Standards and Technology (NIST) to support the Department of Commerce’s new model evaluation responsibilities under Biden’s executive order.
In addition, summit organizers announced an agreement between eleven “like-minded” governments and eight key AI companies to share model access and facilitate testing by the safety institutes above. Taken together, these developments represent a major escalation of Western-style democracies’ ability to monitor and flag risks posed by AI models developed within their jurisdictions.
In the lead-up to the summit, six frontier AI developers (Google DeepMind, Anthropic, OpenAI, Microsoft, Amazon and Meta) published safety policies across nine axes including “responsible capability scaling,” “model evaluations and red teaming,” and “model reporting and information sharing.” While these commitments are voluntary and have some gaps, they represent progress for some of the companies that had been less known for taking risks from advanced AI seriously.
The marquee outcome from the Summit, the so-called Bletchley Declaration signed by all 28 nations in attendance, is notable as the first explicit acknowledgement of potentially catastrophic risks associated with frontier AI models by the most influential nations in the AI space. However, as a symbolic statement it should probably be seen as carrying less weight than the items above, as demonstrated by signatory nations France and the UK coming out against new regulations on frontier models mere weeks after the event.
Overall, the summit seems to have laid significant groundwork for future concrete and meaningful commitments to AI safety, though for the most part the commitments themselves will have to wait. (As a result, the EIP/Metaculus forecasting question on this topic resolved no, although it was an unusually close call.) Two further gatherings were announced for 2024 in South Korea and France, and we’ll be watching closely to see whether those opportunities continue to bear fruit.
It’s time to start thinking about a second Trump administration
With less than a year to go until the 2024 US presidential election, Donald Trump not only holds a commanding lead in Republican primary polling but is increasingly edging Joe Biden in general election polling matchups as well. If this seems surprising to those who have been following Trump’s four criminal indictments and seemingly endless court appearances, it’s not to the coterie of allies who have been sketching out the details of a second Trump term since almost the day he left office.
Two major political operations have emerged to help the next Republican president prepare for the first weeks of a new administration. Project 2025, led by the Heritage Foundation and supported by a large coalition of conservative organizations, aims to provide a “governing agenda and the right people in place, ready to carry this agenda out on day one of the next conservative administration.” The effort is led by Paul Dans, chief of staff of the Office of Personnel Management during the Trump administration, and Spencer Chretien, Trump’s former special assistant and associate director of Presidential Personnel. Project 2025 has released a 180-day playbook, developed a policy agenda, designed a curriculum for political appointees, and is compiling a talent database for the next conservative administration. Meanwhile, a competing group more explicitly associated with Trumpworld, the America First Policy Institute’s America First Transition Project, is setting up a parallel infrastructure led by Michael Rigas and Doug Hoelscher (both Trump administration alumni) as well as Martin Marks. The America First Transition Project is focused on crafting “personnel, policy, financial, and administrative strategies” for every federal department and agency.
While the two think tanks may be putting together different slates of preferred nominees for federal agencies and different drafts of executive orders, on substance they are largely aligned. Both efforts are pushing hard to install loyal political appointees to facilitate the implementation of Trump’s or another Republican president’s agenda – for example, Project 2025 is working with America First Legal, a venture led by former Trump policy aide Stephen Miller, to vet ideologically like-minded lawyers to fill the top legal positions at departments like the Pentagon, Homeland Security, HHS, Labor, and Commerce.
More controversially, there are ambitions to simultaneously reduce the influence of established politicos and career bureaucrats who are seen as part of the “deep state.” Central to this plan is Schedule F, a Trump-era executive order that would have reclassified thousands of federal civil servants in policy roles as political appointees. For context, during a typical presidential transition approximately 4,000 politically appointed workers are replaced, whereas under Schedule F it has been estimated that as many as 50,000 federal posts would be subject to political whims. The original executive order was issued too late in Trump’s presidency to be implemented and was promptly rescinded by Biden upon taking office, but it is reported that Trump would immediately reimpose Schedule F if elected again. The goal of reimposing this policy would seemingly be to fill the policy ranks of the federal bureaucracy with workers who are more responsive to the president’s demands and less likely to impede potentially controversial policies. While this is especially salient in the case of a norm-busting leader like Trump (who has repeatedly spoken about exacting revenge on his political enemies if he were to be elected again), it is worth noting that Schedule F and similar measures enjoy broad support among Republican presidential candidates, so regardless of who wins the nomination it is likely that civil service reform will be on the agenda.
Anthropic tries to go corporate without selling out
Two years prior to Sam Altman’s sudden ouster by OpenAI’s board last month, Altman allegedly survived another attempt to end his leadership of the company he co-founded. According to five people familiar with the matter, a group of employees went to OpenAI’s board to try to get Altman removed, frustrated by what they saw as creeping commercialism driven by changes to OpenAI’s management structure meant to draw in capital from large investors, most notably Microsoft. When the board refused to act, those employees left the company to found Anthropic. (Anthropic released a statement denying that its co-founders had attempted to remove Altman from OpenAI, but not that they had raised these concerns with the board.)
Fast-forward to December 2023, and Anthropic’s flagship generative AI model, Claude, lags only OpenAI’s GPT-4 on global AI capability rankings. Despite the distrust of commercial interests behind its origin story, Anthropic has ended up walking the same path as its bigger sibling when it comes to working with corporate investors, with the volume ramping up dramatically in the past quarter. In October, Google announced a $2 billion commitment to the company, bringing its total investment in 2023 to over $2.5 billion even as it continues to pour money into its own leading lab, Google DeepMind. A month earlier, Amazon announced a $1.25 billion investment in Anthropic that could balloon to $4 billion by the first quarter of 2024. Other investors including South Korea’s SK Telecom, Spark Capital, Salesforce, and Zoom have pitched in to the tune of at least $550 million in 2023 alone.
What does this flood of money mean for who has control over Anthropic’s decisions? Like OpenAI, Anthropic has a complex governance structure. It is registered as a Delaware public benefit corporation (PBC) which is governed by a board of directors. The PBC status requires that the entity be “managed in a manner that balances the stockholders’ pecuniary interests, the best interests of those materially affected by the corporation’s conduct, and the public benefit or public benefits identified in its certificate of incorporation.” Anthropic has defined the latter as the “responsible development and maintenance of advanced AI for the long-term benefit of humanity.”
Little is known about the shareholders of the PBC. While Google acquired about 10% of the Anthropic PBC as part of its first investment, it is plausible that the recent $2 billion commitment has increased that figure. Amazon similarly acquired a minority stake as part of its recent investment. Even though PBC status legally provides leeway to balance profits against the public interest, Anthropic acknowledges that the PBC “does not make the directors of the corporation directly accountable to other stakeholders or align their incentives with the interests of the general public.” To address this, Anthropic earlier this year established a “Long-Term Benefit Trust,” which holds a special class of stock in Anthropic that cannot be sold, according to Vox. While the trust does not have a direct role in governing Anthropic PBC, it does have two key capabilities:
Elect or remove directors. At the time of establishment, the trust controls one director seat. This number will grow to three – the majority of the board – “according to time- and funding-based milestones” and at the latest in September 2027.
The trustees may receive “advance notice of certain key actions by the board that may materially affect the business of the company or its organization” and “are granted broad power to request any information or resources that are reasonably appropriate to the accomplishment of the Trust’s purpose.”
Anthropic PBC is governed by a board of five directors, which currently comprises Anthropic CEO Dario Amodei, Anthropic President Daniela Amodei, Open Philanthropy AI policy grantmaker Luke Muehlhauser, and Spark Capital General Partner Yasmin Razavi. The trust will elect the fifth director this fall and will eventually also elect the seats currently held by Daniela Amodei and Luke Muehlhauser. The Long-Term Benefit Trust, meanwhile, is governed by five trustees: Jason Matheny, CEO of the RAND Corporation; Kanika Bahl, CEO & President of Evidence Action; Neil Buddy Shah, CEO of the Clinton Health Access Initiative (Chair); Paul Christiano, Founder of the Alignment Research Center; and Zach Robinson, Interim CEO of Effective Ventures US. Trustees serve one-year terms but can be reelected, and new trustees are elected by current trustees, presumably by a majority vote.
While Anthropic acknowledges that the trust is an “experiment,” albeit “informed by some of the most accomplished corporate governance scholars and practitioners in the nation,” in theory it does seem to insulate Anthropic against giving up too much control to corporate investors like Google and Amazon. Will it prove resilient? Only time will tell.
Global pandemic preparedness efforts face headwinds
COVID-19 laid bare how ill-equipped the world is to handle a serious infectious disease outbreak. The multilateral system failed to adequately prepare for and respond to the pandemic, leaving nations to grapple with constellations of major health and socioeconomic threats. It was widely accepted that we need to do better, and in December 2021 the World Health Assembly (the decision-making council of the World Health Organization) established an intergovernmental negotiating body (INB) to develop an accord on pandemic prevention, preparedness, and response that will need to be ratified by each member state.
The negotiations for a pandemic accord began with bold aspirations to develop a robust and legally binding instrument that would protect the world from future infectious disease threats. Now in its seventh round of meetings, however, the INB’s aspirations are beginning to wilt. While the current negotiating text covers a lot of ground on public health surveillance, technology transfer, and financing, amongst other areas, concrete targets are notably lacking.
Public statements and private reports indicate that parties to the accord are deeply divided on core topics. On intellectual property (IP) and technology transfer, northern countries that are home to most major pharmaceutical companies are resisting any limitation on IP rights, while advocates from the Global South stress the risk of perpetuating the inequities in vaccine access observed during the COVID-19 pandemic if the IP regime is not changed. On funding, parties are reportedly struggling to agree on how to share the costs of preventing pandemics that have global consequences. Low-income nations are hesitant to commit to major surveillance investments when they already struggle to provide basic health services, and high-income countries are reluctant to guarantee supportive funding. The challenge of shared costs is particularly thorny when we consider that high-risk areas for zoonotic spillover (the leap of diseases from animals to humans) are concentrated in low- and middle-income countries. Moreover, the funding needs are great: furnishing an adequate pandemic preparedness and response system is estimated to require at least $10.5 billion annually in additional global financing. By comparison, the World Bank’s Pandemic Fund (the largest such fund to date) has less than $2 billion in trust.
In other pandemic prevention news this fall, the Global Preparedness Monitoring Board released a report in October attesting that worldwide efforts to enhance preparedness in the wake of COVID-19 have been limited and in some areas capacities have actually declined. And the UN General Assembly’s September High-level Meeting on Pandemic Prevention, Preparedness and Response produced a political declaration that extolled the importance of pandemic preparedness but was devoid of any concrete commitments beyond meeting again in three years. We’ll continue to stay tuned, but overall it’s not a good sign for the multilateral system that even a global crisis as severe as COVID doesn’t seem to be motivation enough to meet the moment.
The African Medicines Agency is coming into focus
In June, Rwanda signed a Host Country Agreement with the African Union Commission for the new African Medicines Agency (AMA). Patterned after the European Medicines Agency, the AMA is a continental regulatory system designed to harmonize national regulatory authorities across the African Union’s 55 member states, speed up drug verification and delivery systems, and promote medicine and vaccine independence from international suppliers. Rwanda is in the process of appointing the members of the governing board recommended by the AU’s Conference of State Parties and setting up adequate spaces and resources for the agency to function, all with the help of a funding commitment of more than €100 million from the EU and the Bill and Melinda Gates Foundation. Once seated, the governing board will select a Director General and work together to establish the initial scope of the AMA’s regulatory activities in 2024, setting up the coming year to be an especially crucial moment for shaping health policy for the world’s second-most populous continent.
With more than 60% of the world’s HIV/AIDS cases and 90% of global malaria cases, Africa could certainly benefit from stronger continental healthcare infrastructure. Bottlenecks in the current drug manufacturing and approval system have led to multiyear delays in access for consumers and to about 70% of all drugs being imported, mainly from India and China. Meanwhile, counterfeit drugs in sub-Saharan Africa kill more than 430,000 people a year, according to the UN Office on Drugs and Crime.
While the existence of the AMA is now assured, it will only have jurisdiction in the countries that have ratified the treaty establishing it. So far, 37 countries have ratified or are on track to ratify, but the 18 holdouts include two of Africa’s most important economies, Nigeria and South Africa. So long as that continues to be the case, the AU’s goal of locally producing 60% of all vaccines used within Africa by 2040 will be hard to reach. With the developments of the past year making the AMA increasingly inevitable, however, the recent momentum may well be enough to tip the scales.
What else we’re watching:
After a year of deliberations by a special U.N. committee, the World Bank is set to host a “loss and damage fund” that will support developing countries affected by the long-term environmental consequences of climate change. The decision was met with opposition from developing countries, as the World Bank continues to underdeliver on its promises to reform its financial processes and capacity to address global climate challenges.
Meanwhile, the amount spent on fossil fuel subsidies by members of the G20 surpassed $1 trillion in 2022, prompting Canada to urge other members of the G7 to follow through on previous commitments to phase them out by 2025.
Tesla will start selling electric vehicles in India in January 2024 and plans to build a factory for battery storage there, setting up a potentially massive growth vector for the EV market.
It appears that Microsoft is exploring a nuclear energy strategy for powering its data centers, which, if successful, could overcome one of the key bottlenecks to compute-intensive applications like training frontier AI models and potentially have knock-on effects for transition timelines to renewable energies.
Russia officially withdrew its ratification of the Comprehensive Nuclear-Test-Ban Treaty, a 1996 treaty banning all nuclear test explosions, following accusations from the Kremlin that the United States had failed to fulfill its commitment toward military denuclearization. The announcement comes amidst a successful test of a new Russian nuclear missile, the expiration of restrictions on supplying nuclear materials to Iran, and a decision by Putin to run in the 2024 elections and almost certainly extend his rule by at least six more years.
With preparations for India’s April 2024 general elections in full swing, various Indian opposition leaders including Rahul Gandhi, Shashi Tharoor, Akhilesh Yadav, and Priyanka Chaturvedi have received notifications from Apple that their phones had been targeted by state-sponsored spyware. The software used is believed to be Pegasus, and the episode fits a pattern of alleged manipulation of India’s media and cultural infrastructure by the ruling BJP party to consolidate power.
In October, the Gulf Cooperation Council (GCC) and ASEAN convened an inaugural summit, yielding pledges from the two regions – which together comprise large parts of the Islamic world – to collaborate on matters related to business, climate change, regulation, and stability from 2024 to 2028. The summit notably illustrates a diversification of the regions’ diplomatic links towards an alternative multilateral platform.
Chinese President Xi Jinping announced a BRICS AI Study Group within the BRICS Institute of Future Networks and shared ambitions for BRICS members to coordinate on best practices for “ethical and responsible use” of AI.
Concordia AI has published a major report outlining the “State of AI Safety in China.” Among the findings: the Chinese government seems increasingly interested in positioning AI regulation as an arena for multilateral cooperation, but has expressed a clear preference for pursuing it through the UN rather than other international fora.
The Chinese government has passed several new AI regulations supported by detailed technical requirements published by a central standards setting authority, covering areas including recommendation algorithms, deepfakes and generative AI. China’s AI regulations are now arguably the most extensive in the world among jurisdictions with major AI research labs.
The Biden administration has further tightened export controls on AI chips, expanding the rules to more than 40 countries and lowering the performance threshold to encompass the H800 and A800 chips that Nvidia purposely designed for the Chinese market after the original export controls were instituted in October 2022. The new regulations were widely expected but come over the objections of leading chip manufacturers.
Canon has announced that its new Nanoimprint Lithography (NIL) machine will be able to print semiconductors as small as 2nm by 2025 and compete for market share with Dutch giant ASML. The Japanese company's entrance into advanced chip manufacturing could complicate regulatory efforts focused on the global supply chain for training frontier AI models and other advanced technology.
In September, Amazon added three executives to its Senior Leadership Team (“s-team”). Two of the additions are AWS members with management control over compute governance and AI/ML development, highlighting the rising importance of AWS within Amazon.
OpenAI announced a “Preparedness team” as part of efforts to minimize catastrophic risks from frontier AI. Led by Aleksander Mądry, director of the MIT Center for Deployable Machine Learning, the team is tasked with identifying, forecasting and guarding against risks across arenas including “individualized persuasion; cybersecurity; Chemical, biological, radiological, and nuclear (CBRN) threats; Autonomous replication and adaptation (ARA).”
Elon Musk’s AI startup xAI released its flagship model Grok only four months after the firm was launched this past July. Predicting that superhuman AI is 5-6 years away, Musk has proposed addressing his own concerns about existential risk by building a “maximally curious” AI that will want to preserve humanity because, as he put it, humanity is “interesting.”
The Gates Foundation has announced a $30 million investment to support African AI solutions for healthcare and social issues, in a bid to strengthen African AI expertise and decrease the technology gap with wealthy countries.
A new multilateral treaty negotiated by the OECD will allow taxation of digital services in the countries where the sales occur rather than in the countries where the companies realizing the profits are located. The agreement will effectively transfer between $17 billion and $32 billion in annual tax revenue from high-income to lower-income countries.
The $55 million Africa Integrity Fund that was assembled in 2016 by the African Development Bank to fight corruption across the continent has failed to deploy any of its money in the seven years since its launch. Anti-corruption NGOs are not pleased.
Reversing a significant deregulatory move made by the Trump administration, the Financial Stability Oversight Council – a body overseen by the US Treasury – reinstated a mechanism to subject certain systemically important financial institutions like asset managers, funds, and insurers to stricter regulatory oversight.
The US government spent the bulk of October without a Speaker of the House of Representatives after a renegade group of Republicans banded together with Democrats to depose Speaker Kevin McCarthy. In the US system of government, the Speaker is a critical role because the House effectively cannot function without one, and in turn the federal government cannot function for long without the House. While Rep. Mike Johnson was finally elected to the post after 22 days, it remains to be seen whether he will be able to wrangle his caucus any more effectively than his predecessor.
The George W. Bush-era President’s Emergency Plan for AIDS Relief (PEPFAR) has been a global health success story that received bipartisan support and helped save millions from the HIV/AIDS epidemic, but Republicans are now blocking its reauthorization over allegations that the program is funding abortion services (a claim that proponents deny). The impasse highlights how the culture wars are continuing to polarize areas of previous policy consensus.
In perhaps the most brazen use case yet for deepfake technology, scammers successfully impersonated African Union Commission Chairperson H.E. Moussa Faki Mahamat to schedule and conduct several fraudulent video calls with European leaders. It’s not clear what the scammers were after, but this incident illustrates how even current-day technology can disrupt institutional functioning in unexpected ways.
The Observatory is our way of answering the question “what’s going on in the world?” through the lens of institutions. This edition features our take on a selection of important and underrated news stories from September and October 2023. Please don’t hesitate to get in touch if you’d like to learn more about our work or get involved.