The Observatory #3: Can the EU Maintain Consensus on the Green Deal?
Plus how a civil war in Sudan interrupted a promising reform effort, OpenAI's nonprofit mission, state surveillance in India, and more
The European Green Deal was introduced in 2019 to chart a course for Europe to reach net-zero emissions by 2050. At the time it signaled European Commission President Ursula von der Leyen’s strong commitment to climate action. However, EU climate efforts now face major headwinds after years of rising living costs, a war in Ukraine which caused energy prices to soar across the continent, and growing anti-EU and populist sentiment, all of which are eroding the political mandate for the institution that has been arguably the global leader on climate to date.
It is not hard to understand why individuals and several national governments are questioning the EU’s climate roadmap: consumers feel a day-to-day pinch from regulations they see as expensive and national governments don’t want to be the face of policies that can harm the wellbeing of their citizens and industries. Take Germany as an example. This past summer, the government was nearly torn apart by a proposal that would have banned new installations of gas boilers and promoted the use of heat pumps (which produce fewer emissions but are more expensive than gas boilers). This debacle partially catalyzed the rise of the far-right and climate change-denying Alternative for Germany (AfD) party, which capitalized on citizens’ concerns over the costs and the lack of choice imposed by the heat pump mandate. In the Netherlands, the victory of Geert Wilders’s Freedom Party (PVV) in November’s election threatens to upend Dutch climate ambitions as well. Although most Europeans continue to be concerned about climate change and support action, the German and Dutch cases suggest that climate measures that are costly and remove choice from consumers are becoming unpalatable to an increasing proportion of voters. With far-right parties expected to gain in the European Parliament elections later this year, analysts from the European Council on Foreign Relations warn that an anti-climate coalition could well take hold in the Parliament during its next five-year term, with serious implications for whether the EU will be able to make good on its Green Deal commitments.
The EU’s wavering on climate policy is coming at a time of concerning signals that more, rather than less, global leadership is needed to avoid the worst impacts of climate change. The State of Climate Action 2023 report found that a major acceleration of global efforts will be required to meet the goals of the Paris Agreement; of the 42 indicators of progress assessed, only one, electric passenger car sales, is on track to meet the 2030 goals associated with 1.5℃ warming, and most indicators are well off-track. That assessment is consistent with the recent Intergovernmental Panel on Climate Change's synthesis report’s findings that the world is indeed on a trajectory to exceed 1.5℃ warming in the 21st century. Unfortunately, major emitters’ policies and commitments are still estimated to be incompatible with the agreement, and few country-level plans for climate change mitigation are aligned with keeping warming to 1.5℃. While the trajectory is certainly better than it was a decade ago, with greenhouse gas emissions perhaps at the point of plateauing and renewable energy constituting an increasingly large share of the world’s energy mix, it seems a virtual certainty at this point that the world will suffer substantial disruption and damage from global warming this century. The only question – a question to no small degree in the hands of European voters in 2024 – is how much.
Sudan’s civil war is provoking a child displacement crisis, with no end in sight
Since last year, Sudan has been ravaged by a civil war between two rival armed forces in the country: the Sudanese Armed Forces (SAF), the official military force of the Republic of Sudan, and the Rapid Support Forces (RSF), a paramilitary group developed from militias that fought in the Darfur War on behalf of the Sudanese government. Conflict between the SAF and RSF erupted in the capital of Khartoum in April 2023 after, among other political factors, disagreements over the timeline under which the RSF should integrate into the SAF. An estimated 12,000 people were killed through the end of 2023, about half of the reported death toll in Gaza. Although the number of lives lost is limited in comparison to Sudan’s two previous civil wars, the conflict has also displaced more than 7.3 million people, 1.8 million of whom have fled the country. This displacement is unprecedented in its impact on children specifically, with UNICEF calling it the largest child displacement crisis in the world. Khartoum itself is a shell of its former self, with many schools in the city facing indefinite closure and depositors unable to access cash due to the closure of physical banking sites.
In addition to the direct devastation it has caused, the conflict has also upended what had been a promising and unprecedented attempt to liberalize Sudan’s institutions. After longstanding Sudanese president Omar Al-Bashir was ousted in a 2019 coup d'état triggered by mass civilian protests, the African Union brokered a power-sharing agreement between Sudan’s military and civilian protesters that committed to transitioning Sudan into a democracy within 39 months. The January 2019 protests, sparked by Al-Bashir declaring his intention to stay in power, were widely noted internationally for their scope, scale, and coordination, and the power-sharing agreement included a timeline for elections and the appointment of Abdalla Hamdok, a technocrat with policy, economic, and development experience across Africa, as prime minister. However, this transitional government was removed in a second coup in 2021 by the Sudanese military and replaced by a reconstituted Sovereign Council, which included leaders from both the SAF and the RSF but whose uneasy arrangement quickly devolved into violence. Now, with violence escalating steadily, the initial excitement of the reform movement seems increasingly distant.
Meanwhile, the response from the international community has been tepid and uncoordinated, with media coverage muted relative to the scale of suffering. In an attempt to secure regional stability and capitalize on existing relations with the military government, Egypt has been quietly helping the Sudanese Armed Forces, while the UAE supports the Rapid Support Forces by allowing its leaders to store their wealth in the country and shipping weapons to the group. The US’s involvement in peace negotiations has been particularly haphazard, with the rocky US-Saudi relationship hampering peace talks held in Jeddah. With no easy solution in sight, the prospects for a swift end to the war now look dim, as do the chances of meaningful democratic participation for Sudan’s fast-growing population of more than 45 million in their country’s next government.
OpenAI’s long-term governance and investment model in flux after board drama
The US-based artificial intelligence company OpenAI is widely regarded as one of the leading players in the race to develop artificial general intelligence (AGI) – defined by the company as “AI systems that are generally smarter than humans.” If you carefully followed our reporting on the unusual governance of the AI firm Anthropic in the previous issue of The Observatory, you will remember that OpenAI is part of Anthropic’s origin story and won’t be surprised that OpenAI also has a weird corporate structure. The nonprofit entity that controls OpenAI came into the spotlight briefly around Thanksgiving, when CEO Sam Altman was suddenly fired, only to be reinstated several days later as part of a deal that saw the departure of most of the nonprofit board members who had initiated his ouster. OpenAI’s nonprofit board is bound to a charter which states that “OpenAI’s mission is to ensure that artificial general intelligence [...] benefits all of humanity,” but notably adds that the organization will “consider the mission fulfilled if [its] work aids others to achieve this outcome.” The latter language, which some refer to as the “assist clause,” is backed up by an explicit commitment that “if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.” The prospect of sacrificing the company’s ambitions for the public good stands in stark contrast with recent headlines that OpenAI is in talks to raise funding at a valuation at or above $100 billion, which would correspond to around three times the current market capitalization of the Chinese internet giant Baidu. Considering the recent board shake-up and the powerful commercial incentives that such an exorbitant valuation might give rise to, can we still take the commitments made in this charter seriously?
There are at least three important variables in this equation: the composition of the board of OpenAI; the amount of control investors like Microsoft, Khosla Ventures, and Thrive Capital have over OpenAI’s commercial operations; and the outcomes of potential antitrust investigations into the Microsoft-OpenAI partnership. According to a November announcement by OpenAI, the board is currently made up of former Salesforce co-CEO Bret Taylor as chairperson, former Treasury Secretary Larry Summers, and Quora CEO Adam D’Angelo. Microsoft will be included as a “non-voting observer” and there are six vacant seats. The announcement includes a message by chairperson Bret Taylor which acknowledges “[AI] safety” but states that the current board “will enhance the governance structure of OpenAI so that all stakeholders – users, customers, employees, partners, and community members – can trust that OpenAI will continue to thrive,” which is notably a narrower group than the “all of humanity” language used in the charter. Board member Adam D’Angelo, who believes the non-profit board should keep the commercial OpenAI unit and Altman in check, according to “a person who has spoken to him,” is reportedly playing a key role in the search for new directors.
Of all OpenAI’s investors, Microsoft, which has poured around $13 billion into the company, clearly looms largest. But how much control does Microsoft really wield? While OpenAI’s precise governance structure is hard to ascertain due to its use of a web of interconnected holding companies, most reporting agrees that Microsoft is formally involved in OpenAI through OpenAI Global LLC – a “capped-profit” company where investors’ potential returns are limited by an upper bound, set at 100x for OpenAI’s first round of investors. However, various reports by The Information and The Economist suggest that this cap might increase by 20% each year starting in 2025. If true and applicable to all investments into OpenAI, that development would render the profit cap effectively irrelevant and provide near-boundless incentives for OpenAI’s investors to commercialize the lab’s technology. So although the OpenAI nonprofit retains formal control over OpenAI’s commercial operations, Microsoft’s provision of funding and compute, its exclusive license to non-AGI IP rights, and its non-voting board seat give it a combination of levers to steer important decisions at OpenAI toward the commercial interests of Microsoft’s shareholders.
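The arithmetic behind that concern is straightforward. As a rough illustration (not an official OpenAI formula; the 20% rate, the 2025 start year, and the function below are assumptions drawn from the reporting above), a 100x cap compounding at 20% per year would recede quickly as a constraint:

```python
def projected_cap(year: int, base_cap: float = 100, growth: float = 0.20,
                  start_year: int = 2025) -> float:
    """Hypothetical return cap (as a multiple of investment) in a given year,
    assuming the reported 20% annual escalation begins in 2025."""
    if year <= start_year:
        return base_cap
    # Compound the cap for each year elapsed since the escalation began.
    return base_cap * (1 + growth) ** (year - start_year)

for year in (2025, 2030, 2040):
    print(f"{year}: ~{projected_cap(year):,.0f}x")
```

Under these assumptions the cap would reach roughly 249x by 2030 and about 1,541x by 2040, at which point a “capped-profit” structure would be a cap in name only.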
The Microsoft-OpenAI partnership has also drawn scrutiny from industry regulators in the US, UK, and EU. If any potential future antitrust investigations were to find that the Microsoft-OpenAI partnership violates antitrust law, this might not only severely endanger OpenAI’s R&D, which is reliant on Microsoft’s capital and compute, but also Microsoft’s AI business, which hinges on its access to OpenAI. While it is speculative how OpenAI’s governance structure would change as a result of any potential ruling, a significant overhaul is not inconceivable. All in all, while the OpenAI nonprofit and its board remain nominally in charge of OpenAI’s operations, the environment around the company may be gradually making it harder for those stakeholders who have been given the power to act in the public interest to actually do so.
Telecommunications reform in India further solidifies state power
A controversial telecommunications bill passed the lower and upper houses of the Indian parliament on December 21, just a few days after its introduction and over vocal opposition from more than 60 national and international organizations, including the Signal Foundation, the Committee to Protect Journalists, and Freedom House. The bill’s ostensible purpose is to streamline India’s outdated and disjointed telecommunications laws (one of which was first passed in 1885), make the sector more investor-friendly, and bolster the national government’s security capacities. Along with its improvements in efficiency, however, the legislation grants the government the right to access encrypted messaging data on vague national security grounds. An open letter from civil society stakeholders argues that the provisions essentially make end-to-end encryption, which protects the privacy of both senders and recipients, impossible, and might lead encrypted platform providers like Signal to choose not to operate in India. The Telecommunications Bill also gives the national government new surveillance powers and makes it easier to shut down the internet, with fewer checks and balances. Before the bill comes into force, the government will need to publish rules on how it will be implemented, a process that could take anywhere from six months to two years.
The bill maintains the existing interception rules established under the Indian Telegraph Act of 1885, which place review of interception orders in the hands of a committee consisting exclusively of senior government officials, raising concerns about the separation of powers. Subjects can theoretically challenge a government interception order, but given the orders’ secretive nature, exercising this right may not be practical. Unlike in other major democracies, no judicial oversight is required for government surveillance of citizens. The bill also directs telecommunications providers to require “verifiable biometric based identification” as a condition of signing up, which in combination with its other provisions gives the government, in theory, “a powerful toolkit to target any person.”
Government officials have pushed back against these critiques, arguing that the new powers are intended to be deployed only in emergencies and are necessary in such situations. Unfortunately, the Indian government has a recent track record of taking actions disproportionate to security threats. As a case in point, the Telecommunications Bill itself passed the same week that more than 140 Indian opposition politicians were suspended from parliament and unable to vote on the proposal, following lawmakers’ peaceful (if loud) protests against the government’s refusal to address a recent security breach at the premises. The suspensions are part of a larger pattern of officials from the ruling Bharatiya Janata Party using their positions of authority to make life difficult for the opposition; as relayed in the previous edition of our newsletter, for example, opposition leaders received an alert from Apple in October that the state might be hacking their phones ahead of this year’s elections. Following that disclosure, BJP officials summoned Apple’s India representatives to a meeting where government officials allegedly asked for alternative explanations for the warnings and pressured Apple to walk them back.
A major victory for animal welfare gets eaten up by industry lobbying
Since its establishment in 2012, the European Citizens’ Initiative (ECI) has given groups of at least seven European citizens, drawn from at least seven different member states, the power to submit a petition for legislative action to the European Commission, provided they can back their appeal with at least one million signatures collected within a year across at least seven member states. With each state’s signature threshold proportional to its number of Members of the European Parliament, the initiative was intended as a direct line of communication between citizens and officials, and it has garnered some successes over the last decade.
The ECI’s most notable recent program, “End the Cage Age,” called for a ban on cages for both regulated (hens, broilers, calves, and pigs) and unregulated farm animals (e.g. rabbits, ducks, and geese). (Another popular initiative, “Fur Free Europe,” would ban the keeping and killing of animals for the sole or main purpose of fur production.) In 2021, End the Cage Age won a commitment from the European Commission to “table, by the end of 2023, a legislative proposal to phase out, and finally prohibit, the use of cage systems for all animals mentioned in the Initiative,” which would make the EU the largest jurisdiction in the world to enact such a ban. In a dramatic U-turn, however, both proposals have been shelved, with negotiations and implementation postponed until after the next European elections in June, which in practice could mean they never get enacted.
The animal welfare lobbying groups closest to the initiatives, Compassion in World Farming and Eurogroup for Animals, report that more than 300 million farm animals are held in cages across Europe each year, counting so-called “enriched” cages, while ~21 million minks, ~12 million foxes, and ~9 million raccoon dogs were involved in the 2021 fur production cycle. The quiet mothballing of the initiatives appears to be the result of lobbying by big agri-businesses within the EU, and in particular by Europe’s biggest farming lobby, Copa-Cogeca. Copa-Cogeca, along with its industry partners, allegedly used questionable research and unregistered lobbying efforts to pressure the European Commission to reconsider the feasibility of a rapid cage phase-out. Despite the availability of funds and plans for the transition, the campaign appears to have succeeded in scaring top lawmakers away from the very legislation they’d previously committed to moving forward.
Currently, there are no plans to discuss either initiative in the 2024 commission work programme. And for similar reasons as discussed in our top story about the Green Deal, this year’s elections could make it only harder, rather than easier, for initiatives like End the Cage Age to make it through to law.
What else we’re watching:
By pairing a language model optimized for providing creative solutions in computer code with an automated evaluator that guards against hallucinations and incorrect ideas, Google DeepMind's FunSearch conquered a previously unsolved math problem.
Google DeepMind launched Gemini 1.0, its much anticipated multimodal language model, in December. Gemini Ultra, its most advanced (but not yet publicly available) version, achieves field-leading performance on various academic benchmarks, though its lead over competitors like OpenAI’s GPT-4 is small.
Google DeepMind has used AI to predict the structures of over 2 million new materials, in the process more than octupling the number of stable material designs known to humanity. The new designs could potentially revolutionize the production processes for batteries, solar panels, chips, and other physical technologies.
It turns out that Google pays Apple a whopping 36% of the Search ad revenue it earns through the Safari browser in order to remain Safari’s default search engine. The disclosure was made as part of a US federal antitrust trial scrutinizing Google’s dominance of the search ad market; if Google loses, the outcome could affect the financing and market strategy of Google DeepMind.
Through most of 2023, Amazon positioned itself as a behind-the-scenes provider of AI infrastructure by supplying compute via AWS and supporting AI developers. Now, though, the company has launched a chatbot and is training its own large language model codenamed “Olympus,” entering into more direct competition with frontier AI labs.
Baidu’s LLM chatbot ‘Ernie Bot’ reached 100 million users in December 2023, roughly four months after its full-scale public release. This is an important development for Baidu, which has been banking on its AI bets to steer it back to revenue growth after several strategic missteps. The firm’s CEO, Robin Li, has projected a combination of confidence and insecurity by encouraging other Chinese firms to stop trying to build their own models and leverage Ernie instead.
Revised US export restrictions tighten the limits on the performance of GPUs that can be sold to Chinese firms. Under pressure from the new trade rules, the Dutch government revoked an export license for lithography system maker ASML, preventing it from shipping some of its machines (which play a critical role in semiconductor manufacturing) to China. NVIDIA, on the other hand, has developed new chips that sit just below the performance thresholds set out under the regulations, enabling the company to continue selling relatively high-performance chips that can be used to train AI models in China.
Chinese firms are admitting that the restrictions are forcing them to adapt. Tencent stated in an earnings call that it will need to use its chips ‘more efficiently’ and seek domestic suppliers to continue offering cloud computing services and to train future generations of its AI models.
G42, an Emirati firm active in a number of emerging technology areas including genomics, cloud computing, and generative AI, recently signed a partnership with OpenAI, but the company has also attracted the attention of American intelligence officials who are concerned that G42’s ventures with several Chinese companies pose a national security risk.
Prosperity7, a Saudi venture capital firm, was forced to sell its shares in Rain AI, an AI chip company partly backed by Sam Altman. The Committee on Foreign Investment in the United States, which triggered the move, is taking an increasingly critical look at technology investment deals that have national security implications.
New Saudi rules that require companies doing business with the Gulf state to operate a regional headquarters in the country have prompted tech giants and other multinational corporations to rapidly scramble for licenses to open new offices. The rules are officially intended to keep money from state contracts in the country, but they are probably also motivated by a desire to compete with the UAE for the role of the Middle East’s tech hub.
The UK and the US announced a joint partnership with the goal of making fusion energy commercially viable by 2045. Fusion energy has the potential to transform global energy production and help reach net zero emissions.
After furious negotiations with the US to prevent its use of veto power, the UN Security Council passed a resolution to boost humanitarian aid to Gaza which included a “soft call” for a ceasefire in the war that has now cost more than 25,000 lives. In addition to its immediate implications for civilians in Gaza, the episode shows how a veto-holding member of the “P5” permanent UN Security Council membership can be pressured to change its decisions.
At COP28, nearly 200 nations agreed to "transitioning away from fossil fuels in energy systems ... so as to achieve net zero by 2050,” but a loss and damage fund hosted by the World Bank, intended to help vulnerable countries cope with climate-related harms, has attracted just $700 million to date.
Gavi will begin assessing how climate change impacts disease patterns and vaccination needs as part of its Vaccine Investment Strategy (VIS). The VIS ultimately determines the portfolio of vaccines that receive Gavi funding and support for implementation in lower-income countries, and its decision may influence public health policy more broadly in a climate-conscious direction.
Up to $1 billion in leftover funding from the COVAX initiative will be used to boost African vaccine manufacturing. This funding can help the AU reach its goal of having 60% of the continent's vaccines produced domestically by 2040 and improve the continent’s health security.
Bird flu outbreaks on poultry farms generally require the killing of all birds at affected farms, but not all methods of culling are equal. In the US, there has been a troubling normalization of the cruel ventilation shutdown plus (VSD+) method which involves pumping extreme heat into barns and essentially killing animals by heat stroke over several hours.
Open Philanthropy has established a new Global Public Health Policy grantmaking program to support grantees working with governments in areas such as air quality regulations, tobacco and alcohol taxes, and the elimination of leaded gasoline.
The Organisation for Economic Co-operation and Development's leadership on global tax coordination has come under threat after a majority of UN members backed an African-led initiative to bring that responsibility to the United Nations instead, where developing nations expect to be able to exert more control over the process.
India is increasingly positioning itself as a development partner for the Global South through a track record of investments in trade and security cooperation in Africa and by competing with China and the World Bank as a lender for development projects on the African continent.
The recent Bangladesh elections provided a test run for what a deepfake-saturated campaign environment might look like, as pro-government news outlets and influencers have made increasing use of AI-generated content to share propaganda.
The formation of the AI Alliance by Meta, IBM, Hugging Face, and over 50 other institutions from industry, academia, and government marks a major collaborative effort to foster open AI innovation, amid competing visions of relying on open-source or proprietary AI models to achieve AI safety and alignment.
At a campaign rally, former President Trump promised to reverse Biden’s executive order on AI “on day one” if elected, connecting the topic to issues of free speech.
The US Department of Defense’s custom AI cloud computing initiative has spent less than 2% of its $9 billion budget to date amid concerns that cloud solutions by firms including Microsoft, Google, and Amazon aren’t secure enough for military use.
In December, despite a last-minute lobbying campaign by France, the EU institutions reached a preliminary agreement on the EU AI Act that preserves stringent diligence measures for “high-impact general-purpose AI models with systemic risk,” including the most powerful open-source models. The text will still have to be formally adopted by Parliament and Council before coming into force.
The UN’s AI Advisory Body has released its interim report “Governing AI for Humanity.” The report highlights that AI lacks global, inclusive governance (a “global governance deficit”) and presents preliminary principles and functions of institutions for global AI governance. The body is soliciting feedback on the interim report until March 31. The final report is slated for publication in summer 2024, ahead of the UN Summit of the Future.
The AI research labs of two Chinese tech giants, Alibaba and Bytedance, released papers on AI safety and governance. Alibaba’s paper focused on robustness, interpretability, and preventing misuse, while Bytedance’s proposed methods for LLMs to "forget" undesirable behavior. The releases suggest that some Chinese AI companies may share concerns similar to those of researchers in Western AI labs.
US President Biden and Chinese President Xi held a major summit in November, agreeing to reestablish military communications, strengthen cooperation on counternarcotics to curb the fentanyl crisis in the US, set up talks on climate change, and begin a US-China dialogue on risks of AI.
The Observatory is our way of answering the question “what’s going on in the world?” through the lens of institutions. This edition features our take on a selection of important and underrated news stories from November and December 2023. Please don’t hesitate to get in touch if you’d like to learn more about our work or get involved.