AI could make life weird sooner than you think
Arguably no topic has seen more breathless hype over the last year than the future of AI, and the backlash to the hype can be just as intense. With so many strong and contradictory claims flying around, it’s understandable if you’re experiencing some whiplash. But hard as they may be to answer, questions about how fast AI technology will continue to develop, how it will interact with the real world, and how our behaviors and daily lives might change as a result are central to deciding how we should prioritize attention and resources on this topic. So, naturally, we wanted to try to come to an independent view of those questions ourselves.
Since EIP published its primer on AI governance last September, we’ve spoken with hundreds of philanthropic funders, civil society organizations, policymakers, and industry experts about the future of AI. We’ve hosted a forecasting and strategic clarity workshop for AI funders in San Francisco, co-programmed an AI learning series for philanthropic advisors with Schmidt Futures, and helped several funders define and develop their own AI strategies. And of course we, like everyone else, have been following the news, trying out new models as they’re released, and generally experiencing what it’s like to live, at least for brief moments, in a world that looks more like Star Trek than the one we were born into.
What we’re seeing is that technological capabilities are currently running ahead of their practical applications. Put another way, what most people are actually doing with generative AI does not yet match what we know it can do with proper prompting and integration with other tools. Almost every month brings news of a mind-boggling new technical achievement, whether it’s AI models passing the bar exam, discovering thousands of new stable materials, or helping find the first new structural class of antibiotics in decades. Yet 2023-era prophecies of real-world impacts like AI CEOs or mass layoffs have yet to materialize.
So does this mean it was all just a media bubble and it’s time to move on? Our sense is no, it doesn’t; it’s just a reflection of the fact that it takes society a while to absorb the full ramifications of new technologies. (As one illustration, innovation scholar Ethan Mollick notes that even among attendees at his talks about AI, the vast majority have never played around with anything more powerful than the free version of ChatGPT, which is now ranked 25th(!) among the world’s most advanced language models.) We think there are good empirical reasons to expect more dramatic changes on the horizon as novel applications of existing state-of-the-art generative AI models become more widely adopted and their capabilities continue to advance.
One of the biggest reasons is that the level of financial investment in developing bigger and better versions of current AI models is truly unprecedented. OpenAI’s GPT-4, the current market leader, was developed with just over $3 billion of total historical investment in the company up to that point. By comparison, Microsoft is reportedly planning to spend $100 billion on a single supercomputing cluster, auspiciously named “Stargate,” to support OpenAI’s future needs (and possibly those of other companies such as Mistral). That commitment would represent five times what Microsoft paid out in dividends in 2023, and it comes as the CEOs of Meta and OpenAI talk up their own ten-figure investments. Governments and militaries are getting in the game now too: Canada is putting $2 billion toward advancing AI capabilities, the Pentagon’s spending on AI is up to almost $2 billion a year, and Saudi Arabia is setting up a whopping $40 billion investment fund with the venture capital firm Andreessen Horowitz. The frenzy has helped make Nvidia, the leading supplier of the high-performance graphics cards needed to train cutting-edge models, the world’s fourth-most valuable company, with a higher market capitalization than Alphabet (which owns Google), Meta, or Amazon.
Companies racing to deploy AI are exploring a wide variety of real-world applications, including bookkeeping and finance, automotive parts manufacturing, AI office receptionists, autonomous scientific research, and physical security for critical infrastructure (“wall-climbing robots”). Many of these will no doubt fail or take a long time to be widely adopted, but even if only a few succeed, the ingredients for significant labor market disruptions are definitely being assembled on the proverbial kitchen counter. In our view, one of the most potentially disruptive of these practical applications is the so-called agent. On March 12, the software coding agent Devin made a big splash in the programming world. While tools such as ChatGPT and Copilot can generate code from specific instructions, the makers of Devin promise that it can “autonomously find and fix bugs” and “build and deploy apps end to end.” In this way, it acts more like an agent, i.e., a software program that makes decisions and takes actions independently in service of some preset goal. Academics sometimes identify agents as an important part of a path toward artificial general intelligence, and leading scientists see them as substantially riskier to develop. For its part, Devin didn’t even keep the lead for a month before being beaten by SWE-agent, an open-source system released by Princeton researchers, and OpenAI (of course) is busily developing its own versions.
Agents are particularly worth paying attention to because they can integrate with non-AI tools like calculators, the Internet, and potentially even hardware, just as a human can. Chatbots that you can type questions into and get answers from are already pretty useful; chatbots integrated with tools that send emails or build cars will be very useful. AI is already being integrated into metal detectors, drug development, and scams. It’s still early days and overpromising abounds, but remember that many things AI can now do pretty well, such as generating realistic-sounding songs on demand, were years in the making.
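For readers curious what that actually looks like under the hood, here is a minimal sketch of an agent loop in Python. Everything in it is illustrative rather than taken from any real product: `call_model` is a stand-in for whatever language model API an agent is built on (its canned decisions simulate what a real model would return), and the two tools are toy stand-ins for the calculator, web, or email integrations described above.

```python
# Minimal illustration of an agent loop: a model repeatedly chooses an
# action (a "tool") in service of a preset goal until it decides to stop.
# All names here are hypothetical; a real agent would wire this loop to
# an actual language-model API and real tools (browsers, compilers, email).

def calculator(expression: str) -> str:
    """Toy tool: evaluate an arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}, {}))

def web_search(query: str) -> str:
    """Toy tool: stand-in for an internet search integration."""
    return f"(pretend search results for: {query})"

TOOLS = {"calculator": calculator, "web_search": web_search}

def call_model(goal: str, history: list[str]) -> dict:
    """Stand-in for a language-model call. A real agent would send the
    goal and the history of observations to a model and parse its
    chosen next action; here we hard-code a plausible sequence."""
    if not history:                    # first step: gather information
        return {"tool": "web_search", "input": goal}
    if len(history) == 1:              # second step: do some arithmetic
        return {"tool": "calculator", "input": "40 * 6"}
    return {"tool": None, "answer": "Done: " + history[-1]}  # then stop

def run_agent(goal: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):         # cap steps so the loop halts
        decision = call_model(goal, history)
        if decision["tool"] is None:   # model decided the goal is met
            return decision["answer"]
        result = TOOLS[decision["tool"]](decision["input"])
        history.append(result)         # feed the observation back in
    return "Step limit reached without finishing."

print(run_agent("Estimate hours of work in a 40-hour week over 6 weeks"))
```

The point of the sketch is the shape of the loop, not the stubbed logic: what separates something like Devin from a plain chatbot is that the model’s outputs trigger actions, and the results of those actions feed back into the model until it judges the goal complete.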
So what does this all add up to? As usual, no one really knows for sure, but one important takeaway for us is that we think that most people have not really internalized how weird things might get within the span of our lifetimes. So many of our cultural traditions, folk wisdom, educational choices, and more are rooted in the constancy of life's rhythms across generations, as predictable as the daily sunrise. When you start to take a look at the kinds of things that the trends above make possible, though, the idea that the next 30 years will look much like the previous 30 seems harder and harder to justify. Which, as we’ve argued, is all the more reason to ensure that the measures companies, governments, and funders are taking now to guide the development of this technology anticipate the full range of possibilities in front of us.
Forecasts to watch:
Will an AI be able to reliably construct bug-free code of more than 10,000 lines before 2030? (current crowd forecast: 90%)
When will AIs program programs that can program AIs? (current forecast: Sept 2026)
When will the first AI-generated book be on the New York Times Best Seller list? (current forecast: Nov 2029)
Will an AI be able to work as a competent cook in an arbitrary kitchen before 2030? (current forecast: 25%)
When will the first fully autonomous surgery or procedure be performed on a human? (current forecast: June 2030)
When will most Americans personally know someone who has dated an AI? (current forecast: July 2032)