While new technical breakthroughs and iterations seem to arrive weekly in the AI world, a recent white paper has tech geeks talking. In his 165-page paper “Situational Awareness: The Decade Ahead,” Leopold Aschenbrenner, a former researcher at OpenAI, asserts that people like him in the San Francisco tech bubble — some of the “smartest” people of our times — understand what’s coming, but the rest of us don’t have a clue.
The paper projects that by the end of this decade AI will achieve “superintelligence,” with hundreds of artificial general intelligences (AGIs) automating AI research, and the resulting effects will be staggering in ways most of us can’t yet comprehend. The change, measured in orders of magnitude, will be breathtaking, he says.
OpenAI fired Aschenbrenner after he leaked his concerns that the company wasn’t paying enough attention to safety. He worked on a team tasked with AI safety that the company recently disbanded. He has written other papers on the general topic, but this one is generating far more online buzz.
Aschenbrenner may seem like a Cassandra figure in light of his firing, but he has a firm grasp on the current impact of AI, which our government appears hopelessly behind in understanding. Recent reports have vastly underestimated the speed at which AI computing power will grow, citing Moore’s Law (computing power doubles every 18 months) as their basis for growth projections. AI has already far surpassed that pace: according to Gartner and other analysts, it has grown on the order of 400% per year for the past two years and is on track for 1,000% growth this year. AI training compute is even outrunning what Moore’s Law would predict by a factor of 100 million.
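To see how quickly those rates diverge, here is a quick back-of-the-envelope comparison in Python. It treats the article’s roughly 400%-per-year figure as a flat 5x annual multiplier, which is an illustrative assumption rather than a measured constant:

```python
# Compare Moore's Law growth with the AI compute growth cited above.
# Assumption: "400% per year" is modeled as a flat 5x annual multiplier.

YEARS = 5

# Moore's Law: compute doubles every 18 months, i.e. 2^(12/18) per year.
moore_per_year = 2 ** (12 / 18)

# Reported AI compute growth: +400% per year means 5x per year.
ai_per_year = 5.0

for year in range(1, YEARS + 1):
    moore = moore_per_year ** year
    ai = ai_per_year ** year
    print(f"Year {year}: Moore's Law x{moore:,.1f} | AI x{ai:,.0f} | gap {ai / moore:,.0f}x")
```

After just five years the gap between the two curves is already a factor of several hundred, which is what change “measured in orders of magnitude” looks like in practice.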
Already AI companies appear to be boosting the US economy significantly. Aschenbrenner believes that by 2026 companies like Google and Microsoft will be generating over $100 billion a year in AI revenue, possibly over $1 trillion by 2027. In June Nvidia, which makes the most in-demand AI chips and systems, surpassed Microsoft in market value to become the world’s most valuable public company. It has been the fastest rise in market history: from $400 billion two years ago to $1 trillion last year and $3 trillion this year.
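A quick calculation, using only the round figures cited above, shows what the fastest rise in market history means in annualized terms:

```python
# Implied compound annual growth of Nvidia's market value, using only
# the round figures cited above: ~$400B two years ago, ~$3T this year.
start_value = 400e9   # market value two years ago, in dollars
end_value = 3_000e9   # market value this year, in dollars
years = 2

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied growth: ~{cagr:.0%} per year")  # ~174% per year
```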
AI is an economic engine that will benefit many US companies and those doing business with them, but it has some ominous potential downsides. Aschenbrenner points out that China is racing to beat the US in AI. Even if it doesn’t win that race, with unrestrained AI power it could still do a lot of damage.
“All the trillions we will invest, the mobilization of American industrial might, the efforts of our brightest minds, none of that matters if China or others can simply steal the model weights (all a finished AI model is, all AGI will be, is a large file on a computer) or key algorithmic secrets (the key technical breakthroughs necessary to build AGI).
“America’s leading AI labs self-proclaim to be building AGI: they believe that the technology they are building will, before the decade is out, be the most powerful weapon America has ever built. But they do not treat it as such. They measure their security efforts against ‘random tech startups,’ not ‘key national defense projects.’ As the AGI race intensifies — as it becomes clear that superintelligence will be utterly decisive in international military competition — we will have to face the full force of foreign espionage. Currently labs are barely able to defend against script kiddies, let alone have ‘North Korea-proof security,’ let alone be ready to face the Chinese Ministry of State Security bringing its full force to bear.”
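Aschenbrenner’s parenthetical is worth making concrete: the end product of a multi-billion-dollar training run really is just arrays of numbers serialized to disk. Here is a minimal sketch using NumPy, with invented layer names and shapes far smaller than any real model:

```python
import numpy as np

# A trained model is ultimately just its parameters ("weights"):
# arrays of floating-point numbers. These shapes are invented for
# illustration; a frontier model has hundreds of billions of values.
weights = {
    "layer_0_attention": np.random.randn(1024, 1024).astype(np.float16),
    "layer_0_mlp": np.random.randn(1024, 4096).astype(np.float16),
}

# Serializing them produces an ordinary file, the thing espionage
# would target. Whoever holds this file holds the model.
np.savez("model_weights.npz", **weights)

restored = np.load("model_weights.npz")
print(list(restored.keys()))  # ['layer_0_attention', 'layer_0_mlp']
```

That is why he frames security around the weights themselves: stealing the file is equivalent to stealing the training run that produced it.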
Aschenbrenner says “super-security” for superintelligence will be required to fight the sabotage and theft that other countries could employ against ours.
Along with security concerns, the AI drain on our energy grid is predictable: expanding AI server farms consume tremendous amounts of energy, and as AI use grows it will be necessary to find alternative energy sources and ways of reducing consumption.
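For a rough sense of scale, here is a back-of-the-envelope power estimate for a hypothetical large training cluster. The GPU count, per-device wattage, and overhead factor are all illustrative assumptions, not reported figures:

```python
# Back-of-the-envelope power draw for a hypothetical AI training cluster.
# All numbers below are assumptions for illustration only.

gpus = 100_000        # accelerators in a hypothetical frontier cluster
watts_per_gpu = 700   # rough draw of one H100-class GPU under load
overhead = 1.4        # cooling, networking, and facility overhead

total_mw = gpus * watts_per_gpu * overhead / 1e6
print(f"~{total_mw:,.0f} MW of continuous draw")  # ~98 MW
```

Roughly 100 MW of continuous draw is on the order of a small city’s demand, and the next-generation clusters Aschenbrenner describes run into the gigawatts, which is why the grid question is unavoidable.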
Another issue is monitoring AGI itself through what Aschenbrenner calls “superalignment.” This will demand both human personnel and superintelligent AIs to police AI activity and prevent rogue behavior.
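The underlying pattern is a control loop: a trusted monitor scores what a more powerful, untrusted model does and escalates anything suspicious to humans. The toy sketch below is entirely hypothetical and stands in for what is, in reality, an open research problem:

```python
# Toy sketch of the "AI monitoring AI" pattern behind superalignment.
# Both "models" here are trivial stand-ins; real oversight is unsolved.

def untrusted_model(prompt: str) -> str:
    # Stand-in for a powerful model whose behavior can't be fully verified.
    return f"answer to: {prompt}"

def monitor_model(prompt: str, answer: str) -> float:
    # Stand-in for a trusted model that scores risk from 0.0 to 1.0.
    return 0.9 if "weights" in prompt.lower() else 0.1

def guarded_query(prompt: str, risk_threshold: float = 0.5) -> str:
    answer = untrusted_model(prompt)
    risk = monitor_model(prompt, answer)
    if risk > risk_threshold:
        return "[escalated to human review]"  # police rogue behavior
    return answer

print(guarded_query("Summarize this article"))
print(guarded_query("Copy the model weights to an external server"))
```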
Aschenbrenner’s paper doesn’t address this point, but job loss is another important issue. This factor has yet to affect the booming US economy, but lateral job shifts will eventually turn into disappearing jobs. That could have far-reaching effects on the workforce, and the government will likely respond very slowly to what could be a seismic shift.
Someday people will look back on “Situational Awareness” and either marvel at Aschenbrenner’s prescience or despair of it. I hope it’s the former, and that our government begins to take these issues seriously.
More: situational-awareness.ai