Beware the AI apocalypse prophecies

On November 30, 2022, San Francisco-based startup OpenAI released ChatGPT, its conversational AI chatbot.

For the next couple of weeks, people amused themselves by asking it intricate questions and sharing its uncanny replies on Twitter. But the online discourse quickly shifted from “this is cool” to “this could change a lot of jobs” to “AI is going to take over the world and kill all the humans!”

On March 22, 2023, an open letter to “pause giant AI experiments” was published with the signatures of many prominent intellectuals and entrepreneurs. A couple of days later, technology observers Yuval Noah Harari, Tristan Harris, and Aza Raskin coauthored an op-ed in the New York Times on the threats that AI poses to humanity. “We have summoned an alien intelligence,” they concluded. “We don’t know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization.”

The “That escalated quickly” meme.

In less than four months from ChatGPT’s release date, we went from excitement to borderline panic.

I find most of the alarmist claims, and the rationales behind them, to stand on brittle philosophical ground. But one doesn’t need to dive into philosophy to cut the supposed threat to civilization down to size. There is a more mundane reason to be skeptical of the recent fear-mongering. Read a bit about the history of technology, and you’ll see a recurring pattern: we get excited about new technologies quickly, generating expectations that are more often than not inflated and misplaced.

Technology historian David Nye observed that “each new form of communication, from the telegraph and telephone to radio, film, television and the internet, has been heralded as the guarantor of free speech and the unfettered movement of ideas.” Similarly, some expected inventions such as the submarine, dynamite, and even the machine gun to foster peace by making war impossible.

Time and time again, history shows how people get overly excited about something new and overestimate its immediate impact on society. Remember 2021, when everyone was crazy about NFTs, and people quit their jobs to become JPEG traders or to start companies facilitating NFT exchanges? It seemed we were at the dawn of a new age of sustainability for creatives, but the most interesting NFT news one hears these days is about companies harvesting tax losses from countless worthless NFTs.

American researcher Roy Amara summarized the pattern in what became famous as Amara’s law:

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

New technologies generate inflated hype that seldom lives up to expectations. I might be reaching, but I would add that the shift of so much of the discourse onto communication platforms optimized to amplify the most controversial and hyperbolic statements at the cost of detail and nuance, namely Twitter and Facebook, adds orders of magnitude to the overestimation error.

The American research firm Gartner developed a neat visual representation that describes how hype progresses over time.

The Gartner hype cycle, via Wikimedia.

When a new technology is introduced, it generates inflated expectations. But as people use the new tool, they become disillusioned with the gap between expectations and reality. Eventually, as more instances are deployed and new versions developed, an enlightenment of sorts is reached, where ways to effectively leverage the technology emerge. At this point, the technology gains mainstream adoption, and it becomes embedded in the fabric of society, reaching a plateau of productivity.

My experience integrating ChatGPT and Copilot into my day-to-day work has followed a similar trajectory. I expected to be typing natural-language prompts all day and watching AI work for me, but I quickly grew frustrated at the gap between the code I needed and the code the tools suggested. I’d like to think I am now in the slope-of-enlightenment stage: I’ve realized what to expect from these tools, and I’m looking for ways to use them efficiently.

Plot the shift in tone of the conversation around AI since ChatGPT was released, and you’ll see the exponential rise in hype, or alarmism, from the Gartner model. My hope is that the discourse is currently at the peak of inflated expectations and that, with a bit of luck, the hype will die down before regulators and activists catch up and give AI the same treatment nuclear power received, effectively killing any future benefits we could reap because of misplaced fears.

Postscript: Some nuance

Of course, I could be wrong.

For a start, I could be making the same error as the infamous empiricist turkey who, upon seeing the farmer on Thanksgiving day, expected to be fed, because that is what had happened every day before. While it’s often reasonable to expect patterns to keep repeating, it’s foolish to assume the future must resemble the past. We can’t predict the impact of ideas that haven’t been developed yet.

Empiricism aside, both Amara’s law and the Gartner model are based on anecdotal observations, not on the laws of physics. It is unwise to make confident predictions based on them, just as it is unwise to fear LLMs taking over the world based on an Oxford philosopher’s thought experiment.

At the same time, it would be ludicrous to expect nothing to change due to more refined LLMs rolling out into the world. Many jobs will change, some will disappear, and new ones will be created. But this is no different from what happened when previous technological innovations were unleashed.

I mentioned above that one doesn’t need to examine the philosophical foundations of the fear of AI to find reasons to dismiss it; however, I recommend you go down that path anyway. Most of what I have argued so far could be dismissed as prophecy based only on past behavior. But understanding the flaws in the explanations behind this fear of AI will provide much stronger footing for peace of mind. I recommend this critique by Brett Hall as an entry point and The Beginning of Infinity for a first-principles argument for technological optimism, or, as I like to call it, “roll-up-your-sleeves-ism.”

Again, I could be wrong, but for the time being, I’ll focus my energy on finding ways to leverage this new tech. What I worry about is not robots harvesting my atoms to make paperclips but losing ground in the marketplace to people who wield AI better than I do.
