Worry about AI bias, not alignment

Several loud voices in the tech scene have grown concerned with the problem of AI alignment. What would happen if an artificial general intelligence’s goals, preferences, or ethics were not aligned with those of humanity? According to the most pessimistic among them, the outcome would be catastrophic.

A collaboration with DALL·E 2: oil painting depicting an apocalyptic robot takeover.

If an intelligence that operates with speed and storage capacity beyond human capability had goals of its own, they fear, we humans could end up like ants—a puny organic species that just happens to be in the way and will be brushed aside with little second thought.

AI alignment makes for captivating science fiction plots, but the majority of the discourse around it amounts to little more than prophecy. Besides, how can we hope to align artificial minds with our interests as a species when geopolitics clearly shows that we ourselves cannot align our interests in the first place?

Whether intentionally or not, the prophets of AI apocalypse are leveraging our innate appetite for doom-and-gloom scenarios, capturing an outsized share of attention and resources that could be better spent on existing problems. And as a matter of fact, we already have a concrete, manifest problem with AIs—the ones that are out in the wild today, not the hypothetical systems of tomorrow. That problem is bias.

AI bias is a documented phenomenon. ProPublica reported how the COMPAS criminal profiling tool exhibited bias against certain demographics, and ChatGPT infamously refused to write a positive poem about Donald Trump but promptly produced one for Joe Biden.

It’s crucial to remember that large language models don’t think for themselves. They merely compute the most likely next token for a given input based on the humongous data sets on which they have been trained and the human-in-the-loop adjustments they received. This process presents several opportunities for errors to sneak in and biases to develop.
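
To make that mechanism concrete, here is a deliberately tiny sketch in Python: a word-level bigram model, nothing like a real transformer and purely illustrative, showing what “compute the most likely next token” boils down to. The model can only reproduce the patterns, and therefore the biases, found in its training text.

```python
import random

# Toy word-level bigram "language model": it only counts which word follows
# which in a tiny corpus. Purely illustrative; real systems use neural
# networks over subword tokens, but the principle is the same: output can
# only reflect patterns present in the training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follow_counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts.setdefault(prev, {})
    follow_counts[prev][nxt] = follow_counts[prev].get(nxt, 0) + 1

def next_token(prev_word):
    """Sample the next word in proportion to how often it followed prev_word."""
    candidates = follow_counts.get(prev_word, {})
    if not candidates:
        return None
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short continuation: the model can only echo its training text.
word, output = "the", ["the"]
for _ in range(5):
    word = next_token(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```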

Humans are biased, and it should come as no surprise that our creations are biased as well. The problem is that AI bias is subtler than ours and harder to spot. ChatGPT can generate perfectly worded utterances that sound entirely plausible yet are completely wrong. It will promptly produce a scientific-sounding explanation for whatever nonsense theory you prompt it with.

As we delegate more work and decision-making to our artificial interns, we ought to remember that their decision process is not objective.

I find the AI alignment debate concerning not because of the prospect of robots taking over the world but because of the attention and resources it diverts from investigating and solving concrete problems such as AI bias.

That is not to say that AI alignment should be ignored or that we should neglect long-term scenarios and big-picture thinking. Tomorrow’s problems are the offspring of the solutions we’ll invent for today’s problems. But that assumes we’ll actually solve today’s problems, something that becomes increasingly hard when more and more focus is directed toward sensationalistic scarecrows.

The next time you interact with an AI, worry about the accuracy of its suggestions, not its world domination agenda.

