I recently argued that we should be more worried about the present issue of AI bias than about the possible future problem of AI alignment. Alongside bias, there’s another factor to be mindful of in our AI interactions: AIs might be lying to you.
I came across this view through Dan Q, who was riffing on Simon Willison’s observation that “we accidentally invented computers that can lie to us.”
When Simon and Dan say that AI, in particular ChatGPT, lies, they don’t use the word to imply intentional deception. Instead, “lying” is a shorthand that communicates with “visceral clarity” that LLMs have no internal representation of the meaning of their utterances and no good explanations for them. They are stochastic parrots.
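A toy sketch of the “stochastic parrot” idea: a language model repeatedly samples the next token from a probability distribution over its vocabulary, with no notion of truth attached to any option. The prompt and the probability values below are made up for illustration; a real model’s distribution spans tens of thousands of tokens.

```python
import random

# Hypothetical next-token probabilities a model might assign after
# the prompt "The capital of Australia is". These numbers are invented:
# the point is that a plausible-but-wrong answer carries real
# probability mass, and nothing in the model marks it as false.
next_token_probs = {
    "Canberra": 0.55,   # correct
    "Sydney": 0.35,     # a confident-sounding "lie"
    "Melbourne": 0.10,
}

def sample_next_token(probs: dict) -> str:
    """Sample one token in proportion to its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))
```

Run this a few times and roughly one completion in three will be wrong, delivered with the same fluency as the right one. That is the sense in which the machine “lies”: it optimizes for plausible continuations, not accurate ones.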
Dan adds that we ought to raise our collective level of technological literacy to better interact with these new tools:
That ChatGPT lies won’t be a surprise to anybody who’s used the system nor anybody who understands the fundamentals of how it works, but as AIs get integrated into more and more things, we’re going to need to teach a level of technical literacy about what that means, just like we do about, say, Wikipedia.
The idea struck a chord, perhaps because the same day I read Dan’s post I also attended a session at my son’s school on the difference between numeracy and mathematics. Numeracy is the ability to manipulate numbers, make estimates, and understand magnitudes in the everyday world. Mathematics, on the other hand, is a formal discipline that deals rigorously with abstract entities.
Educational institutions are realizing that numeracy and mathematics are connected but distinct skills that require different teaching strategies. Moreover, it’s becoming clear that most people who consider themselves bad with numbers are likely only struggling with the abstract aspects of mathematics because they were never given a numeracy foundation.
I can see a parallel with AI. The theory behind generative pre-trained transformers is out of reach for most of us unless we set aside years to pursue the equivalent of a Ph.D. in CS. But we can still develop a high-level understanding of how AI works that will enable us to interact with it effectively at work and, eventually, in our personal lives.
The speaker at my son’s school explained how kids can develop numeracy through exposure to and interaction with numbers in real-life scenarios, like counting coins or subdividing cakes. I wonder what the equivalent is for AI literacy.
Can this skill be developed by interacting with AIs in the real world, asking them questions in domains we’re familiar with to see how they behave? Or do we need some basic training in the theory behind their implementation?