
Last week, a professor at Texas A&M University-Commerce failed his entire class for allegedly using ChatGPT to write their essays. How did he know? He asked ChatGPT whether it had authored the essays, and the AI said yes.
The school eventually reversed the decision, but not before the story went viral. According to The Register, “The kerfuffle highlights whether or not educators should use software to detect AI-produced content within submitted coursework.”
The “kerfuffle” also highlights a deeper issue: the necessity, and the urgency, of developing AI literacy.
Let’s try to muster some empathy for the instructor. The way he used ChatGPT makes it clear he had no idea how the tool works, or of its nature as a stochastic parrot: a system that strings together statistically plausible text without understanding it, verifying it, or remembering it.
You can see how someone without even a rudimentary understanding of how ChatGPT operates might assume the tool can answer any question, including whether it wrote a given essay.
The conclusion that ChatGPT would know or remember the essays it helped generate falls apart as soon as one peers under the hood: the model keeps no record of its previous outputs, and when asked a leading yes/no question it will happily produce a confident-sounding answer either way. We can forgive the instructor for misjudging what ChatGPT can do, but he was far too quick to act on his conclusions. Using a tool to make decisions without the faintest idea of how it generates its answers is irresponsible, especially when those decisions affect other people.
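To make that concrete, here is a minimal sketch of the statelessness at play, assuming the pre-1.0 openai Python client that was current at the time; the prompts and placeholder key are hypothetical. Each API call sees only the messages included in that request, so a later conversation has no access to what an earlier one produced:

```python
import openai

openai.api_key = "sk-..."  # hypothetical placeholder; supply a real key

# Conversation 1: generate an essay.
first = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a short essay on the French Revolution."}],
)
essay = first["choices"][0]["message"]["content"]

# Conversation 2: a completely separate request asking about authorship.
# Nothing links it to conversation 1; the model sees only this message list.
second = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": f"Did you write this essay?\n\n{essay}"}],
)

# The reply is newly generated text, not a lookup in a record of past outputs.
print(second["choices"][0]["message"]["content"])
```

Whatever the second call returns is freshly generated text shaped by the phrasing of the question, not the result of consulting an authorship log, because no such log is exposed to the model.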
I’m speculating here, but I wouldn’t be surprised if he had been exposed to plenty of prophecies of imminent AI doom: stories about how AI could make him redundant while also giving his students new ways to cheat (one student did, in fact, admit to using ChatGPT). That kind of narrative could easily have fed an adversarial mindset and hasty decision-making.
As AI inevitably blends into more of our day-to-day lives, we can expect more incidents like the one at Texas A&M University-Commerce. To prevent them, we need less “AI will take your job” scaremongering and more “here’s when and how AI product X can be useful” information.