“AI Hallucination” freaks me out. This is why.

When I was in high school, my English teacher taught us how to do research on Google. “This is easy!” I thought to myself. “I can just Google any source, answer, or quote I need for my papers!”

It didn’t turn out to be so easy. 

Our English teacher told us that we couldn’t just find some random blog or Wikipedia entry and cite it for our papers. We needed to find quotes from credible sources. I soon discovered how difficult it was to find a reliable source from a trusted study or publication. Random Google searches yielded all kinds of results. Over time, I had to learn the right questions to ask, and the right things to look out for. 

The old saying goes, “Don’t believe everything you read.” As the years went on, this saying evolved to become “don’t believe everything you Google,” and then, “don’t believe everything you see on TikTok.” Now, we need to change it again: “don’t believe everything you see on AI.”

Why? Oh, just because of a little thing called “AI hallucination.” AI hallucination is “false information created by false reasoning,” in the words of Conor Murray at Forbes. You may have encountered something like this before on ChatGPT or Google’s Gemini. Sometimes you ask AI a question, and it confidently gives you answers that are just…wrong? Like, completely made up? 

For silly searches based on poorly structured questions, this is no big deal, right? AI is only getting better with time, right? Except, according to The New York Times, newer reasoning models are producing more false information than before. They’re getting worse. OpenAI has a test called PersonQA, in which it asks AI about public figures. According to Murray at Forbes, OpenAI found “the o3 model hallucinated 33% of the time during its PersonQA tests,” and their new o4-mini model “hallucinated 41% of the time during the PersonQA test.” 

These aren’t goofy, hypothetical scenarios. When asked questions about public figures, ChatGPT fabricates false information more than a third of the time. 

I stumbled on this earlier today when I asked ChatGPT a question about a musician I adore. It completely made up a song title, told me it was on the artist’s most recent album, and told me that the song’s lyrics dealt with “longing and the search for meaning.” When I asked ChatGPT why it made up the song name, it told me about AI hallucination, and that it can happen up to 30% of the time. 

So, back to the lesson I learned in high school English: we need to take responsibility for the sources we cite. We can’t believe everything we read, Google, or find on ChatGPT. We need to ask AI responsible questions to get reasonable answers, and we need to double-check the credibility of the answers we receive. 

Here’s why: if ChatGPT made up a song title, what reason do I have to believe ChatGPT when it tells me that hallucinations happen 30% of the time? How do I know that answer isn’t a hallucination? What does ChatGPT, the AI model, personally have to lose? 

Because honestly, can a source even be credible if it doesn’t have personal credibility to lose? 

Reese Hopper

Reese Hopper is the author of What Gives You the Right to Freelance? He’s also a prolific creator on Instagram, and the editor of this website.
