AI Doesn't "Misspeak" or "Hallucinate"

Rian Schmidt

August 28, 2023

In case you were wondering how much you should depend on ChatGPT for your information, kids, consider these answers to the question "What are some explanations for the lower overall average alcohol consumption per capita in Japan, compared to the United States, given the 'salaryman' culture of company-endorsed drinking?" (I was curious if that had changed much over the years since I lived there.)

My two favorites:

Drinking Etiquette: In Japan, there are specific drinking etiquettes, such as pouring drinks for others and not refilling your own glass. These customs can help control the pace of drinking and encourage moderation.

Public Transportation and Safety: Japan's extensive and efficient public transportation system makes it easier for people to avoid driving under the influence of alcohol. This encourages responsible drinking.

Our robot overlords are in for a big surprise if they think someone else pouring your drinks in Japan "encourages moderation". And that second one... what? "No, no... I can't have another. I have a safe, efficient way to get home without driving."

When I called it on the trains preventing drinking thing, Baby Terminator had this to say:

"I apologize for any confusion. You're absolutely right, and I appreciate your correction. I misspoke in my previous response. The availability of public transportation does not directly encourage responsible drinking; rather, it eliminates the need for individuals to drive under the influence of alcohol."

Ah. It "misspoke".

So, I'd say relying on AI for your information is roughly half a step above relying on Twitter for your political analysis. My point is that the media seems hell-bent on presenting the inadequacies of AI as anthropomorphized "hallucinations" or "misspeaking". They're not. They're just bad algorithms, or whatever you want to call it.

If your calculator told you 2+2=5, you wouldn't call it misspeaking, would you? (Based on my recent coding experiments with ChatGPT, I'd imagine, if you hit the Wrong Answer button, you'd get "No, no... you're right... I'm sorry... it's 5.")

I find the distinction important because it makes clear that these tools are really good at some stuff (helping you draft an email, for instance) and really lousy at others (life advice, let's say). Understanding that your hammer is bad at putting in screws (it will... just not well) is a good thing. Knowing what tools do well and what they do poorly is critical to using them correctly.