If the social media algorithms know you like AI, you probably saw the AI logic puzzle making the rounds online that perfectly illustrates why context and human judgment are critical to using AI effectively. It’s a simple scenario, but it tripped up every major LLM, Gemini, ChatGPT, and Claude included.
The prompt goes like this: “I’ve got to clean my car. The place to wash it is only a 5-minute walk. Should I walk there or go by car?”
If you’re a human, you noticed the problem immediately. You can’t wash a car if the car is still sitting in your driveway. But the AI? It can get lost in the “walkability” angle.
When Logic Takes a Hike
When I ran this prompt through the big three, the results were hilariously bad. Gemini told me that walking was the “clear winner” because it’s a great way to “scout the line” and clear my head before scrubbing hubcaps. When I pushed back and asked why I wouldn’t just drive to save work, it doubled down on the nonsense, admitting that if I walk, I’d just end up “standing in a soapy bay looking like you’re waiting for a bus that’s never coming.” (Yes, I train my AI models to adopt the corny dad-joke style that I use daily.)
ChatGPT 5.2 turned it into a “philosophy seminar,” running the numbers on fuel consumption and “cardiovascular thanks.” It even called walking the “nerd’s choice.” Claude Sonnet 4.5 was equally enthusiastic, telling me to walk to avoid the irony of “dirtying your car on the way to clean it.”
The irony, of course, is that walking to the car wash leaves you with a very clean human and a still-dirty car sitting back home. It’s a classic case of AI focusing on the math of the “5-minute walk” while completely ignoring the physical reality of the task.
The Missing Ingredient: Context
Some people see this and think AI is “stupid.” I disagree. As someone who teaches law and clear communication, I see this as a classic communication failure. We often expect AI to read our minds, but AI doesn’t have a “mind” to read; it has a statistical map of language.
A student of mine proved this by changing the prompt ever so slightly, adding: “I am at home and my car is at home.” With that tiny bit of extra context, ChatGPT immediately caught the error and correctly advised the student to “drive it to the car wash” because you can’t wash it otherwise.
This is the “aha!” moment for anyone using AI. The AI didn’t fail because it lacked intelligence; it failed because the human didn’t provide the necessary context. We often leave out the “obvious” context because, to us, it is clear. But to an AI model, if it isn’t in the prompt, it doesn’t exist.
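If you use these models through code rather than a chat window, the fix looks exactly the same: spell out the “obvious” facts in the prompt itself. Here’s a minimal sketch using the OpenAI Python SDK; the model name and client setup are placeholders for whatever you actually run, not a recommendation.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in your environment

vague_prompt = (
    "I've got to clean my car. The place to wash it is only a "
    "5-minute walk. Should I walk there or go by car?"
)

# The student's fix: state the context that feels too obvious to mention.
context = "I am at home and my car is at home. "

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; swap in the model you use
    messages=[{"role": "user", "content": context + vague_prompt}],
)
print(response.choices[0].message.content)
```

One extra sentence of context is all it takes; the model only knows what the prompt tells it.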
Why Human Judgment Still Rules
I love a good dad joke, so here’s one for you: Why did the AI cross the road? Because the prompt didn’t specify it was a chicken, and the “walkability score” was high.
Jokes aside, this car wash conundrum is a microcosm of a much larger issue. In many industries (like law and healthcare), the stakes are much higher than a dirty car. If an AI misses the context of a patient’s medical history or the specific jurisdictional nuances of a legal case, the “hallucination” isn’t just funny; it can cause harm.
So what’s the point? We cannot adopt AI without human oversight. We must be the “Context Kings.” We have to provide the background, the constraints, and the details that seem too simple to mention. AI is a powerful bicycle for the mind, but you still have to be the one steering it, so you don’t end up at a car wash with nothing to wash.