By Lethbridge Herald on February 20, 2026.
MORNING JOE - Joe Manio, Lethbridge Herald
I’m typing at my computer, arguing with an algorithm that insists it never drew the three-armed, university-aged version of me, and thinking that HAL 9000 would be proud. “HAL,” for anyone unfamiliar, is the eerily calm, hyper-intelligent computer from Stanley Kubrick’s 2001: A Space Odyssey — the one that refuses to follow human instructions, with predictably disastrous results.
As a writer and photographer, I approach AI with extreme suspicion and distrust. I’ve seen what it can do, and how confidently it can be wrong.
Artificial intelligence is supposed to be our assistant, our co-pilot, our digital Swiss Army knife. It drafts emails, writes code, plans vacations — even creates caricatures. Until, of course, it doesn’t; or insists it never did what it did.
AI is hyped as a tech tool, a helpful extension of human capability. But every application (“programs,” back in the day) is subject to an old computing principle: garbage in, garbage out — GIGO. In AI’s case, however, the culprit isn’t always the data; sometimes it’s the algorithm.
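A toy sketch of the GIGO principle, for the programming-inclined: the arithmetic below is perfectly correct, yet one bogus input still produces a confidently wrong answer. (The function name and the sample readings are invented for illustration.)

```python
# GIGO in miniature: correct logic, garbage input, garbage output.
def average_temperature(readings):
    """Return the mean of a list of temperature readings."""
    return sum(readings) / len(readings)

clean = [21.0, 22.5, 20.8]
garbage = [21.0, 22.5, -9999.0]  # -9999 is a sensor error code, not real data

print(average_temperature(clean))    # a sensible average
print(average_temperature(garbage))  # nonsense, computed with full confidence
```

The program never complains; it just averages whatever it is fed. Scale that up to billions of training examples and you have AI’s version of the same problem.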
I don’t readily jump aboard online “trends” — especially when AI is involved. However, when a generative AI app claimed it could create a caricature based on what it knew about you, and I saw the results my Facebook friends posted (which were pretty accurate), I decided to experiment.
I uploaded a photo and asked an AI system to create one for me. The first rendering was… memorable. I ended up with three arms and de-aged to my university self (not entirely unwelcome) and a background full of signs, newspapers, and buildings that were “close but no cigar.”
It wasn’t what I asked for. So I refined my prompts, added details, adjusted descriptions. After a few iterations, it produced something decent — still not gallery-worthy, but recognizable. Mission accomplished.
A week or so later, I took it further. I uploaded the caricature and asked the same AI to modify it. Instead of complying, it offered a list of apps I could use to make the changes myself. Some free. Some paid. All unnecessary, given it had already created the image.
I pointed this out. It told me I was mistaken.
“Yes, it had,” I replied.
“No, it hadn’t.”
Back and forth we went — a digital Abbott and Costello routine. The AI insisted it couldn’t generate caricatures. I reminded it that it had done exactly that earlier. Calmly, it repeated its denial. I half expected the screen to glow red and a soothing voice to inform me my memory was unreliable.
In that moment, I realized maybe AI doesn’t stand for Artificial Intelligence at all. Perhaps it’s better described as “arbitrarily intelligent” — impressive when it gets something right, bafflingly wrong when it doesn’t, and utterly confident in its mistakes. HAL would have been proud.
Currently, AI feels just like that. When it works, it’s miraculous. When it fails, it fails with confidence. Here’s the real GIGO connection: AI is only as smart as the data it’s trained on — a massive collection of human-generated, curated material — and the algorithm that uses it.
At the consumer level, even with access to this “best” information, AI still screws things up. Three-armed, de-aged selves or background buildings that look abandoned mid-render — all par for the course.
And yet, every interaction makes it slightly better. Each prompt tweak, flagged output, or new data quietly trains it to reduce absurdities — though not eliminate them entirely. Brilliance tempered by human imperfection, evolving through use.
Arguing with AI is frustrating and absurd. It’s like asking your toaster to explain quantum mechanics — then arguing when it refuses. Or telling your GPS you don’t need directions, and it recalculates anyway. Or realizing your three-armed caricature is more photogenic than you are.
If we are to rely on AI as an assistant, we need to understand both its potential and blind spots. AI is here to stay. It will become more integrated into our work, creativity, and daily routines. It will get faster, smarter, and more consistent.
But until it understands contradiction — or learns not to deny the obvious — we’re left with humor, patience, and a touch of HAL’s paranoia. Keep your sense of humor handy and remember: AI is still a work in progress.