AI is smart, but it’s making people lazy thinkers. The more we trust it, the less we question it—even when it’s wrong. Managers must stop turning workers into button-clickers and force them to think before their brains go on permanent vacation.
I have seen this movie before. Different cast, same plot. First it was calculators. Then GPS. Then Google. Now it’s AI—the smooth-talking, always-confident assistant that never blinks, never hesitates, and rarely admits it’s wrong. And here’s the uncomfortable truth nobody wants to say out loud: AI isn’t just helping—it’s quietly replacing human judgment. People now swallow answers like gospel, even when the nonsense is staring them straight in the face.
I’m not guessing. I’m watching it happen in real time.
Let’s rewind. Calculators were supposed to make us better
at math. In many ways, they did. Students got faster. Confidence went up. But
then Mark LaCour ran that quiet little experiment in 2019. He fed students
wrong answers through calculators. Not crazy wrong—just slightly off. And what
happened? Silence. No alarms. No raised eyebrows. Even when the answers looked
ridiculous, people nodded and moved on. Garbage in, gospel out. That’s
when I knew something was off.
Then came GPS. The miracle guide. No more maps. No more
getting lost. Just follow the voice and arrive. Except there’s a catch. Louisa
Dahmani and Véronique Bohbot found that heavy GPS users had weaker spatial
memory. Translation: the more you rely on the machine, the less your brain
remembers how to navigate. I’ve seen drivers follow GPS straight into dead
ends, lakes, even restricted military zones. The screen says “turn,” and they
turn—common sense be damned. When the brain sleeps, the machine becomes
king.
Now add the internet. The so-called “Google effect.”
People don’t remember what they can easily search. Why bother storing
information when it’s one click away? Memory becomes optional. Thinking becomes
negotiable. The brain starts outsourcing itself.
Then AI shows up—and it doesn’t knock. It kicks the door
open.
Unlike calculators or GPS, AI doesn’t just give
directions or numbers. It talks. It explains. It sounds smart—too smart. It
writes essays, diagnoses problems, drafts legal arguments, analyzes data, and
answers questions with the confidence of a seasoned expert. That confidence is
the trap. Because confidence sells—even when it’s wrong.
Steven Shaw and Gideon Nave, researchers at the Wharton
School of the University of Pennsylvania, didn’t mince words when they
called it “cognitive surrender.” That’s not a metaphor. That’s a warning shot.
In their experiment, people using AI performed better when the AI was right. No
surprise there. But when the AI was wrong, they didn’t just slip—they crashed.
They did worse than people who used their own brains. Why? Because they stopped
thinking. They handed over the wheel and went to sleep.
I’ve seen this play out in offices. A young analyst pulls
up an AI-generated report. Clean. Polished. Impressive. But buried inside is a
faulty assumption that poisons the entire conclusion. I ask, “Did you check the
logic?” He shrugs. “The system generated it.” That’s the new defense. Not “I
made a mistake.” Not “I didn’t know.” Just blind faith in a machine.
We’ve gone from thinking tools to thinking
replacements.
And managers? Most of them are clapping. Faster output.
Lower costs. More efficiency. They push employees to use AI like it’s oxygen.
“Integrate it. Leverage it. Optimize it.” Nobody asks what it’s doing to the
human mind. Nobody asks what happens when workers forget how to think.
Let me spell it out. When people rely on AI for answers,
they stop questioning. When they stop questioning, they stop learning. And when
they stop learning, they become dependent. That’s not productivity. That’s
intellectual atrophy.
A chess study by
Stefanos Poulidis, a researcher at INSEAD, should scare anyone paying
attention. Students who could access AI tips anytime performed less than half
as well as those who had limited access. Less than half. Think about that. The
more help they had, the worse they got. Because they leaned on the machine
instead of building their own skill. Too much help becomes a handicap.
This isn’t new. In 1999, David Dunning and Justin Kruger
formalized what we now call the Dunning-Kruger effect: people with low
ability overestimate their competence. AI pours gasoline on that fire. It gives
weak thinkers strong-sounding answers. Now they don’t just think they’re
right—they have “evidence” to back it up. Wrong answers, dressed in a suit and
tie.
And let’s not pretend AI is flawless. Studies from 2023
and 2024 showed that large language models can produce
“hallucinations”—confident but false statements—at rates ranging from 3% to
27%, depending on the task. That’s not a rounding error. That’s a landmine. Yet
people read those outputs and nod like it’s scripture.
I’ve watched people accept fake legal cases, fabricated
data, and imaginary citations because the AI said so. No verification. No
skepticism. Just quiet obedience. When the lie sounds smooth, truth doesn’t
stand a chance.
So what’s the fix? It’s not banning AI. That’s foolish.
AI is powerful. It can make us better—if we stay awake. But that’s the
condition: we must stay awake. Managers need to stop hiring button-clickers.
They need thinkers—people who enjoy wrestling with problems, not outsourcing
them. Shaw’s research shows that people with a high “need for cognition” are
less likely to surrender. They question. They doubt. They push back. That’s the
kind of employee who won’t let a machine run the show.
Incentives matter too. When people are rewarded for
accuracy—not speed—they start paying attention again. In Shaw’s experiment,
adding feedback and rewards made users more likely to override bad AI answers.
Not perfect, but better. It’s a start.
And here’s the part nobody likes: sometimes you need to
cut the machine off. AI-free zones. AI-free tasks. Let the brain sweat again.
Because a brain that never struggles becomes a brain that can’t.
I’m not romanticizing the past. I’m not saying we should
go back to paper maps and manual calculations. I’m saying we need to stop
pretending that convenience comes without cost. Every time we offload thinking,
we pay a price. The question is whether we notice before the bill comes due. Right
now, I see a generation growing up that knows how to prompt, but not how to
think. They can generate answers in seconds, but can’t tell if those answers
make sense. They trust the machine more than their own judgment. That’s not
progress. That’s surrender.
Offloading is fine. Giving up is another matter. And
make no mistake—if leaders don’t wake workers up, we won’t just lose skills.
We’ll lose something deeper. The ability to question. To doubt. To think. Because
once the brain goes on permanent vacation, it doesn’t send a postcard.
If you’re looking for
something different to read, some of the titles in my “Brief Book Series”
are available on Google Play Books. You can also read them here: Brief Book Series.
