📡 SIGNAL
We follow AI into errors 30–90% of the time.
A recent study by Dudek & Schulte (2024) shows that in time-sensitive decisions, people not only rely on automated recommendations, but copy wrong ones at alarmingly high rates.
Look, I’ve been there. It starts with feeling strapped for time and finding shortcuts to get the job done, and it quickly spirals out of control: I can no longer send a Slack message without a Claude pass!
Even more troubling: participants in the study reported high satisfaction with systems that made objectively bad decisions, as long as they “felt fast and confident.”
Wow.
What we perceive as competence in AI often comes down to slick interfaces and prose more polished than our own. It has nothing to do with actual output quality. This is, unfortunately, a feature of our cognitive architecture. And businesses are leveraging this psychology at scale.
Humans tend to:
Overtrust tech when it’s consistent (even if wrong)
Underreact to AI failures (“it was just a one-off”)
Defer decision-making when under time pressure
In safety-critical fields like healthcare, aviation, and automotive, this quirk becomes a systemic risk.
🚗 STORY
“It’s technology, it doesn’t get it wrong.”
A few years ago, I let a friend’s kids try out my new Tesla (To be clear, I was driving!). We were about to take the first ride with Autopilot activated.
The dashboard lit up. The car aligned itself perfectly on a curvy road. No hands. Just the soft electric hum.
Then I heard it.
“Mum… what if it gets the next turn wrong?”
Her response was instinctive, almost rehearsed:
“It’s technology, it doesn’t get it wrong.”
That sentence hung in the air longer than it should have.
The kids weren't convinced. They kept their eyes locked on the road. Every curve made them fidget. Meanwhile, the adults relaxed back, trusting the brand, the sensors, the update.
The children’s reaction wasn’t childish. It was deeply human.
In fact, it was the correct baseline: this system can make mistakes.
The mother’s response, instead, was a symptom of automation bias: the quiet mental shortcut that says “if it's tech, it's probably right.”
I’ve built enough software to know this is dangerous thinking.
It reminded me of what researchers now call “automation-induced complacency.” When people grow comfortable, they disengage. And when something goes wrong, they’re slower to react than if no tech was involved at all.
That ride was a powerful reminder: blind trust in automation doesn’t make us safer. It makes us vulnerable, AND too slow to notice.
🧠 THE HUMAN OVERRIDE
Automation bias is a cognitive reflex. The brain likes shortcuts, especially when time is short, interfaces are smooth, and authority feels outsourced.
But I’ve got good news! It’s manageable.
Drawing on the paper by Dudek & Schulte, plus insights from aviation psychology and human factors design, I've developed what I call the 3-Second Human Override Rule: a simple, actionable framework for professionals in high-stakes sectors.
✅ What is the 3-Second Rule?
Every time you’re about to accept an AI-driven recommendation in a critical context, pause for 3 seconds and ask:
“Would I still make this call if no AI told me to?”
It helps keep your mental engine on.
🔍 Why it works:
Interrupts over-trust
That 3-second pause resets the cognitive loop. You interrupt the automatic deference to “the machine must be right.”

Forces a mental simulation
Imagining the outcome without AI forces you to re-engage critical faculties. You’re holding the thought of clicking “accept” and testing it.

Reinforces accountability
If you wouldn’t sign off on the decision manually, you probably shouldn’t just because the screen suggested it.
🛠 Practical Application by Sector
Healthcare
Before confirming AI-generated radiology readings, pause and mentally check the image yourself. If it conflicts with your clinical intuition, flag it.
Finance
Don’t just follow the algo’s trade suggestion. Ask: Would I bet my bonus on this without the bot’s tip?
Aviation
Pilots are trained for this: verify autopilot behavior, don’t just monitor it passively. We need the same rigor in corporate and daily life automation tools.
HR / Hiring
Would you invite this candidate based only on the CV and cover letter? If not, don’t let a matching algorithm shortcut the call.
👣 5-Step Adoption Plan
1. Train on bias: Include automation bias in employee onboarding
2. Audit interfaces: Check if design elements over-emphasize speed and confidence over transparency
3. Create override rituals: Build simple checkpoints that require human input before high-impact decisions
3.1. If a task is too boring or repetitive, rethink from first principles whether you actually need that task at all!
4. Track reversals: Monitor when human overrides outperform AI predictions
5. Celebrate challenges: Reward employees who push back against automation when the pushback is backed by reasoning
Bias thrives in silence. Override it out loud.
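If you want to make the “Track reversals” step concrete, here is a minimal sketch of how a team might log overrides and measure how often the human turned out to be right. The `OverrideLog` class and its field names are my own illustration, not from the study or any specific tool; in practice this data would live in your audit trail or decision log.

```python
from dataclasses import dataclass, field

@dataclass
class OverrideLog:
    """Tracks decisions where a human overrode an AI recommendation."""
    records: list = field(default_factory=list)

    def log(self, ai_choice: str, human_choice: str, actual: str) -> None:
        # Only count cases where the human actually overrode the AI.
        if human_choice != ai_choice:
            self.records.append(human_choice == actual)

    def override_win_rate(self) -> float:
        # Share of overrides where the human turned out to be right.
        if not self.records:
            return 0.0
        return sum(self.records) / len(self.records)

log = OverrideLog()
log.log(ai_choice="approve", human_choice="reject", actual="reject")    # human was right
log.log(ai_choice="reject", human_choice="approve", actual="reject")    # human was wrong
log.log(ai_choice="approve", human_choice="approve", actual="approve")  # no override, not counted
print(log.override_win_rate())  # 0.5
```

Even a crude win rate like this turns “we trust our people to push back” from a slogan into a number you can review each quarter.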
🔥 SPARK
Have you ever trusted a system because it “sounded confident”?
If you've ever clicked “accept” on an AI suggestion (even when something felt off), you’re not alone. But this is precisely the moment where we need to be most awake.
Kids know to ask the uncomfortable questions. Adults are trained to trust the interface. Maybe we’ve got it backward.
So this week, ask yourself:
What decisions have I silently outsourced?
And if you lead others:
How are your systems nudging users to defer without thinking?
You can start fixing this. Here's what helped me dig deeper:
Dudek & Schulte (2024) on automated support critique in UAV task allocation
Georgetown CSET report on Tesla autopilot incidents and automation bias in safety-critical systems
JAMA editorial on automation bias risks in clinical decision support
Washington Post: users are wired to over-trust AI; fight it by “distrust and verify” (paywalled or give them your email)
P.S. If you’re feeling overwhelmed by the AI noise, its claimed capabilities, and the plethora of tools available, or if you're struggling to make AI work for your specific context, I can help.
I offer limited consulting slots to executives and professionals seeking to transition from AI confusion to discernible results. No generic frameworks. Just personalized roadmaps that match your skills, job-to-be-done, and timeline.
Only taking 2 slots in August. They fill fast.
Book a discovery call - takes 2 minutes.


