I woke up to a troubling paper last week.
THE SIGNAL
Gerlich's 2025 study confirmed what I've suspected for years while watching my kids’ school overwhelm them with technology (screens, apps, AI and non-AI alike, since age 5!): students who habitually outsource their thinking to AI tools score lower on critical thinking assessments.
We call that “cognitive offloading” – where our brains, ever the efficiency machines, delegate mental heavy lifting to external tools.
THE STORY: A View From My Three Worlds
My perspective here isn't just theoretical. I'm living this tension daily across three distinct educational realms:
as a father of three navigating the AI-saturated homework landscape,
as the husband of a university professor witnessing higher education's frantic AI adoption,
and as someone who teaches executives and government officials how to implement these technologies responsibly.
When I work with senior leaders or technical teams at corporations, AI serves as a powerful problem-solving partner, freeing up enormous amounts of time (think coding!) while producing better documents. But, and it's a big ‘but’, these professionals have decades of context and judgment to apply.
When my 12-year-old reaches for ChatGPT for history homework, that's where I see the troubles starting.
These tools are not comparable to Google searches. They make mistakes and are unreliable, and many teachers don’t realize this themselves, let alone teach it to their students.
Here's what troubles me most: we were already treating education as transactional before AI arrived. My wife sees it daily – students viewing professors as skill-dispensing vending machines rather than mentors. Now AI promises to perfect this machinery, optimizing the “delivery” of education at scale.
We're moving too fast, though, and forgetting what learning actually entails.
Real learning is messy. It involves boredom, struggle, feeling intellectually inadequate – the very friction points technology promises to eliminate in the name of convenience, speed, or "personalization." (BTW, I've been hearing about personalization as the holy grail since I started in data science 20 years ago, and I'm still waiting for evidence that algorithmic personalization benefits the person rather than just extracting more value from them.)
The irony isn't lost on me: personalization technology often depersonalizes the learning process, while the unscalable, supposedly inefficient one-to-one human mentorship remains the gold standard for meaningful development.
THE STRATEGY: Finding the Middle Path (Without the Hype)
After years of juggling roles between academia and AI startups, I've learned that binary thinking gets us nowhere. This isn't about banning AI or surrendering to it (its adoption is not inevitable!).
Here are my two cents on this challenge:
Fight thinking replacement – We need explicit policies distinguishing between AI as assistant and AI as replacement. Why not ask schools to pilot "tech-free thinking zones" for core analytical development? I still find it incredible that every parent and teacher I speak with complains about kids being overstimulated by screens and technology (with no discernible benefit to learning), yet nobody takes action.
Rewire educational methodology – Design assessments that render AI tools useless or even counterproductive. Include “adversarial problems” specifically designed to expose flaws in AI-dependent thinking.
Establish clear digital envelopes – Define boundaries around what AI should and shouldn't touch in educational work. (BTW, this requires actual courage from administrators, not just policy documents.)
Double down on active learning – The human brain develops through struggle. Foster environments where students must think, debate, and solve problems collaboratively.
Recognize AI asymmetry – AI tools benefit educators tremendously while potentially harming novice learners. Let's leverage this asymmetry, using AI to reduce administrative burdens on teachers while preserving students' cognitive development.
It's possible to have both – but it requires intention.
SPARK: The Question We're Not Asking
While we're captivated by AI's promise of personalized learning (and trust me, I've built products on this very premise), we're missing a fundamental question: Does algorithmic personalization actually serve individual growth, or does it create narrow intellectual pathways that limit exposure to challenging perspectives?
I've assisted enough EdTech entrepreneurs to know that technology alone doesn't transform education – thoughtful anthropology and inspiring educators do. At age 33, I learned English as my second language and went back to school, taking computer science classes like Data Structures and Algorithm Design. I can tell you that real learning happens at the edge of comfort, exactly where AI tools promise to eliminate friction.
Perhaps our goal shouldn't be frictionless education at all, but rather to preserve the essential, transformative discomfort that leads to genuine growth, while using technology to eliminate everything else.
What's your experience with AI in learning environments? Are we cutting away the hype, or cutting away essential cognitive development?
Further Reading: