Hello readers,
Before sharing what German philosopher Heidegger said about technology, I need to ask you a favour:
Please answer these quick questions to help me write content that you find interesting. I would greatly appreciate knowing how I can assist you.
Thank you!!
Back to Heidegger…
A few days ago, while commuting and mindlessly scrolling LinkedIn (guilty as charged!), I came across a post from Simón Villegas Restrepo, a great thinker who often writes about technology and philosophy.
Thinkers like him make me hopeful because, far too often, I see scientists spreading bad philosophy or philosophers spreading a misleading understanding of technology. Simón brings genuine insight from both disciplines.
Reading about Heidegger’s views on technology led me to find his whole discourse on the topic: the “Gelassenheit” speech he delivered in 1955.
Heidegger's warnings about technology's grip on society felt more relevant than ever:
We can say “yes” to the unavoidable use of technical devices, and at the same time say “no” in the sense that we prevent them from completely taking over our existence.
-Martin Heidegger, the “Gelassenheit” speech (1955)
We're not doomed to choose between complete AI adoption and absolute rejection. There's a nuanced path forward that few are talking about.
Benefiting from AI technologies while preserving our humanity should be a viable route.
Let me explain why this matters more than ever now.
Last week, I spoke with Sarah, a small business owner who felt paralysed by AI decisions. "Everyone's telling me to jump in," she said, "but no one's telling me how to do it thoughtfully."
Her concern echoes what I hear across boardrooms and coffee shops alike. We need three things:
Trustworthy material on what AI can do for us, practically, in our everyday life and work.
Trustworthy material on AI's limits: what it cannot do, or does too poorly to be useful.
A framework for engaging with AI that preserves our humanity while targeting inclusive progress, along with the ability and processes to quickly correct course when we stray from the path.
Here's what I've learned from working with organisations wrestling with this exact challenge:
The power of thoughtful AI adoption
The key isn't saying yes or no to AI—it's knowing precisely what to embrace and reject. Think of it as creating your own ethical AI compass.
Say YES to:
Technologies that augment human creativity rather than replace it
AI systems that promote transparency and accountability
Solutions that democratize access to knowledge and opportunities
Tools that reduce mundane tasks, freeing up time for human connection
Say NO to:
AI-driven applications or processes that diminish critical thinking
Systems that create or amplify social inequalities
Solutions that prioritize efficiency over human dignity
Implementations without clear ethical guidelines
What about practical implementation?
Three steps to thoughtful AI integration in your daily life and work:
Audit your intent—Before implementing any AI solution, ask: "Does this serve my (or someone else’s) humanity, or does it diminish it?" The answer often lies in how the technology will be used, not just what it can do.
Design for enhancement—Focus on solutions that enhance human capabilities rather than replace them. The goal is augmentation, not automation at all costs.
Measure what matters—Beyond efficiency metrics, track the human impact: Does this make your team more creative? More connected? More fulfilled in their work? How much time does it give them back?
We're at a pivotal moment in technological history. The decisions we make today about AI adoption will shape not just our businesses but also our society's future.
What's your framework?
I'm curious: How do you navigate these decisions in your organisation? What's on your "yes" and "no" lists?
Reply to this email to share your thoughts. Your perspective could help shape our collective understanding of thoughtful AI adoption.
Until next time,
-a
P.S. I'll launch the first episode of the Honest AI podcast next week, featuring a conversation with Erik J. Larson, author of The Myth of Artificial Intelligence. You won't want to miss it!