Why the 2027 "superintelligence" claims (and other news) need scrutiny.
The last few weeks in AI (from someone with a soul) | Soulcode Edition #5
Want AI insights without the hype? (I mean it)
In a world of algorithms and automation, we need a human compass. This newsletter brings you AI and tech news through the lens of ethics and humanity, where technology meets consciousness.
Every two weeks(-ish), we'll explore AI's most noteworthy developments and their impact on our shared future. There will be no jargon or hype. I promise clear insights with a conscience.
Hello readers,
First, I need a quick favor from you. It takes two minutes:
Now, on to this edition of Soulcode.
The tension between AI innovation and responsible deployment has never been more evident. As your guide through this evolving landscape, I'm committed to cutting through the hype and highlighting what truly matters and what's ethically sound.
Let's dive into the most impactful developments from the past month or so.
🇦🇺 Australia's Call for an AI Safety Institute
Hundreds of scientists, philanthropists, and professionals across several disciplines are calling for Australia to establish its own AI Safety Institute (you can sign this letter to support it). Australia would join the UK, the US, South Korea, Japan, and many others that have already created dedicated AI security, safety, and governance bodies. Unlike vague policy declarations, this initiative focuses explicitly on developing engineering standards and safety protocols.
Why this matters: Have you ever wondered why your laptop doesn’t catch fire? Engineering bodies developed safety standards, and regulators enforced them, so that products reach us safely. The same goes for essential infrastructure and transport, from bridges to planes. We need the same practical approach to AI's critical limitations: systems that perform well in controlled environments often break in unexpected ways in the real world, and they may generate societal turmoil we aren't yet researching seriously enough.
With a soul: AI safety institutes bring much-needed technical rigor to ethical oversight, and every country would benefit from developing independent AI capabilities. Fast adoption without ethical guardrails is a recipe for disaster.
The video also features an interview with my colleague Tiberio Caetano, Chief Scientist at the Gradient Institute.
📊 Superintelligence Forecasting's Philosophical Masquerade
The widely circulated AI 2027 report explores the speculative case for “superintelligence” arriving within three years, and it showcases AI forecasting's fundamental problem: conclusions that appear scientific but rest on deeply contested philosophical assumptions. Before venturing into science fiction, we should ask some other questions. Is general intelligence even possible, let alone “super”? Which tasks actually require intelligence, and which can be accomplished without it? These are just a few starters.
Why this matters: The superintelligence discourse has devolved into unproductive extremes. Believers prophesy existential doom. Skeptics dismiss it entirely. Meanwhile, the nuanced middle ground gets lost: these aren't scientific models like climate forecasts but belief systems dressed in technical jargon. When "expert opinions" are treated as data, and philosophical assumptions about consciousness and agency go unexamined, we're no longer in the realm of empirical science.
With a soul: Unlike climate models built on physical evidence, superintelligence forecasts depend on untestable assumptions about how intelligence works. The charts look impressive, the names attached carry weight, but digging through the actual literature reveals a house of cards.
The real near-term risk is something much more concrete: current market incentives already push for unchecked automation and AI deployment before proper governance structures exist. While we debate sci-fi scenarios, real systems are growing faster than our capacity to govern them.
🎓 Anthropic's Claude for Education: Promise and Peril
Anthropic has launched Claude for Education, designed specifically for classroom settings, with safety features meant to prevent harmful content generation and, allegedly, to enhance critical thinking.
Early trials at Stanford University revealed that while students found Claude helpful for understanding complex concepts, 37% admitted using it to complete assignments without actually learning the material. That points to a critical gap in monitoring actual educational effectiveness.
Why this matters: Educational AI tools promise personalized learning but risk undermining the critical thinking skills they're meant to develop.
With a soul: Does AI belong in education at all? Hardly anyone is asking this foundational question. And even if it does, how can we measure its impact beyond superficial metrics? Authentic learning isn't about the final product (correct answers); it's about developing the reasoning, thinking, and judgment skills that AI fundamentally lacks. Educational institutions must establish clear boundaries that leverage AI's strengths while protecting the essentially human aspects of education.
🛍️ Amazon's 'Buy for Me' and the Future of Consumer Agency
Amazon's new 'Buy for Me' feature lets users purchase items from other retailers through its app, exemplifying how AI is reshaping consumer behavior.
Why this matters: This convenience comes with tradeoffs. The feature consolidates Amazon's market position while potentially limiting consumer exposure to alternative options. It's designed to maximize ease, not consumer awareness or market competition.
With a soul: AI increasingly mediates our purchasing decisions. Who benefits from these systems? (Rhetorical question.) When convenience trumps all other values, we risk creating commercial environments that prioritize engagement over wellbeing. This is an alarming limitation inherent in profit-driven AI deployment.
Final Thoughts
The recurring theme across these developments is clear: while the ethical problems keep magnifying, AI's current limitations are technical problems that we’re not sure can even be fixed. When systems cannot reason about their own limitations, it falls to humans to establish appropriate boundaries.
Real progress leads to beneficial ends when we recognize that technological advancement and ethical considerations (starting with security and safety) must move together. The most promising AI developments acknowledge current limitations rather than glossing over them with hype.
I'd love to hear your perspective. What AI developments are you watching closely? Which ethical considerations matter most to you?
If this perspective on AI resonated with you, please forward it to a colleague who might benefit from a more nuanced view of technological progress.
Until next time,
—a