Soulcode #4: "Groundbreaking" News vs. GroundED Reality
The last few weeks in AI (from someone with a soul)
Want AI insights without the hype? (I mean it)
In a world of algorithms and automation, we need a human compass. This newsletter brings you AI and tech news through the lens of ethics and humanity, where code meets consciousness.
Every two weeks(-ish), we'll explore AI's most worthwhile developments and their impact on our shared future. No jargon, no hype – just clear insights with a conscience.

Hello readers,
Recent headlines paint a dramatic picture of AI's trajectory: Chinese startups disrupting markets, chatbots distorting news, and sophisticated scams targeting millions.
But let's take a breath and remember something real: we have agency in how we shape this technology's future.
While markets reacted to DeepSeek's surge and publishers battle Cohere over copyright, these stories all push the AI-race narrative, with surging nationalism as cheerleader.
Let's try to read them with some practical wisdom:
The Market “Disruption” Reality Check
DeepSeek's rapid rise has sent ripples through tech markets, affecting giants like Nvidia and Microsoft. I think competition is good. Yes, it frightens people worried about world dominance, and about the poorly hidden agenda of treating AI as the ultimate weapon of mass thought control.
But back in the reality of leaders and everyday professionals, it's helpful to remember that no single player will dominate the AI landscape forever. Instead, focus on building sustainable, ethical AI strategies that serve your needs. Of course, inform yourself well about these models' provenance and the biases inherited from their training data. And remember that there are lesser-known, often better options that look less powerful and don't make the headlines, but solve the vast majority of practical and business needs!
Recent examples:
The timeless spaCy library.
Models from the Allen Institute.
Nomic: check out GPT4All, Atlas, and their latest announcement.
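As one concrete illustration of how lightweight these options can be: spaCy runs entirely on your own machine, and even a blank pipeline (no trained model downloaded) gives you fast, rule-based tokenization. A minimal sketch, assuming spaCy is installed via `pip install spacy`; the sample sentence is just an example:

```python
import spacy

# Build a blank English pipeline: rule-based tokenization only,
# no trained model download required.
nlp = spacy.blank("en")

doc = nlp("DeepSeek's rise sent ripples through tech markets.")
tokens = [token.text for token in doc]
print(tokens)
```

From there you can add components (tagger, entity ruler, etc.) only as your use case actually demands them, rather than reaching for a frontier model by default.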
Truth in the Age of AI
A recent BBC study revealing AI chatbots' tendency to distort news isn't really unexpected. It is, however, concerning and instructive: over half of the AI-generated news summaries contained significant inaccuracies. By the way, read this thoughtful analysis of what happens when you think AI summarises.
This reinforces a critical principle for organisations implementing AI: AI tools are assistants, not authorities. Build verification processes and maintain human oversight in critical communications.
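One lightweight way to operationalise "assistants, not authorities" is a human-review gate: nothing AI-generated counts as publishable until a person has approved it. A minimal, illustrative sketch (the class and method names are my own invention, not from any particular product):

```python
from dataclasses import dataclass


@dataclass
class Summary:
    """An AI-generated summary awaiting human sign-off."""
    text: str
    source: str
    verified: bool = False


class ReviewQueue:
    """Routes every AI-generated summary through human review."""

    def __init__(self):
        self._pending = []

    def submit(self, summary: Summary):
        # Nothing goes out unverified: every summary enters the queue.
        self._pending.append(summary)

    def approve(self, summary: Summary):
        # A human reviewer explicitly marks the summary as checked.
        summary.verified = True
        self._pending.remove(summary)

    def publishable(self, summary: Summary) -> bool:
        return summary.verified
```

The point of the sketch is the shape of the process, not the code: the default state is "unverified", and only an explicit human action can flip it.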
The Copyright Conundrum
The lawsuit against Cohere by major publishers highlights a broader issue: the need for clear frameworks around AI training data.
Smart organizations are proactively developing ethical guidelines for AI development and deployment rather than waiting for legal precedent.
Oh, and the wait is over: Thomson Reuters recently won a lawsuit on precisely this question of fair use of copyrighted material in AI training.
Security vs. Ethics: A False Choice
The UK's AI Security Institute's pivot toward security raises an important question: must we choose between security and ethical considerations like alignment, bias and free speech?
The answer is no. Effective AI governance addresses both, and I hope organisations that implement good, responsible AI governance (not just a committee and a blog post) will prove this.
Learning World Models: Really?
For those interested in the technical side, Melanie Mitchell's latest analysis of LLMs and world models offers fascinating insights into AI's current limitations and potential.
It's a reminder that we should critically assess any claim we hear these days, even when it comes from brilliant AI researchers and builders like Ilya Sutskever. As Prof. Mitchell shows, the science of understanding what these models 'learn' and do is still in its infancy. Please do yourself a favour and follow her work: Mitchell is a remarkable scientist and educator who performs a generous public service with her writing.
Practical Takeaways:
Implement robust verification systems for AI-generated content
Develop clear ethical guidelines for AI deployment
Maintain human oversight in critical decision-making processes
Build security measures that don't compromise ethical considerations
Stay up to date with the science of frontier AI
Don't chase every AI headline that drops. Build a sustainable AI strategy instead.
Looking Forward
While the headlines focus on market disruptions and technological leaps, your priority should be solving real problems ethically. This means asking critical questions about transparency, accountability, and impact while maintaining a balanced approach to AI implementation.
The path forward is thoughtful AI implementation that aligns with your organization's values and objectives.
Want to dive deeper into building ethical AI frameworks that work? Reply to this email with your biggest AI implementation challenge. Let's solve it together.
Until next time,
-a