
Welcome to this week’s Restack! For anyone new to Honest AI, this is where I compile what I’ve been reading, listening to, and watching on the topic of AI and ethics.
Last week, I shared my involvement in a fantastic new project as a venture builder, supporting the launch and fundraising of a pioneering AI security startup. It’s in stealth mode for now, but where possible I’ll share what I’m learning about this evolving space here and on LinkedIn.
Your Restack
AI safety challenges
The dangers of fake expertise
Protecting our children
AI safety challenges
A wonderful Substack post by Jack that covers multiple topics, including a new benchmark called MMT-Bench, which evaluates language models on complex visual reasoning tasks, and a system that turns photographs into interactive 3D worlds. What caught my eye is that Jack details a comprehensive review of AI safety challenges, outlining 213 questions across 18 distinct areas that need addressing to ensure the safety and reliability of language models. These cover a wide range of issues, from technical aspects to socio-technical challenges.
Read the full article:
Minotaur Mode: the dangers of fake expertise
Brake uses ‘Minotaur Mode’ as a smart metaphor for how AI, if not carefully managed, can create a deceptive sense of expertise. He discusses the balance between enhancing our capabilities with AI and the risk of over-reliance, which could lead to a loss of individual agency and genuine understanding. He stresses the importance of staying in control and knowing when to integrate AI into one’s work, rather than merely extending an illusion of expertise without substance.
Key points:
AI users may appear to have expertise while actually lacking substantial knowledge, ultimately ceding agency to the AI
“If we look at what is actually happening with the way generative AI is being used, it seems to me that it is democratizing the appearance of expertise, not expertise itself. What it actually democratizes is more like raw power than expertise. Power is about the ability to act, whereas expertise in its truest form is power combined with wisdom. Which one we’re shooting for matters.”
Over-relying on AI can lead to superficial understanding and capability, particularly for students and professionals in the early stages of their careers. This makes me think of airline pilots, who are trained to fly in all conditions; to keep that training fresh, they are required to perform certain maneuvers (like takeoff and landing) manually, even when an autopilot system could already complete them automatically.
Read the full article:
https://substack.com/inbox/post/144158168
Protecting our children and calling it quits with smartphones
As a father, I’ve advocated for a while now that we need to actively consider the impact that technology is having on our children.
Writing on the brilliant Substack After Babel, Daisy Greenwell shares her story of founding Smartphone Free Childhood in the UK alongside Clare Fernyhough. It started as a WhatsApp group and quickly grew into a nationwide movement as thousands of UK parents joined.
These efforts led to the formation of local groups and a broader campaign for regulatory changes to protect children from the harmful aspects of digital technology. The movement underscores a strong parental desire for more tech-free environments for their children.
This all builds on the body of scientific work published by Jon Haidt, an American social psychologist and moral philosopher. He introduces the article and describes the birth of a grassroots parent movement, one that has taken what was “a fringe discussion at the kitchen table” and carried it “from the kitchen to the classroom to the cabinet–in a matter of months”.
Read the full article: