#7 Restacking ☕
Welcome to this week’s Restack! For anyone new to Honest AI, this is where I compile the articles, podcasts, and videos I’ve been enjoying on the topic of AI, ethics, and the future of work.
Today’s restack explores the role of fantasy and storytelling: the ways that the rise of technologies like ChatGPT has ended the fantasy of automated antagonists, how storytelling and performance are enabling an unequal society, and why we have to start channeling our resources into using AI more responsibly.
It’s no secret that I’ve admired Erik Larson’s work for years, and I am delighted to announce that he may soon be featured on Honest AI… what’s more, I’ll be joining the brilliant Jennifer Pierce as a guest on her podcast, Singular XQ.
Your Restack
The end of Sci-Fi AI?
Musk’s storytelling formula
The flawed connections between investment and ML capabilities
The end of sci-fi AI
If you’ve read a previous restack on Honest AI, you’ll know that I am a huge fan of Erik Larson’s work and his substack, Colligo.
This article explores how the recent success of AI technologies like ChatGPT may have effectively ended the sci-fi dream of creating true artificial general intelligence (AGI) or conscious machines akin to HAL 9000: the highly advanced, self-aware, and unpredictable antagonist of Arthur C. Clarke’s Space Odyssey series.
This point matters because it highlights the gap between the hype and promises surrounding AI and the reality that current AI systems, while impressive in narrow domains or ‘digital envelopes’, are still far from replicating human-level cognition or consciousness.
Current AI systems are, on the whole, energy-intensive, invasive, and profit-driven. Because they are owned by large corporations rather than a scientific, non-political body, their focus will likely always be commercial interests, ‘winning the category,’ and exploiting data rather than advancing our understanding of cognition or improving our lives.
Larson shares that disappointment, and a sense of tragedy, that the immense resources we are collectively pouring into AI are not leading to more uplifting or beneficial outcomes for humanity. He is understandably disillusioned with the current state of AI, which he sees as a missed opportunity to invest in technologies that might have had a more positive impact on society and human wellbeing.
Read the full article:
Elon Musk and the Machine That Goes Ping
Jennifer Pierce, the Founder of Singular XQ, a non-profit dedicated to driving a more sustainable and equitable ecosystem through open-source research, development, and education, wrote this piece on storytelling and the curious case of Elon Musk.
Pierce recounts how Elon Musk called an empty server box the ‘Machine that Goes Ping’ to impress investors with his early-stage startup, Zip2.
The reality of the startup world is that founders most often have to create a fantasy around their product and its capabilities in order to attract investors and resources, even if it borders on deception or false promises. Despite Musk’s later success, the ‘Machine that Goes Ping’ anecdote symbolizes a broader issue of inequity and a ‘Hunger Games’ dynamic. Scientists, tech founders, engineers, and more must ‘perform’ to secure funding from those with wealth and power.
Why is that critical? Because the ritual of securing investment reflects a stark power imbalance: who you know is usually far more important than what you know. What’s more, the landscape for women seeking VC funding is getting even harder. In 2024, men are projected to receive, on average, 5.9x the early-stage VC funding that women will.
Pierce also draws a parallel between the ‘Machine that Goes Ping’ story and the satire in Monty Python’s ‘The Meaning of Life’, where the focus is on impressing bureaucrats while turning a blind eye to human suffering. Again, this is crucial. If we are to become enamored with technological advancement and surface-level entrepreneurial success – all while neglecting the world that is actually around us – what road does that put us on?
Certainly not one that benefits marginalized or vulnerable populations.
Read the article: https://www.linkedin.com/pulse/elon-musk-machine-goes-ping-jennifer-pierce-phd-dvdwc/
Noam Chomsky on the false promise of ChatGPT
Noam Chomsky, the renowned linguist and philosopher, is also one of the leading critics of U.S. foreign policy and contemporary capitalism, and a longtime advocate for world peace. In this incredible piece, written alongside Ian Roberts, a fellow professor of linguistics, and Jeffrey Watumull, director of artificial intelligence at a science and technology company, these heavyweights examine the fundamental flaws of machine learning models like ChatGPT.
Their key points include:
Machine learning (ML) models like ChatGPT are fundamentally limited compared to the human mind. They cannot achieve true AGI, and we must be cautious about exaggerating the current capabilities of ML models. There are profound differences between ML and the human mind, chief among them the mind’s ability to create explanations and develop complex theories from limited data.
The human mind is vastly different and more efficient, acquiring language and knowledge through an innate operating system rather than brute statistical pattern matching. This point is crucial for understanding the limitations of ML models, which cannot replicate the mind’s ability to generate complex ideas and theories. ML relies instead on data crunching and pattern recognition.
The focus and investment in ML models like ChatGPT are disproportionate to their actual capabilities and significance compared to the human mind. This point highlights the potential misdirection of resources and attention toward what the authors see as a relatively trivial pursuit, especially when contrasted with the remarkable cognitive abilities of the human mind.
Read the full article:
https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html
Recommend Honest AI