The way the OpenAI drama unfolded over the last few days would have been more at home in a reality TV show. And good-quality investigative reporting, anything that went deeper than echoing social media and tabloid-style gossip, was genuinely hard to find.
Below, I have compiled some articles and resources that I found useful for understanding the story behind the scenes, the motives, and the potential narratives.
Top 5 reasons why OpenAI was probably never really worth $86 billion
Gary Marcus might be a tad fixated on proving his point, but his insights on OpenAI's value are not to be missed. I believe the economics of OpenAI played a pivotal role in the story. As a business owner, an angel investor, and a friend of and advisor to many founders, I can relate to the pressure of living up to insane valuations and to the conflict between profit and mission. Check out Marcus’ thought-provoking perspective on his Substack here:
Sentient AI is a Phlegm Theory
Erik Larson is another must-follow. If you haven't subscribed to his Colligo newsletter or read his enlightening book 'The Myth of Artificial Intelligence', I highly recommend both. He consistently offers valuable insights on the overhyped aspects of AI. Another critical issue at OpenAI is the belief in achieving AGI; as Larson often explains in his writing, that pursuit is a cult, not a scientific endeavor. Here’s a link to his intelligent take on so-called 'sentient' AI.
What everyone should understand about ChatGPT and friends
For a clear understanding of how ChatGPT operates, Gerben Wierda's talk is a treasure trove. It’s an eye-opener about the many misconceptions circulating about AI. Stay tuned for my LinkedIn post next week, where I'll break down Wierda's insights into an easy-to-understand format.
Last week’s secretive AI security summit
While I believe it was mostly a clash of power and egos, one reason behind OpenAI's troubles seems to be the difficult (if not impossible) tradeoff between commercialization and growth on one side and the security of AI applications as widespread (and as unpredictable in their behavior) as ChatGPT on the other. Sharon Goldman recently covered an intriguing story about a secretive AI security summit organized by the US government. You can find her insightful article here:
Conflicts about a more capable model
Again on the conflict between profit and security: yesterday, Reuters dropped an exclusive bombshell about a hush-hush project at OpenAI, cryptically named Q* (“Q Star”). Rumored to be a model of unprecedented capability, the details remain scarce, making it hard to separate fact from speculation. What is clear, however, is the unabating march toward more capable models. That march may be at odds with security, or the security concerns themselves may stem from the alluring and contentious concept of AGI.
The cult culture in the tech world
Here's something that struck a chord: a LinkedIn post that nails a stark reality in the tech universe. It paints a picture of founders and CEOs being idolized, almost deified. Having been in the trenches at Tesla, I've witnessed this phenomenon firsthand. The problem isn't just putting tech moguls on a pedestal; it's that this cult-like adoration takes a mental toll on the people around them and stifles their professional growth. We must ask ourselves: at what cost does this 'rockstar' worship come?
My 2 cents
Now, here’s my take on the whole saga. It's a complex web of intersecting factors:
The hype and overstatements surrounding AI.
The inflated belief that AGI is the ultimate economic jackpot.
The fear-mongering around AI safety.
But cutting through all this, isn’t it just good old-fashioned greed at play? It seems that some folks at OpenAI, upon realizing the wealth they could have amassed in a traditional VC-backed startup, started pushing for more. Meanwhile, their leadership, not exactly transparently, was looking to cash in fast, before the potential onset of another AI winter. It's not every day that you work on a product that skyrockets to a multi-billion-dollar valuation in mere months. You can’t blame them for kicking themselves over the nonprofit, capped-profit structure they chose instead of a conventional for-profit one, right?
Thank you. Worthwhile as well: Henry Farrell, What OpenAI shares with Scientology (https://www.programmablemutter.com/p/look-at-scientology-to-understand)
Nice article Alberto. Along these lines, you might wish to check out Bishop, J.M. (2021), "Artificial Intelligence is stupid .." <https://lnkd.in/e-MxHXYq>.
FYI, I have debated at length with Prof. David Chalmers, specifically on my reductio ad absurdum demonstrating that computation cannot generate consciousness *unless* one subscribes to a particularly vicious form of panpsychism. For an early rejoinder to David, see Bishop, J.M. (2002), "Counterfactuals cannot count: a rejoinder to David Chalmers", Consciousness and Cognition 11(4), pp. 642-652.
.. or better still, check out my later paper, Bishop, J.M. (2009), "Why Computers Can't Feel Pain", Minds and Machines 19(4), pp. 507-516. <https://tinyurl.com/3j28euxx>