Restacking: Five Resources On Healthy Thinking About Artificial Intelligence to Wrap Up 2023
To close out my work year, alongside any lingering tasks, I went through some resources on Artificial Intelligence (AI) that I had bookmarked and reflected on the year as a whole.
This year brought big changes and small ones. Notably, I completed my doctorate in Computer Science at NYU, graduating in May.
I picked up writing again, starting with morning pages towards the end of summer. I started scoping opportunities to contribute to an AI risk think tank at ARK Venture Studio, while continuing my research on dialogue systems at NYU. And I had the privilege of giving talks and workshops with teachers on the role of AI in education.
All of this has shown me that the AI space is evolving fast, and yet we seem to be at once overcomplicating and oversimplifying the concept of AI and its promises for human society. I think much of this is bound up with human culture, and I find the growing pull of techno-scientism both fascinating and concerning.
Here are five resources on AI that I’ve loved since the last restack. If you read just one, make it the last: a piece by one of my favorite writers, Erik J. Larson. Erik has been railing for a while now against the oversimplification of the world, human culture, and meaning driven by the cult of techno-scientism. This piece in particular literally stopped me in my tracks.
If you read it, let me know your thoughts in the comments.
What is AI? Legal understandings from the AI Act and the Executive Order
If you haven’t already, I implore you to listen to Luciano Floridi, an Italian philosopher, share his views and legal understandings of the AI Act and the Executive Order.
In the video, Floridi emphasizes the need for thoughtful regulation, likening it to a steering wheel that helps us guide AI responsibly. Importantly, that metaphor places people in the driving seat.
Defining AI
In a recent LinkedIn post, I took a turn at demystifying the definition of AI.
Spoiler alert: there isn’t one definition as such.
However, if I were to give it a go, my short definition would describe AI as software written by humans to accomplish human objectives, with a certain degree of autonomy and adaptivity, whose output will often include a small percentage of errors.
My longer answer would be that AI can never just be one thing. It’s a combination of technologies, strategies, and perspectives on how we achieve tasks with the help of a computer. Its outputs will always include some level of statistical error, simply thanks to its ever-increasing reliance on statistical methods like machine learning (ML).
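To make that point about statistical error concrete, here is a minimal sketch in Python (assuming scikit-learn is available; the synthetic dataset, model choice, and numbers are purely illustrative, not a claim about any particular AI system): even a competently trained classifier on noisy data leaves a small but persistent share of its predictions wrong.

```python
# Minimal illustration: a model learned from noisy data is accurate
# most of the time, but its error rate never quite reaches zero.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic binary task; flip_y=0.05 injects 5% label noise,
# standing in for the messiness of real-world data.
X, y = make_classification(n_samples=1000, n_features=10,
                           flip_y=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.1%}")
# Typically prints something around 90%: useful, but with
# errors in low percentages, as per the definition above.
```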
And while it might be ‘good business,’ some people claim AI may also be part of the solution to the global challenges of climate change, biodiversity loss, and societal inequality, and to mitigating the risks of potential collapse. I believe that’s why it is so important for everyone to understand its potential and be aware of its limitations.
When it comes down to AI and regulation, good regulation will mean good innovation, and good innovation will help mitigate the risk of doing “something very foolish” (Elon Musk at MIT, 2014).
EU AI Act
The official text of the EU AI Act is a staple read if you’re commenting on or analyzing AI policy. It might not be the most festive thing to take home to the family, but it’s well worth your time on a quiet evening.
I’ll be sharing my thoughts on the act in a series of upcoming LinkedIn posts.
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
AI in Education: Current Trends and Concerns
Continuing the conversation from last week on AI and storytelling in our education system, I found three resources that expanded on the topic.
Oxford Report on AI in Education: Where We Are and What Happens Next – a report commenting on the remarkable global growth of AI adoption in schools in 2023
Tools vs. Agents: The Dehumanizing Threat of Impersonal AI Educators by Marc Watkins
AI Enabling Schools with No Teachers – a LinkedIn post by Allie K. Miller on the ‘Alpha School,’ a program that starts every morning with two hours of personalized learning with an AI system.
The concept of classrooms being run by AI is increasingly concerning, particularly when you consider the lack of resources or incentives for current human teachers.
I’ll be exploring this topic in more detail in a two-part series, with the second post coming out in the first week of January.
Tech Kitsch, by Erik J. Larson
I found yesterday's Substack post by Erik J. Larson (the author of The Myth of Artificial Intelligence) particularly enlightening.
For me, these three points stood out:
Erik’s concern over the ways that deep, rich subjects like history, culture, and philosophy are being overshadowed by superficial discussions centered on ‘gadgets’ and technology.
The importance of embracing complex, nuanced narratives over overly simplistic ones. Erik highlighted how easily technological solutions gloss over life's intricate questions instead of engaging them with the thoughtful, in-depth discussion they deserve.
The need for a more profound exploration of the essence of our existence, consciousness, and intelligence, and of how these differ distinctly from the realm of computation and its technological advances.
Over the holidays, I’ll certainly be thinking more on these reflections.
It's essential that we reclaim a culture that values human depth and complexity over tech-dominated discourse, especially when that discourse comes primarily from Silicon Valley billionaires who have, in some cases, styled themselves philosophers, politicians, and theologians thanks to some luck with software engineering and venture building.
Moving into 2024, let's not lose sight of the complexity of our human experience, history, culture, and literature, and the vast array of disciplines that exist to explore and understand different facets of reality.
Finally, a note from me.
Thank you for subscribing to Honest AI this year. When I was writing The Ethics of AI throughout the 2019–2020 academic year, I convinced myself that I had imposter syndrome. That was before I finished my PhD in Computer Science and dialogue systems at NYU, and in all honesty, I was feeling overstretched and exhausted.
It wasn’t until a 1-star Amazon review arrived – calling the book trivial, pretentious, and naive – that I realized I wanted to go back to the book’s original promise and expand on it further.
Even though every other review was 5-star, and many readers reached out to connect, this particular 1-star review called for a more nuanced discussion of the use of AI in fields like defense and facial recognition, and of its potential misuse and threat to future democratic systems.
In that book’s wake, I’ve come to a point where I know there is so much more to explore. Next year, I’m planning to interview influential figures in the AI space, renowned authors, and philosophers, and to use this platform to amplify the voices of those who wouldn’t ordinarily have one.
At the beginning of this year, I set out with a pivotal goal for this newsletter: to explore and address a fundamental question – how can we develop AI thoughtfully and ethically to shape a future of work that nurtures fulfilling, meaningful careers and financial independence, while also safeguarding family time?
In our journey to find answers, one thing remains certain: honesty will always guide our discourse.
I will be officially back on Thursday 4th January, with a follow-up piece to the role of AI in education.
In the meantime, if you’d like to support my work, subscribe to the newsletter, follow me on LinkedIn, and pick up a copy of The Ethics of AI on Amazon. For Christmas, it is available at a heavy discount (Kindle at $1.99 and paperback at $7.99, from tomorrow at 9 AM GMT until Christmas Day, 25th December, at 23:59). Consider this my sales pitch!
Signing off
–a