Restacking: An Interesting Crossroads of AI Culture, Generated Avatars, and Intentionality
With AI, as with technology in general, it's easy to get swept away by the tide of progress and lose sight of the foundational questions that should guide the development and adoption of these tools.
Today, I want to share three resources that captured my attention, each touching on a different question: the culture surrounding AI, the ethics of synthetic avatars, and the introspective inquiry into why we are pursuing AI in the first place.
Cultivating a Counterculture in AI
Erik Larson's insights in his recent Substack post, "AI's Forgotten Counterculture," resonate deeply with the ethos of Honest AI. Larson issues a call to action, urging us not to take the mainstream's overly positivistic messaging about AI at face value (and, I would add, not the overly pessimistic messaging either). He emphasizes that singular, ego-driven narratives fall short of effecting real change; instead, he argues, there is a pressing need to organize around a culture that genuinely benefits humanity.
This perspective aligns with my conviction that much of the wishful thinking surrounding AI stems from a culture that, whether overtly or subtly, positions itself against the essence of the human person. By advocating for a counterculture that values good over grandiosity, we can begin to address the root of the issue.
If you haven’t yet, I highly recommend subscribing to Colligo and considering a paid subscription.
Ethical Questions of Synthetic Selves
The conversation around synthetic avatars, as explored by Marc Watkins in "Synthetic Selves: The Ethics of Real and Generated AI Avatars," strikes a chord with my research at NYU (latest paper and demo here). While my work avoids generative AI in favor of authenticity through real video recordings, Watkins' interrogation of the ethics of generated avatars remains pertinent.
His exploration raises the question: what constitutes a better approach? This debate is not merely academic; it touches on the very essence of authenticity and ethics in the digital age, a conversation we already hear a lot about and one that will only grow louder throughout 2024.
Reflecting on the Purpose and Potential of AI
Lastly, an often overlooked but critical dimension of AI development is the intention behind it. A provocative reminder comes from a recent post by “The Absent-Minded Professor,” highlighting the importance of distinguishing between the aspiration for Artificial General Intelligence (AGI) and the reality of current systems, which at best emulate aspects of intelligence. The discourse around AGI reveals underlying beliefs about learning, reasoning, and intelligence, inviting us to ponder the true reasons behind our quest to build AI.
You’ll notice that I used the word “emulate.” This choice reveals my position on whether such a technology is truly achievable in its fullest sense. But the real point is this: whether you believe AGI is possible says something about what you think it means to learn, reason, or act autonomously.
The most striking revelation is the acknowledgment that the pursuit of AGI is as much about belief as it is about science. It forces us to confront the implications of attempting to bypass human agency, urging us to start with the fundamental question: Why do we want to build AI?
As we navigate the complexities posed by the development of AI technologies, we must do so with a clear understanding of the cultural, ethical, and philosophical underpinnings of our endeavors. By engaging in these critical conversations, we can ensure that the future of AI is not only innovative but also authentic and, most importantly, human-centered.
Stay tuned for future posts where I will delve deeper into these topics, offering insights and, perhaps, answers to the questions we must all ask ourselves at this pivotal moment in our technological evolution.