Hello there! I have been silent for a couple of weeks because something was brewing, taking me away from Honest AI. I’m very excited to reveal it to you now!
TL;DR: I'm launching a project to develop a quantitative framework for modeling and assessing AI risks, focusing first on catastrophic outcomes. The project combines actuarial science and AI expertise to create a robust methodology for understanding AI risks. I'm starting with a 4-week prototype focused on the risk of AI-powered computational propaganda influencing elections.
As AI adoption by businesses, governments, and the general public advances (too fast), the need to understand and quantify its potential risks becomes increasingly crucial. That's why I'm excited to announce my latest project: a quantitative framework for modeling and assessing AI risks.
My Journey
Throughout my journey, I've been fortunate to explore diverse fields and gather unique perspectives. After countless conversations with mentors and experts, and through hands-on experience in various projects, I've come to realize the value of my unconventional path. It's led me to develop a rare blend of expertise at the intersection of disciplines that often exist in separate worlds.
My career began in the world of actuarial science and insurance – a realm known for its analytical rigor and cautious approach to risk. This foundation in data-driven decision-making has proven invaluable. Yet life had different plans for me, pushing me beyond my comfort zone: first into the exciting world of machine learning and data science, then into innovation and entrepreneurship.
From launching startups to creating products from scratch, from embarking on a Ph.D. in my late 30s to writing a book on ethical approaches to AI (despite my initial struggles with writing and English as a second language), each step has added a unique layer to my perspective. These experiences have taught me to bridge the gap between meticulous analysis and bold innovation, between cautious planning and calculated risk-taking.
This journey hasn't always been easy, and I've had my fair share of doubts and setbacks. But it's precisely this diverse background that allows me to approach complex problems from multiple angles, bringing together insights from fields that rarely intersect (insurance and entrepreneurship, risk management and AI/tech innovation).
Why This Project Matters
I shared my path not to boast, but to illustrate the power of embracing diverse experiences and the unexpected turns in our careers. It's a reminder that our unique paths, however winding they may be, can lead us to valuable insights and opportunities to make a difference.
The intersection of actuarial science, AI, product management, research, venture-building, and ethics offers a unique perspective on AI safety. By leveraging techniques from actuarial science and deep knowledge of machine learning, I want to create a robust methodology for understanding and quantifying AI risks across various industries and society as a whole.
Project Overview
The framework will be demonstrated by analyzing and quantifying the risks of a few concrete AI use cases, such as computational propaganda and AI-powered medical diagnostics. It can then be expanded to cover all risks identified in notable AI risk databases, providing valuable insights for risk management, mitigation strategies, and policy-making.
Key Deliverables:
Interactive notebook or Streamlit app
Technical paper or comprehensive blog post
Risk database with rankings
Starting Small: A 4-Week MVP
To lay the foundation for this ambitious project, I'm focusing on creating a minimum viable product (MVP) over the next four weeks. This MVP will center on one specific risk that's particularly relevant in 2024:
The use of AI-powered computational propaganda and advanced AI assistants to manipulate public opinion, spread misinformation, and undermine democratic processes through personalized influence campaigns.
Breaking Down the Risk
To quantify this risk, we need to break it down into measurable components. Here's an initial brainstorm of factors to consider (a first numerical sketch follows the list):
Frequency: How often are there opportunities to influence voter opinions? (e.g., number of elections, referendums, AI-powered influence campaigns per election cycle)
Success Rate: What percentage of exposed voters change their voting intention, and what's the impact on election results?
Reach: What percentage of the voting population is exposed to AI-powered content?
Impact: Is there a way to quantify the cost of losing democratic integrity? (e.g., impact on GDP per capita)
Time to Effect: How long does it take for the impact to manifest? (e.g., loss of government → effect on GDP per capita)
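To make this concrete, here is a minimal frequency-severity sketch in Python, the kind of structure actuaries use to combine factors like these into an expected loss. Every number below (elections per cycle, campaign probability, reach, sway rate, swing threshold, GDP impact) is a placeholder assumption for illustration, not an estimate:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# All parameter values below are illustrative placeholders, not estimates.
N_SIM = 100_000          # Monte Carlo trials
ELECTIONS_PER_CYCLE = 3  # Frequency: elections/referendums per cycle
CAMPAIGN_PROB = 0.5      # Frequency: P(an AI influence campaign targets a given election)
REACH = 0.30             # Reach: share of voters exposed to AI-powered content
SWAY_RATE = 0.02         # Success rate: share of exposed voters who change intention
SWING_THRESHOLD = 0.015  # Impact: vote-share shift needed to flip a close outcome
GDP_LOSS = 50e9          # Impact: assumed GDP cost of a flipped election (USD)

losses = np.zeros(N_SIM)
for i in range(N_SIM):
    # Frequency: how many elections in this cycle are targeted by a campaign
    n_targeted = rng.binomial(ELECTIONS_PER_CYCLE, CAMPAIGN_PROB)
    for _ in range(n_targeted):
        # Reach x success rate gives the mean vote-share shift, plus noise
        shift = rng.normal(REACH * SWAY_RATE, 0.005)
        if shift > SWING_THRESHOLD:  # the campaign flips the outcome
            losses[i] += GDP_LOSS

print(f"Expected loss per cycle: ${losses.mean():,.0f}")
print(f"P(at least one flipped outcome): {(losses > 0).mean():.1%}")
```

The structure matters more than the numbers: once each component has a distribution grounded in real data, the same simulation yields a defensible expected loss and tail probabilities. Time to effect could enter as a discount factor on the severity term; it's omitted here for simplicity.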
🫵 I Need You
This project is ambitious and requires collaboration. If you're passionate about AI safety and want to contribute, here's how you can help:
Share your expertise: Do you have relevant research or insights to share?
Suggest improvements: Have I missed any crucial factors in quantifying this risk?
Collaborate: Are you interested in working together on this project?
Let's work together to create a more robust understanding of AI risks and contribute to the critical field of AI safety.
I'll be documenting this journey on my Substack publication, Honest AI.
Edit April 2025: This project has since been deprecated, as highly qualified teams are already working on this problem and making good progress.