Welcome to this week’s Restack! For anyone new to Honest AI, this is a roundup of the articles I’m enjoying reading, listening to, and watching on the topic of AI, ethics, and the future of work.
Today’s Restack explores the role of AI in warfare and security. First, we look at Israel’s unsettling use of the AI system “Lavender” in Gaza, which raises ethical concerns about its impact on civilian casualties. Next, we shift to Australia’s pursuit of the AI arms race despite global calls for regulation. We also examine OpenAI’s troubling security lapses and its dominance of the AI industry, underscoring the need for greater transparency and accountability in AI development. Here goes…
Your Restack
‘Lavender’: the AI machine directing Israel’s bombing in Gaza
Brace yourself for a chilling read. Israel's military has been using an AI system called "Lavender" to pinpoint targets for attacks in Gaza. This high-tech assassin has tagged tens of thousands of Palestinians as suspected militants with minimal human checks. The strikes often happen at night, tragically claiming the lives of many civilians, including women and children. Shockingly, the Israeli army has relaxed its rules on civilian casualties, a stark departure from previous norms.
Australia pushes ahead in the AI arms race
Some worrying news from down under, my new home! There is a lot to write about AI in warfare, but I am still forming my ideas on automation in war before tackling it in full.
The ongoing conflicts in Ukraine and Gaza are serving as testing grounds for new autonomous weapons and AI-powered targeting systems. This has raised concerns about the increasing use of autonomous capabilities in warfare. Australia is charging ahead in the AI arms race, developing cutting-edge autonomous military tech through partnerships with defense, industry, and universities.
However, the Australian government is sidestepping calls for new international laws to regulate these weapons; instead, it is focusing on applying existing laws. There's a growing global movement to regulate autonomous weapons, with many countries calling for a new legally binding instrument. The urgency for new, legally binding rules has never been more apparent, as ethical and security concerns mount.
OpenAI, an insecure AI lab
Here's a scoop that'll make you think twice about your data security. OpenAI has had some major security slip-ups, including a significant hack in early 2023 that they kept under wraps and more recent blunders like storing user chats in plain text on Macs.
These issues raise serious questions about OpenAI's ability to safeguard sensitive information. With Microsoft's backing, its dominance of the AI industry may be stifling innovation and competition in the U.S., potentially compromising national security interests and inadvertently aiding China's AI ambitions. Many are switching to Claude for better security and a more refined tone. I did too, and I'd advise the same!
A digital ledger of AI incidents
This one's for the accountability advocates. The digital ledger of AI incidents is a comprehensive tracker of AI-related mishaps and ethical breaches, aiming to foster transparency and responsibility in AI development and deployment. By documenting these incidents, it provides a valuable resource for understanding and mitigating AI's risks and helps us learn from past mistakes.
Each of these articles emphasizes the critical need for ethical guidelines and regulatory frameworks in the fast-evolving AI landscape.