I promised you a (very) short summary of what I learned on Day 2 of the Australia AI Safety Forum.
There's a story from history that struck me deeply. Thanks to Johanna Weaver from the Tech Policy Design Centre, who shared it with the audience to set the scene on the state of international AI safety governance before workshopping Australia's role.
Over 150 years ago, Britain passed a law that nearly stopped the progress of cars.
It made me wonder—are we making the same mistakes with artificial intelligence today?
PS—Opinions are my own!
The Red Flag Act: A Lesson from the Past
Back in 1865, the British government passed the Locomotive Act, also known as the Red Flag Act. This law required any self-propelled vehicle to have a person walking at least 60 yards ahead carrying a red flag to warn others. Can you imagine trying to drive while someone walks far ahead with a flag? It's almost funny, but it seriously held back innovation.
The speaker used this tale to warn that regulators will get things wrong at the beginning, and that maybe 100 years from now, people will laugh about early AI laws. But if we look at how regulation in the auto industry evolved, we did eventually get to safer driving.
While I agree with that message, I dug deeper into the story, and I think it can teach us something more nuanced.
Why did the Red Flag Act happen?
Powerful people who owned horses, carriages, and trains wanted to slow down new inventions that could hurt their businesses. By making it hard to use cars, they protected themselves but stopped progress for everyone else. This law stayed in place for over 30 years! It wasn't until 1896 that it was finally changed, and people celebrated by tearing up red flags and driving from London to Brighton.
Are We Doing the Same with AI?
At the forum, we discussed how today's fears and regulations might be slowing down AI, just like the Red Flag Act did with cars.
Big tech and large AI labs may well see regulation as a way to defend their businesses (look how quickly hundreds of LLMs were built after GPT-3!) or as a marketing tool to inflate claims about the role of these technologies in business and society.
Are we letting worries and powerful interests hold back something that could help us all? If we're not careful, we might repeat history by choking innovation when we should be welcoming it.
That said, I do agree that some form of regulation would benefit the trajectory of AI development. Right now, it feels a bit like the Wild West.
Australia's Chances
Australia doesn't have to make the same mistakes.
This fabulous country, which I can already call my new home (I moved here in May this year), has a history of making a real difference.
Not too long ago, Australia helped change international cyber laws when others said it couldn't be done. As a “middle power,” Australia has a significant mediating voice between the English-speaking nations and the Far East.
So, we should definitely work nationally and internationally on technical and social AI safety. However…
Developing frontier AI in-house
In my opinion, Australia should fund the development of its own cutting-edge AI models, just like the UAE did with their Technology Innovation Institute (TII). This isn't just about keeping up—it's about economic sovereignty. I strongly believe that technologies like AI and quantum computing are far too important to let laziness or lack of vision lead us towards dependence on foreign powers. Countries investing heavily in them ensure they can control their future and protect their interests.
Why Economic Sovereignty Matters
When we rely on others for important technology, we give up some control over our own destiny. Developing our own AI capabilities means we can make decisions that are best for Australia. It helps our economy grow and keeps jobs and knowledge here at home. Plus, if you want to be serious about AI safety, you need to put your hands on the whole machine learning lifecycle.
Like Dan Murfet described on Day 1 of the Forum, training an AI is an industrial process.
As with other industrial processes, political disparities and societal issues arise from which pieces of the supply chain one decides to outsource.
Helping Young Minds Shine
In the truly inspiring story of the creation of the UK's AI Safety Institute (AISI), Nitarshan Rajkumar explained how the UK accomplished this quickly, including fast-tracking visa applications for young, talented individuals eager to work on frontier AI safety.
They didn't just focus on seasoned experts. They also gave chances to bright newcomers eager to make a difference. Moreover, in a unique move for government operations, they adopted a 'go-fast, ask-for-forgiveness' approach to build the AISI. This resulted in assembling an impressive talent pool and producing valuable work in a short time—all from a public-sector initiative.
Australia has so many smart young people who want to get involved in AI. Let's give them the opportunities they need!
The Power of Computers and Global Politics
Computers are the engines that run AI, in particular GPUs (Graphics Processing Units). Countries are now competing over who has the most computing power. For example, the U.S. has set rules that limit China's access to important computer chips, affecting their AI development. So, where does Australia fit in all this? Do we need to think about building up our own computing abilities to stay in the game?
On a more critical note, GPUs are the main currency for building AI these days. But some recent news challenges this narrative.
Let's Learn from the Past and Shape the Future
The story of the Red Flag Act teaches us that stopping new ideas can have long-lasting effects. We don't want to look back and realise we missed our chance to lead. Australia has the opportunity to influence AI in a way that benefits everyone. But let’s worry about regulation alongside building technologies, capabilities, and competitive advantage. Responsibly.
And that’s a wrap!
♻️ If you find this helpful, share, restack, drop a comment.
Let’s work together to invest in our future, empower our young talent, and align AI with human values.
It's time to move forward—without the red flags 🟥.
The Red Flag Act story is a problematic start (it reminds me of the claim that in the time of Columbus people thought the earth was flat and Columbus wanted to prove otherwise — which is a myth; see "The Late Birth of a Flat Earth").
When the Locomotive Act was created (1861) and amended (1865), the machines in question were huge, 14-ton, 3 m wide vehicles travelling on roads that were mostly meant for pedestrians. While it is true that competitors will not shy away from using the law to try to stop competition, it is far from certain (and probably a myth) that this was why the act was passed. There were reasonable safety and other concerns at the time.
The Red Flag Act probably didn't stop the development of cars. And in an interesting twist, it was mainly pressure from the budding automobile industry that got the act changed, and it kept being amended over the years as circumstances changed.
But it's a very common bugbear in discussions like this, especially for those interests that want the development of their commercial ventures to be as unconstrained as possible. Conjuring the absurd vision of a car with a man walking 20 m in front waving a red flag is demagogically very effective.