Urgent Actions Required to Mitigate AI Threats

A new report from Gladstone AI, requested by the US State Department, has everyone talking about the dark side of artificial intelligence (AI).

by Faruk Imamovic
© Getty Images/Johannes Simon

After more than a year of interviews with over 200 leading figures in AI and national security, the report's findings are deeply worrying.

The report bluntly states that if we're not careful, AI could turn into a nightmare scenario where it threatens the survival of humanity itself. Though this report isn't the official word from the US government, it's ringing alarm bells in the AI world and beyond.

Jeremie Harris, co-founder and CEO of Gladstone AI, pointed out that while AI has the potential to change our world for the better, it's not without its dangers. He warns that if AI systems become powerful enough, we might not be able to control them anymore.

The Two-Fold Threat

The Gladstone AI report breaks down two big worries about super-smart AI. First off, there's a real fear that these AI systems could be turned into weapons, causing damage like we've never seen before.

Then, there's the scary thought that the people building these AIs could actually lose control of them, with catastrophic consequences for global security. The report makes a bold comparison, saying the situation with AI could become as serious as the threat posed by nuclear weapons.

It talks about how the rush to build smarter AI (what researchers call artificial general intelligence, or AGI) could trigger a dangerous competition between countries, raising the risk of conflict and catastrophe that we simply can't afford.

This is why the report is practically shouting from the rooftops that we need to do something about it now.

Recommended Measures for Mitigation

The report is pushing for some big changes to deal with these AI dangers. It says we should set up a special agency just for AI and put some emergency rules in place.

Another idea is to cap the amount of computing power that can be used to train AI models. This could help slow the rush toward creating super-powerful AIs that we might not be able to control. The people behind the report believe the race to lead in AI is pushing companies to treat safety and security as an afterthought.

They're worried that if we're not careful, the really advanced AI systems could end up in the wrong hands and be used as weapons. This is something we need to stop from happening right away.

OpenAI CEO Sam Altman © Getty Images/Justin Sullivan

Conversations with insiders at leading AI labs like OpenAI, Google DeepMind, and Meta revealed that even though enormous money and effort are going into making AI more capable, safety measures aren't keeping up with how fast everything is moving.

This is a big red flag that we need to pay attention to.

Broader Implications and Industry Reactions

The report has really shaken things up, catching attention all the way up to the US government. White House spokesperson Robyn Patterson pointed out that President Joe Biden is already on it, having signed an executive order on AI that she calls a game-changer.

It's the biggest step any country has taken to both embrace AI's potential and tackle its risks. This shows the US is serious about working with other countries and getting Congress to sort out the tricky parts of dealing with AI and new tech.

Since Time magazine first reported on the document, everyone's been buzzing about AI more than ever. It's got people thinking about how AI can be amazing for us but also pretty scary. Researchers are especially worried about two things: AI being used as a weapon and the chance that we might lose control of AI altogether.

These worries suggest we could be facing dangers as serious as those from nuclear weapons, hinting at a scary AI race that could put everyone's safety on the line.

An Industry at a Crossroads

The tech world's reaction to these AI concerns is kind of a mixed bag.

On one side, everyone's excited about the good stuff AI could bring, like big profits and solutions to tough problems. But at the same time, people are waking up to the fact that we've got to make sure AI is safe and secure. The race to be the best in AI is so intense that companies may be letting safety slide, which could leave the door open to AI technology being put to harmful use.

Even big names in AI, like Geoffrey Hinton—who's pretty much an AI superstar—have been ringing alarm bells. Hinton has put the odds at around 1 in 10 that AI could wipe out humanity in the next few decades.

That's a pretty scary thought, and he's not alone. Lots of smart folks from different fields are saying we need to get serious about keeping AI risks in check.