EU Approves Groundbreaking AI Regulations for Safer Technology

by Faruk Imamovic

On March 13, something big happened: the European Parliament gave the thumbs up to a new set of rules for Artificial Intelligence (AI) – the EU AI Act. It's a big deal because it's one of the first times a group of countries has come together to lay down some ground rules for AI.

They want to make sure AI is used in a way that's safe, respects people's rights, and still lets technology move forward. A lot of lawmakers thought this was a good idea – 523 of them voted for it, with only 46 saying no and 49 abstaining.

This strong support shows that the EU really wants to lead the charge in managing AI, hoping to set an example for the rest of the world. Before the vote, two key members of the European Parliament, Brando Benifei and Dragos Tudorache, talked about how big of a deal this day was.

They've been working on this for a long time and see it as a critical step in making AI something that works for humans, not against them. They're also keen on working with other countries that see things the same way, to make sure AI is kept in check worldwide.

AI Act in Action

Now that the European Parliament has said yes to the AI Act, it's time to dot the i's and cross the t's. This means giving the text a final legal polish and translating it into all of the EU's official languages. A final endorsement is planned for April, with the law expected to be published in the EU's Official Journal by May, EuroNews tells us.

This is a big step in making sure AI plays by the rules across Europe. Starting in November, a ban kicks in for certain AI uses that could be risky, like social scoring systems that rank people based on their behavior or toys that could lead kids into danger.

These bans come into force first because the EU wants to make sure everyone's safe from the get-go. But regulators also get that AI's always changing and finding new ways to fit into our lives. So, they're rolling the rest out step by step.

Some rules will take a bit longer to apply, especially for the tech that's not so worrisome. This way, they're hoping to strike a balance between keeping things safe and not putting the brakes on new, helpful tech.

Categorizing AI Risks

At the heart of the EU's new rules on AI is a simple idea: not all AI is the same.

So, they've split AI into four groups depending on how much of a worry each one is. Think of it as a "how much should we worry?" scale, from "this needs to stop now" to "it's all good." The "this needs to stop now" or "unacceptable risk" group is for AI that could really mess things up or trample over our rights.

We're talking about things like social scoring systems that rank people based on behavior, or intrusive surveillance tech. These are getting a straight-up no. Then there's the "we need to keep a close eye on this" or "high-risk" category. This includes AI used in really important areas like schools, hospitals, police work, and border control.

The EU is laying down strict rules here to make sure these AIs play nice with our basic rights. Next up is the "just so you know" or "limited risk" bunch. These are your chatty AI pals and other tech that needs to be clear about being a robot.

The goal here is to make sure everyone knows when they're dealing with a machine, keeping things transparent and above board. Lastly, we have the "no worries here" or "minimal risk" group. This is for AI that's pretty harmless, like the tech in video games or email spam filters.

Most AI out there falls into this camp, and the EU's cool with them doing their thing, showing they understand there's a lot of good, low-risk AI stuff happening. So, that's the rundown. The EU's approach is about making sure the riskier the AI, the tougher the rules, all while giving a thumbs-up to the tech that makes our lives easier without causing trouble.
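Purely as an illustration, that four-tier scale could be sketched in a few lines of Python. The tier names follow the Act, but the example use cases and the `classify` helper below are made up for this sketch, not anything from the Act's legal text:

```python
# Illustrative sketch of the AI Act's four-tier risk scale.
# Tier names follow the Act; the example use cases and the
# classify() helper are hypothetical, for illustration only.
RISK_TIERS = {
    "unacceptable": ["social scoring", "manipulative toys"],
    "high": ["education", "healthcare", "law enforcement", "border control"],
    "limited": ["chatbots"],          # must disclose they are machines
    "minimal": ["video game AI", "spam filters"],
}

def classify(use_case: str) -> str:
    """Return the risk tier for a hypothetical AI use case."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "minimal"  # most AI falls into the lowest tier

print(classify("spam filters"))    # minimal
print(classify("social scoring"))  # unacceptable
```

The point the sketch makes is the same one the article does: the riskier the tier, the stricter the rules, and the default for everyday AI is the lowest tier.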


The Spotlight on AI Chatbots

The Act also puts chatbots under the microscope. It's all about making sure that as tools like ChatGPT, Grok, and Gemini become a bigger part of our lives, they don't mess with our privacy, steal someone's work, or trick us with fake news.

Now, whether you're a small team working out of a garage or a big name in tech, if you're building these AI brains, you've got to be open about the data your AI learned from. The EU wants to make sure everyone's playing fair, not using someone else's hard work without permission.

And when it comes to stuff like deepfakes, those eerily realistic fake videos or pictures, there's a new rule: you've got to label them clearly so everyone knows they're not real. It's all about keeping the digital playground safe and honest.

Tech Industry's Reaction

The EU's new AI rules have gotten a bit of a mixed bag of reactions from the tech world. Before the rules were set in stone, a bunch of companies were worried. They thought if the EU got too tough on AI, it could put a damper on coming up with new, cool tech.

Back in June 2023, leaders from 160 tech firms even wrote a big group letter to the EU, saying, "Hey, don't go overboard with this, or you might squash our creative vibes." But once the AI Act was a done deal, even some of the big names in tech started nodding along.

IBM, for one, gave the EU a high five for how they handled it. Christina Montgomery, IBM's chief privacy and trust officer, really liked that the EU was being smart about which AI tech needed a closer look and which didn't. She thought it matched up nicely with IBM's own ideas about keeping AI on the up and up.

According to her, this mix of being careful without putting the brakes on new inventions could help make sure AI stays something we can all trust and benefit from.