Europe’s New AI Rules Could Go Global—Here’s What That Will Mean

A leaked draft of the European Union’s upcoming AI Act has experts discussing where the regulations may fall short

As artificial intelligence applications become more advanced, lawmakers worldwide are grappling with the possibility of unintended consequences: not just potential existential danger to humanity but also the more immediate risks of job losses, discrimination and copyright infringement.

The European Union, representing 450 million citizens across Europe, is a frontrunner in this regulatory race. Last Friday, member nations signed on to the AI Act, which had been agreed upon last December by the Council of the European Union—the institution through which member states’ governments adopt E.U. laws—and the European Parliament. The act is expected to become law this year and would impose sweeping limits on companies whose AI tools are used in Europe, potentially restricting how these tools are developed and used across the globe. Since the act’s announcement, though, its text has changed because of internal political wrangling and lobbying, according to a recently leaked draft. And some experts are still worried about what seems to be left out.

The AI Act is one of many recent pieces of E.U. legislation that tackle tech issues, says Catalina Goanta, an associate professor of private law and technology at Utrecht University in the Netherlands. The act bans emotion-recognition software in workplaces and schools, prohibits racist and discriminatory profiling systems, and sets out a strict ethical framework that companies building AI tools must follow.


To be effective, such regulations have to be applied across industries as a one-size-fits-all solution, Goanta explains—a tall order in the fast-moving tech sector, where new products drop weekly. “The struggle has been in finding a robust balance” between encouraging growth and innovation and implementing safeguards to protect individuals, she says.

On January 22 a draft of the AI Act that was leaked by Luca Bertuzzi, a journalist at the European media network Euractiv, revealed how the act’s wording has evolved as it has wound its way through the E.U.’s bureaucracy. Most notably, it now contains an exemption for open-source AI models—systems with freely available source code and training data. Although these tools—which include Meta’s large language model LLaMA—operate more transparently than “black-box” systems such as OpenAI’s GPT-4, experts note that they are still capable of harm. Other changes include what Aleksandr Tiulkanov, who previously worked on AI at the Council of Europe, has called a “potentially controversial” tweak to the definition of AI systems covered by the regulation.

While these changes might seem small, Goanta says they are significant. “The level of complexity of the changes will require very elaborate scrutiny” in the coming weeks and months, she says.

“The AI Act is, at its heart, an adaptation of E.U. product regulation,” says Michael Veale, an associate professor of law at University College London. Like other E.U. consumer protection laws that regulate toy or food safety, the AI Act defines certain uses—such as medical imaging and facial recognition at border control outposts—as “high risk” and obliges such AI systems to meet special requirements. Developers will need to prove to regulators that they are using relevant, high-quality data and have systems prepared to manage risks, Veale says.

In essence, any application that could do “potential harm to public interests such as health, safety, fundamental rights, democracy, etcetera” can be considered high risk, Goanta says. But some researchers have argued that the language that the act uses to define “high risk” could be interpreted too broadly. Claudio Novelli, who studies digital ethics and law at the University of Bologna in Italy, worries that this may discourage AI companies from participating in the E.U. market—and could stifle innovation. “Our criticism is not directed at the risk-based approach per se but rather at the methodology used for measuring risk,” he says, though he acknowledges that the act’s current text is an improvement from the original.

Outside of high-risk uses, so-called general purpose AI providers—companies overseeing generative AI tools that, like ChatGPT, have many possible applications—will also be subject to additional obligations. They’ll have to regularly prove that their models’ outputs work as intended, as opposed to magnifying biases, and to test their systems for vulnerability to hackers or other bad actors. While recent international summits and declarations have identified these risks of general purpose models, the E.U.’s AI Act goes further, says Connor Dunlop, European public policy lead at the Ada Lovelace Institute. “The AI Act therefore represents the first attempt to go beyond risk identification and toward mitigation of those risks,” he says.

When the AI Act is adopted, a countdown to enforcement will start: Practices prohibited under the act must cease within six months. General purpose AI obligations will come into force within a year. Anyone developing high-risk AI will have 24 months to comply, and some specialist high-risk uses—such as medical devices that include AI—will have 36 months.

How the act will be enforced is not yet clear. The law establishes an E.U. AI Office to support member nations, but its exact role has yet to be determined. Veale predicts that member nations will delegate enforcement to private bodies—which some experts worry won’t be proactive in policing standards. “In practice, these requirements will be elaborated and determined by private standardization bodies, which are not very inclusive and accountable,” he says. “It’s fully self-certification.”

Whatever enforcement mechanisms are put in place “might help provide some societal scrutiny,” Veale adds, “but I suspect actual enforcement of the regime will be low.”

Dunlop is also concerned about how much enforcement will actually happen. He suggests looking to the E.U.’s General Data Protection Regulation (GDPR)—which enshrines privacy rights for Internet users—as a model. “Enforcement of other landmark legislation such as GDPR has been patchy and slow to get up and running but now is picking up,” he says. But “the urgency of the challenge from AI means the E.U. needs to turn urgently to implementation and enforcement.”

Still, AI companies across the world will need to adapt to the E.U. rules if they want their tools to be used in Europe. In the case of the GDPR, many international companies have chosen to operate by E.U. standards globally instead of running multiple versions of their tools across jurisdictions. (This is why so many websites, even outside the E.U., ask visitors to set cookie preferences.)

The new legislation “matters for U.S. companies that want to bring AI products into the E.U., either for public or private use,” Goanta says. “But it will be interesting to see if there will be a ‘Brussels effect’: Will U.S. companies adapt to E.U. rules and increase public interest protections in their operations as a whole?”

U.S. regulators are currently following a “wait-and-see approach,” Novelli says. While the E.U. is happy to highlight how it’s willing to crack down on big tech, the U.S. is more wary of deterring investment and innovation. “It is plausible that the U.S. is monitoring the impact of the E.U.’s AI Act on the AI market and stakeholder reactions,” Novelli says, “potentially aiming to capitalize on any negative feedback and secure a competitive edge.”