Europe Enacts Pioneering AI Regulation Framework

The European Union (EU) has entered a new regulatory phase with a provisional agreement on landmark artificial intelligence (AI) regulations. European Commissioner Thierry Breton hailed the development, stating, “Europe has positioned itself as a pioneer, understanding the importance of its role as a global standard setter. This is, yes, I believe, a historical day.”

The EU is expected to require AI systems such as ChatGPT and other general-purpose AI models to meet strict transparency obligations before operating inside member nations. Companies will be required to provide technical documentation and ensure compliance with EU copyright law. Systems that handle sensitive data must also evaluate and mitigate systemic risks and ensure cybersecurity.

Governmental entities will face strict controls on the use of AI for real-time biometric surveillance in public spaces. The rules will also prohibit the use of AI for cognitive behavioral manipulation and for certain types of data scraping.

Cecilia Bonefeld-Dahl of DigitalEurope criticized the regulations, expressing concerns about their impact on the industry: “We have a deal, but at what cost? We fully supported a risk-based approach based on the uses of AI, not the technology itself, but the last-minute attempt to regulate foundation models has turned this on its head.”

The announcement moves the EU closer to becoming the first major world power to enact comprehensive laws and regulations governing AI.

Compared with the comprehensive AI regulation framework emerging in Europe, the United States presents a far less defined regulatory landscape. The U.S. currently has no equivalent of the EU’s AI Act, nor any sweeping federal legislation specifically governing the use of AI.

In the United States, key federal agencies such as the National Institute of Standards and Technology (NIST), the Federal Trade Commission (FTC), and the Food and Drug Administration (FDA) have begun issuing guidance on AI usage. NIST’s Artificial Intelligence Risk Management Framework offers voluntary guidance for technology companies on managing AI risks and using the technology responsibly.

Despite these developments, a significant gap remains between the U.S. and EU approaches. Europe’s AI Act introduces stringent regulations, including transparency requirements for AI models and prohibitions on certain uses of AI, such as social scoring and manipulative practices. The U.S. approach, by contrast, remains guidance-based and sector-specific rather than a comprehensive legislative framework.