European Union agrees on landmark AI regulation deal
The European Union has reached a provisional deal on landmark rules governing artificial intelligence (AI), making it the first major world power to enact laws for AI.
The agreement, known as the AI Act, sets a new global benchmark for countries seeking to harness the potential benefits of AI while protecting against its possible risks. The law will affect not only major AI developers like Google and OpenAI but also European start-ups trying to catch up to American companies.
Key aspects of the AI Act include:
Categories of AI: The law sorts AI systems into tiers of risk, ranging from “unacceptable” and high-risk uses down to medium- and low-risk forms of AI.
Transparency Requirements: Makers of the largest AI models, such as those powering the ChatGPT chatbot, would face new transparency requirements, including disclosing information about how their systems work and evaluating them for “systemic risk”.
Regulation of Generative AI Models: The EU has agreed on rules for generative AI models, the technology underpinning tools like ChatGPT, although some EU member states, such as Germany, France, and Italy, have opposed directly regulating these models, favoring self-regulation by the companies behind them.
Biometric Identification Tools: The law also addresses the use of biometric identification tools, such as facial recognition and fingerprint scanning.
Fines: Companies that violate the AI Act could face fines of up to 7% of their global revenue, depending on the violation and the size of the company.
The deal still needs to clear a few final steps before formal approval, but the political agreement has set the key outlines of the legislation. European policymakers focused on AI’s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. The European Union’s ambitious AI rules come as companies like OpenAI continue to discover new uses for their technology, triggering both plaudits and concerns.