The EU's "Artificial Intelligence Act" Officially in Effect

2024-08-02

Brussels, Thursday (August 1st) - After months of preparation and deliberation, the European Union's "AI Act" has officially entered into force, becoming the world's first comprehensive legal framework governing the application and regulation of artificial intelligence technology. The move marks a decisive step by the EU in harnessing and regulating a rapidly evolving field.


Review of the Regulatory Process

The final stretch of the legislative journey began with formal approval by the Council of the European Union on May 21st of this year, followed by publication in the Official Journal of the European Union on July 12th. Under the established procedure, the Act took effect 20 days after publication. This timeline not only reflects the EU's careful approach to AI regulation but also gives member states and affected industries worldwide time to adapt.

Phased Implementation Strategy

The "AI Act" will be implemented in phases, giving businesses a reasonable transition period to adjust their AI systems. Some provisions apply within 6 to 12 months of the law entering into force, while most of the core obligations will come into full effect on August 2nd, 2026. This staggered timeline is designed to balance technological development with regulatory requirements.

Risk-Based Regulatory Approach

The core of the legislation is its risk-based regulatory strategy, which applies differentiated rules according to the potential risks AI applications pose to society. The initial focus is the ban on certain AI practices that takes effect in February 2025, explicitly prohibiting, among other things, the untargeted collection of facial recognition data and systems that exploit people's vulnerabilities, in order to protect citizens' fundamental rights and freedoms.

Content Transparency and Accountability

From August 2025, more complex AI models, including general-purpose models, will be subject to new rules, among them a requirement that AI-generated content (such as images, audio, and video) be clearly labeled so it can be distinguished from authentic material. The aim is to address challenges such as the spread of disinformation and election interference. At the same time, high-risk AI systems, such as those used in autonomous driving, medical devices, and credit assessment, will face stricter transparency requirements to ensure traceability and clear accountability in how the technology is used.

Strong Enforcement Mechanism

To ensure effective enforcement, the EU will require each of its 27 member states to designate specialized AI regulatory authorities to oversee compliance, with powers to conduct audits, request documents, and order corrective measures. The European Artificial Intelligence Board will act as the coordinating body, ensuring consistency in regulatory practice across member states. Violations will carry heavy financial penalties, with fines of up to €35 million or 7% of global annual turnover, whichever is higher, serving as a deterrent.

Far-Reaching Impact from a Global Perspective

Several industry leaders and legal experts have praised the legislation, arguing that it will have a profound impact not only on businesses within the EU but also on the global technology ecosystem. Tanguy Van Overstraeten of the law firm Linklaters noted that the legislation may reshape AI regulatory standards worldwide. Charlie Thompson, an executive at Appian, emphasized that its reach extends far beyond the EU's borders: any technology company doing business in the EU will face stricter scrutiny.

Response and Outlook for Technology Companies

Facing impending strict regulation, technology companies have begun adjusting their strategies. Meta, for example, has already limited the availability of certain AI models in Europe, a sign of caution around compliance. Eric Loeb, Executive Vice President at Salesforce, argued that the EU's risk-based regulatory framework strikes a good balance between innovation and safety, one that other countries and regions can learn from.