EU introduces worldwide standard for regulating AI
The European Union (EU) has taken a groundbreaking step by introducing the AI Act, legislation that regulates the use of AI technology in high-risk areas. EU Commissioner Thierry Breton has hailed the act as “historic”; it adopts a risk-based approach to oversight.
The act specifically targets high-risk areas such as governments’ use of AI for biometric surveillance and systems similar to ChatGPT, requiring transparency before these technologies can be brought to market. This milestone follows a political agreement reached in December 2023 and concludes months of fine-tuning the legal text for legislative approval.
The agreement marks the end of negotiations: the permanent representatives of all EU member states voted on the text on February 2. This crucial step paves the way for the act to progress through the legislative process, with a vote by a pivotal EU lawmaker committee scheduled for February 13, followed by an expected vote in the European Parliament in March or April.
The AI Act centers on the principle that the riskier the AI application, the greater the responsibility placed on developers. This principle is particularly important in critical areas such as job recruitment and educational admissions. Margrethe Vestager, Executive Vice President of the European Commission for a Europe Fit for the Digital Age, emphasized the focus on high-risk cases to ensure that AI technologies align with the EU’s values and standards.
Implementation of the AI Act is expected in 2026, with specific provisions taking effect earlier to allow a gradual integration of the new regulatory framework. In addition to establishing the regulatory foundation, the European Commission is actively supporting the EU’s AI ecosystem. This includes the creation of an AI Office responsible for monitoring compliance with the act, with a particular focus on high-impact foundation models that pose systemic risks.
The EU’s AI Act will be the world’s first comprehensive AI law, aiming to regulate the use of artificial intelligence in the EU to ensure better conditions for deployment, protect individuals, and promote trust in AI systems. The act sorts AI systems into four levels of risk, from minimal through limited and high to unacceptable, providing a clear and easily understandable approach to AI regulation. It will be enforced by national market surveillance authorities, supported by a European AI Office within the EU Commission.
In addition to the AI Act, the EU has proposed categorizing cryptocurrencies as financial instruments and imposing stricter rules on non-EU crypto firms. The proposal aims to curb unfair competition and standardize regulations for crypto entities operating within the EU. The measures include restrictions on non-EU crypto firms serving customers in the bloc, aligning with existing EU financial laws that require foreign firms to establish branches or subsidiaries within the EU.
Simultaneously, the European Securities and Markets Authority (ESMA) has introduced a second set of guidelines to regulate non-EU-based crypto firms, highlighting the need for regulatory clarity and investor protection. These actions by the EU are part of a broader initiative to establish regulatory clarity in the crypto space, protect investors, and foster the growth of crypto services within the EU.