European Union unveils global benchmark for AI regulation

The European Union (EU) has set a precedent with the AI Act, legislation that focuses on high-risk uses of AI technology.

The legislation, hailed by EU Commissioner Thierry Breton as “historic,” will introduce a risk-based approach to AI oversight.

Under that approach, the heaviest obligations fall on high-risk areas such as governments’ use of AI for biometric surveillance. The act also casts a regulatory net over systems akin to ChatGPT, demanding transparency before they are placed on the market. The landmark vote follows a political agreement reached in December 2023 and caps months of fine-tuning the legal text for legislative approval.

The agreement signals the end of negotiations, with the permanent representatives of all EU member states casting their vote on Feb. 2.

This step sets the stage for the act to progress through the legislative process, with a vote by a key committee of EU lawmakers scheduled for Feb. 13, followed by an expected vote in the European Parliament in March or April.

The AI Act’s approach revolves around the principle that the riskier the AI application, the greater the responsibility placed on its developers. The principle carries particular weight in critical areas such as job recruitment and educational admissions, which the act treats as high-risk.

Margrethe Vestager, the European Commission’s Executive Vice-President for a Europe Fit for the Digital Age, stressed that the focus is on high-risk cases to ensure that the development and deployment of AI technologies align with the EU’s values and standards.

Full implementation of the AI Act is expected in 2026, with specific provisions taking effect earlier to allow a gradual transition to the new regulatory framework.

Beyond establishing the regulatory foundation, the European Commission is proactively supporting the EU’s AI ecosystem. That effort includes the creation of an AI Office responsible for monitoring compliance with the act, with a particular focus on high-impact foundation models that pose systemic risks.

The EU’s AI Act will be the world’s first comprehensive AI law. It aims to regulate the use of artificial intelligence across the bloc, ensuring better conditions for its deployment, protecting individuals, and promoting trust in AI systems.

The act sorts AI systems into four levels of risk (unacceptable, high, limited, and minimal), providing a clear and easy-to-understand framework for regulation. It will be enforced by national competent market surveillance authorities, with the support of a European AI Office within the European Commission.

Stricter crypto regulations 

The EU has proposed categorizing cryptocurrencies as financial instruments and imposing stricter regulations on non-EU crypto firms. The proposal aims to curb unfair competition and standardize the rules for crypto entities operating within the EU.

The proposed measures include restrictions on non-EU crypto firms serving customers in the bloc, in line with existing EU financial laws that require foreign firms to establish branches or subsidiaries within the EU.

In tandem, the European Securities and Markets Authority (ESMA) has introduced a second set of guidelines for non-EU crypto firms, emphasizing the need for regulatory clarity and investor protection.

The move by the EU is part of a broader initiative to establish regulatory clarity in the crypto space, safeguard investors, and foster the growth of crypto services within the EU.

