EU Adopts the World’s First Artificial Intelligence Laws

The European Union has reached the world’s first comprehensive legislative agreement to regulate artificial intelligence (AI), following intense negotiations between member states and the European Parliament.

This move paves the way for legal oversight of AI technology, promising to transform daily life while raising concerns about existential risks to humanity.

The European Parliament and negotiators from the Union’s 27 member states overcame significant disputes, including contentious issues such as generative AI and police use of facial recognition technologies, to reach a preliminary political agreement on the Artificial Intelligence Act.

The EU disclosed that the new AI Act is a pioneering legislative initiative aimed at enhancing the development and adoption of “safe and trustworthy” AI in the EU’s market by both private and public entities.

As the first legislation of its kind globally, the EU says the Act seeks to set a standard for AI regulation in other jurisdictions.

European lawmakers focused on the riskiest uses of AI by companies and governments, including those employed in law enforcement and essential services such as water and energy management. Generative AI systems, such as those powering chatbots like ChatGPT, will face new transparency requirements.

According to European officials and previous drafts of the law, chatbots and programs that create manipulated images, such as “deepfakes,” must disclose that artificial intelligence was used.

Key Provisions of the New Legislation

This groundbreaking legislation involves several measures, with a primary focus on generative artificial intelligence. It addresses concerns about monitoring the quality of data used in algorithm development and ensures compliance with copyright law.

The new law compels AI developers to disclose their artificially created output. For instance, they must clearly label artificially generated images as having been produced with artificial intelligence.

This move aims to combat the harmful use of deepfake technologies, which manipulate images and videos of individuals to create false content with both audio and visual elements.

AI systems that engage in conversations with humans, such as the chatbot programs widely present on business and administrative websites, must be clearly identified as automated, and users must be systematically notified that they are not interacting with another human.

Advanced artificial intelligence systems, especially those used in sensitive fields like energy, education, human resources, and law enforcement, will face enhanced restrictions. The legislation mandates human supervision during the creation and deployment of systems in these fields.

These systems must also provide regulators with risk assessment results, details of data used for system training, and assurances that the software will not cause harm.

The new law also regulates the use of biometric surveillance by law enforcement agencies, covering facial recognition technologies.

European legislators have established a set of guarantees and minor exceptions for the adoption of biometric surveillance technologies, including certain cases related to identifying individuals suspected of committing one of the specific crimes mentioned in the list (such as terrorism, trafficking, sexual abuse, homicide, abduction, rape, armed robbery, participation in a criminal organization, environmental crimes).

Applications Ban

Acknowledging the potential threats posed by some AI applications to citizens’ rights and democracy, participating legislators approved banning biometric classification systems using sensitive attributes (such as political, religious, philosophical beliefs, sexual orientation, and ethnicity).

This ban extends to applications that extract facial data from the internet or surveillance camera footage to create face recognition databases. It also includes applications that can reveal a person’s emotions in their workplace or other institutions.

The law further bans AI systems that manipulate human behaviour to circumvent free will, or that exploit individuals’ vulnerabilities based on factors such as age, disability, or socio-economic status.

Sanctions and Challenges

To ensure the implementation of the new law, the European Artificial Intelligence Office will act as the “policeman” of the European Union on this matter, providing substantial means for monitoring and penalties.

Penalties for violations may reach €35 million or 7% of a company’s total global turnover for the most severe offences, and range down to €7.5 million or 1.5% of turnover for lesser violations.

Reports suggest that how the new legislation will be implemented remains unclear, as enforcement involves regulatory bodies in all 27 EU member states and requires significant financial investment at a time when government budgets for the field remain limited.

Similar legislation has faced criticism for poor and uneven enforcement across EU countries, as seen with the bloc’s digital privacy law, the General Data Protection Regulation (GDPR).

Al Jundi
