European Union Adopts New Law on AI Regulation

The European Union, continuing its effort to protect the millions under its umbrella, has passed the European Union Artificial Intelligence Act of 2024. After serious scrutiny by the European Parliament, the law is expected to come into effect later this year. The Act is a further step beyond the EU's landmark GDPR, protecting consumer rights in the digital sphere by limiting certain AI behaviors while encouraging innovation.

GDPR, or the General Data Protection Regulation, was the first law of its kind when it was ratified in 2016 and came into force in 2018. The GDPR establishes the general obligations of data controllers and of those processing personal data on their behalf (processors). These include the obligation to implement appropriate security measures according to the risk involved in the data processing operations they perform. Controllers are also required in certain cases to provide notification of personal data breaches. All public authorities, as well as companies that perform certain risky data processing operations, must also appoint a data protection officer. (1)

In March 2024, the EU AI Act passed 523–46 and, much like the GDPR, will affect anyone doing business in the European bloc. The Act focuses on the transparency obligations that providers and users of AI systems must follow. Tech companies should prioritize obligations such as disclosing AI system use, clearly indicating AI-generated content, maintaining detailed technical documentation, and reporting serious incidents or malfunctions. These transparency measures are critical for ensuring AI systems' trustworthiness, accountability, and explainability, which are the Act's primary goals. (2)

In addition, the Act prohibits certain AI practices outright, targeting two major ones. The first is the use of subliminal techniques, which have been shown to cause psychological and physical harm, especially to children. The second is social scoring systems, which have proven to be racist and discriminatory.

The Act also requires companies to identify which of their AI systems are considered high risk. These systems include:

  1. Critical infrastructure, such as water lines, energy supplies, and transportation
  2. Educational networks and vocational training centers, where educational materials could be disrupted
  3. Autonomous and employment-related systems, including factories, vehicles, and systems that process résumés and worker data
  4. All biometric data collection
  5. Public services and anything that can be legally obtained through a database or through law enforcement

Much like the GDPR, all of this applies to anyone doing business in the EU bloc. Fines for noncompliance can reach €35 million or 7% of a company's global annual turnover, whichever is higher.

Many tech leaders, such as Bill Gates, Tesla CEO Elon Musk, and Meta CEO Mark Zuckerberg, have come out in support of government regulation of AI. Speaking at the 2023 Asia-Pacific Economic Cooperation (APEC) CEO summit in San Francisco, Google CEO Sundar Pichai compared AI regulation to climate change, saying that AI "will proliferate" and that "AI advances will get out to all the countries and so it's naturally the kind of technology that — I don't think there's any unilateral safety to be had."

Should AI go wrong in one country, he said, it could impact other countries, making it difficult to regulate locally.

“In some ways, it’s like climate change and the planet,” Pichai said. “We all share a planet. I think that’s true for AI.” That’s why “you have to start building the frameworks globally,” he added. (3)

The European Union hopes this law can be a trendsetter for other AI regulations. Japan's Diet is already hoping to push AI legislation in its 2024 session and plans to model parts of its law on Europe's landmark act. Many also hope the US Congress will take up something similar, as many US businesses also operate in Europe.

AI is a sensitive and potentially dangerous technology if left unchecked. The European Union hopes to be the forerunner in keeping this divisive technology in check. Much like the GDPR, the Act aims to protect consumers and businesses from being harmed by this technology.