ADCG’s Explainer: EU’s AI Act
Negotiations over the European Union's (EU) Artificial Intelligence Act (AI Act) continue this week. The process has been ongoing since June 14, when the European Parliament, whose 705 Members (MEPs) are elected in the EU's 27 Member States, voted to move forward with negotiations on the draft AI Act: 499 MEPs voted in favor of further negotiation, 28 voted against moving forward, and 93 abstained.
These negotiations are the final step in a three-way process known as a trilogue, through which EU regulations are enacted.
At a press conference held with European Parliament President Roberta Metsola and co-rapporteurs Brando Benifei and Dragoş Tudorache, Tudorache said the "odds are very good we will finish negotiations by the end of the year." He made clear, however, that the Act will give companies sufficient time to implement updated policies and procedures to comply with its requirements, and give member states time "to set up their roles as market regulators."
The AI Act "aim[s] to promote the uptake of human-centric and trustworthy AI and protect the health, safety, fundamental rights and democracy from its harmful effects." If enacted, the AI Act would be the world's first law governing AI.
The AI Act deploys a tiered, risk-based approach to regulating uses of AI, prohibiting organizations from using AI in ways that create an unacceptable level of risk. Examples of such prohibited uses include:
- Uses that pose a significant harm to people's health, safety, or fundamental rights
- AI systems that rely on "social scoring," i.e., those that categorize and classify people based on their social behaviors, social status, and personal characteristics
- AI systems that are intrusive or discriminatory, such as those that utilize:
  - "Real-time" or "post" remote biometric identification systems in publicly accessible spaces
  - Biometric categorization systems relying on sensitive characteristics, such as gender, race, or political or religious orientation
  - Systems that base their decisions on profiling, location, or past criminal behavior
  - Systems that detect the emotions of people in workplaces or educational facilities, or for law enforcement or border management purposes
  - Systems that utilize facial images from the internet or closed-circuit television footage to create a database for facial recognition purposes
The AI Act also regulates providers of foundation models (essentially AI templates), requiring them to register their models before releasing them in the EU market, comply with the Act's transparency requirements, such as disclosing AI-generated content, and ensure their systems are safeguarded against the generation or utilization of prohibited content.
These prohibitions are likely to cause issues for large corporations. Even before the AI Act gained momentum, EU regulatory bodies have been in the practice of slowing corporate progress where AI is being used. According to a Politico article published on June 13, 2023, the Irish Data Protection Commission (DPC) halted the rollout of Google's new generative AI tool, Bard. Bard launched in 180 countries and territories in 2023; however, according to the Irish DPC, Google had not sufficiently demonstrated that Bard would protect Europeans' privacy as required by the General Data Protection Regulation (GDPR). Google has yet to clear this regulatory hurdle.
Additionally, the AI Act would strengthen citizens' right to file complaints where an AI system, or the decisions resulting from its use, is categorized as high-risk.
On the other hand, the AI Act includes exemptions for research activities and for AI components provided under open-source licenses. It also supports the use of "regulatory sandboxes" and other measures established by public authorities to test an AI system before it is deployed, with the goal of supporting innovation and reducing the regulatory burden that a strict application of the Act would impose. According to a statement by Benifei, the MEPs "want AI's positive potential for creativity and productivity to be harnessed but [the MEPs] will also fight to protect our position and counter dangers to our democracies and freedoms during the negotiations with Council."
* * * * * * *
To read our news alerts discussing the EU-U.S. Data Privacy Framework, a delay in CCPA, a new framework for AI regulation, and Meta fines, click here.
This week’s breach report covers the following organizations: HCA Healthcare, Deutsche Bank, the Bangladesh government, and AMC Theatres. Click here to find out more.
Jody Westby hosts our podcast, ADCG on Privacy & Cybersecurity, bringing together leaders in the privacy and cybersecurity arenas to discuss issues ranging from proposed federal and state regulations to best practices and standards for compliance. Episodes are available on many platforms, including Spotify and Apple Podcasts. Don't forget to subscribe!
Our most recent episodes, including a NEW release:
(NEW) 93 | SolarWinds and SEC: CISOs Back in the Crosshairs (With Guest Mark Rasch)
92 | Interview With Tom Kemp, Silicon Valley Privacy Advocate and Author of Containing Big Tech
To browse our previously published articles and news alerts, please visit our website, and don’t forget to subscribe to receive free weekly Data and Cyber Governance news and Breach Reports directly to your email.