The EU Is Regulating Your AI. Five Ways To Prepare Now

The European Union (EU) is leading the world in regulating issues that affect anyone who interacts with EU companies and citizens. The EU led the way on data privacy with the General Data Protection Regulation (GDPR). It is doing the same with AI regulation, and it will impact your business.

Should AI be regulated? Undoubtedly. Limited AI regulation exists today in the US at the state level, and some of it addresses only a tiny sliver of consumer protection (see Consumer Protection and AI—7 Expert Tips To Stay Out Of Trouble).

Why do you need to pay attention?

Like GDPR, AI regulations that come out of the EU will impact nearly all global companies. EU regulations of this type apply to all companies that seek to do business in the EU or with EU-based businesses or EU-based consumers. In other words, unless you’re willing to give up working with the 450 million citizens of the EU, these new regulations will impact you. (The current plan is that EU AI regulations will work in harmony with GDPR and will not attempt to change the regulations around data privacy.) According to McKinsey, “companies seeing the highest returns from AI are far more likely to report that they’re engaged in active risk mitigation.” To get the highest return from your AI initiatives and reduce your risk of running afoul of regulations, you need to take action now.

Five Things To Do Now

The EU will be regulating AI that is considered an “unacceptable risk” and AI regarded as “high risk.” These new regulations apply whether you are an AI company building and selling AI systems or you are a company buying AI systems from AI vendors.

1) First and foremost, you need to ensure you are not involved in an “unacceptable risk” AI initiative. Unacceptable risk systems include:

  • Exploitative, subliminal, or manipulative techniques that cause harm. Could Facebook’s AI, which tends to drive people to post incendiary content, be affected?

  • All forms of social scoring systems, including China’s Social Credit System, which tracks the behavior and trustworthiness of all citizens.

  • Biometric identification systems in public spaces, like facial recognition systems. (see Why Are Technology Companies Quitting Facial Recognition?)

2) Next, determine if you will build or use a “high-risk” AI system. These may include:

  • Critical infrastructure where the system could put people’s life and health at risk (e.g., transportation);

  • Safety components of products (e.g., robot-assisted surgery);

  • Employment (e.g., resume reviews);

  • Worker management (e.g., performance reviews);

  • Some biometric identification; and

  • Essential private services (e.g., loans).

High-risk AI systems will face significant scrutiny and regulation.
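
If you maintain an inventory of AI systems, a lightweight triage helper can flag which ones need attention. Below is a minimal sketch in Python; the tier names mirror the proposed EU categories, but the domain-to-tier mapping is an illustrative assumption, not the regulation’s authoritative list.

  # Minimal triage sketch for an AI-system inventory.
  # The domain-to-tier mapping below is illustrative, not authoritative.
  from enum import Enum

  class RiskTier(Enum):
      UNACCEPTABLE = "prohibited"
      HIGH = "heavily regulated"
      OTHER = "lighter obligations"

  DOMAIN_TIERS = {
      "social_scoring": RiskTier.UNACCEPTABLE,
      "public_biometric_id": RiskTier.UNACCEPTABLE,
      "critical_infrastructure": RiskTier.HIGH,
      "employment_screening": RiskTier.HIGH,
      "worker_management": RiskTier.HIGH,
      "credit_scoring": RiskTier.HIGH,
  }

  def triage(domain: str) -> RiskTier:
      """Return the presumed risk tier for a system's application domain."""
      return DOMAIN_TIERS.get(domain, RiskTier.OTHER)

  print(triage("employment_screening"))   # RiskTier.HIGH
  print(triage("music_recommendation"))   # RiskTier.OTHER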

Unfortunately, some seemingly innocuous AI applications may get caught in the high-risk net. An example is Duolingo (a popular language-learning platform). Because Duolingo is sometimes used for admission to educational institutions, it falls into the high-risk category. Duolingo’s market value will likely decline as a result.

3) Prepare to include humans in the decisions made by the AI

Human involvement in AI decisions is commonly described in three ways:

  • Human-IN-the-loop: A human is required to be part of the decision. An example Human-IN-the-loop system would be approving a job candidate during the interview process. While AI may be capable of surfacing excellent candidates, bias concerns may cause the EU to require that the AI be relegated to an advisory role, with the human making the final decision.

  • Human-OVER-the-loop: Humans may intervene in AI-based decisions. An example Human-OVER-the-loop system is automated driving directions. Most of the time, the human will follow the system’s recommendations, but humans may choose to go a different way from time to time. It’s unclear how the EU regulations will affect these systems.

  • Human-OUT-OF-the-loop: The AI runs without human interaction. An example Human-OUT-OF-the-loop system could be a fully automated self-driving vehicle. These systems will be much more challenging to implement under the new regulations.
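
To make the three patterns concrete, here is a minimal Python sketch of how the same decision function might be wired up under each oversight mode. The scenario, function names, and threshold are hypothetical illustrations, not anything prescribed by the regulations.

  # Minimal sketch of the three oversight patterns for a hypothetical
  # candidate-screening model; names and the threshold are illustrative.
  from enum import Enum
  from typing import Callable, Optional

  class Oversight(Enum):
      IN_THE_LOOP = "human decides, AI advises"
      OVER_THE_LOOP = "AI decides, human may override"
      OUT_OF_THE_LOOP = "AI decides autonomously"

  def decide(ai_score: float, mode: Oversight,
             human_review: Optional[Callable[[float], bool]] = None) -> bool:
      ai_decision = ai_score >= 0.7  # illustrative approval threshold
      if mode is Oversight.IN_THE_LOOP:
          # AI output is advisory only; a human must make the final call.
          assert human_review is not None, "in-the-loop requires a reviewer"
          return human_review(ai_score)
      if mode is Oversight.OVER_THE_LOOP:
          # AI decides by default, but a human may intervene when present.
          return human_review(ai_score) if human_review else ai_decision
      # OUT-OF-the-loop: fully automated; hardest to justify for high risk.
      return ai_decision

  # In-the-loop usage: the reviewer callback, not the AI, decides.
  print(decide(0.82, Oversight.IN_THE_LOOP, human_review=lambda s: s > 0.9))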

4) Prepare to include explainability as part of your AI system

In China, Ant Group (an Alibaba affiliate) approves some loan applications in minutes. The AI involved in these decisions has access to thousands of data points garnered from Alibaba’s vast trove of data about each applicant.

Due to the type of AI being used and the complexity of the algorithms involved, it is impossible to know how the AI decided to approve a loan for one person and reject another. The system is opaque, and it’s not explainable.

EU AI regulations likely won’t permit such an opaque system. The EU is driving for explainability in high-risk AI systems. Since fair access to financial products is one area of focus for the EU, any supplier of AI loan-evaluation systems will need to show precisely how the AI made a decision.

This regulation is good for removing potential bias and ensuring fairness. However, it will slow down the loan approval process substantially, reducing the benefits of an AI-based system.
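
One way to meet an explainability requirement is to use an inherently interpretable model, where each feature’s contribution to a decision can be reported directly. Below is a minimal sketch using a scikit-learn logistic regression on synthetic data; the feature names and data are hypothetical stand-ins for a real, audited loan model.

  # Minimal explainability sketch: per-applicant contributions from a
  # linear loan model. Feature names and data are hypothetical stand-ins.
  import numpy as np
  from sklearn.linear_model import LogisticRegression

  FEATURES = ["income", "debt_ratio", "years_employed", "late_payments"]
  rng = np.random.default_rng(0)
  X = rng.normal(size=(500, 4))  # synthetic training data
  y = (X[:, 0] - X[:, 3] + rng.normal(size=500) > 0).astype(int)

  model = LogisticRegression().fit(X, y)

  def explain(applicant: np.ndarray) -> None:
      """Print each feature's signed contribution to the decision's log-odds."""
      contributions = model.coef_[0] * applicant
      decision = "approve" if model.predict([applicant])[0] else "reject"
      print(f"Decision: {decision}")
      for name, c in sorted(zip(FEATURES, contributions),
                            key=lambda t: -abs(t[1])):
          print(f"  {name:>15}: {c:+.3f}")

  explain(X[0])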

5) Ensure ongoing monitoring of systems

Because AI systems typically improve continuously, it’s not enough to comply with EU regulations when a system is first deployed. The EU regulations propose that “all providers should have a post-market monitoring system in place.”

The assumption is that a company will need to monitor its systems continually to ensure ongoing compliance. Your compliance job will never be done, and it is likely to become much more complex given how frequently AI systems can be updated.
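
What ongoing monitoring looks like will vary by system, but a common building block is a statistical drift check that flags when a model’s live outputs diverge from a baseline captured at deployment. Here is a minimal sketch using a two-sample Kolmogorov–Smirnov test from SciPy; the threshold and synthetic data are illustrative assumptions, not values prescribed by the regulations.

  # Minimal post-market monitoring sketch: flag drift between the live
  # score distribution and a frozen deployment baseline for human review.
  # The alert threshold and synthetic data are illustrative assumptions.
  import numpy as np
  from scipy.stats import ks_2samp

  rng = np.random.default_rng(0)
  baseline_scores = rng.beta(2, 5, size=10_000)  # stand-in for deployment-time scores
  ALERT_P_VALUE = 0.01  # illustrative alerting threshold

  def check_for_drift(recent_scores: np.ndarray) -> bool:
      """Run a two-sample KS test between baseline and recent model outputs."""
      stat, p_value = ks_2samp(baseline_scores, recent_scores)
      if p_value < ALERT_P_VALUE:
          print(f"Drift suspected (KS={stat:.3f}, p={p_value:.4f}); "
                "escalate for compliance review.")
          return True
      return False

  check_for_drift(rng.beta(3, 4, size=1_000))  # shifted distribution -> flags drift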

Treat these upcoming EU regulations as an opportunity to manage and govern all AI-related risks. While the regulations can be cumbersome, you may gain a competitive advantage if your organization gets ahead of them now. And, as McKinsey points out, you may even see the highest returns from your AI initiatives.

This article was written by Glenn Gow and originally published in Forbes. Glenn authorized the republication of his article for the ADCG community. More information about Glenn Gow can be found on his website.
