ICO (UK) And Turing Institute Publish Draft Guidance On Explaining Artificial Intelligence Decisions

Artificial intelligence – technology that mimics human decision-making, but across much larger and more complex sets of data – has the potential to transform nearly every aspect of modern life. In many ways it already has. From teaching cars to drive themselves to helping banks detect money laundering schemes, AI’s business applications are limited only by the imagination.

Yet, even as AI becomes more sophisticated, many business leaders are wary of using the technology in their organizations. According to the results of a survey by PwC, only 4 percent of business leaders plan to incorporate AI at scale in 2020 – down 16 percentage points from one year earlier.

The reason for this collective hesitance can be attributed to many factors, but the proliferation of data privacy laws cannot be discounted. The European Union’s General Data Protection Regulation (GDPR), for example, places many restrictions on how consumer data can be used in business decisions – and sharply restricts solely automated decision-making for determinations with legal or similarly significant effects, such as loan and housing decisions. When partial or full AI decision-making is allowed, the GDPR requires that those decisions be explained to consumers.

That’s why the U.K.-based Information Commissioner’s Office (ICO) and the Alan Turing Institute recently shared a three-part draft guidance laying out best practices for explaining business decisions made using artificial intelligence (AI). The guidance, which expands on previous recommendations, defines an AI decision as one based on a computer-generated “prediction, recommendation or classification.”

The guidance first names four governing principles, grounded in GDPR, for maintaining legal compliance when building and utilizing systems that make use of AI decision-making:

  1. Transparency: Be open about the use of AI and explain decisions to affected individuals in clear, meaningful ways.
  2. Accountability: Maintain human oversight of AI systems and be prepared to answer to regulators if challenged.
  3. Context: Look at the bigger picture and tailor explanations to the context in which the AI decision is made.
  4. Impact: Consider the wider ethical implications of all AI projects and systems on society at large.

The remainder of the draft guidance is divided into three parts, which cover the basics of explaining AI decisions, how to explain them in practice, and what explaining AI means for an organization.

Section One: The Basics of Explaining AI

This section is relevant to anyone developing AI systems to be used in business operations. It outlines the legal basis for explaining AI decisions and weighs the benefits and risks of doing so.

Compliance is one of the most important reasons for being able to explain AI decisions, but the importance of building consumer trust cannot be overstated. Transparency is also not without its risks: commercial sensitivities, inappropriate personal data disclosure, and potential manipulation of AI systems are some of the dangers companies face when disclosing information about AI decision-making.

This part establishes six approaches that can be used individually or in combination to explain AI decisions:

  1. Rationale: What steps did the AI take to reach the decision?
  2. Responsibility: Which human being(s) can be held responsible for the AI decision, and whom can individuals contact to contest it?
  3. Data: What data did the AI utilize to reach the decision?
  4. Fairness: What steps have been taken in designing the AI system to prevent biased or unfair decisions?
  5. Safety and Performance: What steps have been taken to ensure the reliability, accuracy, and security of the AI system?
  6. Impact: How does the AI impact individuals and/or society as a whole?

Section Two: Explaining AI in Practice

This section is geared primarily toward technical and compliance teams. It offers a seven-step approach to selecting and then presenting the explanations outlined in Section One:

  1. Determine which of the six approaches outlined in Section One are key when examining the context for decision-making. Rationale and responsibility explanations are given priority.
  2. Collect information for explanations.
  3. Make sure that the system’s overall logic can be understood by outside parties.
  4. Convert results/system logic into easily understood language.
  5. Ensure that humans involved in this process undergo relevant training.
  6. Examine context further to determine why an explanation is desired and what should be explained.
  7. Determine the best options for presenting explanations.

Section Three: Explaining What AI Means for Your Organization

Section Three is designed for senior management teams and discusses roles, policies, procedures, documentation, and best practices for developing appropriate AI decision explanations. The guidance reminds corporations that every team member involved in the process plays a role in explaining AI decisions. Policies should clearly explain rules and to whom they apply. Procedures must provide specific direction regarding implementation, and documentation should provide examples of AI-assisted decision explanations.

AI in Action

More than half of people surveyed in recent ICO research shared concerns about AI decision-making. As the potential for AI in the workplace continues to grow, creating trust, security and transparency will be critical to successful adoption and growth.

Adopting these principles will also be necessary for legal compliance: the GDPR mandates Data Protection Impact Assessments (DPIAs) in several scenarios, including when artificial intelligence is used. The ICO has published separate guidance on DPIAs and is also developing a framework for auditing AI systems.

The consultation period is open until January 24, 2020. After reviewing the feedback received, the ICO and the Turing Institute may amend the guidance, with a final version likely to be published later in 2020.
