02/07/2019

In March 2019, the UK’s Information Commissioner’s Office (ICO) issued a call for input into its development of an auditing framework for AI technologies. The push responds to the protections afforded to individuals under the EU’s General Data Protection Regulation (GDPR) where personal data is processed by AI technology and then used to profile, or make decisions about, a person.

The framework will be intended to support:

  • the ICO, by providing a baseline for assessing organisations’ compliance with the GDPR in their use of AI; and
  • organisations making use of AI technology, by providing practical guidelines to assist in the identification and effective management of risks arising from that technology.

The framework’s development is being chronicled in an ongoing series of blog posts; at the time of writing, the ICO has released the framework’s two key pillars and their components, and has begun to dive into the detail.

The house that ICO built

The proposed framework is built on the two key pillars of:

  1. Governance and Accountability, aimed at prompting boards, senior leadership and data controllers to consider whether organisational governance and risk management practices adequately account for the new challenges brought by AI technologies; and
  2. AI-Specific Risk Areas, covering eight data protection risk areas specific to AI technology that organisations will need to understand, as well as some suggested “good practice” controls.

   1.  Governance & Accountability

  • Risk appetite
  • Leadership engagement and oversight
  • Management and reporting structures
  • Compliance and assurance capabilities
  • Data protection by design and default
  • Policies and procedures
  • Documentation and audit trails
  • Training and awareness

   2.  AI-Specific Risk Areas

  • Fairness and transparency in profiling
  • Accuracy
  • Fully automated decision-making models
  • Security and cyber
  • Trade-offs
  • Data minimisation and purpose limitation
  • Exercising of rights
  • Impact on broader public interests and rights

The guidance offered under these areas will prompt organisations to ask key governance, design and implementation questions as they consider the place AI technologies have in their businesses and processes, such as:

  • Have we developed appropriate measures to assess the accuracy of our AI systems that make predictions or decisions using personal data? (A simple sketch follows this list.)
  • Can we prevent or detect incorrect or misleading processing of personal data, and promptly correct mistakes?
  • Are we ensuring appropriately “meaningful” human review in processes that are not intended to be solely automated?
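
By way of a purely illustrative sketch of the first question (all names and figures below are invented, not drawn from the ICO framework), an organisation might periodically compare a system’s automated decisions against human-verified outcomes:

    # Hypothetical illustration: comparing an AI system's automated decisions
    # against human-verified outcomes (1 = approve, 0 = decline).
    verified = [1, 0, 1, 1, 0, 1, 0, 0]   # outcomes later confirmed by reviewers
    decided  = [1, 0, 1, 0, 0, 1, 1, 0]   # what the system actually decided

    accuracy = sum(v == d for v, d in zip(verified, decided)) / len(verified)

    # When decisions affect individuals, the direction of errors matters as
    # much as headline accuracy (e.g. wrongly declining an application).
    false_approvals = sum(d == 1 and v == 0 for v, d in zip(verified, decided))
    false_declines  = sum(d == 0 and v == 1 for v, d in zip(verified, decided))

    print(f"accuracy: {accuracy:.0%}, "
          f"false approvals: {false_approvals}, false declines: {false_declines}")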

These sorts of questions, and guidance to help organisations answer them, will continue to emerge as the framework develops.

Oi, Robot – what this means for Australian businesses

It’s not only UK organisations that need to consider the impact of the ICO’s auditing framework. Under the GDPR, Australian businesses may also be subject to the broad extraterritorial reach of European privacy regulators. (See our previous discussion on the application of the GDPR in Australia.)

So, what steps should Australian businesses subject to the GDPR be taking when it comes to AI?

  • Should we automate?: As a first step, organisations need to consider whether it is appropriate to automate a given decision-making or prediction process at all, including by examining the level of accuracy the AI system can realistically achieve.
  • Design to protect data: The GDPR requires that AI systems be designed with the privacy of individuals in mind. Organisations need to ask at the outset: will the AI system be used solely for automated decision-making, or will it enhance human decision-making? To avoid a decision being treated as solely automated under the GDPR, the system should be designed to support human review that is sufficiently meaningful (illustrated in the sketch following this list).
  • Consider the risks: The board and management should consider and approve the intended use of the AI system in line with the organisation’s risk profile. When assessing risk, the security of the system is key, in terms of both technical and organisational measures; organisations will need to assess the security of the code itself and the system framework, whether developed and maintained externally or in-house. AI also brings unique risk factors that may require updates to existing risk management policies, and new procedures may be needed to ensure adequate staff training and ongoing risk monitoring.
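
To make “sufficiently meaningful” human review concrete, the hypothetical sketch below routes borderline or adverse outcomes to a human reviewer rather than finalising them automatically. Every name and threshold here is an assumption made for illustration; the GDPR and the ICO framework do not prescribe any particular design.

    # Hypothetical human-in-the-loop gate, so that significant decisions are
    # not "solely automated". Names and thresholds are illustrative only.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Decision:
        applicant_id: str
        model_score: float            # model's confidence the application is sound
        approved: Optional[bool] = None

    REVIEW_THRESHOLD = 0.9            # assumed cut-off; a real value needs risk analysis

    def human_review(decision: Decision) -> bool:
        # Stub for illustration: a real system would queue the case for a trained
        # reviewer with genuine authority to override the model's output.
        print(f"queued {decision.applicant_id} for human review")
        return False

    def decide(applicant_id: str, model_score: float) -> Decision:
        decision = Decision(applicant_id, model_score)
        if decision.model_score >= REVIEW_THRESHOLD:
            # Clear-cut favourable outcome; whether auto-approval is acceptable
            # here is a legal question for the organisation, not a technical one.
            decision.approved = True
        else:
            decision.approved = human_review(decision)
        return decision

    print(decide("A-001", 0.95))      # auto-approved
    print(decide("A-002", 0.40))      # routed to a human reviewer

The design point is that the reviewer’s decision is determinative rather than advisory; a rubber-stamp step would not make the process meaningfully human under the GDPR.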

Authors: Melissa Fai, Clare Beardall and Thomas Power
