It is generally accepted that AI systems should be transparent (often termed ‘explainability’), and this is reflected in a number of international standards. For example, the Australian Government’s AI Ethics Principles say ‘explainability’ is intended to provide ‘reasonable justifications for AI systems outcomes’, including ‘information that helps people understand outcomes, like key factors used in decision making’.
Ironically, the concept of AI transparency and the best practice approaches to it have been opaque at best.
The UK’s Turing Institute has developed the clearest discussion we have seen of approaches to AI transparency. It acknowledges the complexity inherent in delivering transparency to multiple stakeholders and provides an accessible, well-structured framework from which to approach it. While developed in the context of AI in financial services, it is of general application.
The framework defines transparency as a given stakeholder group having access to relevant information about an AI system, and suggests delivering it requires consideration of:
- Who needs the information;
- Why the information is being sought;
- What information should be provided; and
- How the information should be delivered to the relevant stakeholder.
This approach can be summarised as follows:
1. Who
“Who” refers to the stakeholder who wants access to the information about the system. The report categorises the relevant stakeholders into internal and external stakeholders.
- Internal stakeholders are individuals within the firm employing the AI system, such as those who operate the system or perform oversight functions. Importantly, this includes senior executives and board members, who bear ultimate responsibility for an AI system about which they are likely to have only a limited understanding.
- External stakeholders are individuals outside the firm who are affected by the use of the AI system, such as customers, shareholders and regulators.
2. Why
“Why” refers to the stakeholders’ reasons for accessing information about the AI system. The report describes how transparency can address six common concerns of stakeholders:
- System performance: understanding and improving effectiveness and reliability of AI systems.
- System compliance: ensuring compliance with relevant laws.
- Competent use and human oversight: ensuring correct operation and use.
- Providing explanations: providing reasons for decisions and assurance that they are correct.
- Responsiveness: enabling responses to enquiries.
- Social and economic impact: identifying impacts and providing assurance in relation to concerns about impacts.
3. What
“What” refers to the type of information relevant to stakeholders. The types of information required can be divided into system information and process information.
3.1 System Transparency
System transparency refers to access to information about the operational logic of a system. The types of system information can be illustrated with a simple maths equation (a short code sketch follows the list below):
Y = 200 + x
- Input variable refers to the type of information the system needs to operate. In the equation, x is the input variable.
- Input-output relationship refers to the way the system transforms the inputs into outputs. In the equation, the model transforms the inputs by adding 200 to the value of the input.
- Conditions for a given output refers to the conditions under which the system would produce a particular output. In the equation, in order to produce a Y of 600, the value of x would need to be 400.
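To make these three types of system information concrete, the toy equation can be written as a short Python sketch (the function names are ours, for illustration only, and do not come from the report):

```python
# Toy system from the illustration above: Y = 200 + x

def predict(x: float) -> float:
    """Input-output relationship: the system adds 200 to the input variable x."""
    return 200 + x

def input_needed_for(y: float) -> float:
    """Conditions for a given output: the input required to produce output y."""
    return y - 200

print(predict(400))           # 600
print(input_needed_for(600))  # 400 -> x must be 400 to produce a Y of 600
```

Because the logic is fully specified, all three types of system information can be read straight from the representation itself, which is the kind of directly interpretable representation discussed in section 4.1.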
3.2 Process Transparency
Process transparency refers to access to information about the process by which an AI system was designed, developed, and deployed. The report considers there are two dimensions to this information:
- Lifecycle phases: the activities which occur across the design, development and deployment of an AI system. Most systems will pass through multiple phases relating to design, development and deployment, although there is no universally agreed breakdown of lifecycle phases, as processes vary.
- Levels of information: the various aspects of information that are of interest to stakeholders. The report suggests the following four aspects (an illustrative sketch follows this list):
- substantive aspects of an activity;
- procedures followed in the performance of activities in a phase;
- governance arrangements in place during a phase; and
- adherence to norms and standards.
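As an illustration only, the two dimensions can be thought of as a grid of lifecycle phases against these four aspects. A firm might record process information along the following lines (the phase names, field names and example entries are our own assumptions, not taken from the report):

```python
from dataclasses import dataclass

# Hypothetical record of process information for one lifecycle phase.
# Phase names, fields and example values are illustrative only.
@dataclass
class PhaseRecord:
    phase: str       # e.g. "design", "development" or "deployment"
    substance: str   # substantive aspects of the activity
    procedures: str  # procedures followed in performing the activity
    governance: str  # governance arrangements in place during the phase
    standards: str   # norms and standards adhered to

design_record = PhaseRecord(
    phase="design",
    substance="defined the scoring objective and candidate input features",
    procedures="completed a privacy impact assessment",
    governance="sign-off obtained from the model risk committee",
    standards="internal model development standard",
)
print(design_record)
```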
4. How
“How” refers to how the information is obtained and communicated.
4.1 Obtaining
There are two methods of obtaining information about an AI system: direct interpretation, and indirect analysis using explainability methods. The preferred method is direct interpretation, which uses formal representations of how the system works (such as the maths formula presented in section 3.1).
Explainability methods analyse changes in outputs in response to changes in inputs in order to build a surrogate model which approximates the model being examined. Explainability methods cannot entirely compensate for the information which could be obtained from an interpretable system, and their results generally lack certainty and completeness.
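As a rough illustration of the surrogate approach (a toy example of our own, not taken from the report), one can probe a black-box model by varying its inputs and then fit a simple model to the recorded input-output pairs:

```python
import numpy as np

# Stand-in for an uninterpretable system whose internal logic the analyst
# cannot inspect directly (purely illustrative).
def black_box(income, age):
    return 0.7 * income + 5.0 * age + np.where(income > 50, 20.0, 0.0)

rng = np.random.default_rng(0)

# Indirect analysis: vary the inputs and record how the outputs respond.
income = rng.uniform(20, 100, size=500)
age = rng.uniform(18, 70, size=500)
outputs = black_box(income, age)

# Fit a linear surrogate (output ~ a*income + b*age + c) to the observed
# behaviour. The surrogate approximates the system, but it is not a
# complete or certain account of the system's actual logic.
features = np.column_stack([income, age, np.ones_like(income)])
(a, b, c), *_ = np.linalg.lstsq(features, outputs, rcond=None)
print(f"surrogate: output ~ {a:.2f} * income + {b:.2f} * age + {c:.2f}")
```

The surrogate’s coefficients give a readable approximation of the black box’s behaviour, but they blur over details such as the step at an income of 50, illustrating why indirect analysis lacks the certainty of a directly interpretable representation.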
While making models more interpretable is often seen as a trade-off between model performance and interpretability, this is not necessarily the case. In fact, an inability to fully scrutinise a system may speak against its use: inscrutable systems can result in misplaced trust and a false sense of understanding, which can lead to harmful outcomes.
4.2 Communicating
It is important that the information provided is intelligible and meaningful to the stakeholder receiving it. Stakeholders differ in their ability to understand technical concepts, and the complexity of the information will need to be adjusted so that it remains intelligible to each audience. Meaningfulness will likewise depend on who the stakeholder is and why they are seeking the information.
Firms are likely to have existing best practices which can guide the process of identifying suitable methods of presenting information. However, the report suggests that meaningfulness and intelligibility can be improved if firms consider:
- Using counterfactuals to illustrate the operation of the system (a short illustration follows this list).
- Providing relevant information, and not irrelevant information which might confuse or generate distrust.
- Ensuring the information is intuitive and simple so that it can be used in day-to-day life to make informed choices.
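For instance, a counterfactual explanation for the toy system in section 3.1 might read as in the sketch below (the 600 threshold and the wording are our own assumptions, used purely to illustrate the idea):

```python
THRESHOLD = 600  # hypothetical cut-off for a favourable outcome

def explain(x: float) -> str:
    """Counterfactual explanation for the toy system Y = 200 + x."""
    y = 200 + x
    if y >= THRESHOLD:
        return f"Your result of {y} met the threshold of {THRESHOLD}."
    needed = THRESHOLD - 200  # the input that would have met the threshold
    return (f"Your result of {y} did not meet the threshold of {THRESHOLD}. "
            f"If your input had been {needed} instead of {x}, it would have.")

print(explain(350))  # "... If your input had been 400 instead of 350, it would have."
```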
5. Conclusion
The Turing Report makes the salient point, also made by many others before it, that a ‘black box’ approach to AI will fail:
Transparency is also critical for demonstrating trustworthiness and responsible use, be it to corporate boards, shareholders, customers or regulators. This second role of transparency is no less important. Merely ensuring trustworthiness and responsible use may not be enough to overcome obstacles to adoption. Without reliable evidence to support claims of trustworthiness and responsibility, customer and stakeholder distrust may prevail. The ability to demonstrate trustworthiness and responsibility is therefore a separate pre-condition for successful innovation.
But the value in this report is that it goes beyond this ‘home truth’ to provide more structured thinking for those who face the tricky task of writing the AI explainability statements and signing off on them (that’s you, CEOs and board members).