In the words of NSW Minister for Customer Service and Digital, Victor Dominello, NSW is “…already using AI to help maintain our trains, protect our endangered species, keep patients safe from sepsis in our hospitals, and to bolster our cyber defences”. To support this use of AI in government, the NSW Government launched the AI Assurance Framework in March this year. This final component of the NSW Government’s AI Strategy rounds out the Government’s aim of promoting and ensuring the ethical and responsible use of AI by the NSW Government.

The AI Assurance Framework outlines a mandatory review process for all NSW Government departments and agencies developing, building and implementing AI projects. In this article, we explain the key mechanisms under the AI Assurance Framework and consider their impact on the procurement and use of AI solutions. The Framework is relevant not only to NSW Government departments and agencies, but also to suppliers of AI solutions to the NSW Government, who will need to cooperate with agencies and departments throughout the process.

What is the AI Assurance Framework?

The AI Assurance Framework provides NSW Government departments and agencies with a step-by-step review structure for their AI projects, helping them consider whether a project aligns with the principles under the AI Strategy and AI Ethics Principles.

The five key AI Ethics Principles are:

  • community benefit;
  • fairness;
  • privacy and security;
  • transparency; and
  • accountability.

*source: NSW Government AI Assurance Framework

The AI Assurance Framework contains a range of questions based on the AI Ethics Principles to help departments and agencies determine whether an AI project is achieving its purpose whilst also being fair, ethical and transparent. It is intended to be used at all points in an AI project’s lifecycle and is designed to allow project teams to continually assess their projects and address any issues that arise.

When does the AI Assurance Framework apply?

Application of the AI Assurance Framework is mandatory for all AI projects undertaken by NSW Government departments and agencies other than those involving use of widely available commercial AI applications that are not being customised in any way. 

The NSW Government states that the AI Assurance Framework is not the “be-all and end-all” and that project teams must consider not only the requirements under the AI Assurance Framework, but also any agency-specific AI processes, policy requirements and governance mechanisms.

What does compliance with the AI Assurance Framework involve?

  • AI Assessment: Where the AI Assurance Framework applies, the project team must conduct a self-assessment using the questions posed in the AI Assurance Framework (AI Assessment). The AI Assessment must be conducted by “Responsible Officers” who should be appropriately senior, skilled and qualified for the role. A separate Responsible Officer must be appointed for each of the following responsibilities:
    • use of the AI insights / decisions;
    • the outcomes from the project;
    • the technical performance of the AI system; and
    • data governance.

We outline the steps involved in an AI Assessment below.

  • AI Review Body: In addition, on completion of the AI Assessment, project teams must submit the AI Assessment for the following types of projects to the soon-to-be-established AI Review Body for further review:
    • larger AI projects (being those valued over $5 million);
    • projects funded by the Digital Restart Fund (the NSW Government funding program that provides monetary support to digital and information and communication technology initiatives to support them in planning, designing and developing digital products and services); and
    • projects where the AI Assessment identifies significant residual risks which (after mitigations) are mid-range or higher. 

The AI Review Body can make recommendations to help mitigate risks. The recommendations are not binding; however, any decision not to implement them should be documented with accompanying reasons.

What do departments and agencies have to do to conduct an AI Assessment?

There are a number of steps that a department or agency must take when conducting an AI Assessment of their AI project, including:

  1. Benefits realisation – consider the benefits and intended outcomes of the AI project in accordance with the NSW Government’s Benefits Realisation Management Framework and capture the potential benefits in a Benefits Realisation Management Plan;
  2. Risk assessment – using the AI Assurance Framework’s assessment questions, consider the risks associated with the AI project;
  3. AI Ethics Principles questions – consider the ethical questions posed by the AI Assurance Framework (which are structured to align with the AI Ethics Principles). Depending on the answer provided, the AI Assurance Framework provides guidance on the next steps to be taken. Where an AI project does not “pass” a particular question, the project should be paused or ceased until further discussion of the ethical issues is undertaken. Some AI Ethics Principles related questions posed include:
    1. Community benefit: will the AI system improve on existing approaches to deliver outcomes? Could the AI system cause reversible or irreversible harms?;
    2. Fairness: is the data selected of appropriate quality? Is there a way to monitor and calibrate the performance of the AI system?;
    3. Privacy and security: does the AI project use sensitive data? Has the project team completed a privacy impact assessment?;
    4. Transparency: has the project team consulted with the relevant community that will benefit from the AI system? Are the scope and goals of the AI project publicly available?; and
    5. Accountability: have clear processes been established to intervene if a relevant stakeholder finds concerns with insights or decisions made by the AI system, or to ensure that the project team does not become over-reliant on the AI system?;
  4. Overall assessment – once all benefits, risks and ethical questions are considered, the Responsible Officer must consider the most significant risks of the AI project and count the number of ‘medium’, ‘high’ and ‘very high’ risks identified in each area;
  5. Data governance controls – upon determining the overall assessment of the AI project, the Responsible Officer must assign the level of data governance controls required to ensure the project is run safely and ethically. The levels of control environment are (1) no control, (2) low control, (3) moderate control, (4) high control and (5) very high control environment; and

*source: NSW Government AI Assurance Framework

  6. Submission of AI Assessment to AI Review Body – where required, the Responsible Officer must submit their AI Assessment to the AI Review Body for further review.
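In practical terms, the overall assessment and data governance steps above amount to tallying the risk ratings recorded across the assessment and mapping that tally to one of the five control environments. The sketch below illustrates this tallying logic only; the thresholds used to assign a control level are hypothetical assumptions for illustration, not the mapping set out in the Framework’s own table.

```python
from collections import Counter

def overall_assessment(risks):
    """Step 4: count the 'medium', 'high' and 'very high' risk
    ratings identified across the assessment areas."""
    return Counter(r for r in risks if r in ("medium", "high", "very high"))

def control_level(counts):
    """Step 5: assign a control environment (1–5).
    These thresholds are illustrative assumptions only,
    not the Framework's actual mapping."""
    if counts["very high"] > 0:
        return 5  # very high control environment
    if counts["high"] > 1:
        return 4  # high control
    if counts["high"] == 1 or counts["medium"] > 2:
        return 3  # moderate control
    if counts["medium"] > 0:
        return 2  # low control
    return 1      # no control

# Example: ratings recorded against a hypothetical project.
project_risks = ["low", "medium", "high", "medium", "low"]
counts = overall_assessment(project_risks)
print(dict(counts))           # {'medium': 2, 'high': 1}
print(control_level(counts))  # 3 (moderate control, under these assumptions)
```

Whatever the actual thresholds, the design point is the same: the Responsible Officer’s tally, not intuition, drives the control environment assigned to the project.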

Considerations for NSW Government departments and agencies

In every project that has an AI component, NSW Government departments and agencies must consider whether the AI Assurance Framework applies. This may not always be obvious at first, for example where the AI aspect is a minor component of the overall project. It is therefore important that project teams are alert to AI, its growing use, and its impact on their projects.

Departments and agencies, when deciding whether to embark on an AI project, should ask themselves the following questions:

  • Does the AI component add to the overall value of the project?
  • Could the AI component be replaced with a less risky, non-AI component?
  • What impact could misuse of the AI component have on the public?
  • Does my team have the appropriate experience and skills to implement this AI project?

While these questions will be addressed through the self-assessment process, consideration of these questions prior to embarking on an AI project can allow departments and agencies to determine the necessity of AI in their project and address potential risks and issues before significant time, money and resources have been put into the project.

Recommendations for suppliers of AI products to NSW Government

While the AI Assurance Framework only applies to NSW Government departments and agencies, its impacts will flow down to suppliers of AI solutions to government. Future NSW Government tender and RFP processes are highly likely to include questions about suppliers’ use of AI in the products or solutions to be provided to government. We expect these questions will seek to clarify the measures and protocols in place to ensure the safe and ethical use of AI, as well as the systems implemented to protect the data used in AI products.

Suppliers of AI solutions should be ready to respond by taking precautionary measures that prepare them to address the questions of NSW Government agencies and departments. Such measures could include:

  • implementing an internal company-wide AI strategy to guide AI decision making and address potential AI risks;
  • ensuring the company has adequate information security protections, such as those required under ISO 27001;
  • preparing AI specific information security and privacy policies and procedures aligned with widely recognised ethical principles;
  • ensuring the company has high quality, comprehensive information security systems to prevent unauthorised use of AI products and data;
  • conducting frequent reviews of the AI products developed by the company to address significant safety, bias or security issues; and
  • preparing a formalised ‘AI environment’ package that can be submitted as part of a tender / RFP process, based on the company’s policies, procedures and systems.