We’ve all read it before – the growth in new and emerging technologies is accelerating at an exponential pace and impacting all aspects of our lives. It is widely accepted that the law is often left playing catch-up. What we don’t often hear is how these new and emerging technologies pose human rights challenges and opportunities.
In recognition of these gaps, in December the Australian Human Rights Commission (the Commission) released its Human Rights and Technology Discussion Paper (the Paper) (for the highlights, see the Executive Summary Paper here), produced as part of a broader project being led by the Commission, the Human Rights and Technology Project (the Project).
The Project recognises that new technologies engage human rights in new and profound ways, requiring Australia and the international community to protect and promote human rights in this new environment. It asks: What are the human rights impacts of emerging technologies, algorithmic bias, artificial intelligence, big data and the fourth industrial revolution?
The Paper is an approximate halfway milestone in the Project and makes 29 proposals including:
- that the Australian Government should develop a national strategy on new and emerging technologies, with the aim of both promoting responsible innovation and protecting human rights; and
- that the measures implemented canvass both effective regulation (law, co-regulation and self-regulation) and ethical frameworks to give guidance on the development and use of new technologies.
The focus of the Paper is on two key areas:
- the growing use of artificial intelligence (AI) in decision-making; and
- the accessibility of new technologies for people with disability.
The principal focus of the Paper is on AI and the growing application of AI technologies to decision-making processes. The Commission is most concerned with decisions where AI has materially assisted in the making of the decision and where that decision has a legal or similarly significant effect for an individual (AI-informed decision making). For example, AI is increasingly being used by government and private organisations to make inferences, predictions, recommendations or decisions about individuals and groups in key areas such as housing, health, criminal justice and policing.
The Paper identifies that AI can operate in ways that exclude certain groups and refers to past examples where AI tools have discriminated on these bases – including one case where an AI recruitment tool ‘learned’ to prefer male employment candidates over women.
To protect against threats to human rights where AI is being employed in decision-making, the Commission proposes three principles to guide government and the private sector:
- AI should be used in ways that comply with human rights law.
- AI should be used in ways that minimise harm.
- AI should be accountable in how it is used.
The Paper also puts forward a number of key proposals concerning accountability in AI-informed decision making, including for:
- a new inquiry to consider whether reform is needed to protect the principle of legality and the rule of law and to promote human rights, such as equality or non-discrimination (Proposal 3);
- the introduction of five new legislative measures, including that individuals be given the right to be informed where AI is used in a decision that has a legal or similar effect on their rights (Proposal 5) and the creation of requirements around the explainability of these types of decisions, suggesting individuals should be entitled to demand both a non-technical and a technical explanation for these decisions (Proposal 7);
- greater use of co- and self-regulatory measures to protect human rights in these types of decisions, including the development of a human rights impact assessment tool (Proposal 14) and new rules of procurement to require that government procure AI-informed decision making systems with adequate human rights protections (Proposal 18); and
- the establishment of an AI Safety Commissioner as an independent statutory office (Proposal 19).
The Paper’s consideration of the accessibility of new and emerging technologies recognises that, when technology is inaccessible, it can exclude people with disability from equal participation in the community and prevent the enjoyment of human rights and fundamental freedoms. For example, a number of technologies are functionally inaccessible for people with different types of disability, which can impact a person’s ability to access certain types of employment or government services.
Several proposals put forward in the Paper seek to address this risk, including proposals that:
- all levels of government commit to using Digital Technology that complies with recognised accessibility standards and adopt an accessible procurement policy (Proposal 20);
- new standards be developed to cover the provision of accessible information, instructional and training materials to accompany consumer goods, such as TVs (Proposal 23); and
- the Council of Australian Governments lead a process to commit all levels of Australian government to adopting ‘human rights by design’ in the development and delivery of government services using Digital Technologies (Proposal 25).
Implications of the proposals
Whilst the proposals put forward in the Paper are not settled – the Final Report and implementation phase are planned for 2020 and 2021 – they do give us an idea of where the Commission will be heading in the development of its framework.
As they currently stand, the proposals could have implications for the way that governments run procurement processes, not only when acquiring AI decision-making systems but also in the procurement of technologies that will assist in the development and delivery of government services. Further, the recommendations could herald a new regulatory framework and law reform process for AI-informed decision making in both the private and public sectors. More broadly, certain suppliers of technology products or services may need to review how their technology facilitates accessibility.
The Paper invites submissions on the proposals and consultation questions to assist and contribute to the Commission’s work on developing a framework for responsible innovation in Australia.
Submissions must be received by Tuesday 10 March 2020, with the Final Report due to be released in 2020.
Authors: Lesley Sutton and Sophie Bogard