07/11/2022

Two weeks ago we reviewed the White House’s Framework for an AI Bill of Rights. This week we look at Canada’s proposed Artificial Intelligence and Data Act (Part 3 of the omnibus proposed privacy law C-27).

Canada’s proposed AI law (C-27 AI) is much less detailed than the White House AI Bill of Rights (which is more a ‘how to’ guide for drafting a law than a law itself) and the EU AI Act (which needs to ‘cover the field’ to the exclusion of Member States’ laws). C-27 AI is more in the nature of an authorising framework where most of the detail – and therefore the full rigour of the regulatory requirements – is left to regulations to be made by the Government in the future.

The legal definition of 'AI system'

C-27 AI defines an “artificial intelligence system” as “a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.”

The Canadian definition is broadly similar to the AI definitions in the White House AI Bill of Rights and the EU AI Act, although potentially more open-ended. The US definition helpfully says what is not AI:

“automated systems …exclude passive computing infrastructure. Passive computing infrastructure is any intermediary technology that does not influence or determine the outcome of decision, make or aid in decisions, inform policy implementation, or collect data or observations, including web hosting, domain registration, networking, caching, data storage, or cybersecurity.”

The EU AI Act takes a similar tack in seeking to exclude passive computing infrastructure, defining an AI system as software that can, “for a given set of human-defined objectives, generate content, predictions, recommendations, or decisions influencing the environments they interact with.”

Who is covered?

C-27 AI fixes most of its obligations on the ‘person responsible for an artificial intelligence system’. This is very broadly defined:

“a person is responsible for an artificial intelligence system….if they design, develop or make available for use the artificial intelligence system or manage its operation.”

This is likely to mean that there will be more than one person responsible for an AI up and down the supply chain – from the developer to the business which uses the system – and therefore a number of persons will simultaneously be subject to the AI obligations for the same AI.

This coverage is similar to the White House AI Bill of Rights and the EU AI Act.

The EU AI Act explicitly has extraterritorial reach, applying to AI providers and users in third countries where the system is used in the EU. It would appear that C-27 AI will also have extraterritorial reach.

C-27 AI does not apply to Canadian government AI systems, which are covered by a separate regulatory regime.

What activities are covered?

C-27 AI controls international and inter-provincial trade in artificial intelligence systems, which reflects the constitutional limits of the power of the Canadian Parliament in its federated system.

In those commerce streams, C-27 AI endeavours to capture the whole AI ecosystem, including data inputs. “Regulated activities” covered by the bill are:

  • processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system; and
  • designing, developing or making available for use an artificial intelligence system or managing its operations.

Interestingly, C-27 AI also would appear to impose obligations on a third party which provides anonymised data to an AI developer for the purposes of the AI developer building or ‘teaching’ the AI, even though the data provider has no other involvement with the AI.

Tiered obligations

Like the EU AI Act, C-27 AI takes a layered approach depending on a risk assessment of the AI. A baseline set of more limited obligations applies across the board to all AI, while the more extensive obligations are reserved for ‘high-impact’ applications.

C-27 AI leaves the definition of ‘high-impact’ AI systems to regulations. The Canadian regulations may well be influenced by the EU AI Act definition of ‘high risk’ systems. Under the EU AI Act, high-risk AI systems are generally defined as those that pose significant risks to the health and safety or fundamental rights of persons, together with a list of applications deemed high risk, including biometric identification systems (including facial recognition technology), credit scoring systems, AI for recruitment, and systems to assess eligibility for welfare.

By contrast with both C-27 AI and the EU AI Act, the White House AI Bill of Rights applies its primary obligations broadly across all AI. There are additional requirements for AI in ‘sensitive domains’, such as law enforcement, which apply more stringent measures, such as on data re-use.

Baseline obligations

C-27 AI creates its baseline obligations through a set of primary offences protecting citizens from errant AI and a universal record-keeping obligation on the use of data.
It is an offence to:

  • possess or use personal information for the purpose of creating an AI system if the personal information was not lawfully obtained;
  • knowingly (or with reckless disregard) use an AI system that is likely to cause serious physical or psychological harm to an individual or substantial damage to property, if such harm occurs. Harm is broadly defined as physical or psychological harm to an individual, damage to an individual’s property, or economic loss to an individual; or
  • make an AI system available for use with the intent to defraud the public and to cause substantial economic loss to an individual, if such loss occurs.

A person who “processes or makes available for use anonymized data” in the course of [a regulated] activity is required to establish measures (and keep records) with respect to the manner in which the data is anonymized and the use and management of the anonymized data.
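The bill does not prescribe how these anonymization and record-keeping measures must be implemented; that is left to regulations. Purely as a minimal sketch of the kind of practice a data provider might adopt, the following Python example anonymizes records and logs how and why it did so (the function names, the suppression/banding technique and the log format are all assumptions for illustration, not anything specified in C-27 AI):

```python
import hashlib
import json
from datetime import datetime, timezone

def anonymize_record(record: dict) -> dict:
    """Strip direct identifiers and generalise quasi-identifiers."""
    anonymized = {k: v for k, v in record.items() if k not in ("name", "email")}
    # Generalise exact age into a five-year band (a simple k-anonymity-style step).
    if "age" in anonymized:
        band = (anonymized["age"] // 5) * 5
        anonymized["age"] = f"{band}-{band + 4}"
    return anonymized

def log_anonymization(records: list[dict], purpose: str,
                      log_path: str = "anonymization_log.jsonl") -> None:
    """Record *how* the data was anonymized and *why* it is being used --
    the two things the record-keeping obligation appears to cover."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "technique": "suppression of direct identifiers; age banded to 5 years",
        "purpose": purpose,
        "record_count": len(records),
        "batch_digest": hashlib.sha256(
            json.dumps(records, sort_keys=True).encode()).hexdigest(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

raw = [{"name": "A. Singh", "email": "a@x.ca", "age": 37, "province": "ON"}]
anon = [anonymize_record(r) for r in raw]
log_anonymization(anon, purpose="training data for a recommendation model")
```

The digest of each batch gives the record-keeper a tamper-evident link between the log entry and the data actually handed over, without the log itself retaining personal information.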

Obligations for high impact systems

A person who is responsible for an artificial intelligence system must undertake a self-assessment of whether the system is high-impact. The regulations can prescribe how the assessment is to be undertaken.

The primary legislative obligation if the AI is assessed to be high-impact is that the person responsible for the AI must establish measures to identify, assess and mitigate the risks of “harm” or “biased output” that could result from the use of the system.

Biased output is defined as “content that is generated, or a decision, recommendation or prediction that is made, by an artificial intelligence system and that adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination … in the Canadian Human Rights Act”, which prohibits discrimination based on race, national or ethnic origin, colour, religion, age, sex, sexual orientation, gender identity or expression, marital status, family status, genetic characteristics, pardoned status or disability, and pregnancy and childbirth.

However, biased output expressly does not include ‘affirmative action’ built into an AI, whether to prevent discrimination or to correct it.
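C-27 AI leaves how ‘adverse differentiation’ is to be detected to the responsible person’s own risk-assessment measures. Purely as an illustrative sketch, a deployer might screen its decision logs for group-level disparities as below; the four-fifths threshold is borrowed from US employment-testing practice and is an assumption here, not anything prescribed by the bill:

```python
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Favourable-outcome rate per group, where outcomes are
    (protected_group, favourable_decision) pairs."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        favourable[group] += decision
    return {g: favourable[g] / totals[g] for g in totals}

def flags_adverse_differentiation(outcomes: list[tuple[str, bool]],
                                  threshold: float = 0.8) -> bool:
    """Flag if any group's favourable rate falls below `threshold` times the
    best group's rate (the 'four-fifths' rule of thumb -- an assumption,
    not a test prescribed by C-27 AI)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return any(rate < threshold * best for rate in rates.values())

decisions = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 50 + [("group_b", False)] * 50
)
print(selection_rates(decisions))                # {'group_a': 0.8, 'group_b': 0.5}
print(flags_adverse_differentiation(decisions))  # True: 0.5 < 0.8 * 0.8
```

Whether a flagged disparity is ‘without justification’ within the statutory definition would remain a legal question; a statistical screen like this can only surface candidates for that analysis.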

The White House AI Bill of Rights has a much broader ambit in its primary protections. Not only are designers, developers, and deployers of automated systems required to take proactive and continuous measures to protect individuals and communities from algorithmic discrimination, but they should also use and design AI in an equitable way, which is defined to mean “the consistent and systematic fair, just, and impartial treatment of all individuals.”

C-27 AI requires a person responsible for a high-impact AI system to put in place measures to monitor compliance with the mitigation measures developed to comply with the primary ‘no harm/bias’ obligation. This includes monitoring the effectiveness of those mitigation measures and revising them if they prove ineffective.
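Again purely as a sketch, that ongoing monitoring obligation could be wired up as a scheduled job that re-runs the screening check from the previous example and escalates when a mitigation stops working (`get_recent_outcomes` and `remediate` are hypothetical hooks into a deployer’s own systems, not anything defined by the bill):

```python
def monitor_mitigation(get_recent_outcomes, remediate,
                       threshold: float = 0.8) -> bool:
    """Run on a schedule (e.g. nightly): re-test recent decisions and
    trigger remediation if mitigation measures are no longer effective.
    Reuses flags_adverse_differentiation() from the sketch above."""
    outcomes = get_recent_outcomes()  # e.g. the last 30 days of decision logs
    if flags_adverse_differentiation(outcomes, threshold):
        remediate(reason="mitigation measures no longer effective")
        return False  # record the failure for the compliance file
    return True
```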

Finally, there are transparency obligations around high-impact AI. The person “who makes available for use” a high-impact AI (i.e. the developer or distributor) has to publish on a website a plain-language description of the system that includes an explanation of the following (a minimal sketch of such a description appears after the list):

  • how the system is intended to be used;
  • the types of content that it is intended to generate and the decisions, recommendations or predictions that it is intended to make; and
  • the mitigation measures to address harm/bias.
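C-27 AI leaves the form of the disclosure to regulations. A plain-language description covering the three required elements might nonetheless be maintained as structured data so it can be rendered to a web page consistently; in the following sketch the system and every field name are hypothetical:

```python
import json

# Hypothetical plain-language disclosure covering the three required elements.
system_description = {
    "system_name": "LoanAssist (hypothetical)",
    "intended_use": "Helps loan officers rank consumer credit applications.",
    "outputs": "A risk score from 0-100 and an approve/refer recommendation; "
               "the system does not generate free-form content.",
    "harm_bias_mitigations": [
        "Training data audited annually for representation gaps.",
        "Quarterly disparate-impact testing across protected grounds.",
        "Human review required before any refusal is finalised.",
    ],
}
print(json.dumps(system_description, indent=2))
```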

There is a separate obligation on the person “who manages the operation of a high-impact system” (e.g. the business which uses the AI) to also publish a plain-language description of how the AI works in practice, such as the recommendations and decisions it produces in the business setting in which it is used.

As under the EU AI Act, a person who is responsible for a high-impact system must, as soon as feasible, notify the Minister if the use of the system results, or is likely to result, in material harm.

There are obligations to maintain records on compliance with the regulatory obligations applying to high-impact systems.

Remedies

Probably the most striking aspect of C-27 AI is the extensive remedies.

The Minister can call for records about the assessment of whether a system is high-impact and on the mitigation measures which were developed.

If the Minister has reasonable grounds to believe that a person has contravened the C-27 AI obligations (e.g. not appropriately determining whether AI is high-impact), the Minister may require an audit to be undertaken by an independent person, at the cost of the person subject to the audit.

Based on the audit outcome, the Minister may then direct that measures be undertaken to ‘fix’ any problems identified in the audit.

The Minister has an even broader power, without the need for an audit, “to require that any person who is responsible for a high-impact system cease using it or making it available for use if the Minister has reasonable grounds to believe that the use of the system gives rise to a serious risk of imminent harm.”

The Minister also may publish any information, other than confidential business information, about an AI if the Minister has reasonable grounds to believe that the use of the system gives rise to a serious risk of imminent harm and that publication of the information is essential to prevent the harm.

There are substantial fines for breach: up to the greater of Cdn$25 million and 5% of gross global revenue. A breach arises from a knowing contravention, or from being ‘reckless’ as to whether the use of an artificial intelligence system is likely to cause serious physical or psychological harm to an individual or substantial damage to an individual’s property.

Future Impact of C-27 AI

C-27 AI cleaves more closely to the approach in the EU’s AI Act than to that of the White House AI Bill of Rights, but this ‘troika’ of AI rights laws is likely to guide legislation in other countries.

Read more: Government Bill | House of Commons of Canada

""