The Australian Government continues to consult on how to appropriately regulate AI, including whether to do so on a standalone basis, as with the consultation on proposed mandatory AI guardrails for high-risk AI use, or by amending existing laws, as with the Treasury’s consultation on whether the Australian Consumer Law should be amended to take account of AI. Meanwhile, other regulators are reviewing the existing regulatory tools available to them and reassessing the effectiveness of those tools in addressing the potential risks posed by AI.

The Privacy Commissioner recently adopted this approach by publishing two sets of non-binding guidance setting out how the Australian Privacy Principles (APPs) apply in the context of the development and use of AI. These publications are designed to provide practical guidance for APP entities on complying with (and on the Commissioner’s interpretation of) existing privacy obligations, both when developing AI models and when using commercially available AI-enabled tools, while also suggesting several matters of best privacy practice. The guidance is instructive because the Privacy Act is principles-based and, in line with the bulk of Australia’s regulatory framework, technology neutral. The Act imposes obligations on APP entities regarding personal information irrespective of the manner or technology by which that information is processed - that is, whether it is processed manually, with more traditional technologies, or through the training, testing and use of AI.

The first guidance paper is directed at ‘developers’ of generative AI (GenAI) models (Developer Guidance). The term ‘developer’ is used broadly, capturing not only those APP entities that design and build GenAI models (such as GPT-4), but also those that train, adapt (including by fine-tuning) or combine GenAI models, including by integrating them into other AI systems. While the guidance focuses on GenAI models, it is also intended to apply more broadly to any AI model that handles personal information.

The second guidance paper is directed at assisting APP entities to comply with their privacy obligations when using commercially available AI products, including GenAI models and general-purpose AI tools (AI Product Guidance).

Key aspects of the guidance papers are set out below:

1.     Privacy by design and transparency

Both guidance papers propose that APP entities implement a ‘privacy by design’ approach as part of their practices and procedures to ensure they comply with the APPs when handling personal information in the context of AI models and products, including considering the potential privacy risks upfront and across the entire lifecycle of the model or product.

The Developer Guidance recommends that, as a matter of best privacy practice, GenAI developers consider potential privacy risks both at the planning and design stage and at the subsequent development, testing and tuning stages, embedding good privacy practices in the design specifications of their technologies, business practices and physical infrastructure to mitigate those risks. GenAI developers should also provide the necessary information to downstream users to enable them to assess privacy risks.

The AI Product Guidance suggests, as a matter of best practice, that APP entities do not input personal information, particularly sensitive information, into publicly available GenAI tools, due to the significant and complex privacy risks involved.

Where personal information will be used with AI models and products, both guidance papers suggest that APP entities conduct privacy impact assessments and, as a matter of good practice, update their privacy policies and notifications with clear and transparent information about their use of AI generally. Notably, under the Privacy and Other Legislation Amendment Act 2024 (Cth), which received Royal Assent on 10 December 2024, privacy policies will need to be expressly transparent about the use of personal information for substantially automated decision-making that has a legal or otherwise similarly significant effect. Beyond this, the current requirement of APP 1 is to provide information about the management of personal information, including how an entity collects information and the purposes for which it collects and uses that information. This does not necessarily require transparency as to the method by which an APP entity handles the information it uses and discloses, such as using AI, rather than a manual process, to provide a service (although transparency may be required where personal information is collected via an AI system or is used for specific training purposes - see further below). Notwithstanding this, the guidance papers suggest that APP entities should update their policies to address their use of AI more generally.

2.     Collection

In line with obligations under APP 3, APP entities should consider whether the collection of personal information is necessary for their intended activities.

In the context of developers, the Developer Guidance highlights that whatever method is used to collect datasets to train and fine-tune models (for example, data scraping, sourcing from third parties or using their own data), developers must remain mindful of the presence of personal information in the datasets they compile, and of whether its collection complies with privacy laws and the data is otherwise legally and contractually usable.

The Developer Guidance highlights that GenAI model developers should consider mechanisms to filter out unnecessary personal information and limit privacy risks: for example, whether the model’s outcome can be achieved without collecting and using personal information, with a reduced set of personal information, or by de-identifying personal information. (Notably, the guidance does not clarify whether incidental personal information that appears in training data but is not ‘read’ by the AI model amounts to a use of personal information.)
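By way of illustration only, and not drawn from the guidance itself, a developer might apply a simple pre-processing filter that redacts common identifiers from text before it enters a training dataset. The patterns and helper names below are hypothetical, and a production de-identification pipeline would rely on far more robust techniques (such as named-entity recognition and statistical disclosure assessment); this is a minimal sketch of the kind of filtering mechanism the guidance contemplates:

```python
import re

# Hypothetical, simplified patterns for common identifiers in free text.
# A real de-identification pipeline would use more robust techniques,
# e.g. named-entity recognition and statistical disclosure assessment.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"(?:\+61|\b0)[23478](?:[ -]?\d){8}\b"),  # AU-style numbers
    "id_number": re.compile(r"\b\d{3}[ -]\d{3}[ -]\d{3}\b"),      # e.g. 123 456 789
}

def redact_pii(text: str) -> str:
    """Replace each pattern match with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

def filter_training_records(records: list[str]) -> list[str]:
    """Redact identifiers from every record before it enters a training set."""
    return [redact_pii(record) for record in records]

if __name__ == "__main__":
    sample = ["Contact Jane on 0412 345 678 or jane@example.com for details."]
    print(filter_training_records(sample))
    # ['Contact Jane on [PHONE REDACTED] or [EMAIL REDACTED] for details.']
    # Note: the name 'Jane' is untouched - regex filters alone miss many
    # identifiers, which is why they would not, on their own, meet the
    # de-identification threshold under the Privacy Act.
```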

In addition, the general requirement for consent to collect sensitive information means developers must take particular care to establish that they have the necessary rights to collect any sensitive information in their datasets. Sensitive information includes biometric information to be used for biometric verification or identification, biometric templates, health information about an individual, genetic information and information on certain topics such as racial or ethnic origin or sexual orientation. Such consent may be difficult to obtain if the information is scraped from the web or collected via a third party. Further, the general requirement to ensure that personal information is collected by lawful and fair means may, depending on the circumstances and taking into account the notification obligations under APP 5, rule out the creation of a dataset through web scraping.

Further, the AI Product Guidance provides that where personal information is collected through public-facing AI tools (such as chatbots), APP entities should consider making the use of such tools clearly transparent to external users.

The guidance papers propose that where AI models and products are used to generate personal information, this will be a ‘collection’ of personal information for the purposes of the APPs and must therefore comply with APP 3 (as opposed to APP 6, which governs use and disclosure). Further, the guidance papers propose that where an artificially generated output appears to be about an identified or reasonably identifiable individual, it constitutes personal information, even if the information is fake or incorrect, as with hallucinations and deepfakes. That is, the guidance papers suggest that the appearance of being personal information is sufficient to trigger the application of the Privacy Act.

3.     Use and disclosure

Both guidance papers reiterate an APP entity’s obligation under APP 6 to use and disclose personal information only for the primary purpose for which it is collected, unless an exception applies. Common exceptions include where:

  • the APP entity has obtained consent for a secondary purpose; or

  • a secondary purpose is (i) within the reasonable expectations of the individual (taking into account the individual’s expectations at the time of original collection); and (ii) related to the primary purpose for which the information was collected (or directly related, in the case of sensitive information).

In complying with this obligation, APP entities must consider whether the use of personal information in connection with an AI model or system could feasibly have been among the purposes anticipated at the time the information was collected from the individual. APP entities must take care when handling personal information they already hold (or when receiving personal information from a third party) that was collected for a specific primary purpose, to ensure it is not repurposed for an unrelated or unexpected secondary use, such as training an AI model.

The guidance papers highlight that a secondary use may be within an individual’s reasonable expectations if it was expressly outlined in a notice at the time of collection (in accordance with APP 5) and in the APP entity’s privacy policy. Additionally, the guidance acknowledges that an individual’s reasonable expectations regarding secondary purposes may change over time. However, subsequently updating a privacy policy or providing an updated notice describing a secondary purpose may not be sufficient to change reasonable expectations regarding the use or disclosure of personal information that was previously collected for a different purpose. Even if a secondary purpose is established to be reasonably expected, it must still be related to (that is, closely associated with) the primary purpose. This can be challenging where the secondary use is purely to train a GenAI model for commercialisation outside the original service for which the information was collected (rather than, for instance, to enhance that service).

Accordingly, where a secondary purpose cannot be established, the guidance papers suggest APP entities should seek adequate consent for the secondary purpose and offer a meaningful and informed opt-out mechanism. The Developer Guidance cautions that relying on broad consents to handle information in accordance with the privacy policy may not be adequate consent for a secondary purpose of training GenAI if it is not sufficiently voluntary, current and specific. It also advises developers to clearly communicate relevant information about the GenAI model, so individuals can meaningfully understand how their personal information will be handled to enable them to provide informed consent.

The AI Product Guidance cautions APP entities to review the terms of commercially available AI products carefully, to ensure those terms are consistent with the rights and consents the APP entity has to use and disclose personal information (for example, terms allowing the product owner to use personal information inputs to further train and develop its own technologies may go beyond those rights).

4.     Accuracy and quality assurance

APP 10 requires APP entities to take reasonable steps to ensure that the personal information they collect, use and disclose is accurate, up-to-date, complete and relevant. Both guidance papers highlight that, in the context of AI, this obligation applies not only to the inputs into AI models and products (both for training and use), but also to their outputs where those outputs are personal information.

Where training and input data for AI models and products is not accurate, up-to-date or complete, it can lead to errors in the model’s logic and to deficient AI-produced outputs. The implications can be material depending on the context in which the AI solution is used, for example in automated decision-making, where the outputs could significantly affect the rights or interests of an individual.

What then constitutes reasonable steps to ensure accuracy and completeness will depend on the specific circumstances, including the intended purpose of the AI solution, the sensitivity of the personal information and the types of outputs and how they will be used.

The guidance papers indicate that taking reasonable steps to ensure the accuracy of AI outputs may include:

  • Developers using high-quality and representative training datasets and undertaking appropriate training and fine-tuning to bring the model to the required accuracy.

  • Communicating to downstream deployers and users, via disclaimers or other mechanisms, any limitations that may affect the accuracy of the AI output, for example:

    • watermarking AI-generated outputs

    • providing information on training datasets (such as whether offshore or local datasets, or datasets for specific groups, were used, or disclaiming that the training data only includes information up to a certain date)

    • highlighting where AI models and products may require additional safeguards for high-risk uses, such as automated decision-making.

Key takeaways

APP entities need to be aware of how the different ways of handling personal information with AI models and products may create privacy risks and affect their obligations under the Privacy Act. This includes considering upfront the suitability of an AI model or commercially available AI product for the purposes for which they wish to use it, including any personal information handling associated with that use and the associated privacy risks, and evaluating whether the use is appropriate and compliant. If personal information is used to develop AI models or with commercially available AI products, APP entities will need to consider what processes and mitigants should be put in place to ensure compliance with their Privacy Act obligations, together with regular reviews of the performance of the model and/or product over its lifecycle to ensure continued compliance.

The guidance papers are a useful tool in assisting APP entities to understand how their existing obligations under the Privacy Act apply in the context of AI or, at the very least, how the Privacy Commissioner will interpret those obligations in the context of developing AI models and using AI products. Regulatory guidance supports Australia’s broader approach to regulating AI, which is largely technology-neutral, allowing existing laws to be applied flexibly to emerging technologies and risks. In parallel, the Federal Government continues to consider gaps that may need to be addressed for new and unique risks that existing legislation may not cover, including any required amendments to existing regimes or the introduction of new standalone regimes (such as the potential mandatory guardrails) to ensure that Australia’s laws remain fit for purpose for the responsible development and deployment of AI.