Business Insight

Australia: New AI safety "guardrails" and a targeted approach to high-risk settings

    The Australian Government aims to ensure use of artificial intelligence (AI) systems in high-risk settings is safe and reliable, while use in low-risk settings can continue largely unimpeded.

    In September 2024, Australia released a Voluntary AI Safety Standard and consulted on new AI laws in the form of a proposal paper on introducing mandatory guardrails for AI in high-risk settings. New laws are not expected to be passed this year. However, following the proposal paper, it is expected that the Government will prepare a response and take steps to implement the mandatory guardrails.

    In specific industry sectors, Australian regulators are currently using enforceable industry codes and standards tailored to deal with specific AI risks, and recent data privacy reforms bring automated decision-making and other requirements that will impact AI.

    New Voluntary AI Safety Standard

    The Voluntary AI Safety Standard includes 10 "guardrails" with specific requirements around accountability and governance measures, risk management, security and data governance, testing, human oversight, user transparency, contestability, supply chain transparency, and record keeping. An additional guardrail requires broad stakeholder engagement and assessment.

    The initial Voluntary AI Safety Standard focuses more on organisations deploying AI – the next version will include additional and more complex guidance for AI developers. Recognising that Australian businesses tend to rely on third party AI systems, the standard includes specific procurement guidance, including recommendations about which of the guardrails should be reflected in contractual provisions agreed between the AI developer and the AI deployer.

    While the standard is voluntary, it sets expectations for what may be included in future legislation and contains guardrails that are closely aligned to the proposed mandatory guardrails for high-risk use cases – which means that implementing the voluntary standard early will help organisations adapt to coming mandatory requirements.

    Proposed mandatory regime for high-risk and general purpose AI (GPAI)

    In September 2024, Australia proposed for consultation:

    • 10 mandatory guardrails – closely aligned with those in the voluntary standard, except that the mandatory regime would require conformity assessments (ie audit/assurance and public certification), while the voluntary guardrails require broad stakeholder engagement instead.
    • … for high-risk AI – using a principles-based assessment of the intended and foreseeable uses of a system, rather than a list of use cases (although input has been sought on listing use cases, similar to the EU AI Act or Canada’s Artificial Intelligence and Data Act (AIDA)). Factors to consider in the assessment include human rights, health and safety, legal or similar effects, impacts to groups or collective rights, and broader impacts to the Australian economy, society, environment and rule of law. National security and defence applications are expected to be treated differently.
    • … and all general purpose AI – AI models that are capable of being used for a variety of purposes (or capable of being adapted for a variety of purposes) including by integration would be subject to the 10 mandatory guardrails.

    The Government is considering how mandatory guardrails for high-risk AI should be legislated (as new economy-wide legislation like the EU AI Act or Canada's AIDA, as "framework" legislation that could be implemented through other laws, or by directly amending existing laws). Regulator powers, enforcement mechanisms and penalty regimes will depend on which approach is adopted.

    AI in government

    Automated decision-making and the use of AI within government have been a focus in Australia after the Royal Commission into the Robodebt Scheme recommended wide-ranging reforms.

    The Australian Government released a national framework for the assurance of AI in government in June 2024, and has specifically committed to the Australian Government being an ‘exemplar’ for the safe and responsible adoption of AI. This commitment is set out in the Government’s policy for the responsible use of AI in Government.

    Regulators are intervening

    There is an increasing trend in Australia to address specific societal concerns with enforceable industry codes and standards. Australia's eSafety Commissioner has already used powers to register mandatory industry codes and standards under Australia's Online Safety Act to address the risk that generative AI might be used to produce child sexual exploitation or pro-terror materials.

    Under the Designated Internet Services Industry Standard, websites or apps that use generative AI must implement controls or processes either to reduce the risk of generating such material or to detect, remove and deter it, and must continuously improve safety. Further obligations apply to distributors or marketplaces of generative AI. Similarly, last year's Search Engine Services Code requires ongoing improvement of machine learning algorithms and models to limit exposure to similar materials in search results.

    A busy reform agenda

    AI issues will be an important part of Australia's ongoing law reform agenda and regulatory priorities. The Australian Government has flagged a number of areas of law that will be reviewed in parallel to consider the impact of AI developments, for example, health-specific laws, consumer laws, copyright law, automated decision-making frameworks for government, privacy reforms, and the issue of statements of expectations for future regulation. Work is already under way, with a consultation in October 2024 looking at how Australian consumer law handles (or should handle) current and emerging AI-enabled services, diving into issues such as consumer remedies and the distribution of liability among manufacturers and suppliers. However, it is not guaranteed that AI reforms will be passed before the next federal election is called. The timing of Australia's next federal election is flexible within the parliamentary term, with the latest possible date being 17 May 2025.

    • Online safety: We expect further specific regulation of AI risk in online safety codes currently being developed for content that is not appropriate for children. Further online safety reforms are on the agenda, with the Government considering the October 2024 report of an independent statutory review of the Online Safety Act 2021, including additional protections to address harmful online material.
    • More codes and standards: Similar mandatory industry code and standard frameworks are expected to be introduced as part of coming privacy reforms, anti-scam regulation, and misinformation and disinformation laws. These frameworks provide simpler avenues to regulate specific AI risks, but reactive regulation can introduce uncertainty for businesses that don't pro-actively monitor and address areas of emerging social concern.
    • Data protection and privacy reforms impact AI: Australia passed the first of two tranches of privacy reforms in November 2024, part of a generational change in privacy regulation in Australia. Relevant to AI, the reforms include new transparency requirements for substantially automated decisions and a mandatory industry code framework allowing more targeted regulator interventions, with a Children's Online Privacy Code the first cab off the rank. Other reforms include a statutory tort allowing individuals to take legal action for a serious interference with privacy (whether or not privacy laws are breached). A second tranche of legislation is expected to bring the bulk of reforms – including privacy impact assessments for high-risk activities, a requirement that data activities be "fair and reasonable" (regardless of consent), and changes to protect more data as "personal information" (impacting the training and use of AI models). The Office of the Australian Information Commissioner is already clarifying its regulatory expectations – including by issuing new guidance for companies using commercially available AI products, as well as guidance on developing and training generative AI models, in October 2024.

    Australian privacy laws can impact overseas data and AI

    Despite a slow shift to adopt GDPR concepts, Australia's privacy laws are both unique and in a state of flux – businesses operating directly or indirectly in Australia, or interacting with Australian data, need to understand which business operations are covered by Australian privacy laws, map Australian obligations into their compliance frameworks, and take steps now to adapt to ongoing reforms.

    This publication is a joint publication from Ashurst Australia and Ashurst Risk Advisory Pty Ltd, which are part of the Ashurst Group.

    The Ashurst Group comprises Ashurst LLP, Ashurst Australia and their respective affiliates (including independent local partnerships, companies or other entities) which are authorised to use the name "Ashurst" or describe themselves as being affiliated with Ashurst. Some members of the Ashurst Group are limited liability entities.

    The services provided by Ashurst Risk Advisory Pty Ltd do not constitute legal services or legal advice, and are not provided by Australian legal practitioners in that capacity. The laws and regulations which govern the provision of legal services in the relevant jurisdiction do not apply to the provision of non-legal services.

    For more information about the Ashurst Group, which Ashurst Group entity operates in a particular country and the services offered, please visit www.ashurst.com

    This material is current as at 12 September 2024 but does not take into account any developments to the law after that date. It is not intended to be a comprehensive review of all developments in the law and in practice, or to cover all aspects of those referred to, and does not constitute legal advice. The information provided is general in nature, and does not take into account and is not intended to apply to any specific issues or circumstances. Readers should take independent legal advice. No part of this publication may be reproduced by any process without prior written permission from Ashurst. While we use reasonable skill and care in the preparation of this material, we accept no liability for use of and reliance upon it by any person.