In case you’ve been living under a rock, exposed only to “geological intelligence” (we must admit, ChatGPT came up with that one!), Artificial Intelligence (AI) is a term used broadly to describe technology that enables computers to perform tasks humans would usually perform. AI is widely used to streamline business operations and enhance efficiency in an increasingly competitive market.
Recent years have seen exponential growth in AI, placing increasing pressure on the Australian legislature and regulators to keep pace with their international counterparts and provide a suitable framework within which AI can operate. As is well known, however, the law struggles to keep up with emerging technology.
In 2019, the Australian Government introduced the ‘AI Ethics Principles’ to guide businesses and governments in responsibly designing, developing and implementing AI. The Ethics Principles are derived from the OECD principles and are entirely voluntary, so they do not create legal obligations for businesses. The Office of the Australian Information Commissioner also publishes ‘good governance’ guidance that businesses can consult where needed. However, there is currently no legislation in Australia that deals specifically with AI. By default, AI is regulated by existing laws such as copyright, privacy, consumer protection and anti-discrimination legislation, which we explore in this article.
Amendments to the Privacy Act 1988 (Cth) (Privacy Act) are on the horizon that aim to provide greater transparency and accountability over the use of personal information (PI). PI is defined under the Privacy Act as information or an opinion about an identified individual, or an individual who is reasonably identifiable, whether the information or opinion is true or not and whether it is recorded in a material form or not. The Australian Privacy Principles (APPs) under the Privacy Act govern how PI is handled by APP entities, including the collection, storage and destruction of data.
With the rise of AI, there is a heightened and very real risk of privacy breaches. For example, AI can link and match individual data that may initially have been considered ‘de-identified’, blurring the once straightforward definition of PI under the Privacy Act. As a general rule, businesses should be cautious about placing PI into AI systems, given the requirement in APP 6 to use and disclose information only for the particular purpose for which it was collected, unless the individual’s consent to the further use is obtained.
On 29 November 2024, the Privacy and Other Legislation Amendment Bill 2024 (Cth) (Bill) was passed by both Houses of Parliament. Once the Bill receives royal assent, it will form part of a suite of reforms to the Privacy Act. Included in the long list of reforms is a requirement for APP entities to update their privacy policies to disclose how PI is used in computer programs and automated decision-making systems, and the decisions made as a result. It is important to note that this new requirement will also apply retrospectively.
In addition to privacy risks, employers need to guard against AI systems producing discriminatory outcomes on the basis of race, sex, age or disability when hiring, promoting and terminating employees. AI systems used by businesses should be trained on unbiased data to reduce the risk of unfair outcomes.
Intellectual property and confidentiality clauses in employment contracts may also need to be revisited to capture work generated with AI in the course of employment. At present, the Copyright Act 1968 (Cth) recognises only human authors as owners of copyright. Appropriate clauses in employment contracts may reduce the risk of disputes over these rights.
The European Union’s Artificial Intelligence Act (AI Act) provides a comprehensive and detailed framework for the regulation of AI and has undoubtedly set a high benchmark globally. The AI Act regulates AI systems according to their level of risk, ranging from minimal and limited risk through to high and unacceptable risk. These regulations are likely to influence Australia in much the same way the EU General Data Protection Regulation (GDPR) influenced privacy laws. For businesses with a global presence, becoming familiar with and adopting practices to meet these regulations would be prudent.
Businesses should consider the following non-exhaustive list of practical tips:
- review and update privacy policies to disclose how PI is used in computer programs and automated decision-making systems;
- be cautious about entering PI into AI systems, and only use PI for the purpose for which it was collected, or with the individual’s consent;
- monitor AI tools used in hiring, promotion and termination decisions for discriminatory outcomes, and ensure they are trained on unbiased data;
- revisit intellectual property and confidentiality clauses in employment contracts to capture work generated with AI; and
- for businesses with a global presence, become familiar with the EU AI Act and align practices with it where appropriate.
By taking these proactive steps, businesses can keep ahead of impending law reform and ensure best practice prevails. Effective regulation of AI is key to building trust in business and supporting long-term ethical practice, sustainability and innovation. Australia is likely to follow the EU’s lead in the foreseeable future, so watch this space.
This article was co-written by Emily Partridge, Lawyer.
This article is not legal advice. It is intended to provide commentary and general information only. Access to this article does not entitle you to rely on it as legal advice. You should obtain formal legal advice specific to your own situation. Please contact us if you require advice on matters covered by this article.