Artificial Intelligence

  1. Regulation
    1. Australia
  2. AI as a Regulatory Tool
    1. Case study: Algorithmic moderation as content regulation

This chapter explores the interaction between artificial intelligence (AI) and the internet, with a focus on the current growth in AI, the regulation of AI and associated challenges, and how AI can be used as a regulatory tool.

Artificial intelligence is technology that performs tasks by simulating human behaviour, creativity and intelligence. Examples of artificial intelligence include the following:


| Type of Artificial Intelligence | Function | Examples |
|---|---|---|
| Generative AI | Generates content through analysing existing data and patterns to create complex material | ChatGPT and Microsoft Copilot |
| Machine Learning | Improves accuracy of decision-making processes by analysing datasets, requiring human input | Facial recognition in smartphone technology |
| Deep Learning | Advanced form of machine learning, utilising neural networks when analysing data to make independent decisions in the absence of human input | Chatbots and virtual assistants |

Regulation

Due to the rapid growth and development of AI, regulatory mechanisms have struggled to keep pace. However, the implementation of AI regulations has risen sharply in recent years.1

Australia

Australian lawmakers have started to regulate the use of AI. The Online Safety (Basic Online Safety Expectations) Determination 2022,2 made under section 45 of the Online Safety Act 2021,3 is an attempt to ensure the safety of internet users and reduce the risk of misuse of AI online. The Determination requires internet service providers to take reasonable steps to ensure that safety is at the forefront of the design, implementation and maintenance of the AI they deliver to consumers. Reasonable steps include undertaking safety assessments, providing educational tools for users, monitoring the data used as training material for AI systems, and implementing ways of detecting harmful content. The Determination also requires providers to be proactive in reducing the risk of their AI being used to create harmful content.4

Other Australian state and national legislation, such as the Privacy Act,5 the Privacy and Personal Information Protection Act 19986 (NSW) and the Privacy Legislation Amendment (Enforcement and Other Measures) Act 2022,7 can assist in regulating certain elements of AI use on the internet, but there is not yet any legislation that directly regulates the use of AI online. The Australian government is, however, currently engaged in consultation and other policy activity around the safe use of AI in Australia. Notably:

  • June 2023: a discussion paper on the safe use of AI was released for public comment.8 It addresses the opportunities and challenges of AI, and strategies for managing the risks AI poses.

  • September 2023: the AI in Government Taskforce was established.

  • January 2024: the government released commentary acknowledging the challenges presented by AI and outlining the regulatory mechanisms that may be required to govern its use.9

  • February 2024: the Artificial Intelligence Expert Group was established.

  • June 2024: “The National framework for the assurance of artificial intelligence in government” was released, addressing the government’s own use of AI.

AI as a Regulatory Tool

AI may itself be used to assist in regulating the internet. Examples include:

  1. Fraud detection. For example, banks use AI in real time to assess patterns of behaviour and determine whether fraudulent activity is taking place.10

  2. Spam filtering. For example, AI systems use learned algorithms to analyse massive amounts of data, identifying characteristics, patterns and anomalies that may indicate spam.11

  3. Behavioural patterns. For example, PayPal uses AI to monitor the behavioural patterns of its users to identify potentially fraudulent behaviour. If a change in spending patterns occurs, such as a large or out-of-character transaction, the transaction can be frozen pending authorisation.12

  4. Content regulation. Platforms use AI to detect and act on content that violates their policies, as explored in the case study below.
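
The fraud-detection approach in examples 1 and 3 can be sketched as a simple statistical anomaly check. This is a toy illustration only — the function, threshold and figures below are invented, and real banking systems use far richer behavioural models:

```python
from statistics import mean, stdev

def is_suspicious(history, amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the
    user's spending history (a simple z-score anomaly check)."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# A user who normally spends around $40-60 makes a $5,000 purchase.
history = [42.50, 55.00, 48.75, 60.20, 39.99]
print(is_suspicious(history, 5000.00))  # True: out-of-character transaction
print(is_suspicious(history, 52.00))    # False: typical transaction
```

A production system would combine many signals (location, merchant, device, timing) rather than amount alone, but the core idea — flagging deviation from an established pattern of behaviour — is the same.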

Case study: Algorithmic moderation as content regulation

“Algorithmic moderation” refers to the use of automated systems, typically powered by machine learning and artificial intelligence, to monitor, evaluate and manage online content. These systems are designed to detect and take action against content that violates platform policies, such as hate speech, misinformation or explicit material. Unlike human moderators, algorithmic moderation can process vast amounts of content in real time, making it an essential tool for large-scale platforms such as social media networks.
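
A minimal sketch of such a pipeline, assuming a common two-tier design in which high-confidence violations are actioned automatically and borderline cases are escalated to human review. The thresholds, keyword list and scoring function here are invented stand-ins for a trained classifier, not any platform's actual system:

```python
REMOVE_THRESHOLD = 0.9   # auto-remove above this score
REVIEW_THRESHOLD = 0.5   # escalate to a human above this score

BLOCKLIST = {"scam", "hate"}  # invented keywords for illustration

def score(text):
    """Stand-in for a trained classifier: fraction of blocklisted words."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in BLOCKLIST for w in words) / len(words)

def moderate(text):
    """Route content to an action based on the classifier's confidence."""
    s = score(text)
    if s >= REMOVE_THRESHOLD:
        return "remove"
    if s >= REVIEW_THRESHOLD:
        return "human_review"
    return "allow"

print(moderate("hate scam"))             # remove
print(moderate("scam alert"))            # human_review
print(moderate("lovely weather today"))  # allow
```

The design choice worth noting is the middle band: rather than forcing every decision to be fully automatic, ambiguous content is routed to humans, trading speed for accuracy exactly where the classifier is least reliable.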

The concerns surrounding algorithmic moderation stem from its potential for errors and biases, which can result in the wrongful removal of legitimate content or the failure to detect harmful material. The implications of these errors are amplified by the vast reach of the internet, where decisions made by algorithms can impact millions of users in real time.
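
The amplification effect can be made concrete with back-of-the-envelope arithmetic; all figures below are invented for illustration:

```python
# At platform scale, even a small error rate yields large absolute numbers.
posts_per_day = 500_000_000   # hypothetical daily post volume
flagged_fraction = 0.02       # share of posts the classifier acts on
false_positive_rate = 0.01    # 1% of actioned posts are actually legitimate

wrongly_removed = posts_per_day * flagged_fraction * false_positive_rate
print(int(wrongly_removed))  # 100000 legitimate posts removed per day
```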

While algorithmic moderation may be effective at removing illegal content, it has consistently struggled with harmful content. As with human moderation, it can be difficult to differentiate between what is “harmful” and what is merely a non-mainstream opinion.

  1. Stanford University, Artificial Intelligence Index Report 2024 (2024). 

  2. Online Safety (Basic Online Safety Expectations) Determination 2022 (Cth) (“The Determination”). 

  3. Online Safety Act 2021 (Cth). 

  4. n 2. 

  5. Privacy Act 1988 (Cth). 

  6. Privacy and Personal Information Protection Act 1998 (NSW). 

  7. Privacy Legislation Amendment (Enforcement and Other Measures) Act 2022 (Cth). 

  8. Australian Government Department of Industry, Science and Resources, Safe and Responsible AI in Australia (Discussion Paper, June 2023) <https://storage.googleapis.com/converlens-au-industry/industry/p/prj2452c8e24d7a400c72429/public_assets/Safe-and-responsible-AI-in-Australia-discussion-paper.pdf>. 

  9. Julian Lincoln, Susannah Wilkinson and Alex Lundie, ‘Australian Government announces mandatory regulations for high-risk AI’ (Article, Insight Australia, 18 January 2024). 

  10. Ravi Sandepudi, ‘The Banker’s Guide: Using AI for Fraud Detection’ (Effective, 11 March 2024). 

  11. David Emelianocm, ‘Advanced Spam Filtering AI’, Trimbox (Blog Post, 21 November 2023). 

  12. Ashtynn Baltimore, ‘Is AI changing customer expectations?’ (PayPal Braintree Product Team, 2024).