Content Regulation and Online Classification

  1. Historical development of content regulation
  2. Current regulatory framework
  3. Content classification in Australia
    1. Classification Guidelines: R18+
    2. Classification Guidelines: X18+
    3. Classification Guidelines: Refused Classification (RC)
  4. Online Content Scheme
  5. Basic Online Safety Expectations
  6. Section 313 of the Telecommunications Act 1997 (Cth)
  7. Image-based abuse
    1. Social context and prevalence
      1. Key statistics
      2. Factors affecting reporting
      3. Social prevention and response
  8. The Office of the eSafety Commissioner
    1. Cyberbullying and hate speech
      1. Adult Cyber Abuse Scheme
      2. The Challenge of Online Abuse: Trolling
  9. Abhorrent Violent Material
    1. How Abhorrent Violent Material emerged
    2. Definition
    3. Criticisms of the AVM Act
      1. “As soon as reasonably possible”
    4. The Online Safety Act 2021 (Cth)
    5. Disguised Content and Evasion Techniques
      1. TikTok, YouTube and ‘Splicing’
      2. Disguised Content and the Online Safety Act
    6. Case Study: eSafety Commissioner v X Corp [2024] FCA 499
      1. Facts
      2. The Commissioner’s Response
      3. Proceedings
  10. Regulating content in other jurisdictions
    1. Social media regulation in Brazil
      1. Elon Musk and X under investigation
      2. Implications of the Brazilian decision
    2. United Kingdom
    3. Canada
  11. Other emerging issues
    1. Example: Unsolicited Dick Pics
    2. Deepfakes
      1. Deepfakes and Non-consensual Sexual Imagery
    3. Misinformation and Disinformation

Historical development of content regulation

Video Overview of Online Content Regulation in Australia by Nicolas Suzor

The evolution of content regulation in Australia has been marked by several key developments:

  • 1992: The Broadcasting Services Act 1992 established the first framework for content regulation in Australia, aimed at protecting public interests whilst balancing the freedoms of the broadcasting sector.

  • 1997: The Australian Broadcasting Authority (ABA) investigated online content regulation and introduced 47 legislative principles for a National Framework, establishing the groundwork for industry codes of practice and the Platform for Internet Content Selection (PICS).

  • 1999: Schedule 5 of the Broadcasting Services Act 1992 introduced a self-regulatory scheme with the ABA (later ACMA) as the enforcement agency. The scheme focused on ISP regulation through industry codes and complaints mechanisms.

  • 2007: The National Filter Scheme (NetAlert Program) was introduced to provide internet safety education and filtering technology, though it was axed in 2008 due to low adoption rates and technical limitations.

  • 2008: The Content Services Code 2008 provided clearer guidelines for content/hosting service providers and established self-regulatory content assessment regimes.

  • 2015: The Enhancing Online Safety for Children Act 2015 established the Office of the Children’s eSafety Commissioner, later expanded in 2017 to cover broader online safety issues including image-based abuse and domestic violence.

  • 2021: The Online Safety Act 2021 replaced previous frameworks with a comprehensive regulatory scheme administered by the eSafety Commissioner.

For more detailed information on earlier regulatory approaches, see the Code for Industry Co-Regulation in Areas of Mobile and Internet Content (2005).

Current regulatory framework

Australia has a co-regulatory content regulation scheme. Under a co-regulation model, an industry body (such as the Communications Alliance) usually develops a code of practice, which is then made binding on industry participants through a legislative mechanism. Co-regulation is a common form of regulation in Australian media law.

The Online Safety Act 2021 (Cth) sets out an expectation that industry bodies or associations will develop industry codes to regulate certain types of harmful online material. The Act provides for the eSafety Commissioner to register the codes if certain conditions are met. These include, among other things, that the Commissioner was consulted on the code and the Commissioner is satisfied that:

• The code was developed by a body or association that represents a particular section of the online industry, and the code deals with one or more matters relating to the online activities of those participants.

• To the extent to which the code deals with one or more matters of substantial relevance to the community—the code provides appropriate community safeguards for that matter or those matters.

• To the extent to which the code deals with one or more matters that are not of substantial relevance to the community—the code deals with that matter or those matters in an appropriate manner.

• The body or association published a draft of the code and invited members of the public and industry participants to make submissions, and gave consideration to any submissions that were received.

The Commissioner may also request that a particular body or association which represents a section of the online industry develop an industry code dealing with one or more specified matters relating to the online activities of those industry participants. In April 2022, the Commissioner issued such a request, seeking the development of codes relating to ‘class 1’ material by six industry associations. The associations submitted draft codes in November 2022. All six draft codes were rejected in February 2023 as they did not meet the standards for registration. The Commissioner believed that they did not provide appropriate community safeguards. Following resubmission, five of the draft codes were registered in June 2023.

See the Online Content Scheme - Regulatory Guidance for further information.

Watch the following videos for background on online content regulation prior to the 2021 changes:

  • Video Overview of Online Content Regulation in Australia by Nicolas Suzor

Content classification in Australia

Video Overview of Australia’s Classification Ratings by Emily Rees

The rules that apply to content depend upon the classification of that content. Australia has had a national classification scheme for films, computer games and publications since 1995: the National Classification Scheme and its associated Code. The Online Safety Act establishes an online content scheme which is partly dependent upon classification under the National Classification Code, so an overview of the basic features of the code supports an understanding of the Online Safety Act scheme.

The National Classification Code provides a statement of purpose that classification decisions are to give effect, as far as possible, to the following principles: (a) adults should be able to read, hear and see what they want; (b) minors should be protected from material likely to harm or disturb them; (c) everyone should be protected from exposure to unsolicited material that they find offensive; (d) the need to take account of community concerns about: (i) depictions that condone or incite violence, particularly sexual violence; and (ii) the portrayal of persons in a demeaning manner.

Publications, films and computer games are rated by the Classification Board according to the Classification Guidelines. Each State and Territory determines the consequences of classification. The ratings systems differ by media type:

  • Films: G, PG, M, MA15+, R18+, X18+, RC
  • Publications: Unrestricted, Unrestricted (M), Category 1 Restricted, Category 2 Restricted, RC
  • Games: G, PG, M, MA15+, R18+, RC

Classification Guidelines: R18+

  • High impact violence, simulated sex, drug use, nudity
  • No restrictions on language

Classification Guidelines: X18+

  • Real depictions of sexual intercourse and sexual activity between consenting adults
  • No depiction of violence or sexual violence
  • No sexually assaultive language
  • No consensual activities that ‘demean’ one of the participants
  • No fetishes (such as ‘body piercing’; candle wax; bondage; fisting; etc)
  • No depictions of anyone under 18, or of adults who look under 18.

Classification Guidelines: Refused Classification (RC)

“Publications that appear to purposefully debase or abuse for the enjoyment of readers/viewers, and which lack moral, artistic or other values to the extent that they offend against generally accepted standards of morality, decency and propriety will be classified ‘RC’.”

For films, anything that exceeds X18+ is Refused Classification. For games, anything that exceeds R18+ is RC (an R18+ category for games was introduced in 2012).

Classification Guidelines: RC (Films)

  • Detailed instruction in crime or violence
  • Descriptions or depictions of child sexual abuse or any other exploitative or offensive descriptions or depictions involving a person who is, or appears to be, a child under 18 years.
  • Violence: Gratuitous, exploitative or offensive depictions of:
    • violence with a very high degree of impact or which are excessively frequent, prolonged or detailed;
    • cruelty or real violence which are very detailed or which have a high impact;
    • sexual violence.
  • Sexual activity: “Gratuitous, exploitative or offensive depictions of:
    • activity accompanied by fetishes or practices which are offensive or abhorrent;
    • incest fantasies or other fantasies which are offensive or abhorrent.”
  • Drug use:
    • Detailed instruction in the use of proscribed drugs.
    • Material promoting or encouraging proscribed drug use.

Online Content Scheme

The online content scheme under the Online Safety Act relates to two kinds of material: ‘class 1 material’ and ‘class 2 material’. Pursuant to s 106, class 1 material is material which is, or would likely be, classified as ‘RC’ by the Classification Board. Pursuant to s 107, class 2 material is material which is, or would likely be, classified as X 18+, R 18+, Category 2 or Category 1 restricted.

The Act provides for the notice and removal of class 1 material. Sections 109 and 110 provide that the Commissioner may give a notice to certain online service providers (including social media and hosting services) to remove or cease hosting material which the Commissioner is satisfied is class 1 material that can be accessed by end-users in Australia. It does not matter where the service is provided from or where the material is hosted; the material merely needs to be accessible from Australia.

The notice may require the service provider to take all reasonable steps to remove the material from the service within 24 hours or such longer period specified by the Commissioner. Section 111 requires the service provider to comply with a removal notice to the extent they are capable of doing so.

The Act also provides for the notice and removal of certain class 2 material, namely material classified or likely classifiable as X 18+ or Category 2 restricted. The Commissioner may issue a notice to the relevant provider under ss 114 or 115. In this case, the location of the services or hosting is relevant. The Commissioner may only issue notices in relation to services provided from Australia, or content hosted within Australia. Pursuant to s 116, the provider must comply with the notice to the extent capable of doing so.

With respect to class 2 material which falls within the R 18+ or Category 1 restricted classifications, the Commissioner has the power to give the provider a remedial notice under s 119. The notice may require the relevant provider to remove the material or ensure that the material is subject to a ‘restricted access system’. A restricted access system is an access-control system which the Commissioner declares to be a ‘restricted access system’. In essence, these are systems which limit the exposure of persons under 18 to ‘age-inappropriate’ content online.
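
To illustrate where this control sits, the following is a minimal sketch in Python. The age check and content-serving logic are hypothetical simplifications: declared restricted access systems require stronger age assurance than a self-declared date of birth, and this is not a description of any particular declared system.

```python
# Minimal sketch of an age-gating step, the core idea of a 'restricted access system'.
# The check below is a hypothetical simplification: real declared systems require
# stronger verification than a self-declared date of birth.
from datetime import date
from typing import Optional


def is_adult(date_of_birth: date, today: Optional[date] = None) -> bool:
    """Return True if the supplied date of birth corresponds to a person aged 18 or over."""
    today = today or date.today()
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age >= 18


def serve_restricted_content(date_of_birth: date) -> str:
    """Serve R 18+ or Category 1 restricted material only after the access-control check passes."""
    if is_adult(date_of_birth):
        return "restricted content served"
    return "access denied: material is age-restricted"


print(serve_restricted_content(date(2010, 1, 1)))  # access denied
print(serve_restricted_content(date(1990, 1, 1)))  # restricted content served
```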

Under s 124, the Commissioner also has the power to issue a notice to search engine providers requiring the provider to cease providing links to class 1 material (a ‘link deletion notice’) in certain circumstances. Under s 128, the Commissioner may issue a notice to an app distribution service provider to cease enabling end-users in Australia to download an app that facilitates the posting of class 1 material (an ‘app removal notice’) in certain circumstances.

Basic Online Safety Expectations

The Online Safety Act provides for the Minister for Communications to make a determination (a form of legislative instrument) setting out basic online safety expectations.

The first determination was made in 2022. The Online Safety (Basic Online Safety Expectations) Determination 2022 specifies the basic online safety expectations for a social media service and other services that allow end users to access material using a carriage service or a service that delivers material by means of a carriage service.

Under s 49, the Commissioner may require the relevant providers to submit periodic reports on how they are meeting the expectations set out in the determination. The Commissioner may also publish statements on the eSafety website about a provider’s compliance or non-compliance with the expectations.

Section 313 of the Telecommunications Act 1997 (Cth)

Video Overview by Kaava Watson: Section 313

In Australia, several different forms of pressure have been exercised in recent years to encourage intermediaries to take action to police the actions of their users. The most blunt is direct action by law enforcement agencies, who are empowered to make requests of telecommunications providers under s 313 of the Telecommunications Act. This provision requires carriers and carriage service providers to “do the carrier’s best or the provider’s best to prevent telecommunications networks and facilities from being used in, or in relation to, the commission of offences against the laws of the Commonwealth or of the States and Territories”, and to “give officers and authorities of the Commonwealth and of the States and Territories such help as is reasonably necessary” to enforce criminal law, impose pecuniary penalties, assist foreign law enforcement, protect the public revenue, and safeguard national security.

Gab Red Explains How s 313 Is Used by Government Agencies to Block Websites

Matt Cartwright Explains the Recommendations of the Recent Inquiry Into the Use of s 313

The section essentially enables police and other law enforcement agencies to direct ISPs to hand over information about users and their communications. Increasingly, however, it is also apparently used by a number of government actors to require service providers to block access to content that appears to be unlawful, in cases ranging from the Australian Federal Police seeking to block access to child sexual abuse material to the Australian Securities and Investments Commission (ASIC) blocking access to phishing websites. Even the RSPCA is reported to have used the power, although the details of its request are not clear. There is significant concern over the lack of transparency around s 313(3) and the lack of safeguards over its use.1 These concerns came to the fore in 2013 when ASIC asked an ISP to block a particular IP address, not realising that the address was shared by up to 250,000 different websites, including the Melbourne Free University.
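
The over-blocking problem arises because many unrelated websites can share a single IP address on the same hosting infrastructure. The short Python sketch below, using hypothetical domain names, illustrates the point: a DNS lookup groups sites by address, and a block aimed at one shared address takes down every co-hosted site.

```python
# Sketch of why IP-level blocking over-blocks: unrelated sites on shared hosting
# can resolve to the same address, so a block on one IP affects all of them.
# The domain list is purely illustrative.
import socket
from collections import defaultdict

domains = ["example.com", "example.net", "example.org"]  # hypothetical list

sites_by_ip = defaultdict(list)
for domain in domains:
    try:
        ip = socket.gethostbyname(domain)  # resolve the domain's A record
    except socket.gaierror:
        continue                           # skip names that do not resolve
    sites_by_ip[ip].append(domain)

for ip, sites in sites_by_ip.items():
    if len(sites) > 1:
        # A block aimed at any one of these sites would also block the others.
        print(f"{ip} is shared by {len(sites)} sites: {', '.join(sites)}")
```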

Image-based abuse

Video overview of image-based abuse laws by Danielle Harris

The non-consensual sharing of intimate images is often colloquially referred to as ‘revenge porn’. The term ‘image-based abuse’ is generally considered to be a better term because it avoids the victim-blaming connotations that the abuse is done in ‘revenge’ for some perceived wrong.

The National Statement of Principles Relating to the Criminalisation of the Non-consensual Sharing of Intimate Images encouraged each Australian jurisdiction to adopt nationally consistent criminal offences.

Under the Criminal Code Act 1995 (Cth), it is an offence to post, or threaten to post, non-consensual intimate images.2 Specifically, s 474.17 of the Criminal Code sets out an offence for the use of a carriage service in a way that reasonable persons would regard as being, in all the circumstances, menacing, harassing or offensive. Section 474.17A makes it an aggravated offence where that use involves transmitting or promoting material that is private sexual material.

Section 75 of the Online Safety Act prohibits the posting, or threatened posting, of an intimate image of another person without their consent. The prohibition applies where the person in the image or the person posting the image is ordinarily resident in Australia. An ‘intimate image’ is defined to include images that depict genital or anal areas; a female, transgender or intersex person’s breasts; or private activities, such as showering, using the toilet or engaging in a sexual act, not ordinarily done in public.

There is also a complaints-based system in the Online Safety Act, whereby the eSafety Commissioner may issue a removal notice or another civil remedy upon receipt of a victim’s complaint.

Queensland extended the definition of ‘intimate’ images to include original or photoshopped still or moving images of a person engaged in intimate sexual activity; a person’s bare genital or anal region; or a female, transgender or intersex person’s breasts.3

The definition covers an image that has been altered to appear to show any of the above-mentioned things.

The State also introduced three new misdemeanours into its Criminal Code to broaden the scope of conduct captured by the offence. These include distributing intimate images without the consent of the person depicted,4 observing or recording in breach of privacy,5 and distributing prohibited visual recordings.6

Social context and prevalence

Key statistics

The prevalence of image-based abuse was highlighted in a 2017 study by the eSafety Commissioner, which found that 1 in 10 individuals had experienced image-based abuse, with females aged 15 to 17 years most at risk. The report also found:

  • 6 in 10 victims knew the perpetrator;
  • The perpetrator was a friend that they knew offline (29%), an ex-partner (13%), a current partner (12%) or a family member (10%); and
  • Image-based abuse was most likely to occur on Facebook (53%).

Factors affecting reporting

The eSafety Commissioner is empowered to investigate and make decisions regarding image-based abuse, but this requires victims to report it. Studies have estimated that only 35% of cases of image-based abuse are reported. Factors that discourage victims from reporting include:

  • Negative stigma;
  • Psychological barriers, including victim blaming, humiliation and embarrassment;
  • A lack of awareness of the severity of the incident;
  • A fear of making the situation worse, including by drawing further attention to the images;
  • A lack of confidence in law enforcement; and
  • A lack of awareness of the support services available.

Social prevention and response

Considering the social context of image-based abuse, there have been several actions taken by the Australian Government and other bodies to raise awareness and better educate individuals:

  1. The Office of the eSafety Commissioner has developed a professional learning program for teachers and facilitators titled ‘Online Harmful Sexual Behaviours, Misinformation and Emerging Technology’. Its goal is to equip participants with the skills needed to identify and respond to incidents of image-based abuse and to understand the role that coercion plays.

  2. In 2020, the NSW Government launched a campaign to help prevent image-based abuse and to educate individuals on the topic, including where to seek help. This was largely in response to a 172% increase in reports between 2019 and 2020. The campaign also offered counselling to affected individuals and assistance with the removal of content.

  3. In 2022, the Australian Government responded to the Senate report titled ‘Phenomenon colloquially referred to as ‘revenge porn’’. The response supported most of the recommendations, including that all police officers undertake mandatory training on image-based abuse.

Angelina Kardum explains: How the major social media platforms deal with image-based abuse

Most major social media sites now have policies against image-based abuse in their community guidelines or standards. Victims of image-based abuse can make a report directly to the site on which their intimate image was shared. The report is assessed against the site’s guidelines or standards and, if the image appears to be in violation, it is generally removed within 24 hours. While major social media services have taken positive steps towards tackling image-based abuse, such as developing reporting and take-down mechanisms, these mechanisms have their shortcomings. The two main issues with current approaches are the delays associated with the assessment of reports and the heavy reliance on self-reporting.

The responses from online service providers to the issue of image-based abuse include:

  1. In February 2015, Reddit updated its privacy policy to prohibit the publication of image-based abuse;
  2. In March 2015, Twitter (now known as ‘X’) announced that it would immediately remove any links to image-based abuse upon request; and
  3. In June/July 2015, Google and Microsoft announced they would remove links upon request.

The Office of the eSafety Commissioner

Lauren Trickey explains how to make a complaint to the eSafety Commissioner

The eSafety Commissioner is a statutory office first established by the Enhancing Online Safety Act 2015 (Cth) to promote and enhance online safety. The Commissioner’s powers were later expanded by the Online Safety Act 2021 (Cth). While most of the Commissioner’s functions are contained in the Online Safety Act 2021, the Commissioner also has powers and functions under the Telecommunications Act 1997 (Cth) and the Criminal Code Act 1995 (Cth).

The Commissioner can receive reports of cyberbullying, image-based abuse, and offensive and illegal content.

Under s 30, complaints about cyberbullying of a child can be made by an Australian child or by a parent, guardian or person authorised by the child. An adult can also make a complaint if they believe they were the target of cyberbullying material as a child, so long as the complaint is made within a reasonable time after they became aware of the material and within 6 months after they turned 18. Cyberbullying material is online material intended to seriously threaten, intimidate, harass or humiliate an Australian child.

The 2021 amendments introduced the world’s first legal scheme dealing with cyberbullying of adults. Under s 36, an Australian adult may make a complaint to the Commissioner about cyber-abuse material. Cyber-abuse material is material that an ordinary reasonable person would conclude is likely intended to have an effect of causing serious harm to a particular Australian adult, and that an ordinary reasonable person in the position of that adult would regard as being, in all the circumstances, menacing, harassing or offensive.

Cyberbullying and hate speech

Children and adolescents are increasingly affected by heavy use of electronic devices and social media, which can expose young people to bullying, exclusion and intimidation.

Cyberbullying is bullying that takes place online. It occurs where a perpetrator intentionally and repeatedly acts aggressively towards a victim over a period of time through social media platforms such as Facebook, Instagram, Snapchat or other online forums, often anonymously. It falls under the umbrella term ‘cyber hate’, which encompasses many types of harmful behaviour including hate speech, harassment, and discrimination targeting individuals based on their personal characteristics or identity.

A 2020 study by the eSafety Commissioner found that 44% of Australian young people had a negative online experience in the previous 6 months, and that 15% had received threats or abuse online.

Adult Cyber Abuse Scheme

Part 7 of the Online Safety Act 2021 (Cth) establishes an Adult Cyber Abuse Scheme, the first in the world. The scheme gives the eSafety Commissioner the power to issue service providers with a formal notice to remove harmful content targeting an Australian adult within 24 hours. A provider that fails to remove the harmful content may incur civil penalties and fines, which the Commissioner can seek under s 162 of the Online Safety Act. However, there is a high threshold to be satisfied before the eSafety Commissioner has the authority to act:

  • Section 7(1)(c) requires that the material was “intended to have an effect of causing serious harm”; and
  • Section 7(1)(d) requires that “an ordinary reasonable person in the position of the Australian adult would regard the material as being, in all the circumstances, menacing, harassing or offensive.”

There is some question about what meets this threshold and what counts as ‘serious’, as the assessment of harm can be subjective and arbitrary.

As outlined above, image-based abuse complaints can be made to the Commissioner. Pursuant to s 32, complaints can be made by the person depicted in the intimate image, a person authorised to complain on their behalf, or a parent or guardian of a child or of a person who does not have capacity.

Australian residents can also report offensive or illegal content, which includes abhorrent violent material or material depicting illegal acts.

For each type of material an online form can be completed on the eSafety website. Each form requests information regarding what is contained or depicted in the material and where the material has been posted. After receiving a complaint, the Commissioner has the power to conduct an investigation (as the Commissioner thinks fit). The Commissioner assesses the material complained of to determine the appropriate course of action, which may include liaising with the relevant platform for the material to be removed.

The Challenge of Online Abuse: Trolling

What is Trolling?

Trolling takes many forms. Trolls typically respond to or post inflammatory, off-topic, or ludicrous material to generate an emotional response from users. This behaviour can lead to a pile-on effect, where others join in on the attack. Many Australian users experience online trolling. The most common platforms for such encounters are Instagram, YouTube, and Snapchat. On these platforms, trolling is often confused with cyberbullying.

In Australia, it is up to the individual to report to the online service first. This typically involves the user collecting evidence, such as screenshots of abusive comments, and reporting the troll within the app. If the service does not remove the content within 48 hours, an individual can report to eSafety if their experience meets the legal threshold of serious cyberbullying. A key message to users is not to ‘feed’ the trolls and to report the abuse within the relevant app.

Social media platforms can be reluctant to change bullying policies as they try to balance users’ freedom of speech, privacy, and protection. Private intermediaries, such as social media platforms, that do not make in-app changes continue to amplify the voices of trolls.

Trolling and the Law

Compared with other online antisocial behaviour, such as cyberbullying, trolling remains largely unregulated within the legal framework of Australia. There are fundamental differences between cyberbullying and trolling behaviours regarding form, content, intent, and consequence. These differences are not reflected in the Online Safety Act 2021 (Cth), which has formed a world-first cyber abuse scheme for adult Australians and introduced new basic online safety expectations to promote and improve online safety. Although researchers have deemed cyberbullying and trolling to be different behaviours, the Online Safety Act 2021 (Cth) does not specifically protect the safety of users from being trolled, and no other legislation exists that specifically targets trolling in Australia.

To have access to a legal remedy, Australian residents seeking justice for being trolled on platforms need to fall within legal provisions in the Online Safety Act 2021 (Cth) and Criminal Code Act 1995 (Cth) that regulate cyberbullying, harassment, image-based abuse, or offensive and illegal content. Trolling can, but doesn’t always, fall within these definitions.

If the matter fits within the following criteria, a complaint can be made to the Commissioner:

  • Under s 30 of the Online Safety Act, an Australian child who has reason to believe they are or have been the target of cyberbullying material may complain.
  • Under s 36, an Australian adult may complain if they believe they have been the target of cyber-abuse material.
  • Under s 474.17 of the Criminal Code, it is an offence to use a carriage service in a way that a ‘reasonable person would regard as being, in all the circumstances, menacing, harassing or offensive’.

Both the Online Safety Act 2021 (Cth) and Criminal Code Act 1995 (Cth) address cyberbullying and harassment broadly and require modifications to address trolling effectively.

Social Media (Anti-Trolling) Bill

The federal government released an exposure draft of the Social Media (Anti-Trolling) Bill 2021 (Cth) shortly after the decision in Fairfax Media Publications Pty Ltd v Voller, in which the High Court found that media companies can be held responsible as publishers of allegedly defamatory third-party comments made on their Facebook pages. The Bill sought to address defamatory comments by enabling anonymous commenters to be identified through platforms obtaining their contact details. The Bill played only a limited role in addressing trolling and lapsed at the dissolution of Parliament in April 2022.

Abhorrent Violent Material

See overview by Georgie Vine about the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019

How Abhorrent Violent Material emerged

In response to the Christchurch massacre, the Australian government passed an amendment to the Criminal Code Act 1995 (Cth) in 2019 to regulate abhorrent violent material.

In Christchurch, New Zealand, on 15 March 2019, an Australian gunman, Brenton Tarrant, entered two mosques, killing 51 people and injuring a further 49. Tarrant wore a head-mounted GoPro, recording and live streaming the massacre to his personal Facebook page. The live stream lasted 17 minutes and was initially viewed by about 4,000 users, but copies were reproduced and shared in roughly 1.5 million videos within 24 hours of the incident. It took approximately one hour from the start of the stream for Facebook to remove the original video from the platform.

Definition

The Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 creates new offences under the Criminal Code Act 1995 (Cth), effective from 6 April 2019.

Abhorrent violent material is defined in s 474.31 of the Criminal Code as audio, visual, or audio-visual material that records or streams abhorrent violent conduct engaged in by one or more persons. Section 474.32 provides that a person engages in abhorrent violent conduct if the person:

a) engages in a terrorist act; b) murders another person; c) attempts to murder another person; d) tortures another person; e) rapes another person; or f) kidnaps another person.

These offences target content that is reasonably capable of being accessed within Australia, regardless of where the material was created or where the platform operator is located. The offences carry substantial penalties if individuals or companies fail to remove or report such material: fines of up to 10% of annual global turnover for companies, and up to 3 years’ imprisonment for individuals.

There are a few defences to the new offences, including:

  • material necessary for law enforcement,
  • material distributed by journalists,
  • material used for scientific, medical, academic or historical research, and
  • the exhibition of artistic works.

Criticisms of the AVM Act

The AVM Act has been criticised for drafting deficiencies. It came into effect on 6 April 2019, less than a month after the Christchurch massacre. In the second reading debate on the AVM Bill, the Greens MP Adam Bandt highlighted the government’s haste to pass the Bill, noting that it might undermine the legitimacy of the legislation further down the track and could have unintended consequences. The President of the Law Council of Australia echoed this sentiment, describing the legislation as a ‘knee-jerk reaction to a tragic event’.

“As soon as reasonably possible”

Under section 474.33(c), a person who has reasonable grounds to believe that certain material is abhorrent violent material commits an offence if they fail to report the details to the Australian Federal Police within a “reasonable time” after becoming aware of its existence. The AVM Act does not specify what constitutes a “reasonable time”. However, the Attorney-General deemed it “unacceptable” that the Christchurch massacre livestream was available for over an hour before the first attempts were made to remove the content, which suggests that the timeframe is likely to be measured in minutes and hours rather than days. The United Nations has expressed concern that the wording of “reasonable time” is ambiguous and may in practice lead content services and hosting services to make hasty decisions.

The Online Safety Act 2021 (Cth)

The eSafety Commissioner has wide-ranging powers under the Online Safety Act 2021 (Cth) to order the removal of certain online material.

Under section 109, the Commissioner can issue a removal notice where satisfied that the material is or was class 1 material and that it can be accessed by end-users in Australia; once a removal notice has been issued, the service provider must take ‘all reasonable steps’ to ensure the removal of the material.

Under section 95, the eSafety Commissioner can issue an internet service provider with a blocking request, requesting that the provider take steps to disable access to material if: a) the material can be accessed using an internet carriage service supplied by the provider; b) the Commissioner is satisfied that the material promotes, instructs in, incites or depicts abhorrent violent conduct; and c) the Commissioner is satisfied that the availability of the material online is likely to cause significant harm to the Australian community.

Disguised Content and Evasion Techniques

Needs review for fit and accuracy. I don’t think ‘disguised content’ or ‘splicing’ are known phrases, and evasion isn’t as simple as this section suggests – nic.

Despite content regulation efforts, some individuals and organisations use deceptive and nefarious tactics to share prohibited images and videos online, often aimed at the most vulnerable users, particularly children. Disguised content refers to harmful content that is packaged so as to evade online content regulation or filtering mechanisms.

TikTok, YouTube and ‘Splicing’

In January 2021, users of TikTok, one of the world’s most popular social media platforms, were exposed to a viral video containing a graphic depiction of bodily mutilation and the recorded death of an individual. Although TikTok strictly prohibits content of this nature, the video was shared globally because the original poster ‘spliced’ the clip of the violent crime behind a video of a girl dancing, tricking the application’s moderation algorithm into treating it as an ordinary video.
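
The gap that ‘splicing’ exploits can be illustrated with a toy moderation pipeline in Python. The classifier and frame labels below are stand-ins rather than any platform’s real system: if a pipeline samples only the opening portion of an upload, a clip whose first seconds are benign is approved even though prohibited footage follows.

```python
# Toy illustration of the gap splicing exploits: a moderation pass that samples
# only the opening frames never inspects content appended after a benign start.
# The 'classifier' and frame labels are stand-ins, not a real model or dataset.
def classify_frame(frame_label: str) -> str:
    """Stand-in for an image classifier; returns a label for one sampled frame."""
    return "prohibited" if frame_label == "violent" else "benign"


def moderate_upload(frames: list[str], frames_sampled: int = 3) -> str:
    """Approve or block an upload based only on its first few frames."""
    for frame in frames[:frames_sampled]:
        if classify_frame(frame) == "prohibited":
            return "blocked at upload"
    return "approved"


# A 'spliced' upload: innocuous dancing clip first, violent footage appended after.
spliced = ["dancing"] * 10 + ["violent"] * 30
print(moderate_upload(spliced))  # "approved" - the harmful segment is never sampled
```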

In 2019, parents became aware of a similar issue on YouTube, where popular videos that initially showed children’s content such as Peppa Pig were found to be spliced with footage of a disfigured humanoid character called ‘Momo’ repeating inappropriate phrases, and with clips of cartoon characters being tortured. In another example involving YouTube, children were exposed to spliced videos giving explicit instructions on how to inflict self-harm.

Other ways of disguising content have begun to emerge. In August 2024, a new trend arose in which creators uploaded videos whose first few seconds appear relatively normal and fun before the creator exposes themselves on camera or shows disturbing content.

Disguised Content and the Online Safety Act

In Australia, the primary instrument governing online content regulation is the Online Safety Act 2021 (Cth). The Act, in conjunction with the National Classification Code, classifies online content into class 1 material and class 2 material under sections 106 and 107. These classifications direct enforcement: sections 109-128 allow the Commissioner to respond to the publication of harmful material with orders for the deletion or removal of content or, in extreme circumstances, of platforms. However, these measures have been criticised for focusing too heavily on content removal rather than preventing publication in the first place.

Disguised content is often class 1 material. Because creators evade platforms’ moderation algorithms, these videos remain online until a substantial number of reports are made and the videos are manually reviewed or flagged as class 1 material, by which point they have been widely circulated and caused harm. Exacerbating the problem, the current laws are limited by geographical jurisdiction and therefore cannot effectively govern cloud hosting or the uploading of disguised content from outside Australia, particularly given the increasingly common use of proxy servers and VPNs that hide a poster’s location.

Another enforcement difficulty is that the creators and uploaders of disguised content are often anonymous, hard to detect and, when content goes viral, numerous because of re-uploads. This also complicates enforcement of the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 (Cth) provisions that criminalise uploading harmful material.

Case Study: eSafety Commissioner v X Corp [2024] FCA 499

The recent case of eSafety Commissioner v X Corp [2024] FCA 499 tested the powers and limits of the Online Safety Act in relation to the Commissioner’s powers under section 109.

Facts

On 15 April 2024, in the Sydney suburb of Wakeley, Bishop Mar Mari Emmanuel was attacked and stabbed while delivering a service that was being live streamed by the church. The Bishop, along with a priest, a member of the congregation, and the suspected attacker, all sustained injuries in the incident.

The footage captured through the live stream was available online almost instantly. It was soon reduced to a shorter clip of roughly 11 seconds, which depicted the suspected attacker approaching the Bishop and striking him several times in a downward motion. No highly graphic details were visible (no blood or wounds), but the audio captured the sound of the impacts between the weapon and the Bishop and the shocked, distressed reactions of witnesses. The clip was subsequently shared across various mainstream social media sites, including X (formerly Twitter).

The Commissioner’s Response

The day after the incident, the eSafety Commissioner deemed the clip to be ‘class 1 material’ and, invoking her powers under section 109 of the Act, issued a formal notice to X Corp requiring it to take all reasonable steps to remove the footage from its platform. The notice did not apply to all copies of the footage, but only to a set of 65 links posted to X which contained footage of the incident. The justification was that the material was class 1 material under the Act, depicting ‘crime, cruelty and real violence’ such that it ‘offends against the standards of morality, decency and propriety generally accepted by reasonable adults’.

X responded to the notice by geo-blocking each of the specified posts for Australian users, meaning that users connecting from an Australian IP address could not access the content. However, the eSafety Commissioner was not satisfied that geo-blocking the material constituted compliance with the notice, as Australians could still access the URLs via a VPN.
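
The dispute turned on what geo-blocking actually does. In simplified terms, the platform decides whether to serve a post based only on the country associated with the connecting IP address; a user on a VPN presents the VPN exit server’s foreign address instead of their own, so the check passes. The Python sketch below uses a hypothetical IP-to-country table; real platforms rely on commercial geolocation databases.

```python
# Simplified sketch of a geo-block and why a VPN defeats it. The IP-to-country
# table is hypothetical; real platforms use commercial geolocation databases.
HYPOTHETICAL_IP_COUNTRY = {
    "1.128.0.10": "AU",    # illustrative Australian broadband address
    "203.0.113.7": "AU",   # illustrative Australian mobile address
    "198.51.100.9": "US",  # illustrative US-based VPN exit node
}

WITHHELD_IN = {"AU"}  # the notified posts are withheld from Australian connections


def post_is_visible(client_ip: str) -> bool:
    """Return True if the geo-block allows the post to be served to this IP."""
    country = HYPOTHETICAL_IP_COUNTRY.get(client_ip, "UNKNOWN")
    return country not in WITHHELD_IN


print(post_is_visible("1.128.0.10"))    # False: direct Australian connection is blocked
print(post_is_visible("198.51.100.9"))  # True: the same user connecting via a US VPN exit
```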

Proceedings

On 22 April 2024, the Commissioner commenced proceedings in the Federal Court, seeking injunctive relief that would require X to remove the material from its platform or make it inaccessible to all users. The eSafety Commissioner was successful, at least temporarily: the Federal Court ordered an interim injunction against X, which remained in effect until 13 May. The injunction required X to hide the material identified by the Commissioner behind a separate notice that users could not remove, so that X users would see only the blocking notice rather than the material itself.

On 13 May 2024, the Federal Court of Australia handed down its judgment. In delivering the judgment, Justice Kennett considered two key issues:

  1. whether the removal notice was a valid exercise of the eSafety Commissioner’s power under section 109 of the Act; and
  2. whether, given the notice only requires X to take “reasonable steps” to ensure the removal of the material, the proposed final injunction goes further than what is required for compliance with the notice.

In relation to the second issue, the core of the dispute was the eSafety Commissioner’s argument that it was insufficient for X simply to ‘geo-block’ the material for Australian users, and that the 65 URLs should be removed from the platform altogether. The Commissioner argued that such action fell within the “all reasonable steps” that the notice required to be taken. X argued that a requirement to remove the material worldwide goes beyond what could be considered “reasonable”.

Justice Kennett held that it would be reasonable for X Corp to remove the content, but unreasonable for the Commissioner to compel removal through section 109 of the Act. The injunction was thus denied. Further observations were made that, had the injunction been ordered, it would have:

  • Had a ‘global effect’, impacting X and many other organisations who have no real connection to Australia or its interests.
  • Impacted the interests of individuals globally who have no connection to the proceedings.
  • Been ineffective in preventing people from watching the video elsewhere.
  • Been highly unlikely to be enforced by a US court.

On 5 June 2024, the eSafety Commissioner discontinued the proceedings in the Federal Court.

The Federal Court has deemed this case to be of ‘public interest’, meaning that an almost complete public record of the documentation can be accessed here.

Regulating content in other jurisdictions

Snoot Boot explains France and Germany’s online hate speech laws

Social media regulation in Brazil

The Civil Framework of the Internet governs internet regulation in Brazil. It outlines the fundamental rights that internet users have, including freedom of expression, privacy protection, and preservation of net neutrality.

The Brazilian Government has prioritised combatting misinformation and online extremism, which increased significantly during Bolsonaro’s administration and in the lead-up to the 2018 and 2022 presidential elections. This came to a head on 8 January 2023, when thousands of Bolsonaro’s supporters stormed various government buildings in Brasilia following President Lula’s victory in the 2022 federal election. As in the US Capitol riot of January 2021, social media platforms were used to spread misinformation about a ‘stolen election’ and to organise the attack. A study conducted by Ozowa, Lukito, Bailez, and Fakhouri found that Twitter and WhatsApp were heavily used by right-wing extremists to spread propaganda and conspiracy theories and to instigate violence.

Following the attack, the Supreme Court began investigating the incident and putting more pressure on social media platforms to filter hate speech and misinformation. Part of this involved introducing a Bill to combat misinformation, which ultimately failed after several platforms campaigned against its introduction.

Elon Musk and X under investigation

As part of the investigation led by Supreme Court Justice Alexandre de Moraes, in early April 2024, X was ordered to block several accounts that were accused of spreading misinformation. Elon Musk refused to do so, and in a series of posts, accused Moraes and the Brazilian Government of censorship and threatened to lift all platform restrictions on X. Moraes then commenced Inquiry No. 4957 under article 12 of the Civil Framework of the Internet, which allows the court to investigate internet-based obstruction of justice and criminal acts. The investigation was also concerned with leaked internal emails from X, which contained criticisms of the Superior Electoral Court and its decisions.

On 17 August 2024, X closed its office in Brazil and removed its legal representative. This contravened article 1134 of the Civil Code, which requires a foreign company operating in the country to have a legal representative. As a result of the inquiry, Moraes issued a summons to X by posting it on the platform.

At page 23 of Moraes’s judgement, he found that ‘Musk confuses freedom of speech with a non-existent right to aggression, and deliberately confuses censorship with a constitutional prohibition against hate-speech and incitement of antidemocratic action’. Moraes also referenced the proceedings against X in Australia at pages 28-29 to find that X has a history of not cooperating with governments and judicial orders, and the platform is regularly involved in antidemocratic action.

Moraes ultimately ruled against X and ordered the platform to be blocked in Brazil due to its incitement of extremist activity, obstruction of justice, and lack of legal representation in the country. This decision was later affirmed by the other members of the Supreme Court. X was also ordered to pay fines totalling at least 20 million reais (roughly AUD 5 million), and Musk’s Starlink assets in the country were frozen.

Implications of the Brazilian decision

On 31 August 2024, X was blocked in Brazil, becoming inaccessible to millions of users. Moraes initially sought to outlaw the use of VPNs in the country entirely, but the court instead ordered that users in Brazil who use a VPN specifically to access X face fines of up to 50,000 reais (roughly AUD 13,000). This has caused debate within the country, with activists calling for Moraes to reconsider fining users.

After X was blocked, rival platform Bluesky gained 2 million members in a week, and Meta’s Threads also saw a significant increase in activity.

Despite the verdict, there is still a possibility that X can be reinstated, provided they comply with court orders and pay the accumulated fines.

Both France and Germany have attempted stricter approaches to regulating online hate speech. In particular, Germany’s laws require social media companies to remove hate speech and report certain users to the police, or else face significant fines. In 2020, France passed laws similar to Germany’s, but the key provisions were struck down by France’s Constitutional Council as unconstitutional: they imposed an unreasonable burden on freedom of speech because they incentivised over-censorship. These laws highlight a deeper tension between free speech and the perceived need to regulate hateful ideologies being spread online.

United Kingdom

The United Kingdom’s Online Safety Act 2023 represents a comprehensive approach to platform regulation that extends beyond Australia’s co-regulatory model. The Act imposes direct duties on social media platforms and search engines to protect users from harmful content. The Office of Communications (Ofcom) is implementing the Act through codes of practice addressing illegal content, content harmful to children, and specific categorised services.7

The UK Act introduces several new criminal offences that address gaps in existing law:

  • Intimate image abuse (s 188) - criminalising the non-consensual sharing of intimate images, similar to Australia’s image-based abuse provisions
  • Epilepsy trolling (s 183) - targeting those who send content designed to trigger seizures
  • False communications (s 179) - prohibiting the sending of false information intended to cause non-trivial harm
  • Threatening communications (s 181) - modernising threats law for digital contexts
  • Encouraging self-harm (s 184) - addressing content that encourages or assists serious self-harm
  • Cyberflashing (s 187) - criminalising the unsolicited sending of sexual images

These offences demonstrate how jurisdictions are identifying and addressing specific online harms through targeted criminal law provisions, complementing broader platform regulation approaches.

Canada

Canada’s proposed Online Harms Bill 2024 takes a duty-based approach to platform regulation.8 The Bill would impose three primary duties on social media services:

  1. Duty to act responsibly - requiring platforms to implement systems to address harmful content
  2. Duty to protect children - specific obligations regarding content accessible to minors
  3. Duty to make certain content inaccessible, specifically:
    • Content that sexually victimises a child or revictimises a survivor
    • Intimate images posted without consent

This approach emphasises proactive obligations on platforms rather than reactive content removal, reflecting an emerging trend in online safety regulation.9

Other emerging issues

Example: Unsolicited Dick Pics

Video overview by Kaito Suzuki

An illustrative example of how existing harassment laws struggle with digital contexts is the sending of unsolicited sexual images, commonly known as “dick pics.” It’s important to distinguish this from consensual sexting, which is a normal part of many adults’ digital relationships and sexual expression. The legal issue arises specifically when intimate images are sent without consent or solicitation.

Research from dating applications like Bumble found that 41% of women have received unsolicited photographs of male genitalia. The 2023 Online Safety Issues Survey found that nearly 8% of respondents had experienced this behaviour in the past 12 months, with women and LGBTQIA+ people disproportionately affected.

Several jurisdictions have created specific offences for this behaviour. The UK’s Online Safety Act 2023 criminalises sending photographs of genitals with intent to cause alarm, distress or humiliation (maximum two years imprisonment). Ireland’s Online Safety and Media Regulation Act 2022 takes a similar approach.

In Australia, this conduct is not specifically criminalised. Traditional “indecent exposure” laws like section 5 of the Summary Offences Act 1988 (NSW) are limited to public places and don’t capture digital sending. The federal provision most likely to apply is section 474.17 of the Criminal Code Act 1995 (Cth), which criminalises using carriage services to menace, harass or cause offence, but this requires proving the conduct would be considered menacing, harassing or offensive by a reasonable person.10

Some state laws may apply in specific circumstances. Victoria’s Crimes Act 1958 section 48 creates an offence for sexual activity directed toward another person intending to cause fear or distress, though how courts would interpret this in digital contexts remains unclear.

This example illustrates how laws designed for physical spaces often translate poorly to digital environments, creating enforcement gaps even for relatively straightforward harmful conduct.

Deepfakes

See an explanation of deepfakes by Eric Briese

Deepfakes and Non-consensual Sexual Imagery

A deepfake is a technique of image manipulation in which artificial intelligence and deep learning are used to manipulate a person’s characteristics, including physical appearance and voice, to create images or other content that appears authentic. While manipulated media is not new, deepfake technology has lowered the technical barriers to creating convincing fabrications, raising significant legal and ethical concerns.

The primary legal concerns with deepfakes relate to their use in creating non-consensual sexual imagery, political disinformation, fraud, and harassment. This section focuses on the legal frameworks addressing non-consensual sexual deepfakes, which constitute a form of image-based sexual abuse.

Australia lacks comprehensive deepfake-specific legislation, but several existing laws may apply depending on the context:

  • Criminal Code Act 1995 (Cth);
  • Telecommunications Act 1997 (Cth);
  • Enhancing Online Safety (Non-consensual Sharing of Intimate Images) Act 2018 (Cth); and
  • Online Safety Act 2021 (Cth).

Most Australian jurisdictions have criminal offences covering non-consensual sharing of intimate images, with varying application to altered material. Federal offences under sections 474.17 and 474.17A of the Criminal Code Act 1995 (Cth) prohibit using carriage services to menace, harass or offend, including through sharing intimate images. Victoria leads in explicit deepfake criminalisation, with section 53 of the Crimes Act 1958 (Vic) specifically addressing both production and distribution of deepfake intimate images.

The Online Safety Act 2021 (Cth) empowers the eSafety Commissioner to issue removal notices to online service providers hosting intimate imagery, including deepfakes. Providers must remove content within 24 hours of notice, with penalties for non-compliance. As noted by scholars, ‘detection without removal offers little solace to those exploited by deepfake pornography’.11

The limitations of civil enforcement mechanisms are illustrated by Anthony Rondondo v eSafety Commissioner (2023), where contempt proceedings were required after non-compliance with a removal notice. Rondondo was ordered to pay $25,000 plus costs.12 This apparently did not serve as a deterrent; Rondondo was subsequently arrested for distributing deepfake images of school students and teachers.

Criminal Code Amendment (Deepfake Sexual Material) Bill 2024

The Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 represents Australia’s first targeted legislative response to deepfake sexual abuse. The Bill introduces specific offences for:

  • Creating deepfake sexually explicit content without consent
  • Distributing such material (maximum 6 years imprisonment)
  • Aggravated offences for creators who also distribute (maximum 7 years)
  • Repeat offending (maximum 7 years)

Critics argue the Bill duplicates existing offences and may impact freedom of expression, though supporters emphasise the need for specific deterrence given the unique harms of deepfake technology.13

International Approaches

United Kingdom

In April 2024, the UK government announced plans to amend the Criminal Justice Bill to include a new offence for making sexually explicit deepfakes without consent, which will build on section 66B of the Sexual Offences Act 2003 (UK).

United States

The United States lacks comprehensive federal deepfake legislation, with regulation occurring primarily at state level. California and Texas pioneered state-level deepfake laws in 2019:

California’s Assembly Bill 602 establishes civil remedies for victims of non-consensual pornographic deepfakes, while Assembly Bill 730 prohibits distribution of political deepfakes within 60 days of elections. Texas similarly prohibits political deepfakes within 30 days of elections.

Federal legislation addressing deepfakes has also been proposed, but regulation remains primarily a matter of state law.

European Union

The EU Directive on combating violence against women and domestic violence (May 2024) requires member states to criminalise non-consensual production and distribution of sexual deepfakes using artificial intelligence.

China

China has taken a comprehensive regulatory approach since 2019. The ‘Regulations on the Administration of Networked Audiovisual Information Services’ require disclosure when deepfake technology is used and prohibit unlabelled deepfake content.14 The 2023 ‘Regulations on Deep Synthesis Management of Internet Information Service’ extend controls throughout the deepfake lifecycle, requiring platforms to obtain consent before using individuals’ likenesses and to strengthen training data management.

Misinformation and Disinformation

See an explanation of misinformation by the ACMA.

Artificial Intelligence (‘AI’) has added to the ongoing challenge of regulating and controlling the spread of misinformation and disinformation. Misinformation is ‘false, misleading or deceptive information that can cause harm’. Disinformation is misinformation that is deliberately spread to cause confusion and to undermine trust in governments or institutions.[^14] Algorithms and ‘bots’ have become some of the most prolific spreaders of false, unreliable and misleading information online. ‘Bots’ are automated software programs, increasingly driven by AI, that produce content and interact with humans on social media platforms.

The spread of misinformation and disinformation online has been linked to propaganda, the proliferation of abuse and targeted attacks, and harm in emergency situations, where civilians cannot obtain accurate information from reliable sources about how to ensure their own safety. A 2023 Forbes report indicated that 76% of consumers were worried about misinformation produced by AI.

Regulation

Misinformation and disinformation are regulated in Australia through a voluntary code of practice. The Australian Code of Practice on Disinformation and Misinformation (the Code) was released by the Digital Industry Group Inc (DIGI) in February 2021. DIGI is a not-for-profit industry association tasked with administering the Code. The objective of the Code is to combat false material on digital platforms by setting a standard of practice with which signatories are required to comply. Eight technology companies have opted into commitments under the Code; however, under provision 7.1 they are only required to comply with their selected commitments, and provision 7.2 allows a company to withdraw from the Code by notifying DIGI. An independent Complaints Committee resolves complaints about signatories’ compliance with their commitments, and the public can lodge complaints via a portal on DIGI’s website.[^15] The Australian Communications and Media Authority (ACMA) also has oversight of the Code and publishes reports on the adequacy of platforms’ measures to implement their commitments.

The previous Australian Government took steps towards introducing legislation to combat the spread of misinformation and disinformation on digital platforms. A Senate inquiry into the ‘Influence of international digital platforms’ was conducted in 2023 by the Economics References Committee. The inquiry received submissions from organisations such as the Human Rights Law Centre, which raised concerns about the rise of disinformation and misinformation online and the absence of any effective enforcement mechanism to combat it. An exposure draft of the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023 was released for public feedback on 25 June 2023. The legislation would give ACMA new powers to hold digital platforms to account and would strengthen the Code by extending its reach to non-signatories. The Government has not yet announced a timeline for introducing the Bill to Parliament, and there is considerable pushback from organisations concerned about the constraints it may impose on freedom of expression.

Online Safety Act

The Online Safety Act 2021 (Cth) does not directly regulate the spread of misinformation and disinformation. However, the Commissioner has the power to require providers to report on the extent to which they are complying with the Basic Online Safety Expectations (BOSE). Importantly, failure to comply with the expectations in BOSE does not attract legal penalties, as the expectations are not enforceable by proceedings in a court.[^16] Among the ‘Core Expectations’ under BOSE, providers are required to take ‘reasonable steps’ to ensure the safety of their end-users and to prevent ‘harmful material’ being made available on their services.[^17] Under the first BOSE determination in 2022, the Minister for Communications set out expectations that providers take reasonable steps to minimise the extent to which AI and anonymous accounts produce harmful material on their services.[^18] ‘Harmful material’ is not defined in the Act; however, consulting the eSafety Commissioner when determining what may be ‘harmful’ is itself considered a ‘reasonable step’ on the provider’s part.

  1. See, for example, Alana Maurushat, David Vaile and Alice Chow, ‘The Aftermath of Mandatory Internet Filtering and S 313 of the Telecommunications Act 1997 (Cth)’ (2014) 19 Media and Arts Law Review 263. 

  2. Enhancing Online Safety (Non-Consensual Sharing of Intimate Images) Act 2018 (Cth) sch 2 s 4; Criminal Code Act 1995 (Cth) s 474.17A. 

  3. Criminal Code (Non-Consensual Sharing of Intimate Images) Amendment Bill 2018 (Qld) s 4; Criminal Code Act 1899 (Qld) s 207A. 

  4. Criminal Code (Non-Consensual Sharing of Intimate Images) Amendment Bill 2018 (Qld) s 5; Criminal Code Act 1899 (Qld) s 223. 

  5. Criminal Code (Non-Consensual Sharing of Intimate Images) Amendment Bill 2018 (Qld) s 6; Criminal Code Act 1899 (Qld) s 227A. 

  6. Criminal Code (Non-Consensual Sharing of Intimate Images) Amendment Bill 2018 (Qld) s 7; Criminal Code Act 1899 (Qld) s 227B. 

  7. United Kingdom, Department for Science, Innovation & Technology, ‘Online Safety Act: Explainer’ (Web Page, 8 May 2024) https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer#what-the-online-safety-act-does.

  8. Government of Canada, ‘Government of Canada introduces legislation to combat harmful content online, including the sexual exploitation of children’ Canadian Heritage (Web Page, 26 February 2024) https://www.canada.ca/en/canadian-heritage/news/2024/02/government-of-canada-introduces-legislation-to-combat-harmful-content-online-including-the-sexual-exploitation-of-children.html

  9. Government of Canada, ‘Proposed Bill to address Online Harms’ Arts and media (Web Page, 4 April 2024) https://www.canada.ca/en/canadian-heritage/services/online-harms.html

  10. R v TB (No 5) [2023] SASC 118 

  11. S Tong, ‘“You Won’t Believe What She Does!”: An Examination into the Use of Pornographic Deepfakes as a Method of Sexual Abuse and the Legal Protections Available to its Victims’ [2022] UNSWLawJlStuS 25; UNSW Law Journal Student Series No 22-25.

  12. Laura Lavelle, ‘Antonio Rotondo guilty of contempt of court after allegedly creating deepfake images of school students and teachers’ ABC News (online, 6 December 2023) https://www.abc.net.au/news/2023-12-06/qld-deepfake-images-court-charge-antonio-rotondo-school-students/103195578.

  13. Billi Fitzsimmons, ‘A Victorian teen has been arrested after fake nudes of 50 school girls were shared online’ The Daily Aus (online, 13 June 2024) <https://www.newsletter.thedailyaus.com.au/p/teen-arrested-fake-ai-images>.

  14. Cyberspace Administration of China, Regulations on the Administration of Networked Audiovisual Information Services (18 November 2019) http://www.cac.gov.cn/2019-11/29/c_1576561820967678.htm [perma.cc/E2DQ-ZHCQ]. 

