Content Regulation and Online Classification

  1. Content classification in Australia
    1. Classification Guidelines: R18+
    2. Classification Guidelines: X18+
    3. Classification Guidelines: Refused Classification (RC)
  2. Online Content Scheme
  3. Basic Online Safety Expectations
  4. Section 313 of the Telecommunications Act 1997 (Cth)
  5. Image-based abuse
  6. The Office of the eSafety Commissioner
  7. Abhorrent Violent Material
  8. Regulating content in other jurisdictions
  9. Other emerging issues
    1. Deepfakes

Australia has a co-regulatory content regulation scheme. Under a co-regulation model, an industry body (such as the Communications Alliance) usually develops a code of practice, which is then made binding on industry participants through a legislative mechanism. Co-regulation is a common form of regulation in Australian media law.

The Online Safety Act 2021 (Cth) sets out an expectation that industry bodies or associations will develop industry codes to regulate certain types of harmful online material. The Act provides for the eSafety Commissioner to register the codes if certain conditions are met. These include, among other things, that the Commissioner was consulted on the code and the Commissioner is satisfied that:

• The code was developed by a body or association that represents a particular section of the online industry, and the code deals with one or more matters relating to the online activities of those participants.

• To the extent to which the code deals with one or more matters of substantial relevance to the community—the code provides appropriate community safeguards for that matter or those matters.

• To the extent to which the code deals with one or more matters that are not of substantial relevance to the community—the code deals with that matter or those matters in an appropriate manner.

• The body or association published a draft of the code and invited members of the public and industry participants to make submissions, and gave consideration to any submissions that were received.

The Commissioner may also request that a particular body or association which represents a section of the online industry develop an industry code dealing with one or more specified matters relating to the online activities of those industry participants. In April 2022, the Commissioner issued such a request, seeking the development of codes relating to ‘class 1’ material by six industry associations. The associations submitted draft codes in November 2022.

Watch the following videos for background on online content regulation prior to the 2021 changes.

Content classification in Australia

Video overview of Australia’s classification ratings by Emily Rees

The rules that apply to content depend upon its classification. Australia has had a national classification scheme for films, computer games and certain publications since 1995: the National Classification Scheme, which incorporates the National Classification Code. The Online Safety Act establishes an online content scheme which depends in part on classification under the National Classification Code. As such, an overview of the basic features of the Code supports an understanding of the Online Safety Act scheme.

The National Classification Code provides a statement of purpose that classification decisions are to give effect, as far as possible, to the following principles:

  • (a) adults should be able to read, hear and see what they want;
  • (b) minors should be protected from material likely to harm or disturb them;
  • (c) everyone should be protected from exposure to unsolicited material that they find offensive; and
  • (d) the need to take account of community concerns about: (i) depictions that condone or incite violence, particularly sexual violence; and (ii) the portrayal of persons in a demeaning manner.

Publications, films and computer games are classified by the Classification Board according to the Classification Guidelines. Each State and Territory determines the legal consequences of classification. The ratings systems differ by media type:

  • Films: G, PG, M, MA15+, R18+, X18+, RC
  • Publications: Unrestricted, Unrestricted (M), Category 1 Restricted, Category 2 Restricted, RC
  • Games: G, PG, M, MA15+, R18+, RC

Classification Guidelines: R18+

  • High impact violence, simulated sex, drug use, nudity
  • Virtually no restrictions on language

Classification Guidelines: X18+

  • Real depictions of sexual intercourse and sexual activity between consenting adults
  • No depiction of violence or sexual violence
  • No sexually assaultive language
  • No consensual activities that ‘demean’ one of the participants
  • No fetishes (such as body piercing, candle wax, bondage, fisting, etc)
  • No depictions of anyone under 18, or of adults who look under 18.

Classification Guidelines: Refused Classification (RC)

“Publications that appear to purposefully debase or abuse for the enjoyment of readers/viewers, and which lack moral, artistic or other values to the extent that they offend against generally accepted standards of morality, decency and propriety will be classified ‘RC’.”

For films, anything that exceeds X18+ is Refused Classification. For games, anything that exceeds R18+ is RC (a new R18+ category was introduced for games in 2012).

Classification Guidelines: RC (Films)

  • Detailed instruction in crime or violence
  • Descriptions or depictions of child sexual abuse or any other exploitative or offensive descriptions or depictions involving a person who is, or appears to be, a child under 18 years.
  • Violence: Gratuitous, exploitative or offensive depictions of:
    • violence with a very high degree of impact or which are excessively frequent, prolonged or detailed;
    • cruelty or real violence which are very detailed or which have a high impact;
    • sexual violence.
  • Sexual activity: Gratuitous, exploitative or offensive depictions of:
    • activity accompanied by fetishes or practices which are offensive or abhorrent;
    • incest fantasies or other fantasies which are offensive or abhorrent.
  • Drug use:
    • Detailed instruction in the use of proscribed drugs.
    • Material promoting or encouraging proscribed drug use.

Online Content Scheme

The online content scheme under the Online Safety Act relates to two kinds of material: ‘class 1 material’ and ‘class 2 material’. Pursuant to s 106, class 1 material is material which is, or would likely be, classified as ‘RC’ by the Classification Board. Pursuant to s 107, class 2 material is material which is, or would likely be, classified as X18+, R18+, Category 2 restricted or Category 1 restricted.

The Act provides for the notice and removal of class 1 material. Sections 109 and 110 provide that the Commissioner may give a notice to certain online service providers (including social media and hosting services) to remove or cease hosting material which the Commissioner is satisfied is class 1 material that can be accessed by end-users in Australia. It is not relevant where the service is provided from, or where the material is hosted – it merely needs to be accessible from Australia.

The notice may require the service provider to take all reasonable steps to remove the material from the service within 24 hours, or such longer period as the Commissioner specifies. Section 111 requires the service provider to comply with a removal notice to the extent it is capable of doing so.

The Act also provides for the notice and removal of certain class 2 material, namely material that is, or is likely to be, classified as X18+ or Category 2 restricted. The Commissioner may issue a notice to the relevant provider under s 114 or s 115. In this case, the location of the service or hosting is relevant: the Commissioner may only issue notices in relation to services provided from Australia, or content hosted within Australia. Pursuant to s 116, the provider must comply with the notice to the extent it is capable of doing so.

With respect to class 2 material which falls within the R18+ or Category 1 restricted classifications, the Commissioner has the power to give the provider a remedial notice under s 119. The notice may require the relevant provider to remove the material or ensure that the material is subject to a ‘restricted access system’, that is, an access-control system which the Commissioner declares to be a ‘restricted access system’. In essence, these are systems which limit the exposure of persons under 18 to ‘age-inappropriate’ content online.

Under s 124, the Commissioner also has the power to issue a notice to a search engine provider requiring the provider to cease providing links to class 1 material (a ‘link deletion notice’) in certain circumstances. Under s 128, the Commissioner may issue a notice to an app distribution service provider requiring it to cease enabling end-users in Australia to download an app that facilitates the posting of class 1 material (an ‘app removal notice’) in certain circumstances.
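To draw the threads of ss 106 to 119 together, the notice powers can be summarised as a simple decision procedure. The sketch below is purely illustrative: the function and constant names are our own invention rather than terms from the Act, and the many additional statutory conditions attaching to each power are glossed over.

```python
# Illustrative sketch (not a statement of the law): which Online Safety Act
# notice power is most relevant for material of a given classification.

CLASS_1 = {"RC"}                                        # s 106
CLASS_2_REMOVABLE = {"X18+", "Category 2 restricted"}   # ss 114-115
CLASS_2_REMEDIAL = {"R18+", "Category 1 restricted"}    # s 119

def applicable_power(classification: str, provided_or_hosted_in_australia: bool) -> str:
    if classification in CLASS_1:
        # Class 1: removal notice wherever the service or host is located,
        # provided the material is accessible to end-users in Australia.
        return "class 1 removal notice (ss 109-110)"
    if classification in CLASS_2_REMOVABLE:
        # Class 2 (X18+ / Category 2): removal notice only for services
        # provided from, or content hosted in, Australia.
        if provided_or_hosted_in_australia:
            return "class 2 removal notice (ss 114-115)"
        return "no removal notice available (service/host outside Australia)"
    if classification in CLASS_2_REMEDIAL:
        # Class 2 (R18+ / Category 1): remedial notice requiring removal or a
        # 'restricted access system'.
        return "remedial notice (s 119)"
    return "outside the online content scheme"

print(applicable_power("RC", provided_or_hosted_in_australia=False))
# -> class 1 removal notice (ss 109-110)
```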

Basic Online Safety Expectations

The Online Safety Act provides for the Minister for Communications to make a determination (a form of legislative instrument) setting out basic online safety expectations.

The first determination was made in 2022. The Online Safety (Basic Online Safety Expectations) Determination 2022 specifies the basic online safety expectations for social media services and for other services that allow end-users to access material using a carriage service, or that deliver material by means of a carriage service.

Under s 49, the Commissioner may require the relevant providers to submit periodic reports on how they are meeting the expectations set out in the determination. The Commissioner may also publish statements on the Commissioner’s website about a provider’s compliance or non-compliance with the expectations.

Section 313 of the Telecommunications Act 1997 (Cth)

Video overview by Kaava Watson: Section 313

In Australia, several different forms of pressure have been applied in recent years to encourage intermediaries to police the actions of their users. The bluntest is direct action by law enforcement agencies, who are empowered to make requests of telecommunications providers under s 313 of the Telecommunications Act. This provision requires carriers and carriage service providers to “do the carrier’s best or the provider’s best to prevent telecommunications networks and facilities from being used in, or in relation to, the commission of offences against the laws of the Commonwealth or of the States and Territories”, and to “give officers and authorities of the Commonwealth and of the States and Territories such help as is reasonably necessary” to enforce the criminal law, impose pecuniary penalties, assist foreign law enforcement, protect the public revenue and safeguard national security.

Gab Red Explains How s 313 Is Used by Government Agencies to Block Websites

Matt Cartwright Explains the Recommendations of the Recent Inquiry Into the Use of s 313

The section essentially enables police and other law enforcement agencies to direct ISPs to hand over information about users and their communications. Increasingly, however, it is also apparently used by a number of government actors to require service providers to block access to content that appears to be unlawful, in cases ranging from the Australian Federal Police seeking to block access to child sexual abuse material to the Australian Securities and Investments Commission (ASIC) blocking access to phishing websites. Even the RSPCA is reported to have used the power, although the details of its request are not clear. There is significant concern over the lack of transparency around s 313(3) and the lack of safeguards over its use.1 These concerns came to the fore in 2013 when ASIC asked an ISP to block a particular IP address, not realising that the address was shared by up to 250,000 different websites, including the Melbourne Free University.

Image-based abuse

Video overview of image-based abuse laws by Danielle Harris

The non-consensual sharing of intimate images is often colloquially referred to as ‘revenge porn’. The term ‘image-based abuse’ is generally considered to be a better term because it avoids the victim-blaming connotations that the abuse is done in ‘revenge’ for some perceived wrong.

The National Statement of Principles Relating to the Criminalisation of the Non-consensual Sharing of Intimate Images encouraged each Australian jurisdiction to adopt nationally consistent criminal offences.

Under the Criminal Code Act 1995 (Cth), it is an offence to post, or threaten to post, non-consensual intimate images.2 Specifically, s 474.17 of the Criminal Code sets out an offence for the use of a carriage service in a way that reasonable persons would regard as being, in all the circumstances, menacing, harassing or offensive. Section 474.17A makes it an aggravated offence where that use involves transmitting, making available, distributing or promoting private sexual material.

Section 75 of the Online Safety Act prohibits the posting, or threatened posting, of an intimate image of another person without their consent. The prohibition applies where the person in the image or the person posting the image is ordinarily resident in Australia. An ‘intimate image’ is defined to include images that depict genital or anal areas; a female, transgender or intersex person’s breasts; or private activities, such as showering, using the toilet or engaging in a sexual act not ordinarily done in public.

There is also a complaints-based system in the Online Safety Act, whereby the eSafety Commissioner may issue a removal notice or pursue another civil remedy upon receipt of a victim’s complaint.

Queensland extended the definition of ‘intimate image’ to include original or altered (‘photoshopped’) still or moving images of a person engaged in intimate sexual activity; a person’s bare genital or anal region; or a female, transgender or intersex person’s breasts.3

The definition covers an image that has been altered to appear to show any of the above-mentioned things.

The State also introduced three new misdemeanours into its Criminal Code to broaden the scope of conduct captured by the offence: distributing intimate images without the consent of the person depicted,4 observing or recording in breach of privacy,5 and distributing prohibited visual recordings.6

Angelina Kardum explains: How the major social media platforms deal with image-based abuse

Most major social media sites now have policies against image-based abuse in their community guidelines or standards. Victims of image-based abuse can make a report directly to the site on which their intimate image was shared. The report is then assessed against the site’s guidelines or standards and, if the image appears to violate them, it is generally removed within 24 hours. While major social media services have taken positive steps towards tackling image-based abuse, such as developing reporting and take-down mechanisms, these mechanisms have their shortcomings. The two main issues with the current approaches are the delays associated with the assessment of reports and the heavy reliance on self-reporting.

The Office of the eSafety Commissioner

Lauren Trickey explains how to make a complaint to the eSafety Commissioner

The eSafety Commissioner is a statutory office first established by the Enhancing Online Safety Act 2015 (Cth) to promote and enhance online safety. The Commissioner’s powers were later expanded by the Online Safety Act 2021 (Cth). While most of the Commissioner’s functions are contained in the Online Safety Act 2021, the Commissioner also has powers and functions under the Telecommunications Act 1997 (Cth) and the Criminal Code Act 1995 (Cth).

The Commissioner can receive reports of cyberbullying, image-based abuse, and offensive and illegal content.

Under s 30, complaints about cyberbullying of a child can be made by an Australian child, or by a parent, guardian or person authorised by the child. An adult can also make a complaint if they believe they were the target of cyberbullying material as a child, so long as the complaint is made within a reasonable time after they became aware of the material and within 6 months after they turned 18. Cyberbullying refers to online material intended to seriously threaten, intimidate, harass or humiliate an Australian child.

The 2021 amendments introduced the world’s first legal scheme dealing with cyberbullying of adults. Under s 36, an Australian adult may make a complaint to the Commissioner about cyber-abuse material. Cyber-abuse material is material that an ordinary reasonable person would conclude is likely intended to cause serious harm to a particular Australian adult, and that an ordinary reasonable person in the position of that adult would regard as being, in all the circumstances, menacing, harassing or offensive.

As outlined above, image-based abuse complaints can also be made to the Commissioner. Pursuant to s 32, complaints can be made by the person in the intimate image; a person authorised to make a report; or a parent or guardian of a child or of a person who does not have capacity.

Australian residents can also report offensive or illegal content, which includes abhorrent violent material or material depicting illegal acts.

For each type of material, an online form can be completed on the eSafety website. Each form requests information about what is contained or depicted in the material and where the material has been posted. After receiving a complaint, the Commissioner has the power to conduct an investigation (as the Commissioner thinks fit). The Commissioner assesses the material complained of to determine the appropriate course of action, which may include liaising with the relevant platform to have the material removed.

Abhorrent Violent Material

See overview by Georgie Vine about the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019

The Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 creates new offences under the Criminal Code Act 1995 (Cth), effective from 6 April 2019.

These new provisions require content services (such as social media platforms) and hosting providers to remove abhorrent violent material as soon as reasonably possible, and to refer it to the Australian Federal Police.

These offences target content that is reasonably capable of being accessed within Australia, regardless of where the material was created or where the platform operator is located. The offences carry substantial penalties if individuals and companies do not remove or report such material: fines of up to 10% of annual global turnover for companies, and up to 3 years’ imprisonment for individuals.

There are a few defences to the new offences, including material necessary for law enforcement; material distributed by journalists; material used for scientific, medical, academic or historical research; and the exhibition of artistic works.

Regulating content in other jurisdictions

Snoot Boot explains France and Germany’s online hate speech laws

Both France and Germany have attempted stricter approaches to regulating online hate speech. In particular, Germany’s laws require social media companies to remove hate speech and report users to the police, or else face significant fines. In 2020, France passed similar laws, but their core provisions were struck down as unconstitutional by France’s Constitutional Council, which found that they imposed an unreasonable burden on freedom of speech because they incentivised over-censorship. These laws highlight a deeper tension between free speech and the perceived need to regulate and censor hateful ideologies being spread online.

Other emerging issues

Deepfakes

See an explanation of deepfakes by Eric Briese

A deepfake is a form of video manipulation in which artificial intelligence and deep learning are used to alter the appearance of a person in a video so that they appear to be someone else. While fake and edited videos are nothing new, this recent phenomenon has caused concern due to the ease and accessibility of their creation.

The concerns surrounding deepfakes stem from their potential use for malicious purposes such as fabricating evidence, blackmail or personal attacks. The potential harm that deepfakes can cause is also magnified by how widely and quickly material spreads in our increasingly interconnected, online world.

Deepfakes are not explicitly illegal, but depending on how they are used they may be captured by other laws. For instance, creating and spreading deepfake pornography may be illegal under s 223 of the Criminal Code in Queensland, which makes it a criminal offence to distribute an intimate image of another person without that person’s consent and in a way that would cause the other person distress reasonably arising in all the circumstances. Some uses of deepfakes might constitute fraud under the Crimes Act 1900 (NSW), or misleading and deceptive conduct in trade or commerce under the Australian Consumer Law. Some uses might involve the use of ‘personal information’ and accordingly be subject to the Privacy Act 1988 (Cth).

Regulating deepfakes is a difficult challenge. One possibility is to build detection algorithms which leverage the same deep learning technology to automatically identify deepfakes, although such detection is far from perfect. Companies are working towards viable solutions, for example by releasing collections of deepfakes to help researchers and experts study the phenomenon.
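As a concrete (and heavily simplified) illustration of what automated detection involves, the sketch below shows the shape of a frame-level deepfake classifier: a small convolutional network that scores the probability that a video frame is fake. The architecture and all names are our own assumptions for illustration, not any production detector.

```python
# Minimal sketch of a frame-level deepfake classifier (illustrative only).
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Tiny CNN that scores a video frame as real (0) or fake (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # collapse to one 32-dim vector per frame
        )
        self.head = nn.Linear(32, 1)   # single logit: higher means 'more likely fake'

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, height, width) RGB frames extracted from a video
        return self.head(self.features(frames).flatten(1))

model = FrameClassifier()
dummy_frames = torch.randn(4, 3, 224, 224)    # stand-in for real video frames
fake_probability = torch.sigmoid(model(dummy_frames))
print(fake_probability.squeeze())             # untrained, so scores are meaningless
```

A real system would train this (or a far larger model) on labelled datasets of genuine and synthesised footage, which is exactly what the released collections of deepfakes mentioned above are intended to support.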

  1. See, for example, Alana Maurushat, David Vaile and Alice Chow, ‘The Aftermath of Mandatory Internet Filtering and S 313 of the Telecommunications Act 1997 (Cth)’ (2014) 19 Media and Arts Law Review 263. 

  2. Enhancing Online Safety (Non-Consensual Sharing of Intimate Images) Act 2018 (Cth) sch 2 s 4; Criminal Code Act 1995 (Cth) s 474.17A. 

  3. Criminal Code (Non-Consensual Sharing of Intimate Images) Amendment Bill 2018 (Qld) s 4; Criminal Code Act 1899 (Qld) s 207A. 

  4. Criminal Code (Non-Consensual Sharing of Intimate Images) Amendment Bill 2018 (Qld) s 5; Criminal Code Act 1899 (Qld) s 223. 

  5. Criminal Code (Non-Consensual Sharing of Intimate Images) Amendment Bill 2018 (Qld) s 6; Criminal Code Act 1899 (Qld) s 227A. 

  6. Criminal Code (Non-Consensual Sharing of Intimate Images) Amendment Bill 2018 (Qld) s 7; Criminal Code Act 1899 (Qld) s 227B.