- Content classification in Australia
- Online Classification
- Content Hosted in Australia (Sch 7)
- Internationally Hosted Content (Sch 5)
- Section 313 of the Telecommunications Act 1997 (Cth)
- Image-based abuse
- The Office of the e-Safety Commissioner
- Abhorrent Violent Material
- Regulating content in other jurisdictions
- Other emerging issues
Video Overview of Online Content Regulation in Australia by Nicolas Suzor
Australia has a co-regulatory content regulation scheme. Telecommunications providers work together to develop industry codes of practice. These industry codes are overseen by the Australian Communications and Media Authority (ACMA) under the Broadcasting Services Act 1992 (Cth) Schedules 5 and 7. Schedule 5 applies to material hosted outside of Australia, and Schedule 7 applies to material hosted inside Australia. Australia also has an Office of the eSafety Commissioner, who has powers to investigate and address complaints about internet content. The Communications Alliance is the industry body that represents ISPs and Content Hosts.
Video Overview of Australia’s Classification Ratings by Emily Rees
The National Classification Code provides a statement of purpose that classification decisions are to give effect, as far as possible, to the following principles: (a) adults should be able to read, hear and see what they want; (b) minors should be protected from material likely to harm or disturb them; (c) everyone should be protected from exposure to unsolicited material that they find offensive; (d) the need to take account of community concerns about: (i) depictions that condone or incite violence, particularly sexual violence; and (ii) the portrayal of persons in a demeaning manner.
Publications, Films, and Computer games are rated by the Classification Board, according to the Classification Guidelines. Each State and Territory determines the consequences of classification. The ratings systems differ by media:
- Films: G, PG, M, MA15+, R18+, X18+
- Publications: Unrestricted, Unrestricted (M), Category 1 Restricted, Category 2 Restricted
- Games: G, PG, M, MA15+, R18+
R18+ (films):
- High impact violence, simulated sex, drug use, nudity
- No restrictions on language

X18+ (films):
- Real depictions of sexual intercourse and sexual activity between consenting adults
- No depiction of violence or sexual violence
- No sexually assaultive language
- No consensual activities that ‘demean’ one of the participants
- No fetishes (such as body piercing, candle wax, bondage, fisting, etc.)
- No depictions of anyone under 18, or of adults who look under 18.
“Publications that appear to purposefully debase or abuse for the enjoyment of readers/viewers, and which lack moral, artistic or other values to the extent that they offend against generally accepted standards of morality, decency and propriety will be classified ‘RC’.”
For films, anything that exceeds X18+ is Refused Classification (RC). For games, anything that exceeds R18+ is RC (a new R18+ category for games was introduced in 2012).
Classification Guidelines: RC (Films)
- Detailed instruction in crime or violence
- Descriptions or depictions of child sexual abuse or any other exploitative or offensive descriptions or depictions involving a person who is, or appears to be, a child under 18 years.
- Violence: Gratuitous, exploitative or offensive depictions of:
- violence with a very high degree of impact or which are excessively frequent, prolonged or detailed;
- cruelty or real violence which are very detailed or which have a high impact;
- sexual violence.
- Sexual activity: “Gratuitous, exploitative or offensive depictions of:
- activity accompanied by fetishes or practices which are offensive or abhorrent;
- incest fantasies or other fantasies which are offensive or abhorrent.”
- Drug use:
- Detailed instruction in the use of proscribed drugs.
- Material promoting or encouraging proscribed drug use.
The online classification scheme is a complaints-based scheme. When ACMA receives a complaint, it applies the Guidelines to come to a preliminary view as to whether content is ‘potential prohibited content’. If the content has not yet been classified, ACMA will obtain classification from the Classification Board.
Video Overview of the complaints system under Sch 7 by Hannah Burnett
Under Schedule 7, members of the public can make complaints about prohibited or potentially prohibited content,1 which the Commissioner must investigate. Content that is prohibited in Australia is defined as material that is or would be:2
- Refused Classification (content that exceeds the limits of all other categories. RC content is not necessarily unlawful to possess or access, but it is unlawful to broadcast, sell, or screen publicly)
- Rated X18+ (films that depict sexually explicit content);
- Rated R18+ (films or computer games with high-impact violence, sex scenes, or drug use) without a Restricted Access System; or
- Audiovisual content that is Rated MA15+ (‘strong content’ that is legally restricted to persons 15 years and over), provided over a commercial service, and not subject to a restricted access system.
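The prohibited-content categories above can be summarised as a simple decision rule. The sketch below is an illustrative simplification for exposition only, not the statutory test; the rating labels and parameter names are assumptions, not terms from the Act:

```python
# Illustrative sketch of the Sch 7 'prohibited content' categories described
# above. A simplification for exposition -- not the statutory definition.

def is_prohibited(rating: str,
                  has_restricted_access_system: bool = False,
                  is_commercial_audiovisual: bool = False) -> bool:
    """Return True if content with the given rating would be prohibited."""
    if rating == "RC":
        # Refused Classification: always prohibited
        return True
    if rating == "X18+":
        # Sexually explicit films: always prohibited online
        return True
    if rating == "R18+":
        # Prohibited unless behind a Restricted Access System
        return not has_restricted_access_system
    if rating == "MA15+":
        # Only commercial audiovisual content without a Restricted Access System
        return is_commercial_audiovisual and not has_restricted_access_system
    # G, PG, M and unclassified lower-impact content are not prohibited
    return False
```

So, for example, an R18+ film behind a compliant Restricted Access System is not prohibited, while the same film without one is.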
If prohibited content is available from servers with an ‘Australian connection’, the Commissioner must order the service provider to remove or stop serving the content. This is done in different ways for different types of content:
- For hosted content, the Commissioner must serve the hosting service a ‘final take down notice’.3
- For live-streaming content, the Commissioner must give the provider a ‘final service-cessation notice’.4
- Where prohibited content is linked to by an Australian indexing service (or search engine), the Commissioner must give the provider a ‘final link-deletion notice’.5
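The three notice types above amount to a mapping from content type to remedy. A minimal sketch (the dictionary keys are descriptive labels chosen here, not statutory terms):

```python
# Illustrative mapping from the type of Australian-connected content to the
# final notice the Commissioner must issue under Sch 7.
NOTICE_BY_CONTENT_TYPE = {
    "hosted": "final take-down notice",                 # cl 47
    "live-streamed": "final service-cessation notice",  # cl 56
    "linked by indexing service": "final link-deletion notice",  # cl 62
}
```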
To determine the applicable rating, internet content is evaluated as if it were a ‘film’6 under the Classification Guidelines. If content is ‘potentially prohibited’, the Commissioner must issue an interim notice and apply to the Classification Board for classification.
A Restricted Access System must comply with the ACMA’s Restricted Access Systems Declaration 2007, which: (a) requires an application for access to R18+ content; (b) provides warnings and safety information for R18+ content; (c) verifies the age of applicants; (d) limits access to R18+ content, or R18+ and MA15+ content, as required; (e) includes a risk analysis; (f) includes quality assurance measures for R18+ content; and (g) meets the record-keeping requirements for R18+ content.
Where relevant industry codes have failed or have not been adopted, the ACMA may replace them with its own standards. The ACMA’s ability to replace an industry code with a standard varies depending on the legislation that governs the industry.
Generally, the ACMA must first request that an association representing the industry develop a replacement code within 120 days. If no such association exists, the ACMA must publish the request in the federal Gazette. If the developed code fails in some respect, or no association comes forward, the ACMA may replace the industry code with a standard. In some industries, the relevant Minister may direct the ACMA to establish a standard to replace a code.
Video Overview of the Operation of Sch 5 for Internationally Hosted Content by Anon
Legislation provides for blocking of content if there is no industry code in force. However, there is a current industry code, and it does not require blocking. Under the code, URLs of prohibited content are given to voluntary filter vendors, but no further action is taken in relation to overseas-hosted content.
Video Overview by Kaava Watson: Section 313
In Australia, several different forms of pressure have been exercised in recent years to encourage intermediaries to take action to police the actions of their users. The most blunt is direct action by law enforcement agencies, who are empowered to make requests of telecommunications providers under s 313 of the Telecommunications Act. This provision requires carriers and carriage service providers to “do the carrier’s best or the provider’s best to prevent telecommunications networks and facilities from being used in, or in relation to, the commission of offences against the laws of the Commonwealth or of the States and Territories”, and to “give officers and authorities of the Commonwealth and of the States and Territories such help as is reasonably necessary” to enforce criminal law, impose pecuniary penalties, assist foreign law enforcement, protect the public revenue, and safeguard national security.
Gab Red Explains How s 313 Is Used by Government Agencies to Block Websites
The section essentially enables police and other law enforcement agencies to direct ISPs to hand over information about users and their communications. Increasingly, however, it is also apparently used by a number of government actors to require service providers to block access to content that appears to be unlawful, in cases ranging from the Australian Federal Police seeking to block access to child sexual abuse material to the Australian Securities and Investments Commission (ASIC) blocking access to phishing websites. Even the RSPCA is reported to have used the power, although the details of its request are not clear. There is significant concern over the lack of transparency around s 313(3) and the lack of safeguards over its use.7 These concerns came to the fore in 2013 when ASIC asked an ISP to block a particular IP address, not realising that the address was shared by up to 250,000 different websites, including the Melbourne Free University. The operation of s 313(3) is currently under review by the House of Representatives Standing Committee on Infrastructure and Communications.
Video overview of image-based abuse laws by Danielle Harris
The non-consensual sharing of intimate images is often colloquially referred to as ‘revenge porn’. The term ‘image-based abuse’ is generally considered to be a better term because it avoids the victim-blaming connotations that the abuse is done in ‘revenge’ for some perceived wrong.
The National Statement of Principles Relating to the Criminalisation of the Non-consensual Sharing of Intimate Images encouraged each Australian jurisdiction to adopt nationally consistent criminal offences.
The Commonwealth has inserted a new criminal offence which prohibits the posting, or threatening to post, non-consensual intimate images.8 It also introduced a complaints-based system whereby the eSafety Commissioner may issue a removal notice or another civil remedy upon receipt of a victim’s complaint.9
Queensland extended the definition of ‘intimate’ images to include original or photoshopped still or moving images of a person engaged in intimate sexual activity; a person’s bare genital or anal region; or a female, transgender or intersex person’s breasts.10
The State also introduced three new misdemeanours into its Criminal Code to broaden the scope of conduct captured under the offence. These include distributing intimate images,11 observing or recording in breach of privacy,12 and distributing prohibited visual recordings.13
Angelina Kardum: [How the major social media platforms deal with image-based abuse](https://www.youtube.com/watch?v=Y-RbaE73B7M&feature=youtu.be)
Most major social media sites now have policies against image-based abuse in their community guidelines or standards. Victims of image-based abuse can make a report directly to the site on which their intimate image was shared. This report is then assessed against the site’s community guidelines or standards and if it appears to be in violation of the community guidelines or standards, the image is generally removed within 24 hours. Whilst major social media services have taken positive steps towards tackling image-based abuse, such as developing reporting and take-down mechanisms, these mechanisms have their shortcomings. The two main issues with the current approaches taken by social media services are the delays associated with the assessment of reports and the heavy reliance on self-reporting.
Lauren Trickey explains how to make a complaint to the eSafety Commissioner
The eSafety Commissioner is a statutory office established by the Enhancing Online Safety Act 2015 (Cth) to promote and enhance online safety. The Commissioner can receive reports of cyberbullying, image-based abuse, or offensive and illegal content.
Cyberbullying complaints can be made by an Australian child or parent, guardian or person authorised by the child. Cyberbullying refers to online material intended to seriously threaten, intimidate, harass or humiliate an Australian child.
Image-based abuse complaints can be made by the person in the intimate image, a person authorised to make a report, or a parent or guardian of a child or of a person who does not have capacity. The Commissioner can only assist if the image is hosted in Australia, or if the person in the image or the person who posted the image resides in Australia. Image-based abuse occurs where a person has shared, or threatened to share, an intimate image of another person without their consent.
Australian residents can also report offensive or illegal content, which includes abhorrent violent material or material depicting illegal acts.
For each type of material an online form can be completed on the eSafety website. Each form requests information regarding what is contained or depicted in the material and where the material has been posted. After receiving a complaint, the Commissioner assesses the material complained of to determine the appropriate course of action, which may include liaising with the relevant platform for the material to be removed.
See overview by Georgie Vine about the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019
The Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 creates new offences under the Criminal Code, effective from 6 April 2019. These new provisions require social media websites and other platforms to remove abhorrent violent material as soon as reasonably possible, and to refer it to the Australian Federal Police. The offences target content that is reasonably capable of being accessed within Australia, regardless of where the platform operator is located. They carry substantial penalties for individuals and companies that do not remove or report such material: fines of up to 10% of annual global turnover for companies, and up to 3 years’ imprisonment for individuals.
There are a few defences to the new offences, including material distributed by journalists; material used for scientific, medical, academic or historical research; and the exhibition of artistic works.
Snoot Boot explains France and Germany’s online hate speech laws
Both France and Germany have attempted stricter approaches to regulating online hate speech. In particular, Germany’s laws require social media companies to remove hate speech and report users to the police, or else face significant fines. In 2020, France passed a similar law, but France’s Constitutional Council struck down its core provisions as an unconstitutional burden on freedom of expression, because they incentivised over-censorship. These laws highlight a deeper tension between free speech and the perceived need to regulate and censor hateful ideologies being spread online.
See an explanation of deepfakes by Eric Briese
A deepfake is a technique of video manipulation in which artificial intelligence and deep learning are leveraged to modify the appearance of a person in a video so that they appear to be someone else. While fake and edited videos are nothing new, this recent phenomenon has caused concern due to the ease and accessibility of their creation.
The concerns surrounding deepfakes stem from their potential use for malicious purposes, such as fabricating false evidence, blackmail, or personal attacks. The potential harm that deepfakes can cause is also magnified by the widespread use of the internet in our increasingly interconnected world.
Deepfakes are not explicitly illegal, but depending on how they are used they may be captured under other laws. For instance, creating and spreading deepfake pornography is illegal under section 223 of the Criminal Code 1899 (Qld).
Regulating deepfakes is a difficult challenge. One possibility is to create algorithms that leverage the same deep learning technology to automatically detect deepfakes, but this is not perfect. Companies are working towards viable solutions, such as releasing collections of deepfakes to help researchers and experts study this new phenomenon.
Broadcasting Services Act 1992 (Cth) sch 7 cl 37 ↩
Broadcasting Services Act 1992 (Cth) sch 7 cl 20 ↩
Broadcasting Services Act 1992 (Cth) sch 7 cl 47 ↩
Broadcasting Services Act 1992 (Cth) sch 7 cl 56 ↩
Broadcasting Services Act 1992 (Cth) sch 7 cl 62 ↩
Broadcasting Services Act 1992 (Cth) sch 7 cl 25 ↩
See, for example, Alana Maurushat, David Vaile and Alice Chow, ‘The Aftermath of Mandatory Internet Filtering and S 313 of the Telecommunications Act 1997 (Cth)’ (2014) 19 Media and Arts Law Review 263. ↩
Enhancing Online Safety (Non-Consensual Sharing of Intimate Images) Act 2018 (Cth) sch 2 s 4; Criminal Code Act 1995 (Cth) s 474.17A. ↩
Enhancing Online Safety (Non-Consensual Sharing of Intimate Images) Act 2018 (Cth) sch 1 s 24; Enhancing Online Safety Act 2015 (Cth) ss 19A, 27, 44D–44F. ↩
Criminal Code (Non-Consensual Sharing of Intimate Images) Amendment Bill 2018 (Qld) s 4; Criminal Code Act 1899 (Qld) s 207A. ↩
Criminal Code (Non-Consensual Sharing of Intimate Images) Amendment Bill 2018 (Qld) s 5; Criminal Code Act 1899 (Qld) s 223. ↩
Criminal Code (Non-Consensual Sharing of Intimate Images) Amendment Bill 2018 (Qld) s 6; Criminal Code Act 1899 (Qld) s 227A. ↩
Criminal Code (Non-Consensual Sharing of Intimate Images) Amendment Bill 2018 (Qld) s 7; Criminal Code Act 1899 (Qld) s 227B. ↩