Regulating Private Intermediaries
- Freedom of Speech Online
- Racism, misogyny and abuse: the internet has problems
- State Intervention
- Platform Governance and Content Moderation
Freedom of Speech Online
Overview of Online Freedom of Speech Issues by Nic Suzor
The great potential of the internet is to democratise speech: it gives ordinary people the ability to be heard by massive audiences, in a way that broadcast and mass media never did. When Time Magazine named ‘You’ the person of the year in 2006, it reflected this sense of great optimism about the democratic potential of the internet.
Article 19 of the ICCPR protects the freedom to seek and impart information. In many cases, the freedom that the internet provides seems almost unlimited. Stewart Brand famously said in 1984 that ‘information wants to be free’. John Gilmore explained why regulating speech on the internet is extremely difficult: ‘the Net interprets censorship as damage and routes around it’. However, there are conflicts and a series of difficult issues that we will explore below.
Racism, misogyny and abuse: the internet has problems
Lucinda Nelson on racism on social media in 2020
Social media has made spreading abusive content easier, but regulating it more difficult. Activists have long demanded action from social media platforms to address hateful content that they help distribute online. Three of the central, overarching demands are: increased transparency; more proactive efforts; and greater engagement with experts and marginalised populations.
Although there have been some positive responses to these demands, social media platforms have been reluctant to make changes that effectively address abuse.
State Intervention
There are serious problems with direct state intervention and the abuse of state-created rules. There are ongoing debates about the extent to which states should regulate speech online. In Australia, the Federal Government sought to introduce a filtering system that would restrict access to speech that was ‘offensive’, including speech that was not illegal. In many countries around the world, governments are requiring online intermediaries to censor information in ways that likely violate Article 19. Russia, for example, has leant on Twitter to block pro-Ukrainian activists; Turkey has required Twitter to block content within Turkey from anti-government protestors.
There are also serious problems with notice-and-takedown. Notice-and-takedown is a state-created response to the need for effective ways to police the internet, but it leaves open massive potential for abuse. There’s a serious procedural problem here: intermediaries are threatened with liability if they don’t remove allegedly infringing content, but they’re not in a position to know whether the material is actually infringing or lawfully protected. They’re not courts, and can’t really legitimately make this decision.
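To make the incentive problem concrete, the sketch below models the decision an intermediary faces when it receives a takedown notice. It is a purely illustrative toy in Python: the function name, parameters and outcomes are assumptions made for the sake of argument, not the terms of any actual safe-harbour regime.

```python
# Toy model of the decision an intermediary faces under notice-and-takedown.
# Names and outcomes are illustrative only, not any statute's actual terms.

def respond_to_notice(claim_looks_strong: bool,
                      host_can_assess_legality: bool) -> str:
    """Return the action a liability-averse host is likely to take."""
    if host_can_assess_legality:
        # In practice this branch is rare: hosts are not courts and usually
        # lack the facts (e.g. whether a use is licensed or a fair dealing).
        return "take down" if claim_looks_strong else "keep up"
    # If the host cannot assess the claim, the safe option is removal:
    # liability attaches to leaving infringing material up,
    # not to removing lawful material.
    return "take down"


# Even a weak or abusive claim tends to result in removal.
print(respond_to_notice(claim_looks_strong=False,
                        host_can_assess_legality=False))  # take down
```

The point of the sketch is that whenever the host cannot assess legality, the liability-minimising choice is removal, regardless of the strength of the claim.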
Private Regulation and the Power of Platforms
Private organisations are largely responsible for determining what information we can communicate and seek. Intermediaries like Google control what information turns up in search results. Social networks like Facebook make important decisions about what information we see. Facebook, for example, has admitted to manipulating the content of news feeds to drive changes in users’ moods. Because Facebook’s goal is to sell advertising, it has a strong incentive to show us the most profitable content. This is a huge amount of power over human thought.
All of these companies also have standards that determine what content is acceptable. So, for example, male nipples are OK on Facebook, but not female nipples. Pictures of beheadings are OK, but not pictures of mothers breastfeeding. Pictures of marijuana are OK, but not pictures of other drugs.
These private organisations are increasingly important in how we access information, but they are not bound by constitutional protections of free speech.
Platform Responsibility: Imposing Obligations on Private Intermediaries
Video Overview of the UN Guiding Principles on Business and Human Rights by the Danish Institute for Human Rights
The new gatekeepers are private actors who have the power to control speech but no real responsibilities. On the one hand, we often don’t know when they’re censoring speech; on the other, there is a lot of abusive speech, hate speech, and vilification that is difficult to respond to or control.
There are ongoing battles to try to get more transparency about how private entities make decisions. These new gatekeepers are under increasing pressure to justify the policies they make and the way those policies are enforced.
This becomes even more important when we look at the conflict between freedom of speech and other legitimate legal rights. For example, there are many who complain that private companies do not do enough to limit the spread of hate speech. In recent years, there has been a lot of publicity about gender-based hate speech in particular, but there are serious questions about how well the social networks that control speech encourage and protect minority viewpoints in general.
If we think of freedom of speech not just as a negative right to be free from overt state interference, but as a thicker substantive right to maintain and express one’s opinions, there is a real conflict here. Minority voices are being drowned out by abuse[1] or silenced by algorithms, filters, and moderators with inbuilt majoritarian biases.
This represents a key tension between the right of freedom of expression, and the ability to actually enforce legal rules and social norms. Private intermediaries are increasingly being asked to do more, but they don’t have the legitimate authority of courts. If they don’t do more, though, people get hurt. Finding a way to balance these tensions is one of the key challenges for regulating the internet.
The United Nations Guiding Principles on Business and Human Rights explain that businesses have a responsibility to respond to human rights abuses with which they are involved. Civil society groups are increasingly seeking to get intermediaries to cooperate. Using the language of ‘responsibility’, they’re trying to drive change in the policies of platforms.
This proceeds both in terms of seeking more transparency and accountability when platforms censor information or hand over personal details at the request of governments, and also when platforms remove, or refuse to remove, content that violates their terms of service. Across many of these services, people are also increasingly looking for technical ways to achieve regulation (see, for example, Twitter’s block lists).
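As a concrete illustration of this kind of technical self-help, here is a minimal sketch of how a shared block list might be applied to filter a timeline. The class, data and method names are hypothetical and chosen for illustration; real platforms implement blocking server-side and with far more sophistication.

```python
# Minimal illustration of a shared block list applied to a timeline.
# The data structures and names here are hypothetical, not any
# platform's actual implementation.

from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str


class BlockList:
    """A shareable set of blocked account handles."""

    def __init__(self, blocked_handles=None):
        self.blocked = set(blocked_handles or [])

    def merge(self, other: "BlockList") -> None:
        # Subscribing to someone else's list simply unions the two sets.
        self.blocked |= other.blocked

    def filter_timeline(self, posts):
        # Drop any post whose author appears on the list.
        return [p for p in posts if p.author not in self.blocked]


# Example: subscribe to a community-maintained list, then filter a timeline.
my_list = BlockList({"troll_account_1"})
community_list = BlockList({"abusive_account_2", "spam_bot_3"})
my_list.merge(community_list)

timeline = [
    Post("friendly_user", "Hello!"),
    Post("abusive_account_2", "abusive message"),
]
print([p.text for p in my_list.filter_timeline(timeline)])  # ['Hello!']
```

The design point is that a list can be maintained collectively and merged into an individual user’s settings, which is roughly how shared block list tools have worked in practice.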
Platform Governance and Content Moderation
What is Platform Governance?
“Governance” refers to the system of rules, policies, practices and standards that manages how an organisation operates, together with the mechanisms used to hold organisations and individuals to account. “Platform governance”, then, refers to the management of platforms such as social media, and of their users and stakeholders, using a range of rules, policies, practices and standards to govern the use, operation and purpose of the platform.
Self-Regulation Model of Platform Governance
There are many different models of platform governance, but the most common is the self-regulation model. Many online platforms hold a great deal of discretion as to how they govern their platforms, and while there have been recent improvements in content moderation, self-regulation remains the predominant form of platform governance.
Responsibilities of Platforms in Content Management
Online platforms have a responsibility to manage the content available to their users. Content moderation involves monitoring and reviewing user-generated content to ensure that it meets the platform’s standards and guidelines. Content moderation is a form of platform governance, and its goal is to enforce community guidelines and terms of service. Platforms use various strategies to govern content, including human moderation and automated moderation.
Human Moderation
Human content moderation is a form of manual moderation. It typically involves the manual monitoring of user-generated content to ensure that the content conforms to the platform’s guidelines. User-generated content that does not conform to the law, the guidelines or other platform requirements is removed through human intervention to improve user satisfaction and protect users from illegal or harmful content.
Manual human moderation has many benefits, including the ability to understand the context behind content, which supports more accurate, content-specific judgements across large volumes of material. However, its major drawback is cost and speed: human moderators cannot review content at the same scale or at the same price as automated systems.
Automated Moderation
Automated moderation includes any form of automated response to user-generated content submitted to a platform. It uses tools such as algorithms and artificial intelligence (AI) to ensure that content meets platform guidelines. AI allows user-generated content to be reviewed against platform data and an appropriate course of action to be determined; the AI can also learn from these outcomes and develop more accurate moderation over time. The benefits of automated moderation centre on time and cost: automated systems can make almost instantaneous decisions about whether content meets the rules, and they are cheaper than paying humans to moderate the same volume of content, which allows a platform to scale.
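The paragraph above describes automated moderation at a high level; the sketch below shows the simplest possible rule-based version, in which content is allowed, removed, or escalated to a human reviewer. The keyword list, threshold and function names are invented for illustration; production systems typically rely on trained classifiers, hash matching and review queues rather than simple word matching.

```python
# A deliberately simple, rule-based sketch of automated moderation.
# Real platforms use machine-learning classifiers, hash matching and
# human review queues; the keywords and threshold here are made up.

BANNED_TERMS = {"banned_term_a", "banned_term_b"}   # hypothetical policy list
REVIEW_THRESHOLD = 1   # number of matches that triggers automatic removal


def moderate(post_text: str) -> str:
    """Return an action for a post: 'allow', 'remove', or 'escalate'."""
    words = {w.strip(".,!?").lower() for w in post_text.split()}
    matches = words & BANNED_TERMS

    if not matches:
        return "allow"
    if len(matches) > REVIEW_THRESHOLD:
        return "remove"            # clear-cut violation, removed automatically
    return "escalate"              # borderline: send to a human moderator


if __name__ == "__main__":
    print(moderate("a perfectly ordinary post"))        # allow
    print(moderate("this contains banned_term_a"))      # escalate
    print(moderate("banned_term_a and banned_term_b"))  # remove
```

The ‘escalate’ branch reflects the hybrid approach discussed above: automation handles the clear-cut, high-volume cases cheaply, while borderline, context-dependent cases are passed to human moderators.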
Government Regulations on Platform Policies
While many platforms approach platform governance with self-regulation at the forefront, government legislation and rules can also form part of the governance of content available online. In 2021, the Australian Federal Government introduced the Online Safety Act 2021 (Cth). Further regulation came into force with the Online Safety (Basic Online Safety Expectations) Determination 2022 (Cth).
Online Safety Act - Online Content Scheme
The Online Safety Act is aimed at keeping consumers safe when accessing online material or using platforms, including by providing mechanisms for reporting and removing harmful content. The Online Safety Act established an Online Content Scheme, which is designed to ensure the aims of the Act are maintained and enforced through various measures.
The Online Content Scheme allows users to make formal complaints to eSafety about content that may be harmful, offensive or illegal. The Scheme also empowers eSafety to assess and review those complaints.
Part 9 of the Online Safety Act sets out a classification system for content that may be illegal or restricted. Legal tools are available to keep Australian platform users safe, including:
- For class 1 material – involvement of the Australian Federal Police, and
- For class 2 material – an order for the removal of the content or for a restricted access system.
The Online Safety Act also empowers eSafety to order the removal of extremely harmful and illegal content from platforms, even when that content is not hosted within Australia. Extremely harmful and illegal content includes:
- Child sexual abuse material,
- Detailed instruction or promotion of crime or violence,
- Gratuitous, exploitative and offensive depictions of violence or sexual violence, and/or
- Material that advocates carrying out a terrorist act.
Online Safety Expectations
The Online Safety (Basic Online Safety Expectations) Determination 2022 (Cth) sets out a range of expectations that online service providers must meet, aimed at increasing the transparency and accountability of providers and ensuring that platforms have adequate processes in place to minimise and mitigate harmful content.
[1] For an interesting overview of hate speech in online gaming platforms, see this video by GAMBIT: http://video.mit.edu/watch/gambit-hate-speech-video-7031/