How is the internet regulated?

1. How does law operate?

In this unit, we pay particular attention to the limits of law. The challenges of regulating online behaviour are different in kind from the challenges of regulating in 'meat space'. Regulating offensive content online, for example, is much more difficult than regulating it has been under our traditional approach. Over a third of Australian internet users think that there is 'too much immoral material on the internet' 1). Historically, we regulated content by targeting the handful of radio and television broadcasters and requiring them to abide by content standards. We could also regulate publications and physical media through a classification scheme, and require anyone dealing commercially with media to abide by that scheme. The same strategy no longer works in the online environment: the internet is too decentralised, and there is too much material for us to adequately classify each piece of content or regulate every distributor. In this unit, we will study how the law can influence technology makers, intermediaries, and others in new ways to regulate the kinds of material that are accessible. This requires a new way of thinking about how regulation can actually operate in practice.

2. What obligations are imposed on intermediaries?

One way of ensuring that law can scale with the size of the internet is to target the intermediaries. These are the internet service providers that consumers use to get access to the internet, the content hosts and social networks where material is posted, the search engines that control how material is found, and the makers of software through which material is accessed. The law can impose obligations on each of these actors to regulate indirectly. So, for example, this has been very important in ongoing attempts to regulate copyright infringement. Over the last twenty years, copyright owners trying to stop peer-to-peer infringement have successively targeted the developers of P2P software, the websites and search engines that index P2P content, and, most recently, the consumer internet service providers. In this unit, we will examine when, exactly, an online intermediary has an obligation to protect the rights of an unrelated third party. We will spend a lot of time on copyright and defamation law, but also consider the general law principles and the new 'safe harbour' protections that intermediaries have been granted against claims by third parties.

3. What internet-specific regulation exists?

The third theme of this unit is to look at internet-specific regulation. We study how contracts are formed online, what civil and criminal liability exists for unauthorised access to computer systems, how spam and private information are regulated, and how a specific quasi-judicial regime has developed to regulate the domain name system.

4. How do we protect human rights online?

Finally, we study how human rights can be protected online. This has two main components. The first is about how citizens can be protected from over-reach by the state: when online service providers are obliged to hand over data or block content, and what safeguards exist over these processes. The second is about how citizens can be protected from over-reach by private entities: how values of due process, freedom of speech, and freedom from abuse can be protected when, for a huge proportion of online activity, it is private corporations, and not the state, that really regulate how we interact online. In this unit, we consider how private power can be constrained through competition law, network neutrality requirements, contract law, and other legal obligations imposed on private intermediaries.

A Declaration of the Independence of Cyberspace

In 1996, John Perry Barlow released a famous provocation about the limits of State power in regulating the internet. The Declaration begins:

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.... I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.

This essay has been massively influential in the way that we think about online regulation. Barlow makes two key points. First, he claims that the internet is inherently unregulable by territorial governments. Second, he argues that governments should defer to cyberspace self-rule.

1. The internet is unregulable

Nic Suzor on regulating the internet

Barlow makes a descriptive claim that territorial states do not have the power to regulate the internet.

"nor do you possess any methods of enforcement we have true reason to fear."

The internet is a decentralised network of networks. It spans the globe without any real concern for jurisdictional boundaries. It connects billions of people, from all over the world, who can communicate largely anonymously. The sheer quantity of content that is transmitted over the network each day is almost incomprehensibly large.

All of these factors mean that, for the most part, any explicit interventions by governments can be trivially circumvented. If a website is shut down in one jurisdiction, it can be back up the next day somewhere else in the world. If a document is removed from one site, it will often quickly be reposted on a dozen more (see, for example, the 'Streisand effect').

It turns out that regulating the internet isn't quite impossible, just often very difficult. Fundamentally, the internet is not a separate place; the people who use it are real people, in real locations, subject to the very real power of their jurisdictions. The pipes people use to communicate are cables and wireless links that also have a physical presence. Where a government can target the speakers, recipients, or intermediaries involved in a communication, it can have a real effect on what information is transmitted.

So the challenge of regulating the internet is finding an effective way to identify and regulate the potentially anonymous creators of information, the billions of potential recipients, or the networks along the chain of communication.

Who controls the pipes, controls the universe

Here, for example, is a graph of network traffic in Egypt over the period of the January 2011 revolution. You can clearly see the point at which the Egyptian Government shut down the five major Egyptian internet service providers.

[Figure: graph of traffic to and from Egypt, 27-28 January 2011. Image (c) Arbor Networks, via Wired]

A case study: Newzbin

While the internet is not unregulable, there are unique challenges facing regulators. One example from the fight against copyright infringement is the case of Newzbin, a popular Usenet indexing site dubbed 'the Google of Usenet' by the MPAA. Because Newzbin allowed users to easily find copyright films and other works, copyright owner groups sought to shut the service down. In a 2010 decision, the UK High Court found Newzbin liable for copyright infringement; the company was wound up and its website shut down.2)

Two weeks later, Newzbin2 rose from the ashes. Someone had copied the entire codebase of the old site and brought it back online on a server in the Seychelles, outside of UK jurisdiction. The MPAA went back to court, this time seeking an injunction that would require UK-based ISPs to block access to the website. The Court granted this order, marking an expansion of laws that were originally designed to block websites that hosted child sexual abuse material: Twentieth Century Fox Film Corporation v British Telecommunications PLC [2011] EWHC 1981 (Ch)

The system for blocking websites is not wholly effective. It turned out to be easy to bypass if users encrypted their connections or used a virtual private network to avoid the block. Shortly after the injunction, Newzbin2 released a user-friendly application to 'utterly defeat' the filter, explaining that its app could “break any updated web censorship methods or anti-freedom countermeasures”.

Ultimately, however, Newzbin2 closed down in 2012. It had lost the trust of its users, who were not sufficiently willing to pay to support the new service. Importantly, copyright owners had also started to target the payment intermediaries that channelled funds to the organisation - intermediaries like Mastercard, Visa, PayPal, and the smaller payment processors that use these networks.

The lesson of Newzbin shows how complex these issues are. By cutting off the flow of money, the rightsholder groups were eventually successful in shutting down Newzbin. But this took a lot of time and effort, and there is a good chance that many users of the service simply moved to newer, better-hidden infringement networks. Overall, the copyright industry has had some success in tackling large copyright infringers, but this is an ongoing arms race, as infringers continue to find ways around the regulations.

2. State regulation of the internet is illegitimate

The second point that Barlow makes is that state governments should defer to cyberspace self-rule, or what we call 'private ordering'. Barlow explains:

"We believe that from ethics, enlightened self-interest, and the commonweal, our governance will emerge."

Barlow's argument is that the rules and social norms created by online communities to govern themselves will be better than anything imposed by territorial states. This was expressed by Johnson and Post in a famous 1995 article as a general principle that there is “no geographically localized set of constituents with a strong and more legitimate claim to regulate [online activities]” than the members of the communities themselves.3)

This is a particular concern about the legitimacy of any one nation claiming jurisdiction over transnational communications. Barlow, Johnson, and Post argue not only that online communities should be able to govern themselves, but that if territorial governments try to impose their own laws on a borderless internet, users will never be able to work out what set of rules they are subject to.4)

If online communities are not left to regulate themselves, “we'll be stuck with the chaotic nonsense of Jurisdictional Whack-a-Mole”.

As we will see in the Jurisdiction chapter, this is still a vexed issue. As the Australian High Court noted in the Dow Jones v Gutnick5) case, nation states purport to have a responsibility to protect their citizens' interests online, and they certainly have a desire to regulate online content and behaviour.

Network engineers talk about layers of networks. This can get complicated fast, but when thinking about regulation, you can think of three basic layers.

The first is the 'infrastructure' layer (network cables, routers, and protocols). This layer of the Internet is designed around the principle of a 'neutral' network (the 'end-to-end' principle): the responsibility for determining the content of communications rests with smart servers and users at the ends of the network, and the intermediaries are just responsible for passing messages along the chain. As the content passes over their networks, intermediaries are expected not to examine or interfere with it in any substantive way.

This design principle is largely responsible for enabling the innovation that we see today. Because the network itself is open, there is a real separation between the infrastructure (the pipes) and content (the data that flows over those pipes). This means that anyone is free to 'plug-in' to the internet and start providing services over the IP protocol and the hardware that connects all users together. This freedom at the infrastructure level allows lots of innovation at the ends, where providers can design new systems that can operate on top of standardised protocols.
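To make this concrete, here is a minimal sketch (in Python; the host, port, and response are purely illustrative) of what it means to 'plug in' at the edge of the network: a few lines of code listening on a TCP socket are, in principle, all that is needed to offer a new service over the standard protocols, without asking permission from the network itself.

<code python>
# A minimal sketch of a service running at the 'ends' of the network.
# The host, port, and response are illustrative; any machine with an
# internet connection could, in principle, run something like this over
# standard TCP/IP without needing permission from the intermediaries.
import socket

def run_tiny_server(host="127.0.0.1", port=8080):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen()
        print(f"Serving on {host}:{port}")
        while True:
            conn, _addr = server.accept()
            with conn:
                conn.recv(1024)  # read (and ignore) the request
                conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 13\r\n\r\nHello, world!")

if __name__ == "__main__":
    run_tiny_server()
</code>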

The second layer can be thought of as the 'code' layer: this is the software that operates at the ends of the network to interact with users. The webserver that sends users the webpages they requested, customised and tailored for that particular user, is a software program running on a server or farm of servers. The apps that connect people to others, that allow users to chat, like, and swipe content created by others, are pieces of software running on mobile devices and personal computers that communicate with software running on the servers somewhere in 'the cloud'. These programs – their design, the input they accept, the algorithms they use to respond to requests – are responsible for determining who we can communicate with and how.

Finally, we can think of the 'content' layer: this is the material that is transmitted over the network infrastructure, selected and presented by code. This is what we mostly mean when we think of the internet - the visible components of the network, the information that users express and receive. This is the layer at which most of the regulatory concerns arise; governments and private actors often have reasons to want to limit the flow of certain information.

The history of internet regulation is most visibly a history of attempts by various parties to regulate content: offensive communications and pornography; private and confidential information; defamatory statements; and copyright content. Increasingly, however, attempts to regulate content involve struggles at the code and infrastructure layers, as pressure mounts on those who provide network infrastructure or services to build certain rules into their systems. The big struggles over internet governance now are largely a series of struggles over who gets to decide how networks are structured and how code operates.

The resilience of the internet is often framed in John Gilmore's famous words:6)

"The Net interprets censorship as damage and routes around it."

To understand this claim, we have to understand some principles about how the internet works. The Internet is often defined as a 'network of networks'. Wikipedia has a good definition:

The Internet is a global system of interconnected computer networks that use the standard Internet protocol suite (TCP/IP) to link several billion devices worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies.

The Internet as we know it is built on technologies funded by the US Department of Defense, large public investments in infrastructure by academic and other institutions, and, from the 1990s, massive private investments in the deployment of new commercial and private connections around the world.

From its very beginnings, the internet was designed to be resilient. One of its key features is that it relies on an inter-connected web of computers to route information from any point to any other point on the internet. It is designed to be resilient to control or failure on any of these hardware links. If there are problems with one part of the network, it can adapt automatically to route around broken links.

So, for example, when a user in Australia requests a webpage from Facebook.com, a typical message might start here on a home computer, be transmitted along iiNet or Telstra's network a few times before hitting a major backbone or undersea cable, and then be passed along the chain by several other networked routers before finally reaching its destination at a webserver in the US. This could take anywhere from 10 to 20 different 'hops' along that chain - and maybe 200ms on a fast link. Facebook's webserver in the US will receive that request, and send back the content to the user along a similar (but not necessarily the same) path.
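As a rough illustration (the hostname is just an example, and the numbers will vary with your connection and location), you can observe this yourself by timing how long a TCP handshake with a distant server takes, or by running a tool like traceroute to list the intermediate hops:

<code python>
# A rough sketch: time the TCP handshake to a distant server. The hostname
# is illustrative, and the result will vary with your ISP, your location,
# and the path the packets happen to take across the network.
import socket
import time

def connect_time_ms(host="www.facebook.com", port=443):
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # we only care how long establishing the connection took
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    print(f"TCP handshake completed in {connect_time_ms():.0f} ms")
</code>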

So one of the reasons the internet is so hard to regulate is that messages can take any path between the two end points that works. In fact, individual messages are broken down into much smaller 'packets', and the 'Internet Protocol' (IP) provides the standard for communication that enables all connected systems to talk to each other and pass data along the chain if required. If any link in this chain is broken, the Internet Protocol allows computers on the Internet to find other routes to get the message to its destination. This is where we get to Gilmore's quote: if a particular path is blocked or censored, it is often possible to pass a message along different paths to its destination.
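The idea of 'routing around' damage can be illustrated with a toy model (this sketch shows the principle only; real internet routing protocols such as BGP are far more complex): if one link on the current path is broken or blocked, a path-finding search simply returns a different route.

<code python>
# A toy illustration of 'routing around damage': find a path between two
# nodes, break a link on that path, and find another path. Real routing
# protocols (BGP, OSPF, etc.) are far more complex, but the principle -
# redundant links mean no single point of control - is the same.
from collections import deque

def find_path(links, src, dst):
    """Breadth-first search for any path from src to dst."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A small, invented network of routers with redundant links.
links = {
    "user":   ["isp"],
    "isp":    ["user", "cableA", "cableB"],
    "cableA": ["isp", "server"],
    "cableB": ["isp", "server"],
    "server": ["cableA", "cableB"],
}

print("Original route:", find_path(links, "user", "server"))

# 'Censor' (or break) the link through cableA...
links["isp"].remove("cableA")
links["cableA"].remove("isp")

# ...and the message simply takes the other path.
print("Route after the break:", find_path(links, "user", "server"))
</code>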

Kyung Hong explains peer-to-peer networking

  • Some networks are easier to regulate than others. When networks are organised as 'client/server' networks, targeting the server can be very effective. When they are more decentralised, as in 'peer-to-peer' (P2P) networks, this becomes much more difficult (see the sketch below).
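A toy sketch of the difference (everything here is invented for illustration): in a client/server network there is a single source to take down, whereas in a P2P network every peer that holds a copy can keep supplying it.

<code python>
# A toy comparison of client/server and peer-to-peer distribution.
# All names are invented. In the client/server model, removing the single
# server cuts everyone off; in the P2P model, removing one peer leaves
# many other sources of the same material.

def remaining_sources(holders, removed):
    """Which nodes can still supply the file after 'removed' is taken down?"""
    return [node for node in holders if node != removed]

# Client/server: only the central server holds the file.
print(remaining_sources(["central-server"], removed="central-server"))      # []

# Peer-to-peer: many peers hold copies (or pieces) of the file.
print(remaining_sources(["peer-1", "peer-2", "peer-3", "peer-4"],
                        removed="peer-1"))                                   # three peers remain
</code>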

Tom Armstrong and Mitch Hughes each explain how this works in relation to Australians accessing Netflix, bypassing industry agreements that require geographic market segmentation of content:

One way of avoiding regulation online is through the use of a Virtual Private Network (VPN). A VPN can create an encrypted 'tunnel' from an entry point in one jurisdiction, to an exit point in another. By using a VPN, a user can appear to be located in another jurisdiction. This means the user can avoid any jurisdiction-based filtering or blocking, and also make it much more difficult to track down his or her real location and other identifying information.
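To see why a VPN defeats this kind of geographic control, consider a toy sketch of how jurisdiction-based blocking typically works (the country lookup and IP addresses below are purely illustrative): the service only ever sees the address a request arrives from, so if that address belongs to the VPN's exit point, the user appears to be wherever the exit point is.

<code python>
# A toy sketch of jurisdiction-based blocking. The service only sees the
# IP address a request arrives from; a VPN substitutes the exit point's
# address for the user's own, so the geo-check looks at the wrong country.
# The lookup table and addresses are illustrative (documentation ranges).

GEOIP = {
    "203.0.113.7":  "AU",   # the user's real (Australian) address
    "198.51.100.9": "US",   # the VPN provider's US exit point
}

ALLOWED_COUNTRIES = {"US"}

def is_blocked(source_ip):
    return GEOIP.get(source_ip, "unknown") not in ALLOWED_COUNTRIES

print(is_blocked("203.0.113.7"))    # True  - a direct request from Australia is blocked
print(is_blocked("198.51.100.9"))   # False - the same user, via the VPN, gets through
</code>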

Lawrence Lessig's famous assertion, 'code is law',7) has been highly influential in how we think about online regulation. Lessig points out that the rules of the software that controls online communication can be as powerful as (or more powerful than) the legal rules of nation states in regulating behaviour. To help us think about the different forces of regulation, Lessig outlines four key 'modalities':

  1. Law
  2. Architecture
  3. Markets
  4. Social norms.

Each of these modalities regulates behaviour in a different way.

As law students, we are familiar with thinking of regulation primarily as law. But law is not the only, nor perhaps even the main, regulator in many areas of daily life. Lessig gives the example of the speed bump as a way to regulate traffic - an architectural solution that can in some cases be more effective (and cheaper!) than enforcing a law against speeding.

Thinking about online regulation

Lessig outlines how the four types of regulation work in cyberspace:

  • Law: laws such as copyright, defamation and obscenity threaten ex-post sanction for violation of legal rights;
  • Norms: what you can say on particular websites is influenced by the nature of that site;
  • Markets: price structures or busy signals constrain access, areas of the web charge for access, advertisers reward popular sites, online services drop low-population forums; and
  • Architecture: the software and hardware that make cyberspace what it is constrain how you can behave online - by requiring passwords, producing traces that link transactions to you, and through encryption and code.

We can apply these four modalities to different regulatory issues about internet content. Say, for example, we are concerned about offensive content on the web. Over a third of Australians think that there is too much offensive content on the internet. We could create a law against offensive material, but it would be really hard to enforce. In fact, we already have several such laws - we have common law obscenity offences, as well as a content classification scheme that allows people to make complaints about content online. These are all practically useless where content is hosted in foreign jurisdictions, though.

A code-based approach to regulating offensive content might be to introduce mandatory filtering at the ISP level.
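A simplified sketch of how such a filter might operate (this illustrates the general technique only; the domains and blocklist are invented, and real schemes use DNS interference, IP blocking, or deep packet inspection with varying effectiveness): the ISP checks each requested site against a blocklist before carrying the traffic.

<code python>
# A simplified sketch of ISP-level filtering: check each requested domain
# against a blocklist and refuse to resolve or carry blocked sites. The
# blocklist and domains are invented; real schemes differ in detail and,
# as the Newzbin example shows, in effectiveness.

BLOCKLIST = {"offensive-example.invalid", "infringing-example.invalid"}

def resolve(domain):
    if domain in BLOCKLIST:
        return None                  # pretend the site does not exist
    return "198.51.100.25"           # illustrative address of the requested site

print(resolve("news-example.invalid"))        # resolves normally
print(resolve("offensive-example.invalid"))   # blocked at the ISP level
</code>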

A market-based approach might include subsidising voluntary filters that parents can install on their home internet connections.

Alternatively, we might investigate how to develop social norms around acceptable behaviour. This might be a bit harder to articulate, but we see it happening a lot in our society - think of the moral outrage in all of the papers, including outraged quotes from the PM's office, when someone defaces a memorial page on Facebook. This is the work that creates a shared social norm about what content or behaviour is permissible and what is not.

Importantly, these modalities are never really independent - they all interact in interesting ways. So, for example, the market is starting to respond to concerns about offensive content, and social network platforms like Facebook, YouTube, and Twitter are modifying their code to allow people to report or flag offensive content. This market-based initiative leverages code and social norms to regulate the massive amounts of material that are posted to these networks every day.
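A minimal sketch of the kind of reporting mechanism described above (the threshold and data structure are invented for illustration): once enough users flag a post, the platform's code hides it pending review, effectively enlisting users' social norms to do the first pass of moderation.

<code python>
# A minimal sketch of a flagging mechanism: user reports (a social-norm
# signal) feed into code that hides content pending human review. The
# threshold and data structure are invented for illustration.

FLAG_THRESHOLD = 5

posts = {"post-123": {"flags": 0, "hidden": False}}

def flag(post_id):
    post = posts[post_id]
    post["flags"] += 1
    if post["flags"] >= FLAG_THRESHOLD:
        post["hidden"] = True        # pulled from view pending human review

for _ in range(FLAG_THRESHOLD):
    flag("post-123")

print(posts["post-123"])             # {'flags': 5, 'hidden': True}
</code>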

Alex McKay explains YouTube's ContentID system

Another example is copyright infringement. In the late 1990s, the copyright industries' answer to the problem Napster posed was to turn to the courts. The courts eventually held that Napster was liable for copyright infringement, and the service was shut down. When that didn't stop filesharing, the industries turned to marketing to try to create strong social norms against copying - you wouldn't steal a car, right? Over the last decade, working with YouTube and others, rightsholders have been able to develop new technologies to detect potential copyright infringement and deal with it automatically. YouTube's ContentID, for example, automatically detects when a person uses copyright music in their video, and copyright owners are presented with an easy choice to block access to the video, remove the soundtrack, leave it alone, or run ads alongside it. This has been a massively important tool for rightsholders. Finally, there have been some market innovations over the last few decades as well. Eventually, iTunes emerged to satisfy some of music fans' demand for cheap and easy access to digital downloads. Spotify and now Apple Music have gone further, providing fans with all-you-can-eat subscriptions so that they can enjoy the abundance that Napster brought, legally.
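The core idea behind automated matching can be sketched very roughly (this toy uses exact hashes of byte strings; real systems like ContentID use robust acoustic fingerprints that survive re-encoding and background noise, and their internals are not public): the platform compares a fingerprint of the upload against a database supplied by rightsholders and applies whatever policy the owner has chosen.

<code python>
# A very rough sketch of automated content matching. Real systems such as
# ContentID use robust acoustic fingerprints rather than exact hashes, but
# the overall flow - match uploads against a rightsholder database, then
# apply the owner's chosen policy - is similar. All data here is invented.
import hashlib

def fingerprint(audio_bytes):
    return hashlib.sha256(audio_bytes).hexdigest()

# Rightsholders register fingerprints of their works and a chosen policy.
reference_db = {
    fingerprint(b"hit-song-sample"): {"owner": "Example Records", "policy": "monetise"},
}

def check_upload(audio_bytes):
    match = reference_db.get(fingerprint(audio_bytes))
    if match is None:
        return "no match - publish normally"
    return f"matched work owned by {match['owner']}: apply policy '{match['policy']}'"

print(check_upload(b"hit-song-sample"))    # matched: block, mute, or run ads
print(check_upload(b"original-content"))   # no match
</code>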

If you define 'regulation' as a concerted effort to change the way another person behaves, there are many different ways of achieving that goal. Lessig's point is that law is only one of the ways to regulate. When we are thinking about internet regulation, we need to be aware of the ways in which behaviour can be altered, and the limits of any given modality.

Lessig's work is also really important in pointing out the hidden ways in which code regulates. This is something that often goes unnoticed - but in every piece of software, in every algorithm, there are hidden assumptions about how the world works or should work. Sometimes this is accidental - for example, many websites are inaccessible to people with print disabilities, because they are not designed with this user group in mind. It takes a lot of vigilance to ensure that technologies are developed in a way that does not unintentionally exclude or limit the access of certain groups of people. Other times, though, code acts in a much more sinister way. We have no real understanding of the algorithms that Facebook or Google use to determine which content is visible to us. The news items that pop up in our feeds, or the results of our searches, are all determined according to a set of algorithms that are ultimately designed to further the interests of private corporations. These are powerful algorithms - powerful mechanisms of regulation that we really do not understand, and we certainly do not know whether or how we should regulate their design or use.
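To make the point about hidden assumptions concrete, here is a deliberately crude sketch of a feed-ranking function (the fields and weights are invented; no platform publishes its real formula): whoever chooses these weights quietly decides what users see, and that choice is embedded in code rather than in any visible rule.

<code python>
# A deliberately crude sketch of feed ranking. The fields and weights are
# invented; the point is that whoever writes this function decides what is
# visible, and those design choices are usually invisible to users.

def score(item, weights):
    return sum(weights[key] * item.get(key, 0) for key in weights)

items = [
    {"id": "friend-update",  "engagement": 3, "ad_revenue": 0},
    {"id": "sponsored-post", "engagement": 1, "ad_revenue": 9},
]

# Tweaking these numbers changes what appears at the top of everyone's feed.
weights = {"engagement": 1.0, "ad_revenue": 2.0}

feed = sorted(items, key=lambda item: score(item, weights), reverse=True)
print([item["id"] for item in feed])   # the sponsored post wins under these weights
</code>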

It is often difficult to enforce the law against individual users online. Users may be in remote jurisdictions, anonymous, or just too numerous to effectively police.

Rica Ehlers explains 4chan

Will McLay explains anonymity


1)
Ewing, Scott, Emily van der Nagel and Julian Thomas, ‘CCi Digital Futures 2014: The Internet in Australia’ http://researchbank.swinburne.edu.au/vital/access/manager/Repository/swin:41844?exact=sm_creator%3A%22Ewing%2C+Scott%22
3)
David Johnson and David Post, ‘Law and Borders–The Rise of Law in Cyberspace’ (1995) 48 Stanford Law Review 1367, 1375
5)
Dow Jones and Company Inc v Gutnick [2002] HCA 56 http://www.austlii.edu.au/cgi-bin/sinodisp/au/cases/cth/HCA/2002/56.html?stem=0&synonyms=0&query=title(dow%20jones%20and%20gutnick%20)&nocontext=1
6)
Philip Elmer-Dewitt, ‘First Nation in Cyberspace’ (1994) 49 TIME International http://www.chemie.fu-berlin.de/outerspace/internet-article.html.
7)
Lawrence Lessig, Code 2.0 (2006), pp 121–26 http://codev2.cc/download+remix/Lessig-Codev2.pdf