cyberlaw:networks [2019/02/13 11:37] (current) by witta
  
# The Dawn of the Internet: A Declaration of the Independence of Cyberspace
  
In 1996, John Perry Barlow released a famous provocation about the limits of state power in regulating the internet. The Declaration, which we encourage you to **[read](https://projects.eff.org/~barlow/Declaration-Final.html)** or **[watch](https://www.youtube.com/watch?v=3WS9DhSIWR0)** in full, begins:
  
>Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.
>... I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.
  
Barlow's Declaration has played a pivotal role in shaping how we think about online regulation. In this extract, he makes two main claims about online regulation, which we will examine in more detail below. The first is that the internet is inherently unregulable by territorial governments. The second is that state regulation of the internet is illegitimate, and that governments should defer to the self-rule of cyberspace.
  
##Barlow's First Claim: The Internet is Unregulable
  
**Overview by Nic Suzor: [Governing the Internet](https://www.youtube.com/watch?v=ybNGDquKVTc)**
  
>"You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear."
  
Barlow's first claim, that territorial states do not have the power to regulate the internet, is largely descriptive. It rests on a number of factors, including the decentralised nature of the internet, which is a network of networks that spans the globe without any real concern for jurisdictional boundaries. The internet enables billions of people to communicate largely anonymously across the globe, and the sheer quantity of content that is transmitted over the network each day is almost incomprehensibly large. All of these factors mean that, for the most part, explicit interventions by governments can be trivially circumvented. If a website is shut down in one jurisdiction, it can be back up the next day somewhere else in the world. If a document is removed from one site, it will often quickly be reposted on a dozen more (see, for example, the '[Streisand effect](https://en.wikipedia.org/wiki/Streisand_effect)').
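
This resilience can be sketched with a toy model (a hypothetical illustration only, not how real internet routing protocols work): in a network of networks, blocking any single intermediary often leaves another path open, and a communication is only severed when every path is cut.

```python
from collections import deque

def find_path(links, start, end, blocked=frozenset()):
    """Breadth-first search for any path from start to end,
    skipping nodes in `blocked` (a toy stand-in for a censored ISP)."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == end:
            return path
        for nxt in links.get(node, ()):
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # every route is cut

# A tiny 'network of networks': each key lists its directly connected peers.
links = {
    "user": ["isp-a", "isp-b"],
    "isp-a": ["backbone-1"],
    "isp-b": ["backbone-2"],
    "backbone-1": ["server"],
    "backbone-2": ["server"],
}

print(find_path(links, "user", "server"))
# Blocking one intermediary still leaves an alternative route:
print(find_path(links, "user", "server", blocked={"isp-a"}))
# Only blocking every intermediary path severs the connection:
print(find_path(links, "user", "server", blocked={"isp-a", "isp-b"}))
```

The point of the sketch is that targeting a single intermediary merely reroutes traffic; a government must reach every path, or the endpoints themselves, to sever a communication entirely, which is why intervention is difficult but not impossible.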
  
However, it turns out that regulating the internet isn't quite impossible, just often very difficult. Fundamentally, the internet is not a separate place, because the people who use it are real people, in real locations, subject to the very real power of their jurisdictions. The pipes that people use to communicate are cables and wireless links which also have a physical presence. Where a government can target the speakers, recipients or intermediaries involved in a communication, it can have a real effect on what information is transmitted via the internet. Figure 1 below, for example, illustrates the network traffic in Egypt over the period of the January 2011 revolution. You can clearly see the point at which the Egyptian Government shut down the five major Egyptian internet service providers. The Egyptian Government's intervention epitomises the idea that _those who control the pipes control the universe_.
  
The challenge of regulating the internet is finding an effective way to identify and regulate either the potentially anonymous creators of information, the billions of potential recipients, or the networks along the chain of communication.
  
**Figure 1: Who Controls the Pipes, Controls the Universe - Traffic to and from Egypt on 27-28 January 2011, from Arbor Networks**
  
![Graph of traffic to and from Egypt on January 27-28 2011, from Arbor Networks - Who Controls the Pipes, Controls the Universe](http://www.wired.com/images_blogs/threatlevel/2011/01/arbor_egypt-660x359.jpg)
(Image (c) Arbor Networks via [Wired](http://www.wired.com/2011/01/egypt-isp-shutdown/))
  
###A Case Study: Newzbin
  
**[Overview of Newzbin by Nic Suzor](https://www.youtube.com/watch?v=z8Ph8eO26q4)**
  
While the internet is not unregulable, it does pose unique challenges for regulators. The case of Newzbin, a popular Usenet indexing site, is one example from the fight against copyright infringement. Dubbed 'the Google of Usenet' by the Motion Picture Association of America (MPAA), Newzbin allowed its users to easily find copyright films and other works, and copyright owner groups sought to shut it down. In a 2010 decision, the High Court in the United Kingdom (UK) found Newzbin liable for copyright infringement, and the company was wound up and its website shut down.(([Twentieth Century Fox Film Corporation v Newzbin Limited [2010] EWHC 608 (Ch)](http://www.bailii.org/ew/cases/EWHC/Ch/2010/608.html)))

Two weeks later, Newzbin2 rose from the ashes. Someone had copied the entire codebase of the old site and brought it back online on a server in the Seychelles, an archipelago outside of UK jurisdiction. The MPAA went back to court, this time seeking an injunction that would require UK-based ISPs to block access to the website. The Court granted the order, marking an expansion of laws that were originally designed to block websites hosting child sexual abuse material: [Twentieth Century Fox Film Corporation v British Telecommunications PLC](http://www.bailii.org/ew/cases/EWHC/Ch/2011/1981.html) [2011] EWHC 1981 (Ch).

The system for blocking websites was not wholly effective. It turned out to be easy to bypass if users encrypted their connections or used a virtual private network to avoid the block. Shortly after the injunction, Newzbin2 released a user-friendly application to 'utterly defeat' the filter, [explaining that its app](http://torrentfreak.com/newzbin2-release-encrypted-client-to-defeat-website-blocking-110914/) could "break any updated web censorship methods or anti-freedom countermeasures". Ultimately, however, Newzbin2 closed down in 2012. It had lost the trust of its users, who were not sufficiently willing to pay to support the new service. Importantly, copyright owners had also started to target the payment intermediaries that channelled funds to the organisation - intermediaries like Mastercard, Visa, PayPal, and smaller payment processors that use these networks.
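
The weakness that Newzbin2 exploited can be illustrated with a toy sketch (hypothetical hostnames and a deliberately simplified filter, not any real ISP's implementation): a filter can only act on traffic it can read, so wrapping a request in an encrypted tunnel leaves it nothing to match.

```python
BLOCKLIST = {"newzbin.example"}  # hypothetical blocked hostname

def isp_filter_allows(packet: str) -> bool:
    """A toy ISP filter: drop any packet that mentions a blocked host.
    Real filters inspect DNS queries, TLS SNI fields or IP addresses,
    but the principle is the same: they match only what they can read."""
    return not any(host in packet for host in BLOCKLIST)

# An unencrypted request exposes its destination to the filter:
plain_request = "GET http://newzbin.example/browse HTTP/1.1"

# Inside a VPN tunnel the same request is opaque ciphertext; this
# fixed string stands in for the encrypted bytes:
tunneled_request = "f3a91c07d2be4455aa10c6e8 (ciphertext)"

print(isp_filter_allows(plain_request))     # blocked
print(isp_filter_allows(tunneled_request))  # passes: nothing to match on
```

Because the ISP sees only the tunnel endpoint, blocking regimes tend to escalate elsewhere, towards the VPN providers themselves or, as happened with Newzbin2, towards the payment intermediaries that keep the service funded.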

The Newzbin case study illustrates how regulating online content and behaviour can be an extremely difficult task. By cutting off the flow of money, the rightsholder groups were eventually successful in shutting down Newzbin. However, this took a lot of time and effort, and there is a good chance that many users of the service simply moved to newer, better hidden infringement networks. Overall, the copyright industry has had some success in tackling large copyright infringers, but this is an ongoing arms race, as infringers continue to find ways around the regulations.

##Barlow's Second Claim: State Regulation of the Internet is Illegitimate

**Overview by Nic Suzor: [The Legitimacy of Online Regulation](https://www.youtube.com/watch?v=A0m_GZC4x2w)**

The second claim that Barlow makes in his Declaration is that state governments _should_ defer to cyberspace self-rule, or what we call 'private ordering'. Barlow explains that:

>"We believe that from ethics, enlightened self-interest, and the commonweal, our governance will emerge."

Barlow's argument is that the rules and social norms created by online communities to govern themselves will be better than anything imposed by territorial states. This was expressed by Johnson and Post in a famous 1995 article as a general principle that there is “no geographically localized set of constituents with a stronger and more legitimate claim to regulate [online activities]” than the members of the communities themselves.((David Johnson and David Post, ‘Law and Borders: The Rise of Law in Cyberspace’ (1995) 48 Stanford Law Review 1367, 1375.)) In addition to arguing that online communities should be able to govern themselves, Barlow, Johnson and Post asserted that if territorial governments try to impose their own laws on a borderless internet, users will never be able to work out which set of rules they are subject to. The consequence of governments attempting to prevent online communities from regulating themselves, according to Post, would be:((Post, 'Governing Cyberspace: Law' (2008) http://www.academia.edu/2720975/Governing_Cyberspace_Law))

>"... the chaotic nonsense of Jurisdictional Whack-a-Mole".

As we will see in the [[jurisdiction|Jurisdiction chapter]], the legitimacy of any one nation claiming jurisdiction over transnational communications is still a vexed issue. As the Australian High Court noted in the _Dow Jones v Gutnick_((_Dow Jones and Company Inc v Gutnick_ [2002] HCA 56 http://www.austlii.edu.au/cgi-bin/sinodisp/au/cases/cth/HCA/2002/56.html?stem=0&synonyms=0&query=title(dow%20jones%20and%20gutnick%20)&nocontext=1)) case, nation states purport to have a responsibility to protect their citizens' interests online, and certainly a desire to regulate online content and behaviour.
  