Imagine that one day you flip open your laptop and find that the Internet as you have known it has changed forever. You try a Google search, but the results now come from Microsoft’s search engine, Bing. You visit the Eye Weekly website, but a large banner across the top of the page suggests that you might prefer the Toronto Sun instead. YouTube and Netflix won’t load at all, but a popup informs you that customers of your Internet service provider will now be served by a new online video service, one that only gives you access to programming from the ISP’s parent television networks, at an extra fourteen dollars a month. Want to play World of Warcraft? That’s also extra, part of your ISP’s new nine-dollar “gaming access pack.”
This is the future of the Internet feared by advocates of network neutrality: the notion that all Internet traffic, regardless of source or content, should be treated more or less equally, and that you should be able to access it just about any way you want. It is one of the founding ideals of the Internet, a system designed to make interconnection and the free flow of information and content not only possible, but easy. With the massive growth in Internet traffic and use over the past twenty years, however, this fundamental notion is being questioned, with some ISPs saying that it is no longer technically practical, or good for business.
Net neutrality and other arcane aspects of networking technology have only become important to the public in the past few years. In the early days of the Internet, prior to the mid-1990s, users simply assumed that they could send or receive anything using the network. It was a low-volume world dominated by email, text documents, and remote computer logins. While there were concerns about traffic growth and congestion, they were fairly easily managed, and data flowed back and forth among interconnected networks with few obstacles. Network neutrality was assumed.
In the 2000s, the Internet changed, becoming a conduit for just about any type of media, including large audio and video files. The success of Napster early in the decade set off what would become a sustained surge in Internet traffic. High-speed broadband became mainstream, and the networks of many traditional ISPs began to creak under the load.
The way people accessed the Internet changed as well. In the 1990s, North America was filled with scores of small competing ISPs. By 2005, massive consolidation in the ISP market had resulted in a landscape dominated by large telephone and cable companies. Today, more than ninety-five per cent of Canadian consumers connect to the Internet through either a large cable or telephone company.
By 2005, major ISPs had noticed two things about their sections of the Internet. First, bandwidth use was increasing significantly thanks to online video and audio. Second, they weren’t receiving any revenue from some of the Internet’s biggest companies. Google, Yahoo, eBay, and others connected to most ISPs through other networks and weren’t paying those ISPs a cent to reach their customers. This was because of another principle of the Internet called ‘peering’: when ISPs interconnect, they exchange traffic with few restrictions. Peering also means that ISPs can’t easily charge extra for some Internet services, and cannot package sites and services as they do in their traditional businesses: cable television and telephone.
What ISPs realized is that even though the Internet is broadly decentralized, they have the technical ability to control traffic entering or leaving their own networks. Most do just that, blocking their customers from accessing illegal websites and limiting spam coming into the network. But your ISP could just as easily block or limit what you can see and do online, or give preference to some Internet traffic over the rest.
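To make that ability concrete, here is a minimal sketch, in Python, of the kind of policy decision equipment at the edge of an ISP’s network can apply to every packet passing through it. All of the hostnames, ports, and policy labels are invented for illustration; real ISPs implement this logic in dedicated routing hardware, not application code.

```python
# Hypothetical sketch of ISP-side traffic policy. All hosts, ports, and
# labels are invented; real networks do this in routing hardware.

BLOCKED_HOSTS = {"blocked-site.example"}    # sites the ISP refuses to carry
THROTTLED_PORTS = {6881}                    # a port commonly associated with P2P
PRIORITY_SOURCES = {"isp-video.example"}    # the ISP's own partner services

def classify(packet: dict) -> str:
    """Decide what happens to one packet crossing the ISP's border."""
    if packet["dest_host"] in BLOCKED_HOSTS:
        return "drop"            # the customer simply never sees the site
    if packet["dest_port"] in THROTTLED_PORTS:
        return "low_priority"    # queued behind all other traffic
    if packet["source_host"] in PRIORITY_SOURCES:
        return "high_priority"   # partner traffic jumps the queue
    return "normal"

packet = {"dest_host": "blocked-site.example",
          "source_host": "home-pc", "dest_port": 80}
print(classify(packet))          # -> "drop"
```

The point is not the code itself but how little of it there is: once traffic passes through equipment the ISP controls, favouring one service over another is a configuration choice, not an engineering feat.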
That’s what Edward Whitacre, then CEO of SBC Communications (now AT&T), suggested in 2005. When asked about the possible competition to SBC from web services provided by Google and Microsoft, Whitacre was blunt. “How do you think they’re going to get to customers?” he asked. “Now what they would like to do is use my pipes free, but I ain’t going to let them do that because we have spent this capital and we have to have a return on it. So there’s going to have to be some mechanism for these people who use these pipes to pay for the portion they’re using.” Whitacre’s comments encouraged the FCC, the American communications regulator, to impose network neutrality conditions on SBC when it merged with AT&T that year.
Canadians also learned something about the powers of their ISPs in 2005. Embroiled in an acrimonious labour dispute, Telus, one of Canada’s largest ISPs, blocked a union website, arguing that it contained confidential information that put the company’s employees at risk. But rather than seeking a court order or taking other legal action to shut down the site, Telus simply made it invisible to its customers, along with hundreds of other sites that happened to share the same web host.
The labour dispute passed, but the concerns remained. Would once-neutral ISPs now play favourites with websites, blocking disagreeable content or charging extra for some services? That was the model of cable television and wireless telephone networks; what would stop ISPs from doing the same to the Internet?
Electronic communications are regulated in both Canada and the United States through a series of rules mostly designed for the pre-Internet world. In the US, the FCC has long struggled to craft rules about network neutrality, a significant challenge in an environment of ongoing deregulation. In 2005, a small ISP in North Carolina began blocking the traffic of Internet telephone service Vonage. The outcry was immediate, and the FCC ordered the ISP, Madison River Communications, to stop. The commission then recognized that its rules, written for cable and telephone companies, might not be adequate for these sorts of disputes. Michael Powell, then chair of the FCC, issued a policy statement on broadband that included four principles: that users were entitled to access lawful content, to run applications and use services of their choice, to connect legal devices to the network, and to choose among network providers. But Powell’s statement created guidelines rather than rules, and didn’t have the force of law.
The FCC’s commitment to net neutrality was tested in 2007, when the Associated Press and other media outlets reported that Comcast, the second-largest Internet service provider in the United States, had been preventing subscribers from using BitTorrent, a popular peer-to-peer file-sharing technology. An investigation by the Electronic Frontier Foundation revealed that Comcast was actively interfering with P2P traffic by forging “reset” packets that appeared to come from a user’s own computer, tearing down peer-to-peer connections. Comcast subscribers hadn’t been told about the practice, and the company at first denied that it was blocking BitTorrent traffic. Later in the year, Comcast admitted to the blocking, arguing that the practice was necessary to limit network congestion.
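The mechanics of that interference are worth a moment. TCP connections can be torn down by a “reset” (RST) packet, and each side will accept one as long as it appears to come from the other. The Python sketch below illustrates the forgery at the heart of the Comcast reports; the field names are simplified stand-ins for real TCP headers, and nothing here sends an actual packet.

```python
# Conceptual illustration of TCP reset injection, the technique described
# in the Comcast reports. Field names are simplified stand-ins for real
# TCP header fields; no packets are actually sent.

def forge_reset(conn: dict) -> dict:
    """Build a fake reset packet that pretends to come from the remote peer.

    A middlebox watching a BitTorrent connection can hand this to the
    customer's machine, whose operating system concludes that the other
    side has hung up and abandons the transfer.
    """
    return {
        "src": conn["peer_addr"],      # forged: not really from the peer
        "dst": conn["customer_addr"],
        "flags": "RST",                # reset: tear the connection down now
        "seq": conn["expected_seq"],   # must be plausible or the OS ignores it
    }

conn = {"customer_addr": "10.0.0.5", "peer_addr": "203.0.113.9",
        "expected_seq": 123456}
print(forge_reset(conn))               # what the customer's machine receives
```

Because the forged packet carries the peer’s address, neither end of the connection can tell the ISP was involved, which is why independent testing was needed to uncover the practice.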
The FCC disagreed with the approach, finding that Comcast’s network had not been designed to effectively manage peer-to-peer traffic. Rather than update its technology, Comcast had decided to rely on a less elegant approach to the problem. Calling Comcast’s traffic management unreasonable and overreaching, the FCC ordered the ISP to make technical changes to its network that would allow individual cable modems, rather than the entire network, to be throttled. Comcast complied, but nevertheless went to federal court to have the decision overturned, arguing that the FCC did not have the authority to regulate its Internet operations.
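Per-modem throttling of the kind the FCC ordered is typically built on a rate limiter such as a token bucket, which caps each subscriber’s sustained throughput while still permitting short bursts, without ever inspecting what the traffic is. A minimal Python sketch, with rates invented for illustration:

```python
import time

class TokenBucket:
    """Per-modem rate limiter: allows `rate` bytes/sec sustained, with
    bursts of up to `capacity` bytes. The rates used below are invented."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last decision.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True       # under the cap: forward the packet
        return False          # over the cap: delay or drop it

# One bucket per modem, applied to all of that modem's traffic equally.
modem = TokenBucket(rate=1_000_000, capacity=2_000_000)   # roughly 1 MB/s
print(modem.allow(500_000))   # True: within this modem's allowance
```

The design point is that the limiter is blind to content: a heavy BitTorrent user and a heavy online-video viewer are slowed identically, which is what makes the approach defensible as congestion management rather than discrimination.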
It was not until 2008 that the issue of network neutrality came before Canada’s media regulator, the Canadian Radio-television and Telecommunications Commission. Canada’s largest ISP, Bell, was throttling peer-to-peer traffic in a manner similar to Comcast. The Canadian Association of Internet Providers, an organization of small ISPs that resell wholesale bandwidth from Bell Canada to retail customers, asked the CRTC to order Bell to stop shaping Internet traffic. CAIP argued that Bell was violating the Canadian Telecommunications Act, which says a telecom company may not “unjustly discriminate or give an undue or unreasonable preference” to anyone using its services, nor “control the content” it carries. CAIP also noted that in order for Bell to limit BitTorrent traffic, it had to peek into customers’ data, which CAIP argued was a violation of Canadian privacy laws.
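That last point is what distinguishes deep packet inspection from ordinary routing: the equipment reads the contents of a packet, not just its address. BitTorrent, for instance, opens every session with a distinctive handshake, which a classifier can match, as in the simplified Python sketch below. The handshake bytes are part of the real BitTorrent protocol; the sample payloads are made up.

```python
# Simplified deep packet inspection: classify traffic by payload contents
# rather than by address. The handshake below is the real opening of a
# BitTorrent session; the sample payloads are invented.

BT_HANDSHAKE = b"\x13BitTorrent protocol"

def is_bittorrent(payload: bytes) -> bool:
    """True if the payload looks like the start of a BitTorrent session."""
    return payload.startswith(BT_HANDSHAKE)

for payload in (BT_HANDSHAKE + b"\x00" * 8,          # a BitTorrent opener
                b"GET /index.html HTTP/1.1\r\n"):    # an ordinary web request
    print(is_bittorrent(payload))                    # True, then False
```

Matching those bytes requires reading the customer’s data in transit, which is precisely the privacy objection CAIP raised.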
By the time the CRTC issued its own policy on Internet traffic management in late 2009, other Canadian ISPs, including Rogers, had also introduced peer-to-peer throttling. Unlike the FCC, the CRTC did not order ISPs to change their network management practices or network infrastructure, nor did it rule that the ISPs had violated provisions of the Telecommunications Act. Instead, the CRTC reaffirmed its general commitment to non-discrimination and established a complaints-driven process on traffic management. The result: Canadian ISPs can manage their networks as they see fit, and the CRTC will only intervene in response to complaints. Many supporters of network neutrality were disappointed with the policy.
Many agree that Canadian consumers would benefit from greater choice among ISPs, and not just in the area of network neutrality. Steve Anderson, national coordinator of media advocacy group OpenMedia.ca, has argued that the wholesale and retail services of large ISPs should be operated separately. Under this model, called functional separation, large ISPs would be forced to sell their bandwidth to retailers, who could provide a variety of services at various price points. According to a 2009 KPMG study, retail prices decreased after the United Kingdom implemented functional separation, and several other jurisdictions, including New Zealand, Italy, and Sweden, have followed suit.
Some critics have suggested that both sides of the network neutrality debate might be missing some important points. Columbia Law School Professor Tim Wu, who coined the term “network neutrality,” has noted that treating all Internet application traffic equally puts time-sensitive traffic, such as live video and voice-over-IP, at a great disadvantage. He has suggested that classes of data, though not data from specific providers, should be given preferential treatment, for example by prioritizing streaming video over email. Bram Cohen, the creator of BitTorrent, has urged governments to be cautious when introducing net neutrality rules, arguing that overreach and absurdities are a likely result, such as ISPs being unable to block spam or malware attacks.
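Wu’s distinction, in other words, is between discriminating by class of application and discriminating by provider. A class-based scheduler always drains the queues for time-sensitive traffic first, no matter whose service the packets belong to. The Python sketch below illustrates the idea; the class names and their ordering are invented for illustration.

```python
import heapq

# Priority by class of traffic, not by provider (Wu's distinction).
# Class names and their ordering are invented for illustration.
CLASS_PRIORITY = {"voip": 0, "streaming_video": 1, "web": 2, "email": 3}

class ClassScheduler:
    def __init__(self):
        self.queue = []
        self.counter = 0      # tie-breaker keeps FIFO order within a class

    def enqueue(self, traffic_class: str, packet: str):
        self.counter += 1
        heapq.heappush(self.queue,
                       (CLASS_PRIORITY[traffic_class], self.counter, packet))

    def dequeue(self) -> str:
        return heapq.heappop(self.queue)[2]

s = ClassScheduler()
s.enqueue("email", "newsletter")
s.enqueue("voip", "call audio frame")   # anyone's VoIP, not one partner's
print(s.dequeue())                      # -> "call audio frame"
```

Crucially, the scheduler never asks who sent a packet, only what kind of traffic it is, which is what keeps Wu’s proposal on the neutral side of the line.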
The debate about the future of the Internet is far from over. In the past few days, Republican lawmakers in the United States attempted to block new FCC net neutrality regulations, while Tony Clement, Canada’s Industry Minister, has stated that he will overturn a CRTC decision that would see restrictive bandwidth caps applied to Internet resellers. When contemplating the notion of network neutrality, it’s important to keep in mind that decisions made today will likely shape how Canadians can use the Internet for years to come.