A brief history of copyright on the web: Part one


When the Internet was conceived, copyright law was not a consideration. Developed initially for a mixture of academic and military purposes, the technologies that made the Internet possible were never seen as having copyright implications.

However, by 1994, the Internet was finding its way into homes at a rate unmatched by the early days of radio or television. Almost overnight, the web went from a playground for academics to a resource for the general public. Personal homepages sprang up, message boards and forums grew in popularity and chat rooms became regular sources for lively discussion.

Along with all of this new connectivity came a great deal of copying. Text was being copied and pasted, images were being posted and, eventually, audio and video files were being shared.

Along with this copying came a whole new set of copyright questions, questions that the law was unprepared to answer. This, in turn, caused governments and international organizations alike to race to update their antiquated laws to make them more fitting for the digital age.

Although many of these revisions have not been well regarded, they have helped to shape the Internet we have today and directly influence the way we do business online.

Copyright before the web

Prior to the development of the Internet, most copyright law, especially on the international front, drew its origins from an international agreement known as the Berne Convention for the Protection of Literary and Artistic Works.

The Berne Convention, ratified in 1886, has been revised or amended seven times since then, the last time in 1979. The convention, which counts some 163 countries as parties to the treaty, sets basic guidelines for the types of works that are protected and what protections they receive.

Article 2 of the Berne Convention would wind up having a profound impact on the development of the Internet. By extending copyright protection to any literary or artistic expression that has been fixed in some material form, the Berne Convention ensured that all works posted to the Internet would be protected by copyright.

Still, there were many early copyright cases, such as Sega v. Galaxy in Australia, which sought to challenge the copyrightability of digital works, in that case a video game. However, in country after country, the copyrightability of digital works was upheld and most of the legal questions centered around how, not if, the works were protected.


But once the copyright protection for digital works, and specifically works on the web, was established, a whole new problem arose. The entire Internet, by its very nature, was a copyright infringement and it was up to the legal system to sort out exactly where to draw the line of liability.

The nature of the beast

Since the early Internet was so heavily concentrated in the United States, it was US law that would guide much of the early development of the Web. However, American law, at the time the Internet was starting to get established, was very hostile toward any copying of digital works.

The most potentially devastating ruling came in 1993, when the US Court of Appeals for the Ninth Circuit found that copying a program into RAM, even though the copy was only temporary, was a potential copyright violation. The case, MAI v. Peak, dealt with diagnostic software used to repair machines. There, the court ruled that a program loaded into RAM is "sufficiently permanent or stable to permit it to be perceived, reproduced, or otherwise communicated for a period of more than transitory duration," and therefore constitutes a copy under copyright law.

The problem is that the web cannot function without copying. The mere act of opening up a web page can create several copies of a work: one in the RAM of the computer, similar to what is described in the MAI case, another in the browser's cache and still another in the cache of the ISP. Without this copying, the Internet would either function much less efficiently or not at all.

However, despite that ruling, most legal experts felt that there was no infringement in viewing a web page. It was generally held that there was an implied license granted from the person who was posting the web page to have it downloaded to RAM, cached or otherwise stored. In a bid to nurture this implied license, systems were developed to enable Webmasters to opt out of being cached. These systems included meta tags (http://www.w3schools.com/tags/tag_meta.asp) embedded in the HTML of the page itself and a robots.txt file placed on the server.
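As a rough sketch of how these opt-out mechanisms work (note that specific directives such as noarchive were conventions adopted by individual search engines over time, not part of a formal standard from the outset):

```html
<!-- Placed in the page's <head>: asks compliant crawlers
     not to index the page or keep a cached copy of it -->
<meta name="robots" content="noindex, noarchive">
```

```text
# robots.txt, placed at the root of the server
# Asks all crawlers to stay out of a (hypothetical) /private/ directory
User-agent: *
Disallow: /private/
```

Compliance with both mechanisms is voluntary on the crawler's part, which is why their legal effect rested on the implied-license theory rather than on any technical enforcement.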

However, this argument would not be tested in any significant way until 2006, when a district court in Nevada ruled that the Google Cache was fair use, in large part due to the implied license the plaintiff granted by not using the available tools to opt out.

At the same time the ruling exonerated Google’s caching practice, it also ensured that regular surfers would not be hit with surprise lawsuits for caching pages or writing them to their RAM or their hard drive’s cache.

Although such a lawsuit seems far-fetched today, it was a serious legal question 12 years ago and it was one of two questions that nearly stopped the Internet before it really got started.

The other, however, required a new law to settle the issue for good.

In Part Two we will look at the history of liability for web hosts and other service providers and changing laws and rulings surrounding services that enable copying.