I am a technology policy professional. I have worked on technology issues in both federal and local government, as well as internationally, and will be starting my PhD in the subject this fall. This is an admittedly vague description, as technology policy is not as clearly defined a field as finance, agriculture or health care. When I tell people I work in this field, they invariably ask: “Oh, like cybersecurity?” I don’t work in cybersecurity, but the question makes sense, as privacy and security receive far more media attention than any other technology topic. This is understandable, given the innumerable data breaches, disinformation campaigns and degradations of personal privacy. The modern Internet has been painted as a confusing and scary dystopia in which users struggle against faceless hackers, shifty governments, online harassment and other forces far beyond their comprehension or control. In this context, it’s no surprise that the technology issues with the greatest public focus emphasize stability and control. But the Internet is, has been, and I’m confident will continue to be much more than that. While online privacy, safety and security will always be important, I’m more drawn to policies that promote the positive consequences of computers and the Internet, and that will help them continue to evolve in the coming decades.
When talking about the Internet, it’s helpful to understand its history and evolution into the technology we know today. First conceived of as an “intergalactic network” in 1962 by J.C.R. Licklider, who went on to lead ARPA’s (now DARPA’s) Information Processing Techniques Office, the ARPANET was envisioned as a technology that could help researchers using large, expensive computers share research and technical resources more easily. It was developed by a small group of research institutions, including the University of California, Los Angeles; the University of California, Santa Barbara; the University of Utah; and the Stanford Research Institute. Other research institutes soon followed, and by the early 1970s not only had the ARPANET grown immensely, but it had also inspired many similar computer networks, such as the wireless ALOHAnet in Hawaii and the commercial Telenet. These networks were developed independently and had no way to connect directly to one another. Vint Cerf and Robert Kahn, who had both been involved in early ARPANET development, collaborated on the Transmission Control Protocol and Internet Protocol (TCP/IP), which joined these various networks into an internetwork. Although it faced some competition, TCP/IP soon became the de facto standard on every device connected to the network, and it remains so to this day. Other developments followed throughout the ’70s and ’80s, such as the Ethernet protocol, which most people know from its use in wired network cables, and the Domain Name System (DNS), which translates the website addresses we type into browsers into the numeric addresses computers use.
Until this point, the Internet had been used almost exclusively by universities and research institutions; in fact, commercial activity was not allowed. But in the late ’80s, with the increased prevalence of personal computers, Al Gore pushed for legislation to enhance computing and bring it to more people. Tim Berners-Lee also invented the Hypertext Markup Language (HTML) while working at CERN, which paved the way for the World Wide Web. These developments caused a marked shift in the use of the Internet from research to everyday browsing and commerce. While it was a great opportunity to spread information and connect people, the sudden repurposing of a network designed for researchers into a platform for a wide variety of public activities gave rise to all sorts of questions. Some of these questions led to policy responses codified in legislation, such as the Telecommunications Act of 1996 and the Digital Millennium Copyright Act, while others were considered but not acted upon. After all, the Internet was a nascent technology, and nobody wanted to “overlegislate” it, so lawmakers largely left its development to the private sector. This reticence, however well-intentioned, had the effect of ossifying and in some cases magnifying certain Internet policy issues, of which privacy and security are only a part. There are many opportunities for new or improved policies that can help the Internet develop into a tool that is truly transformative for everyone.
One particularly important topic is broadband access. For as much of a game-changer as the Internet has been, only a little over half of the world’s population has consistent access to it. Given how integrated the Internet has become into modern life, those without it lack a significant amount of information and capability in today’s world. Depending on exactly where those without Internet live, there are a variety of challenges to bringing them online. Sometimes the issues are technical: the ground isn’t conducive to laying cable, or nearby mountains block radio signals from cell towers. Sometimes they’re economic: users can’t afford a broadband subscription, or no broadband options exist in their area because it isn’t profitable for an Internet provider to build one. And sometimes they’re legal: the local government is prohibited from building Internet infrastructure of its own, or the national government withholds Internet access for political ends. Some countries are even threatening to cut themselves off from the global Internet in favor of a national “intranet,” in which the government can control and monitor traffic flowing in and out of the country. This echoes China’s “Great Firewall,” which blocks many foreign websites while domestic alternatives mimic their functionality: Renren takes the place of Facebook, Weibo stands in for Twitter, Youku Tudou substitutes for YouTube, and so on. Whatever the reasons holding individuals back from participating in the global Internet, policies that encourage their participation will be a rising tide that lifts all boats, as every new user can not only learn an extraordinary amount from the Internet but contribute to it as well.
The Internet has also provided innumerable opportunities for artists, creatives and generally anyone who has something they’d like to share. One does not need to look very far into YouTube, Wikipedia or even the open-source movement responsible for software like Linux to see examples of this. But certain aspects of our policy around intellectual property (IP) haven’t kept pace. In early policy deliberations about online content, existing IP law was held sacrosanct: if it’s illegal to distribute or sell unauthorized copies of a copyrighted work by hand, lawmakers reasoned, why should it be any different online? Each unlawful copy made on the Internet, therefore, was to be treated the same as an unlawful physical copy. But that notion quickly fell apart: not only is it difficult to distinguish what constitutes a “copy” from a technical perspective, the law simply isn’t enforceable at scale. As the saying goes, the Internet is a giant copying machine. The first solution was Digital Rights Management (DRM), protective technology meant to prevent software or media files from being used without authorization, and the Digital Millennium Copyright Act (DMCA) made it illegal to break or circumvent it. But that too became unenforceable, especially once DRM was circumvented in countries outside the US and file-sharing programs like Napster and BitTorrent clients made distributing copyrighted works far easier. The Recording Industry Association of America took to suing individuals who had downloaded individual songs in the early 2000s, but the public optics of that didn’t play in its favor. These days, with streaming services like Netflix and Spotify, this has become less of an issue for music and media.
However, the DMCA is still on the books, which makes it difficult to legally access older games and media whose copy protection the artist or publisher no longer supports (occasionally official exemptions are carved out). This puts archivists and other enthusiasts of older media in a legal gray zone, and it’s becoming increasingly clear that a law passed over 20 years ago no longer matches our digital realities.
This was certainly the case for Aaron Swartz, a programmer and Internet activist who first rose to fame by helping develop the Really Simple Syndication (RSS) format and co-founding the reddit website. However, in 2011 he gained a more adverse kind of notoriety when he was arrested on charges of breaking and entering and violations of the Computer Fraud and Abuse Act. He had connected a laptop to the network in a closet on the MIT campus and batch-downloaded about 70 GB worth of papers from JSTOR, a digital library of academic journals. His exact motivations were never fully clear, but federal prosecutors brought charges carrying a potential sentence of up to $1 million in fines and 35 years in prison. While no one denied that he broke the law, many felt that the charges were out of step with the crime. The prosecutors claimed that Swartz stole (or, rather, downloaded) “millions of dollars” worth of property, but Lawrence Lessig, a Harvard professor and close friend of Swartz’s, wrote on his blog that “anyone who says that there is money to be made in a stash of ACADEMIC ARTICLES is either an idiot or a liar” (original emphasis maintained). Despite the support he was receiving, the trial took a significant financial and emotional toll on Swartz, and he hanged himself in his apartment in 2013. This tragedy threw the difference between the intent and application of the law into sharp relief, and it seems we need a significant reconsideration of how digital intellectual property is conceived of and managed online. With the amount and variety of online information and content growing exponentially, users will need effective ways to access, analyze and understand it all. Laws and prosecutions that do not serve those interests will only hold the world back from what life and work on the Internet can be.
A third important policy topic concerns nascent Internet and information technologies that are still in development and have not yet found a reliable place in our society. For example, virtual and augmented reality have appeared in consumer products for over a decade, but none has managed to take hold reliably enough to bring the technology’s benefits to consumers on a wide scale. Similarly, blockchain technology has been used in targeted applications such as cryptocurrencies like Bitcoin and private resource pools within large companies and banks. However, the average person does not interact with blockchain technology on a daily basis, so for most of the world it is still emerging. The Internet of Things is starting to appear in wearable devices like smartwatches and on manufacturing lines within industrial companies, but like the Internet itself, the technology has the potential for many different applications and will likely enhance many of the devices we interact with every day. Setting policy for such technologies can often be risky, as few people understand how they work, and they may end up being used in contexts that were not originally anticipated. That said, the right kinds of policies can encourage speedier adoption of these helpful technologies, and even the development of altogether new ones. Policy often lags behind technology, but that’s less a consequence of policy itself and more of the fashion in which policy is usually made. When an exciting new discovery or device is announced, policy is not usually part of the discussion, but it is a crucial lever that we as a society can use to decide exactly how we want our new tools to serve us.
No one can deny that our society is very different from what it was 20 or even 10 years ago, and the pace of technology, along with all of the social interactions and consequences it has enabled, has been a significant influence on that. There are many important issues to consider, but perhaps the most crucial is who exactly we want to put in charge of our technology. The computing revolution, the ARPANET and the open-source movement were not spurred by corporate giants like Google or Apple. They came from public institutions like government, universities and research organizations, and even from people like you and me. In fact, during the early days of the Internet a “cyberlibertarian” culture emerged, which eschewed involvement of government, regulation or any other influence that might dampen the benefits of a free (as in free speech, not free beer) World Wide Web. Cyberlibertarian John Perry Barlow even published “A Declaration of the Independence of Cyberspace” in 1996, declaring: “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather. […] We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.” It’s clear now that the web has taken a different turn than the one Barlow envisioned, but that doesn’t mean we’ve lost control entirely. Most people in most places still have the ability to visit any website they wish, and it’s becoming easier to build something new, upload a video or carve out your own little corner of the web. If we care about the Internet as users, then we should be concerned about what kind of Internet we will have going forward.
And if we don’t like where things are going, then we need to consider what every aspect of the Internet – not just privacy and security – should look like.