This post is a high-level overview of how data privacy is proceeding in the US as of October 2021. I hope it might be of interest to those US citizens who are wondering what all this noise about “data privacy” is, why it’s a real thing, and why it matters. It is intended to provide some historical context behind current events in data privacy legislation, as well as insights as to where such legislation may be headed. I use some historical highlights from the early development of the Internet to help illustrate how this important topic evolved in the United States, including personal observations (along with my biases) of how the Internet has evolved since the 1990’s.
Regulation around data privacy has been a fact in the European Union since the GDPR (General Data Protection Regulation) was passed in 2016 and took effect in 2018; however, I doubt many Americans are aware of how long the public discussion around data privacy had been going on amongst civil society in Europe. Its history goes back to the beginnings of the influence of computers on society in the 1970’s; see the historical overview “A brief history of the General Data Protection Regulation” at the IAPP. This discussion, largely internal to the parts of the EU government concerned with it, intensified in the 1990’s as the commercialization of the Internet began to take hold, and its technological possibilities and social implications became more apparent.
In contrast, there was little discussion of the implications of the digitalization of people’s personal data in the US during those times. Although there had been growing use of bulletin board software among amateur enthusiasts in the US in the 1980s (including myself), there was only slight discussion of this technology among the general public, and therefore only slight interest from corporate investors. Although technology companies had begun running their own bulletin boards for competitive reasons, these systems remained slow, cumbersome and difficult to scale. The technology was certainly not on the radar of Wall Street as an investment opportunity.
However, this perception began to change when the Internet exploded into public awareness in the early 1990’s with the release of the Mosaic browser in 1993. While agonizingly slow, it allowed, for the first time, viewing of the few available webpages, such as coverage of the 1994 Olympic Games, which I recall seeing in a professor’s office on the CU-Boulder campus at the time. This immediately captured the public’s attention, though the slow speed of this first crude browser made it somewhat impractical for actual applications; it remained more a technical amusement than a tool for real work.
The splash of attention caused by the Mosaic browser, which illustrated a promising new technology but did not result in any immediate industrial changes, has echoes earlier in the twentieth century. The spread of radio in the 1920’s followed a similar pattern. At first, radio was just a technical novelty, purchased by ‘early adopter’ households among those who could afford it, and its capabilities were quite limited. But once radios were priced at what most households could afford, the potential of this technology to expand markets for nearly any sellable product became obvious to the business community. Internet technology was no different: it spread in the 1990’s as phone modem technology improved, allowing more and more people to start using text-based email, bulletin boards (precursors to websites) and this newfangled thing called a web browser.
From what I observed at the time, from my perspective as a graduate student in telecommunications in the mid-1990’s, there was a groundswell of blind exuberance in US civil society that likened the Internet to the opening of a new frontier. By contrast, the discussion among European civil servants and intellectuals was more muted, cautious and realistic; they correctly perceived the risks involved with this new technology: the ability to commit fraud, create fake identities or endanger individuals’ privacy. Although I was also caught up in the excitement of that moment — that this technology could create a kind of Star Trek-like ubiquitous public utility, free to the public — I always felt some reservations. I could not articulate them at the time, but from inklings gained from my telecommunications education at CU-Boulder, I sensed that technology companies might brush aside the public potential of the Internet and rush into it for the business opportunities.
In hindsight from 2021, I can see that this naive enthusiasm about the potential of the Internet bore strong similarities to the opening of the Western frontier in the late 1800’s. With the historical perspective now available, we know that the opening of the Western frontier depended heavily on developments like the transcontinental railroads, which were accompanied by hefty amounts of political corruption and graft; on the hysteria of “gold fever” and the abrupt rushes it set off; on land speculation among politicians and the wealthy; and, of course, on the genocide of the indigenous people wherever they got in the way — all of this was rampant throughout the West. Nonetheless, popular culture romanticized it in many Hollywood movies; but when the veneer of romantic notions about how the American West was settled is peeled aside, it’s not a pretty picture of history.
Truth be told, economic exploitation by the capitalists of the 19th century — whether in land, gold, railroads or other natural resources and industries — was the predominant factor explaining how the Western US was settled. I think what we are now experiencing in the 2020’s with how the Internet has evolved is following a similar pattern.
How history repeats itself: this leitmotif of economic exploitation of the American West seems appropriate in evaluating how the Internet has developed — and the limits of allowing unfettered market activity to take its course. This seems especially pertinent when we examine how data privacy has been treated during the recent evolution of the Internet.
Any discussion of individual rights on the Internet was very much an afterthought in the US, and was not part of the conversation as it was in Europe, prior to the Internet’s rise to its current domination of our society. What is glaringly obvious in hindsight is that the development of the Internet in the US for the past few decades has been driven nearly exclusively by the marketing of goods for the private sector, with little thought or foresight about what could happen to individuals or to society at large, or about its other potential uses in public health and the environment — much less its potential effect on politics. Although there was lip service to promoting competition in the telecommunications markets, such competition was slow to develop; even after twenty years, the only area with real competition is mobile phone service, and even there, Americans are typically limited to choosing between two or three providers.
This theme, allowing the private sector to exploit economic opportunity without thinking ahead about the potential harms, has a close parallel in US environmental history, which is truly harrowing and should serve as an example of why greater regulatory protection is needed. Consider the environmental disasters that have occurred just in my lifetime: the embrace of DDT in the 1950s, which was not banned until 1972; Love Canal, which by 1977 had caused multiple deaths, sicknesses, miscarriages and birth defects before being cleaned up, and which resulted in passage of the Superfund Act of 1980, an attempt to address the 1,344 sites known to still need cleanup that remains chronically underfunded; then, in 2010, the Deepwater Horizon oil spill… I could go on, but I think I have made my point: government in the US at all levels is slow to act on social and environmental harm until the damage is horrifyingly obvious.
Are we going to learn something from history here, and step up and do something preemptive in the realm of guiding the development of the Internet, the greatest information system ever devised by mankind, with huge potential for enhancing education, science, health and environmental protection? Or are we going to continue to allow great social harm to develop because of a lack of foresight? We already have a taste of what kind of harm can develop in dealing with the aftereffects of the January 6 attack on the US Capitol, which was a direct result of the mass use of social media to produce and peddle misinformation.
Unfortunately, the federal government’s first steps in data privacy were not to protect individual privacy, but rather to ensure that no one’s communications could be hidden from the government.
What the US government was concerned about in the 1990’s was not the privacy rights of individuals but rather the ability of the government to intercept and decipher any electronic communication.
When Phil Zimmermann published his software tool, Pretty Good Privacy (PGP), on the Internet in 1991 as an open-source encryption tool, it landed like a bomb in the public media. In effect, the tool allowed anyone with the appropriate software skills to encrypt their texts, emails and hard disks with a very high level of security. How high? Well, it’s how Edward Snowden encrypted his email messages to prevent the NSA from reading them; and Mr. Snowden is still at large in Russia. I’d say PGP is pretty secure encryption; however, its practical impact was limited to the technically literate.
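To give a flavor of the mathematics that makes tools like PGP so hard for governments to break, here is a toy round-trip of RSA, the public-key scheme PGP originally relied on, using the classic textbook parameters (real keys are 2048 bits or more, and PGP layers symmetric encryption on top — this sketch is purely illustrative):

```python
# Toy RSA: illustrates the public-key principle behind PGP.
# The primes below are tiny textbook values; production keys are astronomically larger.
p, q = 61, 53            # two small primes
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # Euler's totient: 3120
e = 17                   # public exponent, chosen coprime to phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (Python 3.8+)

def encrypt(m: int) -> int:
    """Encrypt an integer message with the public key (e, n)."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Decrypt a ciphertext with the private key (d, n)."""
    return pow(c, d, n)

message = 65                      # e.g. the ASCII code for 'A'
assert decrypt(encrypt(message)) == message
```

Anyone can encrypt with the public pair (e, n), but recovering d requires factoring n into p and q, which is infeasible at real key sizes; that asymmetry is what made export-grade control of such software an impossible task.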
The publication resulted in a multi-year investigation of Zimmermann by the US Justice Department, which dropped its case in 1996. I recall going to hear Mr. Zimmermann speak at a public function in Aspen in 1995, when the outcry over his prosecution was at its height. As a professional programmer myself, the injustice and unreasonableness of it all was blatantly obvious to me, as was the lack of credibility of the government’s assertions about why it wanted to control data encryption — an impossible task, since the genie was already out of the bottle. Punishing individuals like Mr. Zimmermann was not just bad policy; it made the US DOJ look foolish to the rest of the technical community, and brought hardship and stress to Zimmermann and those around him. Dropping the case in 1996 was the right decision.
The Dot Com period, roughly 1995 to 2002, was a prime example of how the American habit of confusing economic exploitation with actual economic development results in waste that is both inefficient and destructive.
Once the commercialization of the Internet got rolling in the US in the late 1990’s, it did not take long before Wall Street speculators got involved. As concisely described in the Wikipedia article on the dot-com bubble, this was all about economic exploitation and getting rich quick, with companies rushing to complete their IPOs before the bubble burst. What made this Wall Street bubble different from previous ones was the unknown nature of Internet technology. The new center of speculation was a technology that did not fit in well with other regulated industries. The Federal Communications Commission (FCC) regulated the technology but had no mandate to oversee investments in it. The Securities and Exchange Commission (SEC) kept watch over fraud in the financial markets but had no expertise in understanding telecommunication networks, much less how websites worked, or whether the ambitious claims in these companies’ IPO filings had merit.
As such, there were effectively no regulatory bodies to check the frenzied speculation in companies whose primary assets were specious ideas about their websites and investor enthusiasm. The bubble eventually collapsed in on itself in early 2000, as the dot-coms ran out of cash and their stratospheric stock prices came crashing back to Earth by 2002.
It should be noted that the dot-com bubble got a real boost from the permissive financial regulatory environment of the time, presided over by then Fed Chairman Alan Greenspan. During his long reign at the head of the Fed after being appointed by Reagan in 1987, his opaque public remarks about economic conditions and Fed policy earned him a reputation as inscrutable but respected. He never once wavered from the “free market” ideology that dominated the Republican Party, though he later recanted before Congress, admitting that his market ideology had been sadly mistaken — but only after the 2008 financial panic nearly brought down the global economy. Yet another example of how the US allows a pattern of massive exploitation before taking any preventive action.
The financial regulatory authorities may have learned something from that history; but what has the US government learned about the need to regulate the Internet? The post-9/11 period offers some lessons on that score.
The government’s stance toward private online data and, more particularly, live communications took a sudden turn for the worse after September 11, 2001. With the passage of the Patriot Act, barely over a month after the fateful attack, the National Security Agency (NSA) charged into action — and apparently simply started recording everything it could, building its own software tools to analyze it. Beneath this turbulent period of US invasions of first Afghanistan and then Iraq, with the CIA pursuing terrorists around the world, legally and illegally, was the ramping up of cyberwarfare capabilities and hypersurveillance of the Internet by the NSA, working in conjunction with the United Kingdom. None of this would likely have come to light were it not for the revelations shared by Edward Snowden in 2013. After he contacted a couple of journalists in Hong Kong that year and shared thousands of classified documents from his work at the NSA, a global spotlight was suddenly shone on just how flagrantly the NSA had invaded everyone’s privacy, from phone calls to emails, with telecommunications providers giving the agency access to their networks. Though this surveillance was focused primarily on the US, it amounted to de facto Internet-wide access, since most of the Internet’s backbone traffic flowed through the US.
With its sweeping actions, the NSA made clear it never considered that anyone had a right to individual privacy when it came to its work; but Edward Snowden certainly did. Beyond the privacy of individuals, he was concerned that the very institutions of democracies around the world were threatened by the actions of the NSA and other security agencies. The threat of an authoritarian government misusing such power was an obvious possibility — which has certainly turned out to be the case in modern-day China. The explosive information that Snowden shared has resulted in a large, organized backlash against these actions of the NSA and other agencies, with the more blatant bulk collection of data now being questioned. This saga is far from over: Edward Snowden still must live in Russia to avoid being criminally charged by a hostile US government, and the programs of the NSA, apparently, are still ongoing under a cloak of classified secrecy.
Which brings us up to the present…and a growing awareness that perhaps people should be able to control their personal data.
Since 2018, individual states such as California, Virginia and Colorado have taken the lead in passing legislation that guarantees privacy rights. Much of this activity has been in reaction to the flagrant abuse of people’s private data by tech giants and their cash cow, targeted advertising, which had been developing in the dark as people began flocking to social media in the 2000’s.
The Internet evolved out of an academic environment with high levels of trust, sharing and cooperation, which is well documented in Where Wizards Stay Up Late by Katie Hafner and Matthew Lyon. Little thought was given to cybersecurity threats or how data privacy might be abused. As the motto of the Internet Engineering Task Force put it, the emphasis was on “rough consensus and running code.” This same spirit of openness and optimism greeted Google and Facebook when they first appeared.
There were only a few simple rules regarding how companies should treat people’s data; but evidence began to trickle out about how closely Google and Facebook were tracking their users’ every click, analyzing this data to figure out how to place ads targeted directly at individual users — a previously impossible advertising mechanism. This was the beginning of what has come to be known as ‘ad tech’: technology for digital marketing that is entirely automated, invisible to the user but entirely visible to the companies collecting and sharing the data — and completely unregulated, other than by standard contract law about truth in advertising and outright fraud.
Unfortunately, much of this social media software architecture also had rather porous security standards, as Facebook found out the hard way. When the Cambridge Analytica scandal blew up in its face in 2018 thanks to a whistleblower, Facebook had a full-scale public relations disaster on its hands, one that required the entire focus of Mark Zuckerberg and the company’s upper management. Zuckerberg was subpoenaed to testify before Congress, on quite a hot seat, to explain how the personal data of 87 million Facebook users, most of them Americans, was harvested and used intensively in the 2016 presidential election by both the Trump and Ted Cruz campaigns. Essentially, he had no explanation. Facebook was subsequently fined $5 billion by the Federal Trade Commission, and paid a £500,000 fine (approximately $650,000 in 2019) to the UK Information Commissioner’s Office — which hardly dented Facebook’s balance sheet that year.
Public awareness was also heightened by the growing breaches of private data at large corporations throughout the 2000’s.
As of June 2021, Colorado, California and Virginia have passed binding data privacy laws, with about a dozen other states considering them. (The IAPP’s US State Privacy Legislation Tracker is an authoritative source to check the current status of state level privacy legislation.)