Mass Media Then and Now: A Historical Framework for Regulating Controversial Speech on the Internet

Much as the rise of the Gutenberg press transformed communication and ushered in vast societal changes, the rise of the Internet continues to transform communication and society today.  It is helpful to think of “the media” as having undergone several distinct phases, each bringing about profound changes in the transmission of ideas.  Each phase has raised new challenges and considerations for how speech should be regulated, or not.

Understanding this historical framework can help address some of the burning questions of the day in media law.  This framework can help answer where to draw the line of responsibility or potential liability for tech companies that de-platform individuals like former President Trump or deny Internet infrastructure to alternative platforms like Parler.  It can also help answer what to expect of technology companies, large and small, in managing misinformation, such as disinformation about election results associated with the Capitol riots.

Pre-Media

In the pre-media phase, the ability to reach a wide audience was limited.  Ideas could be carved in hieroglyphs on a temple wall.  A person could send a letter or telegram to select individuals, delivered by courier.  Or a speaker could address small gatherings of individuals in person.

This slow pace of communication made it very difficult for new ideas to spread or to transform societies.  Unwelcome or dissenting ideas could be ignored or easily managed by those in power, whether patriarchs or dominant institutions.

Mass Media – First Wave

The invention of the printing press with movable type, around 1440, was the technological innovation that laid the groundwork for the rise of mass media in the centuries that followed.  With this invention, ideas could be printed and physically distributed in pamphlets, books, or newspapers, all capable of reaching a wide audience like never before.

The capacity to physically print and disseminate ideas ushered in profound societal changes, including the ability to challenge religious dogma.  It became far easier to spread new ideas, from scientific discoveries to social movements.  Society moved out of the Dark Ages.

After the printing press came other technologies, such as radio and television broadcasting, that likewise made it possible to reach a large audience easily.  What all these technological developments had in common, however, was one thing:  a high barrier to entry.  It cost a great deal to operate a printing press or to produce a radio or television show.

Consequently, the power of mass communication became concentrated in the hands of relatively few individuals or institutions.  Those individuals and institutions effectively served as gatekeepers, enjoying special privileges along with certain obligations to the public.

This understanding gave rise to government regulation of media and to First Amendment jurisprudence in the United States.  Congress created the FCC through the Communications Act of 1934 to grant licenses to radio and, later, television broadcasters in exchange for their commitment to serve the public interest with quality programming.

First Amendment jurisprudence granted publishers certain protections, in the form of legal privileges, that made it difficult to sue them successfully except in egregious circumstances where the publisher acted with actual malice, that is, with knowing or reckless disregard for the truth.  As the Supreme Court recognized in its landmark 1964 decision New York Times v. Sullivan, publishers need a certain amount of breathing room and margin for error to carry out the important reporting functions necessary to serve the public interest in a robust democracy.

Mass Media – Second Wave

The rise of the Internet in the 1990s spawned the next phase of mass media, much as the printing press had sparked the first phase more than 500 years earlier.  The Internet made it possible for anyone to reach a large audience.  Anyone could become a publisher by creating his or her own blog or website or by commenting on stories through social media.

Social media became a driver for disseminating news.  Social media, along with the rest of the Internet, redistributed much power away from traditional news publishers to individuals.  Individuals gained the power to determine what was newsworthy, what went viral, and even what seemed true.  The previously high barriers to reaching a mass audience were lowered, if not removed entirely.

Originally, the U.S. government heavily protected all the players in the Internet ecosystem, who became known as information service providers or intermediaries, in recognition of their role in facilitating the exchange of information and ideas.  In the 1990s, Congress passed two laws, the Communications Decency Act (CDA) and the Digital Millennium Copyright Act (DMCA), that afforded these service providers even broader protections than traditional publishers had received under First Amendment law.  The protections offered under the CDA and DMCA give social networks like Facebook and Twitter, along with many smaller platforms that carry user-generated content, broad immunity from liability for carrying the speech of others.  These statutes also provide the same broad immunity to purveyors of the technical infrastructure that platforms rely on to deliver content to users.

Although these intermediary protection laws have recently come under great attack from both ends of the political spectrum, in many ways they make a lot of sense.  The barrier to entry is low to nonexistent for any individual speaker on the Internet.  Consequently, the intermediaries themselves typically are less well positioned to perform a meaningful gatekeeping function than a traditional newspaper or book publisher (or a radio or television producer).

Unlike the intermediaries on the Internet, a newspaper or book publisher typically invests a great deal of time in the creation of the stories themselves.  A newspaper or book publisher holds fairly exclusive keys in deciding what to publish, but intermediary service providers generally do not, because there are so many different avenues for speech on the Internet.  With less power for the intermediary players should come less responsibility.

The courts and Congress, nevertheless, have begun to rein in the wide latitude they initially granted to intermediaries in the Internet ecosystem.  In 2008, the Ninth Circuit chipped away at the CDA’s very broad immunity provision when it ruled that an online roommate-matching service could not claim immunity for requiring users to state roommate preferences alleged to violate housing anti-discrimination laws.  Congressional passage of the Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA) in 2018 set out to penalize technology companies that assist sites like Backpage in profiting from human sex trafficking.  Both developments would previously have been unthinkable under the spirit of the CDA’s broad immunity provision, known as Section 230 of the CDA, or simply “Section 230.”

Now a clamor has been growing for platforms to be held responsible for the accuracy of the information they spread and for content moderation decisions seen as biased in one direction or another.  People express sometimes opposing concerns: that platforms disseminate harmful information, on the one hand, and that they censor controversial views, on the other.  In March 2021, Congress held hearings in which members paradoxically drubbed the CEOs of Facebook, Twitter, and Google both for doing too much (suppressing controversial, conservative views) and for not doing enough (failing to suppress misinformation from Trump and his supporters associated with the Capitol riots).

Where should the lines of responsibility be drawn at this stage of the Internet age?  This is among the pressing questions of the day, with far-reaching ramifications for democracy and speech in the 21st century.  Answers will not be simple.

However, arriving at a good answer involves considering two questions:  First, what barriers to entry do speakers face?  Second, what is the gatekeeping power of each player or institution that enables dissemination?  These factors historically have shaped mass media through each phase of its evolution and have influenced the regulations and protections fitting for the times.

Application to Hot Topics in Media Technology Law Today

  1. De-platforming individuals — including former President Trump

Let’s apply these factors to the de-platforming of individuals on social media by Facebook and Twitter.  First, the barriers to entry for publishing elsewhere on the Internet (including on other social media platforms or on a blog, website, or online newsletter service) are extremely low.  The entry barrier is even lower for someone like former President Trump, who has been sought as an investor in and contributor to platform alternatives to Facebook and Twitter, and who has announced plans to create his own platform.  Second, no single platform holds the keys to the dissemination of any one individual’s message, given the many online alternatives out there and platforms’ inherently limited interaction with the users who generate content.

  2. Denying Infrastructure Services to Platforms like Parler

The analysis looks a little different when it comes to decisions by Internet infrastructure providers, such as Amazon Web Services (AWS), to withhold infrastructure from a particular party.  AWS suspended service to Parler, an alternative, conservative microblogging site to which many Trump supporters migrated and where some posted content encouraging violence following the 2020 election.

After the Capitol riots in early January 2021, AWS warned Parler to improve its moderation and stop violating AWS’ acceptable use policy prohibiting the “illegal, harmful, or offensive” use of AWS services.  After Parler failed to do so, AWS suspended services, citing the risk to public safety.  This decision became the subject of a federal lawsuit, Parler v. AWS, filed on January 11, 2021.

In that case, the federal court denied Parler’s request for a preliminary injunction, ruling that AWS was entitled to suspend services given the threat to public safety.  The court reasoned that forcing AWS to reinstate services before Parler could effectively moderate violent content would not serve the public interest, given the risk of further violence.

However, what if the claim had been that AWS refused to provide service to Parler solely because AWS deemed its conservative views offensive?  Parler tried to argue that AWS gave Twitter preferential treatment based on viewpoint, but the court did not find any facts to support that claim.  If the evidence had supported it, then AWS’s deprivation of critical Internet infrastructure, based solely on viewpoint, would have been far more troubling.  That is because Parler would face a huge barrier to developing its own technology to host its own site.  Moreover, there are few meaningful alternatives for powering a platform, making AWS a key gatekeeper to dissemination.

These considerations favor the application of “net neutrality” principles to the infrastructure service providers that power the platforms.  Net neutrality draws on the legal concept of treating certain service providers on the Internet as “common carriers,” akin to public utilities, which means they have a duty to provide service to any paying customer without discrimination.  Some people concerned about censorship by platforms want to see this concept applied to the platforms themselves.  However, given all the current, easily accessible alternatives for Internet speech (such as other platforms, blogs, websites, and newsletters), there is no need to impose net neutrality on platforms, unlike the providers of the infrastructure services that power them.

  3. Misinformation: Conspiracy Theories, Election Results, and More

The next question is how responsible platforms and other service providers should be for the dissemination of misinformation, such as QAnon conspiracy theories or false claims about election results or COVID treatments.  First, the barrier to entry for any one speaker on a topic, however misinformed his or her views, is generally very low.  A speaker has many other online avenues to deliver his or her message.

Second, the gatekeeping power of platforms and other service providers is relatively low, though it varies with their size.  As a starting point, it may be hard for platform providers to know from the outset that controversial or misinformed views are, in fact, deeply inaccurate.  Occasionally, controversial views do later turn out to be correct.  Remember that it was once considered heretical to believe that the earth revolved around the sun.  Sometimes the inaccuracy of information becomes evident only with the passage of time, once new information has accumulated.  Sometimes the inaccuracy can be found only in the nuances of the content.

In this light, larger tech platforms seem to be acting reasonably by tagging posts about controversial topics (such as elections or COVID treatments) with links to reliable information on those topics.  Another reasonable approach has been labeling posts to alert viewers that the content in question is disputed or inaccurate.

Notably, larger platforms have more resources than smaller platforms to invest in the tools and staff needed to monitor content.  Any standard of liability for platforms should consider the relationship between a platform’s size and resources, on the one hand, and its ability and opportunity to monitor content and serve as a meaningful gatekeeper for quality content, on the other.  Holding smaller platforms to the same standards as the largest platforms like Facebook would make it practically impossible for small or mid-sized platforms to survive, grow, and compete in the market, ultimately reducing the alternative venues available to speakers, the exact opposite of the ideal result.

In short, the bigger the platform, the more potential power it has to manage misinformation and the more we can expect of it in addressing the problem.  Conversely, the smaller the platform, the less might it has to throw at the problem, and our expectations should be scaled back accordingly.  As Congress endeavors to scale back the protections of Section 230 of the CDA, it may impose some new duty to prevent the spread of dangerous or harmful information (beyond the anti-sex trafficking obligations of SESTA/FOSTA).  If so, any new regulation should incorporate a sliding scale of expectations or obligations.

Conclusion

The second wave of mass media, spawned by the arrival of the Internet, dramatically lowered the barriers for speakers to reach a mass audience.  At the same time, it created an ecosystem of intermediaries with less gatekeeping power than traditional newspapers, book publishers, and radio or television stations.  Understanding these dynamics in historical context helps make sense of the evolving regulation of speech and offers a helpful roadmap for the direction that regulation and protections should take from here.


* Karen Kramer is a strategic leader, thought partner, and seasoned legal advisor for media and technology companies of all sizes.  At the forefront of digital media for more than 25 years, she has led Fortune 500 companies, like Yahoo, The Washington Post, and Tribune Media, in executing cutting-edge media initiatives, launched social media platforms such as Quora and Houzz into global markets, and provided prepublication review to numerous newspapers, TV stations and book publishers to manage the risks of publisher liability.