The First Amendment and Online Gun-Related Content
The First Amendment to the U.S. Constitution states that “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”
Following the 2011 attempted assassination of Congresswoman Gabrielle Giffords, there was much talk of numbers. The shooter used a Glock 19 handgun to fire 33 rounds in 15 to 20 seconds before pausing to reload. Some of the public and legislative discussion after the shooting accordingly shifted to banning magazines that can hold large numbers of rounds. Some gun owners responded to this talk by posting YouTube videos of themselves firing the same number of rounds in the same timeframe as the Giffords shooting, using smaller-capacity magazines. Their point was that banning large magazines would not have the desired effect of preventing future shooting attacks. Even before any gun control legislation could be passed, it was being undercut by gun owners exercising what they viewed as their First Amendment right to free speech.
A YouTube search for “rapid fire shooting” or “how to bump fire” produces thousands of videos. This raises the question: where is the line between free expression and public safety in the age of the Internet and social media? What is the First Amendment’s role in protecting online expression such as the content of gun owner-made YouTube videos? Does such content contribute to the marketplace of ideas or constitute political speech, two principles drawn from past court decisions about speech protected under the First Amendment? Or does such content cross the line into endangering public safety?
Over 100 years ago in Schenck v. United States (1919), Justice Oliver Wendell Holmes observed that while individuals should typically have the right to express themselves freely, this right is not absolute, particularly when such speech could put other social interests at risk. This, Holmes articulated, was the “clear and present danger” test. It was in his dissent in Abrams v. United States (1919) that Justice Holmes put forth the notion of a marketplace of ideas, asserting that “the best test of truth is the power of the thought to get itself accepted in the competition of the market” and that the government should restrict speech only when it “so imminently threaten[s] immediate interference with the lawful and pressing purposes of the law that an immediate check is required to save the country.” With these two cases, Justice Holmes set out parameters for when speech may be restricted and when it must be permitted. In a host of cases since, justices have weighed the balance between individual expression and public safety.
A landmark speech protection case is Brandenburg v. Ohio (1969), which involved a Ku Klux Klan organizer speaking at a rally in Ohio. Clarence Brandenburg was recorded making the types of political statements one would expect to hear at a Klan rally. He was convicted, but appealed his conviction, citing the First and Fourteenth Amendments to the U.S. Constitution. The Supreme Court agreed with Brandenburg and overturned his conviction, holding that a state cannot constitutionally prohibit an individual from advocating law-breaking unless that advocacy is directed to inciting, and likely to produce, imminent lawless action.
The Court’s points in Brandenburg are well taken. However, technological advances require revisiting pre-Internet court decisions. What is different about modern online content is the reach and speed with which speech and other content travel via social media like YouTube, Twitter and Facebook. As Leets (2001, p. 301) has written, fiery language “may not produce lawlessness in the average listener or viewer, but if even one or two out of thousands of listeners are provoked to act, can society regard this as imminent violence?” The 2016 ‘Pizzagate’ shooting incident comes to mind. As a recent Time magazine story noted, “Hate speech targeted at minorities in the northeastern Indian state of Assam is spreading almost unabated through Facebook at the same time as the Indian government is stripping nearly 2 million people there of citizenship.” Facebook-spread hate speech likewise facilitated military-led violence, including murder, against thousands of Rohingya Muslims in Myanmar in 2017. In the modern era, online speech and other content may be much more closely connected to subsequent imminent action such as violence.
I’m the author of the book Guns on the Internet, which among other things explores online gun owner subculture on platforms like Facebook, YouTube and online forums. Gun owner subculture was also the focus of an article I wrote for The Conversation. In chapter 6 of Guns on the Internet, I present a framework for assessing whether online gun-related content (e.g., gun owner-made YouTube videos) should be protected or restricted, per prior court decisions interpreting the First Amendment. Specifically, I ask:
- Do the ideas communicated constitute political speech, such as support for the Second Amendment or advocating a limited role for government in individuals’ (e.g., gun owners’) lives?
- Do the ideas communicated contribute to the marketplace of ideas? Will a higher truth or understanding be realized through reading a posting or watching a gun owner’s video, for instance?
- Do the ideas communicated encourage immediate illegal activity such as violence? Is violence or other crime likely to occur as a result of reading the content or viewing the video?
If online content fits the first or second criterion, then I would suggest that it should be protected. This comes with the caveat that the First Amendment restricts government – not company – censorship of individuals’ speech. That said, social media platforms like YouTube and Facebook have come to function as a public market square of sorts, with anyone and everyone using social media to communicate. It therefore seems appropriate to apply the tenets of the First Amendment to social media content.
However, should online content fall under the third criterion – advocating illegal action, particularly violence – I would argue that it should not be protected. A number of past cases have dealt with restricting content such as online threats (e.g., John Andrew Collins Holcomb v. Commonwealth of Virginia [2011]; United States v. Jeffries [2012]; United States v. Voneida [2009]; United States v. Clemens [2013]; Planned Parenthood v. Amer. Coalition of Life [2002]; U.S. v. Mustafa [2011]). The foci of the original cases differ (e.g., anger at a judge overseeing a custody battle; posting a desire to commit a copycat mass shooting right after the Virginia Tech shooting; emailing threats to defendants in a lawsuit; hosting a website that included the names, pictures and addresses of doctors who provided abortions; running a website that offered training material for terrorists, such as how to modify guns so they could launch grenades). A common thread running through the court decisions, however, was that the content was specific enough that a reasonable person could construe it as a potential imminent threat.
In my own searches of YouTube for gun-related content, I found far more videos that would fall under the first and/or second criteria and, in my opinion, should be protected. Most videos were seemingly made by law-abiding, gun-owning YouTubers with a point to make, or something gun-related (and mundane) to demonstrate (e.g., the best way to clean and lubricate a such-and-such handgun). A small number of videos presented content that, to my eye, seemed potentially but not imminently threatening. Viewers of those videos appeared to have reactions similar to mine, posting comments like “the owner of these guns is more prone to cause a problem than the actual crook that’s out there” and “I’d be a little more careful. It makes gun enthusiasts look bad.” Additionally, because social media companies like YouTube, Facebook and Twitter are not the government, they can move more quickly and be more restrictive about what content gets posted. YouTube, for example, publishes community guidelines on content that can be restricted, including sections on harmful or dangerous content and violent or graphic content. Since I began watching and cataloging YouTube gun videos in 2011, I’ve found that some have since been removed, and YouTube began restricting some gun-related videos in response to the October 1, 2017 Las Vegas mass shooting tragedy.
The Honorable Barry Schaller bluntly asked, “Is the First Amendment dead, or can it adapt to navigate the Internet highway?” I’m happy to say that the First Amendment is very much alive and viable in the Internet age. Still, new technologies beget new questions about the legal principles and rights that guide our lives. As “The Walking Dead” character Rick Grimes once observed, “this is how we live now.” Individuals around the world now live their lives partly on social media. The legal system will continue to grapple with how best to apply the First Amendment to online speech, and how best to balance freedom of expression with public safety.