A Project of National Affairs

Why Speech Platforms Can Never Escape Politics

Jon Askonas and Ari Schulman


It happened like a coordinated strike: In August 2018, in the span of 24 hours, a slate of tech companies banned Alex Jones from their platforms—first Apple’s app store, then Facebook and Spotify.[i] Twitter and PayPal joined the following month.[ii] The take cycle that followed was predictable: The Left argued that tech companies must limit misinformation and hate speech on their online platforms. The Right argued that banning Jones violated free speech and that the platforms do not wield their censorship power fairly.

How to reconcile these two positions? At the risk of oversimplification, let us attempt to distill the most serious versions of the two broad views in the debate over online speech policing:

1.  The defense of free speech and the marketplace of ideas: Stated negatively, Silicon Valley has demonstrated itself incapable of deciding in a neutral way what kind of speech is allowable on the platforms. Their staff are monolithically progressive: In the 2018 midterms, the share of employees’ political donations that went to Democrats was 95 percent at Facebook, 96 percent at Google/Alphabet, and 99 percent at Twitter.[iii] Yet these companies employ censorship tools that grant them enormous political power over what information the entire world sees. And they claim to do it apolitically — requiring us to assume them capable of cognitive feats that are virtually nowhere in evidence in the rest of our political culture.

Stated positively, a foundational principle of liberal democracies is that all ideas should be debated openly, for the best way to defeat bad ideas is with better ones. Seen either way, tech companies should not have the power to police online speech, except perhaps for widely agreed upon violations like spam and pornography.

2.  Concerns over the decay of discourse and the informational environment: This view focuses on what is required for a functional discourse to even be possible. It sees having reliable information as a necessary precondition of debate rather than as one of the outcomes it aims to achieve. It argues that better ideas cannot win the day on their merits when harassment, manipulation, disinformation, and deceit are the main instruments of debate. It thus urges that public discourse cannot be saved until online speech platforms crack down more heavily on Russian troll farms, conspiracy theorists, bullies, and even some Western (mostly right-wing) news outlets.

In the near term, the free-speech side of the debate must prevail. No one can be entrusted with the power to police speech — least of all the leaders of Twitter, Facebook, and Alphabet, who have proved so foolish in their understanding of the political power they wield. But it should also be plain to anyone who has observed the state of discourse on social media that to take a minimalist approach in this moment is to shrug at an intractable national problem that is accelerating the erosion of the political process and civic life. Consider the matter this way: Imagine a scenario in which, five to ten years from now, one or the other of the two major options on the table has largely prevailed. Can we honestly imagine that our sense that online speech platforms are damaging the country will have abated?

The argument of this essay is that each view bears an important partial truth, but that the approaches they advocate do not hold much of a chance of preserving truly free speech and open debate on the Internet over the long term. For each lives in its own sort of denial about the nature of speech moderation, what it takes to make moderation legitimate, and whether that is possible in a virtual town square that spans the entire globe.

Our aim, then, is to show a way past this impasse. The two views are not quite so far apart as they seem, and a resolution to the online speech debate will require understanding what each has to contribute to the other. Paradoxically, reducing the political power that Mark Zuckerberg, Jack Dorsey, and Sundar Pichai wield over our discourse today will require accepting that speech platforms cannot be strictly apolitical.

What Are the Speech Platforms?

The problem here is much broader than Alex Jones, of course. Few serious observers can consider what we might call the “public square” platforms—particularly Twitter, Facebook, YouTube, and the public square’s library, Google—a boon to democracy. Nor are they a flourishing intellectual marketplace. Although it is tempting to shrug at their problems by comparing them to the heated partisan newspapers of the early American republic, it is difficult to see the debate on today’s online platforms as offering anything like the productive factionalism of that era.

The speech platforms are rather closer to a form of mass voluntary intellectual pornography: a marketplace that lauds the basest instincts, incentivizes snark and outrage, brings us to revel in the savage burn. Even the vocally hostile consensus on the Left and in the mainstream press understates the problem by urging the platforms to throw more bodies and machine learning at it — demanding that Jack Dorsey and Mark Zuckerberg man up and flip the “fix democracy” switches they must have squirreled away in their desk drawers.

The mistake here is twofold. The first mistake is that they offer brutal critiques of the leadership of the platforms and then urge giving that leadership much more power. The second is that they approach the problem as one of side effects of an otherwise sound product. For all their anger, the tech-backlash consensus still views the platforms through the old consumer-advocacy framework, demanding that they be fixed with new “smart” community tools and machine learning, as if they were an early Model T rather than a new soma.

But the problems of the speech platforms are not ones of bad actors at the fringes. Rather, they are baked into the incentive structures of the platforms themselves, through the kinds of speech they reward and penalize. The platforms are rotten to the core, inducing us all to become noxious versions of ourselves.

A notable exception to free-speech criticism of the Alex Jones ban came from writer Jonathan V. Last, who argued that conservatives have long emphasized the need for communities to enforce norms.[iv] If the platforms can’t ban someone like Jones, he argued — one of the most nakedly bad-faith actors in American life today — have we simply abandoned the idea of communal norms entirely? Strikingly, Last counters the conservative view (defend the marketplace of ideas) with a conservative version of a typically progressive view (enforce boundaries on speech).

Another way to state the predicament, then, is to ask whether any of the platforms with intractable speech moderation problems—particularly Twitter, Facebook, YouTube, and Google—really are communities. Despite the language of their executives — Mark Zuckerberg speaks regularly of “building global community”[v]— it is difficult to find any sense in which this is so. A globe cannot be a community. A community has some shared understanding of the boundaries of acceptable speech, and some legitimacy to enforce them. But it is impossible to imagine either of these things ever being the case on today’s online speech platforms. Yet it is also true that to be banned from these platforms comes close to being shut out of public life.

It is because a universal public square cannot be a community that the parameters of the online speech debate are stuck. The conflict between the “marketplace of ideas” framework and the “communal norms” framework seems irresolvable because it is irresolvable. At least, this is so for the platforms we have now, which are an experiment with no obvious historical precedent: forums for discourse and information discovery that are not communities, or at best are failed communities. It is only in a forum that is a community that these two viewpoints can be reconciled, even mutually sustaining.

The failure to understand how these viewpoints might complement each other has led to the bizarre, dead-end aspirations of algorithmic governance—the idea that an engine fine-tuned to shred civic norms of discourse can be fixed with AI. This model, we will argue, must be replaced by an understanding of the inevitably political nature of speech communities and the enforcement of their standards.

From Governance to Politics

At some level, platform leaders seem to grasp that they are in an impossible position. If they take a minimalist approach to regulating speech, then hatred, misinformation, and rage will continue to rise, and they’ll face even more pressure from powerful political actors, activists, and their own employees to crack down. If they take a more robust stand for (what they deem) objective truth and credible journalism, they will be accused of censorship and will face undecidable questions about where the line is — and will feel even more pressure from powerful political actors, activists, and their own employees about where to draw it.

We can see how interminable this problem is by recognizing that both of these alternatives are happening right now. To run the gauntlet, the tech companies have focused on process: laying out new policies, creating new appeals rules and internal court-like bodies (Facebook’s highest moderation body is called its “Supreme Court”[vi]), and hiring potentially tens of thousands more foot soldiers for their moderation battles.[vii] Making increasingly fine-grained distinctions and fewer mistakes, tech companies seem to believe, will deflate the criticisms they face. But they are making a fundamental mistake — for they don’t understand what legitimacy is, and why they have none.

Max Weber introduced a three-fold typology of legitimacy, the sentiments that get people to acquiesce to authority, especially regarding rules or commands they may dislike or disagree with.[viii] For most of human history, the most common kinds of authority have been traditional or charismatic. Traditional authority appeals to the “eternal yesterday” of received ways: we follow this rule because we have always done so (and to breach it is to flirt with disaster). Charismatic authority appeals to the special qualities of a particular individual that inspire respect, loyalty, or devotion: we do this because I say so. Modern life has seen the rise of a third kind of appeal, to the legal validity or objective rationality of some process or rule. A policy is valid because the process behind generating it was valid, whether it belongs to an electoral system or a corporate bureaucracy.

When people talk about “governance,” they’re talking about this kind of legitimacy, attained by adopting “best practices” that maximally fulfill legal requirements and achieve policy aims, including through processes that incorporate “stakeholders.” By improving governance, by systematically addressing political concerns through policies that enshrine equal, fair, and neutral treatment, the platforms hope to shore up their legitimacy. (Facebook even refers to this process specifically as “content governance.”[ix]) Writing increasingly specific policies, hiring more moderators, developing appeals processes, automating speech moderation with machine learning, and outsourcing fact-checking decisions to “trusted sources” are all attempts to shore up legitimacy by bolstering the validity of a process. “Because that’s the policy” holds much more legitimacy for us moderns than “Because that’s the way we’ve always done it” or “Because I said so.”

Or at least that’s the way it seems. The technocratic approach to administrative policy seems to allow companies to avoid the stagnation of appeals to tradition—think of the fusty Craigslist, barely changed in design for a decade or two—without the responsibility of appeals to charisma. But appeals to legalistic validity or “governance” assume that some underlying principles are agreed to; that the policies are neutral in both fact and perception; and that those responsible for carrying them out will pursue them objectively. Where any of these slip—or appear to slip—the whole edifice violently collapses. In an advanced capitalist society, citizens come to understand that almost any decision motivated by politics or profit can be given a “rational” spin. Without trust in the decision-maker, trust in the process breaks down, and the only important question becomes, “Who decides?”

Thus, an ever more elaborate and rules-based approach doesn’t build trust. The people who most need to be convinced will see a game of Three Card Monte, in which blame and responsibility are displaced to different parts of the same irredeemably flawed system, designed not for real fairness and neutrality but only for their appearance. When hundreds of millions of people from across the entire planet come together into a single platform, achieving broad agreement on what kind of content should be removed is simply impossible, and for a great many members, examples of what seems to be unfair removal or censorship will never be far off, however many appellate courts Facebook adds to its system. That the response from platforms has been the refrain “We’ll do better”[x] shows that they have no idea what the problem is.

The mistake the platforms are making, fueled by technocratic visions and corporate legal consultants, is to take a governance approach to a political problem. There actually are content standards for which the governance approach easily secures legitimacy: child pornography, jihadism, violent harassment, and spam come to mind. No one seriously contests the validity of aggressively trying to remove these from the platforms, and no one contests the legitimacy of the platforms themselves doing the removal. The trouble comes when platforms attempt to apply the same kind of solution and processes to speech that is broadly political — where what is normative is precisely what is contested. There can be no avoiding the question of who decides, who belongs, and who speaks behind the veil of an ever more complex algorithmic moderation system.

The solution to this legitimacy crisis is more legitimacy, not more neutrality. The tech companies must contend explicitly with the political.

Free Speech and Its Background

When we think of the standard defenses of free speech, we think especially of freedom from the censorious powers of governments and mobs. We imagine the bold open dialogue found in an academic journal, a classroom seminar, or on a debate stage. We invoke the ideas of John Stuart Mill and Robert Nozick and the model of Socrates. We hear phrases like “Let bad ideas be chased with good” and “Sir, let the claim rise or fall on its merits.”

This essay takes it as obvious that these ideals of open inquiry are core to the American project, must be defended, and are threatened by the current state of the speech platforms, and by tech executives’ ever more elaborate, opaque, and desperate technical schemes to police their boundaries of discourse. But the question at hand is whether these models of free speech can really be applied to the platforms that exist now, and thus whether the remedy they imply—pressuring or regulating the platforms into a more hands-off approach—is sensible over the long term.

Consider the ideal of a classroom in which a professor insists that no holds be barred — that all ideas, no matter how controversial, be engaged on the merits. The purpose of this model is clear enough: A robust intellectual culture must engage uncomfortable, offensive, even repugnant ideas. It must respect the rights of individuals — for its own self-respect, so that it can build the intellectual resources to rebut wrong ideas, and, most fundamentally, so that it can reach the truth. A mature intellectual forum ought to be able to openly engage the ideas of, say, Mein Kampf or the defenses of slavery in the antebellum South.

But if this ideal of freedom may be absolute within its domain, this is possible only because the domain is narrow, tightly limited by rigorous conditions for entering it. It is easy for a student in a University of Chicago classroom to take for granted the free intellectual play of arguing a heretical idea, for in the classroom she need not be mindful of the brutally exacting selection process she went through to get there.

The selection here is not merely for rare excellence of intellect and character; it also aims to transform the selectee by imparting an understanding that her freedom has a purpose: the attainment of wisdom and truth, the satisfactions of intellectual pursuit. When the selection process functions well, this understanding may become so woven into the fabric of the classroom that it is taken for granted, fading from view. When the selection process fails, appeals to the ideals of open inquiry may become hollow, even farcical. Even the Athenian Agora could not have sustained the presence of a critical mass of trolls.

Similar conditions exist for the other models we might call upon. We needn’t enumerate the aspects of the shared cultural background of the members of Plato’s Academy, or the tacit gentlemanly mores of discourse within which John Stuart Mill defended the marketplace of ideas, to appreciate their selective and formative significance.

Does a shared background understanding of the moral purpose of open debate exist for the participants of today’s speech platforms, our global town squares? The question is obviously rhetorical. Dangerous though the outrage mobs, cancel culture, and woke-scolds on the platforms are, we might take them seriously but not literally, recognizing them as expressions of a desire for solidarity, for the restoration of the background conditions of shared communal purpose under which real discourse can proceed.

Community and Politics

Any kind of legitimacy requires communal norms. Ironically, despite the language of the platforms — all that talk of community standards and norms — it is precisely in the ways they have failed to form coherent communities that they have been unable to find the legitimacy to enforce norms of speech. We might thus say that the speech platforms are undergoing a constitutional crisis.

Community has been a sacred value since the Internet’s earliest days, and interpersonal and group communications have been essential to every step of the Internet’s evolution. In the beginning, the Internet itself could be a community because your ability to access it at all meant you were probably educated and nerdy. As it grew, the Internet fractured into subcultures, forums, and chats, each embedded in its own tight-knit network.

At first, the global speech platforms seemed, and believed themselves to be, the apotheosis of this quest for Internet community: a Universal Discourse and Universal Network in the service of a Universal Community.[xi] They provided, for the first time, a way of organizing both one’s “Internet community” and the online reflection of one’s real-world relationships.

And yet, precisely as the platforms became more universal, they became more destructive of community. Communal inclusion relies on exclusion: some notion of who is and is not a member of the group, and some ways of enforcing that boundary. On the early Internet, this consisted of having many forums with different interests and aims, moderators (known in forum lingo as “mods”) who could enforce forum norms, and administrators (“admins”) who could make more fundamental changes to the structure of the forum and oversee the moderators, but who did not routinely make moderation decisions.

But this moderation scheme presented a number of barriers to forums growing into universal platforms. Mods were mostly volunteers, and so their work could not scale quickly. Groups formed around particular interests — so how could you recruit “mods” for a forum that was universal, a place where the shared interest was having a pulse?

Communal forums stood in the way of a personalized experience, too. In its early years, much of life on Facebook was centered on Groups, a feature that allows users to create smaller shared-interest forums within the platform. But Groups have become less and less central to the Facebook experience. This was by design, and the first major step toward it was Facebook’s launch of the News Feed as the central hub of the platform experience. Rather than rely on shoddy volunteer labor siloed off by interest area, the platforms themselves would act as the mediators of community, pulling all your interests and connections into one feed, a Forum of You.

This design choice fatefully merged the roles of admin and mod, of infrastructure-provider and content-police. Where previously groups and individuals controlled what was visible on their specific pages, the feed instead made platforms responsible for how posts were aggregated, and created the possibility that content would roam freely across the entire network, including in unfriendly ways.

Conservative or classical-liberal critics of online censorship often make gestures toward the “marketplace of ideas” without fully considering what is entailed in that metaphor. A marketplace is a social institution that requires underlying mechanisms uniting buyers and sellers. It requires a common language or currency. It functions best when buyers and sellers are motivated, whatever else their preferences, by a genuine desire to transact, rather than to be seen lounging about the market or making baseless offers or displaying goods that they have no intention to sell.

Moreover, the metaphor breaks down entirely in a post-scarcity, algorithmically mediated world, where there is no obvious relationship between the opinions a person puts forth and where that opinion shows up, often in a mechanically distorted way. The marketplace of ideas assumes a relatively even distribution of megaphones, or a random distribution of their power. Absent norms of and structures for productive exchange, a clear reason for why we’re all talking to each other, “meaningful communication” breaks down into a brute contest for power. Internet discourse, like the market, must be embedded in a community.

If we are to preserve freedom of online speech in the fullest sense—both legitimate freedom from the censorship whims of massive central powers, and genuine freedom for robust exchange and intellectual generation—the global town square must die. Our age is marked by a return to our given condition: tribalism. So be it. Rather than hoping for the restoration of a universalized intellectual culture, we would do better to ratify and manage the reversion to separate communities, to build institutions that encourage tribalism’s more fruitful expressions. Rather than shoving all our debates into a single, hellish town square, let each town have its own, and let us work to make each a place of fruitful exchange.

Till Mods Have Faces

With a few exceptions, by far the most important component of successful speech communities is that their moderators have faces. A core feature of bulletin boards, comment threads on blogs, and publications is that the boundaries of acceptable speech are enforced not by tech executives, the farcical Facebook Supreme Court,[xii] or distant buildings filled with beleaguered workers deciding in one moment whether a post about Joe Biden uses unreliable sources and in the next watching a beheading video — but rather by identified members of the community itself, moderators with quasi-dictatorial powers over their limited fiefdom.

In the online forums of yore, rather than appealing to neutral standards or lawlike processes, the moderator simply decided using his (in those days, typically his, not her) best judgment as to when a post or a user crossed the line. The standard he appealed to was whatever he grasped the community’s to be. He could bolster his authority by explaining his decisions or not, but it was checked by the fact that his decisions were public, his name attached to them, users readily knew who to complain to about an unjust decision, and if he lost the confidence of the forum users, they had the power to successfully agitate for his replacement.

Here, then, was a real, live modern example of charismatic authority — a moderator whose authority ultimately derives not from the soundness of his policies but from the fact that he is the one entrusted by the community to make good decisions on its behalf.

This model, though more characteristic of the early Internet, is still alive and well. One example today can be found on Hacker News, a forum for sharing and discussing news related to computer programming and venture funding. A 2019 article in The New Yorker on Hacker News’s moderation system offered the telling subtitle, “Can a human touch make Silicon Valley’s biggest discussion forum a more thoughtful place?”[xiii] On a comment thread on Hacker News about the article, most users sang the praises of the forum’s moderation style:[xiv]

“When I first came to HN from reddit, I posted some jokey nonsense comment. A moderator gently scolded me. I really disliked it. Now, I see the wisdom of keeping the jokes to a minimum and focusing in thoughtful discussion. So thank you, mods.”

“I took your moderating personally when I first joined here, got banned for some comments I didn’t think were that bad, … [then] realized I should try harder in my comments and just push my way back in by writing things that contribute better to HN. It would have been easy to start a new account, but I liked warring with [moderator Daniel Gackle, who goes by the user name] dang at the time…. It spurred me to just try harder at contributing better. Eventually, I was just randomly unbanned. Since then, i’ve come [to] appreciate HN’s moderation style. There’s really not many places on the internet with such lax account rules, yet mostly good conversation.”

Several points stand out about these comments and about Hacker News’s moderation broadly:

1.  All of Hacker News is moderated by just two men, whose names are well known to the users: Gackle and Scott Bell.

2.  Gackle and Bell’s authority is accepted as in some sense ultimately arbitrary. Certain of their decisions must be accepted because that’s simply the way they say it is.

3.  Part of the mods’ role is to police comments that not only go against the forum’s purpose but that seem to be made with bad intentions, or even bad faith.

4.  The mods’ legitimacy is robust enough that they can police intentions.

5.  The charismatic nature of the mods is such that not only are their decisions seen as legitimate, but they are able to guide and motivate users back toward the forum’s core purpose.

6.  At least from the perspective of one user, good moderation results not only in good conversation but in lax rules.

These are the sorts of ways in which speech moderation online must embrace the political. The main question we must ask about a speech platform is not what the standards for moderation are, but who moderates and what the moderator’s relationship to the community is. The communities that endure and successfully promote open debate are the ones that focus on the latter two questions — that understand that the standard of moderation is of secondary importance to the legitimacy of the moderator.

The legitimacy of moderation on Twitter, Facebook, and Google today could hardly be further from this model. The tech companies seem desperate at every turn to demonstrate that they are not political, yet they do so by trying to conduct a high-wire balancing act of satisfying — or really, offending — the Left and the Right equally. This is a fruitless, impossible goal, and like a pack of bumbling James Comey copycats, the decisions the companies make to appear nonpartisan actually give them the appearance of being overly attuned to partisan politics, of cynically playing to an imaginary center.

If our speech platforms cannot escape being political, our aim must be to make them more legitimate. Or, if that seems impossible, we must begin to think about building ones that can be legitimate.

Features of Viable Speech Platforms

As intractable as the speech problem seems to be on the platforms, we should recognize that there are myriad examples from recent history of the same problems being, by and large, successfully addressed. Many of these examples come from earlier eras of the Internet, but there are many aside from Hacker News that are still active now. They include the world of newspapers and magazines that dates back long before the Internet; threaded bulletin boards like Usenet and phpBB; personal journaling platforms like LiveJournal and Xanga of the late 1990s and early 2000s; the blogosphere of the Bush and early Obama years; and Reddit today (although it has recently been following Twitter and Facebook by moving toward more aggressive, centralized moderation[xv]).

Most of these forums had moderators with faces, although there are other instances (particularly in academia) where standards are enforced by other means. What follows are brief treatments of some of the other features by which these forums furnished genuine communities of productive discourse, and with them the legitimate enforcement of discourse norms, without the kinds of endless controversies we see in social media today.

Limitations of scale: The simplest explanation for why the speech problem is intractable on the platforms as they exist now is that they have too many users for general consensus on moderation norms to be possible. As Antón Barba-Kay argues, legitimate moderation decisions require some context by which the decisions can be judged reasonable or not.[xvi] But such a context cannot be secured for a single forum that spans the entire globe. By contrast, all the examples cited above have inherent structural features that place significant limitations on the number of speakers in a particular forum (if not on the platform as a whole).

Barriers to entry: One way to avoid creating a platform crushed by questions about how to deal with bad actors is to raise the cost of entry. Barriers to entry may take the form of geographic localization, interest segmentation, high cost of discovery, or strong gatekeeping.

If you’re reading a local newspaper, you likely share a great deal of background cultural context with most other readers of the paper. National newspapers like the New York Times still have among the tightest gatekeeping in the world, at least in terms of who gets to speak in them. LiveJournal, blogs of yore, and magazines in the pre-Internet era had high costs of discovery: It wasn’t easy to even find out that they existed, and those who succeeded in discovering them were already likely to have a great deal in common with each other. All these examples also show some measure of segmentation by interest, a feature we can see especially clearly today in Hacker News and Reddit.

It is difficult to name a single speech community lacking meaningful barriers to entry prior to Twitter, Facebook, Instagram, and YouTube. It is surely not coincidental that the challenges they have faced seem so unamenable to approaches that worked in previous speech communities.

Robust right of exit: The mantra “you have a right to speech but not to an audience,” a favorite of outrage mobs and apologists for censorship, nonetheless bears some truth. But it is only sensible to apply in a decentralized speech environment — one in which other audiences are meaningfully available. The banning of Alex Jones and Milo Yiannopoulos from all the major online platforms generated such controversy because they had no equivalent choices available to them. They were not ejected from one desirable community among others; they were to a real extent ejected from public life, after what appeared to be behind-the-scenes coordination by tech executives. This means that the platforms today, though private companies, possess something resembling government-like censorship powers.

A crucial element to lowering the stakes of any particular moderation decision is the knowledge that the user has the genuine option to go to another community or start his own. The same applies to moderators: The ultimate check on a moderator’s power is that if most users start to believe he is using it poorly, users can either protest until a new moderator is selected, or they can leave the forum in favor of a better one. We can see what the absence of a robust right of exit looks like right now on Twitter and Facebook, where most users seem to feel trapped within the platforms’ death spirals, yet nobody has a viable alternative to switch to.

Response calibration: Imagine the options available to you when you encounter a tweet you deem bad. The marketplace-of-ideas framework cannot articulate the difference between responding with a quote-tweet aimed to broadcast shame to a wide audience versus responding with a message sent only to the author.

A core structural problem with Twitter is that the incentives all push toward the shaming quote-tweet, with its viral potential. In a universal public square, when an outrage storm is brewing, I face the following options: I may join the mob (potentially extracting social value for myself); I may tout my refusal to join (taking a principled stand while also deriving some social value, all without doing much to actually slow the mob); or I may quietly decline to participate (deriving no value while not stopping the mob).

Both the size and structure of the current platforms mean that meaningful mass calibration of negative feedback is rarely possible. For the most part, there is no good analogue on Twitter to Daniel Gackle scolding or spiking a post: There is only the inscrutable algorithm, the faceless mod, the bloodthirsty mob. Responding to objectionable ideas is a core function of a speech community. And yes, this includes scorning and in some cases disallowing speech that too flagrantly violates communal norms. Shaming per se is not the problem. The problem with speech on the platforms may be not that we are too sensitive but that we are too insensitive — that all corrective action is already on its way to becoming an outrage mob, a cancellation.

In a context in which the stakes feel lower, corrective action rarely needs to be so severe. And even when one is dogpiled by an entire subreddit, the risk of becoming the next Justine Sacco — of being canceled for a minor offense — is much smaller.

The fullest forms of calibration involve virtues of prudence, lightness, and mentorship. These cannot be programmed into a platform, but they can be cultivated by a community, and a platform can encourage them.

Neutral engagement affordances: The dominant social media websites have built-in mechanisms only for agreement: the heart and like buttons, the retweet and reshare. The only meaningful way to express disagreement becomes direct negative engagement. Because disagreement is harder to express than agreement, it also signals greater opprobrium than it otherwise would. The absence of built-in mechanics for expressing disagreement is a structural force that actually raises the intensity of conflict on the platform.

Contrast this with simple five-star rating or up-and-down voting systems like those found on Amazon, Yelp, and Reddit. Reddit’s comment ranking system lowers the conflict involved in identifying unfruitful content by making it harder to find. Thus the platform also allows for better distinctions between unorthodox but challenging ideas, which can be met with replies, versus simply inflammatory posts, which can be quickly filtered out. In other words, it permits better discrimination between worthy interlocutors and trolls. Instead of actually giving trolls more attention, it allows them to be treated as closer to what they are: junk merchants.

Incentives for productive speech: A speech community is likely to produce whatever kind of discourse it incentivizes. This means that users should have formal or informal incentives to produce speech that is recognized by other users as worthy.

A scholarly culture that reads and discusses enduringly great works incentivizes the production of such works. A software forum in which the most thoughtful replies are rewarded will incentivize the further production of thoughtful replies. But a social media platform in which the highest reward is to go viral will simply incentivize whatever content produces immediate pleasure or hate.

A myriad of choices are available to platforms about which kind of speech they reward. A publication will make these kinds of choices in designing its website: Do we highlight the newest content, the worthiest and most enduring legacy content, or a combination of the two? Many existing platforms handle this issue by having enduring reputational rewards for users, often called karma — Reddit and Hacker News provide two examples.

The Polar Night of Icy Darkness

The argument offered in this essay aims to describe the conditions that must be brought about if the online speech problem is to become tractable. The question for policy over the longer term is whether and how to encourage digital platforms that will meet these conditions—whether by pushing today’s players in that direction, or allowing new ones to emerge. What we must seek is platforms that enable productive political conflict within them rather than about them. Crucial though they are, the debates raging now over Section 230, antitrust, and whether Facebook and Twitter should set their censorship dials to 1 or 10 do not seem poised to address the fundamental challenge of fostering genuinely free speech online.

We take as obvious the danger of the Left’s hunger to give tech companies ever more power to decide who gets to speak and who doesn’t, which claims are permissible and which not. Yet the choruses on the Right—either to take a laissez-faire approach to speech moderation, or to make platforms more accountable for moderation decisions by regulating them as publishers—pose their own hazards. The trouble is that all these approaches, in separate ways, still presume the continued existence of the Universal Town Square, and either leave untouched or actually strengthen the incentives for engineers to turn the platforms into the architects of our informational environments.

Let us return then to a question posed earlier: Under any of the regulatory approaches now on the table, can we imagine a future ten years from now in which the state of discourse on Twitter, Facebook, and YouTube has not become either more hellish or more unfree? If we fail to make our focus the creation of human-scale political speech online, what results will not be some long, uneasy truce. Rather, it may be the rise of a digital cousin of what Weber calls the “polar night of icy darkness” — the final victory of the algorithmic over the human.

Many tech employees are already beginning to see the hyper-political Internet as an engineering problem with an obvious solution. The mistake of the 1.0 platforms was to optimize for engagement—likes, clicks, and shares. This was a successful short-term growth strategy, but at the long-term cost of sustainability. For engagement includes not only joy but rage, not only mirth but sadness. Incentivizing these things creates hellishness, driving people to disengage, to become disenchanted with the platforms and leave.

Better, then, to optimize for pleasurable engagement: use machine learning to drive hyper-personalized content that above all makes people happy. Abandon the search for a Universal Community and embrace the mere exchange of dopamine for data, attention for ad revenue. Create a platform where contentious virality is nearly impossible and the key to success is fostering long-term followership and engagement, and generating algorithm-preferred content. Ensure that political content, to be successful, at least has to be pleasurable. Platforms like TikTok and Instagram already reflect this next generation of engagement architecture. YouTube and others could join them with only a few tweaks to their underlying architecture.

In this model, users surrender agency to the algorithm. In exchange for the promise of maximally fulfilling their desires — whether for funny cat videos or makeup tutorials — it also monitors, assesses, and modulates all content first, without accountability or visibility. This architecture allows platforms far more tailored, precise, insidious, and invisible censorship and manipulation, placing not human authority but algorithmic judgment at the center of their bid for legitimacy. When Jack Dorsey locks the New York Post’s Twitter account, it is an outrage.[xvii] If TikTok were to, say, invisibly signal-boost pro-Biden content, it would just be business as usual.

We can just now glimpse this future; in some sense we all seem to be pushing each other toward it. It holds possibilities for manipulation and willingly chosen oppression darker still than those that plague us today. It is not enough that Mark Zuckerberg and Jack Dorsey do better, and indeed it may be foolish to ask them to. The creation of an online political life that can be carried out at a human scale must begin elsewhere.


[i] Schneider, Avie. Twitter Bans Alex Jones And InfoWars; Cites Abusive Behavior. NPR, September 6, 2018. https://www.npr.org/2018/09/06/645352618/twitter-bans-alex-jones-and-infowars-cites-abusive-behavior.

[ii] Fung, Brian. PayPal bans Alex Jones, saying Infowars ‘promoted hate or discriminatory intolerance’. Washington Post, September 21, 2018. https://www.washingtonpost.com/technology/2018/09/21/paypal-bans-alex-jones-saying-infowars-promoted-hate-or-discriminatory-intolerance.

[iii] Molla, Rani. Tech employees are much more liberal than their employers — at least as far as the candidates they support. Vox, October 31, 2018. https://www.vox.com/2018/10/31/18039528/tech-employees-politics-liberal-employers-candidates.

[iv] Last, Jonathan. The Case for Banning Alex Jones. The Weekly Standard, August 8, 2018. https://www.washingtonexaminer.com/weekly-standard/facebook-youtube-and-apple-are-right-to-ban-alex-jones-and-infowars.

[v] Zuckerberg, Mark. Building Global Community. Facebook, February 16, 2017. https://www.facebook.com/notes/mark-zuckerberg/building-global-community/10154544292806634/.

[vi] Kelion, Leo. Facebook ‘Supreme Court’ to begin work before US Presidential vote. BBC News, September 24, 2020. https://www.bbc.com/news/technology-54278788.

[vii] Jee, Charlotte. Facebook needs 30,000 of its own content moderators, says a new report. MIT Technology Review, June 8, 2020. https://www.technologyreview.com/2020/06/08/1002894/facebook-needs-30000-of-its-own-content-moderators-says-a-new-report/.

[viii] Weber, Max. Politics as a Vocation. https://web.archive.org/web/20130319092642/http://anthropos-lab.net/wp/wp-content/uploads/2011/12/Weber-Politics-as-a-Vocation.pdf.

[ix] Clegg, Nick. Welcoming the Oversight Board. Facebook, May 6, 2020. https://about.fb.com/news/2020/05/welcoming-the-oversight-board.

[x] Tufekci, Zeynep. Why Zuckerberg’s 14-Year Apology Tour Hasn’t Fixed Facebook. Wired, April 6, 2018. https://www.wired.com/story/why-zuckerberg-15-year-apology-tour-hasnt-fixed-facebook.

[xi] Askonas, Jon. How Tech Utopia Fostered Tyranny. The New Atlantis, Winter 2019. https://www.thenewatlantis.com/publications/how-tech-utopia-fostered-tyranny.

[xii] Ingram, David. Facebook names 20 people to its ‘Supreme Court’ for content moderation. NBC News, May 6, 2020. https://www.nbcnews.com/tech/tech-news/facebook-names-20-people-its-supreme-court-content-moderation-n1201181.

[xiii] Wiener, Anna. The Lonely Work of Moderating Hacker News. The New Yorker, August 8, 2019. https://www.newyorker.com/news/letter-from-silicon-valley/the-lonely-work-of-moderating-hacker-news.

[xiv] The Lonely Work of Moderating Hacker News (2019), https://news.ycombinator.com/item?id=25048415.

[xv] Smith, Adam. Reddit bans 7000 subreddits in hate speech crackdown, but says there’s ‘more work to do’. The Independent, August 21, 2020. https://www.independent.co.uk/news/reddit-subreddit-ban-hate-speech-crackdown-a9682626.html.

[xvi] Barba-Kay, Anton. The Sound of My Own Voice. The Point, February 1, 2019. https://thepointmag.com/politics/the-sound-of-my-own-voice.

[xvii] Twitter lifts freeze from New York Post account after policy reversal. The Guardian, October 30, 2020. https://www.theguardian.com/technology/2020/oct/30/twitter-new-york-post-freeze-policy-reversal.
