The Crowd: Contagion, Consensus, and the Power of the Collective




"An illusion shared by everyone becomes a reality."

— ERICH FROMM



You've probably seen it: a flock of starlings pulsing in the evening sky, swirling this way and that, feinting right, veering left. The flock gets denser, then sparser; it moves faster, then slower; it flies in a beautiful, chaotic concert, as if guided by a secret rhythm.


The flock moves this way not because of any intentional determination to get from its starting point to a landing place but because of a strange quirk of biology: each bird sees, on average, the seven birds nearest it and adjusts its own behavior in response. If its nearest neighbors move left, it tends left; if they dive right, it usually dives right as well. The bird does not know the flock's ultimate destination and cannot single-handedly redirect the whole. But each bird's small adjustments, occurring in rapid sequence, shift the course of the entire flock, creating mesmerizing patterns. We cannot quite understand it, but we are awed by it. It is a logic that emerges from—and is an embodiment of—the network of potentially thousands of birds: the structure of the network shapes the behavior of the flock, the behavior reshapes the structure, and so on.
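The local rule described above can be captured in a toy alignment model (an illustrative sketch of my own, not taken from the text, with invented bird counts and a made-up blending rate): each simulated bird repeatedly nudges its heading toward the average heading of its seven nearest flockmates, and coherent motion emerges with no leader.

```python
# Toy alignment model of the "seven nearest neighbors" rule (an illustrative
# sketch, not taken from the text): each bird repeatedly nudges its heading
# toward the average heading of its seven nearest flockmates.
import math
import random

NUM_BIRDS = 50
NEIGHBORS = 7    # each bird reacts only to its seven nearest flockmates
BLEND = 0.3      # how strongly a bird adjusts toward its neighbors each step

random.seed(0)
positions = [(random.random(), random.random()) for _ in range(NUM_BIRDS)]
headings = [random.uniform(-math.pi, math.pi) for _ in range(NUM_BIRDS)]

def alignment(headings):
    """Mean heading length: near 0 for a disordered flock, near 1 when aligned."""
    return math.hypot(
        sum(math.cos(h) for h in headings) / len(headings),
        sum(math.sin(h) for h in headings) / len(headings),
    )

def step(headings):
    new_headings = []
    for i, (x, y) in enumerate(positions):
        # find this bird's seven nearest neighbors by squared distance
        nearest = sorted(
            (j for j in range(NUM_BIRDS) if j != i),
            key=lambda j: (positions[j][0] - x) ** 2 + (positions[j][1] - y) ** 2,
        )[:NEIGHBORS]
        # average the neighbors' headings as unit vectors to handle wraparound
        avg = math.atan2(
            sum(math.sin(headings[j]) for j in nearest),
            sum(math.cos(headings[j]) for j in nearest),
        )
        # nudge this bird partway toward the local average
        diff = math.atan2(math.sin(avg - headings[i]), math.cos(avg - headings[i]))
        new_headings.append(headings[i] + BLEND * diff)
    return new_headings

initial_order = alignment(headings)   # disordered start
for _ in range(200):
    headings = step(headings)
final_order = alignment(headings)     # order emerges with no leader
```

No bird in this sketch computes the flock's trajectory or even knows the flock exists; repeated local adjustments alone are what produce the global pattern.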



This undulating dance—called a murmuration—occurs as information cascades across the flock. Looking only at what any one individual is doing misses the full scope of what is happening. This kind of beautiful, synchronized movement of the whole out of the aggregate actions of its parts is known in biology as collective or emergent behavior. It's seen in fish, ants, and bees and is, in fact, also a perfect metaphor for the behavior of flocks—or crowds—of people online.


Particularly where trending topics are concerned.


Collective behavior happens when individual members of the flock take cues from each other; there is no deliberate, intentional leader setting an agenda. Computational biologists and computer scientists who study collective behavior describe what's happening in a starling murmuration as "the rapid transmission of local behavioral response to neighbors." Each bird, bee, or fish has the capacity to influence the behavior of its neighbors even though it's not consciously aware that it's doing so.


The idea of groups having more power than the sum of their parts and individuals collectively influencing each other appears in human social psychology as well. While we may now be extremely online users of modern technology, our brains remain the same. Indeed, the notion of crowd psychology predates the internet by centuries. Chapter 2 touched on some of the good and the bad manifestations of this: social movements, moral panics, witch hunts.


Social platforms transformed our old networks, reorganizing us into new flocks formed around interests and identities, but they gained traction due to our age-old desire for connection with like-minded people. Information still passes from person to person, as with the rumor mill of old. But now it passes at unprecedented scale and speed, and—most importantly—it is guided by algorithmic nudges that shape what is seen, by whom, and when. Platform design decisions transformed crowds in terms of their organization, what they paid attention to, and by what means they could communicate their message and reach others. Curation algorithms, for example, choose what content or users appear in your feed; the algorithm determines the seven birds, and you react. Design decisions now play a huge role in determining whether groups online are going to behave like civil communities or mobs—and yet, at the same time, the participants in the crowd also have agency.


The internet is where reality is made. You might think that sounds dramatic or absurd—the world existed prior to the internet and is governed by the laws of physics. A tornado will destroy your house whether you choose to believe it's real or not. But for many other topics, determinations of what is "real," "true," or "accurate" come about by way of social consensus. People come together in groups to evaluate rumors, stories, experiences, and information and decide the truth of a matter, so perception shapes reality. Our perception of what's happening around us today is significantly informed by what we see online. Many people get their news, spend their time, and come together to discuss the issues of the day on the internet, particularly social media. And of course, what happens online doesn't stay online.


The digital crowd is the third part of the influencer-algorithm-crowd trinity. It's composed of ordinary people, the vast majority of whom aren't trying to be influencers. In past media environments the crowd was often relegated to the role of "the audience"—the recipient of narratives. But today's online crowds are extraordinarily influential participants; the flocks that careen around the social web both feed the algorithms and influence the influencers. Crowds are key to virality—they can turn a wayward comment by a random person into the focus of a mob, and they can shape consensus or create a perception of majority opinion simply by making certain stories trend.


Angry Birds


In May 2020, the United States was teeming with crowds as millions of Americans gathered to condemn police brutality following the murder of George Floyd. Most of these gatherings were peaceful demonstrations in both big cities and small towns. Some of these gatherings were chaotic mobs that left destruction in their wake: looted stores, burning cars. And some of these gatherings were strictly online: digital debates about race, law enforcement, and the roiling American landscape.


David Shor participated in those digital debates—and the crowd took notice.


At the time, Shor—about thirty years old—was a data scientist and political analyst at Civis Analytics, a data science firm. He had earned accolades in his field, playing an important role building voting-forecast models for Barack Obama’s second presidential campaign. But he was by no means a public figure. Yet.


On the morning of May 28, 2020, amid the protests and just months before a presidential election, Shor tweeted an observation about how the two might intersect. “Post-MLK-assassination [sic] race riots reduce Democratic vote share in surrounding counties by 2%, which was enough to tip the 1968 election to Nixon,” he wrote. “Non-violent protests increase Dem vote, mainly by encouraging warm elite discourse and media coverage.”


The tweet cited research from a political scientist at an Ivy League university and featured a thumbnail image packed with text and a busy-looking graph. It was exactly the type of tweet you would expect from a data scientist working in politics amid protests and an election.


But Shor’s tweet didn’t reach a few colleagues and then fade into oblivion. Instead, a handful of left-wing influencers seized on the tweet, suggesting it wasn’t simply an observation about the country’s race issue but rather a manifestation of it. “Go to Minneapolis and fill the protesters in about your findings. Be sure to video it for our viewing pleasure,” responded Benjamin Dixon, a political podcast host and prolific tweeter.


A couple of hours later, Ari Trujillo Wesler, the creator of an organizing app for progressives, joined the conversation: “Minimizing black grief and rage to ‘bad campaign tactic for the Democrats’ is bullshit most days, but this week is absolutely cruel,” she tweeted. Wesler also tweeted at Shor’s boss, alerting his manager to the perceived “anti-blackness.”



Shor’s tweet wasn’t quite viral—to this day it has only around five hundred quote tweets—but it was enough to stir up a crowd of angry progressive tweeters, who reprimanded Shor for an alleged lack of grace or, worse, outright racism. That crowd elicited an earnest apology tweet from Shor, but the damage was done: Shor had assumed the dreaded role of Twitter Main Character. In the following days, he was dismissed from his job at Civis Analytics.


Then the crowd went into overdrive.



Chatter on Twitter about Shor’s firing spilled over into traditional media, bringing the saga to an exponentially larger audience. “Stop Firing the Innocent,” read an op-ed headline in The Atlantic, written by author and political scientist Yascha Mounk. “The Still-Vital Case for Liberalism in a Radical Age” was the title of columnist Jonathan Chait’s essay in New York Magazine. The media coverage likely contributed to Shor being ejected from a progressive email list, which fueled more press coverage and podcast segments, which fueled more tweets. To crowds on the political right and in the center (and even some on the left), Shor was a martyr—an innocent casualty of “cancel culture.” To crowds on the left, Shor remained villainous, an exemplar of tone-deaf white America weighing in on racial justice.


By July, the saga was culture war canon—significant enough to warrant an entire explainer article on the news website Vox. Writers across the political spectrum debated whether or not this was an example of “cancel culture”—and, of course, whether cancel culture even exists. Murmurations on Twitter had transformed an unremarkable tweet by a young data scientist into the very center of The Discourse.



There is, perhaps, no clearer illustration of the importance of online crowds than the fact that media today report on the mere existence of fleeting online trends. In prior broadcast and print media ecosystems, the public gathered to discuss the events or ideas that talking heads debated on the nightly news. These days, “some people online are talking about David Shor’s tweet” is newsworthy. Trending moments on Twitter are overwhelmingly pseudo-events—Main Character–driven kerfuffles that are forgotten, sometimes within hours, as the next thing moves in to take its place. But in that moment the entirety of some faction is tenaciously fixated on a thing, absolutely mad with outrage. Indeed, within some of the most vitriolic echo chambers, this happens with such intensity and frequency that pseudo-events string together into a full-on bespoke reality: an unending stream of rumors, fabrications, and manufactured controversies captures attention, reinforcing and further entrenching the beliefs of the already convinced.

Meanwhile, those outside the faction are wholly unaware of the tempest.



Affordances for Activism


Digital crowds are so potent largely because of the tools that platforms give them, which enable them to assemble effortlessly, to spread information instantaneously, and to achieve nearly global reach. Facebook, Twitter, and others have built tools—and multi-billion-dollar businesses—that expertly facilitate the primal human urge for community.



This became abundantly clear by the early to mid-2010s, when a series of events dispelled the notion that social media platforms were simple places for posting about a sandwich or checking in on your cousin’s latest party pictures. They were also, it turned out, places for activism, influence, and power. Committed activists used them to topple regimes: the collective behavior of ordinary Egyptians seeing a Facebook invitation to protest in Tahrir Square in 2011 eventually took down a government. Activists across the political spectrum in the United States took careful note. More malign forces did as well: Islamic State (ISIS) terrorists began to use crowds of online sympathizers on Twitter to boost their recruitment efforts and share their content, glorifying physical attacks and inspiring copycats. And, unbeknownst to most at the time, governments had begun to participate surreptitiously: by the time of the US elections in 2016, Russia had been running fake accounts and bots for nearly two years, not only creating and disseminating the messages it thought would best serve its own interests but turning crowds of real Americans into unwitting accomplices. The platforms were useful, in large part, because everyone was on them, and everyone was targetable.


The tools that facilitate these crowds—and the consequent revolutions, mustering of armies, and election meddling—are algorithmic in nature. In Chapter 2, we discussed the algorithms that nudge people into joining groups or following influencers, even if those users never proactively search for the person or term. These algorithms are always gathering data to refine their suggestions: if the user responds by clicking, the algorithm has learned that the person—and more importantly, others like them—might be receptive to similar things.


This algorithmic entanglement happens outside the view of the members of the crowd. But platforms have given their users other algorithmic tools to leverage actively and consciously: features for liking, sharing, and commenting. Every scroll or hover communicates back to platform algorithms whether a user is interested in something, but liking, sharing, and commenting convey a stronger signal—the person has taken time to engage and amplify. Indeed, sometimes a like alone is all it takes for a platform to push that piece of content into the feed of the user's friends or followers.


Through these tools, even people who aren't trying to become influencers have the power to influence. Ordinary people spread messages among their own social networks every day. Each discrete action, each like or share, doesn't matter very much at the individual level. But when there is some momentum, when many users share the same news URL or meme or hashtag at the same time, platform algorithms then push it out to an even broader crowd of people who are likely to be interested.



The principle is simple: information moves faster when many people are connected to many other people. When a huge influencer has millions of followers but those followers are not connected directly to each other, the message travels only so far. This one-to-many connection is how broadcasting works on radio or television. But many-to-many connections, decentralized across millions of nodes, enable truly mass dissemination.


The potential for virality depends on the substance and tone of the content, of course, too—no one shares boring material. But the structure is key, and so are the affordances that the platforms give the crowd. On Twitter or Facebook, if there are lots of committed sharers or likers, something can spread easily. If high-follower-count influencers with significant reach get involved, things can move farther and faster. The content won't necessarily hop across communities, but it will capture attention within the one that is riled up about it. Instagram and YouTube, by contrast, don't have on-platform sharing functions, so virality doesn't happen in quite the same way. But tons of attention to a particular hashtag or searches for a specific phrase might make it show up as a topic of interest, and a lot of views all at once might result in the platform featuring a video or image on a main landing page. And, of course, it's very easy to repost an Instagram meme or YouTube video to Twitter or Facebook.


These affordances are part of the reason digital crowds wield such tremendous power today. People move things from one platform to another, from one community to another, relatively effortlessly. Stories move from the bottom up—mass media is not the only agenda setter. But the power given to us by new computer software at times intersects in unexpected ways with old human software—our psychology.


Open or Closed: The Behavior of Online Crowds


In 1960, Nobel Prize–winning writer Elias Canetti, a Bulgarian-born Jew who fled Austria as the Nazis took power, published a book on crowd psychology called Crowds and Power. Despite long predating the internet, Canetti’s work, like that of Walter Lippmann and Edward Bernays, remains a useful contribution toward understanding human behavior even as it now manifests in a different landscape.


Canetti defined four attributes he believed are universal to crowds across history, from ancient tribes to the roiling masses of the French Revolution to the disaffected citizens of the Weimar Republic. First, the crowd always wants to grow. Second, within the crowd there is equality: all members stand on equal footing, and previous divisions of race, class, or other characteristics are temporarily erased. Third, the crowd loves density: there can be no component parts, just the singular crowd. And fourth, the crowd needs a direction: a shared objective to unite and propel members to act in the common interest. The direction is essential for the crowd’s existence. While crowds have culture, symbolism, and ritual, this shared goal is what keeps them from disintegrating. As Canetti put it, “A crowd exists so long as it has an unattained goal.”


Much like Noam Chomsky’s five filters, Canetti’s four attributes are still relevant today, but with adjustments to account for the virtual nature of gathering. Online crowds still want to grow and are now unhindered by physical restraints. Online crowds still have surface equality, though internecine fighting and accusations of doctrinal impurity are not uncommon; power struggles sometimes result in highly visible schisms. Online crowds still love density, and our digital accounts can be packed together more easily than our physical bodies; there’s always room for one more Twitter handle or hashtag. And as for direction, well, the shared mission is easier than ever to create given the steady supply of Main Characters and tweet-length hot takes. Common enemies and scintillating commentary are abundant and accessible.



Perhaps the most relevant part of Canetti’s book today is his theory about why some groups of people—members of churches and the like—come together, often in large numbers, yet remain relatively peaceful, while other large groups come together more spontaneously and are far more likely to turn violent.


He called the first group closed crowds and the latter open crowds. Closed crowds, he wrote, like churches, Rotary Clubs, and community groups, all have persistent members. They are people who share a common worldview, identity, or purpose and tend to form strong ties over time. There is a set time and place for meetings, an organizational structure. Sure, some closed crowds, including cults and extremist groups, actively dislike “out group” people. But closed crowds, overall, are largely neutral—in fact, they often evolve into institutions.


Open crowds, Canetti wrote, have no such persistence or clearly articulated reason for existing. They are far more spontaneous, emerging out of some kind of disruption and then rapidly growing by drawing people in with an almost gravitational force. The open crowd, Canetti argued, often assembles in response to an outrage. It does not dissipate until there is some sort of release, which is often chaotic or violent.


The theory of open and closed crowds maps neatly onto the dynamics of different social media platforms in the present day. The persistent community groups of Facebook, Discord, and WhatsApp lead to deep relationships and places for strategizing and discussing content with the like-minded. These platforms produce closed crowds. They can still become extreme or even generate violence, but the members know each other, and there is a persistence to their bond. Twitter, meanwhile, operates like an arena—a virtual Roman Colosseum where toxic, highly visible battles are fought within hashtags, and mobs of thousands tweet at (or about) just one or two people, seething with rage as even more bystanders look on. The platform produces open crowds. Anyone drawn to the energy can immediately jump in.


An early example of these chaotic, open-crowd battles was a 2014–2015 episode dubbed “Gamergate.” The drama had started off as a niche fight: some gamers, mostly male, got angry at (and about) women in the video game industry campaigning to reduce what they saw as sexism and misogyny in game design (female characters with skimpy clothes and exaggerated breasts and the like). These opponents of the effort felt that “political correctness” was ruining video games. There was simultaneously some conspiracy theorizing about how several prominent women in gaming had become prominent in the first place—one woman was accused by her boyfriend of cheating on him and sleeping with a reporter for a gaming website in exchange for better reviews (the reporter had not reviewed any of her games; the ex-boyfriend later blamed the insinuation on a typographical error).


Some gamers saw the rumor about sleeping with reviewers as evidence of manipulative behavior; they harassed the target and others who waded into the controversy, framing their outrage as being about “ethics in video game journalism” and as part of a fight to preserve the gamer identity. The other side, including those who joined in to defend the women who were targeted, argued that this was a social justice issue.


What might once have been a personal fight between a handful of people connected to a specific precipitating incident instead ballooned into a massive harassment campaign tied up with a seething online culture war about feminism.


Some of the early gamer-identity combatants expended significant effort in trying to sway public opinion to their side. A vitriolic subset, however, organized targeted harassment campaigns on the message board 4chan. The goal of their behavior, which included posting private information, hacked nude photos, home addresses, and threats, was to intimidate their targets and drive people out of the conversation entirely. Some fled their homes. Even just tweeting an opinion about Gamergate, as a bystander who happened to see the hashtag, might lead to significant harassment as the mob coordinated to brigade others. Hyperpartisan media, such as Breitbart, began to cover the events—its then executive chairman Steve Bannon later described how instructional Gamergate was, how he’d learned the art of factional activation from watching how its most vicious trolls behaved and realized the potential of online armies in political fights. Eventually, mainstream media began to write explainers, describing the “online culture war” and its intersection with offline issues.


While Canetti’s framing remains relevant, the nature of open and closed crowds has evolved. Twitter’s design creates an opportunity for emergent collective behavior in which bystanders everywhere can instantly jump right into open crowds and start brawling. Yet Twitter’s crowds lack the peak and then dissipation that Canetti described. In the real world, statues are toppled, and protests come to an end; alternatively, hatred and violence can be mitigated by looking into the eyes of the target and recognizing their humanity.


But online harassment, brigading, and dogpiling have no similar catharsis. There is no physical requirement to disperse and no achievement indicating that things have come to an end. Instead, there is a perpetual state of simmering outrage or vitriol, ready to boil over at an opportune provocation; Gamergate itself lasted for months. As members grew tired of brigading one person or raging against some idea, they simply grasped at the next. And participants in the mobs are often further emboldened by the cloak of online anonymity. There are no consequences for the behavior and minimal potential for de-escalation short of the platform suppressing a trend or suspending accounts of the worst participants—cold comfort, as new members of the crowd will be online again a few hours later to continue the fighting, and still more will appear to complain about “censorship.”



The power of closed crowds has evolved as well. The recommendation engines’ proclivity for connecting the like-minded on platforms like Facebook led to the formation of millions of insular crowds who congregated within persistent virtual gathering places, often built up deep trust between members, and shared long-term goals. Some oriented around a shared political identity, forming persistent places for communion among fellow travelers. The most partisan and extreme groups became echo chambers, where members distrusted outsiders, and opposing viewpoints rarely made it in (unless they were shared to be mocked). They were a small percentage of the groups overall, but they had significant disruptive impact. The proliferation of potential communities and crowds to participate in enabled users to choose their own adventure.



These new dynamics of open and closed crowds have distorted our perception of opinions, events, and even norms. Groups that felt themselves underrepresented in mainstream media conversations—sometimes marginalized or small communities but also conspiracy theorists—realized that by dominating the online discourse they could own share of voice (a marketing term referencing the percentage of media representing the opinion of your company or side in a political campaign compared to that of your competitor). By coordinating in a group chat or Facebook group to come together in a strategic manner on Twitter, activists could ensure that people looking for information about a controversy or debate would see their opinions. If they had few ethical scruples, harassment tactics could be used to discourage the other side from participating at all. A small group could manufacture the appearance of being the majority, in fact, if their opposition didn't consider the online battlefield equally important or was sufficiently intimidated.


The anti-vaccine movement, for example—while growing—is still relatively small: recent surveys put those opposed at roughly 15 percent of the US population, even as the view appears to be a majority perspective based on social media vibes. Similarly, polling shows that only 18 percent of Americans support defunding the police, but that position often seems quite mainstream on Twitter and other platforms. Yet a small, committed group can game (or more effectively use) a social platform to make their viewpoint seem dominant. A combination of effective networked crowds and compelling influencers means that some of us now perceive minority opinions as the majority viewpoint, outliers as the norm, and embellishments as facts—illusions that happen because the actual majority is silent, not active on the platform, or not creating the kind of content that platform algorithms are curating. Most people do not tweet about their children getting vaccinated and having no side effects; the child gets the routine immunization, nothing happens, and they just go about their day. There are online groups dedicated to spreading the “truth” about a flat Earth—but not a round Earth.


The Political Factions


Recognition of the power of online crowds to shape perception and galvanize action led to a proliferation of online factions specifically dedicated to fighting about politics and policies. In the mid-2010s, emojis became very popular, and Twitter users increasingly began to include emojis, hashtags, and keywords in their usernames and bios to prominently signal their political identity or allegiance. The Pepe the Frog meme—a leering cartoon frog popular on alt-right message boards—was distilled down into the frog emoji (🐸); many Gamergate trolls began to use it, as did a growing community of young Donald Trump supporters. Hillary Clinton’s supporters were slower to embrace Twitter battles, but eventually adopted the blue wave emoji (🌊), which stuck for Democratic activists in subsequent elections even as Trump fans moved to American flags. Emoji-in-bio was a quick signal of allegiance, much as warring bands might have used coats of arms on their shields in eras past. By the late 2010s, emoji-delimited factions oriented around causes or ideology had proliferated: the rose of the democratic socialists, the meridian globe of neoliberals, the bike of public transit activists, the avocado of activists calling on their cities to build more housing (adopted after an executive made an ill-considered comment that young people couldn’t afford to buy homes because they spent too much money on avocado toast). And alongside the factions were fandoms: the bee for fans of Beyoncé, the purple heart for the die-hard fans of the K-pop band BTS. Each flock engaged with the others, at times in unexpected ways.


This was precisely the case in 2020, when one San Francisco activist tried to generate a viral moment for his political campaign—and ended up sparking something entirely different. 


Shahid Buttar, a self-described Democratic Socialist, was challenging Nancy Pelosi for her seat in the House of Representatives. One Sunday morning that July, a message appeared in a Discord server (a group chat room) named “Bernie or Vest,” a reference to Senator Bernie Sanders and France’s “Yellow Vest” protest movement—both of which are popular with young left-wing Americans. The server was home to a couple thousand of the rose-in-bio Democratic Socialist activists. The message contained an image of Buttar and asked the members of the server to participate in a social media campaign to boost his candidacy and attract public attention. The goal was to get the hashtag #PelosiMustGo to the top of the Trending Topics list, in hopes of calling attention to a long-shot candidacy that the media had largely ignored. The author of the request included a list of criteria that would make the curation algorithm more likely to push out the individual posts and less likely to penalize the group for trying to game it.



At 11:57 a.m., a Twitter user with a modest seventeen hundred followers jumped the gun on the planned 12 p.m. start time for the campaign: “#PelosiMustGo,” they tweeted. Buttar himself posted promptly at noon: “Why do you think #PelosiMustGo?” he asked his 113,000 followers. His tweet inspired several hundred replies and retweets, some encouraging him, others questioning him, others mocking him. More of his followers began to join.


#PelosiMustGo moved up Twitter’s rankings, elbowing aside other topics that were trending at that moment: AR-15s, a golf tournament, Trump’s pardons, and Education Secretary Betsy DeVos. As it reached number seven on the Trending list, GOP congressional candidate (and QAnon supporter) DeAnna Lorraine—who herself had run against Pelosi—noticed the hashtag and tweeted her own contribution to her then 330,000 followers. She and Buttar disagreed on nearly everything—except that #PelosiMustGo.


What happened next was fascinating.


Within three minutes of her post, the hashtag—until then largely confined to the Democratic Socialist faction—began rippling through the right-wing followers of Lorraine. A second faction had entered the campaign! The conspiracy brokers of the QAnon Twitter faction quickly got in the game, appending #PelosiMustGo to their own addled posts about Wayfair and child trafficking (this was mere weeks after Amazing Polly’s viral hashtag). Pelosi supporters—the blue wave emoji-in-bio crew—also materialized, tweeting in an effort to reframe the hashtag in a positive light: “#PelosiMustGo straight to the White House and take over the presidency!”


Forty-five minutes after Buttar’s first tweet, Jack Posobiec—the far-right Pizzagate agitator and ideological polar opposite of Shahid Buttar—picked up the hashtag. His contribution was a banal observation: “#PelosiMustGo is now #6 trending.” He himself did not even know why it was trending. But his post was enough for his followers to understand the role they were to play. His flock liked his post sixteen thousand times and replied or retweeted it thousands more. They added their own color to the conversation—#DemocraticCriminalNetwork, #PlantationDemocrats, #PuppetRegime—and propelled the hashtag fully into the national political conversation. The trend hit number one. Mission accomplished.



Hyperpartisan conservative media outlet the Daily Wire pulled the trend from social media into the broader media universe, reporting on what some people on the internet were saying, noting the hashtag’s popularity, and quoting some of its most successful left-wing contributions while studiously avoiding any mention of the right-wing faction’s involvement in getting it there. In their telling, the outpouring was wholly reflective of a massive left-wing revolt against Nancy Pelosi. 


Most of the hundreds of thousands of Twitter users who saw that hashtag trend never knew that #PelosiMustGo began because someone gave marching orders in a private Discord channel. The hashtag hit their field of view, and they reacted. They likely assumed that some sizable number of Americans somewhere were spontaneously tweeting against the then Speaker of the House—and clicked on the trend because they were curious about why.


The visible online factions, battling under their emoji banners, are having an impact. Sociologist Chris Bail, head of the Polarization Lab at Duke University, conducts interviews with social media users who become passionate participants in online political factions. They describe these online political fights as attacks on their identity. Seeing opposing views does not inspire careful reflection on the underlying policy or idea—it feels like a personal affront. Some report that the highly visible presence of opposing factions led them to become more entrenched in their own political identity—to study talking points so that they might fight online more effectively. They also described the experience of online conflict as interactive; unlike when passively seeing something on the TV news, they could search for terms, find the latest factional battle, and immediately jump in. "There was a war going on, and [they] had to choose a side," recounts Bail, describing the experience of one liberal woman.


People initially come to participate in online crowds because of a mix of algorithmic nudging and personal interest. Being part of a political faction can be fulfilling—there's a cause and a mission. Fighting a common "enemy" creates camaraderie and a sense of belonging. It can also be fun. But participation in factions may lead to entrenchment, more extreme beliefs, or stronger and more belligerent partisanship. The groups that we're part of are fundamental to our sense of self; being a persistent factional warrior becomes part of a person's identity.


Crowds, Cults, and Extremes


Social networks transformed not only how crowds of people convene and behave but how people within crowds influence one another. In Chapter 3, we discussed the phenomenon of audience capture—when an influencer shapes their content and opinions to fit the mold of what their audience wants. A similar phenomenon can trap crowds too, creating social capture as members reinforce each other's point of view, and diversity of opinions becomes increasingly scarce.


We decide what information is correct by considering what others think is correct (consensus reality). We also determine what behaviors are correct to the degree that we see those around us performing them.


Members of groups tend to reinforce each other's views, often moving each other toward a more extreme point than where they started.³⁸ Factions appear to coalesce around the opinions of the most forceful members, and those who hold differing opinions—maybe more moderate—don't express them for fear of being ostracized.³⁹ Since beliefs are shaped collectively, new information that conflicts with the group identity or comes from someone with an "outside" identity can simply be rejected; this is one reason that partisans easily dismiss fact-checking if it comes from the "other side."


This also happens with group norms and behaviors; people try to conform to the norms associated with their identity, which may also drift toward the more extreme over time. Psychologists have found that norms are followed more strictly if there's tension with an opposing group—you don't consort with the other side while you're at war!—and in the gladiatorial arena of social media, there is always tension.


The identity of an online faction should ideally reflect its positive beliefs—the things it stands for—but some instead come to be defined oppositionally. The phrase "owning the libs" or "triggering the libs" is a meme that captures the driving motivation of some right-wing factions. It's a joke, but a revealing one: without liberals as an antithesis, the faction loses cohesion. Indeed, this is one reason why right-wing alt-platforms are slow to see significant adoption: there are no libs around to own. It is more fun to stay on a platform with an opposing crowd.


Loudly railing against an enemy gets engagement. As members post and tweet during factional battles, the most extreme voices usually get the most attention; because of the way platforms are designed and how algorithms surface content, they appear to represent the opinion of the whole group. So the most extreme views from one side end up visibly clashing with the most extreme views from the other side. People with more moderate or nuanced views might get harassed, or self-censor, or not even be seen in the discussion... or they might express opinions far stronger than the ones they actually hold in order to fit in.



Performing an identity and achieving uniform opinions within a group are not the same thing as reaching consensus. Consensus is a process of evaluating facts, deliberating, taking in a diverse set of opinions, and forming a view of the world. Unfortunately, the incentives to go through that deliberative process are not the ones that drive the influencer-algorithm-crowd system.


Rather, the system tends to reward content that generates strong reactions and aligns with prevailing opinions. People within the faction who are particularly good at mocking collective enemies or ideological dissenters get recognition from their peers; their clout within the group increases. And when mockery, hostility, and nastiness are not only normalized but incentivized, people will deliver. Marxist writer Freddie deBoer described the dynamics well in a critique of an online pro-housing movement (which he broadly supports), though the problem is not specific to any one faction: "Social incentives cut in favor of nastiness and extremity and against nuance and bridge-building, so the forums gradually become ugly places that are hostile to dissent and celebrate excessive

