2. If You Make It Trend, You Make It True: Influencers, Algorithms, and Crowds
IN THE SUMMER OF 2020, as the world fretted its way through some of the worst days of the COVID-19 pandemic, the popular online furniture store Wayfair started selling something highly unusual: trafficked children.
That, at least, was the claim that exploded across a series of social media posts that popped up on Twitter, Facebook, TikTok, and YouTube, basically everywhere that mainstream Americans spent their online time. The argument: product listings for certain industrial storage cabinets, costing up to $10,000 each and marketed with human names, were in fact code for abducted children who shared the same names. Or possibly even had the abducted children inside them.
The bizarre accusations were soon generating millions of engagements on social media. Posts appeared from as far afield as Turkey and Argentina, featuring pictures of girls who had been reported as missing — and even some who hadn’t — next to images of extremely expensive cabinets and pillows that shared their first names.
“Y’all this Wayfair Human trafficking thing is crazy. Look at this, there are two pillows/shower curtains that are the exact same, but one is $100 and the other is $10K. The $10K one is named the same thing as a Black girl missing in Michigan,” said one post on Twitter that was retweeted 70,000 times and liked 139,000 times.
The viral rumor, which congealed under the hashtag #SaveTheChildren, turned angry very quickly. Threats were issued against the popular Boston-based furniture company and its employees. Real-life children and teens whom the internet hive mind had decided were victims of trafficking via filing cabinet found themselves recording videos denying that they had even been abducted. Their efforts did not assuage the online mob. One eighteen-year-old was told that she was clearly being forced to make her denial by her captors. When she pushed back, again, she was criticized for her lack of gratitude to all the people trying to “save” her.
Federal investigators and nongovernmental organizations running tiplines—people who investigate and combat actual human trafficking, as well as a child aid organization actually named Save the Children—had to take time out from looking into real abductions to examine the slew of fake reports. Some made their own appeals on social media in attempts to quell the rumor, begging the public to please stop flooding their lines with calls about Wayfair.
The notion of young women stored in filing cabinets could be easily dismissed; as the Twitter trend gained mass attention, many observers roundly mocked it. The threats that accompanied the theory, however, had to be taken seriously. After all, in 2016, outlandish accusations that former secretary of state Hillary Clinton and a cabal of Democrats were running a child-trafficking ring out of the basement of a Washington, DC, pizzeria prompted a man to show up at the restaurant, armed with an AR-15 rifle and a revolver, to rescue them. He fired at a locked door; luckily, no one was hurt. “The intel on this wasn’t 100 percent,” he said in an interview following his arrest, while emphasizing that his intent had been to help people.
And yet, despite the fact that no children had been found, despite the fact that there was not even a basement in the building, that theory—known as Pizzagate—did not dissipate.
Influencers who had gained large followings by concocting and promoting it, such as naval reservist turned right-wing conspiracy theorist Jack Posobiec, did not recant following the incident. They doubled down: the man with the AR-15 who showed up at Comet Pizza was a plant, a false flag sent to make them look bad. In the time since Pizzagate erupted into real-world violence, Posobiec’s following has grown from 57,000 to 2.3 million on Twitter; he now has active followings on Rumble and other right-wing alt-platforms, as well as myriad contributor relationships with right-wing media and advocacy organizations. Posobiec, as we will discuss later in this book in the context of his work pushing wild theories about an array of topics, is rarely constrained by facts. Yet, despite profiting immensely from misleading claims and conspiracy theories, he presents himself as an ordinary person just there to help his audience break free of the lying mainstream media.
Similarly, the tens of thousands of people who coordinated in Facebook groups and Reddit communities to “investigate” Pizzagate did not disperse after the sheer impossibility of the claim was laid bare. Quite the contrary, many metamorphosed into adherents of the even broader cultic conspiracy theorist community QAnon, which alleged that then president Donald J. Trump was secretly battling a Satanic cabal of pedophiles who were harvesting and drinking the blood of children. In fact, a majority of people who initially pushed the claims about Wayfair’s expensive industrial filing cabinets were QAnon adherents, though many other virtual bystanders saw the conspiracy theory appear in their feeds and, perhaps intrigued, helped it spread by retweeting and sharing it. Someone should look into this…
How had this all happened?
It began when a middle-aged, blond Canadian woman with a YouTube channel and the online handle “Amazing Polly” wrote a tweet. On June 14, 2020, she posted a catalog page showing some standard, if pricey, Wayfair cabinets with the accompanying text: “My spidey senses are tingling. What’s with these ‘storage cabinets’? Extremely high prices, all listed with girls’ names & identical units selling for different amounts.”
Following Polly’s post, the rumor percolated on Reddit, Facebook, and Instagram, gathering steam before going viral on July 10. Her fellow QAnon “investigators” picked it up first, but it spread outside the community as online sleuths got to work finding pictures of kids who’d been reported as missing—including, at times, those who had been reported falsely or were runaways or had since reunited with their families—making collages that put their photos, names, and details alongside Wayfair products with the same name.
The collages were made by ordinary people: they weren’t influencers with hundreds of thousands of followers. Yet they too felt compelled to participate. Some were part of a community of “warriors” who prided themselves on discovering the secrets of elites and fighting a theoretical shadowy cabal of famous political criminals who moonlighted as child traffickers, but as the rumor spread, it drew in others who were simply intrigued in the moment. Meanwhile, the content they created and shared as part of this mission to expose wrongdoing—all of the videos and tweets and posts that used the word Wayfair or included the #SaveTheChildren hashtag—was picked up by social media algorithms. The algorithms, in turn, saw the sudden high-velocity outpouring of interest in pricey filing cabinets as a signal to surface the theory to even more people.
And yet, understanding how it happened from a mechanical standpoint raises far more important questions. Why was an obscure woman in Canada, spewing obvious nonsense, able to whip up such a global frenzy? Why were so many otherwise rational adults willing to believe her?
The story of Amazing Polly’s viral Wayfair moment is that of a simple process that happens many times a day: an influencer on the internet says a thing, and her many followers react. Algorithms boost that collective signal, which makes more influencers, people, and media pay attention and participate as well. This straightforward process describes the dynamic not only of the Wayfair debacle but also of countless other rumors that propagate across social media many times a day.
People like Polly are transforming our politics, culture, and society. The power to shape public opinion—for centuries, the purview of invisible rulers within media, institutions, and positions of authority who had the capacity to define and disseminate messages—is no longer controlled from the top down. The new invisible rulers—influencers and algorithms supported by online crowds—excel at bringing information (along with a proliferation of rumors) to mass attention, from the bottom up. Underpinning this ecosystem is a new reality: if you make it trend, you make it true.
The Influencer
You have probably never heard of Amazing Polly; most people have not. But within the QAnon conspiracy community, she was an influencer, with eighty-eight thousand followers on Twitter around the time of the Wayfair debacle. Before YouTube and Twitter shut down her accounts a few months later, she’d reached 375,000 subscribers and 24.7 million video views on YouTube and 141,000 followers on Twitter.
Polly is an avatar for the new cast of people wielding influence today. With her several hundred thousand followers, Polly had access to an audience comparable to a respectably sized local media outlet. But she was not a media outlet. She was an active participant in a very particular niche community, QAnon, and was perceived as a trustworthy person within that online community. Her followers considered her interpretation of current events worth not only paying attention to but amplifying, even though the broader American public had never heard of her. She was a conduit who had the power to create and spread her own narratives across platforms. On that day, her narrative reached millions.
A handful of seemingly arbitrary people on social platforms, “influencers,” now have a significant impact on what the public talks about and what the news media cover on any given day—particularly when it comes to culture war politics. Influencers are the opinion leaders of the internet age. But instead of simply digesting what the media is saying for their community of friends, as the women of Decatur did, they drive the conversation, not only curating but also creating, with a healthy dose of support from online algorithms and audiences who share and comment on their content. This new system determines what goes viral, what gets attention, and what is then reported on by the media—in turn shaping public opinion, steering culture, and stoking political battles.
But what is an influencer, exactly?
The term came from marketers. Back in the early days of the internet, marketers serving corporate America discovered that certain charismatic people had a talent for connecting with niche audiences and persuading them to buy things. The term influential and then influencer—meaning someone with a large following and a uniquely relatable presence—began to appear on marketing blogs to refer to these folks. That charisma came not from being larger-than-life or famous—in other words, not a celebrity—but from being ordinary, albeit with a slightly charmed life (or the ability to make it seem so through selective editing). Aspirational, yet achievable. These captivating folks often had a particular skill, hobby, or interest that they were infectiously excited about, something that inspired them to take to the internet and share with the world.
The influencers, it seemed to the marketers, were people with the potential to shape the culture: what people listened to, what they bought, even what slang they used. This didn’t necessarily mean that they could accomplish this feat at mass scale—influencers usually didn’t have national name recognition—but that didn’t matter. They were connecting with particular communities of people as they blogged, vlogged, and eventually Instagrammed and TikTok’d their everyday lives. And many companies wanted to reach those particular communities of people.
The elite among influencers possess the storytelling chops of a Madison Avenue ad exec, have the reach of a mass media TV anchor, and create the cozy, intimate feeling of a phone call with your best friend. Some promote products, using their clout to earn money—being an influencer can be a very lucrative career. Others, particularly since 2015, promote ideas and ideologies, delivering political commentary with the rhetorical adroitness of a master propagandist. Some influencers, strikingly, do both—purveying politics and products—because once an influencer has grown a sufficiently large audience, people will consume their posts not only on the niche interest they were originally known for but also on whatever topic they decide to weigh in on. A mom blogger, who got famous for her fun school lunch content, weighing in on Fed rate hikes? Why not.
How have these seemingly ordinary people on social media become shapers of culture, politics, and public discourse on topics from trafficked children and COVID vaccines to school curricula and police funding? The short answer is that influencers are masters of attracting attention and then using that attention to amass more influence.
Let’s look at how this cycle came to be and how it works.
The early social internet—sometimes called Web 2.0—changed the media landscape, giving anyone with a dial-up connection free tools to blog, post photos, and make memes. The Age of the Creator began.
However, while the internet delivered free creation tools and blogs, a lot of the content that was created languished in obscure corners of the web. Everything was decentralized, spread out across millions of individual sites, with only some fairly primitive search engines to surface interesting items. This made it hard for people to find the content and hard for creators to grow an audience.
That changed in the mid-2000s as social networking companies gave their users even simpler on-platform tools for making content, encouraging them to type a one-sentence status or upload a photo of the kids. But more importantly, the platforms also put the tools to target and spread content into everyone’s hands. The structure of communication was once again transformed.
Social media affordances—the features and capabilities that social media companies offer their users—democratized distribution. Hundreds of thousands of users joined the nascent platforms, hanging out first with their real-world friends and later with friends they made online. Now, with massive potential audiences all largely in one place, creating and sharing content felt meaningful—exhilarating even. Anyone could create and spread a message, whatever it happened to be. This new ecosystem overtook any conception of audiences as passive; they had now become active participants in both creation and dissemination.
Some early creators used these tools to share news, first on blogs and subsequently across all available social platforms. Journalist and professor Dan Gillmor referred to the new class of contributors as “citizen journalists.” Some participated in an ad hoc way, helping to write the first draft of history for a particular event by sharing their stories of being in a relevant place at a critical moment. Others established a more permanent presence and built up self-published news sites. These creators, the media-of-one, adopted the mantle of media through declaration and branding. Despite official-sounding names, they were often run by one or two people from their living room: passion projects, at first, though some of them attracted significant followings and quickly professionalized. Ezra Klein, Andrew Sullivan, Brian Stelter, and Matt Drudge, for example, began as bloggers, gained acclaim, and then became high-profile journalists. Some pursued breaking stories within a particular niche area of expertise, such as a previous industry they’d worked in. Many, however, chose to focus not on breaking news or investigative journalism, which require significant resources, but on analysis and commentary, using the tools of social media to turn journalism, as Gillmor put it, into a conversation rather than a lecture. They would come to redefine the industry.
But alongside this (d)evolution in media came a unique offshoot in the evolutionary tree: individuals who used the same social media tools and strategies as the proliferating new-media entrepreneurs but positioned themselves quite deliberately as ordinary people.
These ordinary folks—like Polly or, perhaps most famously, the dance phenomenon Charli D’Amelio, a cheerful brunette from Connecticut with 151 million TikTok followers—didn’t bother trying to construct some corporate or newsy brand identity and didn’t declare themselves “citizen journalists.” They just wanted to talk about the random interests they were most passionate about: the retiree who loved to discuss gardening; your kid’s piano teacher, who, on the side, was super into knitting and cooking; a full-time barista with a secret passion for ecotravel.
They posted as themselves, using a casual, first-person, low-production style. And yet, despite being completely ordinary, some managed to amass huge audiences. These creators had the reach of mass media—passionate fanbases who shared and retweeted and reposted their content—but they were intriguing to audiences precisely because they didn’t have a corporate media feel. Their ordinariness granted them a combination of authenticity and relatability: I am just like you. And while this “ordinariness” may become manicured over time, its origins are genuine.
Their audiences listened to what they had to say and, more importantly, kept coming back to hear more. Some influencers didn’t even have particularly massive followings, but their opinions were respected enough that they could potentially drive their followers to form a positive opinion of a brand or try out a new product. Marketers encouraged their corporate clients to engage with this emerging force: to sponsor posts or to enable product placement. These elite content creators were a contemporary manifestation of Bernays’s invisible rulers: people who understood a community’s psychological and emotional currents, who could unlock the motives behind a community’s desires, who could create purchaser demand. Because the influencers were members of the community themselves, marketers believed, they could convince people to buy things—and, as political forces began to take note, to believe things.
It turned out to be a bit more complicated than that: having a lot of followers didn’t translate into having the power of persuasion, and influencers weren’t magicians capable of manifesting trends out of nowhere. But they were good at forging close and trusting relationships with their fans. They created an online experience truly designed for the social media age, in which the audience could participate by commenting and chatting back. Influencers joke around in the comments, often replying to followers individually. People genuinely feel that they know many of the influencers they follow. The influencers might live a million miles away, yet they feel present. They are the antithesis of old media, which is aloof and one-way, talking at its audience. Influencers, instead, talk with their audience.
This also makes them distinct from celebrities, who are often anointed by the media and remain somewhat inaccessible. It’s not always clear how or why the legacy media decide to anoint someone. Early in her rise to stardom, for example, Kim Kardashian was often described as “famous for being famous”—a concept articulated decades prior by Daniel Boorstin, a prominent American historian, author, and media theorist of the mid-twentieth century. Many influencers seem to fit this characterization. Charli D’Amelio poked fun at it in one of her TikTok bios as she rose to stardom: “don’t worry I don’t get the hype either.” She’d been an early adopter of TikTok when the app was still relatively new in the United States and was a talented dancer, but algorithmic alchemy played an undeniable role in her meteoric rise. Indeed, in addition to being engaged and available, influencers are distinct from celebrities in that they are often self-made; while the algorithm provides an assist, they attract their early attention themselves by being good at social media. They control their own content, brand, and distribution. While influencers can certainly become celebrities—garnering movie cameos or guest spots on TV programs like D’Amelio’s stint on Dancing with the Stars—that happens after they’ve been effective on their own.
But what makes an effective influencer? The opinion-leader ladies of Decatur were fellow members of a geographically constrained real-life community who were personable and attuned to the news around them. If everyone has access to the same tools, what sets some people apart, enabling them to amass influence in an increasingly virtual world?
Traits that endear people to others, such as charisma, attractiveness, humor, and a lively personality, are also key to an influencer’s popularity. But beyond being relatable, they are highly attuned to their audience; as their following evolves, they balance authenticity with being appealing. They know what social media trending and curation algorithms will reward—in fact, they spend a lot of time paying attention to that.
But first and foremost, the influencer is a storyteller—someone adept at creating twists, conflicts, heroes, villains, and all the other trappings of a good narrative.
Marketing is also about telling a story and creating context and an image around a product. So is public relations, as Bernays repeatedly emphasized. Therefore, it is unsurprising that corporate storytellers were the first to see the power of social media influencers. As social media gained more adopters in the late aughts, marketers developed the criteria of “reach, relevance, and resonance” for assessing what made particular influencers successful.
Reach—how many people an influencer could, well, reach—measures an influencer’s audience size, an important metric for determining how far a message might spread. How does an influencer get that reach, though? By creating and curating content that interests people—often within a particular niche to start. Relevance is about sharing information that a specific audience cares about. Certain influencers become top-of-mind for a particular topic: Jordan Ferney of Oh Happy Day for her whimsical, made-for-Instagram children’s birthday parties, PewDiePie for gaming, Jeffree Star for makeup application videos (and, recently, raising yaks). Even if not broadly known to the whole country (like a celebrity), the influencer is perceived as a compelling or authoritative speaker producing meaningful content for a particular niche.
Resonance is the creator’s ability to make a message so compelling that people take action: buy a product, attend a rally for a cause, or even just share the creator’s video. And this is where skill as a storyteller comes into play. Relevance assesses whether an influencer is posting about topics interesting to a particular audience, but resonance asks whether an influencer’s posts are clicking with their audience emotionally, enticing followers and fans to spread the content and come back for more. Even influencers with low follower counts are useful if they have relevance and followership within a distinct, tight-knit community. They may have fewer followers, but they enjoy a trust and rapport that celebrity spokespeople don’t have. Influencers know what stories are relevant to their audience because they’re part of the same community. But knowing how to tell them—what tone to strike, what kind of rhetorical style to use, and what will build trust or entice a share—sets some influencers apart because it helps them capture and keep attention. And if they delve into the political realm, the combination of trust and skill at storytelling gives influencers the power to sell something far more potent than products: they can sell ideologies.
The power to influence opinions lies with those who can most widely and effectively disseminate a message. This holds in all media environments and was even true in our old word-of-mouth English village. But platform tools for sharing have supercharged dissemination—and made it into something of a game.
The Algorithm
The spread of Amazing Polly’s conspiracy theory about filing cabinets was aided and abetted by more than her folksy charisma and an impressionable audience. It was made possible by “the algorithm”: the kingmaker of trends and influencers, the shaper of crowds, and an invisible ruler wrought from code.
“The algorithm” is not one thing, of course, though the term has become a shorthand for the important, rather opaque processes that curate, suggest, and rank content on social media platforms. Newsfeeds, recommendation systems, search engines—algorithms like these influence whom you know, what you see, what you post, how you feel, and even how you behave online (and off). They are often talked about in neutral terms, like a series of steps undertaken to solve a particular problem, computer code that simply processes user input or data. But in reality, social media algorithms are anything but neutral. They are intensely shaped by platforms’ primary business incentive: maximizing user engagement in service to advertising revenue. They are also, in conjunction with the terms of service and other policies, a manifestation of the values of the platform: what it will privilege, present, tolerate, or throttle.
Algorithmic system results on social media are often highly personalized. Social media companies are working to keep you on site (to see ads) or attempting to deliver results useful to you. You can see this personalization reflected in the results of search engines, like Google, which try to maximize relevance: searching for “bakery” in New York should return different results than searching in Paris. Features like the little predictive-text nudges of autocomplete also look to maximize relevance by suggesting refinements: typing in “cats” might prompt the user with “cats movie” or “cats that don’t shed.”
Autocomplete suggestions offer a glimpse into the algorithm’s understanding of aggregate user behavior—an immensely important part of how key social media algorithms work. The outputs are personalized, but the data sets are massive, and the platforms are mining for similarities across vast quantities of user information. If a whole lot of users near you or similar to you complete the query that you’ve begun with specific words, the autocomplete algorithm takes that into account and may be more likely to suggest them to you. Since the results are derived from data related to millions of searches, autocomplete is, in a sense, holding a mirror up to society. Occasionally what shows up in that mirror is uncomfortable, or worse: during the 2012 presidential campaign, autocomplete results included “Is Obama a secret Muslim” or offered up his middle name—“Hussein”—at a time when his birthplace and loyalties were the subject of many conspiratorial-media smears. Autocomplete reflected the extent to which those smears had people looking for answers. More recently, internet studies scholar Dr. Safiya Noble cataloged all the offensive suggestions that follow “Why are black women so…” The autocomplete results are a nudge; the results that follow can take people in unexpected directions.
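The aggregate mining described above can be illustrated with a minimal sketch. This is a hypothetical toy, not any search engine's real code: it simply counts how often past queries in a log complete each prefix, then suggests the most frequent completions.

```python
from collections import Counter

def build_autocomplete(query_log):
    """Index a log of past searches: for every prefix of every query,
    count how often that full query was typed."""
    prefix_counts = {}
    for query in query_log:
        for i in range(1, len(query) + 1):
            prefix = query[:i]
            prefix_counts.setdefault(prefix, Counter())[query] += 1
    return prefix_counts

def suggest(prefix_counts, prefix, k=3):
    """Return up to k completions for a prefix, most frequent first."""
    counts = prefix_counts.get(prefix)
    return [q for q, _ in counts.most_common(k)] if counts else []

# Toy query log standing in for millions of real searches.
log = ["cats movie", "cats that don't shed", "cats movie", "car insurance"]
index = build_autocomplete(log)
print(suggest(index, "cats"))  # "cats movie" ranks first: it was typed most often
```

Because the suggestions are ranked purely by how often other people typed a query, the sketch shows in miniature why autocomplete acts as a mirror of aggregate behavior, including its ugliest corners.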
All media environments have had a mechanism for shaping what is influential and determining, to some extent, what people see. In print, radio, and television, the curators were other people, the editorial gatekeepers, who decided what to cover on the news. With the advent of the internet, vast quantities of content on every conceivable topic could be found online, produced by whoever had a mind to post. Search engines became a critically important form of curation in a world with a growing glut of content. But autocomplete algorithms went one small step further by adding a suggestion layer—the digital equivalent of a bookstore or library laying out a table of enticing content that a user hadn’t gone looking for but couldn’t resist.
Nudging people with things they hadn’t gone looking for quickly became a foundational tool for Big Tech to engage its users.
Some of the first nudges social media algorithms presented to users were connections with other users. They began to shape whom we know, fundamentally reorienting human social networks by connecting people who’d never met in real life and moving us from “friend graphs” to “interest graphs.” This is, in fact, the phenomenon that birthed influencers: influencers become influencers in large part because algorithms recommend them. Charli D’Amelio didn’t have a sleek PR firm or lavish marketing budget at her disposal when she started posting dance videos in May 2019. But she did have engaging content and an algorithm incentivized to spread it—and her fifteenth post, on July 17, 2019, went viral as TikTok’s curation algorithm promoted it. She gained thousands of followers. And then, as she continued to post more dance videos, the TikTok algorithm, along with word of mouth, reliably grew her audience.
Consider algorithms like Facebook’s People You May Know (PYMK) feature: a recommender system that proactively suggests potential friends, displaying the recommended profile’s picture and name in a prominent spot on the website. Its goal is not only to link users who might know each other in real life but to connect friends of friends as well—because Facebook’s growth team observed that knowing more people increased a user’s time on site. Facebook’s engineers estimated that typical users might have forty thousand friends of friends, while “power users” might have as many as eight hundred thousand. Recommender algorithms work—on Facebook and elsewhere—by creating a sense of serendipity or curiosity and inspiring us to click. These algorithms succeed in connecting people at a massive scale and in a way that transcends geography—creating new social networks for content and opinions to traverse. They turned friend into a verb.
PYMK and similar connection-recommendation algorithms leverage any and all data at their disposal. Social media apps ask for access to your phone contacts so they can check the numbers against their registered users, and then suggest that you friend or follow the matches. Other ways of connecting people include physical proximity—like if your phone’s location capabilities show you and another individual are working at the same location or going to the same school at the same time. This works decently well but at times reveals that the recommenders do not actually understand the social norms they simulate. Tech journalist Kashmir Hill conducted an investigation into one troubling incident in which PYMK algorithms recommended a therapist’s patients to each other, presumably based on their mutual links to her (for example, having her practice’s phone number in their phones). Algorithms are optimized to achieve a particular objective; this means that if they aren’t carefully thought through, they may have unintended consequences.
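The friend-of-friend logic at the heart of features like PYMK can be sketched in a few lines. This is an illustrative toy under assumed data (a small friendship graph with made-up names), not Facebook's actual implementation: it ranks strangers by how many mutual friends they share with the target user.

```python
from collections import Counter

def people_you_may_know(graph, user, k=3):
    """Suggest friends-of-friends for `user`, ranked by mutual-friend
    count. `graph` maps each person to the set of their friends."""
    friends = graph.get(user, set())
    mutuals = Counter()
    for friend in friends:
        for fof in graph.get(friend, set()):
            # Skip the user themselves and people they already know.
            if fof != user and fof not in friends:
                mutuals[fof] += 1
    return [person for person, _ in mutuals.most_common(k)]

# Hypothetical friendship graph (bidirectional, as on early Facebook).
graph = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice", "dave", "erin"},
    "dave": {"bob", "carol"},
    "erin": {"carol"},
}
print(people_you_may_know(graph, "alice"))  # dave leads: two mutual friends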
Facebook was originally built on the idea of “friends” and bidirectional relationships: you were friends with someone, and they were friends with you. It was, to a large extent, replicating real-world social networks in virtual space. But other platforms, such as Twitter, built their businesses and user experience around a “follower” model: you could follow a person on the platform and see their public posts even if the person had no idea you existed and didn’t follow you back. You could follow people based on interests.
Algorithms that suggested people to follow based on their prominence or topical relevance helped unintentionally birth influencers simply by giving their accounts pride of place on a list. In January 2010, tech entrepreneur Anil Dash wrote a blog post describing his experience of being placed on Twitter’s Suggested Users list, a feature that debuted in 2009. “If I’d have continued my normal rate of growth, I’d have about 25,000 followers today,” he wrote, “but thanks to being on the list, I’ve got close to 300,000 followers.” Getting more followers begets yet more followers, as popular accounts are suggested more often. The Suggested Users recommender system was somewhat controversial even in 2009, as people recognized its power to consolidate influence online in ways that felt random, like the algorithm or platform owner putting a thumb on the scale; it felt decidedly different from how fame was earned offline, despite there also being a fair bit of randomness in how media anoints celebrities.
Elon Musk, who bought Twitter in 2022, offers a more recent example. As of early 2023, following Elon Musk surfaces a cluster of similar accounts that Twitter’s algorithm thinks will interest the user: Marjorie Taylor Greene, Tesla, Jim Jordan, Leo Terrell, SpaceX, and Donald Trump Jr. Being put on the suggested followers list of the site’s most famous user delivers benefits to the others pulled along for the ride; Marjorie Taylor Greene’s and Jim Jordan’s follower counts jumped by hundreds of thousands within a month. It was an interesting glimpse into where Twitter’s algorithms thought Musk was situated politically. And if you were to follow one of the suggested accounts, you would, in turn, see another wave of suggestions similar to them—perhaps more right-wing politicians or right-wing entertainers. The recommendation engine shapes networks of influence and amplification with potentially significant impacts.
Facebook, too, gradually began to connect people around interests. It did this in large part by beginning to recommend not only friends but groups, persistent communities set up around a topic—Backyard Chickens, Melanoma Warriors, My Baby Won’t Sleep—in which people came together not around mutual connections but around interests. People formed deep friendships in these groups—which tended to increase time on site. This was good for the company.
And so, it set about reorienting users’ social networks through recommender systems that promoted groups. In their simplest form, recommender systems suggest content based on expressed interests. Known as content-based filtering, that type of algorithm takes signals from what the user herself has done, such as watch a video about gardening, and then suggests more gardening content or communities. But as platforms amassed data from millions of users, recommender systems developed the capacity to infer similarities between users, offering up suggestions derived not from what the target herself had done but from what people like her had done—a process called collaborative filtering. Watching a video about gardening might lead to suggestions for vegetarian cooking groups, perhaps, or jogging.
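The collaborative-filtering idea can be sketched in a few lines of code. This is a minimal, illustrative version only: the users, topics, and engagement data are invented, and production systems use far richer signals and models. The key move is the same, though: score topics the target user hasn’t touched by how similar the people who did touch them are to the target.

```python
from math import sqrt

# Hypothetical engagement data: 1 = user engaged with the topic.
ratings = {
    "alice": {"gardening": 1, "cooking": 1, "jogging": 1},
    "bob":   {"gardening": 1, "cooking": 1},
    "carol": {"politics": 1},
}

def cosine(u, v):
    """Cosine similarity between two sparse interest vectors."""
    shared = set(u) & set(v)
    dot = sum(u[k] * v[k] for k in shared)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def collaborative_recs(target):
    """Suggest topics the target hasn't engaged with, weighted by how
    similar the other users who did engage with them are to the target."""
    scores = {}
    for other, prefs in ratings.items():
        if other == target:
            continue
        sim = cosine(ratings[target], prefs)
        if sim <= 0:  # ignore users with nothing in common
            continue
        for topic in prefs:
            if topic not in ratings[target]:
                scores[topic] = scores.get(topic, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

# Bob never watched a jogging video, but Alice, who overlaps heavily with
# Bob, did -- so jogging is suggested. Pure content-based filtering, which
# looks only at Bob's own history, would never make that leap.
print(collaborative_recs("bob"))  # -> ['jogging']
```

Note that nothing in the algorithm knows what “jogging” or “gardening” means; it operates purely on overlap between users, which is exactly why it can just as easily route people toward conspiracy communities as toward cooking groups.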
As social media algorithms sucked up and processed more and more data from user behavior—including around politics—and refashioned it into nudges, they proved to be extraordinarily accurate at suggesting things that people were interested in but also troublingly amoral. Between 2013 and 2015, signs began to emerge of an unintended consequence: algorithmic recommendation of deeply toxic communities. In 2014, for example, Twitter began to have something of a problem with the Syria- and Iraq-based terrorist group Islamic State (ISIS, or Daesh). ISIS recruiters and fanboys were present on the platform, using it as a tool of propaganda and influence, spreading their messages and growing their followings. They were there to cheerlead for the ideology of their so-called virtual caliphate, and if you followed one account, the algorithm suggested more.
In 2016, Facebook’s group recommendation engine had begun to suggest Pizzagate groups to users who were in other conspiracy-theory groups, such as pseudoscience communities focused on vaccines and chemtrails. By early 2018, it had begun to suggest QAnon groups, long before QAnon (or Amazing Polly) became the subject of extensive media coverage, helping the nascent fringe theory to grow into an omniconspiracy, a singularity in which all manner of conspiracy theories melted together and appealed to far more adherents than any component part. The recommendation engine was functioning as something of a conspiracy theory correlation matrix: You appreciate flat Earth content and believe NASA is lying to you, so you should definitely check out these people over here who believe pedophiles are drinking the blood of children in a pizza place and John F. Kennedy Jr. has returned. This unintended consequence has persisted for years and appears across many topics; in May 2023, my team at the Stanford Internet Observatory discovered that following one account in a child-exploitation network on Instagram resulted in suggestions for more. People looking for this kind of extremely illegal and very harmful content were being connected to each other not only by keywords or real-world relationships but by a recommender system that intuited they had mutual interests.
The collaborative filtering algorithm was doing precisely what it was designed to do: suggesting content or communities to people likely to be interested because of some underlying similarity to other people (in the case of the conspiracy correlation matrix, perhaps a shared distrust of the government). To a large extent, these algorithms reflect existing human preferences: birds of a feather flock together. And, indeed, PYMK and other recommender algorithms that arrange people into networks often generate communities of highly similar users (high-homophily networks), a fact that has led to concern about echo chambers.
Some of these edge cases, however, went beyond echo chambers; the recommendation engines were pushing people into communities that were borderline, if not directly, harmful. By late 2018, QAnon was already, in essence, a decentralized cult, connected to several instances of violence as well as enough stories of terrible family impact to warrant a dedicated support group (called “QAnonCasualties”). On multiple occasions, anti-vaccine groups promoted by the algorithm saw members suffer child deaths after a misguided parent ignored sound medical advice in favor of suggestions from fellow members. One mother in the highly algorithmically recommended group Stop Mandatory Vaccination chose to skip Tamiflu after members warned her against Big Pharma’s treatments and suggested she try elderberry or breastmilk, or put onions in her son’s socks instead. He later died from the flu. Another skipped the vitamin K shot that prevents newborn brain bleeds out of fear that it was a vaccine with toxic ingredients—a misconception widely spread within the group. At the time, search engines were also surfacing anti-vaccine blogs in response to searches for “vitamin K shot.” The algorithms work with what they have.
Although some academics, journalists, and activists (like me, at the time) wrote about these situations and urged a rethinking of recommender system ethics, platforms largely avoided action, though internally they were secretly beginning to have some concerns. In 2016, Facebook’s internal research found that 64 percent of people who joined an extreme group did so by way of a recommendation and that the groups were, as Jeff Horwitz of the Wall Street Journal put it, “disproportionately influenced by a subset of hyperactive users.” In summer 2019, Facebook’s internal research teams set up a persona account—a politically conservative mother named Carol—and found that within two days, she too was being pushed into groups dedicated to QAnon. Broadly speaking, this is how people like Polly and her audience connected with each other, how the community grew, and how its influencers came to serve as centers of gravity for large numbers of people. And yet Facebook’s leadership team largely ignored its own findings.
By 2020, concern had grown too large to ignore. In October of that year, ahead of the US presidential election, Facebook disabled group recommendations for all political and social issues and also banned QAnon pages and groups. However, although the platform eventually moderated, the groups simply reassembled elsewhere; by this point the social networks and connections were already made, the worldviews shaped, the friendships and fandoms solidified. People had been nudged into conspiracy theorist communities, sending them down rabbit holes into full-blown bespoke realities, for years. And as technology ethicist L. M. Sacasas puts it, “The worlds we now inhabit are digitized realms incapable by their nature and design of generating a broadly shared experience of reality.” This can be lamented, if one is so inclined, but it cannot be undone.
Algorithms shape not only whom we know but also what we see. After encouraging us to join groups and follow people, curation and feed-ranking algorithms select from among these (and similar) accounts to push content into our field of view.
As social media turned everyone into content creators, platforms found themselves wrangling with the problem of a content glut. Early social media had largely relied on a reverse-chronological feed, with the most recent post at the top. Feed-ranking and curation algorithms instead filtered and sorted to try to determine what would be the most engaging, surfacing personalized content for any given user. These curation systems start with an inventory of all possible content on the platform available for the user to see—often the posts from influencers they follow, groups they’re part of, or content similar to either. Then they remove ‘borderline’ (problematic) content—things that bump up against the platform’s moderation policies and are ineligible for curation (and, at times, for monetization); this is called being deboosted, downranked, or demonetized. From the posts that remain, the curation algorithm selects a subset of ‘candidates,’ posts with a high likelihood of being relevant to the target user, and then ranks them. That ranking process is where the platform guesses which of the relevant posts you’re most likely to engage with, keeping you interested, on site, and able to see ads.
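The pipeline described above—inventory, integrity filtering, candidate selection, ranking—can be summarized in a toy sketch. Everything here is invented for illustration: the post data, the `borderline` flags (a stand-in for a platform’s integrity classifiers), and the `p_engage` scores (a stand-in for a learned engagement-prediction model).

```python
# Hypothetical inventory of posts eligible to appear in one user's feed.
posts = [
    {"id": 1, "source": "followed_influencer", "borderline": False, "p_engage": 0.62},
    {"id": 2, "source": "joined_group",        "borderline": True,  "p_engage": 0.91},
    {"id": 3, "source": "similar_interest",    "borderline": False, "p_engage": 0.35},
    {"id": 4, "source": "followed_influencer", "borderline": False, "p_engage": 0.78},
]

def build_feed(inventory, k=3):
    # 1. Integrity pass: borderline posts are deboosted out of curation.
    eligible = [p for p in inventory if not p["borderline"]]
    # 2. Candidate selection: keep posts above a minimum relevance threshold.
    candidates = [p for p in eligible if p["p_engage"] > 0.3]
    # 3. Ranking: order by predicted engagement, highest first, and truncate.
    return sorted(candidates, key=lambda p: p["p_engage"], reverse=True)[:k]

feed = build_feed(posts)
# Post 2 had the highest predicted engagement but was deboosted as
# borderline, so the ranked feed is posts 4, 1, 3.
print([p["id"] for p in feed])  # -> [4, 1, 3]
```

The point of the sketch is the ordering of the stages: moderation decisions happen before ranking, which is why ‘deboosted’ content can quietly vanish from feeds even though it still exists on the platform.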
Curation algorithms are critical to a platform’s business—and an influencer’s livelihood. Facebook’s Feed, Twitter’s For You timeline, and YouTube’s right-side column of video thumbnails all strive to pick precisely the right posts to keep a specific user on site. Indeed, on one of the fastest-growing social apps of the 2020s—TikTok—the For You Page (FYP) moved away from curation based on users’ social graphs and toward curation primarily based on users’ intuited interests. Seemingly arbitrary videos often become very popular. In a study of popular election-related content in 2022, my team noticed that many of the TikTok videos with the most views were not those with obvious political hashtags (which might indicate people found them through search) or from accounts with audiences (which might indicate organic reach). Rather, they appeared to be videos that the FYP algorithm had decided should be amplified. It was, in fact, nearly impossible to find the most popular election videos through search because keywords weren’t what mattered—trying to figure out what election-related content was most popular for TikTok in 2022 was like searching a dictionary for a word you don’t know how to spell. Compounding the opaque nature of things, in January 2023 it emerged that some TikTok employees had access to a tool called the “heating” button, which enabled them to boost specific videos (across any content category). The button was purportedly used to entice influencers and brands into partnerships, and “heated” video views were sometimes 1 to 2 percent of the daily total video views on the platform—an extraordinary number.
Competing for attention in the ruthless online arena is incredibly challenging. For savvy or lucky influencers, the curation algorithm can be a kingmaker: content that is widely boosted to For You—type features on TikTok, Instagram, or Twitter or gets pushed by Facebook Watch or YouTube’s autoplay can result in tens of thousands of views, tens of thousands of new followers, and potentially quite a lot of money. Influencers may prize authenticity, but those who turn social media into a career—or who use it to pursue clout and power—are also acutely aware of what the algorithms will reward. They constantly play what social media scholar Kelley Cotter called “the visibility game,” testing creation strategies to see which consistently yield follower growth or higher engagement. A 2023 Wired profile of one prominent talent manager for influencers, Ursus Magana, highlighted his personal slogan: “Influence the algorithm, not the audience.”
Influencers spend inordinate amounts of time chasing the whims of algorithms because their boost can be life changing. Consider the former pizza shop employee who posted a sixteen-second TikTok clip of herself playing a video game, received three million views, and now spends her time trying to make it as an influencer. But an occasional algorithmic boost isn’t enough—after all, influencers are in constant competition with others to capture attention. Their followers are following other influencers as well. The influencer’s primary job is to grow an audience, a crowd of followers who will serve as eyeballs on the content as well as a secondary distribution machine. Reach significantly determines to what extent the influencer makes money or accrues power, so the temptation to pepper content with the kind of bait that the algorithms will boost to a niche is ever present.
Recommendation and curation algorithms, in other words, shape the very nature of content itself.
In 2020, during the COVID pandemic, Facebook’s Watch tab began to push a curated feed of videos of attractive young women doing bizarre and often gross things with food: mixing punch in a toilet bowl or putting SpaghettiOs into pie crust. The content was the work of a group of creators tied to magician Rick Lax. No matter what types of accounts you followed, it seemed, the Watch feature really wanted you to see videos from Rick Lax’s network of creators; their aggregate view counts were in the hundreds of millions.
A short while later, the gross food videos seemed to disappear, and a new style of massively popular videos featuring the same women began to dominate Watch, but this time they were acting, pretending to be discovered having affairs. The spouse would come home, and for a full twelve minutes the video would show the clueless spouse getting so close to discovering the paramour as they ran around trying to hide in increasingly ridiculous ways. There were dozens of variants; sometimes the spouse coming home was a returning veteran! Judging from the comments, many Facebook users who had been pushed the content via the Watch recommender algorithm found it obnoxious. But they also often clearly watched to the end— waiting for a payoff that never came— and then left a peeved remark. The recommender system was doing what it was supposed to do: serving content that kept users watching. But it also felt a bit like things had gone off the rails; these videos so frequently dominated the Watch page that a colleague of mine, Ryan Butner, compared the situation to an invasive species proliferating and going haywire within an ecosystem, crowding out all other types of content. Cane toads taking over the Watch tab.
Intrigued, I took a closer look at the network behind these videos and found a cadre of creators and influencers all cross-promoting and linking to each other. The actors had their own Facebook pages, where they were posting the videos; some had a lot of followers, but many did not. Yet these videos had millions of views. The algorithmic Watch feed was clearly driving engagement.
As I looked at the data behind the videos, going back months, I noticed that Lax had progressively discovered a formula for content titles that seemed to have helped things along. The whole network of creators gradually began to use simple, shocking, curiosity-inspiring phrases as titles for their mini soap operas: ‘What he saw when he opened the door?’ or ‘When she looked under the couch!’ The engagement on titles in that format was through the roof. Lax had worked out the magic formula, creating content specifically designed to feed the algorithm; he and his friends were getting rich as a result.
There’s a famous saying from media theorist Marshall McLuhan: ‘The medium is the message.’ Different types of media—print, radio, television—created different types of experiences for audiences. The structure of the communication channels incentivized what substance moved across it. Media theorist Neil Postman put it another way in his book Amusing Ourselves to Death: ‘Form will determine the nature of content.’ Social media affordances shape what we all create and share. Influencers make content not for humans but for the intersection of humans and algorithms. We all do this, to some extent—even if we are primarily posting for a small group of friends, it has become second nature to think about the composition of a photo or the moment it captures as something to optimize for Instagram. Influencers define an “Instagram-worthy” aesthetic for engagement optimization, and then the rest of us unwittingly replicate it.
Top influencers understand how structure and substance are connected. They are experts at blending resonant rhetoric and content that also check the boxes on what the algorithm is inclined to boost. But there’s a cat-and-mouse game in play: platforms conceal the details of what their algorithms are likely to reward because they know people will go to great lengths to engineer that algorithmic boost. And platforms notice when users start to not like, or get tired of, a particular thing and adjust their algorithms accordingly. So it was with Rick Lax: after a few months of extraordinary popularity, the Facebook Watch algorithm changed what it rewarded, and his meticulously constructed clickbait titles were rendered ineffective. Engagement plummeted. The algorithm giveth, and the algorithm taketh away.
Social media companies aren’t oblivious to the fact that their business models intersect with influencers’ business models in perverse ways. While some influencers, particularly the content creators who are primarily focused on broadcasting their lives, strive for broad appeal, the fastest path for many is to target a niche. For political influencers, as with niche media outlets, there is little incentive to be broadly appealing; generality is too vague to meaningfully target. The ever-present need to capture attention from a specific niche encourages the production of sensational, and at times even toxic, content. One internal report from Facebook was particularly revealing: ‘Our algorithms exploit the human brain’s attraction to divisiveness,’ read a slide from a 2018 presentation. ‘If left unchecked,’ it warned, Facebook would feed users ‘more and more divisive content in an effort to gain user attention and increase time on the platform.’
There is no such thing as a ‘neutral’ ranking or recommendation system. Every algorithmic curation system—even the reverse-chronological feed—is encoded with value judgments about what criteria are most important for determining what to place on users’ screens. These value judgments create incentives for those who want to capture attention in pursuit of power, profit, or clout. Ranked feeds might promote sensationalism, but reverse-chronological feeds incentivize frequent posting.
Invisible, opaque algorithms play a significant role in deciding what gets attention and when. In tandem with influencers, they determine what people see, what they talk about, and what they create on any given day. They decide what will dominate millions of people’s feeds on a Tuesday afternoon. They are the purveyors of community connection. They are also purveyors of overwrought drama and dubious stories with little substance, yet big potential to capture attention.
But there’s a third part of the equation—of the new trinity of influence—that makes this system so powerful: online crowds.
The Crowd
Amazing Polly’s saga wasn’t limited to paranoid Twitter threads— #SaveTheChildren leapt offline, impacting people in very real, very disruptive ways. As the trend progressed, with more and more participants chiming in, law enforcement and child safety organizations were overrun with calls about the Wayfair conspiracy. Professionals had to plead with people to stop calling about the situation, because it distracted them from saving actual children.
Together, Polly the influencer and the platform algorithms convoked a frenzied crowd. But that crowd was no passive entity: it completed a feedback loop, reinforcing Polly’s incentives and the algorithms’ behavior. Working in tandem, the trinity had made a rumor go wildly viral that day, and concern for non-existent trafficked children eclipsed the needs of real ones.
Groups of likeminded people have gathered and organized for centuries. Activism and grassroots organizing call attention to worthy causes and bring people together with a shared sense of mission. People join religious groups or organize into unions or political parties. Crowds come together spontaneously in protest against injustice, both online and off. But crowd activity can also go bad, souring into moral panics, mass delusions, and hysteria. Mob violence can have terrible consequences. From tulip bubbles to witch hunts, history is replete with examples of extraordinary popular delusions and the madness of crowds.
Platforms have imbued crowds with new qualities. They are no longer fleeting and local but persistent and global. They engage symbiotically with influencers but don’t require a leader or physical space to assemble. Platform affordances empowered crowds; they are now active participants in chronicling history and shaping narratives.
Influencers are acutely aware of the power of the online crowds. The influencer can use targeted ads and paid boosts to help her work get seen by the right audience. But achieving mass viral distribution—getting something to trend—requires appealing to either the algorithm or the crowd, or both. Algorithms, as we saw with Rick Lax, can be fickle; a platform can change its feed-ranking weightings or content policies overnight and tank a creator’s reach. Consistently resonating with a passionate, activist niche crowd, getting its members liking, sharing, subscribing, and reposting, is a lower-volatility approach to making money and achieving reach. Fans who feel passionately about a message may propagate an influencer’s posts from YouTube to Facebook to Reddit to Mastodon, ensuring that the influencer’s work is seen by audiences everywhere, even if they themselves aren’t particularly active on a given platform. This approach, however, does still put some pressure on the influencer: they are inclined to continue to make content that the audience wants to see—or won’t disagree with.
Large online crowds have the ability to drive mass awareness and trends at a global level. As they decide among themselves what to amplify, from among the various posts and aligned influencers they follow, they help influencers (and brands and political candidates) rapidly identify the memes and messages that truly resonate and capture public attention.
Individual platforms created distinctive features to encourage users to connect into groups, reordering social networks. Platform design and affordances then influenced how communities behaved. Facebook, as noted, recommended groups and nudged users to join them. On Twitter, groups were private, created by an owner who then invited members. They were very effective spaces for calling attention to whatever was happening on the platform: any given member could share a tweet into a group, and the rest of the group could go interact with it. Sometimes this involved liking or retweeting something posted by a group member to boost engagement and help attract attention. However, Twitter groups were also useful for coordinating a brigade against an unfortunate tweeter or initiating a campaign to mass-report a particular post. Everyone has access to the same affordances; the difference in how they’re used comes from community norms.
Twitter’s design really offered the opportunity for emergent collective behavior, for spontaneous crowd formation, which carried with it the potential to suddenly devolve into a mob. If enough people saw even a completely earnest post—“I enjoy having coffee with my husband in our garden in the morning”—it was guaranteed to be hideously offensive to some malcontent, who would feel absolutely compelled to let the original poster know (Have you thought about how this sounds to people who don’t have husbands? gardens? coffee?). The quote-tweet feature let one user boost another’s tweet while adding some commentary to it—while it was often used to support someone’s post with just a bit of added opinion, it was equally effective as a tool for “dunking on,” or mocking, the underlying post or user. Adding a hashtag, especially one that tied the post into a broader grievance narrative or gave a distinctive brand to the controversy (#CoffeeGate), made it discoverable to both algorithms and other people. Twitter’s curation algorithms might process the proliferating number of posts as a sign that a lot of people were interested in a topic and push it out to more audiences.
Hashtags were an extraordinarily useful tool for making communities and topics discoverable: #BlackTwitter, #ScienceTwitter, #Caturday. But like any tool, they could be used as a weapon, and the way that they intersected with Trending Topics (a feature in which Twitter curates a semi-personalized list of hashtags popular in the moment) created incentives for conspiracy theorists, hyperpartisans, and brawlers to try to get rumors, propaganda, and harassment hashtags trending. Trending hashtags were prominently featured on a list, and the intrigue could pull in bystanders, who in turn might create their own posts or further amplify the ones in the hashtag. So it went with Amazing Polly, her crowd of supporters, and #SaveTheChildren.
In internet parlance, this process of many users sharing related content nearly simultaneously—which often leads to a broader algorithmic boost or featured promotion—is how something goes viral. The phrase, lifted from epidemiology, refers to how pathogens spread from person to person. But 'it went viral' is a curiously passive phrase for a very active and participatory process. Things don’t just 'go viral'; we deliberately spread them.
The media often covers viral content as if it were caused by algorithms—particularly if it’s a 'borderline content' trend that a large group of people feel should have been moderated and downranked. But that framing strips the users of agency, and online crowds have more agency today than ever before. The algorithms provide lift and raise broader awareness, but virality is a collective behavior: each user makes a deliberate choice to post or retweet content because they find the post or messaging appealing, they believe in it, or they’re outraged by it.
Facebook’s design, unlike Twitter’s, didn’t lend itself to spontaneous crowds in active pursuit of heated frenzies of rage. The closed nature of Facebook groups and the privacy controls on the platform broadly, which allow people’s posts to be visible only to friends or fellow group members, were useful for keeping people on site but not particularly effective for the roving bands of the outraged. Groups could certainly coordinate to target their enemies—indeed, the anti-vaccine groups did so liberally—but users had to hop off platform to carry out the mobbing in more conducive spaces, and that friction seemed to reduce the appeal. As a result of their closed nature, persistent communities on Facebook talked and shared opinions, coming to consensus in a more traditional, deliberative way. This produced a sense of permanency and long-term goals. Backyard Chickens and Melanoma Warriors were not populated by people trying to capture attention in the arena.
Facebook’s Trending Topics feature, too, was focused on surfacing viral articles that many users were sharing and discussing—not wayward comments by hapless people. However, it nonetheless became a source of controversy, because committed financially and ideologically motivated groups began trying to game it. These manipulators encouraged members (or used fake 'sockpuppet' accounts) to mass-share articles at the same time. In an effort to minimize the impact of gaming and reduce the reach of content mills monetizing clickbait headlines, Facebook added a layer of human editorial curation to the trends it displayed. However, a small but vocal cluster of right-wing influencers and their fans did not like this at all. Many of the clickbait sites produced right-wing hyperpartisan content, and, when these were downranked and removed from trends, the company faced a litany of accusations of anticonservative bias from a faction of ideologically aligned users, right-wing influencers, and hyperpartisan media, who spun the situation into a tale of Facebook being biased against conservatives. (A faction is a combative and ideologically homogeneous subset of the larger crowd.)
The manufactured Trending Topics controversy proved to be a watershed moment: Facebook responded to the complaints from prominent conservatives by firing its human curators, and the feature became an algorithmically mediated free-for-all, which primarily communicated that working the referees ('ref-working') was an effective strategy for achieving a policy outcome favorable to one’s interests. It also demonstrated conclusively that a faction could be riled up by the belief that their viewpoint was being unfairly suppressed, even if that was not actually the case.
After the human curators were fired, wild, fake viral news stories proliferated— demonstrably untrue stories such as 'Pope Francis shocks world, endorses Donald Trump for president' or 'Breaking: Fox News Exposes Traitor Megyn Kelly, Kicks Her Out for Backing Hillary.' Facebook would eventually kill the feature entirely; it is now, in 2024, trying to reintroduce it on its Twitter-competitor, Threads. But at that point, both the formula for capturing mass attention via platform affordances and the value of loudly complaining had been uncovered; Pandora's box was open.
At their best, groups on Facebook help people from all over the world find community; there are support groups for rare diseases, fan groups for specific artists, and spaces for activists to connect as they solve humanitarian challenges. However, the most extreme groups on Facebook became echo chambers that actively fostered distrust of any expert, influencer, or media outside the tribe. QAnon would not have existed as it does today without Facebook’s inadvertent algorithmic recruitment, plus its tools for sustained connection on a massive platform where users already
spent a lot of time. At its worst, Twitter made mobs—and Facebook grew cults.
Mobs and cults, each subsets of crowds, require fuel to sustain their outrage and paranoia or mythology—to sustain their bespoke reality. Without new kindling, they risk burning out. And on today’s internet, there’s no shortage of fuel, due to a media practice with origins forty years before Twitter or Facebook even existed.
In the previous century, TV broadcasters had to confront something of a novel problem: technology had transformed the length of the news cycle. No longer were news organizations limited by just one publication per day or per week—they could broadcast all day, every day. And as the number of stations and channels increased, they competed for stories that would attract listeners and viewers. This incentivized somewhat sensational coverage: stories of dubious newsworthiness might be published solely because of their potential to capture and hold attention. “Electronic media cannot bear to suffer a pause of more than five seconds; a pause of thirty seconds of dead airtime seems interminable,” mused Daniel Boorstin.
Media’s 'solution' to this challenge extended beyond merely covering the sensational—it led to the creation of content for content’s sake and the elevation of people who were famous for being famous. Media manufactured 'important' moments that captured public attention but were in fact meaningless. These supposedly newsworthy moments were only so because they were covered on the news. Boorstin dubbed them pseudo-events in one of his most seminal books, 1962’s The Image: A Guide to Pseudo-events in America. Pseudo-events are 'synthetic' media moments, events that exist solely to be reported on: a senator’s anodyne press conference on the steps of Capitol Hill or a ribbon cutting ceremony for a business under new ownership.
Bernays, writing long before television, had in fact instructed invisible rulers to manufacture news from nothing in service of public relations (propaganda). Facts that the invisible ruler wished to call attention to could be presented as news; ideas could be developed into events so that they too could claim attention as news. “The public relations counsel, therefore, is a creator of news for whatever medium he chooses to transmit his ideas. It is his duty to create news no matter what the medium which broadcasts this ‘news,’” he argued. Once created, interesting news could theoretically spread across whatever channels already held the public’s attention.
Boorstin, forty years later, recognized the effect of the strategy: “The power to make a reportable event is thus the power to make experience.” Pseudo-events were produced in response to media incentives, but they had an impact on the audience. The public was, in good faith, consuming what had been sold to them as “news.” They watched with the intent of becoming informed, and yet their attention was being pointed at nonsense. They were not entirely innocent—Boorstin believed that many people wanted to be fooled—but ultimately he directed the majority of his ire at the media outlets selling a false vision of reality to the public.
Sixty years later, the power to create pseudo-events has been democratized along with every other aspect of media. Influencers are masters of attention capture; it is their business. And today they can create pseudo-events of a caliber beyond Boorstin’s wildest dreams, enticing a crowd to spend time watching them unbox a lipstick or recount on TikTok the latest manufactured controversy on Twitter about, for example, whether a dad who made his child figure out how to open a can of beans with a manual can opener was abusive—a real battle that trended on Twitter for nearly half a day.
The things on social media that pass as significant moments of mass attention capture—two celebrities angrily @ing each other; a corporation tweeting a banal apology for a previous poorly worded tweet; “The Jews,” which trended on Twitter in May 2023—would, in a media era past, have been nonstories or relegated to explicitly sensational outlets like tabloids. Today they reach an audience, draw them in as participants, and dominate the daily conversation—but they remain ersatz at their core. They have lost even the pretense of being “news” and more often serve as a socially acceptable vehicle for outrage. And today the audience is no longer a passive consumer but an active participant.
Pseudo-events may differ depending on which faction a user belongs to—anti-vax, pro-choice, transit activist, QAnon. Within their online community, your neighbor or your sister may well be blind with rage about an obscure topic mentioned in an ephemeral tweet that you’ll never see.
And yet, it is completely real to them: if you make it trend, you make it true.
Social media’s pseudo-events are especially potent because they’re not confined to that realm—they can and do jump to mass media too.
I remember waking up one morning in January 2019, opening Twitter, and seeing an extraordinary amount of outrage in my feed about a situation that seemed to have transpired in Washington, DC, between a teenage boy in a MAGA hat and a Native American Omaha Nation elder with a drum. The moment, captured on video, seemed to show a smirking, disrespectful boy getting in the face of the elder, creating a disruption at a protest at the Lincoln Memorial. The video was pretty clearly on track to become a big deal; a lot of very prominent media people, activists, and left-leaning verified influencers with large followings were tweeting about it. Some were pretty nasty: the kid has a “punchable face,” one reporter commented. As I watched the viral video clip, I did feel inclined to believe that the kid was a jerk; I didn’t feel inclined to tweet about it, however, because after years of watching trending outrage content, I’ve also come to believe that there’s little upside to offering my personal take. Selective edits and crops of videos can reframe a moment, and what you see isn’t always what it seems.
Indeed, as the day went on, longer video clips of the situation emerged—including one showing the Native elder approaching the boy, not the other way around. Right-wing media accounts and large-audience right-wing influencers went nuts—here again, they said, was mainstream media getting it wrong, yet more evidence of anticonservative bias against a kid in a MAGA hat. Antimedia commentary trended on Twitter.
The next day, still more video emerged—uncut, nearly two hours long—in which it became clear that a third party that hadn’t appeared in either of the earlier two outrage cycles had also played a major role in events. A small group of Black Hebrew Israelites had been off to the side, yelling at the MAGA boy and his friends, harassing them as well as other passersby. The Native American elder had stepped in between the two groups, but he gave conflicting accounts as to what his motivations had been, saying alternately that he was trying to defuse the tensions and yet simultaneously implying that the boys had been the instigators (a claim not borne out by the long video). As I watched the longer video, it seemed clear to me that the boys had not provoked the encounter at all... but the influencers and media I’d seen tweeting about the incident in the early hours of the viral video were not talking about it anymore.
As things had unfolded, for me and probably many others, it felt nearly impossible to figure out what sources were reliable and what the most accurate summary of events actually was. Points of view seemed split along predictable factional-partisan lines. The boy in the hat eventually sued several mainstream media outlets for defamation; several suits were dismissed, but some outlets chose to settle.
The point of this example is not to litigate the specifics of this one event or even to describe the challenge of figuring out what really happened. Rather, it’s to highlight that for multiple days, many thousands of people on the internet, hundreds of influencers, and then numerous media articles disagreed and argued about what happened, when it never needed to be an online moment at all. Nothing had really happened. No one was injured. No one had to fixate on these people or go dig through their lives to find out who they really were. Three groups of people had a short disagreement, a few moments of real-life tension. What resulted was a pseudo-event, a spectacle, something that influencers called attention to and media covered in ways framed to appeal to their particular audiences. Depending on whom you followed, you heard about it at a different time, saw it described in very different ways, and heard that the “other side” was a bunch of liars, manipulators, and villains. Depending on which influencers and outlets you trusted, you formed a particular view of events, who was right or wrong in the situation, and what their online punishment should be.
Social media’s curation algorithms, particularly trending topics, push these pseudo-events and manufactured responses into the feeds of those who have previously engaged with similar types of content—bait dangled at those most likely to take it. While I would argue that this mess was bad for everyone involved, it was great for platform engagement and influencer engagement, and the online crowds got some excitement out of a morning of righteous indignation.
The pseudo-events, now highly participatory, are not always so fraught. The mass media are not always quite so directly involved. Indeed, cultural divides between the two ecosystems are often visible: mainstream media’s coverage may gawk and marvel at the absurdity of an online moment (In other news, crackpots on TikTok are saying…) or write think pieces about what it all means. But that distinction is less important than the fact that, through its coverage, the bait of the day nonetheless makes it to the nightly news, reaching a new and more diverse audience of millions. Social media hands off the baton to its “predecessor,” mass media, and mass media hands it back—these two ecosystems are not separate and distinct but rather form two parts of a larger whole.
The Trinity
The influencer, algorithm, and crowd are distinct, but their incentives are inseparable. The influencer is always aware of the algorithm and the crowd because they determine her reach and revenue; the algorithms boost relevant influencers to keep the crowd on site, steering the attention of the latter and the output of the former; the crowd members volunteer to be nudged and monetized but in exchange find community, entertainment, and camaraderie—including with prominent figures who can call attention to things they care about.
Bernays’s concept of invisible rulers from the 1920s finds resonance in the realm of modern influencers. Influencers are opinion leaders with the power to influence what people believe or buy and the capacity to shape sentiment and norms within their niche. They understand how to connect with others in a digital age, developing intimate, trusted relationships with their followers, yet they are also capable of achieving reach commensurate with mass media.
The algorithms, which emerged with and shaped this media ecosystem, are also invisible rulers. Created and controlled by private companies, they determine what content is shown to potentially billions of users, effectively guiding not only their online experiences but reshaping communities and norms in ways that have significant real-world impact. Algorithms shape the flow of information and influence the narratives that users encounter. By tailoring content based on user preferences and engagement patterns, they create personalized experiences.
They not only reinforce existing beliefs but nudge users in sometimes terrifying directions. This selective exposure to information can reinforce biases and shape users’ perceptions in ways akin to the invisible rulers’ influence over public opinion.
And then there is the audience—traditionally, the target subject to the influence of invisible rulers. Indeed, the crowd’s behaviors, decisions, and preferences are increasingly shaped by the curated content its members consume and the algorithmically mediated interactions they have. And yet the sense of shared identity and the camaraderie fostered within these digital spaces lead to collective behavior and the formation of online factions in which they actively participate. The members of the crowd influence not only each other but the influencers. Social media users have become both the targets and the agents of influence, blurring the line between the invisible rulers and their subjects.
QAnon influencers and adherents like Polly are fond of a saying: “We are the media now.” By this they mean that they set the agenda for what the public talks about. This is a bit of an exaggeration, but only a bit. Within each niche, the influencer-algorithm-crowd trinity shapes consensus and defines the beliefs of their public.
This Trinity thrives on the proliferation of attention-capturing pseudo-events; sensationalism, spectacle, and tribalism are rewarded and reinforced. The environment is tailor-made for spreading rumors far and fast and for reinforcing connections and ideology within niches—not bridging across them. Influence is local to small networks, centered around influencers or within partisan factions or the bespoke realities of growing conspiracist communities. There is little common ground between the niches; one online faction often does not even see something that is outraging another, unless it erupts into a fight between them.
Not all influencers covet pseudo-events or factional brawls. Some just want to post about gardening or astronomy, and damn the clout or money. But incentivized influencers like Polly, particularly those focused on politics, are many. And their role is dynamic: projecting accessibility despite power, building trust while inflaming emotions, and seeking new ways to monetize—all the focus of our next chapter.