Renée DiResta is an associate research professor at the McCourt School of Public Policy at Georgetown.
For the past two decades, most online discourse has occurred on a handful of social media platforms. Their dominion seemed unshakeable. The question wasn’t when a challenger to Twitter or Facebook might arrive but whether one could ever succeed. Could a killer new app, or perhaps the cudgel of antitrust, make a difference?
Today, those same platforms still enjoy the largest user bases; massive breakout successes like TikTok are the rare exception, not the rule. However, user exodus to smaller platforms has become increasingly common — especially from X, the once-undisputed home of The Discourse. X refugees have scattered and settled again and again: to Gab and Truth Social, to Mastodon and Bluesky.
What ultimately splintered social media wasn’t a killer app or the Federal Trade Commission — it was content moderation. Partisan users clashed with “referees” tasked with defining and enforcing rules like no hate speech, or making calls about how to handle Covid-19 content. Principles like “freedom of speech, not freedom of reach” — which proposed that “borderline” content (posts that fell into grey areas around hate speech, for example) remain visible but unamplified — attempted to articulate a middle ground. However, even nuanced efforts were reframed as unreasonable suppression by ideologues who recognized the power of dominating online discourse. Efforts to moderate became flashpoints, fueling a feedback loop where online norms fed offline polarization — and vice versa.
And so, in successive waves, users departed for alternatives: platforms where the referees were lax (Truth Social), nearly nonexistent (Telegram) or self-appointed (Mastodon). Much of this fracturing occurred along political lines. Today the Great Decentralization is accelerating, with newspapers of record, Luke Skywalker and others as the latest high-profile refugees to lead fresh retreats.
It was once novel features, like Facebook’s photo tagging or Twitter’s quote tweets, that drew users to social media sites. Now, it’s frequently ideological alignment that seduces users. People are decamping to platforms that they believe match their norms and values — and, in an increasingly polarized America, there is a chasm between the two sides.
Yet there’s more to this migration than meets the eye. Beneath the surface lies a profound shift in the technology underpinning online socialization. In the latest wave of decampment — primarily to Bluesky — users are seeking out an ideological alternative to the increasingly right-wing X. They may be leaving for the vibes, but they are also stepping into a world that is foundationally different in ways that many are only beginning to grasp. The federated nature of emerging alternatives, like Mastodon and Bluesky — platforms structured as a network of independently run servers with their own users and rules, connected by a common technological protocol — offers a potential future in which communities spin up their own instances (or servers) governed on their own terms.
This movement away from centralized trust and safety teams enforcing universal rules may sound like a fix for social media’s woes. Fewer violent clashes between culture warriors. Fewer histrionic accusations of “censorship.” The players becoming the referees. Isn’t that ideal?
But new governance models come with new complexities, and it’s crucial to grapple with what’s on the horizon. What happens when sprawling online communities of tens of millions fracture into smaller, politically homogenous self-governing communities? And what does this mean for social cohesion and consensus, both online and off?
Preceding The Great Decentralization
How did we arrive here? The centralized content moderation system that has begun to fracture was shaped by a mix of American political values, societal norms and economic realities, as researcher and professor Kate Klonick argued in the “Harvard Law Review” in 2018. Klonick’s essay “The New Governors” details how platform governance policies were largely crafted by American lawyers with First Amendment pedigrees.
These platforms were privately owned and operated, yes, but their governance hewed to the spirit of American law. Nonetheless, most platforms also saw it as their duty to moderate away “obscene, violent, or hate[ful]” content. This was due in part to a desire to be seen as good corporate citizens, but also was nakedly pragmatic: “Economic viability depends on meeting users’ speech and community norms,” Klonick wrote. When platforms created environments that met user expectations, users would spend time on the site, and revenue might increase. Simple economics.
Yet, even as platforms sought to balance corporate responsibility, user safety and economic viability, the rules increasingly became flashpoints for discontent. Content moderation decisions were perceived not as neutral governance but as value-laden judgments — implicit declarations of whose voices were welcome and whose were not. Facebook’s removal of the iconic “Napalm Girl” photo in 2016 — due to its automated enforcement of rules against nudity — provoked global backlash, forcing the platform to reverse its decision and acknowledge the complexities of moderating at scale.
Around the same time, Twitter faced criticism for failing to adequately respond to the rise of Islamic State group propagandists, and to harassment campaigns like “Gamergate” (a 2014 online movement ostensibly about ethics in gaming journalism but widely perceived as a troll campaign targeting women in the industry).
These incidents underscored the tensions between enforcing community standards and protecting free expression. For many users, particularly those whose speech bordered on the controversial or offensive, the referees of Big Tech platforms seemed to wield disproportionate power, which fueled a sense of alienation and distrust. Rather than simply constraining what could be said online, the rules seemed to signal whose perspectives held power in the digital public square.
As these forces converged and hardened into the governance status quo, those who chafed under it faced a timeless choice: exit versus voice. Should they abandon a product or community in search of better options, or stay and speak out, channeling their frustration into demands for change?
The German-born economist Albert Hirschman argued that a dissatisfied consumer’s choice between exit and voice was mediated by a third factor: loyalty. Loyalty, whether rooted in patriotism or brand affinity, can tether individuals to an institution or product, making them more inclined to call for change than to walk away. For years, loyalty to major platforms was less about affection and more about structural realities; monopolistic dominance and powerful network effects left social media users with few realistic alternatives. There weren’t many apps with the features, critical mass or reach to fulfill users’ needs for entertainment, connection or influence. Politicians and ideologues, too, relied on the platforms’ scale to propagate their messages. People stayed, even as their dissatisfaction simmered.
And so, voice was the answer. Politicians and advocacy groups pressured companies to change policies to suit their side’s needs — a process known as “working the refs” (referees) among those who study content moderation. In 2016, for example, “Trending Topicsgate” saw right-wing influencers and partisan media chastise Facebook for allegedly downranking conservative headlines on its trending topics feature. The outrage cycle worked: Facebook fired its human news curators and remade the system. (Their replacement, an algorithm, quickly busied itself spreading outrageous and untrue headlines, including from Macedonian troll factories, until the company ultimately decided to kill the feature.) Left-leaning organizations ref-worked over the years as well, applying pressure to maximize their interests.
Online partisan crowds began to perceive even one-off decisions as evidence of rank bias. Content moderation calls involving seemingly inconsequential interpersonal disputes were magnified into manufactroversies — proof of platforms kowtowing to identity politics or perpetuating some sort of supremacy. There were grains of truth: moderators did make mistakes, miss context and make bad calls as they worked through millions of decisions a quarter. Yet as disagreement became a partisan sport, platforms found themselves refereeing an escalating culture war. Efforts to impose order — to prevent real people from being doxed, stalked or even just harassed — were routinely transmuted into fodder for further tribal aggrievement.
On the right, in particular, moderation disputes were reframed as existential battles over political identity and free speech itself. Despite scant evidence of any actual systemic bias, right-wing influencers galvanized around the idea that platforms were targeting them; they moved from working the refs to challenging their right to operate.
Then-President Donald Trump, in particular, angry that his misleading tweets were labeled misleading, didn’t make nuanced arguments about transparency or the need for an appeals process. Instead, he set about delegitimizing content moderation itself and threatening regulatory action. Basic interventions like fact-check labels on disputed claims — and sometimes even the mere suspicion of intervention (i.e., if a tweet did not get its perceived due in engagement) — were reframed as tyrannical acts by tech elites conspiring against right-wing populists. The referees were no longer mediators in the culture war; they had become the opposition.
As this narrative became embedded in right-wing political identity, the market responded with opportunities for exit. Alt-platforms like Parler, which emerged in 2018, were created with the express goal of catering to Trump supporters who now believed mainstream platforms were irredeemably biased. Gettr and Truth Social followed, born of grievances surrounding the 2020 election, the January 6 riots and the moderation of the man most responsible for instigating them.
The new right-wing alt-platforms had refs on the same team, but they remained small — because the trade-off was that there were few libs around to own. There were few opportunities for partisan brawls or trolling. There were few bystanders to potentially recruit to a preferred cause. And so, political influencers, media figures and politicians across the political spectrum continued working the refs on major platforms, where the stakes — and the audiences — remained far greater.
Then, in 2022, a seismic shift occurred: Elon Musk, a true believer in the theory of the corrupt refs, bought Twitter — and anointed himself as primary referee. The platform he now called X had always been relatively small but disproportionately influential: its concentration of the media- and politics-obsessed earned it the nickname “the public square.” More accurately, it often functioned as a gladiatorial arena — a chaotic space where consensus was shaped and hapless individuals became “main characters” in mob pile-ons.
After the acquisition, Musk offered “amnesty” for those who’d fallen afoul of the old referees — including avowed neo-Nazis. Right-wing influencers on the platform seized the opportunity to work the new referee with a vengeance, and Musk responded by overhauling governance quickly and significantly in their favor. Posts that were formerly moderated, such as unfounded rumors of rigged elections or intentional misgendering of transgender users, were now fair game.
Dissatisfaction with the new referee, policies and the overall environment on X thus led to an exodus from the platform by the American political left. At first, people hopped to Mastodon, which had the advantage of already existing. Another new market entrant, Bluesky, launched its beta with an invitation-only model driven by referral networks. The progressive-left community quickly established a foothold, and its users tested the relatively novice refs during moments of dissatisfaction over its nascent moderation policies. They debated whether hyperbolic speech constituted a “threat,” and under what conditions users should be banned. In one notable early incident, users confronted Bluesky’s developers on the platform and demanded public apologies after a bug allowed trolls to register slurs as usernames. By November 2023, Bluesky had 2 million users and a reputation as a very lefty space.
In July 2023, the 800-pound gorilla entered the competition for dissatisfied tweeters: Threads, owned by Meta. Positioned as a direct competitor to X, Threads marketed itself as “sanely run,” in the words of Chief Product Officer Chris Cox. However, the promise of sanity didn’t shield Threads from ref-working dynamics. Leadership’s decision to throttle political news and block some pandemic-related searches triggered a backlash from its largely liberal user base (some of whom began to promote Bluesky as a better place to be). Despite these tensions, Threads grew rapidly, self-reporting 275 million monthly active users by late October 2024; it was, even dissatisfied users sighed, better than X.
By November 2024, however, it was Bluesky’s growth that was accelerating dramatically, fueled by Trump’s reelection and Musk’s increasingly explicit alignment with the far-right. Musk, X’s most visible user as well as its chief referee, had become a vocal Trump surrogate and election-theft truther, and his platform’s algorithms appeared to boost him and his ideological allies.
Loyalty to the old Twitter steadily declined among previously vocal power users. And so, many chose to exit: In the weeks following the election, Bluesky broke 25 million users, spurred not so much by features but by ideological dissatisfaction and the allure of a platform where governance seemed to align more closely with progressive norms.
But does it?
New Governance, New Challenges
The Great Decentralization — the migration away from large, centralized one-size-fits-all platforms to smaller, ideologically distinct spaces — is fueled by political identity and dissatisfaction. Yet what is most interesting about this latest wave of migration is the technology underpinning Bluesky, Mastodon and Threads — what it enables and what it inherently limits. These platforms prioritize something foundationally distinct from their predecessors: federation. Unlike centralized platforms, where curation and moderation are controlled from the top down, federation relies on decentralized protocols — ActivityPub for Mastodon (which Threads also supports) and the AT Protocol for Bluesky — that enable user-controlled servers and devolve moderation (and in some cases, curation) to that community level. This approach doesn’t just redefine moderation; it restructures online governance itself. And that is because, writ large, there are no refs to work.
The trade-offs are important to understand. If centralized platforms with their centrally controlled rules and algorithms are “walled gardens,” federated social media might best be described as “community gardens,” shaped by members connected through loose social or geographical ties and a shared interest in maintaining a pleasant community space.
In the fediverse, users can join or create servers aligned with their interests or communities. They are usually run by volunteers, who manage costs and set rules locally. Governance is federated as well: While all ActivityPub servers, for example, share a common technological protocol, each sets its own rules and norms, and decides whether to interact with — or isolate from — the broader network. For example, when the avowedly Nazi-friendly platform Gab adopted Mastodon’s protocol in 2019, other servers defederated from it en masse, cutting ties and preventing Gab’s content from reaching their users. Yet Gab persisted and continued to grow, highlighting one of federation’s important limitations: defederation can isolate bad actors, but it doesn’t eliminate them.
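To make the mechanics concrete, here is a minimal sketch (in TypeScript, with hypothetical names rather than Mastodon’s actual code) of how a single server’s domain blocklist governs what its own users see:

```typescript
// Illustrative sketch of server-level defederation ("domain blocking").
// This is not Mastodon's implementation; the types and names are hypothetical,
// but the logic mirrors how a federated server decides whether to accept
// content delivered from another server over a shared protocol.

type InboundPost = {
  authorUri: string; // e.g. "https://blocked-instance.example/users/troll"
  content: string;
};

class ServerPolicy {
  // Each server maintains its own list; there is no global authority.
  private blockedDomains = new Set<string>();

  defederate(domain: string): void {
    this.blockedDomains.add(domain.toLowerCase());
  }

  // Accept or reject an incoming post based purely on local rules.
  accepts(post: InboundPost): boolean {
    const domain = new URL(post.authorUri).hostname.toLowerCase();
    return !this.blockedDomains.has(domain);
  }
}

// Usage: one admin's decision affects only their own server's users.
const myServer = new ServerPolicy();
myServer.defederate("blocked-instance.example");

const post: InboundPost = {
  authorUri: "https://blocked-instance.example/users/troll",
  content: "example post",
};
console.log(myServer.accepts(post)); // false: hidden here, but still live on its home server
```

The decision is purely local. A blocked server’s posts vanish for this community, but nothing in the protocol removes them from the network, which is why Gab could keep operating, and growing, after most of the fediverse cut it off.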
Protocol-based platforms offer a significant potential future for social media: digital federalism, where local governance aligns with specific community norms, yet remains loosely connected to a broader whole. For some users, the smaller scale and greater control possible on federated platforms is compelling. On Bluesky — which is, for the moment, still largely just one instance run by the development team — the savvy are developing tools to customize the experience. There are shareable blocklists, curated feeds (views that let users see the latest posts on a creator-defined topic, like news or gardening or sports), and community-managed moderation tools that enable the application of categorization labels for posts or accounts (“Adult Content,” “Hate Speech,” etc.). These allow users to tailor their environment to their values and interests, giving them more control over what posts they see — ranging from spicy speech to nudes to politics — and which are hidden behind a warning or concealed altogether. And while there is, presently, a centralized content labeler controlled by the Bluesky moderation team, users can also simply turn it off.
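A rough sketch of how that label-and-preference model works, again in TypeScript with illustrative names rather than the actual AT Protocol API, might look like this:

```typescript
// Illustrative sketch of "composable moderation": labeling services tag content,
// and each user's client decides what to do with those tags. The names and types
// here are hypothetical, not Bluesky's real API.

type Visibility = "show" | "warn" | "hide";

type Label = {
  labeler: string; // which moderation service applied the label
  value: string;   // e.g. "adult-content", "hate-speech"
};

type Post = { text: string; labels: Label[] };

type UserPreferences = {
  subscribedLabelers: Set<string>;        // users can turn a labeler off entirely
  labelSettings: Map<string, Visibility>; // per-label choices
};

// The client, not a central authority, computes how a post is displayed.
function resolveVisibility(post: Post, prefs: UserPreferences): Visibility {
  let result: Visibility = "show";
  for (const label of post.labels) {
    if (!prefs.subscribedLabelers.has(label.labeler)) continue; // ignored labeler
    const setting = prefs.labelSettings.get(label.value) ?? "show";
    if (setting === "hide") return "hide"; // strongest setting wins
    if (setting === "warn") result = "warn";
  }
  return result;
}

// Usage: the same post renders differently for different users.
const post: Post = {
  text: "example post",
  labels: [{ labeler: "community-labeler.example", value: "adult-content" }],
};
const cautiousUser: UserPreferences = {
  subscribedLabelers: new Set(["community-labeler.example"]),
  labelSettings: new Map<string, Visibility>([["adult-content", "warn"]]),
};
console.log(resolveVisibility(post, cautiousUser)); // "warn"
```

The design choice is the point: the refereeing logic runs on the user’s side, so two people subscribed to different labelers, or with different settings, can look at the same post and see entirely different things.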
For some, this level of agency is appealing. However, most users never change the defaults on a given app or piece of technology: what they are looking for is relief from the drama, chaos and perceived ideological misalignment of other spaces. They are drawn not to “composable moderation” or “federated governance” — many, in fact, seem not to fully understand what those terms portend — but to the vibes of the instance. They want platforms to “compete on service and respect,” even as the large platforms, ref-worked by politicians with regulatory cudgels, would like nothing more than to stop making moderation calls as quickly as possible. Bluesky, on a mission to build a protocol that will ultimately render centralized moderation largely moot, has nonetheless had to quickly quadruple the size of its moderation team as users have flooded in.
And this is why it’s important to understand that the migration away from centralized refs comes with very real trade-offs. Without centralized governance, there is no single authority to mediate systemic issues or consistently enforce rules. Decentralization places a heavy burden on individual instance administrators, mostly volunteers, who may lack the tools, time or capacity to address complex problems effectively.
Some of my own work, for example, has focused on the significant challenge of addressing even explicitly illegal content — child exploitation imagery — on the fediverse. Most servers run by volunteers are ill-equipped to deal with these issues, exposing administrators to legal liability and leaving users vulnerable. Fragmented enforcement leaves gaps that bad actors, including state-sponsored manipulators and spammers, can exploit with relative impunity.
Identity verification is another weak point, leading to impersonation risks that centralized platforms typically manage more effectively. Inconsistent security practices across servers can allow malicious actors to exploit weaker links. Professionalized companies (like Meta, which runs Threads) have experience managing some of these problems, but they require an economic incentive to participate.
While federation offers users more autonomy and fosters diversity, it makes it significantly harder to combat systemic harms or coordinate responses to threats like disinformation, harassment or exploitation. Moreover, because server administrators can only moderate locally — for example, they can only hide content on the server they operate — posts from one server can spread across the network onto others, with little recourse.
Posts promoting harmful pseudoscience (“drinking bleach cures autism”) or doxxing can persist unchecked on some servers, even if others reject or block the content. People who have become convinced that “moderation is censorship” may feel that this is an unmitigated win, but users across the political spectrum have consistently expressed a desire for platforms to address fake accounts and false or violent content.
Beyond the challenges of addressing illegal or harmful content, the Great Decentralization raises deeper questions about social cohesion: Will the fragmentation of platforms exacerbate ideological silos and further erode the shared spaces needed for consensus and compromise?
Our communication spaces shape our norms and politics. The very tools that now directly empower users to curate their feeds and block unwanted content may also amplify divisions or reduce exposure to differing perspectives. Community-created blocklists, while useful for targeted groups seeking to avoid trolls, are blunt instruments. A wayward comment, a missed joke or personal animus on the part of a list creator can cast a wide, isolating net; people with nuanced views on contentious issues like abortion policy may self-censor to avoid being “mislabeled” and excluded.
Recent events on Bluesky illustrate these challenges. In mid-December, tensions erupted on the platform over the sudden presence of a prominent journalist and podcaster who writes about trans healthcare in ways that some of the vocal trans users on the platform considered harmful. In response, tens of thousands of users proactively blocked the perceived problematic account (blocks are public on Bluesky). Community labelers enabled users to hide his posts. The proliferation of shared blocklists included some that enabled users to mass-block followers of the controversial commentator. Journalists, many of whom follow people they do not personally agree with, commented that they were getting caught up in the wide net; to mitigate this, users in the community suggested that they create “alt” accounts to avoid sending unwanted signal.
Shareable blocklists, however expansive they may be, are tools designed to empower users. However, a portion of the community did not feel satisfied with the tools. Instead, it began to ref-work the head of trust and safety on Bluesky, who was deluged with angry demands for a top-down response, including via a petition to ban the objectionable journalist. The journalist, in turn, contacted the mods as well — reporting that he had been on the receiving end of threatening language and doxing. The drama highlights the tension between the increased potential for users to act to protect their own individual spaces, and the persistent desire to have centralized referees act on a community’s behalf. And, unfortunately, it illustrates the challenges of moderating a large community with comparatively limited resources.
The idealistic goal of federalism in the American experiment was to maintain the nation’s unity while enabling local control of local issues. The digital version of this, however, seems to be a devolution, a retreat into separate spaces that may increase satisfaction within each outpost but does little to bridge ties, restore mutual norms or diminish animosity across groups. What happens when divergent norms grow so distinct that we can no longer even see or engage with each other’s conversations? The challenge of consensus is no longer simply difficult; it is structurally reinforced.
What’s Ahead
Whether you like or dislike them, centralized models of top-down policy and enforcement have defined the social media experience on large platforms like Facebook, Twitter and YouTube for two decades. As Nilay Patel of “The Verge” put it, content moderation is “the product” of these platforms: The decisions made by moderation teams shape not only what users see but how safe or threatened they feel. These policies have had profound effects, not only on societal phenomena like democracy and community cohesion but also on individual users’ sense of well-being. If the Great Decentralization continues, that experience will change.
While centralized governance on platforms like Twitter and Facebook became a highly politicized front in the culture war, it’s worth asking whether the system was truly broken. Centralized moderation, despite being imperfect, expensive and opaque, nonetheless offered articulated rules, sophisticated technology and professional enforcement teams. Criticism of these systems frequently stemmed from their lack of transparency or occasional high-profile errors, which fueled perceptions of bias and dissatisfaction.
This legitimation crisis eventually tipped the scales from voice to exit — and now, the shaping of a new online commons presents both a challenge and an opportunity. Yes, there is the potential for truly democratic online spaces free from the misaligned incentives that have, thus far, defined the platform-user relationship. But realizing such spaces will take significant work.
There is also the looming question of economics. Federated alternatives must be financially sustainable if they intend to persist. Right now, Bluesky is primarily fueled by venture capital; it has broached offering paid subscriptions and features in the future. But if the last two decades of social media experimentation have taught us anything, it’s that economic incentives inevitably have an outsized impact on governance and user experience.
Technologists (myself included) love to talk about faster innovation, better privacy, and more granular user control as the future of social media. But that’s not what most people think about. Most users just want good services, minimal risks to their well-being and a generally positive, entertaining environment. Ironically, these are the end states moderation has attempted to deliver. The argument that the downsides of social media participation — disinformation, doxxing and harassment — are emblematic of the triumph of “free speech” has been roundly rejected; very few users actually spend time on “absolutist” anything-goes communities; 8chan, for example, was never widely popular. And yet, our inability to agree on shared norms and values, both online and off, is pushing us apart into distinct online spaces.
Users who are drawn to Bluesky are gravitating to the culture of the main instance, which feels a bit like Old Twitter circa 2014 — a simpler, less toxic time. They crave a return to a less divisive and nasty American society. This longing reflects a deeper truth: online platforms don’t just mirror our offline values; they actively influence them.
Federated platforms will give us the freedom to curate our online experience, and to create communities where we feel comfortable. They represent more than a technological shift — they’re an opportunity for democratic renewal in the digital public sphere. By returning governance to users and communities, they have the potential to rebuild trust and legitimacy in ways that centralized platforms no longer can. However, they also run the risk of further splintering our society, as users abandon those shared spaces where broader social cohesion may be forged.
The Great Decentralization is a digitalized reflection of our polarized politics that, going forward, will also shape them.