What tears society apart is never algorithms.

Do algorithms exacerbate social division? This “technological determinism” narrative is empirically weak. The core of recommendation algorithms is commercial “engagement optimization,” not ideological indoctrination. More crucially, research abroad has found that polarization is driven mainly by political elites, partisan media, and underlying socioeconomic structures, and that forcing people out of their echo chambers to encounter opposing views not only fails to promote understanding but hardens their positions further.

Long before “algorithm” became a buzzword, Walter Lippmann raised profound doubts about how “public opinion” is formed. He quoted Sir Robert Peel’s description of “that great compound of folly, weakness, prejudice, wrong feeling, right feeling, obstinacy, and newspaper paragraphs, which is called public opinion.” Nearly a century ago, people rarely attributed social rifts to a single specific factor; they sought answers in more complex structures.

In contemporary public discourse, a prevalent view holds that social media is exacerbating social division. Whether by intentionally pushing opposing views to create conflicts between different groups or gathering extremists to reinforce each other, these narratives all point to a common “culprit”: the algorithm.

This “technological determinism” perspective largely stems from the “Filter Bubble” concept proposed by Eli Pariser. He argued that algorithms’ personalized recommendation mechanisms isolate users intellectually, only showing them content consistent with their existing views, thereby fueling political polarization.

This logic aligns with our intuition, but it is largely an “unproven assumption.” In recent years, numerous studies have challenged this “common sense.”

I. The Empirical Fragility of “Echo Chambers”

First, the premise that “algorithms cause echo chambers” is quite weak in empirical research.

The commonly discussed “filter bubble” refers to algorithms creating a unique information universe for each of us through personalized ranking, thereby eroding the foundation for shared discourse. Meanwhile, “information cocoons” or “echo chambers” mean we ultimately remain in a closed media space where internal information is amplified and external information is isolated.

But in reality, are such “echo chambers” truly widespread?

A literature review published by the Reuters Institute, after examining a large number of relevant studies, pointed out that true “echo chambers” are very rare, and most people receive relatively diverse media information. A study on the UK estimated that only about 6% to 8% of the public resides in partisan news “echo chambers.”

Contrary to popular perception, multiple studies have consistently found that people who rely on search engines and social media for news are exposed to a broader and more diverse range of news sources. This is known as “Automated Serendipity” — algorithms “feed” you content you would not actively choose.

The few people who do exist in echo chambers are mainly there because they actively choose to consume only certain media, not because algorithms intentionally push such content.

In fact, views that blame algorithms for alienation often assume that platforms welcome conflict because it drives traffic. However, this assumption overlooks the true goal of platform operations: long-term user retention.

As Douyin’s Safety and Trust Center has publicly explained of its algorithmic principles, if an algorithm merely caters to users’ existing interests, content becomes increasingly homogeneous and users quickly grow bored and leave the platform. Recommendation systems must therefore strike a balance between “exploitation” and “exploration,” proactively surfacing new content users may find interesting in order to maintain freshness and engagement.
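This trade-off is essentially the classic exploration-exploitation dilemma from the recommender-system and multi-armed-bandit literature. As a rough illustration (a minimal sketch of the general idea, not Douyin’s actual system; every name and parameter below is invented), an epsilon-greedy recommender mostly serves topics a user already responds to while reserving a fixed share of impressions for unexplored topics:

```python
import random
from collections import defaultdict

class EpsilonGreedyRecommender:
    """Toy epsilon-greedy recommender; illustrative only, not any platform's real system."""

    def __init__(self, topics, epsilon=0.2):
        self.topics = list(topics)
        self.epsilon = epsilon               # share of impressions reserved for exploration
        self.clicks = defaultdict(int)       # positive feedback observed per topic
        self.impressions = defaultdict(int)  # how often each topic has been shown

    def recommend(self):
        # Exploration: occasionally show a topic regardless of its past performance.
        if random.random() < self.epsilon:
            return random.choice(self.topics)
        # Exploitation: otherwise show the topic with the best observed click rate.
        return max(self.topics, key=lambda t: self.clicks[t] / max(self.impressions[t], 1))

    def feedback(self, topic, clicked):
        self.impressions[topic] += 1
        if clicked:
            self.clicks[topic] += 1

# Simulated user who only ever clicks "history": with epsilon = 0 the feed would
# collapse onto that single interest; a nonzero epsilon keeps injecting novelty.
rec = EpsilonGreedyRecommender(["history", "cooking", "politics", "astronomy"])
for _ in range(100):
    topic = rec.recommend()
    rec.feedback(topic, clicked=(topic == "history"))
```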

II. When “Listening to Both Sides” No Longer Brings “Clarity”

If the evidence that “algorithms cause echo chambers” is insufficient, then where exactly does polarization originate?

There is an ancient Chinese saying: “Listening to both sides makes one wise.” Western deliberative democracy theory also holds that when citizens encounter different views through rational deliberation, they become more moderate. If “echo chambers” are the problem, then breaking them — allowing people to “listen to both sides” — should alleviate polarization.

But what if this premise fails in the age of social media?

Professor Chris Bail, a sociologist at Duke University, conducted an ingenious field experiment to directly test this question. His team recruited a group of committed Democratic and Republican Twitter users and paid them to follow a “bot” account that specifically retweeted political statements from the opposing camp.

The design of this experiment essentially forced participants out of their echo chambers and made them “listen to both sides.” However, the results were unexpected: after one month, instead of becoming more moderate and understanding of the other side, participants generally became more extreme.

This finding suggests that, at least in the social media environment, forcing people to “step out of the information cocoon” not only fails to solve the problem but can actually exacerbate polarization.

A recent “generative social simulation” study from the University of Amsterdam further supports this view. When researchers used AI agents to build a “minimal platform” with only posting, retweeting, and following functions, they found that even without complex recommendation algorithms, dysfunctions such as partisan echo chambers, highly concentrated influence, and the amplification of extreme voices still emerged spontaneously. Emotional (and often extreme) content received more retweets, which in turn brought its authors more followers (i.e., influence), further reinforcing the dominance of extreme content online.
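To make the study’s logic concrete, here is a deliberately crude agent-based sketch of such a “minimal platform.” It is my own illustrative simplification, not the Amsterdam team’s code, and every parameter is invented: agents only post, retweet, and follow; emotionally charged in-group posts are more likely to be retweeted; and retweets translate into new followers, which in turn enlarge an author’s future audience.

```python
import random

random.seed(0)
N_AGENTS, N_ROUNDS = 100, 2000
agents = [{"party": random.choice(["A", "B"]), "followers": set()}
          for _ in range(N_AGENTS)]

def retweet_probability(reader, author, emotion):
    """Emotionally charged in-group posts spread most; cross-party retweets are rare."""
    same_party = agents[reader]["party"] == agents[author]["party"]
    base = 0.30 if same_party else 0.05
    return base * emotion  # emotion in [0, 1] scales how far a post travels

for _ in range(N_ROUNDS):
    author = random.randrange(N_AGENTS)
    emotion = random.random()  # how emotionally charged this post is
    # Reach grows with follower count, closing the influence feedback loop.
    audience = random.sample(range(N_AGENTS),
                             min(N_AGENTS, 20 + len(agents[author]["followers"])))
    for reader in audience:
        if reader != author and random.random() < retweet_probability(reader, author, emotion):
            agents[author]["followers"].add(reader)

# Followers tend to concentrate on authors of emotional, in-group-pleasing posts,
# even though no ranking algorithm appears anywhere above.
top_five = sorted((len(a["followers"]) for a in agents), reverse=True)[:5]
print("most-followed agents:", top_five)
```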

The significance of this finding is that the problem is not that algorithms “isolate” us, but that the basic structure of human social networks rewards identity-based, emotional, and irrational reactions, allowing these reactions to directly shape our social relationships.

III. What is the Truth About Polarization?

Why does exposure to opposing views backfire? To answer this question, we must distinguish between two distinct types of “polarization.”

Modern political science research indicates that ideological polarization — disagreements among people on specific policy positions — has not increased significantly among the general public. However, affective polarization — the growing dislike, distrust, and hostility between different partisan groups — has risen sharply. This type of polarization is not about policies but about identity; its characteristics are more reflected in “out-group hate.”

Based on this, Professor Bail proposed the “Social Media Prism” theory, which argues that social media is neither a “mirror” nor a “cocoon” but a “prism”: it distorts our perception of ourselves and others.

This distortion has two sources. First, the core mechanism of social media is identity performance and status competition, which gives extremists the best possible stage. Second, when extremists dominate public discourse, moderates fall silent in a “spiral of silence.” The “prism” thus leads us to mistakenly believe that everyone in the opposing camp resembles the extremists we see online.

This also explains why the intervention in Bail’s experiment backfired: what participants were forced to see was not moderate opposing views but the harshest voices refracted through the “prism,” which naturally deepened their affective polarization. Related studies have likewise found that while users do perceive an identity-tinged “opinion climate,” their own opinion expression is not significantly shaped by it, let alone driven toward identity-based alignment or irrational behavior.

IV. Beyond “Engagement”: What is the Future of Algorithms?

Algorithms operate within a pre-existing polarized environment; they do not create it.

The rise in polarization predates the emergence of modern social media algorithms, and a “top-down” pattern seems to have been at work for a long time.

First, elite polarization precedes and drives mass polarization — politicians and activists were the first to adopt more distinct positions. Second, underlying socioeconomic factors, such as growing economic inequality, have a profound structural relationship with political polarization. Additionally, time-series analysis shows a strong long-term correlation between measures of income inequality (such as the Gini coefficient) and the degree of congressional polarization.

The “technological determinism” narrative that identifies algorithms as the core of the polarization crisis is an oversimplification of a complex, multi-causal phenomenon.

The phenomena described by the “algorithm alienation theory” and “foolish resonance” are rooted more in human psychological biases (such as selective exposure) and in underlying sociopolitical structures. When a 450-minute in-depth analysis video of “Dream of the Red Chamber” reaches 300 million views, that is precisely the result of algorithms proactively breaking down barriers and exploring users’ deeper interests, not the product of an “echo chamber.”

Since algorithms are only “amplifiers,” can simple technical adjustments “correct” this amplification effect? The “AI sandbox” research from the University of Amsterdam offers a thought-provoking answer. Researchers tested six widely proposed “pro-social interventions” and found that the effects of these seemingly curative solutions were “minimal,” and in some cases counterproductive.

For example, the intervention of forcibly breaking so-called echo chambers (i.e., “increasing cross-partisan content”) showed “almost no impact”: when AI agents were forced to encounter opposing views, they did not change their behavior, confirming once again that “cross-partisan exposure alone is not sufficient.” The chronological feed that many people call for returning to did significantly reduce “attention inequality,” but it brought an unexpected side effect: it exacerbated the social media prism, giving extreme partisan users greater relative influence.

Therefore, effective interventions should not chase the futile goal of a “neutral” algorithm. They should instead explore alternative algorithmic designs beyond raw engagement optimization: for instance, prioritizing users’ “stated preferences” (content users consider valuable after reflection) over their “revealed preferences” (content they click on impulsively), or designing for “constructive discourse,” thereby grounding technology in a more accurate understanding of the human and political roots of our current divisions.
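To make that distinction concrete, one can picture a ranking score that blends a short-term engagement prediction with reflective-value signals (for example, survey-style answers to “was this worth your time?”). The sketch below is a hypothetical illustration of the idea, not any platform’s documented formula; the field names and weights are made up:

```python
from dataclasses import dataclass

@dataclass
class Post:
    predicted_click_prob: float  # "revealed preference": impulsive engagement signal, 0..1
    predicted_worth_it: float    # "stated preference": reflective value from surveys, 0..1
    constructive_score: float    # e.g., a classifier score for civil, substantive replies, 0..1

def ranking_score(post: Post, w_engage=0.3, w_stated=0.5, w_construct=0.2):
    """Blend engagement with reflective-value signals; the weights are illustrative only."""
    return (w_engage * post.predicted_click_prob
            + w_stated * post.predicted_worth_it
            + w_construct * post.constructive_score)

feed = [
    Post(predicted_click_prob=0.9, predicted_worth_it=0.2, constructive_score=0.1),  # rage bait
    Post(predicted_click_prob=0.4, predicted_worth_it=0.8, constructive_score=0.7),  # substantive post
]
feed.sort(key=ranking_score, reverse=True)
# Under pure engagement optimization (w_stated = w_construct = 0) the rage bait ranks first;
# with stated-preference weighting, the substantive post wins.
```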

 