You think you’re standing up for justice, but you’ve actually fallen into an emotionally manipulative trap carefully set by others; you think you’re feeling righteous indignation, but you’ve merely taken the bait cast specifically to trigger your anger... This thin veil hiding the truth of the post-truth era has been pierced by the Oxford Dictionary.
There is far too much rage bait out there, and algorithms can't take all the blame
Recently, the Oxford Dictionary team announced its 2025 Word of the Year: rage bait.
What exactly is rage bait? Simply put, it refers to content deliberately crafted with provocative language or scenarios to incite public anger and generate online traffic. Staged videos such as "daughter-in-law feeds hospitalized mother-in-law instant noodles" or "food delivery rider humiliated by customer" all fall into this category.
The Oxford Dictionary team named "rage bait" its 2025 Word of the Year.
Undoubtedly, the phenomenon of rage bait demands vigilance: when anger becomes a mass-produced tool for chasing clicks, it fuels the spread of social vitriol and degrades the quality of public discourse.
Yet, judging by public reactions, some people have fallen back on the popular shorthand "when in doubt, blame quantum physics; when causes are unclear, blame the filter bubble", pinning the blame squarely on algorithms and turning "filter bubble" into the go-to buzzword.
In their view, rage bait thrives because algorithms cast a wide net, creating filter bubbles that serve as breeding grounds for its spread.
This reminds one of Plato’s Allegory of the Cave: many people mistake the shadows projected on the cave wall for reality, turning a blind eye to the true source of light.
01
Thomas Sowell once said: If you assume that humans are always rational, then most of history becomes inexplicable.
Why has rage bait proliferated online? The answer boils down to three words: irrationality.
Neurological research shows that the human brain reacts more strongly to negative stimuli than to positive ones, and that these reactions leave deeper psychological imprints, which is why bad news captures far more of our attention. This aligns with the principle of attentional bias in psychology: emotional stimuli are more likely than neutral ones to capture an individual's attention.
In other words, when controversial topics surface in the public eye, users instinctively pay closer attention.
Capitalizing on this quirk of human psychology, traffic-chasing content creators have mastered the art of emotional manipulation. Their playbook typically unfolds in four steps: first, selecting the bait—identifying controversial topics and divisive elements (e.g., gender antagonism, regional discrimination, generational conflicts, class disparities); second, crafting the bait—carefully blending extreme labels, dramatic contrasts, and distorted details into a compelling narrative; third, casting the hook—using clickbait headlines, fake screenshots, and other tactics to stoke public anger; and finally, reeling in the catch—monetizing the resulting traffic.
China Central Television previously exposed how marketing accounts built their followings by staging a series of videos depicting "food delivery riders being humiliated by customers".
At its core, rage bait is a product of two forces: the irrational nature of human attention mechanisms and the distorted behavior of individuals desperate for traffic.
Some people imagine a symbiotic relationship between content creators, users, and platforms in this "anger industrial chain", where algorithms fuel a vicious cycle of "anger → engagement → recommendations → more anger".
But this is largely a misconception. The supposed symbiosis rarely exists in reality.
Facts repeatedly demonstrate that when users are bombarded with content that sows discord and amplifies division, the resulting negative emotions ultimately prompt them to leave the platform rather than linger. A toxic, hostile online environment is also unappealing to brands, leading to a sharp decline in commercial value and exposing platforms to regulatory risks.
This means that the "toxic traffic" generated by rage bait fundamentally conflicts with platforms’ goals of improving user experience, ensuring content safety, and maximizing commercial interests. Given that rage bait-induced social division and trust crises harm platform profitability, what incentive do platforms have to embrace such toxic traffic?
Many platforms have already answered this question with actions. For instance, Douyin released the Trial Measures for Governing Hot Community Information and Accounts this year, explicitly opposing extreme, incendiary, aggressive, and divisive content. It has also targeted malicious marketing accounts that deliberately manufacture conflicts or exploit trending topics for clicks.
The message is clear: in the face of rage bait, platforms act as filters, not enablers.
02
Whether it’s rage bait, the earlier concept of "fool resonance", or the notion of "algorithmic alienation", the ripple effects these buzzwords create in public discourse are essentially echoes of the "algorithmic filter bubble" theory.
Amid the spread of populism, fan culture, and online polarization, many have grown accustomed to blaming filter bubbles for society's ills. This may look like a way of making complex issues tractable, but it is really intellectual laziness: reducing a tangled structural problem to a single convenient culprit.
On this issue, my key arguments are as follows:
Whether "filter bubbles" exist remains debatable—but "cognitive bubbles" are undeniably real.
Scholars are still divided on whether filter bubbles are an actual digital prison or merely a hypothetical straw man. After all, the filter bubble hypothesis has never been validated by academic empirical research.
Prominent Chinese communication scholars such as Chen Changfeng and Yu Guoming have pointed out that there is no empirical evidence to support the existence of filter bubbles.
This does not mean cognitive bubbles are not real, however. Approximately six months ago, I wrote an article titled Those Who Criticize "Filter Bubbles" Are Trapped in Their Own "Cognitive Bubbles".
The article argued that algorithms act as amplifiers, magnifying the multifaceted nature of human beings through the bidirectional interaction of "humans shaping the environment" and "the environment shaping humans". Algorithms can amplify positive, constructive forces through resonance, but they also expose flaws such as people’s path dependency in seeking information, the erosion of social communication skills, and their tendency to make emotional rather than rational judgments when processing information... These factors converge to form what we might call "cognitive bubbles", "social bubbles", and "thinking bubbles".
Selective exposure to information is a self-protective mechanism of the human brain—and it’s not as scary as it sounds.
Many scholars argue that filter bubbles are a false proposition, while selective exposure to information is a genuine issue. Even Cass Sunstein, the scholar who coined the closely related term "information cocoon", locates the core of the phenomenon in selective exposure.
Selective exposure is essentially a cognitive shortcut the brain uses to avoid information overload, rooted in humans’ innate preference for the familiar and aversion to the unfamiliar. It is a natural defense mechanism to reduce mental fatigue.
What is truly dangerous is not algorithms, but the monopolization and restricted supply of information.
In eras of information scarcity, "bubbles" were physical in nature—people had no choice but to consume the limited content available. Today, those who claim "filter bubbles are ruining the next generation" may forget that before the advent of social media, everyone was limited to the same narrow range of information sources.
In the age of information abundance, however, bubbles are psychological. They are not determined by distribution channels, but shaped by the brain’s information processing mechanisms.
03
The above discussion aims to address two core questions: How exactly do bubbles form? And do algorithms really create them?
Let’s start with the first question.
Recently, the world’s leading academic journal Science published a paper titled Don’t Blame the Algorithm: Polarization May Be Inherent to Social Media. It detailed the latest research findings from the Institute for Logic, Language and Computation at the University of Amsterdam.
Science published a paper arguing that social media polarization would exist even without algorithms.
The research team built a minimalist social media platform without personalized recommendation algorithms, retaining only three core functions: posting, reposting, and following. They then deployed 500 AI bots with fixed personality traits to simulate human behavior. The results showed that without any algorithmic intervention, the bots spontaneously formed ideological camps after 50,000 interactions: those with similar views followed each other, and extreme opinions spread far more rapidly than neutral content.
The conclusion is clear: the formation of filter bubbles is not caused by algorithms, but a natural product of humans’ innate social tendency to seek out like-minded individuals. Even without algorithms, social media would still be prone to polarization.
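To make the dynamic concrete, here is a toy re-creation of that kind of algorithm-free platform. It is a minimal sketch under assumed parameters, not the Amsterdam team's actual code: agents with fixed opinions post, see an unranked recent post, follow like-minded authors, and repost extreme content more readily.

```python
# A toy re-creation of the "algorithm-free platform" idea (illustrative only,
# not the Amsterdam team's code). Agents have fixed opinions in [-1, 1], see an
# unranked recent post, follow like-minded authors, and repost extreme content
# more readily. The thresholds below are assumed values chosen for illustration.
import random
from collections import defaultdict

random.seed(42)
N_AGENTS, N_STEPS = 500, 50_000

opinions = [random.uniform(-1.0, 1.0) for _ in range(N_AGENTS)]  # fixed personality traits
follows = defaultdict(set)   # follower id -> set of followed author ids
feed = []                    # a single global, purely chronological feed (no ranking)

def step():
    reader = random.randrange(N_AGENTS)
    feed.append((reader, opinions[reader]))        # the reader posts its own view
    author, content = random.choice(feed[-200:])   # and sees one recent post, unranked
    if author == reader:
        return
    # Homophily: follow authors whose content is close to your own opinion.
    if abs(content - opinions[reader]) < 0.3:
        follows[reader].add(author)
    # Extreme content spreads faster: repost probability grows with |content|.
    if random.random() < 0.2 + 0.6 * abs(content):
        feed.append((author, content))             # repost, crediting the original author

for _ in range(N_STEPS):
    step()

# Crude polarization measure: how many follow edges stay inside the same camp?
same = sum(opinions[r] * opinions[a] > 0 for r, fs in follows.items() for a in fs)
total = sum(len(fs) for fs in follows.values())
print(f"follow edges within the same ideological camp: {same / total:.0%}")
```

Even with these crude rules and no recommendation layer at all, most follow relationships tend to land inside the same ideological camp, which is the qualitative pattern the study reports.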
This finding is supported by social psychology: human attention allocation and memory encoding mechanisms inherently lean toward creating "bubbles". Confirmation bias makes us more likely to pay attention to information that aligns with our existing beliefs; negativity bias makes us remember anger-inducing events more vividly; and the availability heuristic leads us to overestimate the importance of recently encountered information.
Consider this scenario: an algorithm pushes 100 pieces of content to a user, 80 aligned with their interests, 15 designed to expand their horizons, and 5 random "noise". Yet users tend to selectively forget 95 of these, fixating only on the few extreme pieces that trigger anger or excitement. This has become the norm.
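A back-of-the-envelope sketch of that scenario (my own toy numbers, not data from any study): if what people later recall is weighted by emotional intensity, the handful of extreme items dominates the remembered feed even though it made up only a sliver of what was shown.

```python
# Hypothetical illustration of the scenario above: 100 items, only 5 of them
# emotionally extreme, but intensity-weighted recall makes those 5 loom large.
# The intensity weights are assumed values, not measurements.
import random

random.seed(1)
items = [("ordinary", 1.0)] * 95 + [("extreme", 50.0)] * 5   # (label, emotional intensity)
labels = [label for label, _ in items]
weights = [intensity for _, intensity in items]

remembered = random.choices(labels, weights=weights, k=5)    # the handful of items recalled later
print(remembered)   # typically dominated by "extreme", even though extremes were 5% of the feed
```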
In his book Breaking the Social Media Prism, Duke University professor Christopher Bail reached another counterintuitive conclusion through experiments: forcing people to engage with opposing viewpoints does not make them more moderate—in fact, it strengthens their existing positions and makes them more extreme.
In other words, cognitive closed-mindedness on social media is essentially an extension of human nature in the digital realm.
It is not algorithms that weave the bubbles—it is the human cognitive system itself that creates the filters.
The prestigious academic journal Nature has also published papers whose experimental data found no evidence that algorithms are the key driver of polarization.
Now, let’s turn to the second question.
If algorithms only showed users content they wanted to see and opinions they wanted to hear, then yes—they would create bubbles.
But in reality, algorithms have evolved far beyond simple interest-based matching into multi-objective optimization systems. Douyin's recommendation algorithm, for example, not only considers "what users like" but also proactively controls the frequency and spacing of similar content and deliberately inserts diverse content into users' feeds. It also helps users break out of their interest bubbles through random recommendations, interest expansion based on social connections, and the integration of search and recommendation.
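As a rough sketch of what such multi-objective behaviour can look like, the re-ranker below trades predicted interest off against a cap on same-topic runs and reserves periodic slots for random exploration. This is my own simplified illustration, not Douyin's actual system, and the parameter values are assumptions.

```python
# A minimal "multi-objective" re-ranking sketch (illustrative only): relevance
# is balanced against a diversity constraint (cap on same-topic runs) and an
# exploration factor (every Nth slot goes to a random lower-ranked candidate).
import random
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    topic: str
    relevance: float   # predicted interest score from the matching stage

def rerank(candidates, feed_size=10, max_same_topic_run=2, explore_every=5):
    """Greedy re-ranking: mostly by relevance, but capping same-topic runs and
    reserving every Nth slot for a random exploration pick."""
    pool = sorted(candidates, key=lambda it: it.relevance, reverse=True)
    feed, run_topic, run_len = [], None, 0
    while pool and len(feed) < feed_size:
        if (len(feed) + 1) % explore_every == 0:
            pick = random.choice(pool)                      # exploration factor
        else:
            # Diversity constraint: skip items that would extend a long same-topic run.
            pick = next((it for it in pool
                         if not (it.topic == run_topic and run_len >= max_same_topic_run)),
                        pool[0])
        pool.remove(pick)
        run_len = run_len + 1 if pick.topic == run_topic else 1
        run_topic = pick.topic
        feed.append(pick)
    return feed

random.seed(7)
candidates = [Item(f"v{i}", random.choice(["food", "tech", "pets"]), round(random.random(), 2))
              for i in range(50)]
for item in rerank(candidates):
    print(item.item_id, item.topic, item.relevance)
```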
Take my own experience: I’m a big fan of Douyin’s "Usage Management Assistant" feature. It displays my interest distribution in a color wheel chart, and allows me to click "Explore More" to access a wider range of content. In this process, the algorithm acts as a guide for external exploration—not a jailer confining me to a bubble.
Users can access the "Usage Management Assistant" page to set time limits, rest reminders, and manage content preferences.
Interestingly, renowned communication scholar Yu Guoming argues that intelligent recommendation algorithms are inherently anti-filter bubble. "The social structure enabled by multi-algorithm information distribution platforms, in terms of information flow, can effectively prevent the emergence of the filter bubble effect." In other words, algorithms do not create bubbles—they help burst them.
This is not an unfounded claim. Douyin previously disclosed details of its "Dual-Tower Retrieval Model", which converts users and content into mathematical vectors in a high-dimensional space. This enables precise matching beyond literal semantics, uncovers users’ potential interests, and delivers content that is unexpected yet surprisingly relevant.
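For readers unfamiliar with the idea, here is a generic sketch of dual-tower retrieval. It illustrates the general technique rather than Douyin's disclosed model, and random vectors stand in for embeddings that a real system would learn.

```python
# A toy two-tower retrieval sketch: one tower maps users to vectors, the other
# maps items to vectors, and candidates are fetched by inner-product similarity.
# Random vectors stand in for the learned embeddings of a real system.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 32
user_embeddings = rng.normal(size=(1_000, EMBED_DIM))    # output of the "user tower"
item_embeddings = rng.normal(size=(50_000, EMBED_DIM))   # output of the "item tower"

def retrieve(user_id: int, top_k: int = 20) -> np.ndarray:
    """Return ids of the top_k items whose vectors best match the user's vector."""
    scores = item_embeddings @ user_embeddings[user_id]   # inner product with every item
    return np.argpartition(-scores, top_k)[:top_k]        # stand-in for a fast ANN index

print(retrieve(user_id=42))
```

The point of the two-tower structure is that user and item vectors can be computed independently and matched by a fast similarity search, which is what lets a system surface items a user never explicitly asked for.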
In his book The Inevitable, Kevin Kelly envisioned an "ideal filter" that would recommend "what my friends like that I don’t yet know about" and "a stream of content I don’t currently enjoy but might want to try".
Today, algorithmic evolution has turned this vision into reality.
04
This is not to say that algorithms are perfect—far from it. But it is important to recognize that much of the fear and hostility toward algorithms stems from misunderstanding and ignorance.
How can we dispel this confusion? The answer lies in algorithm transparency—demystifying the "magic" of algorithms.
For years, the biggest barrier to algorithm transparency has been the label of "trade secrets". But this year has seen a notable shift, with Chinese internet companies like Douyin taking the initiative to tear down this wall.
Since January 2025, Douyin has established a Safety and Trust Center, launched a website disclosing algorithm details in March, held open days and media briefings in April, fully upgraded its usage management tools in May, and hosted its first "Algorithms for Good" expert seminar in September. Through these moves it has built a multi-dimensional communication system covering rule explanations, case analyses, and interactive Q&As, bringing algorithms out of the black box and into the light.
The Douyin Safety and Trust Center website publishes detailed explanations of algorithm principles.
This effort to demystify algorithms helps alleviate public concerns about "uncontrolled technological power". When platforms openly share how they balance short-term engagement with long-term value through "multi-objective modeling", how they avoid content repetition with "diversity constraints", and how they break interest rigidity with "exploration factors", the understandability of algorithms itself becomes a public good.
The best defense against the stigmatization of algorithms is to place them within a transparent governance framework characterized by explainability, participation, and oversight. This will help users gradually realize that algorithms are not independent value-laden entities, but reflections of human behavior; they are not mysterious magic, but tools that can be supervised and improved. Algorithms are neither omnipotent nor infallible.
During a media briefing, as I listened to Douyin's algorithm engineers explain why a user sees Content A rather than Content B, and heard platform operators define what counts as "high-quality content", two thoughts struck me. First, reducing complex structural social problems to "algorithmic original sin", or blaming algorithms for one's own cognitive rigidity, is a case of selective blindness and misattribution. Second, more often than not, the problem is not technology itself but the surrender of human autonomy.
Returning to the issue of rage bait: There is too much rage bait out there, and algorithms cannot take all the blame. We should not make algorithms the scapegoat just because they cannot speak for themselves.
Algorithms certainly have room for improvement, but the solution is not to reject them outright. Instead, we need to make algorithms operate in the sunlight, help the public understand how they work, hold platforms accountable for content governance, and empower users to reclaim control over their information diets. Only by acknowledging their limitations and leveraging their strengths can we clarify the foggy path toward a harmonious human-machine coexistence.
Among these goals, reclaiming our information sovereignty is paramount. Nathaniel Hawthorne once wrote: what you focus on shapes who you become. To rephrase: every individual is first and foremost responsible for the information they consume.
Because ultimately, it is not lines of code that determine the direction of your information stream—it is your clicks, your time spent, your interactions. Your destiny is in your own hands, not in the stars. Similarly, the information you consume is determined by you, not by algorithms.
