On November 24, 2025, U.S. President Donald Trump signed an executive order launching the Genesis Mission, a national strategy led by the Department of Energy. Aimed at marshaling the nation’s scientific resources, it seeks to transform research paradigms and accelerate scientific discovery through artificial intelligence. Analysts have dubbed it the "Manhattan Project" or "Apollo Program" of the AI era, marking a further escalation of great-power competition in digital technology.
This impending reality urges us to re-examine the origins of digital technology and abandon the narrative of "technology invented by geniuses." In this exclusive interview, Liu He discusses the close connection between technology and its era, drawing on her newly translated Chinese edition of The Freudian Robot. Liu notes that cybernetics and communication technologies, the precursors to today’s AI, were products of the capitalist war machine. The historical and social contexts of World War II and the Cold War shaped humanity’s imagination of human-machine relations from the very outset. Adherents of a zero-sum, winner-takes-all geopolitical mindset confined human-machine relations to a master-slave dynamic of "control" and "being controlled"; the projection of narcissistic psychology, in turn, enabled machines to thoroughly reshape the organizational fabric of human society.
The reflections in this article transcend superficial discussions that treat digital technology as a mere "social phenomenon." From a philosophical perspective, it delves into the intrinsic operational logic of digital technology, revealing its fundamental impact on human cognition and social organization. Unlike previous industrial revolutions, which transformed relations of production, AI is reshaping humanity itself: humans who cannot live without machines may gradually evolve into unconscious, automaton-like beings akin to the machines they rely on.
The evolution of historical demands provides fertile ground for technological progress; in turn, technology’s transformation of human society challenges existing paradigms of cognitive philosophy and social science. Our world is undergoing epoch-making changes, urgently requiring new ideological resources and expanded spaces for discourse. The next wave of technological and cognitive revolution may well echo the spirit of our times.
This article is from the WeChat official account: Cultural Review. Interviewee: Liu He. Editor: RJX. Originally published in Cultural Review, Issue 6, 2025. Cover image generated by AI.
Cultural Review: The Chinese edition of your book The Freudian Robot has just been published, focusing on the relationship between humans and machines—a topic extensively explored in science fiction. For instance, we often debate whether humans can truly control machines, whether robots will rebel like HAL 9000 in 2001: A Space Odyssey; whether we can deploy robots to serve nobler social goals; or, as in Blade Runner 2049, whether robots with souls and consciousness can be considered human, and where the boundary between humans and machines lies. However, your approach to these questions seems to differ from these conventional narratives.
Liu He: The human-machine relationship commonly depicted in AI discourse is often imagined as a master-slave dynamic. Interestingly, the English word "robot" derives from the Czech robota, a term for forced, servile labor. Many people take this etymology literally, assuming that human-machine relations must be framed through the lens of master and slave.
Indeed, numerous science fiction films and novels explore this theme, all expressing the same anxiety: what if the master-slave hierarchy is reversed? What if the slaves rebel and enslave their human masters? This echoes the typical paranoia of ruling classes. Kubrick’s 2001: A Space Odyssey is a classic example of this trope—HAL 9000 attempts to murder the humans it was programmed to serve.
I argue that such science fiction imaginings obscure deeper questions that demand critical reflection: What were the genuine motivations behind inventing robots before and after the advent of digital technology? How has the human-machine relationship evolved, and what pivotal transformations has it undergone?
Ultimately, the relationship between humans and technology lies at the core of human civilization’s evolutionary history. A glance at ancient civilizations reveals that certain technological inventions—such as writing—remain integral to our lives today. We often fail to recognize writing as a technology precisely because it is so ubiquitous. People assume that learning to write is merely a matter of mastering symbols: for Chinese, years of practice to write characters elegantly; for English, memorizing the 26-letter alphabet and mastering spelling. Isn’t it just a matter of remembering these symbols? Moreover, long before computers, people wrote by hand for centuries—how is this related to technology?
The connection is profound. When ancient peoples invented writing, they relied on specific technological prerequisites. First, a writing surface: bamboo or wooden slips in ancient China, clay tablets in Mesopotamia, papyrus in ancient Egypt, and later, paper. Second, writing tools: styluses, brushes, etc.
Then there are the symbols themselves—Chinese characters or phonetic alphabets. Yet a set of symbols alone cannot constitute a writing system. In this sense, ancient writing technologies are fundamentally analogous to modern digital systems, where we type characters on computer keyboards, and circuits on computer chips process numerical and textual symbols.
This is why archaeologists use the invention of writing as a benchmark for measuring the sophistication of ancient civilizations. Not all civilizations with spoken languages developed writing systems. Archaeologists generally recognize four major ancient civilizations that pioneered writing, with China being one of them. However, China has always been part of a multilingual Eurasian continent, and not all regional languages evolved into written forms. This is not a matter of superiority or inferiority; rather, the key factor is whether there was a necessity to invent or adopt a writing system.
In my book, I reference the media historian Harold Innis, a frequent touchstone for Marshall McLuhan, the founder of media studies. Innis wrote Empire and Communications, in which he asks: Why did the Roman Empire need writing? It required long-distance information transmission to govern its overseas conquests; without this demand for communication, writing would have been unnecessary.
Similarly, when the Mongol Empire rose to power, it developed its own script to facilitate long-distance intelligence communication during its expansion across Eurasia. In the ancient world, there were two primary means of long-distance communication: if a writing system existed, messages could be delivered by horseback; if not, information was transmitted through poetic recitations, whose rhythmic structure aided memorization. Many regions in the ancient world relied on such methods to disseminate news of wars.
Therefore, when contemplating the human-technology relationship, we must focus on its political, social, and economic prerequisites—note that I use the term "prerequisites," not "background." Even in identical environments, a technology will not emerge until the necessary conditions are met. Yet today, when we discuss human-machine relations, robots, and artificial intelligence, we often overlook these prerequisites. Extensive academic research has demonstrated that the development of Mongolian and Manchu scripts was linked to imperial conquest and expansion, driven by the need for long-distance communication.
Ironically, surrounded as we are by modern technologies, we rarely reflect on their origins and evolution. We tend to assume that a group of brilliant minds invented new technologies, put them into practice, and these technologies continuously improved until they reached advanced levels. This narrative has been overused; it is time to tell new stories. When historians of science and technology discuss inventions, they always situate them within broader political, cultural, and economic contexts—not just backgrounds. Similarly, when examining the relationship between contemporary digital technology and ourselves, we must pay close attention to the specific contexts in which digital technology emerged.
Communication technology, cybernetics, and game theory—all precursors to today’s AI—are inextricably linked to the histories of World War II and the Cold War. For example, Claude Shannon, the father of information theory and a pioneer of communication technology, worked on the top-secret Project X at Bell Labs during World War II. After the war, parts of this classified research were declassified, and the ideas that grew out of it appeared in Shannon’s A Mathematical Theory of Communication (1948), which revolutionized the field.
This declassified research focused on encryption technology, particularly for telephone communications—such as the wartime conversations between the U.S. President and the British Prime Minister. Another scientist, Norbert Wiener, the founder of cybernetics, tackled a different wartime problem: how anti-aircraft fire could accurately shoot down fast, maneuvering enemy aircraft. Building on this research, he developed the theory of the feedback loop.
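Wiener’s fire-control problem can be caricatured in a few lines of code. The sketch below is a hypothetical illustration, not Wiener’s actual statistical predictor; the linear extrapolation, the gain, and the target path are all my inventions. It "leads" a moving target by extrapolating its recent positions, then feeds each observed miss back into the next aim, which is the feedback loop in its simplest form.

```python
# A toy predictor-with-feedback, in the spirit of the anti-aircraft problem.
# (Illustrative only: Wiener's real work used statistical filtering, not this.)

def predict(positions, lead_time):
    """Linear extrapolation from the last two observed positions."""
    velocity = positions[-1] - positions[-2]
    return positions[-1] + velocity * lead_time

def track(target_path, lead_time=3, gain=0.5):
    aim, bias = 0.0, 0.0
    for t in range(2, len(target_path) - lead_time):
        aim = predict(target_path[: t + 1], lead_time) + bias
        miss = target_path[t + lead_time] - aim   # how far the shot was off
        bias += gain * miss                       # feed the error back into the aim
    return bias

# A target accelerating along a curved path: the loop learns to compensate.
path = [0.1 * t * t for t in range(40)]
print(track(path))
```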
John von Neumann, the father of game theory, designed the architecture of the first stored-program computer in the United States, turning Alan Turing’s theoretical vision into reality. Von Neumann was also deeply involved in U.S. wartime technological development during World War II, including the Manhattan Project, which built the atomic bomb. In short, all these technological breakthroughs were aimed at solving specific battlefield problems and upgrading military equipment; their original purpose was military.
Later, during the Cold War—especially as the U.S.-Soviet arms race intensified—the United States faced a shortage of linguists proficient in Slavic languages and Russian, yet had a vast volume of Soviet scientific and technological intelligence to decipher. This prompted efforts to use machines for translation, giving birth to machine translation technology. Machine translation deals with natural languages, and natural language processing (NLP), a core area of artificial intelligence, emerged from Cold War-era machine translation research.
Cultural Review: In other words, early digital technologies had little to do with "understanding" or "imitating" humans; instead, they sought to solve problems related to symbol and language processing—a technological trajectory entirely different from popular imagination. How did research aimed at solving these specific problems shape humanity’s understanding of itself and of human-machine relations?
Liu He: Alan Turing referred to computers as "symbol-manipulating machines," whose core function is to generate infinite combinations from a set of simple symbols. The earliest scientists to design computers and communication circuits—such as Turing and Shannon—developed digital technology based on this principle. This raises a critical question: What is the relationship between these symbols and language? Does a machine’s ability to process symbols equate to "understanding" them? This is actually a nonsensical question, because machines do not need to understand—they only need to process combinations of 0s and 1s, i.e., the symbols themselves.
In any case, all technologies share this characteristic: they emerge to address specific, necessary problems. Morse code, invented in the 19th century, was designed for rapid long-distance information transmission, particularly military communications. Early telegraph machines were primitive and unable to process complex symbols, so they used a simple code system consisting of dots and dashes. Phonetic alphabets—such as the 26-letter English alphabet—happen to be relatively simple symbol systems.
When developing his mathematical theory of communication, Shannon conducted extensive research on Morse code from the telegraph era. He discovered that in addition to dots and dashes, telegraph code required an indispensable third symbol: the space. He further observed that spaces are also crucial to alphabetic writing systems. Why? Because communication machines rely on written symbols, transmitting information in the form of numbers or letters. Without spaces between letters in writing, the boundaries between English words would disappear. Shannon referred to the space as the "27th letter" of the English alphabet.
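Shannon’s observation about the space is easy to demonstrate. The toy decoder below is my illustration (using a six-letter fragment of International Morse, not anything from Shannon): remove the pauses that separate letters, and a run of dots and dashes no longer has a unique reading.

```python
# Without the "third symbol" (the space between letters), Morse is ambiguous.
MORSE = {"E": ".", "T": "-", "A": ".-", "N": "-.", "I": "..", "M": "--"}
CODES = {code: letter for letter, code in MORSE.items()}

def decodings(signal):
    """Every way to segment an unspaced dot-dash string into letters."""
    if not signal:
        return [""]
    results = []
    for code, letter in CODES.items():
        if signal.startswith(code):
            results += [letter + rest for rest in decodings(signal[len(code):])]
    return results

print(decodings(".-."))   # ['ETE', 'EN', 'AE']: three valid readings of one signal
```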
Shannon went on to ask: What is the mathematical structure of written English? What is the probability of a space appearing in a combination of English letters? For example, how frequently does the letter "a" appear? How often does "e" appear? Is the combination "ea" more common than "ae"? Shannon not only calculated the frequency of spaces in English text but also developed algorithms to predict which letter was most likely to follow any given letter, or which word was most likely to follow any given word.
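The kind of calculation Shannon describes can be sketched in a few lines. The snippet below is a minimal illustration, with a placeholder sentence standing in for a corpus (nothing here reproduces Shannon’s actual data or code): it counts which character follows which, treats the space as just another symbol, then predicts the likeliest successor and generates text from the observed frequencies.

```python
# Character bigram counts over a toy "corpus"; the space is the 27th symbol.
import random
from collections import Counter, defaultdict

text = "the quick brown fox jumps over the lazy dog and the dog sleeps"

follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1                     # frequency of each character pair

def most_likely_after(char):
    """The single most frequent successor of a character."""
    return follows[char].most_common(1)[0][0]

def generate(start, n=20):
    """Sample successors in proportion to their observed frequency."""
    out = start
    for _ in range(n):
        chars, weights = zip(*follows[out[-1]].items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(most_likely_after("t"))   # 'h': in this sample "t" is always followed by "h"
print(generate("t"))            # statistically plausible gibberish, no understanding
```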
This predictive method is now widely used in artificial intelligence technology. Building on this research, Shannon invented a 27-character "machine-readable English" alphabet. As I argue in my book, Shannon’s transformation of the English alphabet from a phonetic system to a logographic system represented a pivotal semiotic turn.
Shannon’s machine-readable English has nothing to do with whether machines can "understand" the symbols they process. On the contrary, this invention inadvertently challenges long-held misconceptions about language. If machines operate automatically based on the probability of symbol combinations, does the human relationship with language follow the same logic?
Here, research in communication theory and cybernetics raises a profound question: Do humans truly understand the language they use? For example, to what extent are our thoughts and speech driven by the probabilistic combinations of words and phrases themselves?
In his research, Shannon found that the results generated by machines based on frequency calculations were often nonsensical sentences—what we commonly refer to as "gibberish." Yet machines cannot distinguish between meaningful and meaningless sentences. By comparison, where does human "understanding" originate? We then realize that children acquire language through a lengthy process of learning, requiring constant repetition, correction, and memorization, all governed by the logic of social life.
Children are not afraid of meaningless syllables; on the contrary, they enthusiastically imitate them. They have no difficulty accepting "nonsensical" speech, whereas educated adults often do not know how to process it, dismissing it as gibberish. In daily life, adults are actually surrounded by a great deal of meaningless speech, but we typically filter it out, retaining only what is meaningful to us.
The question arises: Is our judgment of what is meaningful or meaningless also governed by the "frequency" of word combinations? Just as Shannon calculated the mathematical structure of written English (through frequency analysis of letter combinations), does a similar probabilistic logic underpin the human relationship with language symbols? In other words, is human processing of language and text analogous to how communication machines process numbers and words?
This is one of the key insights of cybernetics. Cybernetics examines the symbiotic relationship between machine behavior and animal behavior—including human behavior—particularly in signal transmission systems. As early as 1943, two leading cyberneticists, Warren McCulloch and Walter Pitts, published a paper arguing that the behavior of neurons in the human brain resembles the logical behavior of mathematical symbols, both governed by the presence or absence of signals. They insisted that the propositional logic driving neural activity is identical to the propositional logic driving machine operations. Their paper exerted a profound influence on subsequent research in neural networks.
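The 1943 model itself is strikingly simple, and a rough sketch makes the point concrete (the code is my paraphrase of the idea, not McCulloch and Pitts’s notation): a unit "fires" when its excitatory inputs reach a threshold and no inhibitory input is active, and with suitable thresholds a single unit computes a logical proposition.

```python
# A McCulloch-Pitts unit: all-or-none signals governed by a threshold.
def mcp_neuron(inputs, threshold, inhibitory=()):
    if any(inhibitory):                 # one active inhibitory input vetoes firing
        return 0
    return 1 if sum(inputs) >= threshold else 0

AND = lambda x, y: mcp_neuron([x, y], threshold=2)
OR  = lambda x, y: mcp_neuron([x, y], threshold=1)
NOT = lambda x:    mcp_neuron([1], threshold=1, inhibitory=[x])

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "AND:", AND(x, y), "OR:", OR(x, y))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```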
Cultural Review: Cybernetic scientists essentially treated the human brain as a machine. According to their theory, human understanding of the world also operates through an unconscious probabilistic structure, which seems to completely challenge traditional discussions of free will. From this perspective, how do you believe cybernetic research has impacted epistemology and the entire tradition of Western philosophy?
Liu He: "Free will" derives from ancient Western theology—a topic I will set aside here, as it involves many complex issues that would distract from our core discussion. The mathematicians and engineers in cybernetics research groups focused on solving specific problems arising during the Cold War, with no intention of elevating their work to philosophical inquiry. Nevertheless, certain members of these groups possessed a unique intuition, and in the process of addressing technological problems, they put forward fascinating philosophical insights.
Take Shannon, for example. He built numerous toy machines—which he called "useless machines"—such as a mechanical mouse that could navigate mazes. Among them was a small box with a button on top: pressing the button opened the lid, causing a hand to extend and push the button back down before retracting. Shannon named this the Ultimate Machine.
Arthur C. Clarke, the author of 2001: A Space Odyssey, saw this box at Bell Labs. Later, Clarke described it as a machine that "does nothing but turn itself off," admitting that it filled him with "a sense of unease." It seemed to mimic what Freud called the death drive—the instinct toward self-destruction.
Today, many scientists claim that we can replace human limbs with prosthetics, implant chips in the human brain, and achieve immortality. This pursuit is reminiscent of Emperor Qin Shi Huang’s search for the elixir of life—a quest to overcome death. In the age of modern technology, if the brain malfunctions, we can add a chip; if an arm is lost, we can replace it with a prosthetic; some even fantasize that one day, memories stored in the human brain can be uploaded and downloaded. Yet Shannon’s Ultimate Machine does nothing but turn itself off, and the word "ultimate" in its name carries profound implications.
Another illustrative example is the uncanny valley, a concept proposed in 1970 by the Japanese robotics engineer Masahiro Mori. The laboratory where Mori worked developed highly realistic bionic hands—their movement and skin texture were nearly indistinguishable from those of a living human hand.
However, when you shake this bionic hand, you find it cold to the touch—it looks like a living human hand but feels like a corpse. This ambiguity triggers a sudden sense of dread. Drawing on Freud’s concept of the "uncanny," Mori went on to develop an influential set of design standards for robotics. His explanation was Freudian: the unease stems from the ambiguity between life and death; it is, in fact, the very same uncanny feeling Clarke encountered in Shannon’s laboratory.
Of course, when confronting robots, we can explore deeper philosophical questions through the lens of human-machine relations. Mori’s design standards essentially address one question: When building robots or any human-like components, to what degree of realism should we aspire to avoid arousing suspicion or even fear in humans? In other words, we must produce machines—not humans or living beings—that pose no threat to humanity. Yet this ultimately boils down to the question of how to manage human-machine relations—the same recurring question of "whether humans can control machines."
Why is our imagination of human-machine relations perpetually trapped within this framework? This brings us back to the earlier discussion of the "master-slave dynamic"—humans have always considered themselves the center of the universe, superior to other animals and machines. Yet why do we simultaneously fear being replaced by the machines we create? Because the cybernetic worldview is rooted in a dynamic of "control and being controlled." If machines can control humans, they may eliminate humanity—a worldview deeply intertwined with the war technologies of the Cold War and even the earlier World War II.
For example, game theory, which emerged during World War II, conceptualizes global geopolitics as a chess game—a zero-sum game where one side’s victory is the other’s defeat, where one side must destroy the other. All these ideas and concepts have been reflected in our discourse on human-machine relations.
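The zero-sum premise can be made concrete with a toy payoff matrix (the numbers are mine, purely for illustration): whatever the row player wins, the column player loses, so each side can only secure its worst-case best, the maximin and minimax values that game theory revolves around.

```python
# A two-player zero-sum game: the column player's payoff is the negation
# of the row player's, so one side's victory is exactly the other's defeat.
payoff = [[3, -1],
          [-2, 4]]   # row player's winnings (illustrative numbers)

maximin = max(min(row) for row in payoff)                              # row's floor
minimax = min(max(payoff[r][c] for r in range(2)) for c in range(2))   # column's ceiling
print(maximin, minimax)   # -1 and 3: the value of the game lies between them
```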
Does this worldview have deeper psychological roots? This requires introducing a key psychoanalytic concept: human narcissism and the aggression it engenders. A common textbook definition of humans describes them as "animals that invent and use tools." Yet scientists have discovered that other animals also use tools—logically speaking, shouldn’t this definition be revised?
However, instead of revising the definition, people have begun to praise tool-using animals. What does this reveal? It shows that humans are constantly praising and affirming themselves, without engaging in genuine self-reflection. In our interactions with chatbots, we often witness this irrepressible narcissism—people project their desires, fears, and anxieties onto machines, imagining that they are conversing with another human being. This is, of course, a human problem—not a technological one.
In reality, chatbots only serve to distance us further from one another. You even lose the desire to understand others, becoming perpetually trapped in narcissism. Thus, a common scenario unfolds: when a person finds themselves in a situation without access to chatbots, forced to establish real relationships with others, they often encounter various emotional setbacks. To avoid these setbacks, they retreat to chatbots, engaging in narcissistic projection. The question here is: Can humans survive without social relationships?
It is not difficult to imagine the severe social consequences that may ensue—even the emergence of a pathological civilization. In fact, the earliest chatbot in the history of artificial intelligence was designed to simulate pathological human psychology. Named ELIZA, it was invented in 1966 by computer scientist Joseph Weizenbaum at the Massachusetts Institute of Technology. Weizenbaum wrote a simple program that allowed the machine to act as a "therapist" and the user as a psychiatric patient. When users input their problems, the "therapist" would engage them in "conversation," leading users to hallucinate that they were genuinely interacting with a therapist.
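The mechanism behind such a program is remarkably thin, which is part of Weizenbaum’s point. The sketch below is in the spirit of ELIZA rather than a reproduction of his DOCTOR script; the patterns and pronoun reflections are invented for illustration. It matches a keyword, swaps pronouns, and hands the user’s own statement back as a question.

```python
# An ELIZA-style responder: keyword matching plus pronoun reflection.
import re

REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (r"i need (.*)", "Why do you need {}?"),
    (r"i feel (.*)", "How long have you felt {}?"),
    (r"my (.*)",     "Tell me more about your {}."),
    (r".*",          "Please go on."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECT.get(word, word) for word in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            groups = match.groups()
            return template.format(reflect(groups[0])) if groups else template

print(respond("I feel trapped by my work"))
# -> How long have you felt trapped by your work?
```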
In the early 1970s, to advance chatbot technology, the Stanford psychiatrist Kenneth Colby, working with the university’s Artificial Intelligence Laboratory, developed human-machine dialogue further: his program PARRY simulated a paranoid schizophrenic patient while users acted as therapists. This essentially marks the origin of the chatbot lineage: from the very beginning, chatbots were designed to simulate human psychological pathologies—not intelligence. Yet as AI technology has advanced, some claim that chatbots can now engage in fluent, even in-depth conversations with humans, indicating that they have achieved human-level intelligence. Is this truly the case? If so, how do we explain the emergence of a pathological civilization?
Returning to your question about the epistemological turn and its impact on the tradition of Western philosophy: I believe that while modern Western philosophy has undergone significant development, encompassing numerous schools and traditions, its mainstream remains philosophy of consciousness. Cartesian mind-body dualism continues to exert a profound influence on contemporary philosophy, despite its many unsubstantiated assumptions about the brain, consciousness, and rationality—assumptions that completely ignore the existence of the unconscious.
Some philosophers, such as Leibniz and Kant, showed interest in the structure of the unconscious. However, the core concern of Western philosophers has always revolved around the question: "What is the human subject?" Today, when attempting to understand the new paradigm of human-machine relations, we must confront different questions: What is the structure of the unconscious? Where does the uncanny feeling originate? Why has the question of whether humans control machines or machines control humans become so acute? Yet once we begin to explore these questions, concepts such as "free will," its contradictions with technological determinism, and the underlying tradition of "philosophy of consciousness" are no longer sufficient to address the new philosophical questions raised by cybernetics.
Cultural Review: You just mentioned how the historical contexts of World War II and the Cold War shaped specific social psychologies, which in turn influenced perspectives on human-machine relations. Now that we are far removed from the Cold War and have entered a new era, how do you believe this framework—or these problems—will evolve?
Liu He: When I first published this book 15 years ago, I argued that if humans fail to properly manage their relationship with machines, they will evolve into "Freudian robots." This term refers to a phenomenon where machines imitate human behavior, and humans, in turn, imitate machines—in this interactive dynamic of "human-machine simulacra," machines and humans co-evolve and mimic each other. This forms a cybernetic feedback loop in itself.
Cultural Review: The idea of machines imitating humans is relatively easy to understand. What does it mean for humans to imitate machines?
Liu He: When Alan Turing invented the computer, he used the typewriter as a reference point. He noted that the computer’s superiority over the typewriter lay in its ability to write, read, and store information—using the word "scan" to describe the act of reading. Why do I mention this? Today, how many of us read articles on our phones not by "reading" in the traditional sense, but by "scanning"?
In this way, our memory capacity has undergone significant degradation. Memory has not only been "outsourced" to electronic devices but also the relationship between our eyes and brains has been fundamentally altered by machines. As a result, people are increasingly unwilling to master what Nietzsche called the "art of slow reading." Only by accelerating reading speed can we process the flood of information we encounter daily. Yet in reality, it is only through slow reading—even rereading—that humans can engage in genuine thinking and understanding. However, human reading habits are becoming increasingly machine-like, prioritizing efficiency and speed above all else. This represents a fundamental transformation: because machines cannot think, humans are gradually losing the ability to think as well.
The consequences of this human-machine interaction are extremely serious and alarming. They not only pose a crisis to our understanding of human capabilities and limitations, our relationship with the world, and even the definition of "humanity" itself but also raise numerous questions about humanity’s future survival.
In my book, I also reference the parable of the "Old Farmer of Hanyin" from the Zhuangzi. The story tells of Zigong encountering an old farmer who irrigated his fields by the laborious method of carrying water in a jar. Zigong asked him why he did not use a mechanical device, which would make the task far easier. The farmer replied: "He who uses machines will inevitably engage in mechanistic affairs; he who engages in mechanistic affairs will inevitably develop a mechanistic mind." Here, the "mind" in "mechanistic mind" refers to the human spiritual world, encompassing psychology, thought, emotion, ethics, and more.
Zhuangzi’s parable suggests that machines can transform our spiritual lives and reshape our social relationships. Therefore, I repeatedly emphasize that if we fail to clarify the human-machine relationship, we will inevitably become trapped in the feedback loop of human-machine simulacra, evolving into Freudian robots—becoming machines ourselves.
Is this trend inevitable? Is it irreversible? It is difficult to predict at present. But I believe that at the very least, we must begin to reflect on these questions. Once we can discuss and critically examine these issues, we open up new spaces for ideological inquiry.
Cultural Review: Humans imitating machines and pursuing the principles of efficiency and speed sound very much like the logic of capitalism. In the preface to this year’s Chinese edition, you also mention this keyword, which is less prominent in the main text. What do you believe is the relationship between the development of capitalism and the emergence of Freudian robots?
Liu He: There is a vast body of literature on the logic of capitalism, including studies of post-industrial society and contemporary media society. I am dissatisfied with much of this research because it shares a common limitation: it rarely delves into the intrinsic logic of digital technology itself to conduct meticulous research and philosophical analysis. Marx’s Das Kapital remains a seminal work precisely because it does not merely describe capitalism as a "social phenomenon" at a superficial level. Instead, it penetrates the intrinsic logic of commodities themselves, conducting a layered analysis of commodity dualism and labor dualism, and proposing concepts such as surplus value and socially necessary labor time.
Today, as digital technology exerts an enormous impact on human society and human-machine relations become increasingly intertwined, we cannot afford to treat digital technology as a mere "social phenomenon." The real challenge we face is whether we can conduct a meticulous and in-depth philosophical analysis of the intrinsic logic of digital technology itself, thereby grasping the pulse of our times—just as Marx did in his analysis of the intrinsic logic of commodities. I wrote The Freudian Robot to answer this crucial question. The entire book is dedicated to this goal, striving to provide a thorough historical analysis and philosophical reflection on the logic of digital technology itself.
Historically, the foundations of artificial intelligence and digital technology lie in cybernetics, which evolved from World War II and the Cold War. Therefore, neither artificial intelligence nor digital technology is the product of the isolated development of science and technology; they are closely linked to the historical development of capitalism, particularly to the capitalist war machine. For example, during the two world wars of the twentieth century, the immense driving force of military machines continuously encouraged technological innovation, deploying these innovations in service of war. The development of cybernetics is the most typical manifestation of this war logic. I have analyzed this in detail in my book and will not repeat it here.
The intrinsic logic linking digital technology and capitalist development has another dimension: the profit-driven nature of capital. In its pursuit of profit, capital seeks to reduce costs, particularly through the relentless pursuit of cheap labor—a key driver of the emergence of Freudian robots. Since the Industrial Revolution, automation has replaced manual labor; with the advent of the digital revolution, the popularization of computers and artificial intelligence has begun to replace mental labor. In numerous fields such as legal document drafting, programming, civil service reporting, healthcare, education, and entertainment, many people have lost or are at risk of losing their jobs. Not only are people being displaced from the workforce, but they are also becoming increasingly dependent on machines due to the erosion of social bonds—these changes are the historical conditions that may transform large segments of the population into Freudian robots.
Cultural Review: Since we are discussing capitalism and the Industrial Revolution, let us return to that historical context. Today, we talk about how AI and digital technology are changing human behavior and cognitive structures—was this trend not already evident during the Industrial Revolution, when automated machines replaced humans? For example, at that time, assembly line workers became increasingly like automated machines, required to perform precise, punctual, and efficient tasks, repeating the same work day after day, like human mechanical arms. What are the similarities and differences between this phenomenon and the Freudian robots we discuss today?
Liu He: When we discuss the Industrial Revolution, we primarily refer to the replacement and reorganization of manual labor by machines. For example, assembly lines dehumanized workers, creating an alienated relationship between workers, their labor, and their products—this is Marx’s classic theory of alienation. Through his analysis of commodities and labor value, Marx proposed the theory of human alienation, an important framework for analyzing capitalist modes of production and relations of production. However, when discussing digital technology, we cannot limit ourselves to the discourse of human alienation by machines; instead, we must fundamentally re-examine the definition of "humanity" and begin to explore the concept of the Freudian robot that I propose in my book.
Of course, many AI researchers argue that even if AI replaces white-collar mental labor, it will not matter—just as during the Industrial Revolution, we can strive to create new jobs, allowing white-collar workers to transition to other occupations or even stop working altogether. However, such arguments implicitly assume that the impact of digital technology on our society will be confined to labor, production, and the so-called economic base. I disagree with this view.
Like all machines, artificial intelligence will undoubtedly bring many conveniences to our lives and work. Yet the emergence of the Freudian robot signifies a fundamental transformation in human-machine relations, potentially leading to an unprecedented evolution of humanity itself. This evolution will not be limited to labor practices; it is already fundamentally reshaping all behavioral patterns in human daily life—our reading and writing habits, modes of thinking, ways of survival, relationships with ourselves, interactions with others, emotional expression, and even family and social relations. All these aspects will be reorganized through the medium of machines.
Cultural Review: Today, many technological optimists argue that when human jobs are replaced by AI and people no longer need to work, society will return to a bygone era of prosperity. Freed from labor, humans will have more opportunities to communicate fully and rebuild social relationships. However, based on your analysis, by that time, humans will have been completely transformed—not only will they not revert to an ideal state naturally, but they will also most likely face a new and unprecedented predicament.
Liu He: Exactly. In my book, I define a Freudian robot as "any networked entity that embodies the feedback loop of human-machine simulacra and cannot escape the cybernetic unconscious." The cybernetic unconscious refers to a learned automatism and repetition, such as the probabilistic nature of language discussed earlier. For example, in our daily lives, human communication can no longer occur without machines.
"Humans" have evolved into composites of "self + smartphone" or "self + computer." This is not the same kind of predicament brought about by the Industrial Revolution, nor is it the familiar problem of labor alienation. The term "networked entity" refers to the fact that humans are no longer isolated, atomized individuals; each person is a component of big data, embedded within a network. This is a social behavior that transcends the individual, existing outside the physical body. In this context, whoever controls the network controls everyone—thus, power relations have also undergone a fundamental transformation.
When discussing the cybernetic unconscious, I must mention the French psychoanalyst Jacques Lacan, a pivotal thinker of the twentieth century. Lacan argued that language operates like a machine, possessing a certain automatism and repetition. Human interactions are driven by language, not the other way around. Language is the fundamental bond that connects society; society cannot function without discourse, and language itself is an autonomous, external entity—much like a machine outside the human body.
The emergence of the cybernetic unconscious makes understanding the world particularly difficult; we must abandon all traditional concepts of consciousness philosophy, such as human agency. Lacan’s insights capture the essence of the human relationship with language. He arrived at these conclusions precisely because during the Cold War, he closely followed trends in technological development, particularly cybernetics, game theory, and communication theory.
Lacan further observed how probabilistic statistical methods have fundamentally transformed humanity’s understanding of society and nature, not only revolutionizing the paradigm of scientific and technological research but also challenging the traditional philosophical concept of causality. For example, modern medicine now relies heavily on big data, moving beyond the simplistic model of one cause leading to one effect. In fact, the pioneers of cybernetics were well aware of the philosophical implications of their research. For instance, they proposed the concept of circular causality, and Wiener developed the theory of the feedback loop—both of which ultimately converge on the computer’s mode of symbol processing and the cybernetic unconscious.
Cultural Review: Speaking of changes in cognitive patterns, we note that when cybernetics or the concept of "the human brain as a machine" first emerged, it dealt a severe blow to humanity’s belief in autonomy and free will. Yet after all these years, people seem to have accepted this concept as self-evident, readily acknowledging that the brain is governed by neural electrical signals while still assuming that humans possess autonomy. Today, the concept of the cybernetic unconscious no longer elicits such a profound cognitive shock. Why is this the case?
Liu He: This may precisely be a sign that humans are evolving into Freudian robots. Machines operate automatically; they have no cognition, let alone autonomy. Do Freudian robots have cognition? Perhaps they can pretend to have autonomy. Lacan once said that you do not realize a machine is operating until it malfunctions—just as you only become aware of how you walk when you injure your knee. We all know that the human brain is a network of neurons, yet we do not perceive how this relates to our behavior, passively accepting its automatic operation.
A typical example I have observed in the United States is democratic elections. Whenever the electoral machine is activated, people emerge to praise their democratic system, yet they do not question who the "people" are, why they are voting, or where their information comes from—this is, in fact, a form of mass manipulation.
The United States no longer has democracy—not because it is controlled by a single ruling class, but because democratic politics itself has evolved into an electoral machine. The ruling class only needs to control the electoral machine to control the electorate. The Democratic and Republican parties frequently hire mathematicians to calculate how redrawing electoral district boundaries will alter vote distributions (the practice known as gerrymandering)—a blatant contempt for voters.
The majority of voters either fail to recognize that democratic politics has degenerated into an electoral machine, naively believing that casting a vote defends their interests or upholds their beliefs, or they are aware that it is an electoral machine but have no alternative but to vote. In any case, each act of voting perpetuates the operation of the electoral machine.
Ultimately, we all live in an era of networked existence and the cybernetic unconscious. Traditional political, social, and cultural theories are unable to explain the epoch-making changes unfolding in our world. What can we do? I believe we need to seek new ideological resources and open up new spaces for thought—otherwise, we will all become Freudian robots.