On November 9, at the World Internet Conference in Wuzhen, DeepSeek senior researcher Chen Deli said that AI could replace "most human work" within 10 to 20 years, laid out a three-stage evolution path, and stressed that the prediction was "not alarmist."
DeepSeek is not alone in predicting that AI will replace humans at scale within the next 10 to 20 years.
Organizations such as the IMF, McKinsey, and OpenAI have made similar predictions. A Google search for "AI replacing humans" even turns up a media roundup of predictions from nearly 60 organizations.

Given that DeepSeek has little need for outside financing, it has little motive to exaggerate. That such an objective, neutral company has apparently concluded the trend is inevitable makes this the prophecy most worth taking seriously.
This raises a very practical question: as working people, what kinds of jobs are not easily replaced by AI?
There are many answers on offer, such as doing work that requires more creativity or more human interaction. But none of these claims ever seems fully convincing.
In the course of AI product development, I have worked on many context-engineering tasks, and a new perspective has emerged: if applying AI effectively depends on humans building a specific context for it, will work whose context is hard to construct be hard to replace?
Let me explain this logic in detail below.
Human work requires sufficient information; the same person can reach completely different judgments depending on whether the information at hand is adequate.
The same applies to AI, which is why context engineering is crucial for AI applications to truly land.
The essence of context engineering is to let the model know more: given the right information, the model activates the parameters appropriate to the problem at hand, improving accuracy.
However, the longer the context, the more easily the model loses focus.
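To make the mechanics concrete, here is a minimal sketch of what context engineering does: rank candidate snippets by a crude keyword-overlap relevance score, then pack the best ones into a fixed budget so the model is neither starved of information nor drowned in irrelevant text. The scoring heuristic, function names, and sample documents are all invented for illustration; they are not any particular framework's API.

```python
def relevance(query: str, snippet: str) -> float:
    """Fraction of query words that also appear in the snippet."""
    q, s = set(query.lower().split()), set(snippet.lower().split())
    return len(q & s) / len(q) if q else 0.0

def build_context(query: str, snippets: list[str], budget: int) -> list[str]:
    """Pick the most relevant snippets whose combined word count fits the budget."""
    ranked = sorted((s for s in snippets if relevance(query, s) > 0),
                    key=lambda s: relevance(query, s), reverse=True)
    chosen, used = [], 0
    for s in ranked:
        cost = len(s.split())  # word count as a crude stand-in for tokens
        if used + cost <= budget:
            chosen.append(s)
            used += cost
    return chosen

docs = [
    "the payment failed because the card expired",
    "company picnic scheduled for friday",
    "refund policy: a failed payment is never charged",
]
print(build_context("why did my payment fail", docs, budget=10))
```

The budget cap is the crux: it forces a choice about what the model gets to see, which is exactly the curation work the rest of this article is about.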
Take AI programming benchmarks as an example. A pattern is easy to spot: AI readily surpasses humans in competitive programming, but in real engineering environments it will struggle to surpass humans in the short term.
Competition problems are closed problems in a vacuum: both AI and humans draw on a limited context and pursue a clear goal, which makes them ideally suited to AI.
Engineering problems are different. The context AI can ingest is limited, and real codebases mix junk documentation with "shit mountain" legacy code. Such chaotic context is already tricky for humans, and harder still for today's AI.
Even with rich tooling and context engineering, sorting out and modifying the "shit mountain" still depends heavily on human programmers.
In other words, although competition problems theoretically demand higher intelligence than engineering problems, it is harder to create a context in which AI can exercise its intelligence on the latter. Replacing engineers will therefore be harder than most people expect.
This is a somewhat counterintuitive conclusion, but it shows up clearly in other fields as well.
Robots still run inefficiently and autonomous driving remains poorly deployed, yet AI crushes humans at Go. Artificial intelligence has always had this character: it excels at extremely hard problems in closed environments but fares poorly at running and driving in the open world.
So the core criterion for whether a job is easily replaced by AI is not the job's skill threshold, but whether its context is complex enough, and hard enough to perceive, from AI's point of view.
My own AI-automation practice over the past two years bears this out: give AI sufficient information and let it decide autonomously, and the results are usually good.
Conversely, when the real-world data cannot be digitized, or the necessary context cannot be supplied at the right stage, the project usually fails.
This brings us to the title of this article: how do you find a job that AI struggles to replace? Make sure the context of your work environment is complex enough. Call it practicing "context engineering" in reverse.
How to understand "in reverse"? Since AI solves problems by being given sufficient information, can working in an environment whose information flows oddly and resists standardization keep you from being replaced, at least in the short term?
Customer service staff, for example, are easily replaced by AI: the working environment is simple enough, just a merchant chat window and a problem manual, so the context is extremely clear. I personally saw such a systematic replacement plan just a month ago.
That plan is not yet mature: for small teams without standardized training processes and with unorganized context, the onboarding cost is high and the ROI insufficient.
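A hedged sketch of the kind of system such a plan describes: match the user's question against a small FAQ manual and escalate to a human when nothing matches well. The manual entries and the 0.5 threshold are invented for illustration; a real deployment would use embeddings and a language model, but the structure, clear context in and a confidence gate out, is the same.

```python
FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "what is your refund policy": "Refunds are available within 30 days of purchase.",
}

def overlap(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two strings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def answer(question: str) -> str:
    """Answer from the manual when the context is clear; otherwise hand off."""
    best = max(FAQ, key=lambda q: overlap(question, q))
    if overlap(question, best) >= 0.5:
        return FAQ[best]
    return "ESCALATE: forwarding to a human agent"

print(answer("what is your refund policy"))   # clear context, answered from the manual
print(answer("my parcel arrived damaged"))    # messy context, escalated to a human
```

Note where the cost lives: writing and maintaining the FAQ dictionary is exactly the "organizing the context" work that makes or breaks the ROI for small teams.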
Junior lawyers may also be a high-risk position. Legal work is highly standardized: case information is typically concentrated in a handful of files, and the laws themselves are public documents.
So although conventional wisdom sees a huge skill gap between a novice lawyer and a customer-service rep, in the face of AI their job thresholds may be equally low.
Software engineers, however, are a different story. Most are mocked as CRUD engineers, yet most of them, especially veterans, are skilled at untangling a pile of shit.
The work may be dirty, tiring, and laborious, even technically unglamorous, but it is precisely what AI cannot replace.
This is one of the core points of this article: judge which jobs are safer through the lens of "contextual complexity."
Many earlier arguments were too crude, assuming AI would take over all intellectual work such as coding and poetry, leaving humans only to sweep floors and haul bricks.
The real key is whether the context the job requires is complex or simple, and whether that context is hard to obtain or already standardized. The easier the context is to retrieve and the clearer its content, the more likely the job is to be replaced.
Judging from this perspective lets us predict career life cycles more accurately and avoid misjudgment.
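As a toy illustration of the criterion, one could imagine scoring a job's replaceability from how digitized and standardized its context is, minus how much rests on tacit, unwritten knowledge. The factors and weights below are entirely made up, not an empirical model; the point is only the direction of the comparison.

```python
def replaceability(digitized: float, standardized: float, tacit: float) -> float:
    """All inputs in [0, 1]; higher output = easier for AI to replace."""
    score = 0.4 * digitized + 0.4 * standardized - 0.2 * tacit
    return max(0.0, min(1.0, score))  # clamp to [0, 1]

# Customer service: chat logs plus an FAQ manual, little tacit knowledge.
print(replaceability(0.9, 0.9, 0.1))
# Legacy-codebase engineer: context scattered and largely unwritten.
print(replaceability(0.4, 0.2, 0.9))
```

Under these invented weights the customer-service score comes out far higher than the legacy-code engineer's, matching the ranking argued above.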
With this set of guidelines, what can we do in the face of the AI wave?
Approach 1: Become a Luddite in the New Era
One crude approach is to collude with colleagues and deliberately build a "shit mountain" work environment that only humans understand.
This is in fact feasible: at many people's level, just working normally grows the shit mountain on its own, without even attracting notice.
However, this approach is usually of little use, because it is no different from the Luddites of history.
In the early 19th century, as power looms and other new machines were introduced, masses of textile workers who lived by traditional manual skill saw their livelihoods devastated.
Their response was to smash the machines, and those who defended their rights this way came to be called Luddites.
Modern people know the Luddites failed, but may lack a sense of the concrete numbers.
On the eve of the Industrial Revolution, the textile trade absorbed a large amount of labor and was vital to poor families in particular. Britain then had some 180,000 textile workers, roughly 60% of them women and children.
Over the following decades, machine spinning all but destroyed the trade, causing large-scale, long-lasting technological unemployment, especially among women. The jobs created by the new factory textile industry came nowhere near compensating for those lost.
In 1806 the average weekly wage for textile workers was 240 pence; by 1835 it was about 60 pence.
As you can see, passive resistance rarely ends well.
Approach 2: Do Work That AI Handles Poorly
The proactive approach is to move toward tasks AI finds difficult.
Face-to-face enterprise sales, for example, is hard for AI to replace: the sales context is extremely complex, full of unwritten rules that cannot be written down.
If you are a programmer, you might consider switching to product manager, since most product managers' job is to implement the boss's unrealistic strategies and take the blame for them.
In short, the best bet is non-standardized work, or work whose standardization cost exceeds the benefit of automation. As in the customer-service example above, if organizing a knowledge base for AI costs more than hiring a few humans, AI will struggle to replace those jobs.
Of course, those salaries will also struggle to rise much above AI's electricity bill.
But I believe that as context-engineering infrastructure improves, such jobs will only shrink in number, and not everyone is cut out for sales.
Approach 3: Dance with AI and become the fastest runner
A common view holds that AI will not eliminate people but only make them stronger, which matches this third line of thinking.
It sounds hopeful, painting a future of harmonious human-AI coexistence and mutual empowerment.
But in reality it mostly just intensifies competition among humans.
The NBER working paper "AI and the Extended Workday: Productivity, Contracting Efficiency, and Distribution of Rents" finds that widespread AI use, far from lightening workloads, has significantly raised entry thresholds and performance standards in the workplace.
In this environment, the "growth mindset", the attitude of embracing learning and willingly adapting, becomes the perfect excuse for the system to exploit people more deeply.
When everyone holds the powerful "cheat code" of AI, falling short of higher performance becomes the individual's fault: "you haven't talked with AI enough," or "you haven't fully used AI to improve yourself."
This narrative neatly reduces the structural, social, and economic impacts of AI to a purely individual mindset problem.
It implies that whoever embraces change will be invincible.
Yet, as the Luddite dilemma described earlier shows, structural unemployment stems from technology's fundamental reshaping of the production system, not from individual will.
Pinning the responsibility entirely on individual mindset is like blaming the Industrial Revolution's unemployed artisans for not working hard enough.
It is more an ideological construction, one that rationalizes the inequality technological change brings and converts social responsibility into individual responsibility.
More noteworthy still, most of the gains from automation will accrue to capital rather than to labor.
A 2017 NBER working paper, "Artificial Intelligence and the Future of Income Distribution," points out that the gains from AI substitution flow mainly to capital, while the workers replaced by AI are almost pure losers.
Simply put, the platform takes the gains and the workers take the blame, a plain double standard.
Approach 4: Focus on the Distribution Problem
So although this article discusses how to avoid being replaced by AI, at root, individual efforts to avoid replacement may be beside the point.
From an efficiency standpoint, AI replacing humans is almost inevitable, and resistance at the individual level may well be futile. But here is the thing: if the productivity AI creates is many times ours, why agonize over unemployment? Shouldn't we be celebrating our sprint into communism?
This leads to approach four: focus on the distribution problem.
In 2023 The Guardian published an article titled "AI is coming for our jobs! Could universal basic income be the solution?"
It explores a question: if AI is going to replace the jobs most people rely on for survival, should universal basic income, advocated for years, finally be put on the agenda?
Given that AI can greatly raise productivity, this is close to consensus. Under a reasonable distribution system, what everyone agonizes over should not be unemployment but what to do with their lives.
Conversely, if productivity is already this advanced and we still focus on how ordinary people can dodge replacement by AI, that is plainly a stopgap; it does not touch the root problem.
The root problem is how to distribute the surplus once productivity is this developed and automation this pervasive.
By this point, almost everything that needs writing has been written. I'm afraid the article won't get published if I keep going, so let's stop here.
References:
[1] 60+ Stats On AI Replacing Jobs (2025)
https://explodingtopics.com/blog/ai-replacing-jobs
[2] What happens to the weavers? Lessons for AI from the Industrial Revolution
[3] AI and the Extended Workday: Productivity, Contracting Efficiency, and Distribution of Rents
https://www.nber.org/papers/w33536
[4] Artificial Intelligence and the Future of Income Distribution
https://www.nber.org/system/files/working_papers/w24174/w24174.pdf
[5] AI is coming for our jobs! Could universal basic income be the solution?