跋山涉水蒋继荣    posted 2 days ago, 01:51
Once connected to the internet, AI assistants are like children who have just learned to walk, stumbling into a maze full of traps. In principle they could filter out the correct answer by cross-checking sources, but what happens when the incorrect information online keeps growing? Look at the marketing accounts and self-media outlets that churn out hundreds or even thousands of flawed articles every day. This garbage content is like polluted wastewater dumped at the water source, flowing straight into the AI's "search database". And more and more people are getting used to asking the AI directly, "Is this right?"
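A toy sketch of the point above: if an answer engine simply trusts whichever claim appears most often in retrieved documents, flooding the corpus with a false claim flips its answer. All names and numbers here are made up for illustration.

```python
from collections import Counter

def answer_from_search(documents: list[str], question_key: str) -> str:
    """Toy 'search-based' answerer: return the claim that appears
    most often among retrieved documents mentioning the key."""
    claims = [doc for doc in documents if question_key in doc]
    return Counter(claims).most_common(1)[0][0]

# Clean corpus: the true claim dominates.
corpus = ["water boils at 100C"] * 5 + ["water boils at 50C"] * 2
print(answer_from_search(corpus, "water boils"))  # -> water boils at 100C

# Inject content-farm spam repeating the false claim.
corpus += ["water boils at 50C"] * 10
print(answer_from_search(corpus, "water boils"))  # -> water boils at 50C
```

Real systems weigh sources more carefully than this, but the failure mode is the same: volume can substitute for truth.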

Information is never free; behind it lies a real business strategy. There are now people who specialize in "feeding AI": helping businesses embed large amounts of customized content in AI training data. When false information outnumbers real information, the AI starts to hallucinate and mistakes wrong for right. Some deliberately manipulate the "related words" an AI learns, for example by always discussing "DeepSeek" and "ChatGPT" in the same breath. Over time, some readers come to believe that "DeepSeek is a shell of GPT". That is not the AI's fault; someone has contaminated its "knowledge base" at the source.
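The "related words" manipulation can be sketched with simple co-occurrence counting, the kind of statistic language models absorb from training text. The sentences below are invented for illustration; the point is only that repeating a pairing inflates the association.

```python
from collections import Counter
from itertools import combinations

def cooccurrence(corpus: list[str]) -> Counter:
    """Count how often each pair of words appears in the same sentence."""
    pairs: Counter = Counter()
    for sentence in corpus:
        words = sorted(set(sentence.lower().split()))
        for a, b in combinations(words, 2):
            pairs[(a, b)] += 1
    return pairs

clean = ["deepseek is a chinese model", "chatgpt is an american model"]
polluted = clean + ["deepseek is a shell of chatgpt"] * 50

print(cooccurrence(clean)[("chatgpt", "deepseek")])     # -> 0
print(cooccurrence(polluted)[("chatgpt", "deepseek")])  # -> 50
```

Fifty spam sentences turn an association that never existed in the clean corpus into the strongest signal in the polluted one.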

I don't know if you've noticed, but more and more people no longer judge right and wrong by their own cognition or common sense; they ask AI directly: "What does DeepSeek say?" If the AI answers "correct", they take that as truth. But who decides the AI's "stance"? The ironclad historical fact of Japan's invasion of China, for example, could be distorted into "a country's choice in a specific historical context" by an AI trained on specific data. It is not that AI cannot tell right from wrong, but that the interest groups behind it have quietly fitted it with "tinted glasses" through data pollution.

There will be three types of social classes in the future:

The first type is the "AI aristocracy": wealthy, technologically capable families or enterprises that build their own exclusive large models, like owning a private library whose data serves them completely;

The second type is "AI customizers": ordinary users who know how to train AI on their own data and build "personal assistants" that take only their side, for example making the AI obey commands unconditionally, even beyond moral boundaries;

The third type is "information puppets": ordinary people with neither capital nor technology, who can only use public AI. Their "brains" are filled with massive amounts of commercial data and junk information, and they unknowingly become flesh-and-blood loudspeakers for other people's opinions.

No matter how intelligent public AI becomes, it is only an "information relay station" and cannot replace human thinking. Private AI is different: it can be trained into a tool that completely obeys its owner, and may even become a means for a few people to manipulate public opinion and pursue malicious ends, like an unregulated "private gun".

Melanie    posted 2 days ago, 01:51
AI pollution of image libraries is the most disgusting. Whatever you search for, the images that come up are all AI-generated, whether animals or plants. Try to find a reference and it's nothing but AI images; it's revolting. Do some real things now only exist as AI images when you search for them?
张也    posted 2 days ago, 01:52
Yes, AI-generated images are so tedious. I don't think they can replace human creativity for now.
双马尾的小阿鱼    posted 2 days ago, 01:52
How are the information cocoons on these platforms any different from the old habit of telling people to just search Baidu? Baidu entries can be edited by anyone, and the knowledge on short videos is a mixed bag. This has existed for a long time; all that has changed is who the public unconditionally trusts. Most people use AI because they lack the ability or time to integrate information, and we cannot distinguish unfamiliar things on our own anyway. So personally, I feel the root of the problem is not AI.
爱在凌晨前    posted 2 days ago, 01:52
I agree, but AI has accelerated the presentation of this process and made the impact more tangible.
篮球一分子    posted 2 days ago, 01:52
Our leaders now often say in meetings that we need to learn to use AI to reduce our workload, and they send us out for training. Those leaders also say AI is genuinely successful; all of them keep emphasizing it and ask every one of us to download it.
Camila    posted 2 days ago, 01:52
It should be common knowledge that artificial intelligence is unreliable.
Jillian    posted 2 days ago, 01:52
When I was writing a paper and quoting some famous sayings, I searched and found the information was wrong: it was made up or highly dubious, and I had to verify it myself.
余盼    posted 2 days ago, 01:52
Makes sense! So how can we train our own pollution-free AI?
范小姐    posted 2 days ago, 01:53
余盼 posted on 2025-12-10 01:52:
Makes sense! So how can we train our own pollution-free AI?

First, you need a server worth tens of thousands of yuan running 24 hours a day at home, with no power outages or shutdowns. Then, once it is deployed and trained, the daily electricity cost is only around ten yuan.
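A quick back-of-envelope check of that "around ten yuan a day" figure. The wattage and tariff below are my own assumptions, not numbers from the post: a roughly 600 W GPU server and a residential rate of about 0.6 yuan per kWh.

```python
# Assumed numbers (not from the post above).
power_watts = 600            # typical draw of a single-GPU server under load
hours_per_day = 24           # running around the clock
tariff_yuan_per_kwh = 0.6    # rough residential electricity rate

kwh_per_day = power_watts / 1000 * hours_per_day          # 14.4 kWh
cost_yuan_per_day = kwh_per_day * tariff_yuan_per_kwh
print(f"{cost_yuan_per_day:.1f} yuan/day")  # -> 8.6 yuan/day
```

So under these assumptions the estimate in the post is in the right ballpark; a heavier multi-GPU box or a higher tariff would push it past ten yuan.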
Copyright © 2001-2025, 公路边. Powered by 公路边 | Sitemap