A conversation with former OpenAI scientists: GPT-5, innovation, and the deception of AI

After a long wait and a difficult birth, GPT-5 was finally released on August 8.

Many Chinese media outlets still call it the "strongest base model in history", but in fact outside observers have raised many doubts about GPT-5: the launch event contained pseudo-scientific explanations and demo errors, and in some people's eyes GPT-5's performance gains fall far short of expectations.

Against this backdrop, I spoke with two former OpenAI scientists, Kenneth Stanley and Joel Lehman. They worked at OpenAI in the pre-ChatGPT era and are the authors of the book 'Why Greatness Cannot Be Planned'. (The podcast was released back in September; the text version has only just been finished. Apologies for the delay.)

In 2023, this book sparked a reading craze in China's technology industry. That was the year ChatGPT swept the globe and set off the large-model wave. Yet Ken and Joel chose to leave OpenAI. As early as 2015, they had put forward a distinctive view: innovation requires abandoning the pursuit of goals in favor of novelty, and all great innovations are unexpected.

At the time, this argument was not widely recognized or accepted. It was only strongly validated with the emergence of ChatGPT at the end of 2022, because ChatGPT itself was an unplanned project.

Three years on, Ken and Joel continue to promote the idea of 'goalless innovation', and the two AI scientists have kept watching the AI community. In this interview, we talked about OpenAI's history and how it sits on both sides of the innovation curve. When GPT-5 appeared and disappointed, Ken and Joel suggested this may simply mean that AI research will become interesting again. It is worth noting that although the world has spent the past few years watching and chasing the progress of large models, in their view academia has become more boring than it used to be.

These are two evangelists of innovation. This issue revolves around AI, but it is not limited to AI.

The following is an excerpt from the interview, edited for readability.

1、 When you don't tell the robot where the goal is, it escapes the maze faster

Wei Shijie: The Chinese edition of 'Why Greatness Cannot Be Planned' came out in 2023, when OpenAI was already world-famous. One of the book's selling points is that its two authors are core scientists from OpenAI, but in fact you wrote it back in 2015. When ChatGPT appeared, did you realize that the book's thesis about how to foster innovation had been strongly validated, given that ChatGPT was itself an unplanned project?

Kenneth: Yes. No one realized that ChatGPT would have such a significant impact, which is consistent with the serendipity we describe in the book.

Joel: Not only was ChatGPT's success unexpected, but all the small stepping stones leading up to it were unexpected too, such as the Transformer architecture that ChatGPT is built on, which came out of Google's research. Moreover, many of OpenAI's early explorations had nothing to do with language models. They had projects like robotic hands that could solve a Rubik's Cube and agents that played video games, which were very cool and exciting.

Wei Shijie: This book digs deeply into what makes fertile soil for innovation. The stories all begin in 2008, and I understand the original inspiration came from Kenneth's unexpected discoveries in a series of experiments, such as teaching robots to walk and getting them out of mazes. Is that right?

Kenneth: Actually, I think it goes back before the robots, to Picbreeder, an experiment we ran to try to build an 'open-ended system'.

'Open-ended systems' are systems that can sustain innovation, creation, and discovery over the long term. Human civilization is an open-ended system.

To build an 'open-ended system', we designed the Picbreeder experiment: people log onto a website, see a set of images, and click on one; that image then generates 'offspring'. Just like children in real life, the offspring differ slightly from the 'parent'. What you are doing is 'breeding' images, generation after generation.
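The core loop Kenneth describes can be sketched in a few lines. This is a minimal illustration, not Picbreeder's actual implementation: here an 'image' is just a list of numbers standing in for a genome, and `pick` stands in for the human who clicks on a favorite offspring.

```python
import random

def make_offspring(parent, n_children=8, mutation_rate=0.1):
    """Each child is a slightly mutated copy of the parent genome."""
    return [[g + random.gauss(0, mutation_rate) for g in parent]
            for _ in range(n_children)]

def breed(initial_genome, pick, generations=5):
    """Repeatedly present offspring and let `pick` choose the next parent,
    just as a Picbreeder user clicks the image they like."""
    parent = initial_genome
    lineage = [parent]
    for _ in range(generations):
        parent = pick(make_offspring(parent))
        lineage.append(parent)
    return lineage

# A simulated user who always picks the first child:
lineage = breed([0.0] * 4, pick=lambda kids: kids[0])
```

Note there is no fitness function anywhere: the direction of the search is entirely whatever catches the picker's interest at each step.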

The driving force behind writing this book was a striking observation: the best way to discover great things is not to search for them deliberately.

For example, if you set out to breed an image of a bird, you will always fail; bird images only appear when you are not deliberately searching for them. Likewise, if you think only about butterflies, you will never breed butterfly-like images.

This result was shocking. It seemed to contradict everything I believed and had been taught. I have a background in computer science and engineering, and my education said that the way to achieve something is to set it as a goal and move toward it purposefully. But the Picbreeder results showed exactly the opposite: the best way to achieve a significant success is not to pursue it deliberately.

I was fascinated by this observation. It was so strange that I began to think it might have deeper implications. I started to realize that diversity itself may be the engine of success, and that novelty is a way to acquire diversity.

Joel got involved at this point. I still remember the first time we met: he walked into my office and I poured all these ideas out to him. I said, maybe you could create an algorithm that searches specifically for novelty. He took it all in and actually built this novelty search algorithm. This is where the robots came in: we ran a series of experiments combining robots with novelty search, such as getting robots out of mazes as quickly as possible. The answer is that when you don't tell them where the goal is, they escape the maze faster. I won't go through them one by one; that would take too long.
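The counterintuitive idea in novelty search is that candidates are scored by how different their behavior is from everything seen before, with no reference to any goal. A simplified sketch under my own assumptions (a 1-D behavior space and a greedy selection loop; the real algorithm uses richer behavior descriptors and a population):

```python
import random

def novelty(behavior, archive, k=3):
    """Novelty = mean distance to the k nearest previously seen behaviors."""
    if not archive:
        return float("inf")
    nearest = sorted(abs(behavior - b) for b in archive)[:k]
    return sum(nearest) / len(nearest)

def novelty_search(step, start=0.0, iterations=50, pop=10):
    """At each iteration, keep the candidate whose behavior is most novel.
    No goal location is ever consulted."""
    archive = []
    current = start
    for _ in range(iterations):
        candidates = [step(current) for _ in range(pop)]
        current = max(candidates, key=lambda c: novelty(c, archive))
        archive.append(current)
    return archive

# Random-walk candidates; the search still spreads outward, because
# staying near known behaviors scores poorly:
archive = novelty_search(lambda x: x + random.gauss(0, 1))
```

In the maze experiments this is exactly why the robots escape faster: rewarding distance-to-goal traps them against deceptive walls, while rewarding never-before-seen behavior pushes them through the whole space, exit included.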

But Joel and I realized that the social implications of this were greater than the algorithm itself. It may matter enormously for how human systems are run, because in the past almost all human systems have been oriented around goals.

Wei Shijie: Why did you feel it had to be written into a book?

Kenneth: Once, at an AI conference, a student from the Rhode Island School of Design invited me to give a talk at their school. This was an audience of artists, outside the academic AI circle, and I was a bit excited.

But it exceeded my expectations. When I told them that innovation doesn't have to start with clear goals, the students were very excited, and some of them nearly cried. They said: I have never been able to explain my choices to my parents or teachers, because they always ask, 'How will you make money? What is your life goal? Why do you do these meaningless things?' This is the first time I have felt able to articulate why I decided to spend my life this way: because I am pursuing an interesting path.

This made the idea even clearer to me: if we could write it down, it could be a great book. If people shed tears over this, it must be very important. It would be an important social conversation.

2、 Recalling the early days of OpenAI: Sam was a good leader at that time

Wei Shijie: After finishing this book, you went on to join Uber AI Labs (2016-2019) and then OpenAI (2020-2022). Why did you choose these two companies? How do you identify whether a startup has the capacity to innovate?

Joel: It’s difficult to know beforehand. But I think an interesting startup is usually made up of interesting people, and interesting people usually have interesting ideas.

Kenneth: Joel and I joining these two companies was a bit like what the book describes: neither was planned; it was just coincidence. But in 2020, I thought OpenAI was a good company. It had many smart people and was working on very interesting projects, so it was not surprising that innovations would emerge there.

As for how to identify attractive startups, it depends on who you are; an investor and an employee have different evaluation criteria. As an employee, I would ask: does it align with my interests?

As an investor, my perspective would be slightly different. I would not be so concerned that everything must succeed. I should bet on things that are innovative and interesting enough: if such a thing succeeds, it will be a huge success, because it is completely different from everything else going on.

If it is novel enough, it is worth taking this risk.

Wei Shijie: What interesting explorations did OpenAI have at that time?

Kenneth: Oh, at OpenAI in 2020 I saw the early GPT models. That is old news now, but at the time the technology was advancing very quickly; it was fascinating, even shocking. Now the future has arrived.

Joel: It was really interesting back then; there were indeed many smart people there. As Ken said, you could get exposed to emerging technologies ahead of everyone else. That experience was extraordinary.

Wei Shijie: Who did you respect and like the most (at OpenAI)?

Joel: This is an interesting question. I have great respect for Ken; we led the team together. But in terms of personal fondness, I really liked Jan Leike. He led safety work there, and he is a good person; I really enjoyed chatting with him. But as you know, there are many interesting people at OpenAI, so it's really hard to choose.

Kenneth: This is a very difficult question. Of course I would choose Joel, but beyond that, I respected many people there, most of the leadership. I really don't have a definite answer to give.

Wei Shijie: Sam has publicly expressed his admiration for 'Why Greatness Cannot Be Planned', but after the OpenAI boardroom drama, Sam Altman's public image has become blurred and controversial. I have heard the view that Ilya is the true god and Sam a false one. As former members of OpenAI, how do you view Sam's leadership of the company? What kind of person is he?

Kenneth: I worked at OpenAI from 2020 to 2022, before all those controversial events. But during that time, I thought Sam Altman was a good leader. He was pragmatic, cautious, and a good communicator, and he weighed issues carefully. At the time, I saw nothing to criticize.

Like everyone else, I was surprised by the conflict at OpenAI. I didn't expect it; hearing about it was a huge shock. You know, Sam seemed firmly in control of the situation, and I didn't expect anyone to try to remove him. So, like everyone else, I was completely confused, and even today I am not sure exactly what happened.

Joel: Yeah, it was a series of extraordinary events; it was all a bit crazy. You know, with that much money and power involved, people were striving to do what they each thought best, but from different perspectives.

3、 The field of AI used to be much more interesting than it is now

Wei Shijie: With the release of GPT-3.5 at the end of 2022, once the results of 'open-ended exploration' became visible, the global artificial intelligence industry entered fierce competition. Did you feel the change in atmosphere while at OpenAI? How was it perceived internally?

Joel: I think it's fair to say there were significant changes over time. When you have a product with such huge impact, you start to shift from pure exploration toward exploitation. You could feel the gears beginning to turn toward exploitation mode.

As time went on, overall attention kept increasing, and the technology itself is relatively simple (I mean training language models on ever-larger datasets with more and more GPUs), so others started to follow suit. Competition intensified.

Kenneth: I would agree with Joel that there were indeed some changes within OpenAI, because when you strike gold, you tend to focus. The most dramatic changes probably happened after we left, because the truly viral ChatGPT changed the company enormously.

Wei Shijie: What did that look like, specifically?

Kenneth: The company's applied business side grew rapidly. It started as a research laboratory, but suddenly it had a powerful commercial arm, so the internal nature of the company changed. Of course, they are still doing research, but suddenly the company became very business-oriented.

Wei Shijie: In 2023, OpenAI faced a crossroads. On one side was its DNA as a nonprofit that keeps exploring frontier technology; on the other, the race to become the next tech giant. Is there any misunderstanding of this company from the outside?

Kenneth: 2023 was after we left, so I can only guess at the situation. But I suspect they would not frame it in such stark terms. I think they want both: to keep building a great research laboratory and to seize the business opportunity. Everyone there knows research is important; I think Sam knows it too.

But obviously, the huge business opportunity in front of them could not be ignored, and undoubtedly that has had some impact on the company's culture.

Wei Shijie: Do you feel regret about the change in OpenAI's atmosphere?

Kenneth: I don't feel regret. I think the changes we are witnessing are changes in the entire industry, not just an OpenAI issue. What I mean is that AI itself has genuinely transformed; I have worked in the field for a long time, and AI used to be a small discipline, whereas today it has enormous social impact. When something that was almost a mini-game suddenly becomes the most important thing in the world, it takes time to process and absorb. It is a very complex, difficult transition.

Wei Shijie: What about Joel?

Joel: I feel a bit of regret, not out of nostalgia for pre-revolution research, but because the field used to be more diverse, full of interesting and crazy studies. Now there are too many papers about language models. I have the feeling that language-model work is spreading through society very quickly. It is super fun and exciting, and I have been using these tools all along. But the field's past was more interesting than its present.

Scaling is an important science too. But for some reason, technology that advances only through scale holds less intellectual appeal for me, even if it delivers astonishing results.

Wei Shijie: Would you personally prefer OpenAI to be a great commercial company or a great research institution?

Joel: I think OpenAI's legacy will ultimately be as a great research institution, although currently it is better known as a commercial giant. I always expected it to be something like a research institute; when I joined, I did not expect it to become a business giant.

Kenneth: The same goes for me. It is quite surprising that it became such a huge commercial organization. In 2020 it was more like a research institute, and in some ways that matched our book: if you had read it in 2019, you would have had no idea what OpenAI's business plan was. The absence of a truly clear plan is the reason for all the success; OpenAI simply followed opportunity as it arose. Interestingly, the result was a great success.

4、 On GPT-5 and sustainable innovation: after a long stretch of boredom, the AI field is exciting again

Wei Shijie: GPT-5 had a difficult birth. It was finally released a few days ago, and some people believe OpenAI has entered a phase of incremental, 'toothpaste-squeezing' innovation. What do you think?

Kenneth: Hmm, let me think. I think there is a common question facing AI right now: is the path of the past few years, relying solely on scaling compute and data (the Scaling Law), really the path to AGI? Interestingly, we have seen signs of the Scaling Law slowing down. People seem to disagree on this question.

I believe the scaling path by itself cannot carry us all the way to the so-called holy grail; it is not the only important ingredient of intelligence. If we only follow this path, it is not surprising that new models do not always bring revolutionary results.

In fact, as a researcher, I think this is precisely a very interesting period. If progress slows, there is room for more interesting ideas to come in and start new work. From a research perspective, the field of AI may become exciting again.

Joel: I think it's no exaggeration to say the leap from GPT-3 to GPT-4 was more profound than the leap from GPT-4 to GPT-5. Why? I think the Scaling Law path is running out of steam. Again, this is more my personal opinion.

I suspect that is the situation. The way these models are trained is somewhat strange compared to how humans come to understand things, and they may be only stepping stones to what comes next. I don't think the Transformer is the end of this story, so I am looking forward to something different appearing.

Wei Shijie: Does OpenAI's story tell us that once 'open-ended exploration' succeeds and produces visible results, it easily attracts fierce competition or various temptations that disrupt the innovative atmosphere? Is this hard to avoid?

Kenneth: Yes, I think this is an unavoidable danger. Throughout history we have seen innovators do revolutionary things, rise to the top, and then be overturned by some unknown new innovator. Such stories keep happening.

There is indeed a phenomenon in which great success can make you conservative. It is a deep problem, because it is not just that someone loses the will to innovate; it is that the path that led to your innovation will not deliver a second success.

You followed certain principles and heuristics on the road to an amazing discovery, and you succeeded. You think you can do it again, but in reality the next innovation always has a different story. The model that worked in the past is no longer the model that will work in the future, and I believe all innovators fall into this trap.

Joel: That’s why some innovators reshape themselves every 10 years or so. It’s really difficult to resist one’s own nature, intuition, and curiosity.

Wei Shijie: How do you view Google? It is not only a great commercial company, but also the birthplace of AI research.

Joel: Yes, talking about Google raises a very interesting point: they invented the Transformer architecture, and it was that architecture that truly sparked the language-model revolution. As a large company, they may indeed have been more cautious in deploying the technology, which is not necessarily a bad thing. I remember Google had a chat-like model earlier.

Wei Shijie: Google was the first to discover the direction of large models, but it missed out on GPT, and it lost a large number of AI researchers.

Joel: Yes, but once the giant awakened, I think they are now basically on par with Anthropic and OpenAI, and may have caught up in some respects. It is true that a larger company can become more bureaucratic and run into all sorts of problems from being swallowed by its own success. This is a story that keeps repeating itself, but they are still doing excellent research.

Kenneth: OpenAI was an unexpected disruption. Obviously, watching things that originated at Google develop this way is a painful story for Google. But this is the eternal story of innovation; it plays out time and time again, just like Kodak's failure to regain its footing after digital photography disrupted it. Such stories keep playing out in technology. That said, I believe Google has, as Joel said, regained its footing to some extent, so it is now quite competitive. And DeepMind has certainly achieved considerable accomplishments beyond language models, such as AlphaFold. But they did miss an opportunity, and I'm sure they feel regret about it.

Wei Shijie: How can mature enterprises avoid having goal management stifle innovation?

Kenneth: This is a big issue, and a very important one for companies like Google. To innovate, you sometimes have to give up goals. As companies mature, dominate, and achieve great success, they become more goal-oriented. Google is one of the popularizers of OKRs and KPIs (objectives and key results; key performance indicators), and these methods are applied across the industry. It is a bit of a reinforcing loop: goals led us to success, so we set more goals. In that environment it is genuinely hard to say we should cut bureaucracy, remove goals, and let some people do interesting things. Large companies struggle with this.

Big companies will say: let's establish a research laboratory, like Google Brain. Everyone should be happy, but what actually happens is people realize that evaluation will ultimately track the company's OKRs. They see colleagues promoted for doing things that contribute to Google's profit. When you see that, you may think: they claim to care about exploration, but the people who get rewarded are the ones driving the goals. So I will respond to that signal too, because I also want a bonus and a raise this year.

That's why startups often have a disruptive advantage. If you set up an innovation lab inside a large company, people outside the lab become jealous and angry. The people in the lab don't have to advance the goals; they can do anything they find interesting. So others say, 'Why do those people get to do this? They sit on piles of money and do nothing, like children on a playground.' That is a cultural risk for large companies. The whole contradiction is that, from the standpoint of innovation, doing 'useless' things is actually the best approach.

So I believe the challenge facing large companies is enormous and takes real courage. If you tell me I must hit goal X this year or I won't get a raise, I feel safe. If instead you tell me I can do whatever I like, I feel insecure. That is the point of our book; I believe it is meant to give people this courage.

I hope the leadership of these companies reads the book, not because it would make Joel and me money, which would be negligible for us, but because it can change company culture. It can empower them to make these very difficult decisions, such as creating an innovation lab and then truly protecting it so it can play its proper role. People need that brave change.

Wei Shijie: I'm sure many Chinese entrepreneurs have already read 'Why Greatness Cannot Be Planned'. But here is one piece of OpenAI history: founded in 2015, OpenAI in its early days encouraged researchers to freely explore many frontier directions, which sounds consistent with your advocacy of free exploration over planning. But within two years, hundreds of millions of dollars had nearly burned through. The key question behind this: is 'open-ended exploration' really sustainable? It may demand ever more resources, so who should foot the bill for it?

Joel: In certain environments, open-ended exploration is sustainable. Universities do basic research, usually funded by the public, and in the long run the public's returns outweigh its investment; the returns are quite substantial. But when you try open-ended exploration in a corporate environment, things get trickier, because a company has timelines it must meet, and within them you face pressure to optimize for profit.

In the worst case, if you don't make enough money, the company may go bankrupt. So I believe only companies on a solid footing truly have the freedom to explore as widely as possible. But even more limited forms of exploration are still worth considering.

Kenneth: I believe that as long as a company is healthy, open-ended exploration is absolutely sustainable. Success should not be an obstacle to it; by the same token, societies where people can barely afford to eat do not innovate. Innovation begins only when you can afford to take risks, because then risk no longer means death. It is quite counterintuitive: many people believe success destroys the capacity for exploration, but it is precisely because you have succeeded that you can keep investing in innovation. One more point: small companies can also pursue open-ended innovation; that is definitely possible.

Wei Shijie: Against the backdrop of running low on funds in 2017, OpenAI ultimately chose to focus its resources and people on large language models, which gave birth to the later ChatGPT. Does this conflict with 'greatness cannot be planned'? How do you strike the delicate balance between divergence and convergence?

Joel: This is very interesting.

Kenneth: When we arrived at OpenAI in 2020, they were doing more than language models, so I don't think they were completely focused on them. But you may be right that some decisions had already been made; perhaps some people believed that doubling down there was a path forward. I wasn't aware of it. Either way, it is possible, and it does not contradict the book, because our book is not against conviction.

We are not saying you can never make decisions. If you strongly feel that a certain path is interesting, our book supports you in pursuing it. What we are saying is that you shouldn't choose a path just because you think it leads to a distant goal.

If OpenAI decided to invest in language models in 2017, they certainly did not realize or anticipate that it would produce ChatGPT some five years later. So that 2017 decision was probably driven more by the belief that this was an interesting technology, and in hindsight, obviously, it was a great belief. I mean, GPT-1 and the like were junk at the time; not a good product in any way.

But it was very interesting, and those who had the courage to invest more in this area at the time were very smart people.

Joel: There is an interesting phenomenon: you can achieve divergence through convergence. OpenAI is an example. The belief that scaling up neural networks produces interesting results was consistent from its early days. As a company they reached consensus on that hypothesis, while others never reached the same level of conviction.

When something commands that high a level of conviction, there are rewards: the research continues and more stepping stones get laid throughout the system, in the hope of keeping development going in that direction.

5、 The deception of goals in the AI race

Wei Shijie: What do you think of the ongoing artificial intelligence race? Do you see the concept of 'deceptive objectives' you wrote about at play in today's AI race?

Kenneth: It is definitely on display in the race. You constantly see reports on benchmarks, such as math and programming tests, and we have seen AI actually succeed and win gold medals in math olympiads. From a benchmark perspective, that is a remarkable achievement.

But this is exactly what our book warns about: chasing benchmarks can be harmful to innovation, and deceptive. In a sense, we may see the scores on these tests rise while the intelligence we originally cared about does not actually improve. If AI has reached PhD level, why hasn't it invented much new mathematics?

What may be happening is that we are walking a deceptive path. Interestingly, this does not seem to trouble anyone further; people say, oh, it will come. Every new model release now immediately publishes scores showing it beats all other models on the benchmarks. If you think about it carefully, you should be suspicious. We are in fact optimizing for the benchmarks rather than for the thing itself, which is intelligence.

Fixating on specific goals can make us lose the big picture, and losing the big picture is a form of self-deception.
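Kenneth's point is essentially Goodhart's law: optimize a proxy metric hard enough and it decouples from the quantity you actually care about. A toy sketch, where the two-component score and its weights are purely hypothetical stand-ins, not any real benchmark:

```python
import random

def proxy_score(params):
    """A benchmark-style metric: genuine ability plus an exploitable,
    benchmark-specific component (the 2x weight is a made-up assumption)."""
    ability, exploit = params
    return ability + 2.0 * exploit

def true_ability(params):
    """The quantity we actually care about, which the benchmark only proxies."""
    return params[0]

def hill_climb(score, start, steps=200, rate=0.05):
    """Greedy optimization: accept any random perturbation that raises `score`."""
    best = list(start)
    for _ in range(steps):
        cand = [p + random.gauss(0, rate) for p in best]
        if score(cand) > score(best):
            best = cand
    return best

best = hill_climb(proxy_score, [0.0, 0.0])
# The proxy rises, but part of that rise is the exploitable component,
# so the benchmark gain overstates the gain in true ability.
```

The sketch only illustrates the shape of the problem: because the proxy rewards the exploitable component more, optimization pressure flows there preferentially, which is the deceptive-objective dynamic the book describes.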

Joel: Yes, I completely agree with Ken. Benchmarks can be misleading. If we look back at the history of AI, we have been through such cycles before. Decades ago, neural networks were exciting, but after, I think, the book 'Perceptrons' was published, progress stalled. Then came the winters and summers, and so on.

Wei Shijie: Tides rise and fall.

Joel: Yes, ups and downs. We don't think this is the end of the story. If AGI really is close at hand, I don't know whether we are prepared, but either way I may still dive in. However, I do believe there is a significant risk of deception here.

Wei Shijie: In terms of innovation atmosphere, which startup in Silicon Valley or elsewhere today most resembles the OpenAI of those days?

Kenneth: When was that? Do you mean when OpenAI was founded?

Wei Shijie: OpenAI before 2020 or 2022.

Kenneth: This is a great temptation, and I really want to cite an example from our own company. Is that okay? The feeling of starting a new open-endedness team reminds me of when we started the open-endedness team at OpenAI.

Wei Shijie: Can you tell us more about Lila?

Kenneth: Okay. Joel and I recently joined Lila Sciences, which is trying to create so-called scientific superintelligence. Superintelligence is a topic many companies are discussing, but Lila targets science specifically: the advancement of science itself.

This hypothesis has several distinctive aspects. One is that we will not actually see the great scientific revolution many companies talk about, where superintelligence delivers the real windfall of new drugs, new energy, and new materials, on internet data alone. Lila believes you must interact with the real world in order to test hypotheses and obtain more information. So, by bringing together many kinds of scientific expertise, we will ultimately produce an intelligence different from what other frontier labs are exploring.

From another perspective, science is one of the most open-ended things you can engage with. It is like an endless tree of discovery.

Kenneth: Lila's scale is somewhat similar to OpenAI's when we joined. It is a compact, agile, flexible environment, well suited to trying new things, and everyone is excited about what comes next. The feeling is that the next big thing will happen here. To some extent, Joel and I had the same feeling about OpenAI back then.

Wei Shijie: Did no other Silicon Valley companies come to mind?

Kenneth: I am not deliberately avoiding other companies. Innovation is undoubtedly emerging across Silicon Valley. But the problem is that the next major breakthrough is always invisible before it becomes obvious; I find it hard to be certain about, because open-endedness is a huge phenomenon. If I knew, I would tell you.

Wei Shijie: What do you think about the development of AI in 2025? Is it developing quickly and exciting?

Joel: It still seems to be developing rapidly. The coding models have surprised me a lot, and this seems to be the year people started using AI to write code, which is really amazing. They are not perfect, and there are some rough edges, but in my view this is a core practical development.

Kenneth: But coding brings something else: you suddenly have to debug code you don’t understand, because you didn’t write it. And that can take more time than debugging used to. So we face both this tremendous acceleration and another kind of deceleration. We believe that as the models improve, bugs will decrease over time, so the acceleration will keep compounding. But we are at an interesting, subtle turning point, and it will be interesting to see where things go from here.

Wei Shijie: Have you noticed DeepSeek’s innovation?

Joel: Well, I think it’s hard to ignore. At least in one sense it has had a huge impact on people, and the stock market was affected as a result. It is a super achievement. Obviously, it is another step in the constantly evolving competitive dynamics: which teams and individuals, from which countries, are creating the best models in the world. And I believe China has been striving to catch up with the United States in this regard.

Wei Shijie: I also noticed that with the emergence of DeepSeek, Silicon Valley’s attitude toward Chinese AI startups has changed.

Kenneth: Can you confirm what kind of change you’re referring to?

Wei Shijie: Silicon Valley will now pay more attention to Chinese AI startups.

Kenneth: Oh, that’s right, I definitely think so. It must have attracted everyone’s attention. Since that moment, many things have changed.

Wei Shijie: Currently, the competition in technology between China and the United States is quite intense. How do you view the development of technology in both countries? Will China and the United States take different paths?

Kenneth: This has already led to a tense situation. I think it’s very difficult to predict what will happen in the future. We have seen the international development trend, where some events seem to be pushing countries towards division, but then there are other events that are surprising as countries come together again. So it’s hard to say how all of this will ultimately end.

Wei Shijie: You have an important viewpoint that society’s superstition about “goal management” may hinder breakthrough innovation. From a historical perspective, China has been a country particularly adept at formulating plans, and this planning approach has produced results. What do you think about the future development of Chinese technology? Is there anything you want to remind China of?

Kenneth: Well, the answer is a bit obvious. You know, if China really… likes planning. And our book is called ‘Why Greatness Cannot Be Planned’. So we can remind China that our book may serve as an interesting perspective and a rebuttal to the underlying traditional notion of achievement. I know there is a lot of planning in China, and I don’t want to oversimplify, because Chinese people must know much more about China than I do. As both Joel and I acknowledge, we are not against all planning, but in certain areas, especially disruptive innovation, planning may actually be harmful. China’s future is a question of innovation, and you must strive to find the right things for you to do. China must innovate for itself.

Joel: I have heard from friends that education in China faces many challenges, with a large number of exams. In the United States we also have an exam culture, which may remind us that what these exams measure may not be what we truly care about. How to allocate places at top educational institutions has always been a challenge. I think the hope lies in finding more humane ways to discover people with potential; the exams people take may not truly reflect their potential, but rather their ability to focus on exams. So I hope this message applies not only to China but also to the United States: we should reduce unnecessary exams for children.

6. The Practice of “Openness” and a Philosophy of Life

Wei Shijie: I saw the Chinese calligraphy behind Ken. Do you know what those three characters mean?

Kenneth: Yes, I think it means openness.

Wei Shijie: That’s right, who gave this to you? Is it a gift?

Kenneth: It was indeed a gift from someone to me. They commissioned a Chinese artist to create this calligraphy piece and then mailed it to me. It represents my research field to some extent. So I hung it on the wall and thought it was very good.

Wei Shijie: It’s very beautiful, and it’s related to our interview today.

Kenneth: Yes, that’s right. very interesting.

Wei Shijie: With the book gaining worldwide attention, has anything changed in your lives? When Kenneth left OpenAI, he seemed somewhat discouraged and confused. Have you felt any better in the three years since you left?

Kenneth: This is an interesting comment. If I looked depressed and confused at that time, it was indeed true.

Actually, to some extent, I really was worried about the impact of AI itself at that time, because I was beginning to recognize all these new global impacts I had previously been unaware of. So I have had a few years to think about the issue, and after those years of contemplation I feel more at ease with my understanding of what is happening. That makes me feel good.

Yes. I would say this book has had a great impact on my life. It gave me the opportunity to meet many people; you know, I used to be just an AI researcher. But because of this book, I got to talk with such diverse people and learn about different professions and ways of living. Once, a grandmother even called my office and asked if I could meet her grandson to make him less stubborn. She said she had heard me on the radio. Things like that are very meaningful to me. Even talking to you now: if it weren’t for this book, this conversation wouldn’t be happening.

Joel: One thing I keep reflecting on is that when you do basic research and write a paper, it is difficult to link that paper to any specific positive impact on people. But when someone thanks you in a very specific way for something you wrote, truly grateful that it touched them and may even have changed their life, I think that is one of the biggest rewards. Yes, people care about what we write, and it’s really amazing.

Wei Shijie: In your book, you also give advice to ordinary people, suggesting they pursue their curiosity and, in doing so, find a fulfilling life. Can you share how you currently arrange your own work and lives?

Kenneth: Go pursue curiosity. In my own work and life, I have a bit of faith that because these things interest me, I will do them well, and then good things will happen. I’m just not sure what those good things will be. So this is quite in line with the book’s viewpoint, and writing it has given me more confidence in this approach.

Joel: My answer is very similar. Somehow, my pursuits outside of work always blend into my research in some unexpected way, and I try to find stepping stones to unexpected treasures in my own life. Yes.

Wei Shijie: Looking back now, what are the stepping stones on your life journey?

Joel: When I was a child, my dad got a magazine called “3-2-1 Contact”, an American magazine. In the back of the magazine there were simple BASIC programs you could type in yourself. It was that magazine that sparked my interest in computers. There are many stepping stones like that.

Kenneth: I must admit that my decision at some point to recruit Joel clearly brought consequences I could never have predicted, including sitting here now giving this interview.

Wei Shijie: This is very warm. There is a line in the book that I love the most: “When you give up your obsession with your goal, you may actually stumble upon a treasure that changes the world.” So, have you ever “stumbled upon a treasure that changes the world”? Please share a memory related to this?

Kenneth: I actually miss playing with Picbreeder long ago, when I did something very interesting without intending to. And now thousands of people in different countries are influenced by the idea of ‘why greatness cannot be planned’, which is simply insane.

Joel: There have been many times in my life when something completely overturned my entire life in ways I never expected. I think there may be moments like this in each of our lives.
