OpenAI’s ChatGPT App Store has been live for over a week. Regardless of its current performance, it has undeniably opened a new strategic horizon for all major AI companies—including Chinese giants like ByteDance, Tencent, and Alibaba.
For any AI Agent, the biggest pain point is context switching. Jumping from a chat interface to an external app involves too many steps and suffers from abysmal conversion rates. On the other hand, radical approaches like Doubao Phone—taking full control of the operating system—trigger fierce backlash from existing super apps.
OpenAI’s move offers an industry-wide epiphany: large language models can build their own mini-program ecosystems internally, without relying on pre-existing external platforms.
This means domestic AI products like Doubao, Yuanbao, and DeepSeek no longer need to depend on their parent super apps to develop Agents. They themselves have the potential to become the next generation of super apps.
Moreover, compared to OpenAI, Chinese tech giants possess a far stronger foundation in mini-program infrastructure—and can replicate this model immediately.
I
Ever since ChatGPT launched its “Apps” feature, OpenAI has explicitly positioned it as a competitor to Apple’s App Store.
This isn’t OpenAI’s first attempt at ecosystem building. Earlier efforts like Plugins and the GPT Store were preliminary explorations.
However, neither resonated with mainstream users: Plugins required manual activation and were scattered across multiple entry points, while GPTs were essentially collections of prompt templates.
The new App Store represents a fundamentally different corporate strategy.
In form, it adopts a mature app distribution model, featuring clear categories like “Featured,” “Lifestyle,” and “Productivity,” along with discovery mechanisms such as leaderboards and trending lists.
Initial launches include well-known services like Canva and Booking.com, later joined by Adobe Photoshop—spanning design, travel, and entertainment.
But its true innovation lies in being “conversation-native.” Users never leave the chat window and don’t need to download anything. Simply typing “@” followed by an app name—or letting ChatGPT intelligently recommend based on context—triggers seamless third-party service integration.
At its core, this update packages previously fragmented and abstract API calls into a curated set of applications orchestrated by an Agent. The chat box evolves from a Q&A tool into a unified service gateway and task orchestration hub.
The key breakthrough of ChatGPT Apps is that they handle all intermediate steps for the user. Once a request is made in the chat, the user receives the final result directly—without managing the process.
For example, in one case, a user only specified a budget and timeframe—but didn’t indicate which software to use or how to proceed. ChatGPT autonomously decided every step in between.
Users no longer need to think, “Which app should I use for this?” Instead, AI interprets the task, breaks it down, and dispatches the optimal execution path.
This enables end-to-end service delivery—from information lookup to hotel booking, from data analysis to content creation—all within a single, unified interface.
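The dispatch pattern described above can be sketched in a few lines. This is a hypothetical illustration only — the function names, the registry, and the "@app" parsing are invented for clarity and do not reflect OpenAI's actual Apps SDK:

```python
# Hypothetical sketch of conversation-native app dispatch: the chat
# interface stays fixed while the agent routes each request to a
# registered "app" handler. All names here are illustrative.

APP_REGISTRY = {}

def register_app(name):
    """Register a handler so the agent can invoke it by name."""
    def decorator(fn):
        APP_REGISTRY[name] = fn
        return fn
    return decorator

@register_app("hotels")
def book_hotel(request):
    # A real app would call an external booking API here.
    return f"Booked a hotel matching: {request}"

@register_app("design")
def create_design(request):
    return f"Draft design created for: {request}"

def dispatch(message):
    """Route '@app request' messages; fall back to plain chat otherwise."""
    if message.startswith("@"):
        app_name, _, request = message[1:].partition(" ")
        handler = APP_REGISTRY.get(app_name)
        if handler:
            return handler(request)
        return f"No app named '{app_name}' is installed."
    return f"(model reply to: {message})"

print(dispatch("@hotels two nights in Kyoto under $200"))
```

The point of the sketch is the inversion it captures: the user never chooses an app screen, because app selection is just another routing decision made inside the conversation.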
This shift in user experience is profoundly instructive: it liberates users entirely from tedious operational friction.
For developers, OpenAI’s provided SDKs and UI libraries aim to lower barriers—even enabling non-coders to build custom AI assistants. This signals that AI competition is shifting from pure model capability to platform ecosystem construction and integrated service delivery.
The ChatGPT App Store isn’t so much a “store” as it is the definition of a new paradigm: users state a need and receive a result, with minimal interaction in between.
II
Domestic large-model companies can—and should—emulate OpenAI, and arguably execute it even better.
First, there’s broad consensus: Chinese tech giants enjoy a far richer foundation in mini-program infrastructure and user habits than OpenAI.
According to the China Academy of Information and Communications Technology (CAICT), WeChat Mini Programs boast over 900 million monthly active users, Alipay Mini Programs nearly 700 million, and even the relatively late entrant Douyin (TikTok) Mini Programs are approaching 300 million MAU.
Chinese users are already deeply familiar with the “use-and-go, no-download” lightweight experience of mini-programs. Upgrading this interaction from “tap” to “converse” carries low user education costs. Thus, replicating the GPT App Store model presents virtually no technical or adoption barriers for Chinese AI models.
Take ByteDance’s “Doubao” as an example. By 2025, its daily active users (DAU) surpassed 100 million—a milestone achieved with the lowest marketing spend in ByteDance’s history, according to CAICT.
Undeniably, Doubao’s success stems from the nurturing of Douyin’s 900-million-DAU traffic reservoir.
But if Doubao builds its own internal Agent ecosystem akin to ChatGPT Apps, its strategic position would undergo a fundamental transformation. It would cease to be merely a traffic consumer or auxiliary feature of Douyin, and instead evolve into an independent “service-dispatching operating system”—potentially even feeding value back into ByteDance’s broader business matrix.
At that point, Doubao’s moat would no longer be just traffic, but its role as a next-gen gateway commanding service orchestration capabilities.
Yet this promising transition comes with a massive disruption to the existing business model: advertising.
Currently, whether WeChat, Alipay, or Douyin, their mini-program ecosystems rely heavily on ad revenue.
Gaming mini-programs dominate mini-program traffic. According to the 2025 China Gaming Industry Report, the domestic mini-game market generated RMB 53.535 billion in revenue—a 34.39% year-over-year increase—with in-app purchases accounting for RMB 36.464 billion (68.11%) and ad monetization contributing RMB 17.071 billion (31.89%).
It’s not just mini-programs—many apps depend on ads for income. Douyin is a prime example. While it hasn’t disclosed exact figures, advertising is clearly one of its primary revenue streams.
The problem is this: the operational logic of AI Agents is inherently at odds with advertising. An Agent’s core mission is to fulfill user requests efficiently and precisely—which means it will instinctively skip all non-essential steps, including ads.
In the recently launched Doubao Phone, when a user says “order food,” the AI automatically bypasses splash screen ads and in-app pop-ups in external apps. The user never sees any promotional banners, pop-ups, or recommendation feeds.
This dramatically enhances user experience—but devastates ad exposure.
If AI-embedded mini-program ecosystems become mainstream, traditional ad-based business models could collapse at scale.
ChatGPT has already demonstrated viable monetization paths, so AI-integrated mini-programs can adopt proven alternatives. I envision three models:
First, service commission—similar to Apple’s 30% App Store cut. The platform takes a percentage of every transaction completed via AI, whether booking hotels, buying goods, or ordering meals, as long as the deal closes within the ecosystem.
Second, subscription—following ChatGPT Plus’s success. Offer premium, ad-free, faster, and more capable Agent services to paying users. This revenue stream can directly offset high compute costs and provide stable profits.
Third, transaction fees—akin to Meituan or Didi. Rather than taking a percentage of sale value, the platform acts as a service broker matching demand to suppliers and charges a fixed or tiered fee on each completed order.
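To make the three models concrete, here is a toy revenue calculation. Every rate and figure below is an invented assumption for illustration, not a reported number:

```python
# Toy comparison of the three monetization models described above.
# All rates and volumes are assumptions, not reported figures.

def commission_revenue(gmv, rate=0.30):
    """Model 1: platform takes a cut of transaction value (Apple-style)."""
    return gmv * rate

def subscription_revenue(subscribers, monthly_price=20.0):
    """Model 2: ChatGPT Plus-style flat monthly fee per paying user."""
    return subscribers * monthly_price

def brokerage_revenue(orders, fee_per_order=2.5):
    """Model 3: fixed fee per completed order (Meituan/Didi-style)."""
    return orders * fee_per_order

# One hypothetical month for an AI platform:
print(commission_revenue(gmv=1_000_000))         # 300000.0
print(subscription_revenue(subscribers=50_000))  # 1000000.0
print(brokerage_revenue(orders=200_000))         # 500000.0
```

The structural difference is what each model scales with: transaction value, paying users, or order count — which is exactly the shift from attention-based to value-based revenue discussed next.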
All three models share a common shift: from “traffic monetization” to “value monetization.”
Revenue is no longer tied to user attention time, but directly linked to tangible value created for users. For ad-dependent platforms, this transition will be painful—but perhaps necessary.
In the long run, this may be the inevitable evolution of business models. By embedding mini-program ecosystems, AI large models provide the most direct pathway to transform from selling traffic to selling services.
III
Yet even if AI companies successfully build internal mini-program ecosystems and design new monetization models, they still face an age-old, fundamental challenge: traffic.
This is the ultimate question every platform must answer: How do you get users to open and use your app frequently?
When an AI chatbot positions itself as the next “super app,” it inherits all the challenges once faced by predecessors like WeChat, Alipay, and Douyin.
No matter how powerful the features or rich the ecosystem, without sufficient user activity, it remains a castle in the air.
Users only develop extended service needs on a platform they use often and find intuitive. If engagement with the AI chat tool itself is infrequent, the chance of triggering embedded booking or shopping mini-programs becomes vanishingly small.
AI models face a unique dilemma here, distinct from traditional apps.
Users open Douyin for “purposeless” entertainment, scrolling through feeds to kill time. They launch WeChat out of high-frequency social necessity.
But today, users engage with AI chatbots primarily as “on-demand tools”—only when they have a specific need. There’s no natural, immersive scenario that encourages habitual, unconscious usage.
This “use-and-exit” utility nature makes building user stickiness and long-term habits exceptionally difficult.
To overcome this, AI platforms must adopt a phased, layered strategy.
The immediate priority is identifying a high-frequency use case to anchor user habits.
Doubao’s approach is embedding AI into screenshots—a simple, frequent action. By integrating AI search, translation, and summarization directly into screenshot workflows, it cultivates a reflex: “Whenever I take a screenshot, I expect translation or Q&A features.”
Through convenience, it gradually builds trust and behavioral inertia.
Once basic habits form, the platform can then guide users toward more complex, low-frequency but high-value scenarios—like trip planning or legal consultation—precisely where mini-program ecosystems shine.
This process resembles building a highway: you need enough convenient on-ramps to attract traffic before the main corridor’s commercial value materializes.
There’s also a delicate timing issue.
Launching an independent AI app too early—fully detached from the parent super app’s traffic support—risks insufficient user adoption before habits solidify, stalling ecosystem development.
After all, most users still operate under mobile internet paradigms and haven’t yet adapted to completing all tasks within an AI chat interface. Time is needed for education and habit formation.
Conversely, delaying too long—remaining overly dependent as a subordinate feature within existing super apps—may cause these AI products to miss the window to become the next-generation platform gateway.
Once users grow accustomed to fragmented AI experiences—asking Doubao one thing, searching DeepSeek another—it will be exponentially harder to migrate them to a unified, system-level AI platform.
Thus, the mini-program ecosystem strategy for AI models is essentially a battle for user mindshare and behavioral patterns.
There’s another layer: service providers being integrated feel deeply ambivalent.
They crave new traffic but fear becoming mere “API endpoints” or “data suppliers” behind the AI—losing direct control over their users.
In the mini-program era, developers retained some autonomy and direct user interaction. But once embedded into AI workflows, they may lose all user touchpoints.
Still, this is also an opportunity. Everyone is experimenting—how to leverage this new traffic channel while preserving brand independence.
This isn’t necessarily a zero-sum game, but rather a realignment of roles.
We shouldn’t imagine this transition as overly disruptive. Just as the shift from desktop web to mobile apps once seemed daunting—small screens, awkward inputs—mobile internet quietly became dominant through sheer convenience.
The same quiet revolution may now be unfolding in AI.