Here’s a harsh truth: by next year, you will be hard pressed to find a large-scale game project that doesn’t use AI assistance.
Aside from a few studios stubbornly sticking to hand-crafted workflows, most will inevitably adopt AI, simply because the efficiency gap is too massive to ignore. However, using AI in game development isn’t quite the “one-click AI generation” many people imagine.
What you think AI game development looks like:
Enter a prompt → AI generates assets → Import directly into the game → Development complete!
What AI-assisted game development actually looks like:
Feed massive amounts of reference data → Define extremely precise output specifications → Craft highly complex prompts → Heavily edit and refine AI outputs → Carefully organize and annotate everything → Use the results as part of a curated game asset library.
So today, I’d like to focus specifically on how AI assistance is being applied in art production pipelines. I’m not deeply familiar with other areas, so I’ll stick to what I know.
Currently, game studios broadly refer to any AI application in their workflow as “AI assistance,” which actually encompasses more than just AIGC (AI-generated content).
By the end of this year, major studios, both domestic and international, will have largely finalized their AI-assisted art pipelines; this is the area where industrialization has moved fastest. So by next year, whether it’s Tencent, NetEase, miHoYo, or Sony, Ubisoft, and Rockstar, even veteran studios like Larian Studios won’t be able to avoid adopting AI, because the productivity advantage is simply overwhelming.
So, where exactly is AI being used in art production? If your studio is exploring AI integration, this article might offer some useful insights.
Let’s break it down one by one.
First, during early project phases, AI is primarily used for concept design—including worldbuilding, environments, and characters. At this stage, everything is vague and exploratory. Even the most experienced designers and top-tier artists have to sketch out ideas one by one to find the right direction. This results in the familiar concept art sheets we often see. Because high-quality concept art is expensive to produce, studios sometimes later publish art books to recoup some costs.
Take The Division 2’s concept art as an example.
But now, with AI assistance, things are different. Large studios typically deploy private, on-premise AI models; smaller teams may use either private or public models depending on their resources. I’ll use a private AI setup as an example:
The team feeds the lead artist’s previous works and other valuable reference images into the AI for fine-tuning. Then, concept artists only need to provide rough line drawings, and the AI can instantly generate polished concept art—even filling in missing details automatically.
For instance, take this Overwatch character concept. The artist only needs to sketch a basic figure outline (and honestly, many artists don’t even bother with that anymore). Then, by inputting keywords like “post-apocalyptic,” “grungy weapons,” “manic expression,” and “cybernetic limbs,” the AI can produce a highly finished concept piece like the one below.
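To make this step concrete, here is a minimal sketch of what the sketch-plus-keywords generation might look like when driven programmatically, assuming a Stable Diffusion-style model fine-tuned on the studio’s own references. The checkpoint name, file names, and keywords are illustrative assumptions, not any studio’s actual pipeline:

```python
# Minimal sketch of the line-sketch -> concept-art step, assuming an img2img
# pipeline and an in-house fine-tuned checkpoint (the model path is hypothetical).
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "studio-private/concept-art-finetune",   # hypothetical in-house checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The artist's rough figure outline serves as the starting image.
sketch = Image.open("rough_character_outline.png").convert("RGB")

results = pipe(
    prompt="post-apocalyptic, grungy weapons, manic expression, cybernetic limbs",
    image=sketch,
    strength=0.65,              # how far the model may drift from the input sketch
    guidance_scale=7.5,
    num_images_per_prompt=4,    # several variants for the artist to pick from
).images

for i, img in enumerate(results):
    img.save(f"concept_variant_{i}.png")
```

In practice the artist would cherry-pick one variant and keep painting over it in Photoshop rather than ship any output directly.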
But what if you’re not satisfied? It depends on the degree of dissatisfaction. Say you dislike the weapon in the image—most artists would simply erase it and redraw it manually. If you lack direction altogether, you might partially or fully regenerate the image to keep exploring.
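Partially regenerating just one region, such as the weapon, is typically handled with an inpainting model. A hedged sketch, assuming a standard Stable Diffusion inpainting checkpoint and a hand-painted mask (file names and prompt are illustrative):

```python
# Sketch of the "regenerate only the weapon" step: the mask marks the region
# to redo (white = regenerate), everything else is preserved.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

concept = Image.open("concept_variant_0.png").convert("RGB")
weapon_mask = Image.open("weapon_mask.png").convert("RGB")  # painted by the artist

fixed = pipe(
    prompt="oversized scrap-metal rifle, post-apocalyptic, worn paint",
    image=concept,
    mask_image=weapon_mask,
).images[0]
fixed.save("concept_variant_0_new_weapon.png")
```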
This is dramatically faster than traditional manual sketching, and it matters especially in outsourced art scenarios, where it becomes much easier to show clients quick visual iterations.
It’s quite possible that future art books will consist mostly of AI-generated content.
This is one of the most typical applications of AI assistance.
The same applies to environment design. Concept artists might define several base elements—mountains, caves, forests, swamps—and then use AI to generate concept images, followed by manual refinement.
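As a rough illustration of how such element-driven batches might be scripted, here is a hedged sketch that combines a few base elements into prompts and generates one reference per pair. The model ID and style tag are assumptions, not any studio’s actual setup:

```python
# Hedged sketch: batch-generate environment references from a small set of
# base elements; artists then refine the keepers by hand.
import torch
from itertools import combinations
from diffusers import StableDiffusionPipeline

BASE_ELEMENTS = ["mountains", "caves", "forests", "swamps"]
STYLE = "painterly environment concept art, wide shot, studio style"

pipe = StableDiffusionPipeline.from_pretrained(
    "studio-private/environment-finetune",  # hypothetical in-house checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# One reference image per pair of elements.
for a, b in combinations(BASE_ELEMENTS, 2):
    image = pipe(f"{a} and {b}, {STYLE}", guidance_scale=7.5).images[0]
    image.save(f"env_{a}_{b}.png")
```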
For example, in the 2026 concept art for A Record of a Mortal’s Journey to Immortality, clear signs of AI assistance followed by human touch-ups are visible. While minor imperfections may remain, these images fully serve their original purpose—as effective visual references.
Another auxiliary application is 3D conversion. The image below is just an illustration: an artist first draws a 2D image (left), then uses AI to convert it into a 3D model (right). This technology isn’t yet production-ready for actual game assets—but it’s still useful as a reference tool.
How? Suppose I need a panoramic view of a small island town. I can position the AI-generated 3D model into the composition I want, render that view back out as a 2D image, and then refine it by hand. A process that used to take nearly a week can now be finished in one to two days with AI assistance.
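For readers curious how the “pose the blockout, render it back to 2D” step might look in practice, here is a hedged sketch using trimesh and pyrender. The mesh file, camera placement, and resolution are placeholders that would be adjusted per scene:

```python
# Render an AI-converted 3D blockout from a chosen camera angle; the resulting
# image becomes the starting point for the hand-painted panorama.
import numpy as np
import trimesh
import pyrender
from PIL import Image

# Load the blockout mesh (filename is illustrative).
tm = trimesh.load("island_town_blockout.glb", force="mesh")

scene = pyrender.Scene()
scene.add(pyrender.Mesh.from_trimesh(tm))

# Camera pose chosen for the panoramic composition; values are placeholders.
camera = pyrender.PerspectiveCamera(yfov=np.pi / 3.0)
camera_pose = np.array([
    [1.0, 0.0, 0.0,  0.0],
    [0.0, 1.0, 0.0,  5.0],
    [0.0, 0.0, 1.0, 30.0],
    [0.0, 0.0, 0.0,  1.0],
])
scene.add(camera, pose=camera_pose)
scene.add(pyrender.DirectionalLight(color=np.ones(3), intensity=3.0), pose=camera_pose)

# Render the chosen angle back to a 2D image for manual paint-over.
renderer = pyrender.OffscreenRenderer(viewport_width=1920, viewport_height=1080)
color, _depth = renderer.render(scene)
Image.fromarray(color).save("island_town_blockout_view.png")
```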
Players of indie games, especially the infamous low-effort, assembly-line titles, have likely already encountered AI-assisted mass-production techniques.
It’s similar to the earlier process, but now character parts—heads, hair, torsos, arms, waists, legs—are categorized and processed separately by AI, then assembled together, as shown below. Achieving precise results still requires extensive training data and carefully crafted prompts, and all outputs need significant manual polishing before becoming usable assets.
This still demands rigorous design guidelines, standardized workflows, and quality control—it’s not about replacing humans, but integrating AI into existing pipelines.
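To make the parts-based idea concrete, here is a minimal sketch in which each part gets its own prompt and the outputs are pasted onto a single sheet for artists to assemble and paint over. The checkpoint, prompts, and layout are illustrative assumptions rather than a production pipeline:

```python
# Sketch of the parts-based workflow: generate each body part separately, then
# lay the tiles out on one sheet for manual assembly and paint-over.
import torch
from diffusers import StableDiffusionPipeline
from PIL import Image

PART_PROMPTS = {
    "head":  "character sheet, head and hair only, short silver hair, scarred cheek",
    "torso": "character sheet, torso only, reinforced tactical vest, frayed cloth",
    "arms":  "character sheet, arms only, cybernetic left arm, exposed hydraulics",
    "legs":  "character sheet, legs only, armored greaves, worn leather boots",
}

pipe = StableDiffusionPipeline.from_pretrained(
    "studio-private/character-parts-finetune",  # hypothetical in-house checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# One 512x512 tile per part, laid out in a row; a real pipeline would also keep
# per-part reference images, seeds, and QA annotations alongside each tile.
sheet = Image.new("RGB", (512 * len(PART_PROMPTS), 512), "white")
for col, (part, prompt) in enumerate(PART_PROMPTS.items()):
    tile = pipe(prompt, height=512, width=512, guidance_scale=7.5).images[0]
    sheet.paste(tile, (col * 512, 0))
sheet.save("character_parts_sheet.png")
```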
From these examples, we can see that AI assistance mainly serves three purposes: sparking inspiration, aiding composition, and filling in missing elements.
But I must emphasize: throughout all these processes, artists are never idle. They’re still working—just shifting from using Photoshop alone to combining AI + Photoshop. AI boosts their efficiency; it doesn’t replace them. Of course, I won’t deny that some studios recklessly overuse AI just to cut corners—but that’s true of any tool. Tools themselves aren’t good or bad; it depends on how they’re used.
My stance is clear: as long as we clearly understand the boundaries of AI’s capabilities, it’s a powerful tool, not a monster. And if we don’t, the reverse holds just as easily.