
GPT Image 2.0 empowers Seedance 2.0: Everyone can shoot Hollywood blockbusters.

PANews
3 hours ago

Author: Changan | Biteye Content Team

After the Spring Festival of 2026, actors in Hengdian began posting videos on Douyin expressing their frustration about not getting any work.

“The group chat of the crew is quiet; in previous years, by the Lantern Festival after the New Year, the announcements would already be coming. This year, waiting until the end of February, not a single one has come.”

This year's Spring Festival saw the quiet launch of ByteDance's video generation model, Seedance 2.0, which swept through the entire short drama industry.

This article aims to clarify three things:

  • What has happened in the industry after Seedance 2.0?

  • How are AI short dramas made?

  • And what opportunities does this present for ordinary people?

1. A Model that Changed the Entire Industry

During the Spring Festival, ByteDance's Seedance 2.0 video generation model was officially launched, and Tim, of the video channel MediaStorm (影视飓风), described it as "terrifying" six times in his test video.

It has reshaped the video industry from the production end: no camera crew, no actors, no filming location. With just a text description and a reference image, you can generate a release-ready video in minutes.

With the lowered threshold, two types of previously difficult-to-satisfy demands were unleashed.

  • Turning impossible-to-shoot scenes into video: derivative works of existing films and TV shows, for example, "Have you ever rescued a fox at the foot of Tianshan?"

  • Scenes you want to see but never could: these meet essential emotional needs; some images may never have another chance to be filmed, and AI gives them a chance to exist.

These two points together indicate the same thing: after the emergence of AI video generation tools, the way video as a medium is used has changed. It is no longer exclusively produced by professional teams and equipment but has become something that every ordinary person can use to express themselves, convey emotions, or even just for pure entertainment.

This capability has led to an explosion in two types of video content.

1️⃣ Short video content centered on entertainment and traffic

This type of content is less complex than short dramas; it doesn’t require consistency of characters across multiple videos, nor is there a continuous plot to maintain. Essentially, it strips away some tedious repetitive tasks, letting AI take over.

The most typical example is AI digital avatars. The method is simple: upload a photo to generate a digital avatar, write a script, and AI automatically synchronizes mouth movements, provides dubbing, and outputs the visuals.

Another type involves visualizing jokes. Many pure text jokes circulating online have punchlines but lack visuals, limiting their spread. Now some people specialize in transforming these jokes into videos, adding subtitles and voiceovers, turning a text joke into a short video.

2️⃣ AI short dramas centered on plot

Short dramas are much more complex than short videos because the plot is continuous. The same character must appear from the first episode to the sixtieth, the face must not change, the clothing must stay the same, and the scene style needs to remain consistent. This demand for consistency raises the difficulty of the workflow by an order of magnitude.

Because ByteDance restricts the generation of real human faces, a large number of creators have turned to a direction that needs none: anime dramas.

Anime dramas use AI-generated anime characters in place of real people, avoiding compliance issues while opening another door: adapting web novel IPs. Fantasy, counterattack, system flow—these genres, which have hundreds of millions of readers on Tomato and Qidian, are naturally suited to be made into animated short dramas.

2. From Script to Final Cut: The Complete Workflow of an AI Short Drama

Many people see a finished video and assume the creator simply typed in a plot description and the model did the rest.

In reality, a quality AI short drama has a clearly defined workflow behind it, with dedicated tools at each step, and the quality of every step directly affects the final output.

Step 1: Write a storyboard script

The script must specify every shot. The format looks like this: Shot 3, kitchen, close-up, male lead takes ingredients from the fridge, camera moves from the hand to the face, tired expression, duration 5 seconds, voiceover "It's that time again."

The more detailed this step, the more stable the subsequent generation will be. AI models understand clear visual instructions, not vague storytelling feelings. The better the storyboard script is, the lower the randomness will be in every subsequent step.
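Written out as data, one storyboard entry might look like the sketch below. The field names are illustrative only, not any tool's required format; the point is that explicit fields leave far less room for the model to improvise than free prose.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    """One storyboard entry; field names are illustrative, not a standard."""
    number: int
    location: str
    framing: str       # e.g. "close-up", "wide"
    action: str
    camera: str
    expression: str
    duration_s: int
    voiceover: str

# The "Shot 3" example from the text, captured as structured data
shot3 = Shot(
    number=3,
    location="kitchen",
    framing="close-up",
    action="male lead takes ingredients from the fridge",
    camera="moves from the hand to the face",
    expression="tired",
    duration_s=5,
    voiceover="It's that time again.",
)

# A prompt assembled from explicit fields is less ambiguous than loose prose.
prompt = (f"Shot {shot3.number}, {shot3.location}, {shot3.framing}: "
          f"{shot3.action}; camera {shot3.camera}; "
          f"expression {shot3.expression}; {shot3.duration_s}s")
```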

Step 2: Establish reference libraries for characters and scenes

This is the easiest step to overlook, yet the most essential one.

The biggest issue with AI-generated videos is not video quality, but consistency. The same character might have one face in the previous episode and a different one in the next. Background colors may drift, and clothing details may disappear. If there are no fixed reference images for constraints, it is practically impossible to make a continuous series of more than three episodes.

The solution is to "finalize" the characters with an image-generation tool before officially generating any video: create images of the front, side, and three-quarter views, plus close-ups of the eyes, fixing hair color, skin tone, clothing, and style. Do the same for the main scenes. This image library is referenced in every subsequent shot and is the foundation of the entire workflow.

⚠️ A small trick: if you want to use Jimeng to generate real-person videos, you can mosaic the eyes in the character's front-facing photo, add the text "This character is generated by AI" to the image, and supply the eyes separately, bypassing the platform's face-detection restrictions.

Step 3: Control the card-draw rate with the first frame

Anyone who has made AI videos knows the term "drawing cards": as with a gacha pull, each generation is a gamble on whether the result is directly usable. The only reliable way to cut the number of draws is to make the prompt and the reference image good enough.

The approach of professional teams is to first generate the first frame of each shot with an image-generation tool, then feed that frame to Seedance as the reference from which it generates the subsequent motion.

In this step, the quality of the image-generation tool directly sets the ceiling of the final video: the better the generated frame and the more firmly its details are fixed, the better the video Seedance produces from it.

This is also why the arrival of GPT Image 2.0 has shaken the entire industry. It understands image descriptions far better, turning a scene description into a high-quality reference image with a more stable face and a more controllable style. As reference-image quality improves, the finished quality of every downstream stage improves with it: a chain reaction.
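The first-frame workflow can be sketched as a two-stage pipeline. The function names below are placeholders standing in for whichever image and video tools a team uses; nothing here is a real API.

```python
# Two-stage pipeline sketch: lock the first frame, then animate it.
# generate_image / generate_video are hypothetical stand-ins for tool calls.

def generate_image(prompt: str, references: list[str]) -> str:
    """Placeholder for an image-generation call; returns a frame handle."""
    return f"frame[{prompt}|refs={len(references)}]"

def generate_video(first_frame: str, motion_prompt: str) -> str:
    """Placeholder for a video-generation call conditioned on a first frame."""
    return f"clip[{first_frame}->{motion_prompt}]"

def render_shot(shot_prompt: str, motion_prompt: str,
                character_refs: list[str]) -> str:
    # 1) Generate the first frame from the shot prompt plus the reference
    #    library, so face, clothing, and style are fixed before any motion.
    frame = generate_image(shot_prompt, character_refs)
    # 2) Animate from that frame; the video model only extends the image,
    #    it never has to invent the character from scratch.
    return generate_video(frame, motion_prompt)

clip = render_shot("kitchen close-up, tired male lead",
                   "camera moves from hand to face",
                   ["front.png", "side.png", "three_quarter.png"])
```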

Step 4: Edit and compile

Once the clips are confirmed, splice them together in an editing tool such as Jianying (CapCut), adding subtitles, voiceover, and background music. Seedance 2.0 can generate sound effects and music while creating the video, and its lip sync is already quite stable, which saves a significant amount of post-production work.

3. Traditional Short Dramas vs AI Short Dramas: An Unequal War

Having covered the workflow, what about cost? How much does it actually take to produce a 60-episode AI short drama, and how big is the gap with a traditional live-action one? Start with compute, using Jimeng's membership pricing:

  • Standard membership: Continuous monthly subscription of 199 yuan, with 2210 points available each month, which can generate about 200 seconds of video, equating to a cost of about 1 yuan per second.

  • Premium membership: Continuous monthly subscription of 499 yuan, with 6160 points available, capable of generating about 560 seconds of video, lowering the cost to about 0.89 yuan per second.
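The per-second figures follow directly from the numbers quoted above, a quick check:

```python
# Cost per generated second for each membership tier (figures from the text).
standard_price, standard_seconds = 199, 200   # yuan, seconds per month
premium_price, premium_seconds = 499, 560

standard_cost = standard_price / standard_seconds   # about 1 yuan/second
premium_cost = premium_price / premium_seconds      # about 0.89 yuan/second
```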

However, this pricing is not fixed.

This year, Jimeng has raised prices multiple times. The original annual membership was 2,599 yuan, about 216 yuan per month, for 15,000 points a month.

Now the annual membership has risen to 5,199 yuan, and in April of this year the monthly points were cut from 15,000 to 6,160, a reduction of nearly 60%. The video length the same budget can generate has more than halved; combined with the doubled price, the effective cost per generated second has risen roughly fivefold.

Generating 1 second of video on Jimeng consumes 11 points. At 1 minute per episode, and assuming no re-draws are needed, the compute cost of one episode comes to about 46 yuan.

The draw rate varies widely with prompt quality and scene complexity. Assuming each clip takes four generations on average to get a usable result, the compute cost of one episode rises to about 184 yuan. And that is with stable prompts and relatively simple scenes; complex plots and heavy character movement only push the draw count higher.

In addition to compute, there are operating costs. A small AI short-drama team usually runs 3 to 5 people, including screenwriters, generation operators (the "card drawers"), and editors, with fixed monthly spending of about 35,000 to 70,000 yuan on salaries, office rent, and utilities. Assuming an output of 10 dramas a month, the all-in cost per episode, operations included, lands around 500 yuan.
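The episode-cost figures can be reproduced with a few lines of arithmetic, using the annual-plan rates quoted above:

```python
# Compute cost of one 1-minute episode on the annual plan (figures from text).
annual_price = 5199        # yuan per year
monthly_points = 6160      # points per month on the annual plan
points_per_second = 11     # points consumed per generated second

yuan_per_point = annual_price / (12 * monthly_points)   # about 0.070 yuan
cost_per_second = yuan_per_point * points_per_second    # about 0.77 yuan
episode_seconds = 60
episode_cost = cost_per_second * episode_seconds        # about 46 yuan

# The text assumes four generations per usable clip on average;
# rounding 46 first gives the quoted 46 * 4 = 184 yuan.
draws_per_usable_clip = 4
episode_cost_with_draws = episode_cost * draws_per_usable_clip
```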

Traditional live-action short dramas are categorized into male and female genres, with obvious cost differences.

  • Male genre dramas: With more action scenes and special effects, the production cost for a 60-episode series typically exceeds 500,000 yuan, translating to about 8300 yuan per episode;

  • Female genre dramas: Primarily focused on emotional plots, costs are relatively controllable, with 60 episodes costing around 350,000 to 400,000 yuan, or about 5800 to 6700 yuan per episode.

In comparison, even with team operating costs included, an AI short drama comes in under 500 yuan per episode. For the same drama, traditional live-action production costs roughly 15 to 40 times as much as AI production.

This disparity matters: a traditional short drama puts hundreds of thousands of yuan at risk, and one wrong topic choice can mean losses that take the team months to recover from. An AI short drama costs a few hundred yuan per episode, so the same budget can run ten topics at once, trading quantity for probability and speed for opportunity.

4. What Does This Mean for Ordinary People? Are There Opportunities?

In 2025, China's micro short-drama market reached 67.79 billion yuan with a user base of 696 million, meaning more than half of China's internet users watch short dramas. This is fertile soil for AI short dramas: there is no new market to cultivate, because short-drama users have already formed a stable paying habit.

On this foundation, the Douyin platform has also begun to proactively push traffic and funds towards AI original videos.

Douyin, in collaboration with Jimeng, has launched the "AI Creation Wave Plan S2": every two weeks, ten pieces of quality content are selected by comprehensive evaluation, each rewarded with 1,500 yuan in cash; authors on the leaderboard additionally gain priority access to industry collaborations, brand projects, and short-film project applications.

Under the platform's incentives, this month’s creative surge on Douyin has already produced a batch of content clearly of higher quality than before. AI public welfare short dramas like “Paper Airplane,” “Century Kindergarten,” and “Goodbye” have generally received high numbers of likes.

Monetization paths are also very straightforward; domestic creators can pursue platform traffic sharing, Tomato novel CPS commissions, and brand project opportunities concurrently.

  • Account fans reaching 10,000 to 50,000 can earn 500 to 2000 yuan per single brand deal;

  • With 50,000 to 100,000 fans, rates can range from 2000 to 5000 yuan;

  • 100,000 to 500,000 fans can earn from 5000 to 20,000 yuan per deal.

The platforms also run revenue-sharing plans: Douyin's mid-video plan pays about 60 yuan per 10,000 views and Kuaishou's Magnet Star about 40 yuan per 10,000 views, both with low entry thresholds.
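Those per-view payouts reduce to a simple rate calculation; a sketch using the figures quoted above:

```python
# Revenue-share estimate in yuan per 10,000 views (rates quoted in the text).
RATES = {"douyin_mid_video": 60, "kuaishou_magnet_star": 40}

def revenue(views: int, platform: str) -> float:
    """Estimated payout in yuan for a given view count on a platform."""
    return views / 10_000 * RATES[platform]

# Example: a video with 1,000,000 views on Douyin's mid-video plan
douyin_pay = revenue(1_000_000, "douyin_mid_video")   # 6000.0 yuan
```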

Game promotion, APP acquisition, brand placement—these advertisers are already investing heavily in short drama content; AI short dramas merely provide them with a lower-cost option.

In Conclusion

With such a large market, what kind of people are suitable to enter?

For those with no background, it is best not to jump straight into short dramas. Short dramas have high demands for character consistency and scene coherence, with complex workflows and significant trial-and-error costs; a more pragmatic path is to first practice with short videos.

There are many accounts on Douyin that transform circulated text jokes into videos; these do not require continuous storylines or fixed characters, with each video being standalone content. Such accounts gain followers quickly and have high view counts, suitable for establishing their own IP and audience base. More importantly, this type of video requires almost no consideration for character consistency and scene consistency, allowing creators to focus entirely on content selection and pacing.

Once the account is up and running and there’s a grasp of tool and platform rules, they can then work on improving content quality, gradually attempting to tackle more complex short drama workflows.

The AI short drama sector has not yet seen a true monopolist; tools are iterating, workflows are evolving, and today’s team that has perfected a set of processes might be reshuffled by a better model tomorrow. This means the first-mover advantage is not as significant as imagined, and later entrants still have a chance.

Disclaimer: This article represents the author's personal views only and does not represent the position or views of this platform. It is provided for information sharing only and does not constitute investment advice to anyone. For any dispute between users and the author, this platform bears no responsibility. If any article or image on this page infringes your rights, please send the relevant proof of rights and proof of identity to support@aicoin.com, and the platform's staff will investigate.
