Regulating new technologies and new fields does not necessarily hinder their development; the issuance of a "ban" often also promotes development through compliance.
Author: Xiao Sa Team
On March 27, the security center of a certain music platform released the "Announcement on the Governance of Improper Use of AI-Generated Virtual Characters," aiming to further regulate AI-generated content on its platform, especially the use of AI-generated virtual characters as hosts for short-video and live-streaming content.
The unusually severe wording of the announcement has sparked concern within the industry about the future of AI virtual-character hosts on self-media platforms. So, is this representative platform rule a blessing or a curse for the development of AI-generated content? Is drawing this new "restricted area" good or bad for the future application of AI virtual-character hosts?
AI-generated virtual character hosts: opportunities and risks coexist
Compared with the early "skin-wearing virtual hosts" performed by a "person in the middle" through real-time motion capture, today's "AI virtual hosts," built on AI animation generation and speech synthesis, have gathered seemingly unstoppable momentum. Although the technology still leaves considerable room for improvement in long-video generation and real-time interaction, in the short-video and live-streaming industry, where broadcast content and demand scenarios are relatively fixed, creators increasingly favor using AI to create exclusive personas tailored to specific market demands and to automatically generate voice and video content from preset prompts at lower cost.
Traditional live streaming is labor-intensive, constrained by relatively fixed working hours, and limited in how freely a host's image can be shaped; one misstep can sink a broadcast. Against this, AI virtual hosts show a clear inherent advantage. Especially in the post-pandemic era, market demand for e-commerce live streaming and self-media traffic has kept growing, creating enormous business opportunities. Many self-media platforms themselves provide, or even encourage, the use of AIGC as a convenient way to boost content creation. Taking a certain music platform as an example: although it does not directly offer AI-generated virtual-character creation services, it long ago launched a comprehensive creation module with "AI-generated image" and "AI-generated video" functions.
It is clear, then, that even this music platform, which has just applied the regulatory brakes, has no intention of banning AI creation outright. It is undeniable, however, that because AIGC technology has sharply lowered the entry barrier for content creators, many creators with uneven levels of technical expertise and compliance awareness are chasing this business opportunity, producing considerable legal and compliance risk.
The foremost issue is the legal and social harm caused by the fraudulent use of AI technology. According to relevant security reports, AI-based deepfake fraud cases grew by as much as 3,000% in 2023. Behind this astonishing growth lies the enormous legal risk of improper AI use. In addition, because AI-generated virtual characters rest on multiple AIGC technologies spanning text, images, and sound, it is difficult to completely rule out infringement of others' intellectual property in character design, background music, live-stream scripts, and more. The characters' likenesses also pose potential risks to others' portrait rights, privacy rights, and personal-information rights, and using AIGC for deceptive product promotion may harm consumers' rights to information and to choose.
It is fair to say that the application of AI virtual-character hosts and similar technologies to short-video and live-stream product promotion currently combines enormous opportunity with significant risk.
An analysis of compliance points for AI-generated virtual hosts, starting from a certain music platform
It is therefore understandable that a leading platform like this one has taken a relatively cautious approach to the compliance management of AI creation. In fact, this is not the first time the platform has "dampened the enthusiasm" around AI virtual hosts. As early as May 9, 2023, it released the "Platform Norms and Industry Initiatives for Content Generated by Artificial Intelligence," which explicitly defined behavioral norms for AI-generated videos, images, and derivative virtual live hosts on the platform. Based on its platform rules and in accordance with the "Deep Synthesis Management Regulations for Internet Information Services," it also formulated the "Content Identification Watermark and Metadata Norms for Content Generated by Artificial Intelligence," further refining requirements such as prominent identification and the avoidance of confusion.
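The "prominent identification" and metadata-labeling requirements described above can be illustrated with a minimal sketch. Note that the field names (`aigc_label`, `producer_id`) and label text below are illustrative assumptions, not the platform's actual metadata schema:

```python
import json

# Illustrative sketch: attach an AIGC provenance record to a content item,
# in the spirit of the watermark-and-metadata norms discussed above.
# All field names and the label text are assumptions, not an official schema.

def tag_aigc_content(content: dict, producer_id: str) -> dict:
    """Return a copy of `content` carrying an explicit AI-generation label."""
    tagged = dict(content)  # do not mutate the caller's record
    tagged["aigc_label"] = {
        "is_ai_generated": True,        # machine-readable flag
        "display_text": "内容由AI生成",  # prominent user-facing label
        "producer_id": producer_id,     # who generated/uploaded the content
    }
    return tagged

record = tag_aigc_content({"title": "Virtual host live clip"}, "creator-001")
print(json.dumps(record, ensure_ascii=False, indent=2))
```

In practice, a platform would bind such a record to the media file itself (for example via an embedded watermark) rather than a detachable sidecar, but the sketch shows the minimal information the labeling norms call for: a machine-readable flag plus a user-visible notice.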
The main target of this "ban" is the "repeatedly prohibited yet still expanding trend of using AI-generated virtual characters to publish content that violates scientific common sense and fabricates and spreads rumors" on the platform. Specifically, it names three typical situations:
First, generating fake foreign-character personas and using false overseas identities to exploit patriotic sentiment and attract attention.
Second, generating fake images of handsome men and beautiful women to bait interactions or publish emotionally manipulative content, diverting users to private dating and chat tools, and even committing fraud.
Third, generating fake "elite" personas to publish low-quality content such as inspirational quotes and financial or pseudo-intellectual material, and even monetizing the diverted traffic by selling courses or paid group memberships.
All of these situations involve using an AI-generated virtual-character persona to target specific social groups with emotional needs, producing low-quality content to attract followers and harvest traffic through deception, and even to commit fraud. In these schemes, the AI-generated virtual characters are not themselves the instrument of direct deception; they serve as the "preparatory stage" for subsequent illegal and non-compliant behavior, with a considerable degree of concealment.
What the platform prohibits and restricts, therefore, is not users' creation of content with AI virtual hosts as such; rather, through its own compliance building, it hopes to work with users to control the illegal and non-compliant risks in this field. It would be somewhat unfair to read this simply as a restriction on users' AI creation: the ultimate goal is still to rein in risky non-compliant behavior and, through compliance, promote the healthier development of the field.
Users or the platform: whose responsibility is the compliance of AI virtual hosts?
The platform is also acting to fulfill its responsibility as an internet service platform and as a provider and disseminator of AI-generated content. Most of China's current regulations on generative AI services focus responsibility on the platforms. For example, in the "Basic Security Requirements for Generative AI Services" released by the National Cybersecurity Standardization Technical Committee, the inspection and assessment obligations for generative AI fall mainly on service providers. Similarly, the "Interim Measures for the Management of Generative AI Services" narrow the "safe harbor" principle for service providers, requiring them to rectify illegal content immediately and report it to the competent authorities, and to establish sound complaint and reporting mechanisms so that non-compliant content is handled promptly.
The announcement itself reiterates these norms. Beneath its relatively severe wording, it implicitly expresses the hope that users will cooperate, jointly supervise, and actively report improper uses of AI-generated content. At present, platform-user cooperation in governing AI-generated content centers on the first of its strict norms: the "prominent identification" of AI-generated content. Under Article 12 of the "Interim Measures for the Management of Generative AI Services," providers must label generated images, videos, and other content in accordance with the "Deep Synthesis Management Regulations for Internet Information Services" and the "Network Security Standard Practice Guide—Methods for Labeling Content Generated by Generative AI Services." Most self-media platforms that provide or accept AI-generated content, this music platform included, have clearly defined labeling methods and requirements for generative AI content.
The platform's current official approach still relies on users reporting AI-generated content and on the platform appending prompt text. This mode falls short for more realistic AI virtual content: it depends on creators themselves adding labels such as "virtual idol/virtual person" to their copy, and enforcement is weak when they do not. Although the announcement reiterates punishment measures for violators, under the current processing mode it reads more like a hope of reaching a further tacit understanding between users and the platform. Platform rules combined with technical means such as detection algorithms may look like the optimal compliance solution for service providers, but the effectiveness of these measures, in an era of both opportunity and risk, still has to be tested in practice.
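The reliance on creator-added labels can at least be checked mechanically. A minimal sketch of such a check, assuming the platform requires at least one disclosure label such as "虚拟人" or "AI生成" in the caption (the keyword list and function are hypothetical illustrations, not the platform's real rule or code):

```python
# Hypothetical caption check: flag AI-generated posts whose copy lacks a
# required disclosure label. The keyword list is an illustrative assumption.
REQUIRED_LABELS = ("虚拟人", "虚拟偶像", "AI生成", "AI-generated")

def missing_ai_label(caption: str, is_ai_generated: bool) -> bool:
    """True if the post is AI-generated but carries no disclosure label."""
    if not is_ai_generated:
        return False  # human-made content needs no AIGC label
    return not any(label in caption for label in REQUIRED_LABELS)

print(missing_ai_label("今日直播精选", True))    # no label -> True (flag it)
print(missing_ai_label("本内容由AI生成", True))  # labeled -> False
```

A real deployment would of course pair such keyword screening with content-level detection, since the harder problem is knowing that a post is AI-generated in the first place when the creator does not declare it.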
It should be stressed that a virtual character's persona cannot shield its creator from responsibility, and the compliance of AI creation is not merely the platform's wishful thinking. The clearest display of the platform's strict norms and governance resolve in this announcement is that, after listing typical violations, it states it will take internal measures such as removing videos and banning the accounts of those who misuse AI-generated virtual characters; and for leads on black-market groups improperly using AI-generated virtual characters in criminal activity, it will report them to the authorities. The tacit understanding mentioned above may indeed exist between users and the platform, but creators engaging in non-compliant activity will not only face punishment under platform rules; depending on severity, they may also bear other legal consequences, up to and including criminal liability.
In conclusion
We have always emphasized that regulating new technologies and new fields does not necessarily hinder their development; the issuance of a "ban" often also promotes development through compliance. After all, only by doing things right can we go far.