
Celebrity face-swap livestream selling and fake essays: where is the regulation of AI-generated content?

Tech 2023-06-07 19:35:21 Source: Network

Led by ChatGPT, large AI models have become a "proving ground" for global technology companies. The pace of technological iteration has exceeded imagination, but before there has been time to marvel, the chaos arising from it has arrived a step ahead.

Recently, police in multiple regions have issued alerts about new types of "AI fraud". Soon after, AI "celebrity face-swap" livestream selling sparked infringement disputes, and an AI-generated fake essay triggered a "flash crash" in iFlytek's share price, giving the public a taste of the commercial impact of AI-generated false information.

It is foreseeable that the development and application of AI technology will continue to generate controversy and governance challenges. The concentrated emergence of problems such as deepfakes, false information, harmful information, plagiarism and infringement, and dissemination ethics challenges the governance of cyberspace order, including content governance, and places higher demands on governance tools, methods, and means.

Looking at the governance of the online content ecosystem, how is AI-generated content being regulated and governed? And what platform experience is worth learning from?

Deep synthesis regulations released

Obligations of relevant parties clarified for the first time

Wariness about the risks of AI-generated content did not arise overnight. In 2017, deepfake technology emerged, and AI face-swapped pornographic videos and fabricated information of all kinds began to spread. In 2019, an app called "ZAO" sparked huge controversy when its user agreement was found to involve excessive collection of user information and privacy and security risks.

In response to the risks posed by deepfake technology, the Administrative Provisions on Deep Synthesis of Internet Information Services, which took effect in January this year, responded for the first time at the regulatory level, clarifying the obligations of service providers, users, technical supporters, and other relevant parties.

A close reading of the provisions shows that the regulation of deep synthesis broadly sets out several directions: preventing the dissemination of illegal and false information, protecting data security and personal information, and preventing users from being confused or misled.

For example, deep synthesis service providers and technical supporters are required to fulfill a duty of notification, that is, to inform and obtain consent from the individuals whose information is to be edited. They are also required to take technical measures to add labels that do not interfere with use, especially in scenarios such as text generation, voice generation, face generation, and immersive realistic scenes.

Liu Xiaochun, executive director of the Internet Rule of Law Research Center at the University of the Chinese Academy of Social Sciences, believes two governance measures in the provisions deserve attention. First, the obligation of content governance is extended to the entities that provide technical support to producers and disseminators, requiring those who actually control the technology to assume governance obligations and take measures at the source. Second, on top of existing content governance methods, the provisions explicitly require adding technical "labels" to help users effectively identify AI-generated content, implemented in content governance on a risk-classified basis.

Measures for generative AI services released for public comment

Scope of content regulation clarified

With the popularity of ChatGPT and similar applications, in April this year the Cyberspace Administration of China released the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments) (hereinafter, the "Draft"), sending a positive signal of encouraging the healthy development of AIGC while regulating it.

The Draft contains 21 articles. It defines generative artificial intelligence as technology that generates text, images, audio, video, code, and other content based on algorithms, models, and rules, and sets out specific provisions for the use of generative AI products and services.

In terms of content regulation, it specifically addresses conduct such as false information, discrimination, unfair competition, and infringement of others' rights and interests. The Draft also sets higher requirements for service providers in prevention and guidance, including specific labeling rules, preventing excessive user dependence, optimizing training data that may influence user choices, and guiding users toward scientific understanding and rational use.

Dr. Li Mingxuan, of the Institute of Interdisciplinary Studies and the Hillhouse School of Artificial Intelligence at Renmin University of China, has long followed the regulation of generative AI algorithms. He believes the Draft emphasizes not only the prevention and governance of AI risks but also the innovative development of AI technology. For example, Article 3 specifically states that "the state supports independent innovation, popularization and application, and international cooperation in basic technologies such as artificial intelligence algorithms and frameworks, and encourages the priority use of secure and trustworthy software, tools, computing, and data resources."

Practice and difficulties:

Technical standards for content labeling already exist

Avoiding fraud through model training remains difficult

However, researchers note that under existing technical conditions, whether the requirements in the above regulations can be implemented, and by what practical paths, is still worth exploring. For example, the relevant requirements all involve content labeling: can all AIGC content be labeled? How should AIGC content be labeled? This calls for more detailed and comprehensive rules, and for platforms to explore the best practical paths.

In May this year, Douyin released its Platform Specification and Industry Initiative for AI-Generated Content, providing the industry with technical capabilities and standards such as AI content labeling, and offering a sample of industry practice for implementing the regulatory measures.

In Liu Xiaochun's view, this industry practice shows that, for regulating and governing AI-generated content, adding watermark labels to help users intuitively distinguish the nature and source of content is a low-cost, effective technical approach compared with other governance methods. Moreover, with appropriate design, labels can balance user experience and content aesthetics without affecting creativity or presentation.
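As an illustration of the labeling approach described here, the sketch below pairs a visible disclosure line with a machine-readable invisible marker embedded as zero-width characters. This is a generic text-watermarking idea, not a scheme prescribed by the regulations or used by any particular platform; the tag value and function names are hypothetical.

```python
# Minimal sketch: label AI-generated text with both a visible disclosure
# and an invisible, machine-readable marker (zero-width characters).
# Illustrative only; not the method mandated by any regulation.

ZW0 = "\u200b"  # zero-width space      -> encodes bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> encodes bit 1
TAG = "AIGC"    # hypothetical provenance tag


def _encode_invisible(tag: str) -> str:
    """Encode a tag as a run of zero-width characters (one per bit)."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    return "".join(ZW1 if b == "1" else ZW0 for b in bits)


def label_content(text: str) -> str:
    """Attach a visible disclosure line plus an invisible marker."""
    visible = "[Generated by AI] "  # user-facing label
    return visible + _encode_invisible(TAG) + text


def detect_label(text: str) -> bool:
    """Recover the invisible marker, ignoring all visible characters."""
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in text if ch in (ZW0, ZW1))
    if len(bits) < 8 or len(bits) % 8:
        return False
    decoded = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return decoded.decode("utf-8", errors="replace") == TAG
```

The visible prefix addresses the user-facing disclosure the regulations emphasize, while the invisible marker survives copy-and-paste and can be checked programmatically; production systems would need far more robust watermarks that tolerate editing and reformatting.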

As for other governance measures, algorithm and AI service providers could, for example, be required to optimize data and model training so as to shape output content or prevent fraud. This is also a viable governance approach, but it is costly, difficult to implement, and its results are not ideal.

Li Mingxuan told Nandu reporters that, according to some experts in the AI field, requirements such as "content generated by generative artificial intelligence shall be true and accurate, and measures shall be taken to prevent the generation of false information" and "for generated content found during operation or reported by users to violate these Measures, its regeneration shall be prevented within three months through model optimization training and other means" may be difficult to implement in practice under existing conditions.

"From the perspective of content dissemination and industry development, precise and effective governance of AI-generated and deep-synthesized content requires not only active action by content dissemination platforms but also full participation by all parties in the industry," Liu Xiaochun said.

Global progress:

The world's first artificial intelligence act may soon be adopted

Since the beginning of this year, countries around the world have also accelerated their exploration of AI regulation, and attitudes toward artificial intelligence currently vary from country to country.

At the Group of Seven (G7) Digital and Technology Ministers' Meeting held in Japan on April 30, participants agreed to pursue "risk-based" regulation of artificial intelligence. According to the joint statement, the G7 reiterated that regulation should also "maintain an open and enabling environment" for AI technology.

Europe moved earlier to support AI regulation, and the EU's Artificial Intelligence Act has been in development for several years. On May 11, two committees of the European Parliament passed a draft negotiating mandate on the proposed Artificial Intelligence Act. The draft will reportedly be submitted to a plenary session of the European Parliament for a vote in mid-June. The European Parliament stated that once approved, it would become the world's first regulation on artificial intelligence.

On May 4, the White House announced $140 million in funding for research and development of AI regulatory guidelines. In recent years, the US Congress has successively introduced the Malicious Deep Fake Prohibition Act of 2018, the Deepfake Report Act of 2019, and the Identifying Outputs of Generative Adversarial Networks Act, which require that false content generated by artificial intelligence carry embedded digital watermarks and that altered audio or video content be clearly identified.

For now, both the technology and its commercial applications are still at an early stage of development, and a more inclusive and prudent attitude should be adopted to give enterprises more trust and room to develop.

Produced by: Nandu Big Data Research Institute

Network Content Ecological Governance Research Center

Written by: Nandu reporter Zhang Yuting




