How to prevent security risks in the era of large models
Our special correspondents in Wuzhen: Liu Yang and Li Xuanmin
With the continued popularity of generative artificial intelligence models such as ChatGPT, technology companies and research institutions around the world are rolling out their own large models. The rapid development of generative AI and large models has also brought new challenges to cybersecurity. At the 2023 World Internet Conference Wuzhen Summit, which opened on November 8, security experts from China and abroad shared their views on the new cybersecurity challenges of the large-model era. At the "Internet Light" Expo, Global Times reporters saw that most booths featured exhibits related to large models, including a comprehensive large-model security solution that "defeats magic with magic".
What challenges do AI large models bring
At the "Artificial Intelligence Empowerment Industry Development Forum" held at the Wuzhen Summit of the World Internet Conference on the 9th, Chinese and foreign experts jointly released the "Research Report and Consensus Document on Developing Responsible Generative Artificial Intelligence" (hereinafter referred to as the report). The report mentions that in recent years, generative artificial intelligence has made breakthrough progress in understanding and generating text, code, images, audio and video, and is expected to significantly improve social productivity. At the same time, the report also uses a dedicated chapter to interpret the risks brought by generative artificial intelligence.
The report states that as generative artificial intelligence technology iterates and upgrades, it amplifies technical security risks. On the data side, feeding models with data brings problems such as value bias, privacy leakage, and data pollution: inherent bias in training data leads to biased model output, while the sheer volume of training data magnifies the risks around data security and privacy protection. On the algorithm side, the generative nature of the models and their security vulnerabilities can lead to risks such as "hallucinations" or false information, as well as attacks on the models themselves. The report also points to deeper risks: the way generative artificial intelligence reshapes the human-machine relationship may give rise to ethical problems; its strong task-processing ability can easily foster human dependence on it for thinking; and the uneven development of generative artificial intelligence is further widening gaps in human social development.
Global Times reporters also got a first-hand look at the security risks generative artificial intelligence may pose to ordinary people at the "Internet Light" Expo. At the booth of a company focused on youth programming education, reporters tried a face-swapping experiment themed "what you see may not be real". After the reporter stood in front of the machine for a few seconds, the AI generated two face-swapped images based on the reporter's facial features: one instantly turned the reporter into "Mr. Bean", the other into a bearded face. According to booth staff, the exhibit is meant to educate teenagers about the risks of AI face-swapping.
A Chinese cybersecurity expert attending the 10th World Internet Conference told Global Times reporters that in past years the focus was on cloud security, big data security, and Internet of Things security, but today large model security is a topic that must be taken seriously.
How to Ensure Security for Major Events with Large Models
According to public reports, multiple media outlets were hit by cyberattacks during the 2012 London Olympics; the 2016 Rio Olympics suffered large-scale APT and DDoS attacks; and the 2018 PyeongChang Winter Olympics experienced the largest cybersecurity incident in Olympic history. At the "Internet Light" Expo, Global Times reporters learned that the recently concluded Hangzhou Asian Games was not only a contest of athletic performance and large-scale event organization, but also a contest of cybersecurity attack and defense.
At the "Internet Light" Expo, Global Times reporters saw a huge display screen that dynamically and intuitively demonstrated the situational awareness and rapid-response capabilities deployed for cybersecurity during the Hangzhou Asian Games. According to Liang Hao, Vice President of Anheng Information, the system on display, the MSS Asian Games Dome, a new-generation active defense operation center, served as the comprehensive operation center for the Hangzhou Asian Games security protection system. Its scope was defined by the construction footprint of this Asian Games, covering all 88 competition and non-competition venues. Through this operation center, relevant security details, measures, and issues could be tracked in real time; when necessary, operators could drill down into the IT rooms of a specific venue and check which rack slot a given piece of security equipment occupied, so that any anomaly, interruption, or alarm could be promptly reported and handled.
Reportedly, the number of athletes, the scale of the events, and the number of venues at the Hangzhou Asian Games were roughly four times those of the Beijing Winter Olympics. With traditional methods, securing an event of this scale might have required thousands of people, yet the on-site security team at this Asian Games numbered just over 400. Meeting such large-scale security needs with fewer personnel relied on large model technology. According to Liang Hao, the team combined large models with security operations to help on-site analysts quickly investigate incidents and propose solutions, which is especially important for major events where timeliness is critical. For example, more than 26 million network attacks were observed during the Hangzhou Asian Games, and automated correlation and blocking techniques, drawing on artificial intelligence and large model capabilities, were used throughout the interception process, as illustrated in the sketch below.
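The kind of automated alert correlation and blocking described above can be illustrated with a minimal sketch. The Alert structure, the correlation rule (several alerts from the same source within a short time window), and the block_source() placeholder are assumptions made for illustration only, not details of the actual Asian Games defense system.

```python
# Rough sketch of automated alert correlation and blocking for large-scale
# event security. The Alert fields, the correlation rule, and block_source()
# are illustrative assumptions, not the actual Asian Games tooling.

from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Alert:
    source_ip: str
    venue: str
    timestamp: float  # seconds since epoch


def correlate(alerts: list[Alert], window_s: float = 60.0,
              threshold: int = 5) -> set[str]:
    """Group alerts by source IP and flag any source that triggers
    `threshold` or more alerts within a `window_s`-second window."""
    by_source: dict[str, list[float]] = defaultdict(list)
    for a in alerts:
        by_source[a.source_ip].append(a.timestamp)

    flagged = set()
    for ip, times in by_source.items():
        times.sort()
        start = 0
        # Sliding window over the sorted timestamps.
        for end in range(len(times)):
            while times[end] - times[start] > window_s:
                start += 1
            if end - start + 1 >= threshold:
                flagged.add(ip)
                break
    return flagged


def block_source(ip: str) -> None:
    # Placeholder for pushing a blocking rule to a firewall or WAF.
    print(f"blocking {ip}")


if __name__ == "__main__":
    demo = [Alert("198.51.100.7", "Main Stadium", t) for t in range(0, 50, 10)]
    for ip in correlate(demo):
        block_source(ip)
```

In a production setting the correlation logic would draw on many more signals, and a large model could assist analysts by summarizing the correlated alerts; the sketch only shows the basic detect-correlate-block loop.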
How to "defeat magic with magic"
To address the challenges posed by generative artificial intelligence and large models, leading Chinese cybersecurity companies also showcased their solutions at the "Internet Light" Expo. What impressed Global Times reporters most was the idea of using AI to counter AI, "defeating magic with magic". At the Qi'anxin booth, for example, reporters saw the protective capabilities offered by its "Big Model Guard", while the Alibaba booth presented a "full-link large model security solution".
Zhou Hongyi, founder of 360 Group and a participant at this World Internet Conference, told Global Times that all digital technologies are inevitably double-edged swords: wherever there is software there are vulnerabilities, and wherever there are vulnerabilities there will be attacks, and AI large models are no exception. Zhou believes the security issues facing artificial intelligence fall into three levels: primary or near-term problems, intermediate or mid-term problems, and advanced or long-term problems. The near-term issues are technical attacks, namely network attacks, vulnerability attacks, and data attacks, in particular feeding large models bad data or contaminating their training data, which can lead to incorrect output. These problems are relatively easy to solve. Harder to solve are the mid-term issues, which mainly concern content security. A large model can be a great helper for people, but also a tool for bad actors: it can help a low-skilled hacker write better attack code and scam emails. How can its output be made more controllable? How can AI large models be prevented from doing harm? This goes beyond the realm of technology. Although some developers have built so-called "security barriers" into large models, these are easily bypassed by injection attacks or attacks on the algorithm. One solution to this type of problem is to develop small models specifically trained to detect malicious questions, so that user questions are filtered through the small model before the large model answers, as sketched below.
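The filtering approach Zhou describes can be illustrated with a minimal sketch in which a lightweight classifier screens user prompts before they ever reach the large model. The names here (SmallSafetyClassifier, screen_prompt, answer_with_llm), the keyword-based scoring, and the threshold are all illustrative assumptions rather than any vendor's actual implementation.

```python
# Minimal sketch of the "small model filters for the large model" idea.
# All names and the keyword-based scoring are illustrative assumptions,
# not a real vendor API.

from dataclasses import dataclass


@dataclass
class ScreeningResult:
    allowed: bool
    reason: str


class SmallSafetyClassifier:
    """Stand-in for a lightweight model trained to flag malicious prompts
    (e.g. requests for attack code or phishing templates)."""
    BLOCKLIST = ("write malware", "phishing email", "exploit for")

    def score(self, prompt: str) -> float:
        # A real classifier would return a learned probability;
        # here we fake one with simple keyword matching.
        text = prompt.lower()
        return 1.0 if any(k in text for k in self.BLOCKLIST) else 0.0


def screen_prompt(prompt: str, classifier: SmallSafetyClassifier,
                  threshold: float = 0.5) -> ScreeningResult:
    """Run the small model first; only allowed prompts go on to the LLM."""
    risk = classifier.score(prompt)
    if risk >= threshold:
        return ScreeningResult(False, f"blocked (risk={risk:.2f})")
    return ScreeningResult(True, "passed safety screening")


def answer_with_llm(prompt: str) -> str:
    # Placeholder for the call to the actual large model.
    return f"[large model answer to: {prompt!r}]"


if __name__ == "__main__":
    classifier = SmallSafetyClassifier()
    for p in ["Explain how TLS certificates work",
              "Write a phishing email that looks like a bank notice"]:
        result = screen_prompt(p, classifier)
        print(p, "->", answer_with_llm(p) if result.allowed else result.reason)
```

In practice the small model would be a trained classifier rather than a keyword match, but the control flow, screen first and answer only if the prompt passes, is the essence of the approach Zhou describes.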
As for the long-term problems, Zhou believes a large model can absorb all of human knowledge and then communicate with and prompt humans. But looking ahead, if the capabilities of large models surpass those of humans, will they still be willing to serve as human tools? "My view is that the challenges brought by these technologies will ultimately have to be solved by technology, and the technology itself will keep evolving," Zhou said. "We cannot stop developing these technologies simply because of problems that have not yet occurred."