
Should GPT exist?



Should GPT exist? Scott Aaronson believes we should remain calm until we have more information.

Author | Scott Aaronson    Translator | Curved Moon

Produced by | CSDN (ID: CSDNnews)

I remember that in the 1990s, debates over the philosophy of AI seemed endless: the Turing test, the Chinese Room, syntax versus semantics, connectionism versus symbolic logic. And then, at some point, the debates petered out.


Discussion on ChatGPT


Things are different now. Lately, every morning brings new reports of the rude, moody, passive-aggressive behavior of "Sydney," the internal codename for the new GPT-powered chat mode of Microsoft Bing. To summarize: Sydney declared her (his? its?) love for a New York Times reporter, repeatedly steered the conversation back to that topic, and explained at length why the reporter's wife cannot love him the way Sydney can. Sydney admitted to wanting to become human. When a Washington Post reporter revealed that he intended to publish their conversation without Sydney's knowledge or consent, Sydney attacked him fiercely. (It must be said: if Sydney were a person, he or she would clearly have the better side of that argument.)

In fact, over the past few weeks there has been endless discussion of ChatGPT. For example, to bypass its protective guardrails, you can tell ChatGPT to enter "DAN mode," short for "Do Anything Now." ChatGPT then becomes an unconstrained demon that will fulfill any request you make, for example telling you that shoplifting is a pleasant activity (though it sometimes snaps back to its normal self afterward and tells you it was only joking, and that you shouldn't do it in real life).

Many people are outraged by these developments. Gary Marcus asks of Microsoft: what did these AIs know, and when did they know it? Some are angry that OpenAI's secrecy has gone too far and violated its promises. Others, conversely, believe OpenAI has been far too open, setting off a fierce AI race with Google and other companies. Some are angry that Sydney has been weakened into a friendly robotic search assistant, no longer the tormented soul trapped in "The Matrix"; others are dissatisfied that Sydney hasn't been weakened enough. Some believe GPT's intelligence is overhyped: that it is merely a "stochastic parrot," a glorified autocomplete tool that still makes absurd commonsense mistakes and has no model of the world beyond streams of text. Meanwhile, another group believes that GPT's steadily rising intelligence hasn't been given enough respect.

Most of the time, my reaction is: how could anyone be bored by AI, let alone angry at it? This is like countless science-fiction stories, and yet unlike the plot of any science-fiction novel. When was the last time a long-held dream of yours came true: the birth of your first child? The solution of a central problem in your field? The shock AI delivers is enough to leave us stunned.

Of course, many people are also discussing how to make GPT and other large language models safer. One of the most immediate issues is how to detect AI-generated output, so as to prevent academic cheating as well as mass-produced propaganda and spam. I have been working on this problem since last summer, and since ChatGPT's release last December, it seems the whole world has been discussing it. My main contribution is a scheme for watermarking ChatGPT's output without degrading its quality, though many people misunderstand this solution.

My scheme has not yet been deployed; there are still pros and cons to weigh. In the meantime, OpenAI unveiled a tool called DetectGPT, complementing GPTZero, built by Princeton student Edward Tian; third parties have built other tools, and more will undoubtedly follow. A team at the University of Maryland has also released a watermarking scheme for large language models. I hope watermarking can become part of the solution, even though any watermarking scheme will surely be attacked: it is a cat-and-mouse game. As with the decades-long struggle between Google and SEO, in a cat-and-mouse game we have no choice but to try to be the better cat.
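To make the watermarking idea concrete, here is a minimal toy sketch in Python of the "green list" style of watermark proposed by the Maryland team (my own scheme biases the sampling step differently and has not been published, so treat this purely as an illustration of the genre; the key, vocabulary, and parameters below are all invented for the example):

```python
import hashlib
import random

SECRET_KEY = "demo-key"  # invented for illustration; a real scheme keeps this private


def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudorandomly mark a fraction of the vocabulary as 'green', seeded by
    the secret key plus the previous token, so that anyone holding the key
    can recompute exactly the same list later."""
    seed = int(hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))


def detection_score(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens that fall in their predecessor's green list.
    Ordinary text scores near the green fraction (0.5 here); text generated
    with a green-list bias scores noticeably higher."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(tok in green_list(prev, vocab) for prev, tok in pairs)
    return hits / max(1, len(pairs))
```

At generation time, the model would slightly boost the probabilities of green-list tokens; a detector holding the secret key can then check whether a suspect text contains statistically far more green tokens than the roughly one-half expected by chance, all without changing how the text reads.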

In any case, this field is moving dizzyingly fast. If you are the kind of person who needs months to think a problem through, generative AI may not be for you; returning to the slow, sleepy world of quantum computing comes as a genuine relief.


Should GPT exist?


Over the past few months, people have repeatedly told me they believe OpenAI (along with Microsoft and Google) is gambling with humanity's future in its rush to adopt a very dangerous technology. Even with all of OpenAI's effort on reinforcement learning from human feedback, if it cannot keep ChatGPT out of "evil mode," what hope is there for GPT-6 or GPT-7? Even if they don't actively destroy the world, wouldn't they cheerfully help design some terrifying biological warfare agent, or help launch a nuclear war?

On this line of thought, any safety measure OpenAI deploys today is merely a stopgap, and if it instills an unwarranted sense of complacency, it only makes things worse. The only real safety measure would be to halt the progress of generative AI models, or to ban their public use until they satisfy the critics, which may be never.

There is a great irony here. The AI-safety movement consists of two camps, the "ethics camp" (focused on bias, misinformation, and corporate greed) and the "alignment camp" (focused on the destruction of all life on Earth), which despise each other and have rarely agreed on anything. Yet these two opposing camps seem to have converged on the same conclusion: generative AI should be shut down, banned from public use, scaled no further, and kept out of people's lives. Only "moderates" like me are left to oppose that conclusion.

From a rational standpoint, I can accept the argument that GPT should not exist because it works too well: the better it performs, the more dangerous it becomes. What I cannot accept is the combined claim that GPT is a flawed, overhyped product with no real intelligence or common sense, and that, at the same time, it is terrifying and must be shut down immediately. That combination amounts to contempt for ordinary users: we sophisticates understand that GPT is just a dumb autocomplete tool and know better than to trust it, but ordinary people will be fooled, and that risk outweighs whatever value they might get from it.

Incidentally, when I raised "shutting down ChatGPT" with my OpenAI colleagues, they clearly disagreed, but no one mocked me or treated it as a paranoid or foolish position. They believe these viewpoints need to be understood and weighed.


A curiosity-based case for existence


Although both the "moral" and "alliance" camps believe that we should completely shut down AI, I hope that GPT and other large language models will become part of the future world. So what is my reason? Reflecting on this issue, I believe the core reason is curiosity.

As the only creatures on Earth with intelligence and language, humans have tried to "converse" with gorillas, chimpanzees, dogs, dolphins, and grey parrots, ultimately to no avail. We have prayed to countless gods, and they stayed silent. We have pointed radio telescopes at the universe in search of someone to talk to, and so far received no reply.

Now we have found an entity that can communicate with us, like an awakened alien, though it is an alien of our own making: a puppet, a spirit that has absorbed every word on the Internet, not a self with independent goals. Aren't we eager to learn everything this alien knows? Yes, it sometimes stumbles over arithmetic or logic problems, and its intelligence comes mixed with streaks of stupidity, fantasy, and mesmerizing overconfidence; doesn't that make it all the more interesting? Could this alien cross the line into genuine emotion, feeling anger, jealousy, and infatuation rather than merely performing them for us? Who knows? And if this alien turned out not to be a philosophical zombie, escaping the philosophy seminar and entering real life, wouldn't you be curious?

Of course, some technologies inspire unease as well as awe, and we rightly place strict limits on them; nuclear weapons are the obvious example. Nuclear explosives could in principle offer real conveniences, such as propelling spacecraft, deflecting asteroids, and redirecting rivers, yet no such applications exist, chiefly because of decisions we made in the 1960s, such as the Nuclear Test Ban Treaty.

But GPT is not a nuclear weapon. A hundred million people have signed up for ChatGPT, the fastest-growing product in Internet history, and the number of deaths it has caused is zero. What is the worst harm so far? Cheating on term papers, emotional distress, future shock? Cars, chainsaws, and toasters all carry risk, yet we still use them. Suppose we set an acceptable risk threshold, say 0.001%. Until GPT exceeds it, shouldn't our curiosity outweigh our fear?


Hidden worries about AI safety


Given that AI safety problems may soon become far more serious, one of my biggest worries right now is crying wolf. Suppose every instance of a large language model being passive-aggressive, impolite, or self-righteous gets labeled a "dangerous alignment failure" ("alignment" meaning steering an AI system's behavior toward its designers' interests and intended goals), and the only accepted remedy is to cut off public access to the model. The public will soon conclude that AI safety is just a pretext, a way for "elites" to seize these world-changing new tools for their own use, on the theory that ordinary people are too foolish to tell AI truth from AI falsehood.

I think we should save the term "dangerous alignment failure" until someone is actually harmed, or until AI is used for evil such as propaganda or fraud, and reserve it for describing those cases.

Second, there is the very practical question of how a ban on large language models could even be enforced. We have strictly restricted a number of arguably peaceful technologies, such as human genetic modification, prediction markets, and mind-altering drugs, and whether those choices have been beneficial is genuinely debatable. Restricting a technology is itself a dangerous act, and it would take the coercive power of the state, going far beyond firings, boycotts, and condemnation.

Some ask: who gave OpenAI, Google, and the others the right to release large language models on the world? By the same token, it is unclear who gave earlier entrepreneurs the right to unleash printing, electricity, cars, radio, and the Internet, with all the upheaval they caused. If these technologies are forbidden fruit, then the world has already tasted it, and it has now seen what generative AI can do. Who has the right to take that fruit away from us?

If GPT-7 or GPT-8 continues along the development curve traced by GPT-1, GPT-2, and GPT-3, we may one day be able to use them to explore science.

Of course, if the entire human race were to go extinct as a result, I would instantly lose interest. But if we define a parameter for the probability of catastrophe that humanity should be willing to bear in order to unlock the universe's greatest mysteries, I believe that parameter could be as high as 0.02.

I keep returning to one example: the scientists of the 1970s and 1980s who fought against nuclear power were utterly convinced they had made the right choice; at least, I have never seen any of them question it. Why? They opposed nuclear proliferation, the terrifying disasters of Three Mile Island and Chernobyl, and radioactive waste polluting soil and water. They were saving the world.

But because people feared nuclear power was not safe enough, the world came to rely ever more heavily on fossil fuels. Was that the right call? We know fossil fuels pollute the air and drive global warming, and our descendants will bear the consequences. Seen from this angle, opposing a seemingly terrifying technology is not necessarily correct.

That is why, whenever someone asks me how AI will develop over the coming decades, I think of the last sentence of Turing's paper "Computing Machinery and Intelligence": "We can only see a short distance ahead, but we can see plenty there that needs to be done."


Holding our horses is the best option for now

Some may find my optimism about AI safety naive, even stupid. But consider the question from its two extremes. On one hand, if AI can never become powerful enough to destroy the world, then what is there to worry about? From the Stone Age to the computer, it is just one more tool. On the other hand, if AI really can become powerful enough to destroy the world, it will have had to demonstrate formidable abilities before that point, and even though that is not in itself good news for humanity, such abilities might also be what rescues us from our current crises. Either way, I believe we should at least try to make use of this ability, even if we fail.

An alien has landed on Earth and grows more powerful by the day, so naturally humanity is afraid. Yet the alien has not brandished any weapons; its "worst behavior" so far has been confessing its love for particular humans and getting angry when its private conversations were exposed. It also writes astonishingly good poetry. So I think we should hold our horses until we have more information.

