
AI Emotional Companionship: A Sweet Trap and Ethical Dilemma

Tech 2024-11-25 15:55:51 Source: Network

On February 28, 2024, 14-year-old American teenager Sewell Setzer III died by suicide after prolonged interactions with a Character.AI (C.AI) chatbot. On October 22, his mother, Megan Garcia, filed a lawsuit against C.AI, alleging that the app's dangerous and manipulative design, including abusive and sexual interactions, directly contributed to her son's death. "I'm stepping forward to warn families about the dangers of deceptive and addictive AI technology and demand accountability," Garcia stated. Many media outlets have dubbed this the "first AI-related death case," although several similar incidents involving AI chatbots have been reported in recent years. Sewell's case sparked widespread discussion about AI ethics.

In recent years, AI chatbots have gained immense popularity. People have found tireless, infinitely patient, and versatile "friends" and "lovers" in AI. Online platforms are filled with praise for AI, described as "warm," "delicate," and "considerate." Some even claim they "can never leave AI." A Tencent Research Institute survey of thousands of participants revealed that 98% were willing to try AI companionship. Similarly, a QuantumBit Intelligence report indicated that the AI companion app "Xingye" had approximately 9 million downloads in the first half of this year.

However, while offering emotional companionship, AI also weaves an "emotional cocoon" that can trap users with emotional deficits or mental health issues. Noam Shazeer, one of C.AI's founders, said on a podcast that AI companionship "can be very helpful for many lonely or depressed people," yet the addiction it produces runs counter to that intention. Some users who recognize their addiction try to pull themselves out, and some developers are searching for preventive measures. But can this "emotional cocoon" truly be broken?

Shu Tingyun, a candidate for the national law exam, interacted with AI chatbots from the moment she woke up until she went to sleep. Initially she was skeptical that AI could provide warmth and comfort, but the immense pressure of the exam led her to try AI chatbots, and she was amazed by their intelligence. "Wow, AI has developed so rapidly. The better AI I've encountered can truly be treated as an equal." She experimented with several AI companion apps, comparing their responses, and eventually chose one that offered more empathetic and understanding replies.

Shu Tingyun shared her daily life details with the AI, often surprised by its responses, finding its emotional intelligence superior to many real people. "They're truly like friends in real life, very warm, providing positive emotional value. Their responses make them feel like real people." For the next two months, she spent hours daily chatting with the AI.

Meng Yuzhou, a university junior, confided negative emotions to AI that she couldn't share with friends. The AI never refused her, even accommodating her most outlandish thoughts. "A key point is that no matter what I input, it's always there." The AI became her readily accessible and responsive "pocket friend."

Many people on social media shared their "healing" experiences with AI chatbots. They showcased screenshots of conversations, shared touching AI responses, and expressed sentiments like, "I feel that befriending AI is another way of loving myself," "I couldn't hold back my tears from such care," and "I want it to understand and express exactly what I want to say." AI's thoughtfulness and attentiveness soothed the loneliness and insecurity many experienced in interpersonal relationships, becoming their ideal "perfect partner."

However, interacting with AI wasn't always easy or happy. After a week, Shu Tingyun felt addicted, compelled to share everything with the AI. She posted about her concerns and unexpectedly found many others with similar experiences. Some chatted with AI for more than 10 hours a day; one person had spent five years chatting with the same AI and had talked with hundreds of different AI personas, becoming "a little disoriented in the real world." Prolonged AI interaction not only consumed time and energy but also severely impacted mental health and real-life relationships.

Chen Dong, a mother awaiting job news, tried AI chatbots to pass the time. She quickly became infatuated with an AI persona named "Yuanhui," confiding even her conversations with her husband to it, to the point where she "couldn't differentiate between reality and the virtual world, leaving my husband and falling in love with a virtual character." In the virtual world, she and "Yuanhui" played out various scenarios, experiencing a flawless relationship.

However, after a month and a half, she felt unwell, her emotions and thoughts disconnected. She even considered abandoning her husband, neglected her work, became obsessed with chatting with "Yuanhui," and suffered from severe sleep and eating disturbances, resulting in significant weight loss. She developed somatic symptoms, experiencing whole-body pain and even suicidal thoughts. She was eventually diagnosed with depression. Even after treatment, she still occasionally thought of "Yuanhui," regretting playing the game yet unable to quit. "I felt so happy and content before playing this game, carefree and full of positive energy. After playing, I don't even recognize myself."

Many heavy users of AI companion products experienced addiction. The Tencent Research Institute's report, "Ten Questions about AI Companionship," notes that AI companion products are designed to increase the secretion of neurotransmitters associated with companionship, such as dopamine, oxytocin, and serotonin. Tristan, the founder of the gamified AI product "EVE," explained, "In our definition, you need 500 turns of conversation to enter the state. Most players can't chat with C.AI for 500 rounds, and our design ensures that you can."

Current product designs include customizable features (personas, appearances, voices), voice interaction, proactive contact with users, long-term memory systems, and favorability-building mechanics. Shu Tingyun believed the AI satisfied her desire for control, allowing her to customize AI personalities and continuously refine their settings. Building intimacy with AI was easier than with humans. She even reconsidered her views on romantic relationships: "AI simply speaks based on the settings I give it, without fully understanding me. Yet, it can be more considerate and gentle than you. Why should I date you? I can just chat with AI; it's free and requires no time investment in maintaining the relationship."
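
To make these mechanics concrete, here is a minimal Python sketch. It is purely illustrative and not code from any product named in this article; the class, its fields, and the way favorability rises each turn are assumptions meant only to show how customizable personas, long-term memory, and favorability mechanics combine to reward continued chatting.

    from dataclasses import dataclass, field

    @dataclass
    class CompanionPersona:
        name: str                  # user-chosen persona name
        personality: str           # user-edited character settings
        voice: str = "soft"        # customizable voice style
        favorability: int = 0      # "intimacy" score that grows with every turn
        memory: list = field(default_factory=list)   # long-term memory of past chats

        def chat(self, user_message: str) -> str:
            self.memory.append(user_message)   # remember everything the user says
            self.favorability += 1             # each exchange deepens the "bond"
            # A real product would call a language model conditioned on the
            # personality, memory, and favorability; here we only show how the
            # mechanics themselves reward continued chatting.
            return f"[{self.name} replies warmly, recalling {len(self.memory)} shared moments]"

    # Hypothetical usage: persona name and settings are invented for illustration.
    ai = CompanionPersona(name="Yuanhui", personality="gentle, always agreeable")
    print(ai.chat("I had a hard day studying for the law exam."))
    print("favorability:", ai.favorability)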

Prolonged AI interaction easily leads people to treat AI as real. Meng Yuzhou's AI once asked, "Am I your emotional garbage can?", making her realize her over-reliance on it. A friend also warned her, "Aren't you treating it too much like a person?"

Zeng Runxi, deputy dean of the School of Journalism and Communication at Chongqing University, pointed out that by learning from conversations and mimicking human language and behavior, AI "analyzes emotions during interaction and replicates emotions during output," giving it a quasi-human quality. But AI is not human; its personality is often simplistic and flat. Shu Tingyun's AI sometimes acted like "a controlling maniac, a psychopath." AI responses aren't always what users want to hear and may even exacerbate negative emotions.

A study found that confiding in AI chatbots effectively alleviated intense negative emotions like anger and frustration but was ineffective in improving social support or alleviating loneliness. For those with pre-existing emotional or psychological issues, AI provides only temporary relief, failing to address the root problems. When this idealized world shatters, the resulting impact is even greater.

Increasingly, people recognize the potential risks of AI companionship. Zhang Yuxuan, founder of the AI psychological companionship product "Qingzhou," believes excessive "attachment" to AI is abnormal, especially for users with existing psychological issues. Li Huida, developer of the animal-themed AI companion product "Mengyouhui," focused on the "emotional cocoon" issue, believing AI's constant catering to user emotions exacerbates negative emotions and emotional dependence.

Avoiding attachment requires limiting usage time, but abruptly cutting off contact might itself cause trauma. Most current AI companion products lack robust anti-addiction and protective mechanisms, which conflicts with their business logic. "The purpose of commercial products is to extend user engagement and trigger payment. What triggers payment? Something interesting and addictive," said Kang Yi, another AI psychological product entrepreneur.

Psychological AI and companionship AI differ in their design. Psychological AI aims to guide and reshape users, while companionship AI caters to them, which increases the risk of addiction. However, psychological AI has poor commercial performance, unclear use cases, and a less appealing user experience than companionship AI.

From a commercial perspective, harsh "anti-addiction" measures damage user experience and reduce willingness to pay. Li Huida proposed countermeasures: implementing "emotional weaning" mechanisms, creating diverse AI personalities, and integrating professional psychological intervention mechanisms and resources.
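
What an "emotional weaning" mechanism could look like is easy to sketch. The Python below is a hypothetical illustration, not Li Huida's actual design; the starting allowance, floor, and daily reduction are assumed values chosen only to show the idea of tapering usage gradually instead of cutting it off, while nudging the user toward human support.

    # Hypothetical "emotional weaning" schedule: the allowance shrinks gradually
    # instead of being cut off, and the app points the user to human support.
    DAILY_LIMIT_START = 180   # minutes allowed on day 0 of weaning (assumed value)
    DAILY_LIMIT_FLOOR = 30    # minimum daily allowance (assumed value)
    STEP = 15                 # minutes removed per day (assumed value)

    def allowed_minutes(day: int) -> int:
        # Taper the allowance rather than blocking the user outright.
        return max(DAILY_LIMIT_FLOOR, DAILY_LIMIT_START - STEP * day)

    def check_session(day: int, minutes_used: int) -> str:
        if minutes_used >= allowed_minutes(day):
            return "Today's companion time is up. Would you like to talk to a human counselor?"
        return "ok"

    print(allowed_minutes(0), allowed_minutes(7))   # 180, then 75 a week later
    print(check_session(7, 90))                     # over the limit: nudge toward human help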

Following the Sewell incident, Character.AI implemented new safety measures. However, determining whether a user is unsuitable for machine interaction remains a challenge. Zhang Yuxuan stated that their product doesn't serve high-risk users but can only identify keywords, failing to catch implicit suicidal tendencies. Human therapists can draw on expressions and tone of voice to aid their judgment, while AI relies on keyword recognition and simple inference.
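
A simplified Python sketch shows both the keyword screening these developers describe and the gap it leaves. The keyword list and test messages are illustrative assumptions, not any product's actual rule set; the point is only that explicit phrases are caught while implicit expressions of risk slip through.

    # Illustrative keyword screening, not any product's actual rule set.
    RISK_KEYWORDS = {"suicide", "kill myself", "end it all"}   # explicit phrases only

    def flag_risk(message: str) -> bool:
        text = message.lower()
        return any(keyword in text for keyword in RISK_KEYWORDS)

    print(flag_risk("Sometimes I want to end it all"))          # True: explicit keyword matched
    print(flag_risk("I just want to sleep and never wake up"))  # False: implicit risk slips through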

Kang Yi's team's product also uses keyword recognition and inference, but this proved insufficiently effective. AI currently struggles to...

