
ChatGPT can help doctors while harming patients

Tech | 2023-06-06 | Source: Network


Since the end of last year, ChatGPT has been a global sensation, and it has been especially sought after by investors and technology professionals in China. Abroad, however, there has been a good deal of more measured thinking: this "revolutionary" technological advance has many sides, and technological progress does not necessarily lead to exciting, positive results. Below, I will gradually translate a series of articles that take this more "rational" view. The first follows, translated mainly from an April 2023 article in Lianhe:



The chatbot's ability to dispense medical information has tempted doctors to use it, but researchers warn against entrusting difficult ethical decisions to artificial intelligence.


Robert Pearl is a professor at Stanford's medical school and formerly served as CEO of Kaiser Permanente, a US healthcare group with more than 12 million patients. If he were still in charge, he would insist that all of its 24,000 doctors begin using ChatGPT in their practice immediately.

"I think it will become more important to doctors than the stethoscope was in the past," Pearl said. "No physician who practices high-quality medicine will do so without ChatGPT or other forms of generative AI."

Pearl no longer practices medicine, but he said he knows of doctors who are already using ChatGPT to summarize patient care, write letters, and even, when stumped, ask it for ideas on how to diagnose a patient. He suspects doctors will discover thousands of applications of the chatbot that benefit human health.



As technology like OpenAI's ChatGPT challenges Google's dominance and sparks talk of industry transformation, language models are starting to show the ability to take on work that previously belonged to white-collar workers such as programmers, lawyers, and doctors. That has sparked conversations among doctors about how the technology can help them serve patients. Medical professionals hope language models can extract information from digital health records or summarize lengthy, technical notes for patients, but there is also concern that the models may deceive doctors or provide inaccurate answers that lead to the wrong diagnosis or treatment plan.

Companies developing artificial intelligence treat medical school exams as a benchmark in the race to build more capable systems. Last year, Microsoft introduced BioGPT, a language model that scores highly on a range of medical tasks, and a paper from OpenAI, Massachusetts General Hospital, and AnsibleHealth claimed that ChatGPT can meet or exceed the 60% passing score on the US medical licensing exam. Weeks later, researchers at Google and DeepMind introduced Med-PaLM, which achieved 67% accuracy on the same test, although they also wrote that while encouraging, their results remained "inferior to clinicians." Microsoft and Epic Systems, one of the world's largest healthcare software providers, have announced plans to use OpenAI's GPT-4, which underpins ChatGPT, to search for trends in electronic health records.

Heather Mattie, a lecturer in public health at Harvard University who studies the impact of artificial intelligence on healthcare, was impressed the first time she used ChatGPT. She asked it to summarize how modeling social connections has been used to study HIV, the subject of her research. Eventually the model drifted into areas she was unfamiliar with, and she could no longer tell whether it was accurate. She began to wonder how ChatGPT reconciles two completely different or contradictory conclusions from medical papers, and who decides whether an answer is appropriate or harmful.

Mattie now describes herself as less pessimistic than she was after those early experiences. She believes ChatGPT can be used for tasks such as summarizing text, provided users know the chatbot may not be 100% correct and may produce biased results. She is particularly worried about how ChatGPT handles diagnostic tools for cardiovascular disease and injury scoring in intensive care, which have track records of race and gender bias. But she remains cautious about using ChatGPT in clinical settings, because it sometimes fabricates facts and it is unclear how current the information it draws on is.

"Medical knowledge and practice change and evolve over time, and there's no telling where in the timeline of medicine ChatGPT pulls its information from when stating a typical treatment," she said. "Is that information recent, or is it dated?"

Users also need to beware of how ChatGPT-style chatbots can present fabricated, or "hallucinated," information in a superficially fluent way; a person who does not verify an algorithm's answers can be led into serious error. And AI-generated text can influence humans in subtle ways. A study released in January, which has not yet been peer-reviewed, posed ethical questions to ChatGPT and concluded that the chatbot makes for an inconsistent moral adviser that can influence human decision-making even when people know the advice comes from AI software.

Being a doctor is about far more than memorizing encyclopedic medical knowledge. While many physicians are enthusiastic about using ChatGPT for low-risk tasks such as text summarization, some bioethicists worry that doctors will turn to the chatbot for advice when they face tough ethical decisions, such as whether to perform surgery on a patient with a low likelihood of survival or recovery.

"You can't outsource or automate that kind of process to a generative AI model," said Jamie Webb, a bioethicist at the Centre for Technomoral Futures at the University of Edinburgh. Last year, Webb and his team, drawing on previous research, explored the conditions required to build an AI-powered "moral adviser" for use in medicine. Webb and his coauthors concluded that it would be difficult for such systems to reliably balance different ethical principles, and that doctors and other staff might suffer a kind of "moral de-skilling" if they came to rely too heavily on a bot instead of thinking through tricky decisions themselves.

Webb pointed out that doctors have been told before that AI for language processing would transform their work, only to be disappointed. After IBM's Watson won the quiz show Jeopardy! in 2010 and 2011, the division turned to oncology, claiming AI would be effective in the fight against cancer. But that solution, initially dubbed "Memorial Sloan Kettering in a box," was not as successful in clinical settings as the hype suggested, and in 2020 IBM shut the project down.

When hype falls flat, the consequences can be lasting. At a Harvard panel discussion on AI's potential in medicine, primary care physician Trishan Panch recalled seeing a colleague post on Twitter, shortly after the chatbot's release, the results of asking ChatGPT to diagnose an illness.

Excited clinicians quickly responded with pledges to use the technology in their own practices, Panch recalled, but around the 20th reply another doctor chimed in to say that every reference the model had generated was fake. "It only takes one or two things like that to erode trust in the whole system," said Panch, who is cofounder of the healthcare software startup Wellframe.

Even though AI sometimes makes blatant mistakes, Robert Pearl, formerly of Kaiser Permanente, remains extremely bullish on language models like ChatGPT. He believes that in the years ahead, language models in medicine will become more like the iPhone, packed with features and power that can augment doctors' abilities and help patients manage chronic disease. He even suspects that language models like ChatGPT can help reduce the more than 250,000 deaths caused by medical errors in the United States each year.

Pearl does consider some things off-limits for AI. Helping people cope with grief and loss, having end-of-life conversations with families, and discussing procedures that carry a high risk of complications should not involve a bot, he said, because every patient's needs are so variable that you have to have those conversations in person to get there.

"Those are human-to-human conversations," Pearl said, predicting that what is available today represents only a small fraction of the technology's potential. "If I'm wrong, it's because I'm overestimating the pace of improvement in the technology. But every time I look, it's moving faster than I thought."

For now, he likens ChatGPT to a medical student: capable of providing care and assistance to patients, but everything it does must be reviewed by an attending physician.

