Examining Generative AI: Impact on Society / AI Lends Medicine Helping Hand, But Doctors Wary of Variation in Diagnoses
Dr. Ryutaro Nomura checks a summary created by generative AI, at Kamiyacho Neurosurgical Clinic in Minato Ward, Tokyo.
The Yomiuri Shimbun
6:00 JST, May 17, 2024
Expectations are high that generative AI will improve convenience in many ways, but confusion over the technology's negative impacts is also spreading. This is the fourth installment of a series exploring issues and potential countermeasures in the fields of education, government, business, medicine and sports.
***
There is a neurosurgical clinic in Tokyo often visited by patients suffering from headaches. Before examining a patient, Dr. Ryutaro Nomura reads a summary of information input by the patient in advance via smartphone. The summary was created by generative AI.
The causes of headaches vary. To determine the type of headache the patient is experiencing, a number of attributes need to be confirmed, such as the characteristics of the pain and the progression of the symptoms.
“The AI instantly summarizes the necessary information. It enables me to ask targeted questions and conduct smooth medical examinations,” Nomura said, noting the technology’s convenience.
However, the summaries are not always perfect, he explained. They sometimes include such unnatural expressions as “The patient’s requests have been hurting and finding relief on repeat since long ago.”
The summaries may also leave out clues which could be used in making a diagnosis. During the face-to-face examination, Nomura gets his patients to confirm the content of the summary, making sure no important points have been overlooked.
AI-assisted summarization was commercialized last October by Ubie, Inc., a startup based in Chuo Ward, Tokyo. Currently, around 1,400 medical institutions in all 47 prefectures have adopted the system.
Generative AI can also be a tool for eliminating language barriers.
About 20 nursing assistants from Myanmar, the Philippines and other countries work at HITO Medical Center in Shikokuchuo, Ehime Prefecture.
The Japanese nurses there send instructions, such as "Please change the bed sheets," to nursing assistants via a smartphone app, and Microsoft's AI translates each message into the recipient's native language. The Japanese nurses can also read, in Japanese, the work reports that the nursing assistants write and send through the app.
Since the AI-assisted chat system was adopted last summer, such misunderstandings have been eliminated. Previously, a miscommunication had resulted in, for example, a "cushion" being delivered instead of the requested "suction set."
Foreign nursing assistants can now be assigned to the night shift overseeing wards with critically ill patients, as they are able to respond quickly and accurately.
Behind medical institutions' decisions to implement generative AI in their operations is the desire to make more effective use of limited human resources. This has become more important since the "workstyle reform for doctors," which caps physicians' overtime, took effect in April, and similar systems are expected to become widespread.
On the other hand, many in the medical field believe it is too early to start using AI-assisted systems in medical practice.
Akihiro Nomura, an associate professor at Kanazawa University and a clinical cardiologist, commented: “A medical error carries a life-threatening risk. Unless their accuracy is improved, [such AI-assisted systems] cannot be used for diagnosis and treatment.”
Last spring, Kanazawa University and others had ChatGPT, a generative AI developed by U.S. company OpenAI, sit the Japanese national medical examination. It scored 80%, which is above the passing threshold, but there were some seriously incorrect answers.
For treatment of hyperventilation, ChatGPT chose to "place a paper bag over the patient's mouth and have them breathe into it." This method is no longer recommended due to a risk of suffocation. The AI is believed to have answered incorrectly because it had not been trained on the latest medical knowledge.
A study made public last September by a team at Tokyo Medical and Dental University highlighted the inconsistency of ChatGPT’s answers.
When asked what kind of diseases patients were suffering from based on their symptoms, the free version of ChatGPT provided answers that varied depending on such factors as the day it was asked, even when the question was the same.
To a question whose correct answer was “cervical myelopathy,” an illness that can cause various symptoms, ChatGPT answered with a variety of diseases, such as “peripheral neuropathy” and “multiple sclerosis,” and only one in 25 answers was correct, a rate of 4%.
Preventing leaks of information is another challenge. It is essential to establish measures to prevent leaks of personal information, such as the name of the patient, any diseases they have and their test results.
“Even though generative AI is making progress, a certain number of mistakes are bound to occur,” said Ryozo Nagai, president of Jichi Medical University, who is knowledgeable about medical AI. “Therefore, it’s necessary to discuss responsibility when that happens. We should train AI on more data related to Japanese people to improve its quality, and verify what role it can take in medicine and how it can help patients.”