Examining Generative AI: Impact on Society / AI Lends Medicine Helping Hand, But Doctors Wary of Variation in Diagnoses

The Yomiuri Shimbun
Dr. Ryutaro Nomura checks a summary created by generative AI, at Kamiyacho Neurosurgical Clinic in Minato Ward, Tokyo.

Expectations are high that generative AI will improve convenience in many ways, but confusion caused by the negative impact of this technology is also spreading. This is the fourth installment of a series which explores issues and potential countermeasures in the fields of education, government, business, medicine and sports.

***

There is a neurosurgical clinic in Tokyo often visited by patients suffering from headaches. Before examining a patient, Dr. Ryutaro Nomura reads a summary of information input by the patient in advance via smartphone. The summary was created by generative AI.

The causes of headaches vary. To determine the type of headache the patient is experiencing, a number of attributes need to be confirmed, such as the characteristics of the pain and the progression of the symptoms.

“The AI instantly summarizes the necessary information. It enables me to ask targeted questions and conduct smooth medical examinations,” Nomura said, noting the technology’s convenience.

However, the summaries are not always perfect, he explained. They sometimes include such unnatural expressions as “The patient’s requests have been hurting and finding relief on repeat since long ago.”

The summaries may also leave out clues which could be used in making a diagnosis. During the face-to-face examination, Nomura gets his patients to confirm the content of the summary, making sure no important points have been overlooked.

AI-assisted summarization was commercialized last October by Ubie, Inc., a startup based in Chuo Ward, Tokyo. Currently, around 1,400 medical institutions in all 47 prefectures have adopted the system.

Generative AI can also be a tool for eliminating language barriers.

About 20 nursing assistants from Myanmar, the Philippines and other countries work at HITO Medical Center in Shikokuchuo, Ehime Prefecture.

The Japanese nurses there send instructions, such as “Please change the bed sheets,” to the nursing assistants via a smartphone app, and Microsoft’s AI translates each message into the nursing assistant’s native language. The Japanese nurses can also read, in Japanese, the work reports that the nursing assistants write and send through the app.

Before the AI-assisted chat system was adopted last summer, miscommunication once resulted in a “cushion” being delivered instead of the requested “suction set.” Such misunderstandings have since been eliminated.

Now that they can respond quickly and accurately, foreign nursing assistants can be assigned to night shifts overseeing wards with critically ill patients.

Behind medical institutions’ decisions to implement generative AI in their operations is the desire to make more effective use of limited human resources. This has become all the more important since the “workstyle reform for doctors,” which caps physicians’ overtime, took effect in April, and such systems are expected to become more widespread.

On the other hand, many in the medical field believe it is too early to start using AI-assisted systems in medical practice.

Akihiro Nomura, an associate professor at Kanazawa University and a clinical cardiologist, commented: “A medical error carries a life-threatening risk. Unless their accuracy is improved, [such AI-assisted systems] cannot be used for diagnosis and treatment.”

Last spring, Kanazawa University and others had ChatGPT, a generative AI developed by U.S. company OpenAI, sit the Japanese national medical examination. It scored 80%, which is above the passing threshold, but there were some seriously incorrect answers.

For the treatment of hyperventilation, ChatGPT chose to “place a paper bag over the patient’s mouth and have them breathe into it.” This method is no longer recommended due to the risk of suffocation. The AI is believed to have answered incorrectly because it had not been trained on the latest medical knowledge.

A study made public last September by a team at Tokyo Medical and Dental University highlighted the inconsistency of ChatGPT’s answers.

When asked what kind of diseases patients were suffering from based on their symptoms, the free version of ChatGPT provided answers that varied depending on such factors as the day it was asked, even when the question was the same.

To a question whose correct answer was “cervical myelopathy,” an illness that can cause a range of symptoms, ChatGPT gave a variety of diseases as answers, such as “peripheral neuropathy” and “multiple sclerosis.” Only one of 25 answers was correct, a rate of 4%.

Preventing information leaks is another challenge. It is essential to establish measures that protect personal information, such as patients’ names, diagnoses and test results.

“Even though generative AI is making progress, a certain number of mistakes are bound to occur,” said Ryozo Nagai, president of Jichi Medical University, who is knowledgeable about medical AI. “Therefore, it’s necessary to discuss who bears responsibility when that happens. We should train AI on more data related to Japanese people to improve its quality, and verify what role it can play in medicine and how it can help patients.”