LOS ANGELES — ChatGPT has been a gamechanger in the world of artificial intelligence, even as some argue that the tech is going too far. The AI-powered chatbot has been helping with everyday tasks like writing emails, or even creating original jokes. In the field of healthcare, it might also help patients with cirrhosis and liver cancer. A new study explains how the program can generate easy-to-understand information about basic knowledge, lifestyle interventions, and treatments for these conditions.
“Patients with cirrhosis and/or liver cancer and their caregivers often have unmet needs and insufficient knowledge about managing and preventing complications of their disease,” says Brennan Spiegel, MD, MSHS, director of Health Services Research at Cedars-Sinai and co-corresponding author of the study, in a media release. “We found ChatGPT—while it has limitations—can help empower patients and improve health literacy for different populations.”
People with cirrhosis are at a much higher risk of developing liver cancer, and the condition often requires complex medical interventions to keep it under control.
“The complexity of the care required for this patient population makes patient empowerment with knowledge about their disease crucial for optimal outcomes,” says co-corresponding author Alexander Kuo, MD, medical director of Liver Transplantation Medicine at Cedars-Sinai. “While there are currently online resources for patients and caregivers, the literature available is often lengthy and difficult for many to understand, highlighting the limited options for this group.”
ChatGPT could help reduce these barriers, providing patients with a personalized AI education model to teach them about their disease. To make sure the model could do the job right, investigators “asked” ChatGPT 164 common questions across five categories, twice. The categories included basic knowledge, diagnosis, treatment, lifestyle, and preventive medicine. Two liver transplant specialists then graded the chatbot’s answers.
What did the answers reveal?
- ChatGPT answered roughly 77 percent of the questions correctly, delivering highly accurate answers to 91 questions across the categories.
- According to the specialists, 75 percent of the responses for basic knowledge, treatment, and lifestyle were graded as either comprehensive or correct but inadequate.
- Responses categorized as “mixed with correct and incorrect data” were 22 percent for basic knowledge, 33 percent for diagnosis, 25 percent for treatment, 18 percent for lifestyle, and 50 percent for preventive medicine.
Overall, researchers believe the model could be a viable option for supplemental education while someone is under a doctor’s supervision, though it doesn’t replace a physician entirely.
“More research is still needed to better examine the tool in patient education, but we believe ChatGPT to be a very useful adjunctive tool for physicians—not a replacement—but adjunctive tool that provides access to reliable and accurate health information that is easy for many to understand,” explains Spiegel. “We hope that this can help physicians to empower patients and improve health literacy for patients facing challenging conditions such as cirrhosis and liver cancer.”
The findings appear in the journal Clinical and Molecular Hepatology.