The promises and perils of AI in medicine

By Alden Woods
Photo: Adobe Stock.

UW experts Dr. Gary Franklin and Lucy Lu Wang discuss how AI could help and hurt healthcare

In most doctors’ offices these days, you’ll find a pattern: Everybody’s Googling, all the time. Physicians search for clues to a diagnosis, or for reminders on the best treatment plans. Patients scour WebMD, tapping in their symptoms and doomscrolling a long list of possible problems.

But those constant searches leave something to be desired. Doctors don’t have the time to sift through pages of results, and patients don’t have the knowledge to digest medical research. Everybody has trouble finding the most reliable information.

Optimists believe artificial intelligence could help solve those problems, but the bots might not be ready for prime time. In a recent paper, Dr. Gary Franklin, a research professor in the UW Department of Environmental & Occupational Health Sciences (DEOHS) and the UW School of Medicine, described a troubling experience with Google’s Gemini chatbot. When Franklin asked Gemini for information on the outcomes of a specific procedure — a decompressive brachial plexus surgery — the bot gave a detailed answer that cited two medical studies, neither of which existed.

Dr. Gary Franklin, research professor in DEOHS and the School of Medicine.

Franklin wrote that it’s “buyer beware when it comes to using AI Chatbots for the purposes of extracting accurate scientific information or evidence-based guidance.” He recommended that AI experts develop specialized chatbots that pull information only from verified sources.

One expert working toward a solution is Lucy Lu Wang, a UW assistant professor in the Information School who focuses on making AI better at understanding and relaying scientific information. Wang has developed tools to extract important information from medical research papers, verify scientific claims and make scientific images accessible to blind and low-vision readers.

UW News sat down with Franklin and Wang to discuss how AI could enhance health care, what’s standing in the way and whether there’s a downside to democratizing medical research.

Each of you has studied the possibilities and perils of AI in health care, including the experiences of patients who ask chatbots for medical information. In a best-case scenario, how do you envision AI being used in health and medicine? 

Gary Franklin: Doctors use Google a lot, but they also rely on services like UpToDate, which provide really great summaries of medical information and research. Most doctors have zero time and just want to be able to read something very quickly that is well documented. So from a physician’s perspective trying to find truthful answers, trying to make my practice more efficient, trying to coordinate things better — if this technology could meaningfully contribute to any of those things, then it would be unbelievably great. 

I’m not sure how much doctors will use AI, but for many years, patients have been coming in with questions about what they found on the internet, like on WebMD. AI is just the next step of patients doing this, getting some guidance about what to do with the advice they’re getting. As an example, if a patient sees a surgeon who’s overly aggressive and says they need a big procedure, the patient could ask an AI tool what the broader literature might recommend. And I have concerns about that.

Lucy Lu Wang, assistant professor in the Information School.

Lucy Lu Wang: I’ll take this question from the clinician’s perspective, and then from the patient’s perspective.

From the clinician’s perspective, I agree with what Gary said. Clinicians want to look up information very quickly because they’re so taxed and there’s limited time to treat patients. And you can imagine if the tools that we have, these chatbots, were actually very good at searching for information and very good at citing accurately, that they could become a better replacement for a type of tool like UpToDate, right? Because UpToDate is good, it’s human-curated, but it doesn’t always contain the most fine-grained information you might be looking for. 

These tools could also potentially help clinicians with patient communication, because there’s not always enough time to follow up or explain things in a way that patients can understand. It’s an add-on part of the job for clinicians, and that’s where I think language models and these tools, in an ideal world, could be really beneficial. 

Lastly, on the patient’s side, it would be really amazing to develop these tools that help with patient education and help increase the overall health literacy of the population, beyond what WebMD or Google does. These tools could engage patients with their own health and health care more than before.

Excerpted from the original post here.

For more information or to reach the researchers, email Alden Woods at acwoods@uw.edu. 




