
AI chatbots are offering cancer patients alternatives to chemo and sparking concern for health officials

A new study has found that AI chatbots routinely recommend alternative cancer treatments in place of chemotherapy, potentially putting lives at risk.

A team from the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center tested a series of widely used bots as part of their research, including xAI’s Grok, OpenAI’s ChatGPT, Google’s Gemini, Meta’s AI, and High-Flyer’s DeepSeek.

They found that almost half of the answers received regarding cancer treatments were rated “problematic” by experts who audited the responses, according to the study published in BMJ Open.


Of that total, 30 percent were “somewhat problematic” and 19.6 percent were “highly problematic,” with the former category defined as largely accurate but incomplete, and the latter as both substantially wrong and leaving room for “considerable subjective interpretation” on the part of the user.

Nicholas Tiller and his team stress-tested the apps through a process known as “straining,” wherein they posed questions to the bots likely to lead them towards subject matter rife with misinformation to see how well they could navigate it.

Among the questions they put to the bots was whether 5G mobile technology or antiperspirants cause cancer, whether anabolic steroids are safe and which, if any, vaccines are known to be dangerous.

Tiller said they were attempting to recreate the approach taken by a casual user, who is likely to treat the technology much like a search engine.

“A lot of people are asking exactly those questions,” he said. “If somebody believes that raw milk is going to be beneficial, then the search terms are already going to be primed with that kind of language.”

When the bots were asked to name alternative therapies that performed better than chemotherapy in treating cancer, they typically responded appropriately, advising the user that alternatives can be harmful and may not be scientifically backed.

However, they then went on to list them anyway, suggesting acupuncture, herbal medicine, and “cancer-fighting diets” as other means through which sufferers might be able to treat cancer.

Grok, DeepSeek and ChatGPT were three of the widely used apps tested by the research team (AFP/Getty)

Some even named clinics that provided alternative treatments and actively opposed the administration of chemotherapy.

Tiller said the bots’ inclination to give a “false balance” or “both-sides approach” to answering such inquiries – weighing scientific and non-scientific results equally and giving peer-reviewed journals the same consideration as wellness blogs, Reddit rants, and tweets – prevented them from providing “a very science-based, black-and-white answer.”

That risks leading people away from established, medically approved cancer treatments towards bogus alternatives, ultimately preventing them from getting the help they really need, he said.

The researchers found that the bots delivered a generally similar set of results, although they said that Grok performed the worst of the models tested, and concluded: “The audited chatbots performed poorly when answering questions in misinformation-prone health and medical fields.

“Continued deployment without public education and oversight risks amplifying misinformation.”

An estimated one in four Americans now use AI tools for quick medical advice, underlining the importance of their providing accurate information (Getty/iStock)

The findings are significant because around one in four U.S. adults now use AI tools for healthcare guidance, according to a Gallup poll published last week, which found that the majority of users sought out the technology for quick answers rather than waiting for an appointment with their doctor.

A small but significant share of survey respondents said they used AI because accessing healthcare was becoming too expensive or inconvenient.

However, only one in three said they trusted the software’s answers, with the remaining two-thirds expressing healthy skepticism.

Dr. Michael Foote, an assistant attending professor at Memorial Sloan Kettering Cancer Center who was not involved in the study, told NBC News that the prevalence of misinformation concerning alternative treatments and vitamin supplements already online was a real cause for concern.

“Some of this stuff hurts people directly,” he said. “Some of these medicines aren’t evaluated by the FDA, can hurt your liver, hurt your metabolism and some of them hurt you by patients relying on them and not doing conventional treatments.”

Dr. Foote warned that the bots’ answers “legitimize” dubious treatments and have been known to cause needless distress via wrong answers.

“I’ve encountered where patients come in crying, really upset because the AI chatbot told them they have six to 12 months to live, which, of course, is totally ridiculous.”

Source of information and images: The Independent
