Experts sound alarm as report reveals more than 40 million Americans turn to ChatGPT for medical advice

One in eight Americans turns to ChatGPT every day for medical advice, leaving them prone to potentially dangerous misinformation, experts have warned.
A new report released by OpenAI, the AI research giant that created ChatGPT, revealed 40 million Americans use the service every day to ask about their symptoms or explore new treatments.
And one in four Americans feed their medical queries into ChatGPT once a week, while one in 20 messages sent to the service worldwide is related to healthcare.
Additionally, users ask up to 2 million questions every week about health insurance, including queries about comparing options, handling claims and billing.
OpenAI noted people in rural areas that have limited access to healthcare facilities are more likely to use ChatGPT, with 600,000 messages coming out of these areas each week.
And about seven in 10 health-related chat messages are sent outside of normal clinic hours, highlighting a need for health guidance in the evenings and on weekends.
The report also found two in three American doctors have used ChatGPT in at least one case, while nearly half of nurses use AI weekly.
Doctors who were not involved in the report told the Daily Mail that while ChatGPT has become more advanced and can help break down complex medical topics, it should not be a substitute for regular care.
A new report found 40 million Americans use ChatGPT every day for health information and advice (stock image)
Dr Anil Shah, a facial plastic surgeon at Shah Facial Plastics in Chicago, told the Daily Mail: ‘Used responsibly, AI has the potential to support patient education, improve visualization, and create more informed consultations.
‘The problem is, we’re just not there yet.’
The findings come as OpenAI faces multiple lawsuits from people who claim they or their loved ones were harmed after using the technology.
In California, 19-year-old college student Sam Nelson died of an overdose after asking ChatGPT for advice on taking drugs, his mother claims.
SF Gate reported that, based on chat logs it reviewed, the chatbot would initially refuse to help, but when Nelson asked certain questions or phrased his prompts a certain way, it could be manipulated into providing answers.
In another case, in April 2025, 16-year-old Adam Raine used ChatGPT to explore methods of ending his life, including what materials would be best for creating a noose. He later died by suicide.
Raine’s parents are involved in an ongoing lawsuit and seek ‘both damages for their son’s death and injunctive relief to prevent anything like this from ever happening again.’
The new report, published by OpenAI this week, found that three in five Americans view the healthcare system as ‘broken,’ citing high costs, concerns about quality of care and a shortage of nurses and other vital staff.
Doctors warned that while ChatGPT can help break down complex medical topics, it is not a substitute for real medical care (stock image)
Dr Katherine Eisenberg, pictured above, said that while AI can make complex medical terms more accessible, she would ‘use it as a brainstorming tool’ instead of relying solely on it
In a ranking of the share of healthcare messages from hospital deserts, or areas that are at least 30 minutes from a hospital, Wyoming had the most messages at four percent, followed by Oregon and Montana with three percent.
The team cited a December 2025 survey of 1,042 adults conducted with the AI survey tool Knit. It found 55 percent of those adults use AI to check or explore their symptoms, while 52 percent use it to ask questions at any time of day.
Additionally, 48 percent turned to ChatGPT to understand medical terms or instructions, while 44 percent used it to learn about treatment options.
The OpenAI team cited several case studies in the report. Ayrin Santoso, from San Francisco, told the company she used ChatGPT to help coordinate care for her mother in Indonesia after she suffered sudden vision loss.
And Dr Margie Albers, a family physician in rural Montana, said she uses Oracle Clinical Assist, which relies on OpenAI models, to help take notes and save time on clerical work.
Samantha Marxen, a licensed clinical alcohol and drug counselor and clinical director at Cliffside Recovery in New Jersey, told the Daily Mail: ‘One of the services that ChatGPT can provide is to make medical language clearer that is sometimes difficult to decipher or even overwhelming.’
Dr Melissa Perry, Dean of George Mason University’s College of Public Health who has studied AI usage in the medical setting, told the Daily Mail: ‘When used appropriately, AI can improve health literacy and support more informed conversations with clinicians.’
Ayrin Santoso (left), from San Francisco, told the company she used ChatGPT to help coordinate care for her mother (right) in Indonesia after she suffered sudden vision loss
Dr Margie Albers (pictured here), a family physician in rural Montana, said she uses Oracle Clinical Assist, which relies on OpenAI models, to help take notes and save time on clerical work
However, Marxen warned that the ‘main problem is misdiagnosis.’
‘The AI could give generic information that does not fit one’s particular case, thus leading the person to misjudging the severity of the symptom,’ she said.
On the flip side, AI could make a user think they are experiencing ‘the worst-case scenario,’ Marxen said.
Dr Katherine Eisenberg, senior medical director of Dyna AI, told the Daily Mail: ‘I do think ChatGPT opens up access to more types of medical information to patients, but people have to understand that ChatGPT is trying to serve every need for every user and is not specifically built or optimized for medicine.
‘I would suggest treating it as a brainstorming tool that is not a definitive opinion.’
She added: ‘I actually don’t think people need to stop using ChatGPT for medical information entirely. I think we’re beyond that point, with the amount of use it’s getting today.’
Instead, she suggested double-checking all information against a reliable source, such as an academic center, and avoiding entering sensitive personal information. ‘Most importantly, patients should feel comfortable telling their care team where information came from, so it can be discussed openly and put in context,’ she said.



