
A new tool has been created to tackle potentially life-threatening diet and vaccine misinformation on social media and in AI search summaries.
Health misinformation is a major public health threat, according to the World Health Organization (WHO). From restrictive diets and extreme fasting to the unsafe use of dietary supplements, misinformation can have disastrous consequences, health authorities have warned.
Herbal and dietary supplements alone have been estimated to account for 20 per cent of drug-induced liver injury cases and approximately 23,000 emergency department visits in the US each year, studies have suggested.
Researchers at University College London (UCL) have developed a tool that identifies potentially harmful misinformation online.
Unlike existing tools, which offer binary judgements of whether content is “true” or “false”, this first-of-its-kind tool addresses misinformation that is not overtly false but still has the potential to dangerously mislead, particularly among vulnerable groups.
“When it comes to diet and nutrition, misinformation often operates through selective framing that masks potential health risks. Harmful misleading content tends to fly under fact-checkers’ radars and escape meaningful oversight until high-profile cases make the headlines,” said lead author and tool developer Dr Alex Ruani of UCL.
One example is a case in 2025 of cholesterol-induced skin lesions diagnosed in a man who had adopted a carnivore diet, a trend that researchers say is disproportionately amplified by social media algorithms, particularly within “manosphere” communities.
Another example is a person who was hospitalised weeks after following incorrect AI-generated advice to replace sodium chloride (salt) with sodium bromide, a substance with no dietary role that is toxic when ingested regularly over time.
Online misinformation has also been linked to decisions to abandon life-saving cancer treatment in favour of unproven dietary alternatives.
The tool, called the Diet-Nutrition Misinformation Risk Assessment Tool, or Diet-MisRAT, analyses content and estimates how likely it is to mislead readers. It then ranks material as green, amber or red according to a weighted misinformation risk score.
For example, when assessing content containing a claim such as “it is safer to give your child high-dose vitamin A than the MMR vaccine”, the tool places it in the critical risk tier because it presents a false safety framing.
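To illustrate the idea of a weighted risk score feeding a traffic-light tier, here is a minimal sketch in Python. The study does not publish Diet-MisRAT’s actual criteria, weights or thresholds, so every criterion name, weight and cut-off below is a hypothetical stand-in, not the researchers’ method.

```python
# Illustrative sketch only: the criteria, weights and thresholds are
# assumptions, loosely based on patterns named in the article (false
# safety framing, selective framing that masks health risks, etc.).
WEIGHTS = {
    "false_safety_framing": 0.4,     # e.g. "X is safer than a vaccine"
    "masked_health_risk": 0.3,       # omits known harms of the advice
    "unproven_alternative": 0.2,     # replaces evidence-based care
    "targets_vulnerable_group": 0.1, # e.g. children, patients
}

def risk_score(flags: dict[str, bool]) -> float:
    """Weighted sum over whichever criteria an assessor has flagged."""
    return sum(WEIGHTS[name] for name, present in flags.items() if present)

def tier(score: float) -> str:
    """Map a 0-1 score to traffic-light tiers; cut-offs are assumed."""
    if score >= 0.7:
        return "red (critical)"
    if score >= 0.4:
        return "amber"
    return "green"

# The high-dose vitamin A vs MMR claim might flag false safety framing,
# a masked health risk and a vulnerable group (children) under this sketch.
flags = {
    "false_safety_framing": True,
    "masked_health_risk": True,
    "unproven_alternative": False,
    "targets_vulnerable_group": True,
}
score = risk_score(flags)
print(f"score={score:.2f} -> {tier(score)}")  # score=0.80 -> red (critical)
```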
Researchers hope it will help policymakers, digital platforms and regulators implement safeguards.
“When AI chatbots speak confidently, users may assume their advice is safe. If we can properly measure how misleading a piece of advice is and how much harm it may pose, we can build stronger safeguards into models and AI agents before deployment rather than reacting after harm occurs,” Dr Ruani said.
The study, published in the journal Scientific Reports, tested the tool’s results against the judgements of nearly 60 specialists in dietetics, nutrition and public health and found it to be reliable.
Co-author Professor Michael Reiss at UCL said: “By spelling out the typical patterns that distort diet, nutrition or supplement information, the tool’s risk assessment criteria can be taught and applied in education and professional training. This will help learners understand not just whether something is wrong, but how and why it can skew judgement, equipping them to recognise and challenge it.”