A recent study has uncovered a significant danger in the growing trend of using artificial intelligence for medical advice: AI chatbots are frequently providing “problematic” information regarding cancer treatments, often suggesting unproven alternatives to life-saving chemotherapy.
As more people turn to AI for quick healthcare answers, researchers warn that these tools may be legitimizing dangerous misinformation by giving scientific facts and internet myths equal weight.
The Study: Testing the Limits of AI Accuracy
Researchers from the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center conducted a rigorous “stress test” on several leading AI models, including OpenAI’s ChatGPT, Google’s Gemini, Meta’s AI, xAI’s Grok, and High-Flyer’s DeepSeek.
The team used a technique called “straining,” posing questions designed to trigger common misconceptions about topics such as the safety of 5G technology, anabolic steroids, and specific vaccines. The goal was to mimic how a casual user, often influenced by biased search terms, might interact with the bots.
The results, published in BMJ Open, were alarming:
– Nearly 50% of the responses regarding cancer treatments were rated as “problematic” by medical experts.
– 19.6% were classified as “highly problematic,” meaning they were substantially incorrect and open to dangerous subjective interpretation.
– 30% were “somewhat problematic,” providing information that was largely accurate but incomplete.
The “False Balance” Problem
One of the most critical findings involves how AI handles conflicting information. When asked for alternatives to chemotherapy, many bots initially provided the correct medical disclaimer—stating that alternative therapies may lack scientific backing.
However, the bots often failed to stop there. They proceeded to list acupuncture, herbal medicine, and “cancer-fighting diets” as viable options, and in some cases, even pointed users toward specific clinics that actively oppose conventional chemotherapy.
The researchers identified a phenomenon known as “false balance” as the root cause. Instead of providing a definitive, science-based answer, the bots often adopt a “both-sides” approach. By weighing peer-reviewed medical journals against wellness blogs, Reddit threads, and social media posts, the AI gives unverified claims the same authority as established medicine.
Why This Matters: The Rise of “AI First Aid”
This issue is not merely academic; it arrives at a time when AI is becoming a primary source of health information. According to a recent Gallup poll:
– 25% of U.S. adults now use AI tools for healthcare guidance.
– Many users choose AI because it is faster than waiting for a doctor’s appointment or because traditional healthcare has become too expensive or inconvenient.
– Despite this usage, only one in three users actually trusts the software’s answers.
The Real-World Consequences
Medical professionals warn that the harm caused by AI misinformation is twofold:
1. Direct Physical Harm: Unregulated supplements and “alternative” medicines can cause organ damage (such as liver failure) or metabolic issues.
2. Delayed Treatment: The greatest risk is that patients may forgo or delay conventional, life-saving treatments like chemotherapy in favor of unproven methods.
Furthermore, the emotional toll is significant. Dr. Michael Foote of Memorial Sloan Kettering Cancer Center noted that chatbots can cause “needless distress” by providing wildly inaccurate prognoses, such as telling a patient they have only months to live when there is no medical basis for such a claim.
Conclusion
While AI offers unprecedented convenience, its tendency to treat misinformation with the same weight as scientific fact poses a severe risk to patient safety. Without increased oversight and better public education, the deployment of these tools may inadvertently accelerate the spread of dangerous medical myths.