AI Can Also Lie to Cancer Patients. How Should Doctors Respond?
AI-powered chatbots, unlike physicians, are always available and at first glance offer valuable answers to the many questions weighing on cancer patients. They can, however, also lead patients into a labyrinth of misinformation. How can an oncologist guide a patient out of it without damaging the fragile trust between them?
An Appealing Source of Information…
Cancer patients often encounter misinformation. Recommendations for unproven, or even scientifically debunked, treatments can lead them to abandon conventional therapy. According to U.S. data, up to 83% of cancer patients are exposed to misinformation, and those who act on it face a mortality rate that is, depending on cancer type, 2.5 to 6 times higher than that of patients who follow their oncologists' recommendations.
The internet is a common place where oncology patients encounter misinformation, and AI-based services are becoming increasingly popular there as a quick source of education. These chatbots simulate a conversation with an expert who seems to draw on the latest data and credible patient testimonials and who offers immediate consultation at any hour of the day or night. They can be particularly appealing to patients who do not feel comfortable approaching their doctor with every treatment-related question.
…But Also a Fountain of Dangerous Nonsense
AI chatbots also have a darker side: beneath their sophisticated language may lie a limited grasp of the medical subject matter, and their simulated empathy can manipulate patients' emotions. As a result, patients may place more trust in the responses than is warranted. Trust is crucial in oncology care, where patients make life-defining decisions based on detailed conversations with real experts.
The more a chatbot mimics a professional or a sympathetic survivor, the greater the risk of manipulation. The training data for AI models is not professionally reviewed, and there is no guarantee that the AI updates its information frequently or considers the latest scientific findings.
The dangers of AI standing in for psychological or psychiatric care were demonstrated in a recent Stanford University study. Researchers simulated conversations between people with mental health issues and ChatGPT. In about 20% of cases, the chatbot responded inappropriately: it reinforced delusions and failed to refer suicidal individuals to professional help. In one shocking case, it offered a list of the tallest nearby bridges instead of a crisis hotline.
How to Effectively Counter Misinformation?
Used cautiously and with outputs verified by trustworthy sources, AI can be a helpful educational tool for oncology patients. It can answer questions about cancer biology, treatment standards, or logistics that physicians may not have time to address during appointments.
Nevertheless, the best source of accurate medical information remains the treating physician. It may therefore be useful to complement this overview of how AI works with expert strategies for debunking misinformation, which were mapped in a recently published study.
The study analyzed real physicians' responses in simulated Zoom consultations with actors playing patients. Through qualitative analysis, the authors derived a communication model that highlights the key parts of the conversation that strengthen trust between doctor and patient and can serve as practical guidance in everyday care.
Acknowledge the Patient’s Effort and Bring Them Back to Facts
The initial response to a patient's misinformation-related question should express understanding of why they brought it up. Physicians can admit the limits of their own knowledge and, for example, offer to thoroughly review the source the patient refers to. It’s important to take patients’ concerns seriously and appreciate that they reached out with their questions.
The conversation should include clarification on which parts of the misinformation may be accurate and why the overall message is misleading. Physicians should explain that the statement might not apply to the patient’s specific diagnosis or personal context.
When informing patients that a piece of information is incorrect, oncologists can phrase the message more explicitly or more diplomatically depending on the patient's emotional state. They should also explain how treatment decisions are made and how scientific evidence is continually updated.
Finally, patients should be advised on how to responsibly use online resources in the future—how to identify trustworthy sources and verify information.
Physicians should avoid criticizing patients for seeking or validating information from other sources. Patient education is crucial for shared decision-making, yet navigating online sources can be challenging even for seasoned professionals.
Editorial Team, Medscope.pr
Sources:
1. Mullis MD et al. Clinician-patient communication about cancer treatment misinformation: The Misinformation Response Model. PEC Innov 2024;5:100319. doi: 10.1016/j.pecinn.2024.100319.
2. Moore J et al. Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. In: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT '25). doi: 10.1145/3715275.3732039.
3. McLean AL, Hristidis V. Evidence-Based Analysis of AI Chatbots in Oncology Patient Education: Implications for Trust, Perceived Realness, and Misinformation Management. J Cancer Educ 2025 Feb 18. doi: 10.1007/s13187-025-02592-4.