A recent study has found that medical advice provided by artificial intelligence-powered chatbots is no more reliable than information obtained through traditional search engines, despite these tools’ strong performance in medical licensing examinations.
Rebecca Payne, a co-author of the study from the University of Oxford, noted that “AI is not yet ready to take on the role of a doctor,” despite widespread claims about its capabilities. She cautioned patients against relying on chatbots to interpret their symptoms, warning that doing so “can be dangerous, as it may lead to misdiagnosis and a failure to recognise the need for urgent medical care.”
The study, led by British researchers, involved around 1,300 participants in the United Kingdom. Each participant was asked to consult one of three AI chatbots about a hypothetical medical scenario, while a separate group used conventional online search engines to seek guidance.
Results showed that users of AI chatbots correctly identified their medical condition in roughly one-third of cases, and only 45 per cent were able to determine the appropriate course of action based on the advice provided.
The researchers found that the overall accuracy of diagnoses and recommendations generated by AI chatbots was no better than that achieved by users of traditional search engines.
The findings appear to contrast sharply with the impressive results achieved by AI-powered chatbots in medical licensing exams. Researchers attributed this discrepancy to communication gaps. Participants often failed to provide complete or precise information to the chatbots. In some cases, they struggled to understand the options presented to them or simply disregarded the advice.
The study also highlighted the growing reliance on AI for health information. Researchers noted that one in six adults in the United States consults AI-powered chatbots for medical information at least once a month — a figure expected to rise as the technology becomes more widespread.