ChatGPT, a sophisticated chatbot driven by artificial intelligence (AI) technology, has been increasingly used in healthcare contexts, one of which is assisting patients in self-diagnosing before seeking medical help. Although it seems very useful at first glance, AI may cause the patient more harm than good if it is not accurate in its diagnosis and recommendations. A research team from Japan and the US recently found that the precision of ChatGPT’s diagnoses and the degree to which it recommends medical consultation require further development.
In a study published in the Journal of Medical Internet Research, a multi-institutional research team led by Tokyo Medical and Dental University (TMDU) evaluated the accuracy (percentage of correct responses) and precision of ChatGPT’s responses regarding five common orthopaedic diseases, including carpal tunnel syndrome, cervical myelopathy, and hip osteoarthritis. The team chose these conditions because orthopaedic complaints are very common in clinical practice, accounting for up to 26% of the reasons patients seek care.
Over a five-day period, each of the study’s researchers submitted the same set of questions to ChatGPT. Reproducibility between days and between researchers was then calculated, and the strength of each recommendation that the patient seek medical attention was evaluated.
“We found that the accuracy and reproducibility of ChatGPT’s diagnoses were not consistent across the five conditions. ChatGPT’s diagnosis was 100% accurate for carpal tunnel syndrome, but only 4% accurate for cervical myelopathy,” said lead author Tomoyuki Kuroiwa. Additionally, reproducibility between days and between researchers varied from “poor” to “almost perfect” across the five conditions, even though the researchers entered the same questions every time.
ChatGPT was also inconsistent in recommending medical consultation. Although almost 80% of ChatGPT’s answers recommended medical consultation, only 12.8% included a strong recommendation as defined by the study’s standards. “Without direct language, it is possible that the patient is left confused after self-diagnosis, or worse, experiences harm from a misdiagnosis,” said Kuroiwa.
This is the first study to evaluate the reproducibility of ChatGPT’s self-diagnoses and the strength of its recommendations to seek medical consultation. “In its current form, ChatGPT is inconsistent in both accuracy and precision when helping patients diagnose their disease,” explained senior author Koji Fujita. “Given the risk of error and potential harm from misdiagnosis, it is important for any diagnostic tool to include clear language alerting patients to seek expert medical opinions for confirmation of a disease.”
The researchers also note some limitations of the study, including the use of questions simulated by the research team rather than patient-derived questions, the focus on only five orthopaedic diseases, and the use of only ChatGPT. While it is still too early to use AI for self-diagnosis, training ChatGPT on diseases of interest could change this. Future studies can help shed light on the role of AI as a diagnostic tool.