London Medical Laboratory says the application of artificial intelligence (AI) in healthcare is growing exponentially. Will you soon be telling your symptoms to a machine, and do the benefits outweigh the risks?
European lawmakers have just voted to push ahead with the EU’s controversial AI Act, covering everything from automated medical diagnoses to ChatGPT.
But does the increasing role of AI in healthcare pose a threat? London Medical Laboratory has been analysing the latest data. It reveals five reasons to be fearful and five to be hopeful about AI’s rapid growth.
Leading testing expert Dr Avinash Hari Narayanan (MBChB), Clinical Lead at London Medical Laboratory, says: “The risks and benefits of the rapid growth of AI are hotly debated, and there are few areas where AI will play such a significant role, for good or ill, than in healthcare. Here are our five key concerns and hopes for AI’s future role”.
Five reasons to be fearful
Poorly designed systems can misdiagnose. AI is already widely used globally to predict and diagnose several diseases, especially those diagnosed through imaging.
However, should an AI system recommend the wrong medication, fail to identify a tumour on a scan, or allocate a hospital bed based on the wrong prediction about which patient would benefit more, patients could be harmed. Once AI becomes widespread, an underlying flaw in a major system could injure thousands of patients, rather than the handful affected by a single doctor’s mistake.
AI algorithms can spread false health information, such as supposed vaccine side effects, or target vulnerable populations with fraud. Some machine writing programmes have already been used to write highly convincing health phishing emails.
AI depends on storing and transmitting large quantities of sensitive patient data, which means systems could become targets for cybercriminals. Hackers could attack vulnerabilities along the AI data pipeline and use health and contact information to obtain drugs, or even for blackmail.
Reflecting cultural bias
Software trained on data sets that reflect cultural biases will incorporate those blind spots. For example, communities with less recognition of certain conditions, due to poorer health information, stigmatisation and cultural differences, may present to healthcare providers less often. To an AI, that pattern would suggest these groups are less likely to have the condition than other groups, when in reality they may simply be under-diagnosed.
A new paper by leading international health professionals claims AI ‘could pose an existential threat to humanity itself’. As one example, it identifies lethal autonomous weapons, which, claim the authors, ‘can kill en masse without human supervision’.
Even more worryingly, it suggests that, in the future, AI ‘could present a threat to humans, and possibly an existential threat, by intentionally or unintentionally causing harm directly or indirectly, by attacking or subjugating humans or by disrupting the systems or using up resources we depend on.’
Five reasons to be cheerful
AI methods, including Deep Learning algorithms, are already used globally in predicting and diagnosing several diseases, especially those whose diagnosis is based on imaging. IBM says AI tools are now being used to analyse CT scans, X-rays, MRIs and other images for lesions that a human radiologist might miss.
These tools are also used in medical research, greatly accelerating how data and images are processed and yielding powerful new ways to understand diseases and create new treatments.
Speeding up appointments
AI can automate some of the admin tasks that take up much of medical practices’ time today, including booking appointments, updating records and queuing up relevant patient information before consultations. This makes healthcare more accessible for patients and more efficient for healthcare providers.
AI algorithms can help healthcare providers by supplying real-time data and recommendations. These systems can monitor patients’ vital signs, such as heart rate and blood pressure, and alert doctors if a sudden change occurs. For chronic conditions, AI algorithms can monitor patients’ health data over time and recommend lifestyle changes.
Health tech devices such as the Apple Watch and Fitbit allow people to monitor their own health while also providing healthcare professionals with essential data.
By analysing big data, AI can help identify new disease outbreaks early. For example, an AI system developed by Canada-based BlueDot detected unusual pneumonia cases around a market in Wuhan, China, more than a week before the WHO issued a public notice about the emerging COVID-19 virus.
On an individual level, an AI system might be able to identify a person with Parkinson’s disease from the trembling of their computer mouse, even before the person themselves knew.
Finally, AI can improve access to care. For example, AI-powered telemedicine services can provide remote consultations and diagnoses, making it easier for patients to access care without travelling. Patients often have questions outside typical surgery hours. AI can help provide around-the-clock support through chatbots that can answer basic questions and give patients resources when their practice isn’t open.
Dr Hari Narayanan concludes: “The AI healthcare market is already a billion-dollar industry. In the future, it could also pave the way for the growth of precision medicine. AI could study patients’ medical history, preferences and personal needs and integrate this with their genetic profiles.
“That’s not as far in the future as you may imagine. The first fruits of DNA-based investigations are already here. London Medical Laboratory’s new DNA Genotype Profile Test is a simple, at-home saliva test kit. This once-in-a-lifetime test gives over 300 reports, providing insights into nutrition, traits, fitness and health from our genetic blueprint. The saliva test can be taken at home through the post or at one of the many drop-in clinics that offer these tests across London and nationwide in over 95 selected pharmacies and health stores.”