The prospect of flawless diagnosis at a fraction of the cost, offered by Artificial Intelligence (AI), would make many politicians salivate as they imagine their approval ratings soaring. However, those working in the healthcare sector might not share the same level of excitement. AI has consistently been touted as a harbinger of redundancy that will eventually leave everyone – from the highly skilled to the low – without a job. As Maksim Richards explains, however, this is unlikely to happen soon, if ever.
The AIs in use today are “narrow” AIs – programs that specialise in a single job. Take a haematologist, for example: they study the blood of thousands of patients, trying to pick out abnormalities that might indicate disease. AI is increasingly used in haematology, but it can still do relatively little. An AI can identify an abnormal blood sample but not the exact disease, whereas a haematologist can look at a patient’s blood and not only identify the illness but also understand its effects and explore what the next step in treating the patient might be. This is a good example of AI-assisted diagnosis: the more rapid AI flags patients who need further investigation by a trained human professional. This is the form likely to become more prevalent, given its immense promise for reducing human error and, ultimately, unnecessary deaths. Many are quietly comfortable in the knowledge that these AIs do not threaten the jobs of highly skilled healthcare workers but will instead enhance their performance.
While AI may not pose a risk to doctors’ jobs just yet, there are considerable risks associated with its implementation as it stands. The fallibility of doctors is rarely discussed, likely because doctors’ mistakes cost lives. Computers, on the other hand, are considered so reliable that even nuclear missiles are entrusted to them. Our trust in computers is so unwavering that we run the risk of over-relying on the AI models we develop. In a field as critical as medicine, this could have dramatic ramifications. The extensive testing these models undergo before they are ever used in real settings, and the constant supervision they are under once placed in hospitals, should set your mind at ease. But as we implement more of them, will we begin to devalue a doctor’s opinion?
Why is overreliance a concern when we know that machines are so reliable? One important reason is that an AI may not transfer to other countries. Imagine an AI, developed in the United States, that specialises in diagnosing pneumonia from chest x-rays. This AI might achieve an accuracy of 99% when supervised by a radiologist – an incredible increase in the speed and validity of diagnosis, ensuring prompt and proper treatment. However, if this AI were implemented in a less developed country, it could lead to worrying consequences. The model would only have seen x-rays from the US, where the technology is sufficiently advanced to produce clear images free of artefacts that might be mistaken for disease. When these models are created in universities and hospitals, the data they learn from come from the limited section of society those institutions have access to. With 7.8 billion people on the planet, it is crucial that representative samples are taken when developing AI models, because the human cost of implementing an ineffective AI could be severe.
In the long run, AI-assisted diagnosis is much more cost-effective than traditional means: not only does it speed the whole process up, it allows healthcare workers to focus on difficult cases and research without being bogged down by the monotony of analysing healthy patient samples. Yet the initial costs of an AI can be staggering, thanks not only to the development of the model but also to the infrastructure required to run it, along with the constant maintenance and supervision needed to avoid deadly mistakes. Malaria, for instance, is diagnosed by haematologists and, if promptly treated, allows a full recovery. But malaria disproportionately burdens tropical countries that typically lack the resources to develop and implement an AI. This common pattern, wherein the countries most in need of healthcare technology cannot afford it, extends to AI and will only worsen global health inequality. The rich and healthy get healthier and the poor and sick get sicker…
The promise of AI has fallen short so far – we do not have robots barking orders at unwitting doctors while their patients live for centuries, nor are the streets littered with unemployed healthcare workers whose interview rivals were variations on HAL 9000 from the film 2001: A Space Odyssey. What we have seen are astounding results where AIs have been implemented alongside humans. If this trend continues, we could bring healthcare to the forefront of technological research and achieve exceptional patient outcomes. Healthcare and technology professionals have issued stark warnings that a close eye must be kept on these systems’ progress, so that we do not inadvertently raise mortality or leave poorer countries in the dust.
Maksim Richards is a Medicine student at St John’s College, Oxford