FEATURE

Artificial intelligence slips cautiously into the clinic


Clinicians are unlikely to have androids—human-like robots seen in science-fiction movies—as assistants any time soon. But artificial intelligence (AI)—intelligent computer programs, also known as "knowledge-based" or "expert" systems—is making its way into the clinic in more subtle ways, says Enrico Coiera (Centre for Health Informatics, University of New South Wales, Australia). "It's already here, but not 'in your face'. And that's how it should be—hidden and embedded in the world we live in."

Automated arrhythmia detectors that are "unremarkable to physicians, who just see a monitor attached to the patient" are just one example of AI already in use in hospitals, notes Coiera (www.coiera.com). Another is the laboratory report indicating that a certain value is out of the normal range. Such reports are often computer generated and then checked by a doctor, "so again, the AI part is hidden".

But such applications are only the beginning, says Coiera, who argues that information overload has made it "impossible" for clinicians to stay on top of the literature, particularly for drug prescribing. "I worry when I hear a [family doctor] saying, 'I know the dose of amoxicillin, why do I want a computer to tell it to me?' But guidelines on appropriateness change regularly, thousands of papers come out daily, and so the idea that a doctor always knows the right thing to do is a myth."

Evidence-based decision-support tools—hardly the stuff of science fiction, but certainly useful—will be the "next wave" of AI in medicine, emphasise experts. A system under construction at the University of New South Wales that provides a "biomedical librarian at your beck and call" will soon become commonplace, predicts Coiera. "You'll be able to say, 'I have a patient with this disease; what's the current recommendation?' And the 'librarian' will search the best journals, as you've defined them, and come back with the answer."

A decision-support tool for early cancer detection, the Early Referrals Application (ERA), has been developed by John Fox and colleagues at the Imperial Cancer Research Fund (ICRF) in London, UK. "Much of the delay in detection of early-stage cancer is thought to occur at the primary care level, largely because of inconsistent application of referral criteria", says Fox. Although the National Health Service (NHS) has circulated referral guidelines, "such knowledge may be more effectively delivered at the point of care using advanced technology".

Using AI to improve care

[Image not shown: rights were not granted to include this image in electronic media; please refer to the printed journal. Credit: Science Photo Library]

A 6-month, NHS-funded pilot trial of ERA is underway at Leicester and Southampton University Hospitals to assess ERA's impact on referral speed and quality (see videos at www.openclinical.org; demonstration program available at www.infermed.com/wap/era). InferMed, a technology company that collaborates with ICRF, also has an AI program to assist in HIV drug prescribing and is developing a cancer pain tool "that uses information about cancer type to infer possible sites of spread, to help explain why pain may be rising in a certain part of the body", says clinical director Robert Dunlop.

AI systems can also assist in predicting a patient's clinical course and in preventive care, adds Rich Caruana (Cornell University, Ithaca, NY, USA), who helped develop an AI system to predict caesarean delivery. The system accurately detected risk-factor variables in 676 (70%) nulliparous and 419 (47·6%) parous caesarean deliveries from an historical cohort of 22 157 women who delivered live singletons (Am J Obstet Gynecol 2000; 183: 1198–1206). A refined version of the system may one day be used for educating doctors and women early in pregnancy, he says. "They could look 5 months into the pregnancy, and if they see risk factors, then the physician can say, 'if we do the following, we can lower this risk and possibly improve the outcome'."

Barry Silverman and colleagues at the University of Pennsylvania (Philadelphia, PA, USA) also aim to shift health behaviour, but with a computer game, Heart Sense, targeted to patients at high risk of having a myocardial infarction (see www.seas.upenn.edu/~barryg/heart). "We want to help them recognise heart-attack symptoms and go to the hospital within an hour, rather than delaying 2 to 12 hours, as is now typical", says Silverman. "Users get involved with people who have poor health behaviours, and by helping them, learn about their own health behaviours and what to do differently", he explains. A trial using a proof-of-concept prototype showed that after playing Heart Sense, patients had better understanding and memory of symptoms and management; this led to a positive review by the Scientific Panel of the American Heart Association (Health Care Management Science 2001; 4: 213–28).

But despite such successes, "we in AI are still trying to build the trust of the medical profession", emphasises Fox. "Many AI applications have been built and tested, but few have been deployed, because before there can be a change in practice, you have to show it's worth doing and safe." The very fact that AI applications exhibit an understanding of concepts and relationships "makes people suspicious of them. Although we work hard to ensure that they are based on the best possible science, they give the appearance of being based on intuition."

To overcome such misperceptions, Fox points to the aerospace and medical-equipment industries, which "engineer safety" into their applications. The same can be done with AI applications, he argues. Also, "AI itself can provide additional security. By using computational techniques that teach programs to reason, deduce, analyse, assess, plan, schedule, and control actions, you can build systems that know about safety and hazards and how to deal with them. You put in general rules that the program can apply in specific situations."

What of the notion that clinicians will become overly reliant on such technology? "That's nonsense", asserts Coiera. "If we look at the current focus on quality and safety in medicine and want to make the best possible decisions despite information overload, can we really survive without computer support—and think that somehow we'll magically know what to do?"

Marilynn Larkin

THE LANCET • Vol 358 • September 15, 2001