Clinical decision support (CDS) software
Clinical decision support (CDS) software covers apps that provide information or advice based on specific input data about the patient, or that provide easy access to clinical guidelines or protocols. Systems using complex algorithms and other forms of artificial intelligence (AI) to provide specific "advice and information" to patients can easily morph into full diagnostic systems (i.e. medical devices), and this is where they can run into problems with the medical establishment. The issues were well illustrated in a recent BBC documentary ('Diagnosis on demand? The computer will see you now').
The debate was summed up perfectly by one of the contributors, who spoke of a true clash of cultures: "… on the one hand you have the technology world, Silicon Valley, move fast, break stuff; then you have the diligent, evidence-based, do-no-harm healthcare system". In other words, extreme risk-taking vs ultra-conservative/risk-averse - a recipe for trouble. It is interesting that the language being used by opponents of AI (or, more specifically, those opposed to how current leading IT companies like Babylon Health are introducing their systems) is exactly the same as that used by opponents of the expansion in private health care 20 years ago:

"undermining the NHS"
"cherry picking"
"creaming off the easy patients (the so-called "worried well"), leaving the NHS with the expensive, difficult-to-treat patients".

It should be added that this type of comment was largely reserved for Babylon's "GP at hand" app/service, where patients can actually transfer from their existing GP practice (in inner London only at present, as part of a pilot scheme) to a new practice (in south west London) that is working in partnership with Babylon Health. The 'rub', of course, is that this process removes the funding from the existing GP practice, which is paid for each patient on its register. This new development took London GPs by surprise in 2017, and the new practice already has over 30,000 patients on its books - mostly tech-savvy young adults who are used to using their smartphones for pretty much everything else. So, why not health care?

The UK NHS has recently published a position paper ('Accelerating the use of AI in health and care') based on a survey of private health IT companies currently developing AI systems for the healthcare sector. The UK government obviously sees AI software as a potential saviour in relieving the ever-growing pressure on NHS professionals, but several influential professional bodies (e.g. the Royal College of General Practitioners), the main doctors' union (the British Medical Association) and two of the UK's leading medical journals (the British Medical Journal and The Lancet) have all expressed serious concerns about the pace at which this largely unproven technology is being introduced.

Eventually, of course, things will settle down as these two opposing cultures come to understand each other a bit better. IT companies will avoid making dubious claims and statements that are almost designed to anger the medical profession (reference), and doctors will slowly come to accept a properly developed AI system as a useful ally rather than a threat.

Diagnostic radiology is a prime target for such AI systems, and the recent UK survey (see link above) showed that about 25% of all 'diagnostic' apps were in this area. However, it also showed that, of all the systems surveyed, only 18% of the companies had taken their product through a regulatory process; 23% claimed to be "in the process of securing approval" and 41% claimed that EU regulation was "not applicable".
The potential widespread use of such systems in clinical radiology has prompted the Royal College of Radiologists (RCR) to issue a position statement on the subject. This supports a previous publication in which the RCR expressed clinical governance concerns and claimed that the methodologies used for writing safety-critical software are rarely applied in the development of AI systems.

The UK MHRA now recognises that medical software in general, and AI software in particular, requires a different regulatory approach from the one currently operating in the UK under the old 2002 regulations. In September 2021 it published an outline of its plans for dealing with software and AI in the context of medical device safety.

As with other types of mobile health app, the regulation of CDS software in the US is also in a state of flux following the publication (in December 2016) of the 21st Century Cures Act, which amended the definition of a medical device. The new statutory definition of CDS software is in three parts (all three must apply for the software to fall outside the device definition), but the last part is the most important, as it relates to independent checking of the CDS software's output:

“… and (iii) enabling such health care professional to independently review the basis for such recommendations that such software presents, so that it is not the intent that such health care professional rely primarily on any of such recommendations to make a clinical diagnosis or treatment decision regarding an individual patient”.
In other words, if CDS software is being heavily relied upon to make clinical management decisions about an individual patient, then that software is considered intrinsically more risky and may therefore be deemed to be a medical device.

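To make criterion (iii) concrete, here is a minimal sketch (in Python) of how a CDS app might expose the basis for each recommendation so that a clinician can review it independently. All class, field and function names are hypothetical, invented purely for illustration; they are not drawn from any real CDS product, from the Cures Act or from FDA guidance.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvidenceItem:
    """One element of the basis for a recommendation (illustrative structure)."""
    source: str      # e.g. a guideline section or published study (placeholder)
    rationale: str   # plain-language note on how the source was applied

@dataclass
class Recommendation:
    """A CDS output that carries its own basis for independent clinician review."""
    advice: str
    evidence: List[EvidenceItem] = field(default_factory=list)

    def basis_report(self) -> str:
        """Render the advice together with its full basis, so it never stands alone."""
        lines = [f"Recommendation: {self.advice}", "Basis:"]
        for item in self.evidence:
            lines.append(f"  - {item.source}: {item.rationale}")
        return "\n".join(lines)

# Illustrative usage - the clinician sees the advice and how it was derived.
rec = Recommendation(
    advice="Consider statin therapy; recheck lipid profile in 3 months.",
    evidence=[
        EvidenceItem(
            source="Hypothetical lipid-management guideline, section 4",
            rationale="Estimated 10-year cardiovascular risk exceeded the guideline threshold.",
        )
    ],
)
print(rec.basis_report())
```

Structured this way, the software presents its advice together with the sources and reasoning behind it, supporting the argument that the clinician is not relying "primarily" on the recommendation itself; a black-box system that emitted only the advice line would find criterion (iii) much harder to satisfy.
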
In the EU there is no separate SaMD category for mobile apps or CDS software, so the software either meets the MDR17 definition of a medical device or it doesn't. However, as is the case with traditional medical devices, certain mobile medical apps can pose potential risks that are unique to the characteristics of the platform on which the software runs. For example, the interpretation of radiological images on a mobile device could be adversely affected by the smaller screen size, lower contrast ratio and uncontrolled ambient lighting. The FDA has stated that it intends to take these platform-specific risks into account when deciding the appropriate degree of regulatory oversight for such devices.

Example:
The GRACE 2.0 ACS Risk Calculator was developed in the US. It qualifies as a medical device under EU regulations, so the developers had to go through the full CE marking process in order for it to be marketed and sold within the EU. The manufacturer states in the accompanying documentation that "…this risk scoring tool is intended for use by clinicians, in conjunction with individual patient assessment. We assume no responsibility for how you use or interpret the GRACE 2.0 ACS Risk Calculator app", but such 'blanket disclaimers' do not allow the manufacturer to avoid liability for harm or injury (under consumer protection law) in many jurisdictions. In other words, the best defence (mitigation) is to minimise bugs by employing best-practice methods and standards - the so-called 'development risks defence'.
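As a minimal sketch of the kind of defensive, best-practice coding such a defence relies on, the Python fragment below range-checks its inputs before computing a score. Everything in it is a placeholder for illustration: the variable names, plausibility ranges and weights are invented and bear no relation to the actual GRACE 2.0 model.

```python
def risk_score(age_years: float, heart_rate_bpm: float, systolic_bp_mmhg: float) -> float:
    """Toy risk score with invented weights - NOT the GRACE 2.0 model.

    Inputs are range-checked so that unit mix-ups or data-entry errors
    fail loudly instead of silently yielding a plausible-looking score.
    """
    # Plausibility ranges are illustrative assumptions, not clinical limits.
    checks = [
        ("age_years", age_years, 18.0, 110.0),
        ("heart_rate_bpm", heart_rate_bpm, 20.0, 300.0),
        ("systolic_bp_mmhg", systolic_bp_mmhg, 50.0, 300.0),
    ]
    for name, value, lo, hi in checks:
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} is outside the plausible range [{lo}, {hi}]")

    # Invented linear combination; a real tool uses published, validated coefficients.
    return 0.3 * age_years + 0.2 * heart_rate_bpm - 0.1 * systolic_bp_mmhg

print(risk_score(64, 88, 130))   # runs normally
# risk_score(64, 88, 17.3) raises ValueError - e.g. blood pressure entered in kPa
```

The point of the range checks is that a unit mix-up or data-entry error fails loudly at input, rather than propagating into a plausible-looking but wrong score - exactly the class of bug that a blanket disclaimer would not excuse.
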
This page last updated: 17 February 2025