Artificial intelligence has generated a lot of interest within the healthcare industry in recent months. However, amongst the excitement there is some backlash from critics who claim there is no reality behind the hype. With elements of truth on both sides, this article attempts to separate AI reality from fiction.
When it comes to learning and problem solving, AI already plays a significant role in our daily lives. It helps Netflix and Amazon predict user preferences, and Siri and Alexa understand human speech, for example. Perhaps more important, though, is its tremendous potential to improve the healthcare industry, given the vast quantity of health data constantly being generated – 60% of all medical information ever produced was generated in the last six years alone.
Indeed, the vast majority of new AI initiatives, companies and funding globally is being directed to healthcare over and above other sectors, with healthcare often cited as the “hottest sector for AI deals”. Yet despite smart solutions such as Google’s DeepMind and Babylon Health already entering public and private healthcare markets, many people are still unsure what AI actually comprises, and for this reason it is often dismissed.
AI software is incredibly complex. It looks for patterns in data and uses them to predict, prevent or treat diseases. In many respects, AI can conduct this process in a faster, more economical and more effective way than a human being can. Imagine software that can collect information about patients recovering after surgery by monitoring their blood pressure (BP) and heart rate (HR), for example.
By observing the same scenario tens of thousands of times, it has learned that when BP and HR behave in a certain way, the patient is likely going into shock – meaning that medical intervention, such as intravenous fluids, is urgently required.
This process is known as machine learning. If such a system can predict impending deterioration earlier than humans can – typically a single nurse performing calculations manually – it is clearly of great value, and even more so given that the platform can process data on thousands of patients at once, where a human is limited to a case-by-case approach.
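As a toy sketch of this idea (not a clinical tool): the snippet below fits a simple logistic-regression model to synthetic blood-pressure and heart-rate readings, using an invented labelling rule loosely inspired by the clinical “shock index” (heart rate divided by systolic blood pressure). All data, thresholds and parameters here are illustrative assumptions, not anything a real deterioration-prediction product discloses.

```python
import math
import random

random.seed(42)

def synth_patient():
    """One synthetic post-operative reading. The label rule is purely
    illustrative, loosely echoing the clinical 'shock index' (HR/SBP)."""
    sbp = random.uniform(70, 140)   # systolic blood pressure, mmHg
    hr = random.uniform(50, 140)    # heart rate, beats per minute
    shock = 1 if hr / sbp > 1.0 else 0
    return (sbp, hr), shock

train = [synth_patient() for _ in range(1000)]

def scale(sbp, hr):
    # Map both vitals to roughly [0, 1] so gradient descent behaves well.
    return (sbp - 70) / 70, (hr - 50) / 90

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression fitted by plain batch gradient descent.
w, b = [0.0, 0.0], 0.0
for _ in range(1000):
    gw, gb = [0.0, 0.0], 0.0
    for (sbp, hr), y in train:
        x0, x1 = scale(sbp, hr)
        err = sigmoid(w[0] * x0 + w[1] * x1 + b) - y
        gw[0] += err * x0
        gw[1] += err * x1
        gb += err
    n = len(train)
    w[0] -= 2.0 * gw[0] / n
    w[1] -= 2.0 * gw[1] / n
    b -= 2.0 * gb / n

def shock_risk(sbp, hr):
    """Estimated probability that these vitals signal impending shock."""
    x0, x1 = scale(sbp, hr)
    return sigmoid(w[0] * x0 + w[1] * x1 + b)

# Falling pressure with a racing heart should score far higher risk
# than stable vitals.
print(shock_risk(80, 125) > shock_risk(120, 70))
```

The point of the sketch is only that the model learns the pattern (low BP plus high HR means risk) from examples, rather than being hand-programmed with a rule – which is what lets such a system monitor thousands of patients concurrently.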
The same process can be extended to more challenging situations. There is a vast amount of data captured about elderly patients living at home with chronic health conditions.
However, this data is typically only used retrospectively – for example, once a patient has already suffered a serious adverse event such as a fall or hospital admission. Through a process known as deep learning, AI can extract abstract, obscure patterns – in numerical data or even in free-text entries – that are indicative of impending deterioration. This includes registering when a care worker begins to change the way they describe a patient’s mood or interactions.
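A drastically simplified sketch of that last idea, with a hand-written word list standing in for the language patterns a deep-learning model would infer from data – the vocabulary, care notes and threshold below are all invented for illustration:

```python
# Hypothetical negative-mood vocabulary. A real system would learn such
# signals from large volumes of notes rather than use a fixed list.
NEGATIVE = {"withdrawn", "confused", "agitated", "tired", "refused"}

def negative_rate(notes):
    """Fraction of words across the notes that carry negative mood."""
    words = [w.strip(".,").lower() for note in notes for w in note.split()]
    if not words:
        return 0.0
    return sum(w in NEGATIVE for w in words) / len(words)

def flag_deterioration(baseline_notes, recent_notes, factor=2.0):
    """Flag when negative language becomes markedly more frequent than
    in the patient's baseline notes (thresholds are illustrative)."""
    base = negative_rate(baseline_notes)
    recent = negative_rate(recent_notes)
    return recent > max(base * factor, 0.02)

baseline = ["Mrs Jones was cheerful and ate well.",
            "Chatted about her garden, walked to the kitchen unaided."]
recent = ["Seemed withdrawn and confused this morning.",
          "Refused lunch, very tired and agitated."]

print(flag_deterioration(baseline, recent))  # → True
```

A genuine deep-learning model would go far beyond word counting – capturing context, phrasing and subtle shifts in tone – but the underlying goal is the same: surface a change in how the patient is described before a serious adverse event occurs.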
Preventing elderly patients in the community from falling seriously ill through simple interventions like this would reduce the current burden on the NHS and greatly improve the functioning of the healthcare system. This example also illustrates how AI can increase the capabilities of staff who may not be trained to make complex diagnostic decisions.
In addition to this, AI is immediately scalable, with the potential for thousands of staff to use it simultaneously. It can also concurrently improve in both breadth and depth as, unlike a human, an AI interface can be an expert in cardiology, rheumatology and pathology, simultaneously.
At the same time, AI can process volumes of information in each of these respective fields that are much larger than those a human specialist could process, reconciling what some perceive as a classical tension in service delivery – that between specialisation and generalisation.
Even so, despite the clear benefits, there are some practical limitations of AI in healthcare. Regulation and evaluation of new diagnostic techniques and therapies is essential before they can be used in practice.
AI that aims to alter patient care must be subject to the same rigorous scrutiny as a new medical device or drug would be. This is relatively easy for cases of simple machine learning where the inputs and outputs are clearly defined, but in a process as complex as deep learning, clarity is lost.
A complicated AI system may produce the right result most of the time, but when it doesn’t, it may not be possible to know what went wrong. This lack of accountability and transparency creates a notable risk.
Regulatory bodies – both for device approval, such as the MHRA (Medicines and Healthcare products Regulatory Agency), and for service delivery, such as the CQC (Care Quality Commission) – must therefore be proactive in tackling these issues by working with AI-enabled providers to better understand their technologies.
It is important for them to appreciate the algorithms and decision-trees upon which platforms make recommendations on care – somewhat akin to understanding the mechanism of action of a drug. This must also be coupled with appropriate measures concerning privacy, data security, and cyber-risk.
Adoption by staff and patients is another significant issue. Staff at all levels may feel uncomfortable taking instructions on life-or-death decisions from a computer system. Patients would also need to consent to their information being used by experimental systems – information they may be reluctant to share, particularly given that Stephen Hawking recently warned that AI technology could “end mankind”.
Finally, there are complications around data and technology that must be considered. Processes like deep learning require a huge amount of information in order to succeed at even basic tasks. Services that must digitise handwritten records could face enormous delays before their datasets become useful. Once the data is captured and formatted, the hardware processing requirements are also substantial.
In such cases there is a poor long-term argument for using expensive technology when employing human beings to do the same task would cost significantly less. Protecting large amounts of stored data can also be challenging and expensive.
Currently, the adoption of AI-driven healthcare is limited. ‘GP chatbots’ are probably the most widely used example, as they combine the kind of prediction outlined above with natural language processing. With steadily greater processing power available for less money, and with data scientists finding more efficient ways to train and explain their models, AI technology will in the near future have a much more significant presence in the healthcare system.
The biggest impediment will likely be growing public fear and distrust of AI. Yet even this is not necessarily a bad thing: it raises expectations of what the industry must demonstrate before its technology is adopted into practice, and helps ensure that we move into the future without violating important principles of safety and privacy.
Sourced by Dr Ben Maruthappu, co-founder and CEO of tech-enabled home care platform Cera