The value of healthcare workers’ intuition in effective clinical care has been verified by reports around the world again and again.
From doctors’ ability to spot sepsis in critically ill children, to “nurse worry” as a “vital sign” predictive of patient deterioration, to helping GPs navigate complex patient care, intuition appears to play a large role in supporting high-risk patients — even when data or computer outputs suggest another treatment approach.
Artificial intelligence has already begun to transform healthcare, and the health sector will only continue to consider AI innovations in 2024 and beyond. In this increasingly technological world, what is the role of these human hunches in healthcare practice? Is AI about to overtake doctors’ “gut feelings” entirely?
What is AI in healthcare and when is it used?
As Thomas Davenport from Babson College and Deloitte consultant Ravi Kalakota explain elsewhere, healthcare AI includes "rule-based expert systems", which apply prescribed knowledge-based rules to solve a problem, and "robotic process automation", which uses automation technologies to mimic some tasks of human workers.
Such technology can help with automated patient monitoring — where alerts are signalled once a rule criterion is met — patient scheduling reminders and medicine management.
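A rule-based monitoring alert of the kind described above can be sketched in a few lines. This is a minimal illustration only: the vital-sign names and thresholds below are invented for the example, not clinical guidance.

```python
# Each rule is (vital sign, criterion, alert message). The thresholds
# are illustrative placeholders, not medical advice.
RULES = [
    ("heart_rate", lambda v: v > 120, "Tachycardia: heart rate above 120 bpm"),
    ("spo2", lambda v: v < 92, "Hypoxia: oxygen saturation below 92%"),
    ("temp_c", lambda v: v >= 38.0, "Fever: temperature at or above 38.0 C"),
]

def check_vitals(vitals: dict) -> list:
    """Return the alert message for every rule whose criterion is met."""
    alerts = []
    for sign, criterion, message in RULES:
        if sign in vitals and criterion(vitals[sign]):
            alerts.append(message)
    return alerts

# A reading that trips the heart-rate and temperature rules but not SpO2
print(check_vitals({"heart_rate": 130, "spo2": 95, "temp_c": 38.4}))
```

The point of such systems is exactly this transparency: every alert traces back to an explicit rule a clinician wrote down, which is also their limitation compared with learned models.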
Other forms of AI used in healthcare include robots, natural language processing and machine learning.
Robots can help move and stock medical supplies, lift and reposition patients and assist surgeons. One Finnish hospital has launched a €7 billion project, set to be completed in 2028, which will engage robots to collect patient data typically reliant on human physical touch, from measuring pulse to taking temperature and calculating oxygen saturation.
The release of ChatGPT in late 2022 marked a leap forward for AI in popular consciousness. This type of AI, which requires training on large data sets (supported by human feedback), focuses on giving computers the ability to read, interpret and generate human language. Such natural language processing has changed the communication landscape with its language mimicry.
While some note that the hype hasn't quite been reflected in reality, professionals in a range of sectors — including healthcare — now use ChatGPT for correspondence, such as drafting "sick notes" or managing medication or healthcare information.
There are predictions that healthcare natural language processing will be a US$7.2 billion business by 2028, with this type of AI being deployed to help translate complex published papers for public consumption, for analysis of electronic health records to help identify at-risk patients, and to interact with patients to help with triage or answer healthcare questions.
In 2024, some say this type of AI will likely focus on more sophisticated language models that power chatbots and virtual assistants, and will be built into word-processing programs.
Machine learning gives computers the ability to learn without explicitly being programmed for a given task. The algorithms driving these types of AI are based on statistical and predictive models. Like natural language processing, machine learning often relies on “training” from existing data sets, which have been human-reviewed and annotated.
Essentially, machine learning doesn’t automatically know what to look for, and without human-informed training, this type of AI tends to provide lots of noise and useless predictions. Once trained, machine learning can take previously unseen patient information and apply its prior “training” to analyse the data and predict outcomes, or make recommendations.
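The "train on annotated data, then predict on unseen patients" loop described above can be shown with a toy nearest-neighbour classifier, written with the standard library only. The patient records, labels and features here are invented purely for illustration (real systems would also scale features so one vital sign does not dominate the distance).

```python
import math

# Human-reviewed "training" data: (heart_rate, temp_c) -> outcome label.
# These four records are fabricated for the sketch.
TRAINING = [
    ((70, 36.8), "stable"),
    ((75, 37.0), "stable"),
    ((125, 38.5), "deteriorating"),
    ((130, 39.1), "deteriorating"),
]

def predict(vitals):
    """Label unseen vitals with the label of the closest training example."""
    nearest = min(TRAINING, key=lambda example: math.dist(example[0], vitals))
    return nearest[1]

print(predict((128, 38.9)))  # closest to the "deteriorating" examples
```

Without those human-annotated examples, the function has nothing to compare against — which is the sense in which untrained machine learning yields only noise.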
In healthcare, machine learning can recognise patterns that may be missed by humans, with AI’s role notable in patient survival of gastric cancer, identification of primary causes of cancers and reducing breast cancer false positives.
In 2024, machine learning algorithms are likely to continue to be used for probing healthcare data analytics, with vast medical data provided by wearables, medical devices and electronic health records. But as with all forms of healthcare AI, it is clear humans are still needed for AI training, evaluation of outputs, and considering the impacts of the AI recommendation.
Artificial versus human intelligence
Given the global healthcare workforce shortages, the proverb may be true: necessity is the mother of invention.
It may not be long until healthcare includes integrated AI forms where a robot greets you in your native language for your annual check-up using natural language processing, takes your vital signs, and sends a recommendation to the doctor on which patient to prioritise and what investigations need to be ordered, using machine learning algorithms designed to analyse collected vital signs.
What AI can’t do is replace the natural “gut feel” of a healthcare professional. And this won’t change in 2024.
The clinical reasoning and thinking process that healthcare providers engage in is extraordinarily complex, and the sources of information the human brain weighs in patient care are too numerous to capture with current algorithms. The implicit knowledge an expert relies on for effective clinical care is so deeply embedded in automatic, tacit cognition that attempts to extract it as data points often fail.
On top of this, AI-accessible data and the AI algorithms themselves can have flaws.
Machine learning can be overly sensitive, leading to over-diagnoses in some patients. Natural language processing AIs can act as healthcare Trojan horses: the technology is so convincing in its communication that it tricks the user into thinking it is knowledgeable in the same way a human is.
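The over-sensitivity problem can be made concrete with a small numerical sketch: lowering a model's decision threshold flags more truly ill patients, but also flags more healthy ones. The risk scores and labels below are made up for the example.

```python
# (model risk score, truly ill?) for ten hypothetical patients
cases = [(0.95, True), (0.80, True), (0.75, False), (0.60, False),
         (0.55, True), (0.40, False), (0.35, False), (0.20, False),
         (0.15, False), (0.05, False)]

def false_positives(threshold):
    """Count healthy patients flagged as ill at a given threshold."""
    return sum(1 for score, ill in cases if score >= threshold and not ill)

print(false_positives(0.7))  # stricter threshold: 1 healthy patient flagged
print(false_positives(0.3))  # more "sensitive" threshold: 4 healthy patients flagged
```

Each of those extra flags is a potential over-diagnosis, with the follow-up tests and anxiety that come with it.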
There are also privacy concerns with such AI applications.
AI typically relies on data input to continue learning — and the clarity about what happens to confidential patient information when input into AI remains an open question for many platforms. There are also challenges that the healthcare AI field is tackling related to bias and responsibility, and remaining questions about what we consider “mundane” and “repetitive” tasks that AI can truly take off humans.
In reality (ironically), all existing AI is devoid of the necessary context within which healthcare occurs. It misses the complexity, empathy and important data points that human intelligence has access to, and can therefore only replicate specific human tasks.
Originally published under Creative Commons by 360info™.
I am sure there are examples of the value of intuition in health workers – which might be another name for subconsciously drawing on their memory bank of experience and training.
However, I am also pretty sure there would be an equal or larger number of people negatively impacted by health workers' assumptions and prejudices that narrow the scope of their activity. This would at least relate to women and also all sorts of atypical groups.
I like the sound of it!
Immediate access to the current world's entire medical knowledge. No human bias, forgetfulness, misjudgement or misinterpretation, no ego involvement, no union forcing unrealistic fees and payments and no doubtful or forged qualifications.
Now watch the storm of protest and negativity from the Doctors and other “Healthcare” professionals.
Probably only the Pathology crowd being unaffected, unless the Robodocs are fitted with their own analysis system.
Bring it on!
Such a system just automates and reifies existing bias.