AI in Healthcare

Nicola Pastorello

About the Author

Nicola Pastorello is Data Analytics Manager at Daisee, an Australia-based Artificial Intelligence company bridging the gap between technical AI and commercial application in the fields of vision and natural-language processing. He has extensive experience in applying impactful AI to healthcare-related problems (e.g. predicting epileptic seizures and improving medical diagnoses).

New software tools powered by Artificial Intelligence (AI) are going to be dominant components of near-future healthcare. Soon, medical practitioners and researchers will routinely adopt a wide range of machine-learning techniques in most of their daily tasks. Here, I show some of the most exciting results obtained in this space and discuss how future developments of AI will radically change the way we diagnose and cure people.

Artificial Intelligence (AI) is going to heavily affect how we research, diagnose and cure diseases in the very near future. The adoption of machine learning tools and algorithms by physicians and researchers will bring enormous benefits to the whole community. It will allow prompter and more accurate diagnoses, help doctors navigate the plethora of new medical research in order to better define their treatment strategies, and allow researchers to develop new cures and better identify new patterns in complex data.

Two main factors are responsible for the rapid adoption of AI in healthcare: the availability of large volumes of personal health data, and the massive advancements in computing technology.

Big and Deep Data

Until a few years ago, measuring most health biomarkers required systems and devices available only in a hospital environment. As a consequence, health data records were historically collected only when a particular analysis or scan was actually required, rather than continuously throughout the patient's daily life.

But the growing use of cheap, always-recording sensors embedded in wearable smart devices and smartphones to collect and share personal health data is changing this. These sensors are increasingly efficient and accurate, often competing in quality with their medical-grade counterparts. The affordability of such devices and their capacity for continuously collecting data have driven an explosion in the amount of personal-health data available to researchers and practitioners. Moreover, easy and continuous monitoring of biomarkers outside the hospital environment can provide physicians with a much more frequent measure of their patients' wellbeing. Of course, this raises a number of ethical and privacy concerns, which will be discussed later.

Meanwhile, analysis and patient-examination technology in healthcare facilities is becoming more detailed and able to record huge amounts of information in a single scan. As an example, DNA sequencing is no longer the prohibitively expensive exam it used to be, and even a simple blood test can return dozens of different biomarker measurements. However, when all this personal health data is aggregated to obtain a more complete picture of a patient's health status, the number of parameters to be studied can far exceed the number that humans are able to evaluate.

Most diagnoses are based on finding patterns in symptoms that can be linked to known diseases. The better these symptoms are mapped, the lower the likelihood of a wrong diagnosis (assuming that the diseases' symptoms are known in advance). However, many different diseases overlap substantially in symptom space. Thus, to increase accuracy, multiple different exams or scans are needed, but these can be expensive and require specialised technicians. A common approach is to proceed one exam at a time in an exclusion pattern and diagnose the most likely disease given the available results. However, the larger and more complex the space of explored parameters, the harder it is for the doctor to discern real signal (i.e. what is actually linked with the disease) from statistical noise.
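
As a rough illustration of this kind of probabilistic reasoning, the sketch below (with entirely invented diseases, symptoms and probabilities) applies Bayes' rule under a naive independence assumption to rank candidate diagnoses given a set of observed symptoms. Real diagnostic models are far richer, but the principle of combining prior prevalence with symptom likelihoods is the same.

    # Toy illustration of probabilistic diagnosis: every number here is invented.
    # Each disease has a prior prevalence and per-symptom likelihoods.
    priors = {"flu": 0.05, "common_cold": 0.20, "pneumonia": 0.01}

    # P(symptom present | disease); an absent symptom contributes (1 - p).
    likelihoods = {
        "flu":         {"fever": 0.90, "cough": 0.80, "chest_pain": 0.10},
        "common_cold": {"fever": 0.20, "cough": 0.70, "chest_pain": 0.05},
        "pneumonia":   {"fever": 0.85, "cough": 0.90, "chest_pain": 0.60},
    }

    observed = {"fever": True, "cough": True, "chest_pain": False}

    def posterior(priors, likelihoods, observed):
        """Normalised P(disease | observed symptoms) under a naive symptom model."""
        scores = {}
        for disease, prior in priors.items():
            p = prior
            for symptom, present in observed.items():
                p_sym = likelihoods[disease][symptom]
                p *= p_sym if present else (1.0 - p_sym)
            scores[disease] = p
        total = sum(scores.values())
        return {d: s / total for d, s in scores.items()}

    for disease, prob in sorted(posterior(priors, likelihoods, observed).items(),
                                key=lambda kv: -kv[1]):
        print(f"{disease}: {prob:.2f}")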

A similar issue is faced in several fields of medical research. For instance, finding the sequence in human DNA that is linked with a given trait or response is a beefed-up version of the needle-in-a-haystack problem. Although the first complete human genome sequence dates back almost twenty years, we are not much closer to understanding the genetic causes of diseases like cancer or leukaemia. The information we can extract from a single genome is huge, and understanding which components are causally related to the phenomenon of interest is a massively complex problem.
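
To get a feeling for the scale of the problem, the toy sketch below (purely illustrative, with a randomly generated sequence standing in for real genomic data) counts how often a short motif appears by chance alone; scaled to a full three-billion-base genome, such spurious matches number in the hundreds of thousands, which is why simple pattern matching cannot separate causal signal from background noise.

    import random

    # Purely illustrative: a random 1 Mb "genome" stands in for real sequence data.
    random.seed(0)
    genome = "".join(random.choice("ACGT") for _ in range(1_000_000))
    motif = "GATTACA"

    # Count chance occurrences of the motif by brute force.
    count = sum(1 for i in range(len(genome) - len(motif) + 1)
                if genome[i:i + len(motif)] == motif)

    # Expected number of chance matches: one per 4**7 = 16 384 positions.
    expected = (len(genome) - len(motif) + 1) / 4 ** len(motif)
    print(f"Observed {count} matches; ~{expected:.0f} expected by chance alone")
    print(f"Scaled to a 3 Gb genome, that is roughly {int(expected * 3000):,} spurious hits")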

The second factor behind the advent of AI in healthcare is the availability of technologies that allow machine learning models to be trained efficiently on complex problems. Although most of the core AI algorithms in use today date back decades, only in the past few years have they become applicable to real-world problems. Thanks to the evolution of CPU (and GPU) technology, AI models that once needed weeks of training on a limited dataset can now be trained on gigabytes of data in only a few hours.

With these algorithms, we are finally able to explore incredibly vast parameter spaces, deal with noisy and missing information, and learn high-dimensional patterns that would otherwise be inaccessible.

The human body is a complex system of which only the high-level components are well understood. While, for example, we can explain the relation between viruses and the flu, we are not yet able to fully understand the processes that govern the biological cell, or which differences in genomic information are linked with high-level differences among individuals.

Furthermore, when studying the effects of a medication or how a patient's health status varies over time, medical records offer only sporadic sampling. Since most exams and scans still require the patient to be in a hospital environment, they are in most situations limited to when they are actually needed (e.g., when the person is sick) rather than carried out at a higher frequency (which, in turn, would help map the whole course of an illness through to full recovery). As a consequence, most collected medical data is biased towards the periods when the patient is affected by some disease, or recovering from it. Often, no records exist for the periods when a person feels healthy, and the same exam might be repeated years apart. AI in healthcare must be able to include such sparse and non-representative information in its input dataset, and integrate it with personal lifestyle and population profiling (e.g., age, alcohol consumption), to better predict the risk factors for a single individual. These systems should also cope with sporadic missing data, noise and errors or uncertainties in historical datasets (as is the case for most manually compiled information) at least as well as their human counterparts. Finally, they have to intelligently include in their analysis centuries of medical research (mostly from the medical literature) and take into account the knowledge the assisted doctor has matured through experience.
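
A minimal sketch of the data-preparation side of this problem is shown below, assuming a hypothetical, sparsely sampled lab marker for a single patient. It places the irregular readings onto a regular monthly grid and flags which periods were actually observed, so that a downstream model sees the gaps explicitly rather than mistaking a handful of sick-day readings for a representative history.

    import pandas as pd

    # Hypothetical, sparsely sampled inflammation marker (CRP) for one patient:
    # readings cluster around two illness episodes, years apart, with nothing between.
    records = pd.DataFrame(
        {
            "date": pd.to_datetime(
                ["2015-03-01", "2015-03-08", "2015-04-02", "2019-11-20", "2019-12-05"]
            ),
            "crp_mg_l": [48.0, 20.0, 4.0, 61.0, 9.0],
        }
    ).set_index("date")

    # Put the readings onto a regular monthly grid so the gaps become explicit,
    # instead of treating five sick-day measurements as a representative history.
    monthly = records.resample("MS").mean()
    monthly["observed"] = monthly["crp_mg_l"].notna()    # flag genuine measurements
    monthly["crp_filled"] = monthly["crp_mg_l"].ffill()  # naive gap-fill, for illustration only

    print(monthly.head(12))
    print(f"{monthly['observed'].mean():.0%} of months have an actual measurement")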

All these problems would require a Strong AI: an intelligent system able to mimic the learning and generalisation skills of the creative human mind, and thus able to find creative solutions to unknown and unseen problems. At the present stage, we are at least decades away from such an artificial intelligence, but under certain constraints, current state-of-the-art AI has already been used productively in this space. In particular, when problems are narrower in scope and better defined, Weak AI solutions have shown remarkable results.

A limitation of these approaches is the need for massive amounts of labelled training data (i.e. data where the ground truth for the problem is available for a large number of cases) and the assumption that all useful information is somehow encoded in such data.
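
The sketch below illustrates what "labelled" means in practice, using synthetic data as a stand-in for a real medical dataset: the ground-truth outcomes y are exactly what the model is fitted against, and without them this style of supervised learning is simply not possible.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a labelled medical dataset: X holds measurements,
    # y holds the ground-truth outcome the model is supposed to learn.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    model = LogisticRegression()
    model.fit(X_train, y_train)  # impossible without the ground-truth labels y_train
    print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")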

The future

In the future, the vast majority of physicians will use AI tools in their clinical and research work. Faster and more precise diagnoses will be the norm, as will the use of these tools to select the best treatments for most conditions.

However, some potential risks have to be taken into consideration. First, personal health information is sensitive data: adequate security and privacy policies must be established around any pipeline or algorithm that makes use of it. Second, the predictions or decisions obtained are only as good as the data used to train the models. For example, if the training dataset is biased towards a non-representative distribution, the results can be catastrophically inaccurate: training an AI skin-cancer image classifier only on examples of a specific variant of the condition would prevent the model from generalising to other forms of the cancer. While this is not a critical issue in most non-medical AI applications, in healthcare it could lead to wrong decisions in life-and-death situations. Policymakers, healthcare professionals and patients need to be aware of these issues while this technology is in its early stages.
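
A simple, purely illustrative audit along these lines is sketched below (with invented subtypes and toy predictions): counting how examples are distributed across subgroups and reporting accuracy per subgroup, rather than a single aggregate figure, is often enough to surface this kind of bias before a model reaches clinical use.

    from collections import Counter

    # Hypothetical audit of a skin-lesion classifier: each record is
    # (lesion subtype, true label, predicted label), with invented values.
    results = [
        ("subtype_A", 1, 1), ("subtype_A", 1, 1), ("subtype_A", 0, 0),
        ("subtype_A", 1, 1), ("subtype_A", 0, 0), ("subtype_A", 1, 1),
        ("subtype_B", 1, 0), ("subtype_B", 1, 0),  # rare subtype, poorly served
    ]

    # 1. Is the dataset dominated by a single subtype?
    print(Counter(subtype for subtype, _, _ in results))

    # 2. A single aggregate accuracy hides the failure on the rare subtype.
    by_group = {}
    for subtype, truth, pred in results:
        by_group.setdefault(subtype, []).append(truth == pred)
    for subtype, hits in by_group.items():
        print(f"{subtype}: accuracy {sum(hits) / len(hits):.2f} over {len(hits)} cases")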

Examples of Weak AI in Healthcare

IBM’s intelligent system Watson was one of the first examples of AI applied in healthcare. Since its first project supporting lung cancer treatment in 2013, IBM has created a whole suite of AI tools for clinical decision support (Watson Health) that process medical information from a large number of resources (e.g., encyclopaedias, taxonomies, treatment guidelines) and evidence (e.g., scans, images, tests) in order to build up an easily accessible knowledge base that can help physicians choose the best treatment for their patients. Once the symptomatic information is submitted to Watson, the system mines the patient’s historical health records and formulates hypotheses about the potential cause of the illness.

A more recent project, Watson for Oncology, has been trialled in collaboration with Manipal Hospitals in India. The outcome of this project is an online tool that helps cancer patients identify personalised care options and gives physicians access to an “expert-level” second opinion. The system is continuously trained on a dataset drawn from almost 300 medical journals and 12 million pages of text.

Google, initially known for its web search engine, also entered this space in a big way by acquiring DeepMind in 2014, a startup based mainly in the UK that gained widespread attention when its AlphaGo engine defeated the best Go players in the world.

DeepMind Health is a research project involving a number of healthcare providers and universities across the UK. From these collaborations, AI tools and models have been designed and tested to search for predictive signs of blindness, to classify cancerous and healthy skin tissues, and to develop clinical mobile apps that can link to a person’s digital health record.

Finally, AI is an important component of many privately funded healthcare projects at the Deakin Software and Technology Innovation Lab (Deakin University, Australia). For example, it has been designed and deployed for the early prediction of epileptic seizures and for the quantitative assessment of mental health conditions such as stress, anxiety and depression.

In the first case, thanks to the large amount of high-quality data collected by the Comprehensive Epilepsy Program at the Royal Melbourne Hospital, DSTIL’s researchers have been able to train deep learning algorithms to identify patterns and features in detailed clinical and imaging signals. AI in this case can predict whether a convulsive epileptic seizure is going to occur in the next few minutes, potentially leading to a better understanding of the causes of such events and massively improving the quality of life of people affected by this pathology.
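
The published pipeline is not described here in detail, so the sketch below is only a generic illustration of the typical setup for this kind of task: fixed-length windows of multi-channel signal, labelled as pre-seizure or baseline, fed to a small one-dimensional convolutional network. The data is random and the architecture, window length and channel count are all assumptions, not DSTIL’s actual model.

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # Random data standing in for real recordings: 1 000 windows of
    # 512 samples x 8 channels, labelled pre-seizure (1) or baseline (0).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 512, 8)).astype("float32")
    y = rng.integers(0, 2, size=1000)

    # A small 1D convolutional network over each window (illustrative architecture).
    model = keras.Sequential([
        layers.Input(shape=(512, 8)),
        layers.Conv1D(16, kernel_size=7, activation="relu"),
        layers.MaxPooling1D(4),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(1, activation="sigmoid"),  # probability that a seizure is imminent
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2)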

In a different project, DSTIL developed an AI that is able to quantitatively and accurately detect stress and mental health conditions from biomarker profiles obtained with consumer-level wearable smart devices. Stress levels are detected with more than 90 per cent accuracy, while anxiety and depression can be confidently identified in more than 85 per cent of cases. As a comparison, professional psychiatric assessments show consistent agreement in less than 70 per cent of cases.
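
Again as a purely hypothetical sketch (the actual features and models used by DSTIL are not described here), the example below derives a few hand-crafted features per window of wearable data, such as mean heart rate and a heart-rate-variability proxy, and cross-validates a standard classifier on synthetic stand-in data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for five-minute windows of wearable data, each
    # summarised by a few hand-crafted features.
    rng = np.random.default_rng(1)
    n = 400
    features = np.column_stack([
        rng.normal(75, 10, n),   # mean heart rate (bpm)
        rng.normal(45, 15, n),   # heart-rate-variability proxy (ms)
        rng.normal(4, 1.5, n),   # skin conductance level (microsiemens)
    ])
    labels = ((features[:, 0] > 80) & (features[:, 1] < 45)).astype(int)  # toy "stressed" rule

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, features, labels, cv=5)
    print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")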