Artificial intelligence is having a zeitgeist moment. From the hosts of Westworld to Ava, the Turing-tested humanoid of Ex Machina, cinema trades on both its promise and its horrors. Siri, Cortana, and self-driving cars are perhaps the most popular, practical examples of the technology in use.

Harnessing the power of artificial intelligence (AI) provides enticing opportunities that could transform the medical field. 

In September, U of T Professor Emeritus Geoffrey Hinton and President Emeritus David Naylor both published articles in the Journal of the American Medical Association on deep learning and its potential to transform medicine. 

Hinton, who is also a vice-president and engineering fellow at Google, distilled intricate aspects of deep learning in his article, while Naylor explored prospects for machine learning in health care in his. 

At U of T, Canada Research Chair in Robots for Society and Associate Professor Goldie Nejat and her team develop socially assistive robots to aid seniors, and Professor Shahrokh Valaee uses AI and artificial X-rays to pinpoint diseases.

“I believe in artificial intelligence in the long run,” said Dr. Frank Rudzicz. “I believe there is a future out there where you’ll have something like Echo in your house and Echo itself could diagnose you.”

Rudzicz is a scientist at the International Centre for Surgical Safety of the Li Ka Shing Knowledge Institute at St. Michael's Hospital and a faculty member at the Vector Institute for Artificial Intelligence.

He is among a number of researchers working to use AI to transform the practice of medicine. 

At the Speech and Oral Communication lab (SPOClab), Rudzicz's team of researchers investigates machine learning and natural language processing for use in health care.

Their aim is to use data to produce software that helps individuals with disabilities communicate. 

“We’re interested in the whole mechanism of speech and language processing. From the acoustics in speech, to how it is physically produced by the articulators, to how it’s produced in the brain,” said Rudzicz. 

In the short term, Rudzicz sees speech recognition technology serving as a kind of Google search for physicians, surfacing relevant medical information from a patient's history.

It could help reduce physicians' clerical burden by transcribing their conversations with patients and integrating those transcripts with electronic medical records.

In the long term, as knowledge grows of how illness affects speech articulation and speech patterns, the technology could become an end-to-end package for diagnosing diseases like Alzheimer's, Parkinson's, and cerebral palsy, with some human oversight.

Despite such endeavours, there remain several hurdles that need to be overcome prior to introducing machine learning applications to a clinical setting.

Rudzicz warned against gazing into a crystal ball to predict the future, which "can be fun but it can also be wildly off-base."

For instance, the datasets used to train machine learning programs can be expensive for AI developers to obtain, but they are essential: a model is built from sample inputs paired with feedback on its outputs.

Large and diverse datasets are also critical to avoid bias.
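That training loop, inputs paired with feedback on the model's errors, can be sketched in a few lines. The example below is a hypothetical toy illustration, not any lab's actual code: a simple perceptron nudges its weights whenever its prediction disagrees with the label.

```python
# Minimal supervised learning sketch: a perceptron learns from
# labeled examples, adjusting weights based on feedback (errors).
# Toy data only; not a real medical dataset.

def train(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])  # one weight per input feature
    b = 0.0                      # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred       # feedback: how wrong was the model?
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy "dataset": two features per sample, one binary label each.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]                 # the pattern to learn (logical AND)
model = train(X, y)
print([predict(model, x) for x in X])  # -> [0, 0, 0, 1]
```

Real clinical models are vastly larger, but the principle is the same, which is why the size and diversity of the training data matter so much: the model can only learn patterns its examples contain.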

The Vector Institute accesses a large dataset of Ontarians through a collaboration with the Institute for Clinical and Evaluative Sciences. Rudzicz explained that obtaining data is only the first step: the next is to build an AI model, which must then undergo rigorous clinical trials.

The final step is the buy-in from communities of health professionals who use the technology.

These stages are critical to developing an accurate model, and accuracy is especially significant in medical practice.

Take Watson, a computing system developed in IBM's DeepQA project, whose successes and failures attest to the pitfalls of machine learning.

At first, consumers hailed Watson as a potential breakthrough in cancer treatment, but recent news on Watson has been far from complimentary, citing inaccurate diagnoses, unsafe treatment advice, and general dissatisfaction from doctors. 

On the other hand, recently published studies that use deep learning to identify retinal disease, pneumonia, and skin cancer show promising results. In one, a deep neural network classified skin cancer on par with a panel of 21 dermatologists.

Though AI is still in its infancy, U of T is well positioned to revolutionize how machine learning is used in health care.