Experts in medicine, academia, and industry explored the promise and perils of artificial intelligence (AI) applications in health care during panel discussions with the Raw Talk Podcast on May 7. The event was organized by graduate students of U of T’s Institute of Medical Science.

The two panels, collectively named “Medicine Meets Machine: The Emerging Role of AI in Healthcare,” aimed to cut through sensationalism and correct common misconceptions about the growing field of study.

“On one hand, it seems like everyone has heard about [AI],” said Co-executive Producer Grace Jacobs. “But on the other hand, it seems like there’s a lot of misunderstanding and misconceptions that are quite common.”

How AI is used in health care

While discussing the reality of AI, several panelists emphasized that it should be viewed and treated as a tool. “It is statistics where you don’t have to predefine your model exactly,” said Dr. Jason Lerch of the University of Oxford.
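Lerch’s description can be made concrete with a toy sketch (not from the panel, and all data hypothetical): a 1-nearest-neighbour predictor learns a relationship directly from examples, with no predefined equation, whereas a hand-specified straight-line model would have to assume the relationship’s form in advance.

```python
# Sketch of "statistics without a predefined model": a 1-nearest-neighbour
# predictor answers queries by copying the closest observed example,
# so it never assumes an equation for the underlying relationship.

def nearest_neighbour_predict(x, data):
    """Predict y for x by returning the y of the closest observed x."""
    closest = min(data, key=lambda pair: abs(pair[0] - x))
    return closest[1]

# Hypothetical observations of a nonlinear relationship, y = x^2.
data = [(x, x * x) for x in range(-5, 6)]

print(nearest_neighbour_predict(3.2, data))  # → 9, copied from the neighbour x = 3
```

A fixed linear model fitted to the same points would miss the curvature entirely; the learner recovers it because the data, not the analyst, define the model.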

Other speakers agreed that AI is an expansion of — or a replacement for — traditional statistics, image processing, and risk scores, as it can provide doctors with more robust and accurate information. However, final health care recommendations and decisions remain in the hands of doctors and patients.

“You always need a pilot,” said Dr. Marzyeh Ghassemi, a U of T assistant professor of computer science and medicine.

But what advantages can this tool provide? Ghassemi thinks it can assimilate clues from a wider range of patients’ conditions to predict treatment outcomes, replacing the experience-based intuition that doctors currently rely on.

Speaking on her time in the Intensive Care Unit as an MIT PhD student, Ghassemi said, “A patient would come in, and I swear they would look to me exactly the same as prior patients, and the… senior doctors would call it. They would say, ‘oh, this one’s not going to make it. They’re going to die.’ And I would say, ‘Okay… why?’ And they said, ‘I’m not sure. I have a sense.’”

“They used different words — gestalt, sense — but they all essentially said the same thing. ‘I just — I have a sense.’”

Doctors develop this sense by seeing many cases during their training, but they can intuit only from the cases they have personally experienced; AI algorithms can potentially learn from many more cases drawn from a wider dataset.

Learning from those cases requires access to patient data, which in turn requires conversations about consent and privacy. Ghassemi and Dr. Sunit Das, a neurosurgeon at St. Michael’s Hospital and scientist at the Keenan Research Centre for Biomedical Science, said that “de-identification” — the removal of information that can be traced back to individual identities — protects privacy.
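In code, the core idea of de-identification can be sketched as stripping direct identifiers from a record while keeping the clinically useful fields. This is a minimal illustration only; the field names and record are hypothetical, and real de-identification follows formal standards rather than a hand-picked list.

```python
# Toy de-identification: remove fields that trace back to an individual,
# keep fields useful for research. Field names are hypothetical.

DIRECT_IDENTIFIERS = {"name", "health_card_number", "address", "date_of_birth"}

def de_identify(record):
    """Return a copy of a patient record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {
    "name": "Jane Doe",
    "health_card_number": "1234-567-890",
    "age": 67,
    "diagnosis": "glioma",
}

print(de_identify(record))  # only age and diagnosis remain
```

As Paprica’s comments below note, removing direct identifiers alone does not eliminate re-identification risk: skilled analysts can sometimes piece identities back together from the remaining fields.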

Large de-identified datasets from the United States and the United Kingdom are available for AI research, but generally, Canada lags behind these countries in making health data available for this purpose.

Dr. Alison Paprica, Vice-President of Health Strategy and Partnerships at the Vector Institute, agreed that data should be used for research, but argued that de-identification alone does not eliminate risk.

“You’re not just giving a dataset to anybody,” she said. “You’re giving a dataset to people who are extremely skilled at finding relationships and patterns and maybe piecing together information in ways that most people couldn’t. So I think there’s going to be heightened sensitivity around re-identification risk.”

Society must manage this risk and balance it against the benefits. “How do we balance that?” Paprica asked. She suggested that consulting all involved stakeholders could help strike that equilibrium.

Advice for scientists aiming to use AI in their research

So what advice did the panelists have for scientists hoping to harness the power of AI in their own research?

Ghassemi stressed the importance of knowing what you’re doing: researchers have created many tools that make AI research easy to implement, but conscientious scientists need to know the statistical and training principles behind the methods.

“If you’re not aware of how these things are trained,” she said, “it’s really easy to misuse them. Like, shockingly easy to misuse them.”

Other panelists advised users to take care when choosing data to train the algorithms. “A learning algorithm can’t overcome bad data that goes in, or can’t completely overcome it,” said Lerch.
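Lerch’s caveat about bad data can be shown with a deliberately trivial example (not from the panel, numbers invented): even the simplest "learned" model, predicting the training mean, gives a skewed answer when its training data are skewed.

```python
# Garbage in, garbage out: the same trivial model produces a distorted
# prediction when trained on unrepresentative data. Numbers are hypothetical.

def fit_mean(samples):
    """A trivially simple learned model: predict the mean of the training data."""
    return sum(samples) / len(samples)

representative = [70, 72, 68, 71, 69]           # balanced measurements
biased = [70, 72, 68, 71, 69, 120, 118, 122]    # same data plus an over-sampled subgroup

print(fit_mean(representative))  # 70.0
print(fit_mean(biased))          # 88.75: the skew passes straight into the model
```

No amount of algorithmic sophistication on top of `fit_mean` could recover the balanced answer from the biased sample alone, which is the panelists’ point about choosing training data with care.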

Moderator Dr. Shreejoy Tripathy summed up a key takeaway on applying AI to health care: “Understand your data… And understand your algorithms.”