How to trust AI with life-or-death decisions

A lecture on the ethics of consequential AI systems

As advances in AI reach new heights, there are certain traits that AI systems must have if humans are to trust them to make consequential decisions, according to U of T Computer Science Professor Emeritus Dr. Ronald Baecker in his lecture, “What Society Must Require of AI.” In an effort to learn more about how society will change as artificial intelligence (AI) advances, I attended the talk on June 5 and left more informed about the important role that people have to play in shaping this fast-changing technology.

How is today’s AI different from past technologies?

AI has been a field of computer science for decades, yet it has recently been advancing at a strikingly fast pace, with machine learning catalyzing much of that progress.

Machine learning has improved pattern recognition to such an extent that it can now make consequential decisions normally made by humans. However, AI systems that apply machine learning can sometimes make mistakes.

For many AI systems, such mistakes are acceptable. These nonconsequential systems rarely make life-or-death decisions, explained Baecker, and their mistakes are “usually benign and can be corrected by trying again.”

But for consequential systems, AI-based software that addresses more complex and higher-stakes problems, such mistakes are unacceptable.

Problems could arise when using AI to drive autonomous vehicles, diagnose medical conditions, inform decisions made in the justice system, and guide military drones. Mistakes in these areas could result in the loss of human life.

Baecker said that the research community must work to improve consequential AI, which he explains through his proposed “societal demands on AI.” He noted that these demands must give AI human-like attributes in order to improve the decisions that it makes.

Would you trust consequential AI?

When we agree to implement solutions for complex problems, said Baecker, we normally need to understand the “purpose and context” behind the solution suggested by a person or organization.

“If doctors, police officers, or governments make [critical] decisions or perform actions, they will be held responsible,” explained Baecker. “[They] may have to account or explain the logic behind these actions or decisions.”

However, the decision-making processes behind today’s AI systems are often difficult for people to understand. If we cannot understand the reasoning behind an AI’s decisions, it may be difficult for us to detect mistakes by the system and to justify its correct decisions.

Two questions we must answer to trust consequential AI

If a system makes a mistake, how can we easily detect it? The procedures that certain machine learning systems use cannot be easily explained, as their complexity — based on “hundreds of thousands of processing elements and associated numerical weights” — cannot be communicated to or understood by users.
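
To make that scale concrete, here is a minimal Python sketch, an illustration of the arithmetic rather than anything Baecker presented: even a small, hypothetical feed-forward network carries hundreds of thousands of numerical weights, none of which corresponds to a human-readable reason.

```python
# Counting the parameters of a small, hypothetical feed-forward network.
# The layer sizes are illustrative assumptions, not any real system's.
layer_sizes = [784, 256, 128, 10]  # input, two hidden layers, output

total_weights = sum(
    n_in * n_out + n_out  # weight matrix plus bias vector per layer
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
)
print(f"Parameters to explain: {total_weights:,}")  # prints 235,146
```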

Even if the system works fine, how can we trust the results? For example, physicians reassure patients by explaining the reasoning for their treatment recommendations, so that patients understand what their decisions entail and why they are valid. It’s difficult to reassure users skeptical of an AI system’s decision when the decision-making process may be impossible to adequately explain.

Another real-life problem arises when courts use AI-embedded software to predict a defendant’s recidivism in order to aid in the setting of bonds. If that software system were inscrutable, then how could a defendant challenge the system’s reasoning on a decision that affects their freedom?

I found Baecker’s point fascinating: for society to trust consequential AI systems, which may become integrated with everyday technologies, we must be able to trust them as we would human decision-makers, and to do so, we must answer these questions.

Baecker’s point deserves more attention from us students, who, beyond using consumer technology every day, will likely experience the societal consequences of these AI systems once they are widely adopted.

Society must hold AI systems to stringent standards to trust them with life-or-death decisions

Baecker suggests that AI-embedded systems and algorithms must exhibit key characteristics of human decision-makers, a list of requirements that, he noted, “seems overwhelming.”

A trustworthy complex AI system, said Baecker, must display competence, dependability and reliability, openness, transparency and explainability, trustworthiness, responsibility and accountability, sensitivity, empathy, compassion, fairness, justice, and ethical behaviour. 

Baecker noted that the list is not exhaustive — it omits other attributes of true intelligence, such as common sense, intuition, context use, and discretion.

But at the same time, he also recognized that his list of requirements is an “extreme position,” which necessitates very high standards for a complex AI system to be considered trustworthy.

However, Baecker reinforced his belief that complex AI systems must be held to these stringent standards for society to be able to trust them to make life-or-death decisions.

“We as a research community must work towards endowing algorithmic agents with these attributes,” said Baecker. “And we must speak up to inform society that such conditions are now not satisfied, and to insist that people not be intimidated by hype and by high-tech mumbo-jumbo.”

“Society must insist on something like what I have proposed, or refinements of it, if we are to trust AI agents with matters of human welfare, health, life, and death.”

U of T student wins Pioneer Tournament with team for innovation that predicts human cancer risk

Hannah Le and teammates developed an innovation that blends AI, machine learning, and genomics

As many U of T students were wrapping up classes in March, first-year engineering student Hannah Le and her team won the third Pioneer Tournament — a worldwide competition that rewards participants for developing innovative ideas — for their project that used machine learning to identify and understand human biomarkers that predispose individuals to certain diseases.

Competition participants submit their projects online and post weekly progress updates, which earn points from fellow contestants who vote on them. After three weeks, a project becomes eligible for a weekly prize, awarded to the team with the highest point total at the end of that week. A project that places as a finalist for three weeks earns its team a larger award.

Le and her team members — Samarth Athreya, 16, and Ayaan Esmail, 14 — earned a top spot on the leaderboard in March and were awarded $7,000 from Pioneer to put toward their project. 

How the team got together

“Samarth, Ayaan and I met each other at an organization called The Knowledge Society in 2017,” wrote Le to The Varsity. The Knowledge Society is a startup incubator that exposes high school students to emerging technologies, such as artificial intelligence (AI), virtual reality, and brain-computer interfaces.

When the three innovators met, Esmail was working on a project that could accurately pinpoint and target cancer cells, while Athreya was working with machine learning models. With Le’s interest in genetics, the three decided to team up and investigate whether there was a way to use metabolic data to predict the onset of a disease.   

“I became incredibly curious on how we can decode the 3 billion letters [of DNA] in every cell of our body to increase human lifespan and healthspan,” wrote Le.

“Inspired by my grandmother who passed away due to cancer, I started asking myself the question: [could] there possibly be a way for us to predict the onset of cancer before it happens, instead of curing it?”

How Le’s team developed a model for predicting the risk of cancer development

At its core, the team’s AI platform uses a patient’s biological information to predict their risk of developing certain forms of cancer.

Metabolites are molecules that play a key role in maintaining cellular function, and some studies have shown that high levels of certain metabolites can signal the progression of lung cancer. But to develop and test their model, the team needed a large amount of metabolic data.
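
The article does not specify the team’s model, but as a hedged sketch of the general approach, predicting disease risk from metabolite levels might start with a simple classifier like the one below; the dataset, the number of metabolites, and the model choice are all hypothetical.

```python
# A minimal sketch of risk prediction from metabolite concentrations.
# The data are synthetic stand-ins; nothing here is the team's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset: rows are patients, columns are metabolite
# concentrations; labels mark whether the disease later developed.
X = rng.normal(size=(200, 5))  # 200 patients, 5 metabolites
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# predict_proba yields a per-patient risk score rather than a hard label.
print("Risk for first test patient:", model.predict_proba(X_test[:1])[0, 1])
```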

“To overcome such [a] limitation, we had the fortune to reach out to mentors such as the Head of Innovation at JLABS, [a Johnson & Johnson incubator], for further guidance and advice,” wrote Le. “As our team cultivates a stronger database, we would be able to produce more reliable results.”

“As teenagers we were far from experts [in] the field but we were really hungry to learn,” added Le.

As participants of the Pioneer Tournament, Le and her team received the opportunity to select a board of virtual advisors, who would provide guidance for their project.

“I recalled contacting Josh Tobin at OpenAI to ask him about the use of synthetic data in genomics research,” wrote Le. “[That enabled] us to understand both the strengths and weaknesses of such [an] approach, allowing us to pivot on what models to implement.”

The competition as a learning experience

Le remembers the Pioneer Tournament as an exciting chance to learn about different machine learning models and what made them effective, as well as about the projects that fellow participants were working on, all while attending courses at U of T.

“First year was an interesting journey of challenging course content, intertwined with unexpected personal growth,” wrote Le. “I learned how to strike a balance between working on personal projects, meeting interesting people, while completing my school work.”

And while Le is intrigued by the intersection of machine learning and genomics, she wrote, “I hope to keep an open mind and continue to be curious about the world around me.”

Eye spy a fruit fly

Drosophila melanogaster can distinguish other flies

Most of us don’t think much of fruit flies other than as noisy nuisances with their sights set on spoiled food. 

However, according to Jonathan Schneider and Joel Levine, researchers in UTM’s Department of Biology, fruit flies, or Drosophila melanogaster, have a higher capacity for visual comprehension than previously believed.

Schneider, a postdoctoral fellow, and his supervisor, Levine, Chair of UTM’s Biology Department and a senior fellow at the Canadian Institute for Advanced Research (CIFAR) Child & Brain Development program, detailed their research in a paper published in the October issue of PLOS ONE.

The research was funded by a CIFAR Catalyst grant and conducted in collaboration with Nihal Murali, a colleague from the Department of Machine Learning at the University of Guelph’s School of Engineering, and Graham Taylor, a Canada Research Chair in Machine Learning.

Though fruit flies have a limited scope of vision, they possess an incredibly layered and organized visual system, including hyperacute photoreceptors. 

Schneider and Levine wanted to determine whether fruit flies, despite their limited input image, could distinguish individual flies.

To do so, the researchers equipped a machine with 25,000 artificial neurons to mimic the eye of a fruit fly. They then recorded 20 individual flies — 10 male, 10 female — for 15 minutes a day over three days using a machine vision camera. From these recordings, they developed standardized images, which they resized to imitate the images the flies perceived.

They showed the images to three observers: ResNet18, a computer algorithm without the constraints of ‘fly eye’ technology; their ‘fly eye’ machine; and human participants. All three were tasked with re-identifying the fly whose images they had been shown.
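
For readers curious how such a comparison might be set up, here is a hedged sketch in PyTorch: an image is collapsed to a coarse resolution to approximate the fly-eye constraint, and an off-the-shelf ResNet18 produces an identity embedding. The 29-by-29 resolution, the embedding-based re-identification, and every other specific are illustrative assumptions, not the paper’s actual pipeline.

```python
# Comparing an unconstrained network with a crude 'fly eye' constraint.
# All sizes and the re-identification scheme are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

frame = torch.rand(1, 3, 224, 224)  # stand-in for one camera frame of a fly

# Collapse the frame to a very low resolution, then scale it back up so
# the network's expected input size still matches.
fly_view = F.interpolate(frame, size=(29, 29), mode="bilinear", align_corners=False)
fly_view = F.interpolate(fly_view, size=(224, 224), mode="bilinear", align_corners=False)

model = resnet18(weights=None)  # unconstrained baseline network
model.fc = torch.nn.Identity()  # drop the classifier; keep the embedding
model.eval()

with torch.no_grad():
    unconstrained = model(frame)   # full-resolution embedding
    constrained = model(fly_view)  # 'fly eye' embedding

# Re-identification would compare embeddings of new frames against
# stored ones, for example by cosine similarity.
print(F.cosine_similarity(unconstrained, constrained))
```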

The results indicated that fruit flies can extract meaning from their visual surroundings and can even recognize individual fruit flies, something that even fly biologists have had trouble with. 

“So, when one [fruit fly] lands next to another,” explains Schneider to Science Daily, “it’s ‘Hi Bob, Hey Alice.’”

Fruit flies’ extent of visual comprehension has implications for their social behaviour, and this study could help researchers learn how they communicate. 

These findings are also significant because most programs designed to mimic a human capacity, such as virtual assistants like Siri, Alexa, and Google Assistant, at best come close to it; the ‘fly eye’ machine, by contrast, went beyond the capacity it modelled.

Machines like these can bridge the gap between engineers and neurobiologists. The former can use such findings to design machines that are as biologically realistic as possible.

The latter can use that biological accuracy to hypothesize how visual systems process information and, as Schneider and his colleagues put it, “uncover not just how [fruit flies], but all of us, see the world.”

Like humans do

Can U of T researchers help turn computers into mini-minds?

While calculators can be helpful when tackling some math equations, they can’t compete with the complex thought processes of humans — at least, not yet. Dr. Richard Zemel and Dr. Raquel Urtasun of U of T’s Computer Science Department are trying to speed that research along; the two are working to build computers that think more like humans when processing data.

The two are part of a team of scientists and mathematicians led by the Baylor College of Medicine, trying to understand the computational building blocks of the brain. The goal is to create more advanced learning machines.

For this project, researchers at the University of Toronto will be partnering with the California Institute of Technology, Columbia University, Cornell University, Rice University, and the Max Planck Institute at the University of Tübingen.

Their research is supported by a program known as Machine Intelligence from Cortical Networks (MICrONS), which operates under the umbrella of the Intelligence Advanced Research Projects Activity (IARPA), a US agency that invests in high-risk, high-reward research offering solutions to the needs of US intelligence agencies. MICrONS is also part of the broader BRAIN Initiative, launched in 2013 by President Obama with an eye toward understanding devastating brain diseases and developing new technologies, treatments, and cures.

This research will not only help scientists understand the computational workings of the brain, but will also advance the study of synthetic neural networks in order to better predict events such as cyber-attacks, financial crashes, or hazardous weather. 

Algorithms based on neural networks are already used in a wide range of areas, from the consumer level to military intelligence, as seen in “speech recognition, text analysis, object classification, [as well as] image and video analysis programs. The applications are broad,” says Dr. Zemel, adding that the “aim is to extend some of the most popular types of machine-learning models using deep neural networks.”

The massive amounts of data produced across the world on a daily basis affect everything from spam in your inbox to military intelligence operations. Smarter and more discerning learning machines will help to manage and present this information in a more comprehensible way.

“Currently the rules by which activities in a network are defined are mostly ad hoc, and validated and improved by experience. Here we hope to gain some insight from natural deep neural networks to refine these rules,” said Zemel.

In other words, the ways in which current algorithms represent, transform, and learn from data are determined largely through trial and error.   

Neural network models date back to the 1980s, but advancements have long been constrained by scientists’ ability to measure the activity of only a few neurons at a time. Today, more accurate and plentiful data allows researchers to take a far more detailed look at brain activity, allowing for a computational, rather than merely architectural, understanding.

The availability of better tools, techniques, and technology will allow MICrONS researchers to measure the activity of 100,000 neurons while a subject is engaged in visual perception and learning tasks. Although the research teams will be mapping the activity of one cubic millimetre of a rodent’s brain (a volume less than one-millionth the size of the human brain), these tools will allow them to analyze neural circuits in ways that were unimaginable just a few years ago. 

The precision and microscopic scale of this research make it challenging: scientists are aiming to obtain a highly detailed and complete understanding of one small part of the brain, rather than a structural understanding of the brain as a whole.

The team hopes to use this data to develop new message-passing methods, which describe how information travels between the model neurons in a large network.
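
As a hedged sketch of what message passing means in this generic sense, the toy loop below lets each model neuron update its state from weighted messages sent by its neighbours; the connectivity, weights, and update rule are invented for illustration and are not MICrONS’s methods.

```python
# Toy synchronous message passing on a three-neuron network.
# Connectivity, weights, and the update rule are all assumptions.
import numpy as np

# Hypothetical connectivity: entry (i, j) is the weight from neuron j to i.
W = np.array([
    [0.0, 0.8, 0.0],
    [0.2, 0.0, 0.5],
    [0.7, 0.1, 0.0],
])
state = np.array([1.0, 0.0, 0.0])  # initial activity of three neurons

for step in range(5):
    messages = W @ state       # each neuron sums its incoming messages
    state = np.tanh(messages)  # assumed nonlinear update rule
    print(f"step {step + 1}: {state.round(3)}")
```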

The research that Drs. Zemel and Urtasun are conducting could bring computers closer to brain-like levels of functioning, allowing for more powerful performance that better aligns with human needs.