How to trust AI with life-or-death decisions

A lecture on the ethics of consequential AI systems

As AI systems grow more capable, there are certain traits they must have if humans are to trust them to make consequential decisions, argued U of T Computer Science Professor Emeritus Dr. Ronald Baecker in his lecture, “What Society Must Require of AI.” Hoping to learn more about how society will change as artificial intelligence (AI) advances, I attended the talk on June 5 and left more informed about the important role that people have to play in shaping this fast-changing technology.

How is today’s AI different from past technologies?

AI has been part of computer science for decades, yet the field has recently been advancing at a strikingly fast pace, with machine learning catalyzing much of that progress.

Machine learning has improved pattern recognition to such an extent that it can now make consequential decisions normally made by humans. However, AI systems that apply machine learning can sometimes make mistakes.

For many AI systems, this result is acceptable. Such nonconsequential systems rarely make life-or-death decisions, explained Baecker, and their mistakes are “usually benign and can be corrected by trying again.”

But for consequential systems, AI-based software that addresses more complex problems, such mistakes are unacceptable.

Problems could arise when using AI to drive autonomous vehicles, diagnose medical conditions, inform decisions made in the justice system, and guide military drones. Mistakes in these areas could result in the loss of human life.

Baecker said that the research community must work to improve consequential AI, a goal he frames through his proposed “societal demands on AI.” Meeting these demands, he noted, would give AI the human-like attributes needed to improve the decisions it makes.

Would you trust consequential AI?

When we agree to implement solutions for complex problems, said Baecker, we normally need to understand the “purpose and context” behind the solution suggested by a person or organization.

“If doctors, police officers, or governments make [critical] decisions or perform actions, they will be held responsible,” explained Baecker. “[They] may have to account or explain the logic behind these actions or decisions.”

However, the decision-making processes behind today’s AI systems are often difficult for people to understand. If we cannot understand the reasoning behind an AI’s decisions, it may be difficult for us to detect mistakes by the system and to justify its correct decisions.

Two questions we must answer to trust consequential AI

If a system makes a mistake, how can we easily detect it? The procedures that certain machine learning systems use cannot be easily explained, as their complexity — based on “hundreds of thousands of processing elements and associated numerical weights” — cannot be communicated to or understood by users.
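
To see why, consider a deliberately tiny neural network. This sketch is illustrative only, not something from Baecker's lecture, but it shows what those "processing elements and numerical weights" look like in practice:

```python
import numpy as np

# A toy two-layer network. Real systems have vastly more weights,
# but the point stands even at this scale: the "reasoning" is just
# arrays of unlabeled numbers.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 16))  # input-to-hidden weights
W2 = rng.normal(size=(16, 1))  # hidden-to-output weights

def predict(x):
    hidden = np.maximum(0, x @ W1)  # ReLU activation
    return hidden @ W2              # raw decision score

x = np.array([0.2, -1.3, 0.7, 0.05])
print(predict(x))
# The only "explanation" of this output is W1 and W2 themselves:
# numerical weights with no human-readable meaning attached.
```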

Even if the system works fine, how can we trust the results? For example, physicians reassure patients by explaining the reasoning for their treatment recommendations, so that patients understand what their decisions entail and why they are valid. It’s difficult to reassure users skeptical of an AI system’s decision when the decision-making process may be impossible to adequately explain.

Another real-life problem arises when courts use AI-embedded software to predict a defendant’s recidivism in order to aid in the setting of bonds. If that software system were inscrutable, then how could a defendant challenge the system’s reasoning on a decision that affects their freedom?

I found Baecker’s point fascinating: for society to be able to trust consequential AI systems, which may become integrated with everyday technologies, we must trust them like human decision-makers, and to do so, we must answer these questions.

Baecker’s point deserves more attention from us students, who, beyond using consumer technology every day, will likely experience the societal consequences of these AI systems once they are widely adopted.

Society must hold AI systems to stringent standards to trust them with life-or-death decisions

Baecker suggests that AI-embedded systems and algorithms must exhibit key characteristics of human decision-makers, with a list that, he noted, “seems overwhelming.”

A trustworthy complex AI system, said Baecker, must display competence, dependability and reliability, openness, transparency and explainability, trustworthiness, responsibility and accountability, sensitivity, empathy, compassion, fairness, justice, and ethical behaviour. 

Baecker noted that the list is not exhaustive — it omits other attributes of true intelligence, such as common sense, intuition, context use, and discretion.

At the same time, he recognized that his list of requirements is an “extreme position” that sets very high standards for a complex AI system to be considered trustworthy.

However, Baecker reinforced his belief that complex AI systems must be held to these stringent standards for society to be able to trust them to make life-or-death decisions.

“We as a research community must work towards endowing algorithmic agents with these attributes,” said Baecker. “And we must speak up to inform society that such conditions are now not satisfied, and to insist that people not be intimidated by hype and by high-tech mumbo-jumbo.”

“Society must insist on something like what I have proposed, or refinements of it, if we are to trust AI agents with matters of human welfare, health, life, and death.”

Can artificial intelligence transform health care?

U of T researchers are at the forefront of artificial intelligence applications

Artificial intelligence is having a zeitgeist moment. From the hosts of Westworld to the Turing-tested humanoid Ava of Ex Machina, cinema exploits paranoid visions of its possibilities and horrors. Siri, Cortana, and self-driving cars are more practical, everyday examples of the technology in use.

Harnessing the power of artificial intelligence (AI) provides enticing opportunities that could transform the medical field. 

In September, U of T Professor Emeritus Geoffrey Hinton and President Emeritus David Naylor both published articles in the Journal of the American Medical Association on deep learning and its potential to transform medicine. 

Hinton, who is also a vice-president and engineering fellow at Google, distilled intricate aspects of deep learning in his article, while Naylor explored prospects for machine learning in health care in his. 

At U of T, Canada Research Chair in Robots for Society and Associate Professor Goldie Nejat and her team develop socially assistive robots to aid seniors, and Professor Shahrokh Valaee uses AI and artificial X-rays to pinpoint diseases.

“I believe in artificial intelligence in the long run,” said Dr. Frank Rudzicz. “I believe there is a future out there where you’ll have something like Echo in your house and Echo itself could diagnose you.”

Rudzicz is a scientist at the International Centre for Surgical Safety of the Li Ka Shing Knowledge Institute at St. Michael’s Hospital and a faculty member at the Vector Institute for Artificial Intelligence.

He is among a number of researchers working to use AI to transform the practice of medicine. 

At the Speech and Oral Communication lab (SPOClab), Rudzicz’s team investigates machine learning and natural language processing for use in health care practices.

Their aim is to use data to produce software that helps individuals with disabilities communicate. 

“We’re interested in the whole mechanism of speech and language processing. From the acoustics in speech, to how it is physically produced by the articulators, to how it’s produced in the brain,” said Rudzicz. 

In the short term, Rudzicz sees the speech recognition technology as a kind of Google search for physicians, providing them with relevant medical information from their patient’s history.

It could help reduce the clerical burden for physicians by providing a transcription of communication with patients and integrating that with their electronic medical record. 

In the long term, as knowledge grows of how illness affects speech articulation and speech patterns, the technology could be used as an end-to-end package to diagnose diseases like Alzheimer’s, Parkinson’s, and cerebral palsy, with some human oversight.
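
In broad strokes, such a diagnostic tool would be a classifier trained on acoustic features extracted from recordings. Here is a minimal sketch of the idea, using synthetic data and hypothetical feature names rather than anything from SPOClab:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for acoustic features such as pause length,
# pitch variability, and articulation rate (feature names hypothetical).
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))     # 200 speech samples, 3 features each
y = rng.integers(0, 2, size=200)  # 0 = control, 1 = patient (synthetic labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
# Near chance level here because the data is random; a real system
# would use clinically validated features and labelled recordings.
```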

Despite such endeavours, there remain several hurdles that need to be overcome prior to introducing machine learning applications to a clinical setting.

Rudzicz warned against looking into the magical crystal ball for predicting the future, which “can be fun but it can also be wildly off-base.” 

For instance, the datasets used to develop machine learning programs can be an expensive proposition for AI developers to obtain.

Yet they are essential: a supervised system is built from examples, with each input paired with feedback on the correct answer. Large and diverse datasets are also critical to avoid biases.
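
To illustrate the bias point, here is a toy sketch, entirely synthetic and not drawn from any of the projects above, in which a model trained mostly on one group performs far worse on an underrepresented one:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Two synthetic patient groups whose decision rules differ;
# group B is badly underrepresented in the training data.
def make_group(n, shift):
    X = rng.normal(loc=shift, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # group-specific rule
    return X, y

X_a, y_a = make_group(1000, shift=0.0)  # well represented
X_b, y_b = make_group(20, shift=3.0)    # scarce

model = LogisticRegression().fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
)

# Evaluate on fresh samples from each group.
Xt_a, yt_a = make_group(500, shift=0.0)
Xt_b, yt_b = make_group(500, shift=3.0)
print("group A accuracy:", model.score(Xt_a, yt_a))  # high
print("group B accuracy:", model.score(Xt_b, yt_b))  # near chance
```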

The Vector Institute obtains a large dataset of Ontarians through a collaboration with the Institute for Clinical Evaluative Sciences. Rudzicz explained that obtaining datasets is only the first step. The next steps would be to build an AI model, which would undergo rigorous clinical trials.

The final step is the buy-in from communities of health professionals who use the technology.

These stages are critical to developing an accurate system, which is especially important in medical practice.

Take Watson, a computing system developed in IBM’s DeepQA project, whose successes and failures attest to the pitfalls of machine learning.

At first, consumers hailed Watson as a potential breakthrough in cancer treatment, but recent news on Watson has been far from complimentary, citing inaccurate diagnoses, unsafe treatment advice, and general dissatisfaction from doctors. 

On the other hand, recently published studies that use deep learning and deep neural networks to identify retinal disease, pneumonia, and skin cancer show hopeful results. In the skin cancer study, a deep neural network performed on par with a panel of 21 dermatologists.

Though AI is still in its infancy, U of T is in a position to revolutionize how machine learning is used in health care.

Computer Science departments welcome five new faculty members

U of T hopes to advance robotics research

Five new faculty members were appointed to U of T’s Computer Science departments for the 2018–2019 academic year, as the university moves to increase its commitment to computer science research, particularly in robotics.

The researchers come from a variety of backgrounds and have diverse research interests that encompass fields like robotics, machine learning, human-robot interaction, and parallel algorithms.

Dr. Animesh Garg, one of the new Assistant Professors in the Department of Mathematical and Computational Sciences at UTM, was previously a postdoctoral researcher at Stanford University.

In an email interview with The Varsity, Garg wrote that he chose to accept a position at U of T in part because of collaborations with industry leaders such as Google, NVIDIA, and Uber.

“The opportunity to work in such a dynamic environment composed of academic leaders, industrial partners and most of all inspiring students made for a great combination for a young academic such as myself to establish a thriving research lab,” continued Garg.

His research focuses on the fields of generalizable autonomy for robotics and “involves an integration of perception, machine learning and control in the real world.”

Dr. Maryam Mehri Dehnavi, a new Assistant Professor in the Department of Computer Science hailing from Rutgers University, wrote in an email to The Varsity that she was drawn to U of T because of its stellar academic environment and the city.

Dehnavi also pointed to the department’s focus beyond “just current trendy areas” and its investment in long-term research.

“We aim to significantly improve the performance of large-scale data-intensive problems on parallel and cloud computing platforms by building high-performance frameworks,” said Dehnavi of her research. “To build these frameworks we formulate scalable mathematical methods and develop domain-specific compilers and programming languages.”

Dr. Joseph Jay Williams is also a new Assistant Professor in the Department of Computer Science, previously from the National University of Singapore.

In an interview with The Varsity, he said that he is excited to join U of T due to the unique position he was offered in doing research that “applies computer science techniques to educational research.” In particular, Williams is excited to work on cross-disciplinary collaborations, such as with the Department of Psychology and the Ontario Institute for Studies in Education.

Williams’ research focuses on creating “intelligent self-improving systems that conduct dynamic experiments to discover how to optimize and personalize technology, helping people learn new concepts and change habitual behavior.”

In the future, Williams hopes to conduct randomized A/B experiments with practical applications in health and education.

Dr. Florian Shkurti will be an Assistant Professor in the Department of Mathematical and Computational Sciences at UTM, coming from McGill University. Shkurti was drawn to U of T due to its “longstanding tradition of excellence” in areas like robotics, machine learning, computer vision, and various engineering subfields.

One of Shkurti’s research projects develops robot control systems that enable robots to work alongside scientists in exploring underwater environments.

“In the future, I am planning to dedicate my research efforts to creating algorithms that learn useful abstractions and representations from large sources of unsupervised visual data,” said Shkurti.

Dr. Jessica Burgner-Kahrs from Leibniz Universität Hannover in Germany will join the Department of Mathematical and Computational Sciences at UTM as an Associate Professor.

According to Burgner-Kahrs, her research interests are in robotics, particularly in small-scale continuum robotics and human-robot interactions. She will be joining the faculty in Spring 2019.

Through its appointment of research-focused faculty, the university hopes to expand its research frontiers in computer science beyond traditional areas.

Become the master of CSC108

Introductory computer programming course experiments with self-paced mastery learning

This winter semester marks the start of the self-paced, mastery-based version of CSC108: Introduction to Computer Programming. Funded by the Provost’s Learning and Education Advancement Fund, this pilot will be testing whether mastery learning is an effective way to teach computer programming.

Paul Gries, an Associate Professor in U of T’s Department of Computer Science, has known “forever” that he wanted a self-paced version of CSC108, but he did not know how to implement it effectively. Mastery-based learning appears to be a viable answer.

Mastery learning requires students to demonstrate that they have mastered one concept before moving on to another. This differs from traditional styles of teaching, where students’ knowledge may be tested only once or twice over the course of the semester.

The course is broken down into seven units, called ‘quests.’ Students work through these quests by watching lectures online at home, and then go to class to work through exercises related to the content of the quests.

Before moving on to the next quest, students must demonstrate their mastery of the material by taking a quiz. If they achieve the threshold grade on the quiz, which ranges from 70 to 80 per cent, they are permitted to move on to the next set.

If they do not pass the mastery quiz, all hope is not lost. Students have the chance to practice more exercises or get one-on-one attention from Gries or the teaching assistants during class time.

To facilitate peer learning, students who are struggling with the same material are placed together.

Students can move through the quests as quickly or as slowly as they want — something that is unheard of in university courses. By this account, Gries said that it is possible to be finished with the course material by halfway through the semester.

CSC108 has had a history of students dropping out or failing, only to retake the course again later. Gries thinks that this happens because students do not realize they are struggling with the material until the midterm, at which point it is usually too late. “There’s no mechanism or structure for them to effectively catch up,” said Gries.

It is especially hard to identify students that are struggling in a class with such large enrolment numbers. Offered in the fall, winter, and summer semesters, CSC108 is one of the largest classes on the St. George campus, with over 2,000 students enrolled per term.

When courses are both mastery-based and self-paced, students can figure out if they are struggling much more quickly — and they’re given an opportunity to catch up.

Michael Spyker, a student currently in the mastery section, believes that this new teaching style eliminates the “downwards spiral” some of us may be familiar with in traditional classrooms. “The mastery based section… provides a much more organic teaching environment,” said Spyker.

While mastery learning at U of T is a relatively new endeavour, CSC108 is no stranger to innovative learning — there are already two other non-traditional sections offered: one solely online, and the other, an inverted version.

The main difference between the inverted version and the mastery version is that the latter is self-paced.

“There’s all sorts of other research supporting that the inverted classroom is better than the traditional classroom, because students are doing active learning — they’re engaged in the material,” said Gries, adding that he believes this constant, short-term engagement of the material leads to better learning.

Second-year student Spencer Ki, who took CSC108 last fall, agrees that the inverted course is effective. “I feel that the inclusive and ‘hands-on’ approach taken in lectures really helped me absorb what was being taught, as opposed to simply memorising it.”

However, if he had the chance, Ki said he would have taken the mastery version. “I definitely see the mastery-based course as the next step in the evolution of university classes.”

Still, the mastery version might not be for everyone. “We expect that for some people the inverted classroom might be better,” said Gries.

Ki had similar thoughts about the mastery version: “the temptation to procrastinate will probably be much higher.”

Gries’ enthusiasm for re-inventing education goes beyond just CSC108. He hopes that if this pilot proves to be successful, mastery-based learning could be implemented in introductory courses across U of T.

Gries does not see it stopping at universities: “There’s a lot of high schools that don’t offer [computer programming]. We’d like to find a way to offer that. So, maybe a couple of years from now we’ll have [a] mastery-based high school curriculum.”

Gries suggested that computer science students could travel to high schools to help teachers both learn and teach the material.