
How to trust AI with life-or-death decisions

A lecture on the ethics of consequential AI systems


As advances in AI reach new heights, there are certain traits that AI systems must have if humans are to trust them to make consequential decisions, according to U of T Computer Science Professor Emeritus Dr. Ronald Baecker at his lecture, “What Society Must Require of AI.” In an effort to learn more about how society will change as artificial intelligence (AI) advances, I attended the talk on June 5 and left better informed about the important role that people have to play in shaping this fast-changing technology.

How is today’s AI different from past technologies?

AI has been part of computer science for decades, yet the field has recently been advancing at a strikingly fast pace, with machine learning catalyzing its progress.

Machine learning has improved pattern recognition to such an extent that it can now make consequential decisions normally made by humans. However, AI systems that apply machine learning can sometimes make mistakes.

For many AI systems, occasional mistakes are acceptable. Such nonconsequential systems rarely make life-or-death decisions, explained Baecker, and their mistakes are “usually benign and can be corrected by trying again.”

But for consequential systems, AI-based software that addresses more complex problems, such mistakes are unacceptable.

Problems could arise when using AI to drive autonomous vehicles, diagnose medical conditions, inform decisions made in the justice system, and guide military drones. Mistakes in these areas could result in the loss of human life.

Baecker said that the research community must work to improve consequential AI, which he explains through his proposed “societal demands on AI.” He noted that these demands must give AI human-like attributes in order to improve the decisions that it makes.

Would you trust consequential AI?

When we agree to implement solutions for complex problems, said Baecker, we normally need to understand the “purpose and context” behind the solution suggested by a person or organization.

“If doctors, police officers, or governments make [critical] decisions or perform actions, they will be held responsible,” explained Baecker. “[They] may have to account or explain the logic behind these actions or decisions.”

However, the decision-making processes behind today’s AI systems are often difficult for people to understand. If we cannot understand the reasoning behind an AI’s decisions, it may be difficult for us to detect mistakes by the system and to justify its correct decisions.

Two questions we must answer to trust consequential AI

If a system makes a mistake, how can we easily detect it? The procedures that certain machine learning systems use cannot be easily explained, as their complexity — based on “hundreds of thousands of processing elements and associated numerical weights” — cannot be communicated to or understood by users.
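To make that point concrete, the sketch below is an illustration of my own, not anything presented at the lecture: even a small neural network, trained on made-up data, spreads its decision across thousands of numerical weights that carry no human-readable meaning.

# A minimal, illustrative sketch of why a trained model resists explanation:
# its decision is encoded in thousands of numerical weights, not readable rules.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                # 500 synthetic cases, 20 features each
y = (X[:, :5].sum(axis=1) > 0).astype(int)    # a hidden rule the model must learn

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

n_weights = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(f"The model's decision is spread across {n_weights} numerical weights.")
print("A few of them:", model.coefs_[0].ravel()[:5])  # just numbers, opaque to a reader

Asking such a model why it classified a case one way rather than another yields nothing like the reasoning a doctor or judge could give.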

Even if the system works fine, how can we trust the results? For example, physicians reassure patients by explaining the reasoning for their treatment recommendations, so that patients understand what their decisions entail and why they are valid. It’s difficult to reassure users skeptical of an AI system’s decision when the decision-making process may be impossible to adequately explain.


Another real-life problem arises when courts use AI-embedded software to predict a defendant’s recidivism in order to aid in the setting of bonds. If that software system were inscrutable, then how could a defendant challenge the system’s reasoning on a decision that affects their freedom?

I found Baecker’s point fascinating: for society to be able to trust consequential AI systems, which may become integrated with everyday technologies, we must trust them like human decision-makers, and to do so, we must answer these questions.

Baecker’s point deserves more attention from us students, who, beyond using consumer technology every day, will likely experience the societal consequences of these AI systems once they are widely adopted.

Society must hold AI systems to stringent standards to trust them with life-or-death decisions

Baecker suggests that AI-embedded systems and algorithms must exhibit key characteristics of human decision-makers, a list that he acknowledged “seems overwhelming.”

A trustworthy complex AI system, said Baecker, must display competence, dependability and reliability, openness, transparency and explainability, trustworthiness, responsibility and accountability, sensitivity, empathy, compassion, fairness, justice, and ethical behaviour. 

Baecker noted that the list is not exhaustive — it omits other attributes of true intelligence, such as common sense, intuition, context use, and discretion.

But at the same time, he also recognized that his list of requirements is an “extreme position,” which necessitates very high standards for a complex AI system to be considered trustworthy.

However, Baecker reinforced his belief that complex AI systems must be held to these stringent standards for society to be able to trust them to make life-or-death decisions.

“We as a research community must work towards endowing algorithmic agents with these attributes,” said Baecker. “And we must speak up to inform society that such conditions are now not satisfied, and to insist that people not be intimidated by hype and by high-tech mumbo-jumbo.”

“Society must insist on something like what I have proposed, or refinements of it, if we are to trust AI agents with matters of human welfare, health, life, and death.”

U of T team wins top prize at KPMG’s international AI competition

Paramount AI team created device that sorts waste with 94 per cent accuracy


A team of five U of T graduate students named Paramount AI won first place in KPMG’s 2019 Ideation Challenge, a worldwide competition to develop artificial intelligence (AI) solutions to problems facing businesses. KPMG is one of the world’s top four accounting firms.

The U of T students faced off against 600 participants from top universities across nine countries, including Canada, Australia, China, Germany, Luxembourg, Italy, the Netherlands, and the United Kingdom.

The final round was held from May 10–12 in Amsterdam, where the students — Maharshi Trivedi, Nikunj Viramgama, Aakash Iyer, Vaibhav Gupta, and Ganesh Vedula — won the top prize for their innovation, which used AI to automate waste segregation.

Paramount AI’s innovative solution

The winning innovation is a sorting system able to distinguish between three different categories of waste: recycling, organic, and garbage.

Iyer, who is specializing in data analytics and financial engineering, explained that the initial prototype of the system used LED light bulbs and basic circuits to classify the waste.
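The article does not describe the team’s final model, but as a rough, hypothetical sketch, a modern approach to a three-category sorter might start from a pretrained vision model and retrain its final layer on labelled images of waste. Everything below, including the class names, is an assumption for illustration, not Paramount AI’s actual system.

# Hypothetical sketch of a three-class waste classifier (recycling / organic / garbage).
# This is NOT Paramount AI's system; it illustrates one common transfer-learning setup.
import torch
import torch.nn as nn
from torchvision import models, transforms

CLASSES = ["recycling", "organic", "garbage"]

# Start from a pretrained backbone and swap in a three-class output layer.
# The new layer is untrained here; it would need fine-tuning on labelled waste images.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify(image):
    """Return the predicted waste category for a PIL image."""
    model.eval()
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    return CLASSES[int(logits.argmax(dim=1))]

Whether a setup like this could reach the team’s reported 94 per cent accuracy would depend entirely on the quality and variety of the training data.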

The five students worked continuously, with few breaks and limited sleep during the three days of the competition, which came at the expense of exploring Amsterdam.

The reward for their efforts came in confirming that the system would be practical to use in real-life situations. By the end of the competition, the team had also completed both a financial and a market analysis for the device.

The importance of waste segregation

Viramgama, who is specializing in data analytics and data science, explained that the team chose to focus on the issue of waste segregation because they were concerned about improper sorting in Toronto.

He noted that about one in three residents in Toronto contaminate the waste they place in recycling bins, and that 20 per cent of waste placed in blue recycling bins ends up in a landfill.

Limited landfill space has motivated government spending on improved waste management, and increased spending may lead to higher taxes, which makes the emergence of automated waste segregation something that could greatly benefit waste management.

The U of T team tackled this issue by creating a system that accurately sorts waste about 94 per cent of the time. Current waste systems have an accuracy of only up to 74 per cent, and each percentage point of accuracy translates to significant savings in waste management spending.

The pressing need for a solution to this environmental problem, which has economic consequences, could be a reason why Paramount AI won the competition.

The other reason, explained Vedula, was that the team was “not only thinking about saving the environment, but… also trying to help businesses [maximize] profits.”

The future of Paramount AI

The next step for Paramount AI is to present their prototype to experts at KPMG’s annual AI summit in October. By then, the team hopes to further develop their model, aiming to continue increasing the accuracy of their system, while likely adding new features to increase the value of the product for potential clients.

The students currently hold the intellectual property rights to their invention. With the support of KPMG, the team is looking to commercialize their product.

They are also optimistic that AI will positively shape the lives of Torontonians as a whole. “We completely believe that in the next few years, we will see AI being integrated in every part of our lives, because there is a huge potential,” said Vedula.

“[AI] is already involved in making our lives easier.”

Where computers and clinics intersect

Raw Talk Podcast hosts expert panel discussions about AI’s role in healthcare


Experts in medicine, academia, and industry explored the promises and perils of the applications of artificial intelligence (AI) in health care during panel discussions with the Raw Talk Podcast on May 7. The event was organized by graduate students of U of T’s Institute of Medical Science.

The two panels, collectively named “Medicine Meets Machine: The Emerging Role of AI in Healthcare,” aimed to cut through sensationalism and clarify misconceptions about the growing field of study.

“On one hand, it seems like everyone has heard about [AI],” said Co-executive Producer Grace Jacobs. “But on the other hand, it seems like there’s a lot of misunderstanding and misconceptions that are quite common.”

How AI is used in health care

While discussing the reality of AI, several panelists emphasized that it should be viewed and treated as a tool. “It is statistics where you don’t have to predefine your model exactly,” said Dr. Jason Lerch of the University of Oxford.

Other speakers agreed that AI is an expansion of — or a replacement for — traditional statistics, image processing, and risk scores, as it can provide doctors with more robust and accurate information. However, final health care recommendations and decisions remain in the hands of doctors and patients.

“You always need a pilot,” said Dr. Marzyeh Ghassemi, a U of T assistant professor of computer science and medicine.

But what advantages can this tool provide? Ghassemi thinks it can assimilate clues from a wider range of patients’ conditions to predict treatment outcomes, replacing the experience-based intuition that doctors currently rely on.

Speaking on her time in the Intensive Care Unit as an MIT PhD student, Ghassemi said, “A patient would come in, and I swear they would look to me exactly the same as prior patients, and the… senior doctors would call it. They would say, ‘oh, this one’s not going to make it. They’re going to die.’ And I would say, ‘Okay… why?’ And they said, ‘I’m not sure. I have a sense.’”

“They used different words — gestalt, sense — but they all essentially said the same thing. ‘I just — I have a sense.'”

Doctors develop this sense by seeing many cases during their training, but they can intuit only the cases that they have personally experienced; AI algorithms can potentially understand many more cases by drawing on a wider dataset.

Accessing those cases requires access to patient data, and access to data requires conversations about consent and privacy. Ghassemi and Dr. Sunit Das, a neurosurgeon at St. Michael’s Hospital and Scientist at the Keenan Research Centre for Biomedical Science, said that “de-identification” — the removal of information that can be traced back to individual identities — protects privacy.
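As a minimal sketch of what de-identification can look like in practice, assuming hypothetical field names rather than any real dataset discussed at the event, direct identifiers are dropped and the patient ID is replaced with a one-way pseudonym before a record is shared:

# Minimal, hypothetical sketch of de-identification: strip fields that can be traced
# back to an individual and pseudonymize the record ID before sharing for research.
import hashlib

DIRECT_IDENTIFIERS = {"name", "health_card_number", "address", "phone"}

def deidentify(record: dict, salt: str = "project-specific-salt") -> dict:
    """Drop direct identifiers and replace the patient ID with a one-way hash."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "patient_id" in cleaned:
        cleaned["patient_id"] = hashlib.sha256(
            (salt + str(cleaned["patient_id"])).encode()
        ).hexdigest()[:12]
    return cleaned

record = {"patient_id": 104, "name": "Jane Doe", "age": 67,
          "diagnosis": "pneumonia", "phone": "555-0199"}
print(deidentify(record))  # identifiers removed, ID pseudonymized, clinical data kept

As the panelists went on to note, removing identifiers in this way reduces, but does not eliminate, the risk that records could be re-identified.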

Large de-identified datasets from the United States and the United Kingdom are available for AI research, but generally, Canada lags behind these countries in making health data available for this purpose.

Dr. Alison Paprica, Vice-President of Health Strategy and Partnerships at the Vector Institute, agreed that data should be used for research, but argued that de-identification alone does not eliminate risk.

“You’re not just giving a dataset to anybody,” she said. “You’re giving a dataset to people who are extremely skilled at finding relationships and patterns and maybe piecing together information in ways that most people couldn’t. So I think there’s going to be heightened sensitivity around re-identification risk.”

Society must manage this risk and balance it against the benefits. “How do we balance that?” Paprica asked. She suggested that consulting all involved stakeholders could help strike that equilibrium.

Advice for scientists aiming to use AI in their research

So what advice did the panelists have for scientists hoping to harness the power of AI in their own research?

Ghassemi stressed the importance of knowing what you’re doing: researchers have created many tools that make AI research easy to implement, but conscientious scientists need to know the statistical and training principles behind the methods.

“If you’re not aware of how these things are trained,” she said, “it’s really easy to misuse them. Like, shockingly easy to misuse them.”

Other panelists advised users to take care when choosing data to train the algorithms. “A learning algorithm can’t overcome bad data that goes in, or can’t completely overcome it,” said Lerch.

Moderator Dr. Shreejoy Tripathy summed up a key takeaway on applying AI to health care: “Understand your data… And understand your algorithms.”

Opinion: $100 million donation cements U of T’s global leadership

Recent donations to AI and biomedical research propel U of T’s innovation field toward greatness


U of T’s plan to build a new 750,000-square-foot innovation research complex exhibits its commitment to artificial intelligence (AI) and biomedical research leadership, which will greatly enrich our collective university experience. This breakthrough, courtesy of the university’s recent $100 million donation from Gerald Schwartz and Heather Reisman, firmly anchors its leading status in Canada and its rising position in the world.

After hearing U of T’s plans to build hubs to stimulate innovation, the billionaire couple announced the largest donation the university has ever received at a press conference on March 25 to support these ambitions. The Schwartz Reisman Innovation Centre, to be located at the corner of College Street and Queen’s Park, will explore the interplay of technology, culture, and society.

Philanthropic gifts such as these support universities both by expanding existing innovation projects and by launching new institutes in areas such as machine learning, biomedicine, and robotics. At U of T, recent gifts, including $6.7 million from TD toward data analytics, health care, and behavioural economics, and $20 million from the Labatt family for research on the biological causes of depression, strengthen the university’s potential impact on innovation both domestically and internationally.

Importantly, the $100 million donation firmly ties U of T with the word “innovation,” as it is the largest donation ever made in the Canadian innovation sector. This donation will likely go down among the most noteworthy advancements in Canada’s innovation history. Other notable programs include CanadaHelps, the country’s largest non-profit platform for donating and fundraising with a focus on innovation, which surpassed $500 million in 2015, and Google Canada, which launched a $5 million prize for non-profit innovation. However, these influences are not as profound as that of the single $100 million donation. This is because the Schwartz Reisman Innovation Centre will constantly remind future generations of scholars, entrepreneurs, and philanthropists of the spirit of innovation, and of the university’s commitment to a brighter future.

The emphasis placed on innovation in AI and biomedicine by U of T and by Schwartz and Reisman reaffirms the significance of this gift. The donation continues a remarkable industry-wide outburst of philanthropy for AI and biomedical research at prominent universities around the world. For example, in October, the single largest donation of $350 million USD to the Massachusetts Institute of Technology prompted its commitment to dedicate $1 billion USD to research into the rapid evolution of computing and AI. That same month, the University of Oxford’s Future of Humanity Institute received a donation of approximately $17.6 million USD to foster research in advanced technology. In July 2017, a $10 million USD endowment to the Stanford Cancer Institute funded advanced research in cancer cell therapy, a pioneering cancer treatment. Six months earlier, Carnegie Mellon University had received a $250 million USD gift to fund a new robotics institute.

Colloquially, the term ‘AI’ describes machines that mimic human cognitive functions such as learning and problem-solving. Biomedicine, meanwhile, applies biological and physiological principles to clinical practice, and it has been the dominant health system for more than a century. Historically, innovations have been meant to facilitate human wellbeing.

We are in the midst of a revolution focused on AI and biomedical systems. Given the significance of such research, the industrial evolution now underway shoulders the responsibility of improving every facet of human life, from prolonging lifespans to relieving low-wage workers of heavy labour.

Schwartz and Reisman’s donation cements U of T’s position as a top educational institution in the innovation industry. Thanks to generous recent donations, U of T is continuing to shift its focus to the innovation and technology spheres. The donation also signals U of T’s rising and decisive role in the current revolution and its ambition to work collaboratively with other influential institutions to lead innovation.

U of T undergraduate co-wins prestigious research award at AIES Conference

Inioluwa Deborah Raji awarded best paper for detecting facial recognition bias in Amazon technology


Amazon’s facial recognition technology may be misidentifying dark-skinned women, according to U of T Engineering Science undergraduate Inioluwa Deborah Raji and Massachusetts Institute of Technology Media Lab research assistant Joy Buolamwini. This finding helped Raji and Buolamwini win “best student paper” at the Artificial Intelligence, Ethics, and Society (AIES) Conference in Honolulu, Hawaii. Held in January, the prestigious conference was sponsored by Google, Facebook, Amazon, and the like.

Their paper, which caught the Toronto Star’s attention, was a follow-up on an earlier audit by Buolamwini on technology from Microsoft, IBM, and Face++, a facial recognition startup based in China.

Origins of the research

Buolamwini’s earlier study, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” investigated the accuracy of artificial intelligence (AI) systems used by the three technology firms for facial recognition. Then-Microsoft Research computer scientist Timnit Gebru co-authored the paper.

Raji wrote that after reading about Buolamwini’s experiences “as a black woman being regularly misgendered by these models,” she wondered if Buolamwini’s personal experience would hold true for a larger dataset containing samples of other dark-skinned women. This proved to be the case in the final analysis.

According to Raji, “Gender Shades” uncovered “serious performance disparities” in software systems used by the three firms. The results showed that the software misidentified darker-skinned women far more frequently than lighter-skinned men.

In an email to The Varsity, Raji wrote that since the release of Buolamwini and Gebru’s study, all three audited firms have updated their software to address these concerns.

For the paper submitted to the AIES Conference, Raji and Buolamwini tested the updated software to examine the extent of the change. They also audited Amazon and Kairos, a small technology startup, to see how the companies’ adjusted performance “compared to the performance of companies not initially targeted by the initial study.”

At the time of Raji and Buolamwini’s follow-up study in July, Raji wrote that “the ACLU [American Civil Liberties Union] had recently reported that Amazon’s technology was being used by police departments in sensitive contexts.”

Amazon denied that bias was an issue, saying that it should not be a concern for their “partners, clients, or the public.”

Raji and Buolamwini’s study showed evidence to the contrary. “We found that they actually had quite a large performance disparity between darker females and lighter males, not working equally for all the different intersectional subgroups,” wrote Raji.

Amazon’s response to the study

In a statement sent by Amazon’s Press Center to The Varsity, a representative wrote that the results of Raji and Buolamwini’s study would not be applicable to technologies used by law enforcement.

Amazon wrote that the study’s results “are based on facial analysis and not facial recognition,” and clarified that “analysis can spot faces in videos or images and assign generic attributes such as wearing glasses,” while “recognition is a different technique by which an individual face is matched to faces in videos and images.”

“It’s not possible to draw a conclusion on the accuracy of facial recognition for any use case – including law enforcement – based on results obtained using facial analysis,” continued Amazon. “The results in the paper also do not use the latest version of Rekognition and do not represent how a customer would use the service today.”

In a self-study using an “up-to-date version of Amazon Rekognition with similar data downloaded from parliamentary websites and the Megaface dataset of 1M images,” explained Amazon, “we found exactly zero false positive matches with the recommended 99% confidence threshold.”

However, Amazon noted that it continues “to seek input and feedback to constantly improve this technology, and support the creation of third party evaluations, datasets, and benchmarks.” Furthermore, Amazon is “grateful to customers and academics who contribute to improving these technologies.”

The pair’s research could inform policy

Raji wrote that while it’s tempting for the media to focus on the flaw in Amazon’s software, she thinks that the major contribution of her paper is in helping to uncover how researchers can effectively conduct and present an audit of an algorithmic software system to prompt corporate action.

“Gender Shades introduced the idea of a model-level audit target, a user-representative test set, [and] a method for releasing results to companies called Coordinated Bias Disclosure,” wrote Raji.

In other words, Raji and Buolamwini’s research showed an effective way for companies and policymakers to investigate and communicate a problem in software systems and take action.

Most importantly, wrote Raji, the studies highlight the need for researchers to evaluate similar software models “with an intersectional breakdown of the population being served.”
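As a rough illustration of what such an intersectional breakdown means, an audit reports accuracy separately for each subgroup, for example by skin type and gender, rather than one overall number. The records below are made-up placeholders, not data from either study.

# Illustrative sketch of an intersectional audit: report accuracy per subgroup
# (skin type x gender) instead of a single overall figure. Data here is made up.
from collections import defaultdict

predictions = [
    # (true_gender, predicted_gender, skin_type)
    ("female", "male",   "darker"),
    ("female", "female", "darker"),
    ("male",   "male",   "darker"),
    ("female", "female", "lighter"),
    ("male",   "male",   "lighter"),
    ("male",   "male",   "lighter"),
]

totals, correct = defaultdict(int), defaultdict(int)
for true_label, predicted, skin in predictions:
    group = (skin, true_label)
    totals[group] += 1
    correct[group] += int(true_label == predicted)

for group in sorted(totals):
    print(f"{group[0]:>7} {group[1]:>7}: {correct[group] / totals[group]:.0%} accuracy")

A single overall accuracy for these six records would hide the fact that one subgroup fares noticeably worse than the others.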

Department of Engineering introduces artificial intelligence minor and certificate

The new program will be available to students in January


The Faculty of Applied Science & Engineering’s new Artificial Intelligence (AI) minor and certificate programs will be available for enrolment by students in the Core-8 and Engineering Science programs in January.

Students are required to fulfil three full course equivalents (FCE) to complete the minor, while students enrolled in the certificate program must complete 1.5 FCEs. Since a few of the courses required for the program fall outside the scope of students’ main discipline, some students may need to take extra courses to complete the requirements.

Students who complete the minor or certificate will receive a notation on their transcript.

Professor Jason Anderson from the Department of Electrical & Computer Engineering, a key figure behind the program, explains that all students will be required to take one foundational course, as well as courses in data structures and algorithms relevant to AI and machine learning.

Students enrolled in the certificate program can choose between traditional AI or machine learning for specialization. Students in the minor will learn about both and choose an additional area of interest to specialize in, such as computer vision or natural language processing.

Anderson explains that machine learning is one aspect of AI. In traditional AI, decision-making is encoded in rules written by humans; in machine learning, computers use and learn from data to make decisions.

“The computer is actually trained to recognize images in different categories. In traditional AI, that’s more encoded in rules.”
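As a small sketch of the distinction Anderson describes, illustrative only and not course material, the snippet below contrasts a hand-written rule with a model that learns a similar decision from labelled examples:

# Illustrative contrast: a rule encoded by a human ("traditional AI") versus a
# model trained on labelled data (machine learning). Example data is made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "meeting agenda attached",
          "free money click now", "lunch tomorrow?"]
labels = ["spam", "not spam", "spam", "not spam"]

# Traditional AI: the decision logic is written down explicitly as rules.
def rule_based(text: str) -> str:
    return "spam" if any(word in text for word in ("free", "prize", "win")) else "not spam"

# Machine learning: the decision logic is learned from the labelled examples.
vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(emails), labels)

test = "claim your free prize"
print(rule_based(test))                                # "spam", by the written rule
print(model.predict(vectorizer.transform([test]))[0])  # "spam", learned from data

The first approach only ever knows the rules it was given; the second picks up patterns from whatever data it is shown.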

“Students who take the certificate or minor will have hands on experience applying AI and machine learning techniques to real engineering problems,” says Anderson. In addition, students will be exposed to the ethical questions surrounding AI technology.

While there is currently no specific Professional Experience Year Co-op (PEY) opportunity for the AI minor and certificate, Anderson says that many students are already working with AI to some extent during their PEY.

Anderson also notes that AI ties in with other engineering disciplines in several ways. For instance, AI technologies can be used by civil engineers to understand traffic patterns or by chemical engineers in drug discovery.

In his own field, Anderson notes that AI technology is being used in computer-aided design tools that “create complicated digital circuits” in order “to produce higher quality designs, for example, that use less silicon area, that use less power, operate faster, to make predictions.”

“We want students who can research in this area but also have applied AI techniques,” says Anderson. Through this program, he hopes to foster engineering talent that will lead students to create startups, develop new AI technology, or further their education through graduate studies.

Can artificial intelligence transform health care?

U of T researchers are at the forefront of artificial intelligence applications


Artificial intelligence is having a zeitgeist moment. From the hosts of Westworld to the Turing-tested humanoid Ava of Ex Machina, cinema exploits paranoid visions of its possibilities and horrors. Siri, Cortana, and self-driving cars are perhaps the most popular, practical examples of the technology in use.

Harnessing the power of artificial intelligence (AI) provides enticing opportunities that could transform the medical field. 

In September, U of T Professor Emeritus Geoffrey Hinton and President Emeritus David Naylor both published articles in the Journal of the American Medical Association on deep learning and its potential to transform medicine. 

Hinton, who is also a vice-president and engineering fellow at Google, distilled intricate aspects of deep learning in his article, while Naylor explored prospects for machine learning in health care in his. 

At U of T, Canada Research Chair in Robots for Society and Associate Professor Goldie Nejat and her team develop socially assistive robots to aid seniors, and Professor Shahrokh Valaee uses AI and artificial X-rays to pinpoint diseases.

“I believe in artificial intelligence in the long run,” said Dr. Frank Rudzicz. “I believe there is a future out there where you’ll have something like Echo in your house and Echo itself could diagnose you.”

Rudzicz is a scientist at the International Centre for Surgical Safety of the Li Ka Shing Knowledge Institute at St Michael’s Hospital and also a faculty member at the Vector Institute for Artificial Intelligence.

He is among a number of researchers working to use AI to transform the practice of medicine. 

At the Speech and Oral Communication lab (SPOClab), Rudzicz’s team of researchers investigates machine learning and natural language processing for use in health care practices.

Their aim is to use data to produce software that helps individuals with disabilities communicate. 

“We’re interested in the whole mechanism of speech and language processing. From the acoustics in speech, to how it is physically produced by the articulators, to how it’s produced in the brain,” said Rudzicz. 

In the short term, Rudzicz sees the speech recognition technology as a kind of Google search for physicians, providing them with relevant medical information on their patients’ histories.

It could help reduce the clerical burden for physicians by providing a transcription of communication with patients and integrating that with their electronic medical record. 

In the long term, with growing knowledge on the illness-related effects on the articulation of speech and speech patterns, the technology could be used as an end-to-end package to diagnose diseases like Alzheimer’s, Parkinson’s, and cerebral palsy, with some human oversight.

Despite such endeavours, there remain several hurdles that need to be overcome prior to introducing machine learning applications to a clinical setting.

Rudzicz warned against looking into the magical crystal ball for predicting the future, which “can be fun but it can also be wildly off-base.” 

Still, several concrete hurdles stand between such technology and the market.

For instance, accessing datasets that are used to develop the machine learning programs can be an expensive proposition for AI developers. 

These datasets are expensive to obtain but essential for training machine learning programs: a model is built by providing it with samples of input variables along with feedback on its outputs. Large and diverse datasets are also critical to avoid biases.

The Vector Institute obtains a large dataset of Ontarians through a collaboration with the Institute for Clinical Evaluative Sciences. Rudzicz explained that obtaining datasets is only the first step. The next steps would be to build an AI model, which would undergo rigorous clinical trials.

The final step is the buy-in from communities of health professionals who use the technology.

These stages are critical in developing an accurate machine, which is especially significant in medical practice. 

Take Watson — a computing system developed in IBM’s Deep QA project — whose successes and failures attest to the pitfalls of machine learning.

At first, consumers hailed Watson as a potential breakthrough in cancer treatment, but recent news on Watson has been far from complimentary, citing inaccurate diagnoses, unsafe treatment advice, and general dissatisfaction from doctors. 

On the other hand, recently published studies that use deep learning and deep neural networks to identify retinal disease, pneumonia, and skin cancer show hopeful results. Deep neural networks performed on par with a group of 21 dermatologists.

Though AI is still in its infancy, U of T is in a position to revolutionize how machine learning is used in health care.

Rotman hosts AI industry leaders for machine learning conference

Alibaba president, Sanctuary AI founder among speakers discussing the future, impacts of technology


The Rotman School of Management’s Creative Destruction Lab hosted 24 of the world’s leading artificial intelligence (AI) researchers, business leaders, economists, and thinkers on October 23. The fourth annual Rotman conference, “Machine Learning and the Market for Intelligence,” featured discussions of AI and the impact it will bring to the future of business, medicine, and numerous other industries.

Ajay Agrawal, the founder of the Creative Destruction Lab, and Shivon Zilis, a project director at Tesla and Neuralink, co-chaired the 11.5-hour event. Among the speakers were Michael Evans, president of Alibaba, the world’s largest online retailer; Mark Carney, Governor of the Bank of England; and U of T Professor Emeritus Geoffrey Hinton. Despite their unique perspectives, one message was clear: machine intelligence will revolutionize how we think about solving problems.

The event began with talks from leaders in the international business sector on why industries worldwide are rapidly adopting machine intelligence into their business practices. Kevin Sneader, Global Managing Partner at McKinsey & Company, explained how monumental AI will be for optimization and efficiency, and said he expects “mainstream absorption” of AI within the next decade. Evans showcased Alibaba’s automated distribution facilities, powered by intelligent roving robots, and its multitiered corporate strategy to adopt AI.

The speakers made it clear that businesses see the huge potential upsides associated with smart automation, but none discussed the issues that AI adoption may bring to the labour force or customer data responsibility.

Many industry pioneers dream of closing the gap between human and artificial intelligence, and they want you to know that the results don’t have to parallel dystopian sci-fi. Suzanne Gildert, CEO of Sanctuary AI, is building sentient, fully autonomous robots powered by the next generation of AI.

The artist-turned-technologist said that designing the first generation of synths with realistic human bodies will allow them to interface with our human world. Debates around the treatment, regulation, and integration of robots into human society remain unresolved, but Gildert hopes that AI will push humankind to new heights. Citing the possibilities of creating hyper-empathetic, creative, and intelligent minds, Gildert emphasized her optimism for the future of AI.

She ended her talk with a fascinating, albeit slightly terrifying, demo of a robotic clone of herself, complete with a matching silicone body and voice capabilities.

Perhaps one of the more sobering talks of the day was given by theoretical physicist and former president of the Santa Fe Institute Geoffrey West, who discussed the “socioeconomic entropy” that comes with chasing innovation. Despite the optimism of other speakers and the crowd in light of continued innovation and growth, West cast doubt over humanity’s ability to support sustained accelerated innovation.

Based on his research into the scale of companies and human networks, he suggested an underlying futility to the aspirations of the field. This alternate perspective brought a human context back to the event; if we don’t understand how we grow, we are doomed to collapse under our own weight.


The lower floors of the event hosted Toronto AI companies, who demonstrated their latest and greatest tech. Dozens of startups and corporations presented their efforts to integrate AI into solutions for specific industry problems, highlighting the extent of AI adoption.