Quantifying the climate crisis: how changes could impact road maintenance

U of T instructor Dr. Piryonesi on studying the climate using probabilistic models over deterministic models

Road management, the climate crisis, and machine learning may not seem connected, but they are to Dr. Madeh Piryonesi, a University of Toronto civil engineer who defended his PhD this year.

This June, one of his papers, co-authored with Professor Tamer El-Diraby of U of T’s Department of Civil & Mineral Engineering and titled “A Machine-Learning Solution for Quantifying the Impact of Climate Change on Roads,” won the Moselhi Best Paper Award at the Canadian Society for Civil Engineering’s annual conference. Piryonesi created a model to predict how roads will deteriorate as the climate changes, and implemented it as an online tool that will be accessible to policymakers.

Piryonesi’s research has been funded by the Natural Sciences and Engineering Research Council and the Ontario Good Roads Association.

How roads may be impacted by the climate crisis

In Piryonesi’s model, users are treated to a visual interface where they can input a road’s name and see that road pop up on Google Maps. They can then enter the parameters for a future climate — such as an increase in temperature and precipitation — and see the projected future deterioration of the road.

The model can make predictions for roads in many locations, thanks to the wealth of data Piryonesi had access to. His machine-learning algorithms were trained on data provided by the Long-Term Pavement Performance program, which is managed by the US Federal Highway Administration. The program stores data — including traffic and weather information — on more than 2,500 road sections across Canada and the US, and dates back over 30 years.

“Using this very well-spread data kind of makes sense for climate change analysis,” said Piryonesi to The Varsity.

The model’s predictions depended strongly on location. He tested the tool on roads in both Texas and Ontario. While it projected that, in a certain climate-change scenario, roads in Texas would be badly hit, it actually predicted that some roads in Ontario would fare better with a change in climate than without.

Piryonesi stressed that this doesn’t mean the climate crisis is good for Ontario roads, only that, under the model’s specific assumptions, Ontarian roads should not be badly damaged.

Nevertheless, the model highlights how the climate crisis varies by region.

The theory behind Piryonesi’s work

Many models already exist for predicting road quality in order to aid municipal governments in maintaining their infrastructure. However, Piryonesi diverged from most previous work in two ways.

While existing models use a variety of techniques, the use of machine learning in road modelling is relatively new. Tailoring these models to incorporate changes in climate is also novel.

Piryonesi explained that the reason this interesting combination is useful is that change in climate is inherently a stochastic process — that is, it involves randomness.

According to Piryonesi’s paper, this puts deterministic models, which spit out a single value, at a disadvantage compared to models that can consider a range of possibilities and predict their likelihoods. Machine learning falls into the latter category.

At its core, Piryonesi’s work is based on a decision tree algorithm. In everyday life, decision trees — a kind of flowchart — let us visualize how outcomes or costs depend on sequences of events that take place.

In machine learning, decision tree algorithms are fed existing data, learn from it, and construct a decision tree that can make predictions about unseen data. To compensate for the low accuracy of any single decision tree, Piryonesi’s model also uses ‘bagging,’ a process in which hundreds or thousands of trees are each trained on a random resample of the data and then ‘vote’ on every prediction.

This approach can produce predictions that are not single numbers. “If our model has five outcomes, being the road staying in good condition, medium, and so on,” said Piryonesi, “the tool can give you, for example, a probability of 98 per cent good and the two per cent being in the other conditions.” Deterministic models can’t make these probabilistic predictions.
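
Piryonesi’s own code is not published, but a minimal sketch of the general technique he describes, bagged decision trees that output class probabilities, could look like the following Python. The feature names, the synthetic data, and the printed numbers are all invented for illustration; this is not his model.

    import numpy as np
    from sklearn.ensemble import BaggingClassifier

    rng = np.random.default_rng(0)

    # Hypothetical training data: each row describes a road section by traffic
    # load, mean temperature, and annual precipitation (scaled to 0-1); the
    # label is its later condition (0 = good, 1 = medium, 2 = poor).
    X = rng.random((500, 3))
    y = np.digitize(X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2], [0.6, 1.2])

    # Bagging: many trees are trained on bootstrap resamples of the data and
    # their predictions are pooled. BaggingClassifier's default base learner
    # is a decision tree.
    model = BaggingClassifier(n_estimators=500).fit(X, y)

    # Unlike a deterministic model, the ensemble reports a probability for
    # each condition category rather than a single number.
    scenario = [[0.6, 0.8, 0.7]]          # a hypothetical future-climate input
    print(model.predict_proba(scenario))  # e.g. [[0.02 0.93 0.05]]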

However, Piryonesi is aware that some users do not see this as advantageous. “Most customers or most municipalities that we are working with are using deterministic tools,” he commented. “The problem is, they don’t get the notion of probability and probabilistic things; they want one number.”

In Piryonesi’s opinion, industries and academia alike should better communicate the fact that everything in the real world is probabilistic.

“Having a probability doesn’t mean that it’s bad; [only] that we are not sure,” he said.

The impact of the tool’s findings

The findings were interesting to Piryonesi for two reasons. For one, understanding how badly roads are affected by changes in climate compared to other types of infrastructure can inform governments on what infrastructure demands the most attention and funding.

Climate justice also interests him. He sees value in determining quantitatively which regions would be hit worst by the climate crisis. “I think this could be a good basis for carbon pricing, for tax.”

Although change in climate was not originally his area of expertise, he was drawn to it because he saw the need for more evidence-based research.

“Politicians, men of religion, everyone, people on the street, they talk about [the climate crisis]. And oftentimes they have anecdotes [but] they’re not super accurate,” he said. “So I thought maybe I would want to touch a little on this [topic].”

Editor’s Note (November 10, 8:35 pm): The article has been updated to note the contributions of Professor Tamer El-Diraby and organizations that provided Piryonesi with funding.

UTSG: Computer Graphics Job Fair

Want to know what it’s like to make your favourite movies, games or design tools? Computer graphics companies and studios are looking for talented developers like you!

Drop by the U of T Computer Graphics Job Fair to learn more about the exciting opportunities available in the computer graphics industry! This is your chance to chat with representatives from top companies/studios in the field!

Lunch will also be provided to those who sign up on Eventbrite!

Confirmed companies:
-Autodesk
-Interaptix
-MESH Inc
-Pixomondo
-SideFX
-SpinVFX
-Ubisoft
-More to be announced!

Date: October 15th, 2019
Time: 11am-2pm
Location: BA3200 (Bahen Centre for Information Technology)

Remember to register to get the latest updates, notifications and perks!

If you’re interested in representing your company at the job fair, please contact us at utcomputergraphics@gmail.com

U of T students win first place in AutoDrive Challenge for engineering self-driving car

Victory marks second consecutive year that aUToronto came first in the international competition for car named Zeus

A team of U of T engineering and computer science students won first place in an international self-driving car competition in Michigan in July. The group, named aUToronto, pitted its vehicle Zeus against those from seven other North American universities.

This victory marks the second consecutive year that the team has won first place in the AutoDrive Challenge, as it did last year against the same competitors. The competition, run by the Society of Automotive Engineers (SAE) International and General Motors, will hold its final round for this three-year cycle in 2020.

The team scored first in eight categories

aUToronto won first place in eight of nine categories this year, defeating competitors from the University of Waterloo, Michigan State University, Michigan Tech University, Kettering University, Virginia Tech, North Carolina A&T State University, and Texas A&M University.

For its ability to recognize traffic signs, such as speed limits and ‘do not enter’ signs, as well as respond appropriately to them, aUToronto’s Zeus won first place in the Traffic Control Sign Challenge.

aUToronto’s car also placed first in the Pedestrian Challenge, which tested cars on their ability to wait for pedestrian replicas to completely cross a road before proceeding, as well as in the MCity Challenge, which required the vehicles to navigate around obstacles such as a tunnel and railroad crossing.

“Correctly detecting and classifying all the traffic lights and signs was more difficult than we anticipated,” reflected aUToronto Team Lead Keenan Burnett in an email to The Varsity.

The team’s approach to the problem, which utilized deep neural networks, or systems of artificial neurons, required substantial tuning and data collection to work effectively.
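
The article does not detail the team’s networks, so the following Python sketch only illustrates the kind of model involved: a small convolutional network that classifies cropped images of detected signs. The architecture, image size, and number of sign classes are assumptions.

    import torch
    import torch.nn as nn

    class SignClassifier(nn.Module):
        # A toy convolutional network; real systems are far larger and are
        # trained on large collections of labelled road imagery.
        def __init__(self, num_classes: int = 10):  # assumed number of sign types
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 16 * 16, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: a batch of 64x64 RGB crops of detected signs
            return self.head(self.features(x).flatten(1))

    model = SignClassifier()
    crops = torch.randn(8, 3, 64, 64)  # hypothetical detections from a camera
    logits = model(crops)              # one score per sign type
    print(logits.argmax(dim=1))        # predicted sign class for each crop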

Zeus further secured first place in the Intersection Challenge and tied for first in the Mapping Challenge.

One key to the team’s success was the realization that “relying only on GPS/IMU for positioning [could have been] risky.”

“We opted to integrate a more advanced localization software that uses [a laser system called] LIDAR instead,” wrote Burnett. “This proved to be one [of] the keys to our success at the competition.”
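
The article does not name the localization software, but scan matching is a common way to position a vehicle with LIDAR. As a rough sketch of the idea only, the iterative closest point (ICP) algorithm from the Open3D library can refine a coarse pose estimate by aligning the current scan to a prebuilt map; the distance threshold below is an assumed tuning value.

    import open3d as o3d

    def localize(scan_points, map_points, initial_pose):
        """Refine a rough vehicle pose by aligning a LIDAR scan to a map."""
        scan = o3d.geometry.PointCloud()
        scan.points = o3d.utility.Vector3dVector(scan_points)  # Nx3 array
        ref = o3d.geometry.PointCloud()
        ref.points = o3d.utility.Vector3dVector(map_points)
        result = o3d.pipelines.registration.registration_icp(
            scan, ref,
            max_correspondence_distance=1.0,  # metres; assumed tuning value
            init=initial_pose,                # rough 4x4 pose, e.g. from odometry
        )
        return result.transformation          # refined 4x4 pose matrix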

aUToronto also scored first in the categories of Social Responsibility Report, Social Responsibility Presentation, and Concept Design Presentation. The team placed second in Concept Design.

Origins of aUToronto and Zeus

“The team’s inception traces back to when SAE was soliciting applications from universities to compete in their new self-driving competition,” wrote Burnett.

“The idea is that they would select the [eight] top university applications based on the quality of the proposals, the backing of the university, and facilities that would be made available to students.”

Cristina Amon, U of T’s Dean of Engineering at the time, asked Professor Tim Barfoot to submit a proposal in 2017, according to Burnett. Professors Barfoot and Angela Schoellig, from the U of T Institute for Aerospace Studies (UTIAS), partnered to submit the proposal and were selected as one of the eight competing university teams.

Burnett, who was then applying to be a graduate student at UTIAS, asked Barfoot if a position was available to run the team. He was hired after an interview in April.

“From then, I built the year 1 team up from scratch,” wrote Burnett. “We hired a small set of students with excellent technical and leadership skills to head the sub-teams for the first year of the competition.”

Burnett then let the sub-team leads hire their own sub-team members. New members have been recruited roughly every four months since the fall of 2017.

Around 100 students are on the team, according to Burnett. Undergraduates comprise 90 per cent of the team, while graduate students make up the remaining 10 per cent. Students primarily study electrical engineering, mechanical engineering, engineering science, and computer science.

In terms of resources, aUToronto received a Chevrolet Bolt Electric Vehicle and an Intel compute server from the competition’s organizers. The team then acquired sensors and developed infrastructure around the Bolt to turn it into a fully self-driving car.

“The turnaround between receiving the vehicle and shipping it off to compete in the Year 1 competition was just 6 months,” wrote Burnett.

The team then competed in the second round of the AutoDrive Challenge earlier this year.

“We still have a third year of the competition coming up,” wrote Burnett. “It will be held at the Ohio Transportation Research Center. We anticipate needing to handle dynamic [challenges] and [drive] at much higher speeds.”

UTSG: Hack the 6ix 2019

Join 400 other hackers from across North America in a 36-hour hackathon to close out the summer! An application is required.

Flaw in WhatsApp exploited to target human rights lawyer, finds Citizen Lab

Lawyer has been embroiled in lawsuit against NSO Group, controversial Israeli technology firm

On May 12, a London-based human rights lawyer received peculiar video calls on his WhatsApp account while visiting Sweden.

Concerned by the calls arriving at such odd hours of the morning, he reached out to cyber specialists at U of T’s Citizen Lab to investigate.

The Citizen Lab is a multidisciplinary research institute located at the Munk School of Global Affairs and Public Policy. The lab explores issues related to cybersecurity, surveillance, and digital censorship.

The lawyer, who remains anonymous due to fears of retaliation for speaking out, suspects foul play given his involvement in a civil lawsuit against NSO Group, an Israeli technology firm.

Foreign governments, including Saudi Arabia, Mexico, and the United Arab Emirates, have allegedly used NSO Group’s products to spy on journalists and political dissidents, including a critic of Saudi Arabia living in Canada.

According to reports from the Financial Times, the spyware targeting the lawyer’s phone had digital characteristics typical of NSO Group products.

Citizen Lab Senior Researchers John Scott-Railton and Bill Marczak led the investigative team that discovered WhatsApp’s vulnerability.

In an interview with The Varsity, Scott-Railton said he “observed a case where it looked like there was an attempt to target that lawyer’s phone with this novel attack, which would have happened over WhatsApp through a missed call.”

By exploiting the app’s vulnerability, NSO Group’s Pegasus spyware could enter a target’s iPhone or Android device through WhatsApp’s call function. The malicious code could then extract private information such as text messages and call histories, regardless of whether the target answered the call. The spyware can also collect new data by turning on the device’s camera or microphone.

WhatsApp’s response

WhatsApp engineers worked to patch the flaw as quickly as possible once they became aware of it. When the fix was ready, the company urged its 1.5 billion users to update their apps.

“WhatsApp encourages people to upgrade to the latest version of our app, as well as keep their mobile operating system up to date, to protect against potential targeted exploits designed to compromise information stored on mobile devices,” WhatsApp said in a public statement.  

The company also informed officials at the United States Department of Justice and issued a Common Vulnerabilities and Exposures notice to alert cybersecurity experts.

Scott-Railton praised WhatsApp for acting swiftly after discovering the vulnerability. “The way that WhatsApp has responded to this has been, I think, quite positive,” he said, noting how WhatsApp contacted a number of human rights organizations, which are common targets of the Pegasus spyware, before publicly announcing the security vulnerability.

According to Scott-Railton, this was an “unprecedented” move by a social media company and signals that it “felt there was something very wrong that had been done… and they didn’t like what they saw.”

It is unclear how many people were targeted or impacted by the vulnerability. However, based on WhatsApp’s comments, Scott-Railton said it seems like “there was a problem… [which was] much larger” than the attack on the human rights lawyer alone.

NSO Group promises reform

NSO Group maintains that it partners with governments to assist with law enforcement efforts and prevent criminal activity such as terrorism.

In response to reports that its software was targeting the human rights lawyer, NSO Group said that it “would not, or could not, use its technology in its own right to target any person or organization, including this individual.”

Earlier this year, NSO Group was partially acquired by the UK-based private equity fund Novalpina Capital. When Novalpina took over, it promised to reform the company in light of recent reports of suspected abuse.  

When the acquisition occurred, Novalpina was hoping to “establish a new benchmark for transparency and respect for human rights in full compliance with the [United Nations] Guiding Principles,” said Stephen Peel, co-founder of the fund.

Scott-Railton believes that “if indeed this was NSO, it suggests that this public story about human rights abuse may not [match up] with other things that we’ve observed.”

A bigger picture

Citizen Lab has been involved in multiple investigations tracking companies that sell spyware. Earlier this year, Citizen Lab itself had been targeted by undercover agents — masked as “socially conscious investors” — for its research on NSO Group.

Scott-Railton believes this case points to a larger trend of companies selling spyware to target individuals. “I think in the long run, we won’t really understand the digital risks and challenges that we all face until we see cases where harm happens to individuals,” he said.

“It’s very disconcerting to someone who has WhatsApp on their phones when they hear that there’s some company out there that’s selling a technology to basically use that as a way onto their phones, without any interaction,” Scott-Railton said.

“It’s almost unpreventable.”

Disclosure: Kaitlyn Simpson previously served as Volume 139 Managing Online Editor of The Varsity, and currently serves on the Board of Directors of Varsity Publications Inc.

Editor’s Note (September 28, 12:17 pm): This article has been updated to reflect the author’s former and current affiliations with The Varsity.

U of T team wins top prize at KPMG’s international AI competition

Paramount AI team created device that sorts waste with 94 per cent accuracy

A team of five U of T graduate students named Paramount AI won first place in KPMG’s 2019 Ideation Challenge, a worldwide competition to develop solutions to problems facing businesses using artificial intelligence (AI). KPMG is one of the world’s top four accounting firms.

The U of T students faced off against 600 participants from top universities across nine countries, including Canada, Australia, China, Germany, Luxembourg, Italy, the Netherlands, and the United Kingdom.

The final round was held from May 10–12 in Amsterdam, where the students — Maharshi Trivedi, Nikunj Viramgama, Aakash Iyer, Vaibhav Gupta, and Ganesh Vedula — won the top prize for their innovation, which used AI to automate waste segregation.

Paramount AI’s innovative solution

The winning innovation is a sorting system able to distinguish between three different categories of waste: recycling, organic, and garbage.

Iyer, who is specializing in data analytics and financial engineering, explained that the initial prototype of the system used LED light bulbs and basic circuits to classify the waste.
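
The team’s final system is not described in technical detail, so the Python sketch below shows only one common approach to this kind of three-way image classification: transfer learning from a pretrained network. The model choice, image size, and training step (omitted here) are assumptions, not Paramount AI’s actual design.

    import torch
    import torch.nn as nn
    from torchvision import models

    CLASSES = ["recycling", "organic", "garbage"]

    # Start from a network pretrained on everyday photos and replace its final
    # layer with a three-way classifier for the waste categories. (Fine-tuning
    # on labelled waste photos would follow, and is omitted here.)
    net = models.mobilenet_v2(weights="DEFAULT")
    net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, len(CLASSES))

    image = torch.randn(1, 3, 224, 224)  # a hypothetical photo of one item
    pred = net(image).argmax(dim=1).item()
    print(CLASSES[pred])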

The five students worked continuously, with few breaks and little sleep during the three days of the competition, which came at the expense of exploring Amsterdam.

Their efforts were rewarded with confirmation that the system would be practical in real-life situations: the team completed both a financial and a market analysis of the device by the end of the competition.

The importance of waste segregation

Viramgama, who is specializing in data analytics and data science, explained that the team chose to focus on the issue of waste segregation because they were concerned about improper sorting in Toronto.

He noted that about one in three residents in Toronto contaminate the waste they place in recycling bins, and that 20 per cent of waste placed in blue recycling bins ends up in a landfill.

Since landfill space is limited, governments have been motivated to spend more on improved waste management. That increased spending may lead to higher taxes, which makes automated waste segregation a potentially significant benefit.

The U of T team tackled this issue by creating a system that accurately sorts waste about 94 per cent of the time. Current waste systems have an accuracy of only up to 74 per cent, and each percentage point of accuracy translates into significant savings in waste management spending.

The pressing need for a solution to this environmental problem, which has economic consequences, could be a reason why Paramount AI won the competition.

The other reason, explained Vedula, was that the team was “not only thinking about saving the environment, but… also trying to help businesses [maximize] profits.”

The future of Paramount AI

The next step for Paramount AI is to present their prototype to experts at KPMG’s annual AI summit in October. By then, the team hopes to further develop their model, aiming to continue increasing the accuracy of their system, while likely adding new features to increase the value of the product for potential clients.

The students currently hold the intellectual property rights to their invention. With the support of KPMG, the team is looking to commercialize their product.

They are also optimistic about the future of AI in positively shaping the lives of Torontonians as a whole. “We completely believe that in the next few years, we will see AI being integrated in every part of our lives, because there is a huge potential,” said Vedula.

“[AI] is already involved in making our lives easier.”

Where computers and clinics intersect

Raw Talk Podcast hosts expert panel discussions about AI’s role in healthcare

Experts in medicine, academia, and industry explored the promises and perils of the applications of artificial intelligence (AI) in health care during panel discussions with the Raw Talk Podcast on May 7. The event was organized by graduate students of U of T’s Institute of Medical Science.

The two panels, collectively named “Medicine Meets Machine: The Emerging Role of AI in Healthcare,” aimed to cut through sensationalism and clarify misconceptions about the growing field of study.

“On one hand, it seems like everyone has heard about [AI],” said Co-executive Producer Grace Jacobs. “But on the other hand, it seems like there’s a lot of misunderstanding and misconceptions that are quite common.”

How AI is used in health care

While discussing the reality of AI, several panelists emphasized that it should be viewed and treated as a tool. “It is statistics where you don’t have to predefine your model exactly,” said Dr. Jason Lerch of the University of Oxford.

Other speakers agreed that AI is an expansion of — or a replacement for — traditional statistics, image processing, and risk scores, as it can provide doctors with more robust and accurate information. However, final health care recommendations and decisions remain in the hands of doctors and patients.
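
As a rough illustration of that contrast, the Python sketch below puts a traditional risk score with a predefined form (logistic regression) next to a model that learns its form from the data (a random forest). The data is synthetic, and the example is not drawn from any panelist’s work.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.random((1000, 5))                     # hypothetical patient features
    # Outcome driven by an interaction that a linear score cannot capture.
    y = ((X[:, 0] * X[:, 1]) > 0.25).astype(int)

    risk_score = LogisticRegression().fit(X, y)    # form fixed in advance
    flexible = RandomForestClassifier().fit(X, y)  # form learned from the data

    patient = X[:1]
    print(risk_score.predict_proba(patient)[0, 1])  # risk from the linear model
    print(flexible.predict_proba(patient)[0, 1])    # risk from the learned model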

“You always need a pilot,” said Dr. Marzyeh Ghassemi, a U of T assistant professor of computer science and medicine.

But what advantages can this tool provide? Ghassemi thinks it can assimilate clues from a wider range of patients’ conditions to predict treatment outcomes, replacing the experience-based intuition that doctors currently rely on.

Speaking on her time in the Intensive Care Unit as an MIT PhD student, Ghassemi said, “A patient would come in, and I swear they would look to me exactly the same as prior patients, and the… senior doctors would call it. They would say, ‘oh, this one’s not going to make it. They’re going to die.’ And I would say, ‘Okay… why?’ And they said, ‘I’m not sure. I have a sense.’”

“They used different words — gestalt, sense — but they all essentially said the same thing. ‘I just — I have a sense.'”

Doctors develop this sense by seeing many cases during their training, but they can intuit only from the cases that they have personally experienced; AI algorithms can potentially learn from many more cases using a wider dataset.

Accessing those cases requires access to patient data, and access to data requires conversations about consent and privacy. Ghassemi and Dr. Sunit Das, a neurosurgeon at St. Michael’s Hospital and Scientist at the Keenan Research Centre for Biomedical Science, said that “de-identification” — the removal of information that can be traced back to individual identities — protects privacy.
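
As a toy illustration of that step, the Python sketch below drops a direct identifier and replaces a record key with a one-way hash, so records can still be linked without revealing the original number. The column names are hypothetical, and real de-identification involves much more, including dates, rare values, and combinations of quasi-identifiers.

    import hashlib
    import pandas as pd

    records = pd.DataFrame({
        "name": ["A. Patient", "B. Patient"],  # direct identifier: remove
        "health_card": ["1234-567-890", "2345-678-901"],
        "age": [61, 47],
        "diagnosis": ["glioma", "meningioma"],
    })

    deidentified = records.drop(columns=["name"]).assign(
        # Replace the identifying key with a one-way hash.
        health_card=lambda df: df["health_card"].map(
            lambda s: hashlib.sha256(s.encode()).hexdigest()[:12]
        )
    )
    print(deidentified)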

Large de-identified datasets from the United States and the United Kingdom are available for AI research, but generally, Canada lags behind these countries in making health data available for this purpose.

Dr. Alison Paprica, Vice-President of Health Strategy and Partnerships at the Vector Institute, agreed that data should be used for research, but argued that de-identification alone does not eliminate risk.

“You’re not just giving a dataset to anybody,” she said. “You’re giving a dataset to people who are extremely skilled at finding relationships and patterns and maybe piecing together information in ways that most people couldn’t. So I think there’s going to be heightened sensitivity around re-identification risk.”

Society must manage this risk and balance it against the benefits. “How do we balance that?” Paprica asked. She suggested that consulting all involved stakeholders could help strike that equilibrium.

Advice for scientists aiming to use AI in their research

So what advice did the panelists have for scientists hoping to harness the power of AI in their own research?

Ghassemi stressed the importance of knowing what you’re doing: researchers have created many tools that make AI research easy to implement, but conscientious scientists need to know the statistical and training principles behind the methods.

“If you’re not aware of how these things are trained,” she said, “it’s really easy to misuse them. Like, shockingly easy to misuse them.”

Other panelists advised users to take care when choosing data to train the algorithms. “A learning algorithm can’t overcome bad data that goes in, or can’t completely overcome it,” said Lerch.
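
As a toy illustration of Lerch’s point, in the Python sketch below a spurious feature tracks the label in the training data (imagine every sick patient having been scanned on the same machine) but not in deployment, and the model’s apparent skill evaporates. The data is synthetic.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    def make_data(n, leak):
        signal = rng.random(n)  # weak genuine signal
        y = (signal + rng.normal(0, 0.5, n) > 0.5).astype(int)
        # The spurious column tracks the label only when leak=True.
        spurious = y + rng.normal(0, 0.1, n) if leak else rng.random(n)
        return np.column_stack([signal, spurious]), y

    X_train, y_train = make_data(2000, leak=True)
    X_test, y_test = make_data(500, leak=True)   # same flaw as training data
    X_real, y_real = make_data(500, leak=False)  # flaw absent in the field

    model = RandomForestClassifier().fit(X_train, y_train)
    print(model.score(X_test, y_test))  # looks excellent on flawed data
    print(model.score(X_real, y_real))  # much worse once the flaw disappears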

Moderator Dr. Shreejoy Tripathy summed up a key takeaway on applying AI to health care: “Understand your data… And understand your algorithms.”