The field of artificial intelligence (AI) is at a critical inflection point. Over the past few years, programs like ChatGPT, DALL-E, and others have stunned the public with their capacity to produce artistic creations and work with abstract concepts. 

They have also prompted deep unease. This time around, worries about AI are not coming just from Twitter users and conspiracy theorists predicting a robot takeover. Industry leaders, including those at U of T, are almost unanimously sounding the alarm that AI could become an existential threat. 

“Perhaps there is a lot more happening here than we are currently able to understand, and maybe we’re going at a pace that’s just too fast for us to be able to grasp the implications of the current era of technological development,” wrote Monique Crichlow, executive director at U of T’s Schwartz Reisman Institute for Technology and Society (SRI), in a statement to The Varsity. 

“We don’t know for certain that AI poses an existential threat,” Crichlow noted. But she wrote it is a possibility, and the SRI urges world leaders to take that possibility seriously. 

U of T’s complicated history with AI

The SRI’s director, U of T Law Professor Gillian Hadfield, has signed two open letters calling for developers to slow down and policymakers to act before it is too late. 

As a U of T-based institute, the SRI’s message is especially notable. U of T is the university where, 11 years ago, Geoffrey Hinton and two of his graduate students achieved a breakthrough in artificial neural networks — the technology that opened up the possibility of ChatGPT and other generative AI in the first place. 

Shortly after the breakthrough, Hinton took a position at Google, which acquired the neural network technology for US$44 million. The AI industry has come to call Hinton its “godfather.”

He went on to co-found the Vector Institute, a non-profit AI research institute, in 2017. Its office sits at the corner of College Street and University Avenue, across the street from the SRI building. 

Then this past May, Hinton announced that he had left his position at Google so that he could speak freely about the risks of AI. In light of the existential threats he believes AI now poses, Hinton says he regrets how his research contributed to the technology’s development. 

“I console myself with the normal excuse: if I hadn’t done it, somebody else would have,” he told The New York Times. 

Experts call for legislation and cooperation

Along with Hadfield, Hinton is a signatory of the US-based Center for AI Safety’s 22-word statement, released on May 30 of this year. 

The statement reads, “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” 

The statement has 350 signatures from high-level players in the AI industry, including the CEOs of the research organization Google DeepMind — whose AI-based projects address challenges in areas ranging from medicine to ancient languages — and OpenAI, the AI research laboratory that created ChatGPT.

Hadfield has also signed a letter calling on AI labs everywhere to immediately pause, for at least six months, the training of all AI systems more powerful than GPT-4 — the latest version of the model behind what is commonly known as ChatGPT. The letter was released on March 22 by the Future of Life Institute, a global non-profit that seeks to prevent existential risks from technological development. As of September 23, the letter has 33,711 signatures. 

The letter paints a striking picture of the industry. “AI labs [are] locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control,” it reads.

These AI developments could make life vastly better for current and future generations, the letter acknowledges. But it warns they could also do the opposite. Steering the technology in the right direction, it urges, means implementing strong industry watchdogs and comprehensive regulatory frameworks. And that requires cooperation across AI labs, businesses, and governments. 

“I don’t think the pause letter is trying to stifle innovation or technological advancement, as some have accused,” Crichlow wrote to The Varsity. Instead, she suggested, the letter urges us to think critically about what direction humankind is headed: “I think that’s a good thing.”

How to save the human race 

One potential set of AI regulations, under Bill C-27, is currently making its way through Canada’s House of Commons. It is Ottawa’s first attempt to “comprehensively regulate” AI, writes SRI policy researcher Maggie Arai in an article on the institute’s website.

Among other provisions, the bill proposes to put the onus on developers, distributors, and managers of AI technologies to monitor and mitigate the risks their technology poses. This includes ensuring that any personal data the technology collects remains anonymized, and providing consumers with plain-language explanations of the technology’s potential risks and impacts. 

The proposed regulations are not without flaws, Arai writes. But the whole business of regulating AI suffers from more fundamental issues.

Essentially, AI development tends to advance at an exponential rate, while policymakers are infamous for their snail-like pace; regulations can scarcely hope to keep up. The technology is also inherently difficult to understand, and governments often lack the expertise to craft legislation that properly curbs potential threats.

But a reconfiguration of how governments create AI regulations could address this chasm. In a preprint, Hadfield and Jack Clark — the former policy director for OpenAI and co-founder of Anthropic — propose a market-based approach, in which developers and distributors of AI technologies would be required to purchase regulatory services for their own products from third-party regulatory bodies. Under such a model, the onus for keeping up with AI developers’ breakneck speed is shifted onto the industry itself. 

Until a model like this is put into practice, the Canadian government and others will struggle to keep up as AI developers race toward a potential global existential crisis.

Editor’s Note: The article has been updated to state that Jack Clark is the former policy director for OpenAI.