Imagine a world in which humans are freed from labour due to machine automation. If every person were guaranteed a basic standard of living, people would be free to pursue their hobbies and the things they truly desire. Some people might decide to become musicians, while others might volunteer at their local church or food bank. Most people would spend most of their time having fun and building meaningful relationships with friends, family, and romantic partners.

Transitioning from a utopian world, consider this scenario: in a rush to create the most powerful artificial intelligence (AI) system, a company creates one capable of self-improvement. The system quickly becomes a ‘superintelligence,’ a cognitive system whose intellectual performance across all relevant domains vastly exceeds that of humans. 

In his book Superintelligence: Paths, Dangers, Strategies, Swedish philosopher Nick Bostrom points out that such an agent is likely to develop sub-goals of self-preservation and resource acquisition. If the agent is not designed carefully, it may develop goals divergent from those that benefit humans. 

Think about how humans destroy animal habitats all the time through activities like deforestation and pollution — this is not out of a sense of malice towards animals but rather because humans simply have different goals than animals. In the same way, a badly aligned goal for a powerful AI might result in a similar outcome for humans.

AI alignment

AI alignment is the field of research dedicated to ensuring that AI systems are designed to act in ways that align with human values. Some issues AI alignment researchers aim to tackle are finding ways to represent human values to a machine; predicting how an advanced AI system will behave; making sure AI systems are transparent and interpretable; and safely shutting down an advanced AI system should something go wrong.

These are all open problems, and the field of AI alignment is still very much in its infancy. The wonderful thing about the field is that since it’s so new, an individual doing research can exert a significantly larger influence compared to what they might achieve in broader, more established fields. AI alignment is easily accessible; you can tinker around and try to break ChatGPT’s safety features with prompt engineering from the comfort of your couch, providing valuable insight into how to improve future systems. 

There are also organized efforts at making progress towards AI alignment such as the Centre for Human-Compatible AI at Berkeley or the Future of Humanity Institute at Oxford.

Currently, I am working as the vice-president of outreach at a non-profit research organization called Cavendish Labs, where I help the existential risk research lab connect with collaborators and funders and support scholars doing research. We spend our time thinking about some of the biggest problems humanity is facing — AI alignment is a big focus, but I’ve also worked on topics like pandemic prevention. After I graduate from U of T in math and computer science, I am planning on pursuing a career in AI alignment.

Risks of AI development

I also work closely with Max Tegmark as a software engineer at the non-profit organization Improve the News, of which he is the President. He guides the machine learning team, where we work on using machine learning to analyze large numbers of news articles. Tegmark is a professor at the Massachusetts Institute of Technology and the author of the book Life 3.0, which discusses the impact of AI. In an interview with The Varsity, Tegmark talked about his opinion on the risks posed by AI.

Concerned about the danger of humans losing control to AI, Tegmark believes this threat of superintelligence should make us more cautious about using and developing AI. He used the example of how social media algorithms decide what content to show you. He predicts that in the next few years, humans will increasingly use AI to substitute for their own decision-making.

When commenting on the scarce discussions in mainstream media about the grave impacts of AI, Tegmark remarked that those in positions of power seem unconcerned about the risks posed by AI. He noted that the situation feels eerily similar to the plot of the movie Don’t Look Up: an apocalyptic film where astronomers detect a comet heading towards Earth but the world refuses to heed their warnings.

Tegmark is also president and cofounder of the Future of Life Institute (FLI), which recently released an open letter calling for a six-month moratorium on the development of cutting-edge AI systems. Its central argument is that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” and it claims that this imperative has not yet been achieved. 

For many, the letter is a step in the right direction, as concerns about AI are finally gaining recognition as a valid issue. The letter was signed by influential people such as Elon Musk, who is an external advisor to FLI; Yoshua Bengio, recipient of the Turing Award; Steve Wozniak, cofounder of Apple; Emad Mostaque, founder of Stability AI; and Evan Sharp, co-founder of Pinterest. 

However, other AI researchers like Yann LeCun, the chief AI scientist at Meta, do not support the letter. In a YouTube discussion with Andrew Ng, cofounder of Google Brain, LeCun states that he believes that delaying progress in AI would provide little benefit and that the letter is part of a long-standing movement to stifle technological innovation. 

Looking to the future

While current AI systems are good in narrow domains such as playing chess, they don’t generalize well to broad domains. With artificial general intelligence (AGI), however, AI systems will be able to use abstract reasoning and solve complex tasks like humans, potentially turning into superintelligences. 

We have already begun to see the potential of AI to vastly disrupt industries in a short amount of time. AI tools like Dall-E and Stable Diffusion, which generate images from text descriptions, and ChatGPT, which is used for programming and writing, may overtake the roles of artists and software developers. Though these jobs are cognitively demanding, they are also jobs that people take pride in. 

The point of this article is not to make you feel a sense of doom about the future. The future is not a foregone conclusion. The decisions we make as individuals and as a society today will shape how the future looks. 

Though the future is hard to predict and nobody knows what the path to AGI will look like — or whether it will even happen at all — it might be worthwhile to dedicate significant resources to addressing short-term issues like job loss as well as attempting to alleviate long-term problems with AI alignment. 

Humanity only has one shot at creating AGI, so it’s crucial that we get things right.