It’s a late night in residence and a stressed-out student is hunched over their laptop, staring blankly at the screen. They’re struggling with an assignment and feel completely stuck. That’s when they turn to ChatGPT for help.

As an artificial intelligence (AI) powered language model, ChatGPT is able to offer advice and guidance on the student’s writing. With ChatGPT’s assistance, the student is able to complete their assignment with confidence and a newfound sense of ease.

Or at least, that’s how ChatGPT describes its role in university coursework when asked by The Varsity.

To an extent, the scene it paints isn’t wrong. In the North American education sector, AI is already being used to tutor students individually, grade exams, analyze data about school systems, and manage student transportation schedules, so it’s no surprise that this student felt comfortable turning to ChatGPT in a time of need.

But the U of T students, faculty, and researchers that The Varsity spoke with painted a more complicated picture, a skill this AI still struggles with. From cheating to plagiarism to hand-holding, concerns about language-generating technology abound. However, so do opportunities to enhance the original goals of higher education, and to rewire them in the process.

The allure of AI

During a 2017 AI Frontiers conference held in Silicon Valley, keynote speaker and computer scientist Andrew Ng announced, “AI is the new electricity.”

Ng’s metaphor, meant to compare the way that electricity transformed industries to AI’s power to transform how we function today, was bold, but not wrong. In the past decade, we’ve witnessed AI-generated artwork win competitions, heard AI-generated interviews with dead people, and read about protein-folding breakthroughs. Given these successes, it’s apparent that new AI systems won’t just be producing new statistics in research labs; soon, they’ll be transformed into practical tools and commercial products that the public can use. Analysts from the firm PricewaterhouseCoopers say that AI’s contribution to the world’s economy will be immense, at $16 trillion in added value.

The idea of inanimate objects coming to life as intelligent beings has been present throughout history. As early as 700 BC, the ancient Greeks told myths about robots, and Chinese engineers built automatons in the seventh century AD.

However, what we recognize as modern AI can be traced to philosophers’ attempts to describe human thinking as a symbolic system. Despite these efforts, AI wasn’t founded as a field until a 1956 Dartmouth College conference, during which organizers John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon coined the term.

But achieving artificial intelligence wasn’t so simple. In 1974 and 1980, after reports were released criticizing progress in AI, interest in the field — and its government funding — declined.

So why is AI resurging?

Current breakthroughs are happening because a new class of AI models, more powerful than anything previously created, is being introduced. Because these tools were initially used for language tasks such as writing essays, we refer to them as large language models (LLMs). LLMs have grown in scale and complexity for years, and as they grow, so do their abilities. Those abilities translate into the classroom.
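What does an LLM actually do? As a rough illustration, the short Python sketch below uses the open-source Hugging Face transformers library and the small GPT-2 model as stand-ins; neither is mentioned by the researchers quoted here, and GPT-2 is far weaker than ChatGPT, but the underlying mechanic of continuing a prompt one predicted word piece at a time is the same.

```python
# A minimal sketch of LLM text generation, using the small open-source
# GPT-2 model as a stand-in for larger systems like ChatGPT.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The biggest challenge facing university students today is"
result = generator(prompt, max_new_tokens=30, do_sample=True)

# The model continues the prompt one predicted word piece at a time.
print(result[0]["generated_text"])
```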

As a recent report from Microsoft explains, “[AI] gives teachers and schools new ways to understand how students are progressing and allows for highly customized, timely, and targeted curation of content.” 

As Kate Maddalena, an assistant professor at U of T’s Institute of Communication, Culture, Information & Technology at UTM, shared with The Varsity, “Some instructors use AI engines like Turnitin to detect plagiarism.” This is just one of the ways that AI is currently being implemented in classrooms.

But with new opportunities come new risks, and with the massive new abilities of AI can come massive upheaval. Can we entrust our whole education system to a new technology that we can’t even trust to cite an article? And can we trust ourselves, as learners, to use it responsibly?

The possibility for responsibility 

In the 2016 study “Artificial Intelligence as an Effective Classroom Assistant,” University of Sussex artificial intelligence professor Benedict du Boulay studied the impact that the AI tutoring system GPTUTOR had on grade-school classroom learning. GPTUTOR generated mathematical proofs and communicated with students to help them find answers to their assessments.

Du Boulay’s study found that using GPTUTOR offloaded tasks from teachers, which freed up more time for them to spend one-on-one with students in need of help. The study also found that students were more engaged in learning and put more effort into their assignments.

Karina Vold is an assistant professor at U of T’s department of philosophy. In 2021, Vold’s course, HPS340 — The Limits of Machine Intelligence, was recognized by Maclean’s University Rankings for aiming to address the societal effects of AI and teaching students to “think about AI like philosophers.”

In an interview with The Varsity, Vold elaborated on the “extended mind thesis,” which argues that humans “use tools and symbols outside of their brains in an essential way,” making them “on par with our brains with respect to how they constitute… the mind.” With the help of these tools, Vold explained, “the mind is not just the brain, the mind is more than the brain.”

“What this means is that there are all sorts of technologies we use… that become integrated into our cognitive systems,” said Vold. “Technology is getting smarter and smarter; we’re starting to see changes in what we’re capable of, cognitively speaking.” 

Apart from how AI can help its users, we must also consider the ethical implications for those who build the AI itself. Maddalena also told The Varsity, “When you consider an AI technology, ask what kind of world it helps to make, what kind of work it does, what kind of work it replaces.” She elaborated on underpaid workforces, largely across the global south, who do the grunt work of training AI models.

Paolo Granata is an associate professor and program coordinator of U of T’s Book and Media Studies program. In an interview with The Varsity, Granata explained that new technologies such as AI can enhance students’ learning. “In the same way that math classes may [or may not] allow the use of the calculator, we might allow the use of AI tools in the humanities and social sciences classes if there’s consensus from the instructor,” Granata argued. “We should seriously enquire about how to use these tools to enhance students’ understanding of all fields of knowledge.”

In an interview with The Varsity, first-year law student Manreet Brar echoed Granata’s sentiments, saying that they’ve used the AI technology ChatGPT to prepare for exams.

“I found [ChatGPT] especially helpful to elucidate concepts that I found the professor hadn’t explained very clearly,” Brar wrote. “I appreciated that it explained complex concepts in simple terms.”

Similarly, fourth-year sociology major Khushi Sharma admitted to using AI tools to help them understand mathematical concepts required to solve questions in their assignments, though they would not use AI to complete assignments for them. “Math is not [one] of my strengths. However, I have some math-based courses,” Sharma explained. “AI has been proven to be a good tutor.”

An ethical dilemma 

Along with the promise of AI technologies come threats.

In the 2023 paper “ChatGPT has Mastered the Principles of Economics: Now What?” American professors Wayne Geerling, G. Dirk Mateer, Jadrian Wooten, and Nikhil Damodaran used ChatGPT to answer questions on American standardized economics exams. The chatbot correctly answered 19 of 30 microeconomics questions and 26 of 30 macroeconomics questions (roughly 63 and 87 per cent, respectively); in comparison, most students answered only 40 to 50 per cent of questions correctly on both exams, despite a semester of studying the material.

These findings are especially concerning when paired with a 2023 study by H. Holden Thorp, the editor-in-chief of the Science family of journals. Thorp submitted abstracts that ChatGPT created to academic reviewers. The reviewers identified only 63 per cent of the abstracts as chatbot-generated, even though several of the undetected 37 per cent contained “glaring mistakes… including referencing a scientific study that does not exist.”

Susan McCahan, U of T’s vice-provost, academic programs and vice-provost, innovations in undergraduate education, echoes this study’s warnings of AI inaccuracy. In February, McCahan created a video outlining the functionality, history, and future status of the AI technology ChatGPT.

As part of the video, McCahan outlined misconceptions about ChatGPT, specifically the belief that it plagiarizes material off the web. Instead, McCahan explained, the software determines the probability that each possible next word in its response is the appropriate word, given the words that it has used so far. “Basically, it is trying to write what it thinks we want it to write,” McCahan said.
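To make McCahan’s description concrete, here is a toy sketch of next-word prediction. The context and probabilities below are invented for illustration, not taken from any real model; an actual LLM computes a distribution like this over tens of thousands of word pieces at every step.

```python
import random

# Invented probabilities for a single context; a real LLM learns these
# distributions from vast amounts of text rather than a hand-written table.
next_word_probs = {
    "The student finished the": {
        "assignment": 0.55,
        "exam": 0.30,
        "lecture": 0.14,
        "banana": 0.01,
    },
}

def pick_next_word(context: str) -> str:
    """Sample the next word in proportion to its estimated probability."""
    probs = next_word_probs[context]
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Usually prints "assignment": the likeliest continuation wins most often.
print(pick_next_word("The student finished the"))
```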

In an interview with The Varsity, Lesley Wilton, a researcher at the Ontario Institute for Studies in Education (OISE), echoed McCahan’s concerns, adding that AI “might not understand if something’s true or not.”

“I’ve heard of people asking it to write an essay and provide references and the references it provides are ‘real,’ ” Wilton explained. “The language learning model knows these words go together. So I have to have a journal cited and maybe volume number… but [the ones it gives me are] not necessarily real.”

Wilton also expressed concern about online data collection in AI tools, which, as she explained, improve with access to more data.

“Data collection is an issue because now we’re collecting data from students and from people that don’t realize it’s collecting data,” Wilton said. “So, now we have some privacy issues.”

Offering something new 

To get ahead of concerns about adopting AI in student settings, U of T is already in the process of establishing an institutional advisory group on AI, teaching, and learning. The group will include representatives from U of T Libraries, the Centre for Teaching, academic divisions, and individual academics. The goal of the group, according to McCahan, is to “advise [the university] on institutional decision making regarding these new AI systems.”

This isn’t the only group that’s focused on the AI issue at U of T — research labs are too. Granata is the director of U of T’s Media Ethics Lab, a research hub that studies how digital media practices and emerging technologies are marked by ethical issues; in his interview, Granata promised that the lab “will be at the forefront of AI literacy as well.”

However, some professors already have ideas about how to responsibly introduce AI into their classrooms. 

Last week, Boris Steipe, a U of T professor emeritus whose research focuses on computational biology, complexity, and society, argued in Maclean’s that, instead of advising students against using AI on assignments, professors should be updating their syllabi to teach students how to use it strategically.

“We need to accept that [AI] is part of our set of tools, kind of like the calculator and auto-correct, and encourage students to be open about its use,” Steipe wrote. “It’s up to us as professors to provide an education that remains relevant as technology around us evolves at an alarming rate.”

Steipe added that the new possibilities of AI inspired him to spend this semester working on what he’s named the “Sentient Syllabus Project.” As part of the project, Steipe and his colleagues, who include a philosopher in Tokyo and a historian at Yale, are creating publicly available resources to help educators teach students to use AI tools to expedite work such as “formatting an Excel spreadsheet or summarizing literature that exists on a topic.”

Steipe explained that, by getting this “academic grunt work” out of the way, students will be able to better focus on “higher-level reasoning.” Steipe also suggested that, instead of grading skills such as eloquent language, “[which] an AI can manage,” instructors could grade the quality of a student’s questions and opinions about an issue, and “how they improve on the algorithm’s answer.”

Access to this framework, Steipe explained, would have changed the way he designed assignments. In addition to asking students to read data and write up their findings, Steipe wants to add another challenge: “Tell me how you came up with that answer.” This type of prompt, he reasons, encourages students to creatively engage with the facts they’re presented with in the classroom, “whether they receive them from [AI] or not.”

This thinking helped inspire Steipe’s main goal for his project, which is to teach instructors to “create a course that an AI cannot pass.”

“If an algorithm can pass our tests,” he asked, “what value are we providing?”

Universities, with rigid assignments and young people eager to try out ways to bypass them, might be the canary in the coal mine for the implications of AI in the world beyond. As Maddalena shared with The Varsity, “The most interesting questions about AI aren’t about AI in the classroom; they are about AI technology in the larger world.”

“AI [is] being used to diagnose rare diseases, fill in for human therapists in psychotherapy settings, and pick out outfits for people,” she continued. “As it does these things, it may replace human work that used to help establish human social networks and connections. Which of these tasks are worth replacing? What will it mean for our social realities when we replace them?”

With files from Maeve Ellis and Alexa DiFrancesco.