The explosion of artificial intelligence (AI) technology over the past year is poised to transform the way people work in a variety of industries — including academia. As universities struggle to decide how to properly implement and regulate AI in the classroom, virtual teaching assistants (TAs) like PyTA and QuickTA have started popping up in some courses. 

QuickTA is a ChatGPT-based AI teaching service created by UTM Professor Michael Liut, who designed it to give students in his large computer science class personalized support in the form of an AI TA.

CSC108 students may also be familiar with PyTA — another AI-based TA service that checks users’ code and points out errors in styling conventions.

Impact on TAs

These tools may reduce students’ need to go to TAs’ office hours to get help with coursework. 

OpenAI’s ChatGPT model can even write code for a student if prompted — raising broader concerns about academic misconduct. 

AI technology has been slower to impact other programs — although this might be starting to change. Kevin Chen, a TA for MAT188 — Linear Algebra, noted that large language models (LLMs) like ChatGPT are currently not very good at doing math. But “[that] might change in the future,” he said. 

U of T-wide guidelines currently direct professors and course instructors not to use generative AI to grade students’ work — unless the university approves the software. The policy also raises concerns about third-party tools retaining student work in their databases. “A completed assignment, or any student work, is the student’s intellectual property (IP), and should be treated with care,” it notes.

Grading tools like SpeedGrader and Crowdmark are already approved for use in U of T classrooms and may evolve to include AI components in the future. Students in MAT133 and other math courses may also be familiar with Gradescope — an AI tool that can read students’ handwriting on exam papers. 

CUPE 3902, the union representing TAs and other contract academic workers at U of T, is considering demanding labour protections against the encroachment of AI technology for the first time. “People have a better sense of what impact contemporary AI tools have on learning compared to them two or three years ago,” CUPE 3902 President Eriks Bredovskis told The Varsity in an interview. 

“Unregulated AI use can lead to human rights and privacy violations, lower wages, more precarious work, widespread job loss and increased income inequality,” CUPE stated on its website.

Can AI truly replace human TAs?

Despite concerns about AI’s growing potential to replace human academic workers, Chen pointed out that the technology is still very flawed. “I worry that sometimes AI could give the wrong answer, so students should take its outputs and examine them critically,” he noted. 

Susan McCahan — a mechanical and industrial engineering professor in the Faculty of Applied Science & Engineering — wrote in U of T News that LLMs like ChatGPT are “pretty good at writing at the level of a first-year or second-year student, but it’s not up to what would be expected of a student in their third or fourth year.” If students use LLMs to take shortcuts in their early years of university, McCahan warned, meeting these higher expectations in their upper years could be a significant challenge. 

Chen also pointed to this issue and asserted that students must be discouraged from relying too much on AI to do their work for them, noting that “they are only hurting themselves.” 

Bredovskis also emphasized that AI concerns are not just about TAs making a living. “It’s about maintaining the quality of education. Using AI is not a shortcut. And it should not be a shortcut,” he said.

“I think we will come to a point where people recognize when it is useful to use AI to help and when is it not going to be very helpful,” McCahan wrote. 

With CUPE 3902’s negotiations with the university underway, TAs may eventually get a definitive answer on what scope of AI use is permissible in the classroom, and how it will impact their job security. But for now, heated debates will continue around where universities should draw the line.