This article could have been written by ChatGPT. I assure you that it was not — but you would likely never be able to tell if it was. This fact is a testament to the ever-growing impact and influence that artificial intelligence has in nearly every aspect of our society.
Among the plethora of new artificial intelligence (AI) software lie large language models, which are leading the charge in changing our world. The term large language model (LLM) refers to a machine learning model that has been extensively trained to understand and generate text much as humans do. These models are specifically designed to mimic human language production, producing unique syntactic structures each time the software is used.
This makes it very difficult to regulate the use of LLM tools in education, as originality and critical thinking are key elements of the learning process. Thus, we must consider how to accommodate the use of LLM-based AI tools without compromising students’ learning.
The impact of LLM-based tools in education was a main focus of the Absolutely Interdisciplinary Conference, hosted by the Schwartz Reisman Institute for Technology and Society at U of T from June 20–22. One session brought together two panellists: Paolo Granata, an associate professor of Book and Media Studies at St. Michael’s College, and Lauren Bialystok, an associate professor at the Ontario Institute for Studies in Education, faculty associate at the Anne Tanenbaum Centre for Jewish Studies, and acting director of the Centre for Ethics.
Granata’s and Bialystok’s session on LLMs featured discussions on how to approach both the rapidly increasing presence of LLM tools in educational fields and the ideal role they might play in education. The discussions included questions surrounding the value of originality and creativity as part of the learning process as well as the opportunities that LLM tools provide.
Originality in education
In her presentation, Bialystok explored the value of originality from the perspective of educational philosophy. Bialystok unpacked the view that LLM tools promote and facilitate cheating in academic settings, particularly in the humanities.
One focus of her presentation was the emphasis the Western world places on originality. Our academic paradigm treats the use of ChatGPT as a form of plagiarism, which instructors deem wrong for two reasons. The first is that students using LLM-based tools will fail to learn what they are supposed to and will not develop certain skills. Bialystok argues, however, that this presupposes generally agreed-on aims of education, which are not always clearly defined.
The second argument against plagiarism is that it provides an unfair advantage over other students who follow the rules. According to Bialystok, this emphasis on originality, from which notions like plagiarism emerge, is more common in the individualist cultures of the West. Different perspectives might encourage us to question whether there is such a thing as an original, or new, idea at all. Some argue that all the human brain does is produce new combinations of existing ideas, disguising them as original.
All of this together led Bialystok to conclude that the emphasis on originality in academic settings might be fundamentally flawed. Yet, this realization does not diminish the dangers that LLM tools pose in this space, particularly when their use is unchecked and unregulated.
The opportunities of emerging technology
While Bialystok takes a more cautious approach to integrating LLMs in both education and academic circles, Granata seems very optimistic. As a self-described media historian, he took us through the history of pedagogical tradition — a history that saw very little change until the COVID-19 pandemic.
Granata discussed a striking example of change in academic settings, which he likens to the present-day development of LLMs: the advent of the calculator. Though banned in classrooms in the 1970s, calculators were eventually recognized as a prime example of the value technology can add to a student’s education without compromising educational objectives. Similarly, Granata argues, colleges started recognizing the value of LLM tools in academic settings when professors began to use them to write reference letters.
Granata views artificial intelligence as a method of “extending, augmenting, [and] expanding what makes us human.” As such, the advent of LLMs and their introduction into academic spaces afford a unique opportunity to enhance the educational experience. From fostering the dialogical approach to finding ways to create a more immersive learning experience, LLM tools can lead educational practices into the future and help students develop new skills. Granata thus encourages educators to foster AI literacy in students so that they may be able to play a more proactive role in their education.
The reality is that LLMs will only further advance in their ability to mimic human speech patterns and creativity. As a result, generative AI’s prevalence in our daily lives, including educational spaces, will continue to grow. Attempting to deny or avoid such a future would be futile.
Instead, these speakers argue, we must adapt our approach to AI in education. This involves continuing to challenge our current perspectives and procedures and considering how we can strike a balance between the objectives of our educational system and the demands of developing AI software.