Every day, artificially intelligent neural networks astound us with new feats.
A particular type of neural network, the large language model (LLM), has recently garnered immense attention. These models are able to analyze massive quantities of text and spit out swathes of natural written language, with their output often mistaken for that of a human. Neural networks can now be trained to pass the bar exam, write poetry, make music, and create pictures faster than a human can, all at the swift prompt of a keyboard or input of a file.
But with these advances in computational cognition, where does that leave us humans? Twenty-four of the world’s most distinguished academics from across all fields tried to answer this question and more at the Schwartz Reisman Institute for Technology and Society’s Absolutely Interdisciplinary Conference back in late June.
The last session of the conference, “AI and Creativity,” was a spicy one, delving into important and complex questions about the nature of LLMs and other neural networks at large. The panel consisted of British poet Polly Denny and Nancy Katherine Hayles, an English professor from the University of California, Los Angeles; the two took a humanities-focused approach to the technical and philosophical dilemmas that have arisen with artificial intelligence (AI).
The defining question of the session was this: if a machine can create original text, is it being creative?
Is AI poetry any good?
A UK National Slam Champion, Denny gave a practical take on large language models (LLMs) and how much they have affected her practice as an artist and poet.
Simply put: not a whole lot.
As the Cheltenham Science Festival’s first-ever poet-in-residence, Denny had the opportunity to have a neural network trained on her entire body of work. Once trained, she began experimenting with different prompts, generating pages of poetry in just a few seconds — which, she quipped, is probably the fastest way for any poet to get a “really sharp and swift hit” of imposter syndrome.
However, after a bit of effort prompting the model, she still had to edit most of the output herself. In all, she pared around 150 pages generated by the neural network down to five.
Interestingly, Denny said that poetry might be a great benchmark for testing creativity in LLMs. In her eyes, poetry is concise and subjective, and its portrayal of emotion flows dynamically, qualities that require cognitive faculties well beyond an LLM’s designed specifications. She pointed out that lived experience in the real world is a massive part of what makes humans creative.
The text generated by LLMs is “a very specific sort of aggregate of what it thinks you want,” she said. However, according to Denny, aggregation is not the main goal of poetry. In her view, collaboration through editing is the best way to make an LLM’s output make sense for a human audience.
Cognition and awareness of AI
Professor Hayles then took to the stage to address the question: do the outputs of LLMs like GPT-3 actually imply cognition, and what does that mean for creativity at large?
LLMs work by focusing their attention on one word of the input text at a time, computing which words have the highest chances of following it, and repeatedly appending the most probable word until full sentences emerge. The huge advancement that transformer systems have over simple autocorrect is that they can model the context of a sentence and infer relationships between words that aren’t right next to each other in the text. That, according to Hayles, is why LLMs are “cognitive.”
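The next-word loop described above can be sketched in a few lines. This is only an illustrative toy, not a real transformer: the hypothetical bigram table `NEXT_WORD_PROBS` stands in for the learned model, and greedy decoding stands in for the full sampling process.

```python
# Toy sketch of "repeatedly insert the most probable next word."
# Real LLMs predict over subword tokens and weigh the entire context
# via attention; this bigram table is an illustrative stand-in only.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"down": 0.7, "quietly": 0.3},
    "dog": {"barked": 1.0},
}

def generate(prompt: str, max_words: int = 5) -> str:
    words = prompt.split()
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if not probs:
            break  # no known continuation for this word
        # greedy decoding: append the single most probable next word
        words.append(max(probs, key=probs.get))
    return " ".join(words)

print(generate("the"))  # "the cat sat down"
```

A transformer differs from this sketch precisely in the way Hayles highlights: instead of conditioning only on the previous word, it attends to every word in the context at once, which is what lets it relate words that sit far apart in a sentence.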
Hayles defines cognition as a process of interpreting information from one’s contexts; in other words, finding meaning. She argued that even if LLMs don’t store their conceptualization of the world like humans do, we can say that they interpret the world in their own unique way — through text. They can generate an awareness of the world, even without self-awareness.
In any case, the evolutionary paths of humans and AI are now “inextricably entwined” moving forward, Hayles said. Now, we must learn to live with other cognitive beings, both biological and artificial, to ensure our survival into the future.
Ah, just some light philosophical chit-chat about life, creativity, and what it means to be human! What could possibly be better?