The University of Toronto’s Student Newspaper Since 1880


The crossroads between artificial intelligence and music production

Tune in to BMO Lab’s webinar series on technology and art
The BMO Lab has launched a new series exploring AI and the creative arts. ROSALIND LIANG/THE VARSITY

Can you imagine a world where music is made using artificial intelligence? The webinar series AI as Foils: Exploring the Co-Evolution of Art and Technology is a new initiative at the BMO Lab that features discussions with artists and artificial intelligence (AI) practitioners. 

The most recent event, “AI as Foil Series: A New Musical Frontier: AI Meets Music,” was held virtually on October 8. “The goal is to explore the curiosity, the excitement, as well as the fears and concerns regarding the role played by AI technologies in art and creativity,” said Natalie Klym, the curator and moderator of the series.

A new webinar series

In 2019, BMO Financial Group invested five million dollars — the largest gift to any single Canadian institution yet — in the BMO Lab for Creative Research in the Arts, Performance, Emerging Technologies, and AI. The lab is based in the Faculty of Arts & Science at U of T and aims to explore and research the intersection between creativity and new technologies, including artificial intelligence. 

This month, the BMO Lab invited producer and engineer Annelise Noronha and AI scientist Sageev Oore. Noronha has worked with notable artists such as Dragonette, Jennifer Lopez, Blue Rodeo, and Academy Award-winning composer Mychael Danna. She currently composes music and has written various placements for film and television. 

Oore is a musician and associate professor of computer science at Dalhousie University. He is a CIFAR AI chair and has also worked on Google Brain’s Magenta project that applies deep learning to music.

From tape machines to AI

Noronha began the session with an overview of her unique experience with audio engineering. When she first started engineering in studios, she used software that locked tape machines together. 

If you needed to shift part of an audio track on a multi-track — a recording made from the mixing of several separately recorded tracks — you would have to ‘bounce’ the audio off the tape machine to another tape machine before sending it back to the original tape machine. “Because we were recording to tape and there were no computers, we were also magicians. People were really in awe,” she said.

Being able to work with retro sound technology is a very specific and transferable skill set to have, Noronha explained. Now, when people want to record tapes for retro effect, she can still do it. 

Oore comes from a machine learning background and explained that he primarily works with AI and machine learning systems to get them to create musical sounds. It is mind-blowing how much the music industry has developed, from the traditional methods Noronha worked with at the start of her career to the AI and machine learning-based technology Oore works with today. 

Oore shared software that generates Musical Instrument Digital Interface (MIDI) data. MIDI is a standard that allows musical instruments, computers, and hardware to communicate. He also played several 30- to 40-second videos of classical music performances generated by the software, and explained that the computer did not generate the actual sound waves; rather, it generated the instructions for a sound sampler to play the music, including which notes to play and when. 

In order to learn these instructions, the computer was given hundreds of samples of classical piano performances. Oore explained that there is an element of randomness, similar to rolling a die, as the computer picks which notes to play and when. Going forward, he is looking to build a more controllable model: one where the computer has its own intelligence but can also be directed.
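Oore's die-rolling analogy can be sketched in code. The note names and probabilities below are invented for illustration, and the "temperature" knob is a common technique for making such sampling more or less random; this is not the actual software Oore demonstrated, just a toy version of the idea.

```python
import random

# Invented, illustrative probabilities: a real model learns a
# distribution like this from hundreds of recorded performances.
NOTE_PROBS = {"C4": 0.4, "E4": 0.3, "G4": 0.2, "B4": 0.1}

def sample_note(probs, temperature=1.0):
    """Roll a weighted die to pick the next note.

    Lower temperature sharpens the distribution (more predictable);
    higher temperature flattens it (more surprising).
    """
    weights = {note: p ** (1.0 / temperature) for note, p in probs.items()}
    total = sum(weights.values())
    roll = random.uniform(0, total)  # the "die roll"
    cumulative = 0.0
    for note, w in weights.items():
        cumulative += w
        if roll <= cumulative:
            return note
    return note  # fallback for floating-point edge cases

# Generate an eight-note melody, one roll of the die per note.
melody = [sample_note(NOTE_PROBS) for _ in range(8)]
print(melody)
```

The temperature parameter hints at the kind of control Oore described wanting: a model that generates on its own but can be steered toward safer or riskier choices.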

Future dilemmas

Artificial intelligence is still very new, and nobody can predict where it will go. It takes hundreds of years to invent a musical instrument and have it accepted into our culture. Even though history tells us that new technology will eventually be accepted, and that any current rejection or resistance is just a phase, Klym thinks we should at least try to be critical and aware of AI's impacts at all levels. 

Ultimately, the goal of AI for music is not the ability to generate music but to build tools that are then used by artists.

Questions naturally arise about the application and future of artificial intelligence. AI could be used to generate not only melodies, but also lyrics, tempos, and entirely new genres of music. Some AI-generated music might not even be playable by humans. There is also the question of intellectual property: if AI software writes a piece of music, who holds the copyright? The possibility of AI taking over the music industry raises the philosophical question of whether AI-generated music can be considered creative work at all.

The past webinars from this series are all recorded and posted on the BMO Lab's website. More sessions in spring 2022 will examine AI and voice, as well as writing and natural language processing.