We’ve all been there, humming a tune to Shazam in desperate hope of finding its origin. Perhaps you overheard a song from a group of ‘cool kids’ and were scared to ask them its name for fear of being seen as ‘uncultured,’ or maybe you’ve unlocked a memory long buried, but you can’t quite recall the lyrics.

Whatever the reason may be, a few lines of the song replay in your head again and again, but your frustration builds as your voice doesn’t do the complexity of the music justice. You think to yourself, “If only there was a device to capture the music directly from my brain…” 

While such a device might sound like something out of a science fiction novella, some researchers think that in the distant future, it may indeed be possible. 

Reconstructing a Pink Floyd song from brain activity

In a study published on August 15 in the journal PLOS Biology, researchers described how they were able to partially reconstruct a segment of Pink Floyd’s popular rock song “Another Brick in the Wall, Part 1” using a dataset of participants’ brain activity.

The dataset was collected over six years, from 2009 to 2015, from the brains of 29 epilepsy patients at Albany Medical Center in New York State. Because their epilepsy treatment required craniotomy — opening the skull to access the brain — neurologists had the opportunity to record patients’ brain activity as they passively listened to music. “Another Brick” played in the operating room while intracranial electroencephalography (iEEG) electrodes recorded neuronal activity. In total, the neurologists placed 2,668 electrodes on the brain cortices of the patients. 

The researchers then decoded the iEEG data into the song’s acoustics using 128 regression-based models. Simply put, they trained 128 regression models to find patterns between brain activity and the song’s musical elements across about 90 per cent of the song, or 172.9 seconds of audio. Specifically, the researchers compared the song’s audio frequencies to the frequencies of neural activity recorded by 347 electrodes. Of the 2,668 electrodes in total, only these 347, spread across different patients’ brains, were identified by the researchers as significantly predictive of the song’s frequencies.
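For readers curious about what “regression-based” decoding means in practice, here is a minimal Python sketch of the general idea: linear models that map time-lagged neural activity onto the frequency bands of a song’s spectrogram. The synthetic data, model settings, and library choices below are illustrative assumptions for demonstration only, not the study’s actual code or parameters.

```python
# Minimal sketch of regression-based stimulus reconstruction, using synthetic
# data in place of the study's iEEG recordings. Each model maps time-lagged
# neural activity to one frequency band of the song's spectrogram.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples = 1729        # hypothetical number of time points for ~172.9 s of song
n_electrodes = 347      # electrodes the study found predictive of the song
n_lags = 10             # include a short history of neural activity as features
n_freq_bands = 128      # one regression model per spectrogram frequency band

# Synthetic stand-ins for electrode activity and the target spectrogram.
neural = rng.standard_normal((n_samples, n_electrodes))
spectrogram = rng.standard_normal((n_samples, n_freq_bands))

# Build time-lagged features so each prediction sees a window of recent activity.
lagged = np.hstack([np.roll(neural, lag, axis=0) for lag in range(n_lags)])

X_train, X_test, y_train, y_test = train_test_split(
    lagged, spectrogram, test_size=0.2, shuffle=False
)

# Train one linear model per frequency band, then stack the predictions
# back into a reconstructed spectrogram for the held-out segment.
reconstruction = np.column_stack([
    Ridge(alpha=1.0).fit(X_train, y_train[:, band]).predict(X_test)
    for band in range(n_freq_bands)
])
print(reconstruction.shape)  # (held-out time points, 128 frequency bands)
```

In a real pipeline, the reconstructed spectrogram would then be converted back into audio; with random data, the sketch only shows the shape of the approach.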

While the reconstructed music sounds muffled — like it’s glitching and lagging — and, to be frank, straight out of the uncanny valley, those familiar with the original song would be able to recognize the phrase “All in all, it’s just another brick in the wall.”

Brain regions involved in music perception

For years, neuroscientists have worked on decoding and reconstructing people’s perceptions from brain activity using machine learning. Reconstructing what an individual sees or hears may be useful for studying cognitive processes, understanding the neural basis of consciousness, and advancing applications in fields such as medical diagnostics, human-computer interaction, and personalized healthcare.

In 2012, Robert Knight — a co-author of the Pink Floyd study and a neuroscientist at UC Berkeley — and his colleagues were the first to reconstruct the words a person was hearing from brain activity alone. In 2017, researchers reconstructed images participants were viewing, also based solely on their brain activity.

However, reconstructing music from brain activity presents a distinct set of challenges. Music is a multifaceted art form, combining elements such as melody, harmony, rhythm, and emotional nuance. Unlike words or images, music often lacks precise one-to-one mappings between brain regions and the perceived experience. A single piece of music can evoke a wide range of emotions and interpretations, processed across different regions of the brain, making it a rich and intricate puzzle for neuroscientists.

In the Pink Floyd study, researchers set out not only to reconstruct music but also to identify which regions of the brain correspond to the perception of distinct musical elements. The 347 significant electrodes were found mostly in three regions already known to contribute to music perception: the superior temporal gyrus (STG), the sensorimotor cortex (SMC), and the inferior frontal gyrus (IFG). The STG, which is crucial for auditory processing, is tied to rhythm perception. The SMC processes and responds to sensory information, while the IFG is linked to language comprehension and production, which explains why lyrics could be heard in the reconstructed music.

The researchers also confirmed previous findings that music perception engages both brain hemispheres, though it leans more heavily on the right hemisphere; this contrasts with speech processing, which is typically dominated by the left hemisphere.

Neuroscientists have good reason to delve deeper into the neural mechanisms underlying music cognition, given not only the complexity of music as an auditory stimulus but also its potential benefits for clinical and therapeutic interventions.

Brain-computer interfaces for speech construction and intonation 

While I dream of never having to ask Gen Alpha what new song they’re listening to, the researchers have a nobler and grander goal in mind: creating brain-computer interfaces (BCIs) for people who can mentally form words but can’t physically speak — like those with amyotrophic lateral sclerosis (ALS) or locked-in syndrome — since such devices could help them communicate.

Current BCIs can translate brain activity into words but can’t capture musical elements like pitch, melody, harmony, and rhythm. Consider speech-generating devices like that of the late physicist Stephen Hawking. Though Hawking’s device was updated several times over the years to compensate for his deteriorating motor control, the voice stayed more or less the same: robotic, without any indication of tone or mood.

Knight said in a press release, “[Music] has prosody and emotional content. As this whole field of [BCIs] progresses, this gives you a way to add musicality to future brain implants for people who need it… It gives you an ability to decode not only the linguistic content, but some of the prosodic content of speech, some of the affect.”

If a BCI could tune into the brain activity of music-specific regions — like parts of the STG, SMC, and IFG — it might unearth some of the prosody and emotional weight needed for speech. Words would no longer be just words. “Instead of robotically saying, ‘I. Love. You,’ you can yell, ‘I love you!’” Knight told Scientific American.

It’ll be a long while, however, till we get fully functional mind-reading machinery. To begin with, it would be preferable not to have one’s skull opened for the devices to work. Ludovic Bellier, a postdoctoral fellow and another co-author of the study, said in the press release, “Noninvasive techniques are just not accurate enough today. Let’s hope… that in the future… from just electrodes placed outside on the skull, [researchers could] read activity from deeper regions of the brain with a good signal quality. But we are far from there.” 

The researchers from the Pink Floyd study also think the reconstructed song’s quality would improve with greater electrode coverage of other regions, like the primary auditory cortex. Further, the electrodes the team used were spaced around five millimetres apart. “I think if we had electrodes that were like a millimetre and a half apart, the sound quality would be much better,” Knight told The Guardian.

Hopefully, with more advancements in brain-imaging technology, researchers will be able to perfectly reverse-synthesize what our brains are seeing, hearing, and thinking about.

And, one day, maybe I’ll finally figure out what song is stuck in my head.