
Scientists recreate a Pink Floyd song by extracting it from listeners' brainwaves and it sounds eerie

The research could prove to be a wonderful thing for people who have difficulty speaking, such as those recovering from a stroke or living with muscle paralysis.

Cover Image Source: Photo of PINK FLOYD; L-R: Roger Waters, Nick Mason, Syd Barrett, Rick Wright - posed, group shot, standing behind mixing desk in recording studio control room (Photo by Andrew Whittuck/Redferns)

A few decades ago, reading people’s minds seemed like pure science fiction. But with the rise of “neural decoding,” neuroscientists can now decode brain activity by monitoring brainwaves. While previous studies have focused on reconstructing images and words, researchers have now taken a groundbreaking step: reconstructing “music” from the mind. In August 2023, scientists at the University of California, Berkeley, successfully reconstructed a 1979 Pink Floyd song by decoding the electrical signals in listeners’ brainwaves. The study was published in the journal PLOS Biology.

Representative Image Source: Pink Floyd perform on stage on 'The Wall' tour, on August 7th, 1980 in London, England. (Photo by Pete Still/Redferns)

To carry out the study, lead researchers Robert Knight and Ludovic Bellier analyzed the electrical activity of 29 epilepsy patients undergoing brain surgery at Albany Medical Center in New York. As the patients were being operated on, Pink Floyd’s “Another Brick in the Wall, Part 1” played in the operating room. Electrodes placed on the surface of the patients’ brains recorded the electrical activity as they listened to the song. Bellier later reconstructed the song from this electrical activity using artificial intelligence models. The resulting piece of music was both eerie and intriguing. “It sounds a bit like they’re speaking underwater, but it’s our first shot at this,” Knight told The Guardian.

This experiment provided several insights into the connection between music and the mind. According to the university’s press release, the reconstruction showed the feasibility of recording and translating brain waves to capture not only the syllables of speech but also its musical elements. In humans, these musical elements, called prosody (rhythm, stress, accent, and intonation), carry meaning that words alone do not convey. Because these intracranial electroencephalography (iEEG) recordings were made directly from the surface of the brain, this research was as close as one could get to the auditory centers.

Representative Image Source: Unsplash | Pawel Czerwinski

This could prove to be a wonderful thing for people who have difficulty speaking, such as those recovering from a stroke or living with muscle paralysis. “It’s a wonderful result,” said Knight, per the press release. “It gives you the ability to decode not only the linguistic content but some of the prosodic content of speech, some of the affect. I think that’s what we’ve begun to crack the code on.”

Explaining why they chose music rather than speech for their research, Knight told Fortune that it is because “music is universal.” He added, “It preceded language development, I think, and is cross-cultural. If I go to other countries, I don’t know what they’re saying to me in their language, but I can appreciate their music.” More importantly, he said, “Music allows us to add semantics, extraction, prosody, emotion, and rhythm to language.”

Representative Image Source: Pexels | Pixabay

“Right now, the technology is more like a keyboard for the mind,” Bellier told Fortune. “You can’t read your thoughts from a keyboard. You need to push the buttons. And it makes kind of a robotic voice; for sure there’s less of what I call expressive freedom.”

The study not only unveiled a way to synthesize speech but also pinpointed new brain areas involved in detecting rhythm, such as a thrumming guitar. The researchers also confirmed that the right side of the brain is more attuned to music than the left. “Language is more left brain. Music is more distributed, with a bias toward right,” said Knight, per the press release. “It wasn’t clear it would be the same with musical stimuli,” Bellier added. “So here, we confirm that that’s not just a speech-specific thing, but that it’s more fundamental to the auditory system and the way it processes both speech and music.”

