
Computer reads human brain listening to Pink Floyd. Recreates song from reading

18 August 2023 7:11 AM
Tags:
Neuroscience
Barbara Friedman
Barb's wire
Views and News with Clarence Ford

Scientists at the University of California, Berkeley have successfully trained a computer to analyse the brain activity of people listening to music.

Barbara Friedman reports on trending news, including the latest research in which scientists trained a computer to analyse the brain activity of a group of people and recreate a song from it - a leap forward for brain-to-speech technology.

(Skip to 5.25 for this one.)

A group of researchers from the University of California, Berkeley published data on Tuesday (15 August) that revealed how computers can be used to recreate music or sound from neuronal patterns alone.

To collect the data for the study, the researchers recorded the brain activity of 29 epilepsy patients at Albany Medical Center in New York State from 2009 to 2015.

As part of their epilepsy treatment, the patients had a net of nail-like electrodes implanted in their brains.

This created a rare opportunity for the neuroscientists to make recordings of their brain activity while they listened to music (Pink Floyd's 1979 hit song "Another Brick in the Wall") - and it worked.

The computer produced a recognisable version of the song based on brain signals.

Listen to what was produced below.
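
How might such decoding work in broad strokes? Below is a minimal toy sketch in Python - not the study's actual code - in which a regression model learns to map neural activity features to the song's spectrogram. Everything here (the synthetic data, the electrode count, the choice of ridge regression) is an illustrative assumption.

# Toy sketch: decode an audio spectrogram from neural features.
# Synthetic data throughout; names like `neural_features` are illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend recordings: 2000 time windows from 128 electrodes, paired with
# the 64 spectrogram frequency bins of the song at each window.
neural_features = rng.normal(size=(2000, 128))
hidden_mapping = rng.normal(size=(128, 64))
spectrogram = neural_features @ hidden_mapping + rng.normal(scale=0.1, size=(2000, 64))

X_train, X_test, y_train, y_test = train_test_split(
    neural_features, spectrogram, test_size=0.2, random_state=0)

# Linear decoding model: ridge regression predicting every frequency bin.
decoder = Ridge(alpha=1.0)
decoder.fit(X_train, y_train)

# Reconstruct the spectrogram for held-out brain activity and score the fit.
predicted = decoder.predict(X_test)
r = np.corrcoef(predicted.ravel(), y_test.ravel())[0, 1]
print(f"correlation between decoded and actual spectrogram: {r:.2f}")

Turning a predicted spectrogram back into an audible waveform also requires estimating the missing phase information, which is one reason a reconstructed song can sound muffled.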

Researchers say that while the audio sounds like it’s being played underwater, it’s a first step toward creating more expressive devices to assist people who can’t speak.

Although artificial intelligence (AI) technologies already exist to give speech to people who have lost it, the synthesised voice doesn't always sound natural. This is because a significant amount of the information conveyed through speech comes from what linguists call “prosodic” elements, which include tone.

This means that brain-to-speech technology can be created that doesn't sound robotic, because there is room to play with intonation, rhythm and tone.
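
As a concrete illustration of one prosodic element, the sketch below (again purely illustrative, using a synthetic tone in place of real speech) estimates the pitch contour of an utterance with the librosa library. A rising contour is part of what makes a question sound like a question rather than a flat, robotic monotone.

# Toy sketch: extract a pitch (f0) contour, one prosodic element of speech.
import numpy as np
import librosa

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)

# Synthetic "utterance": a tone whose pitch rises from 120 Hz to 220 Hz,
# mimicking question-like intonation.
f0_true = 120 + 100 * t
y = 0.5 * np.sin(2 * np.pi * np.cumsum(f0_true) / sr).astype(np.float32)

# Estimate the pitch contour frame by frame with the YIN algorithm.
f0_est = librosa.yin(y, fmin=80, fmax=300, sr=sr)
print(f"pitch rises from about {f0_est[2]:.0f} Hz to {f0_est[-3]:.0f} Hz")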

Friedman says, "This is just an incredible use of technology... that can be life-changing for some."

Read the full study, published in PLOS Biology, here.




