Google’s Brain2Music AI


Fig 1: Listening to his favourite music

Google is developing a new AI dubbed ‘Brain2Music’, which generates music from brain imaging data. Brain2Music was trained on fMRI data collected from participants while they listened to music of various genres, such as hip-hop, jazz, rock, and classical. According to the researchers, the AI model can produce music that closely mimics portions of the songs people were listening to while their brains were scanned.
Fig 2: Google Brain2Music AI

According to a research article recently posted on arXiv by Google and Osaka University, the AI model can reconstruct music from brain activity recorded with functional magnetic resonance imaging (fMRI).

    For those unfamiliar, fMRI works by monitoring the flow of oxygen-rich blood in the brain to determine which areas are most active.

Fig 3: Google Brain2Music AI

Scientists evaluated fMRI data from five participants, each of whom listened to the same 15-second music clips drawn from genres including classical, blues, disco, hip-hop, jazz, metal, pop, reggae, and rock.

    That brain activity was then used to train a deep neural network to detect associations between brain activity patterns and musical attributes such as mood and rhythm. The researchers classified mood as gentle, sorrowful, thrilling, furious, frightened, or cheerful.
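As a rough illustration of this decoding step, the sketch below fits a simple linear classifier that maps fMRI voxel patterns to one of the six mood labels. The array shapes, the synthetic data, and the choice of scikit-learn logistic regression are assumptions made purely for illustration; the paper's actual deep network is not reproduced here.

```python
# Illustrative sketch only: decoding a mood label from fMRI voxel patterns.
# Shapes, synthetic data, and the classifier are assumptions, not the
# architecture described in the Brain2Music paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

MOODS = ["gentle", "sorrowful", "thrilling", "furious", "frightened", "cheerful"]

# Hypothetical dataset: one row of voxel activations per 15-second music clip.
n_clips, n_voxels = 540, 6000
rng = np.random.default_rng(0)
X = rng.normal(size=(n_clips, n_voxels))        # fMRI responses (placeholder)
y = rng.integers(0, len(MOODS), size=n_clips)   # mood label per clip (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# A linear decoder is a common baseline for fMRI data, where samples are few
# and features (voxels) are many.
clf = LogisticRegression(max_iter=1000, C=0.1)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```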

Brain2Music was personalized for each participant in the study, converting that person's brain data into a music-like representation of the original song clips. This decoded representation was then fed into Google’s MusicLM, an AI model that can generate music from text descriptions.
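Conceptually, this is a two-stage pipeline: decode the fMRI response into a music-embedding space, then let a conditional generator turn that embedding into audio. The sketch below shows only the decoding half, using ridge regression on placeholder data; the embedding size and the generate_from_embedding stand-in are hypothetical, since MusicLM does not expose a public API of this form.

```python
# Illustrative sketch: regress from fMRI responses to a music-embedding space,
# then hand the predicted embedding to a generative model. Shapes and the
# generation call are hypothetical placeholders, not Google's actual pipeline.
import numpy as np
from sklearn.linear_model import Ridge

n_clips, n_voxels, embed_dim = 540, 6000, 128
rng = np.random.default_rng(1)

fmri = rng.normal(size=(n_clips, n_voxels))           # measured brain responses (placeholder)
music_embed = rng.normal(size=(n_clips, embed_dim))   # embeddings of the heard clips (placeholder)

# Fit a per-participant linear decoder: voxels -> music embedding.
decoder = Ridge(alpha=10.0)
decoder.fit(fmri, music_embed)

# Decode a new brain scan into the embedding space.
new_scan = rng.normal(size=(1, n_voxels))
predicted_embedding = decoder.predict(new_scan)

# In the real system this embedding would condition a music generator
# (e.g., MusicLM). No public API exists, so this call is purely illustrative.
def generate_from_embedding(embedding):
    """Hypothetical stand-in for a conditional music generator."""
    return f"<audio generated from embedding of shape {embedding.shape}>"

print(generate_from_embedding(predicted_embedding))
```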

    During the study, the team found that ‘when a person and MusicLM are exposed to the same music, the internal representations of MusicLM are connected with brain activity in particular regions’. According to Yu Takagi, professor of computational neuroscience and AI at Osaka University and co-author of the article, the main goal of the experiment was to understand how the brain processes music.
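One common way to quantify such a connection is a voxel-wise encoding analysis: predict each voxel's response from the model's internal features and measure how well the predictions correlate with measured activity on held-out clips. The sketch below is a generic version of that idea with placeholder data, not the paper's exact analysis.

```python
# Illustrative encoding analysis: predict voxel responses from model features
# and score the fit with held-out correlations. All data here are placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

n_clips, n_features, n_voxels = 540, 256, 6000
rng = np.random.default_rng(2)

features = rng.normal(size=(n_clips, n_features))  # model's internal representations (placeholder)
voxels = rng.normal(size=(n_clips, n_voxels))      # measured fMRI responses (placeholder)

Xtr, Xte, Ytr, Yte = train_test_split(features, voxels, test_size=0.2, random_state=0)

model = Ridge(alpha=1.0)
model.fit(Xtr, Ytr)
pred = model.predict(Xte)

# Pearson correlation per voxel between predicted and measured responses;
# voxels with high correlation are well explained by the model's features.
pred_c = pred - pred.mean(axis=0)
true_c = Yte - Yte.mean(axis=0)
corr = (pred_c * true_c).sum(axis=0) / (
    np.linalg.norm(pred_c, axis=0) * np.linalg.norm(true_c, axis=0) + 1e-8
)
print("mean voxel correlation:", corr.mean())
```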

Fig 4: Google Brain2Music AI

However, the article notes that because every person’s brain is wired differently, a model developed for one person cannot be directly applied to another.

    Furthermore, because capturing fMRI signals requires subjects to spend hours inside a scanner, this sort of technology is unlikely to become practical any time soon. Future research may, however, determine whether AI can reconstruct music that people merely imagine.
