Google’s Brain2Music AI reconstructs songs you heard from your brain signals

Google is working on a new AI model called Brain2Music, which uses brain signals to produce music. According to the researchers, the AI generates music that matches the rhythm, vocals, and melody of a song the person previously listened to, reconstructing it by interpreting the listener’s brain activity.
In a recent paper posted on the arXiv preprint server, the researchers explain how Brain2Music makes use of functional magnetic resonance imaging (fMRI) data: the model decodes brain signals recorded while a participant listens to a song and produces music resembling the segments the person just heard.
fMRI tracks the flow of oxygen-rich blood in the brain to reveal which areas are most active. Using this technique, the researchers scanned five participants while each listened to 15-second music clips spanning genres such as jazz, hip-hop, classical, country, and pop.
The data collected from the participants was then fed into MusicLM, another Google AI model designed to generate music from text. The model produced snippets that resembled the original songs, though not with perfect accuracy. Exact reconstruction was never the point, however; what has astounded observers is how far an AI can go in reading music from the brain. The work also underscores how central music is to us and how our brains receive it bound up with emotion.