
Source: Libération 14-03-2020

More and more musicians are experimenting with artificial intelligence, which already knows how to write songs on its own and will no doubt be behind tomorrow’s hits. But the real revolution is to be sought elsewhere: in the machine’s ability to assist human creators and enter into dialogue with them.

In 1994, the English electro label Warp released a series of albums entitled Artificial Intelligence, produced by humans. Today, it is the turn of artificial intelligence (AI) to produce music. At least, that is what a widespread misconception suggests: that here too, the machine will end up replacing humans. In reality, at this stage, AI is finding its place as an aid to creation and production. Research in this field is well underway, among both private and public actors, such as, in France, Ircam, the venerable Institute for Research and Coordination in Acoustics/Music founded by Pierre Boulez in the 1970s. What is AI? In short, it means implementing in a computer elaborate behaviours usually attributed to humans. Artificial intelligence, which appeared in the late 1950s, has had its ups and downs.

“The first phase consisted of programming techniques to reproduce human reasoning,” explains Gérard Assayag, a researcher at Ircam. “Since the 1970s, it has evolved towards connectionist systems, including the famous neural networks, which raised hopes but did not produce the expected results. They have regained interest over the past fifteen years thanks to the technique of deep learning.” This “deep learning” gave the computer the mass of historical data it lacked in order to infer rules and make decisions. After swallowing an entire repertoire, the machine can churn out a song in the manner of an artist who never asked for anything, such as 2016’s Daddy’s Car, written in the style of the Beatles, which sounded like a bizarre caricature of the Fab Four. “It’s a way to make a song close to a hit while sidestepping the copyright problem,” says Frank Madlener, director of Ircam. “Beyond the resemblance, the creative interest is very limited.”
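To make this concrete, here is a deliberately tiny sketch of the corpus-to-generation principle the researchers describe. Real systems use deep neural networks; this toy stand-in uses a first-order Markov chain over invented note names, but the workflow is the same: ingest a repertoire, extract statistics, then sample new material “in the style of” what was learned.

```python
import random
from collections import defaultdict

# Toy corpus: melodies as sequences of note names (invented for illustration).
corpus = [
    ["C4", "E4", "G4", "E4", "C4"],
    ["C4", "D4", "E4", "G4", "E4", "D4", "C4"],
    ["E4", "G4", "A4", "G4", "E4", "C4"],
]

# "Learning": count which note tends to follow which. A first-order Markov
# model is a drastic simplification of a deep network trained on a repertoire,
# but the ingest-then-generate pattern is the same.
transitions = defaultdict(list)
for melody in corpus:
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)

def generate(start="C4", length=8):
    """Sample a new melody 'in the style of' the corpus."""
    melody = [start]
    for _ in range(length - 1):
        followers = transitions.get(melody[-1])
        if not followers:  # dead end: no observed continuation
            break
        melody.append(random.choice(followers))
    return melody

print(generate())  # e.g. ['C4', 'E4', 'G4', 'A4', 'G4', 'E4', 'C4', 'D4']
```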

“A gigantic world is opening up”

To take better advantage of this revolution, Björk had the machine ingest a whole stock of her own choral recordings. Dubbed Kórsafn (“collection of choirs”), her installation in collaboration with Microsoft for the lobby of a New York hotel produces ambient music that varies according to the seasons, the weather, and the presence of birds or airplanes, thanks to information from a camera installed on the roof.

“The power of AI and its ability to learn open up a gigantic world. The machine will make proposals to the human, who will listen to them and be able to react,” continues Frank Madlener. Yacht, an American duo signed to the DFA label, spent three years training a system based on Magenta, the AI developed by Google, by feeding it 82 of their old songs. “They then gave the machine the tracks of the songs they were working on, as well as lyrics,” explains Doug Eck, a researcher at Google’s Paris lab. “The AI used them to build a model that generated melodies, harmonies, rhythms and lyrics.” Google does not intend to invest in the music sector but uses it as a field of experimentation, in order to enrich its consumer services, such as YouTube, with new creative and playful features. A good deal for Yacht, whose seventh album, Chain Tripping, came out reinvigorated. The American artist Holly Herndon pulled off a similarly thrilling experiment on her album Proto, released last year. Here it is above all the treatment of voices that fascinates, from the marriage of real and virtual choirs to an astonishing creation inspired by Sacred Harp singing, one of the oldest choral traditions in North America. “Working with AI brought me back to traditional ways of collaborating, when you have to organize a group of people to sing through shared forms of language and song,” says Holly Herndon. Human first, then.

An AI in my guitar

Ircam’s current research aims to engage the musician in real-time dialogue with the machine. “Our latest project aims to bring three types of improvisation together in one tool: free, creative and structured,” says Gérard Assayag. In other words, the computer will “jam” and react live to a musician’s playing while following a scenario defined beforehand. The next step for this researcher is for the digital to join the physical by embedding algorithms in acoustic instruments, such as the guitar developed with the French start-up HyVibe: “Sensors detect movements or vibrations, then send the information to an on-board computing unit that analyzes it and makes decisions. A mechanism can thus increase the volume, add sound effects, improve the acoustic quality… There will be a creative dimension, so that the guitar produces harmony or orchestration on top of the guitarist’s playing. We are at the beginning of this fascinating project; everything remains imaginable.”
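The sense-analyze-act loop Assayag describes can be pictured schematically. HyVibe’s actual firmware is not public, so everything in the sketch below, from the sensor reading to the thresholds and effect names, is an invented placeholder that only mirrors the structure he outlines.

```python
import random
import time

# Purely illustrative: the sensor reading, thresholds and effect names
# below are invented placeholders, not HyVibe's proprietary firmware.
GAIN_BOOST = 1.5            # hypothetical volume multiplier
QUIET_THRESHOLD = 0.2       # hypothetical: add ambience to soft playing
AGGRESSIVE_THRESHOLD = 0.8  # hypothetical: saturate hard playing

def read_vibration_level():
    """Stands in for a piezo/accelerometer reading, normalized to [0, 1]."""
    return random.random()

def apply_to_soundboard(gain, effect):
    """Stands in for driving the actuator that vibrates the guitar body."""
    print(f"gain={gain:.2f}, effect={effect}")

def control_loop():
    while True:
        level = read_vibration_level()    # 1. sensors capture vibrations
        if level > AGGRESSIVE_THRESHOLD:  # 2. on-board unit analyzes, decides
            effect = "overdrive"
        elif level < QUIET_THRESHOLD:
            effect = "reverb"
        else:
            effect = "clean"
        apply_to_soundboard(level * GAIN_BOOST, effect)  # 3. act on the sound
        time.sleep(0.1)                   # pacing for the toy loop

if __name__ == "__main__":
    control_loop()
```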

Jean-Michel Jarre believes in this revolution and is contributing to it. The musician’s latest creation takes the form of a mobile application called EōN, developed with Sony CSL, the Japanese giant’s research laboratory. With it, he joins the dream of an infinite work that Brian Eno pursued in 2017 with the album Reflection. “The idea is that the algorithm rearranges, reorganizes…
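The engine behind EōN is proprietary to Sony CSL, but the principle of an infinite work, a piece that endlessly rearranges a finite pool of composed material so that it never repeats exactly, can be sketched in a few lines; the fragment names and timings below are invented for illustration.

```python
import random
import time

# Toy illustration of "infinite music": a finite pool of composed fragments
# (names invented here) recombined endlessly, so the stream keeps evolving
# without an exact repeat. EoN's real engine, built with Sony CSL, is not public.
FRAGMENTS = ["pad_a", "pad_b", "arp_1", "arp_2", "bass_drone", "bell_motif"]

def next_fragment(previous):
    """Pick any fragment except the one just played, to avoid literal repeats."""
    return random.choice([f for f in FRAGMENTS if f != previous])

def infinite_piece():
    current = random.choice(FRAGMENTS)
    while True:  # the piece has no end; it is re-generated as it plays
        volume = round(random.uniform(0.4, 1.0), 2)
        print(f"playing {current} at volume {volume}")
        time.sleep(1)  # stands in for the fragment's real duration
        current = next_fragment(current)

if __name__ == "__main__":
    infinite_piece()
```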
