Artificial Intelligence as an opportunity and a threat to music in the media

by Stephan Eicke

The digital revolution of the past two decades has had a deep impact on film and production music. In the course of this series of articles, we will dive into various aspects of how the music-in-the-media landscape has changed and what this means for the creative minds in front of the computer.

The music industry has not been idle when it comes to fully functioning A.I. composers. Since the early 2010s, new programs have been developed whose purpose is to create music on their own, without a human being sitting in front of the computer to put melodies, harmonies and counterpoints together. Before this kind of “generic” music emerged – meaning music that was not written by a composer specifically for the film – there was library music. With it, producers and directors have been able to pay little money for generic musical cues that convey an atmosphere or a mood fitting a variety of scenes. In most cases, licensing such a piece is cheap, especially if the creator is exempt from receiving royalties, and in any case much cheaper than commissioning and recording a new score.

We are already one step further, though: the first A.I. composer goes by the name of Melomics and was first introduced in 2010.¹ The music, generated by a computer in a variety of styles, is freely downloadable – all one billion tracks. Melomics’ goal has been to “uncover the world’s largest music marketplace”; it lets the registered user access the royalty-free pieces (which all of them are) in formats such as MP3, MIDI, XML and even a printable PDF.

While Melomics hasn’t caught on as its developers originally expected, other companies have been working steadily on fully functioning A.I. composing services to stir up the music market. A company called AIVA developed the first A.I. to receive recognition as a composer – and therefore as a person by law – retaining its author’s rights despite not being a human being.² AIVA uses algorithms to produce music, and its developers went a slightly different route than those of Melomics by feeding the A.I. countless hours of classical music. The software then emulated the style of the music it had listened to and was able to write down its compositions in a sequencer and export them as a printable PDF, ready to be performed by actual human beings – which happened in June 2017, when AIVA’s opus 23 was performed for the National Day celebrations in Luxembourg, much to the pleasure of the country’s prime minister, Xavier Bettel: “Thrilled about This #Luxembourg based #Start-up developed Artificial Intelligence composing the music of the future.”³ However, how much additional work had to be done by human beings to make the piece ready for performance is unclear, especially since the company employs an artistic director in Olivier Hechon, whose job description poses this exact question without answering it: “Olivier is a proven musician, composer, arranger and orchestrator who works as Artistic Director for Aiva and makes sure that every music delivered to our clients are up to the highest standards.”⁴ The clients are the companies and private persons who commission a musical piece and immediately receive clearance for the master and publishing rights, enabling them to use the music for any purpose they desire. Depending on how specific their demands are, clients can also feed AIVA a melody, and the A.I. will then add the remaining ingredients accordingly.
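To make the idea of “listening” to a corpus and emulating its style a little more concrete, here is a deliberately tiny sketch: a first-order Markov chain over pitches. This is emphatically not AIVA’s actual method – modern systems use far more sophisticated deep-learning models – but it illustrates the basic principle of training on existing music and generating new material that statistically resembles it. All names and the toy corpus are invented for illustration.

```python
import random

def train(corpus):
    """Count which pitch tends to follow which in the training melodies."""
    transitions = {}
    for melody in corpus:
        for a, b in zip(melody, melody[1:]):
            transitions.setdefault(a, []).append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Walk the chain to produce a new melody in the corpus's style."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:          # dead end: restart from the opening pitch
            choices = [start]
        melody.append(rng.choice(choices))
    return melody

# Two short phrases as a stand-in "classical" corpus (MIDI note numbers)
corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
          [60, 64, 67, 72, 67, 64, 60]]
model = train(corpus)
print(generate(model, start=60, length=8, seed=1))
```

Every note the sketch emits was learned from the corpus, which is both its charm and its limitation: it can only recombine what it has heard, never invent a genuinely new vocabulary.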

While these compositions are designed to stand on their own rather than accompany a video, other platforms have arisen to fill that niche. Jukedeck, for example, lets the user upload a video and have the A.I. create music for it.⁵ The user’s control lies in choosing the musical style and instruments and in deciding which peaks of the scene the music is supposed to hit. By paying royalties for the use of the resulting piece – or by signing up as a Gold member of the Jukedeck platform – the user can then release it publicly. By July 2017, Jukedeck had created more than half a million musical pieces and, according to its website, supplied its services to clients in 169 countries.

Amper Music is quite similar.⁶ As with Jukedeck, the user can upload a film clip and choose the instruments, the musical style and the peaks the music is supposed to hit. Less than a minute after the user hits the Create button, the piece comes into existence; it can afterwards be changed in various ways, such as adjusting the tempo, moving the climax, or deciding where the music starts and ends.

A.I. composing technology raises the question of whether, or for how long, library music written by human beings will still be required. Library music has a long history in film and television: as early as the 1930s, movies were supplied with pre-existing pieces from a library. In the last few years, more and more renowned film composers have agreed to write, produce and record library music for corporations, among them Javier Navarrete and Joe Kraemer. However, they are not the only ones. Countless composers all over the world earn their living by writing library music they license to film studios and television stations, to game developers and to companies that need something gripping for their newest commercial. If algorithmic music continues to improve musically – which it will, since its developers profit financially from the improvement – it is highly doubtful that composers writing library music will be able to keep their jobs. Instead, they could be replaced by machines, just as workers at the conveyor belt were. Says Rens Machielse, head of the department for Music and Technology at the HKU University of the Arts in Utrecht, Netherlands: “I think to a certain extent it can be dangerous. It will be easy to generate music that simply needs to be there in television as a background filler. The profession of composition for media, for film and documentary, will still be there but only if you really need something special. If you really need something to be produced the fast way then it will be done with A.I.”

Carter Burwell, who follows the developments in the A.I. industry closely, shares a similar sentiment: “[Film] is the one place where I think it conceivably could [have an impact]. I don’t think you could simply put a film through the machine and expect it to come up with an interesting score for that. It can come up with a score, but I don’t think it can come up with an interesting score. I would like to say that mostly when people are talking about A.I. music they are talking about trying to come up with a neural net that imitates the work that humans do. But if you were trying to see what kind of new ideas machines might have about music, I think that is interesting. I think that there are films for which that approach might be interesting. It is an interesting area to go into when the machine comes up with some melodic structure that you would have never thought of or a sonic palette that no human could have come up with.”

Will this be the case? Will composers who focus entirely on technology and samples find themselves obsolete, their jobs taken over by A.I.s, unless they produce musical works that no machine could write under any circumstances? Says Nan Schwartz: “I know people whose idea of a good time is reading a manual and learning how all that gear works and being into that whole world. Those are the people that I don’t hear a lot of music coming out of. It’s like a carpenter who only sees nails. You only see what you are focused on. And if you are thinking about technology all day long and finding a sample and a sound that’s going to be your focus and that’s what you will excel at.” A.I.s could therefore very well be the ultimate motivator for composers to dive into counterpoint, harmony and orchestration in order to create spellbinding, carefully crafted compositions that no machine could come up with; to develop a musical voice of their own; and to be the client’s therapist, guiding them through the process of adding music to their production – and not just any music, but music that comes into existence when the client’s and composer’s minds meet. Pushing a button on the computer is not enough.

End Notes: