Generative algorithms are redefining the intersection of software and music

What if you could mix and match tracks from your favorite artists, or create entirely new ones in their voices?

This could become a reality sooner rather than later, as AI models similar to those used to generate computer art and embed deepfakes in videos are increasingly being applied to music.

The use of algorithms to create music is not new. Researchers used computer programs to generate piano sheet music as far back as the 1950s, and composers from that era, such as Iannis Xenakis and Gottfried Koenig, even used them to compose their own music.
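
Many of those early systems were stochastic: a piece was assembled by sampling notes from hand-tuned probability tables. To give a flavor of the idea, here is a minimal Python sketch of a first-order Markov-chain melody generator; the note set and transition weights are invented for illustration, not drawn from any historical program.

```python
import random

# Toy first-order Markov chain over notes of a C-major scale.
# Each entry maps a note to (next_note, weight) pairs; the weights
# here are made up purely for demonstration.
TRANSITIONS = {
    "C": [("D", 0.4), ("E", 0.3), ("G", 0.3)],
    "D": [("C", 0.3), ("E", 0.4), ("F", 0.3)],
    "E": [("D", 0.3), ("F", 0.4), ("G", 0.3)],
    "F": [("E", 0.5), ("G", 0.5)],
    "G": [("C", 0.4), ("E", 0.3), ("F", 0.3)],
}

def generate_melody(start="C", length=16, seed=None):
    """Walk the chain, choosing each next note by its transition weight."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        notes, weights = zip(*TRANSITIONS[melody[-1]])
        melody.append(rng.choices(notes, weights=weights, k=1)[0])
    return melody

print(" ".join(generate_melody(seed=42)))
```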

What has changed is the sophistication of generative algorithms, which first gained popularity back in 2014, combined with access to large amounts of compute power; together, these advances are expanding what computers can do with music today.

OpenAI recently released a project called Jukebox, which models music directly as raw audio to generate entirely new tracks conditioned on a chosen genre, artist and lyrics. Meanwhile, tools such as Amazon’s AWS DeepComposer and the open-source libraries from the Google Magenta project are making it easier for developers to experiment with deep learning and music.
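
Magenta’s open-source note_seq package gives a feel for how these tools represent music in code. The short sketch below, following the pattern in Magenta’s introductory tutorials, hand-writes an eight-note phrase and saves it as a MIDI file; in practice Magenta’s trained models would generate the notes, and exact APIs may vary by version.

```python
# Minimal sketch using Magenta's note_seq package (pip install note-seq).
import note_seq
from note_seq.protobuf import music_pb2

seq = music_pb2.NoteSequence()

# Hand-write a short C-major phrase as half-second notes; a trained
# generative model would normally produce these notes instead.
for i, pitch in enumerate([60, 62, 64, 65, 67, 65, 64, 62]):
    seq.notes.add(
        pitch=pitch,
        start_time=i * 0.5,
        end_time=(i + 1) * 0.5,
        velocity=80,
    )
seq.total_time = 4.0
seq.tempos.add(qpm=120)

# Write the sequence out as a standard MIDI file.
note_seq.sequence_proto_to_midi_file(seq, "phrase.mid")
```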

On the commercial side, startups such as Amper Music, which lets users create customized, royalty-free music, are seeing businesses adopt computer-generated pieces for use cases such as background tracks for videos, and record labels have begun experimenting with music written by AI.

As the technology matures and the quality of computer-generated music improves, it will likely reshape the media industry, from individual artists to record labels to music streaming companies, and raise a slew of legal questions about the rights to computer-generated music.


Source: TechCrunch
