Technology's role in art creation will be debated and explored in 2021. Technologists and creatives are constantly experimenting with new ways art can be produced, consumed and monetized.
BT, the Grammy-nominated composer of 2010's These Hopeful Machines, has long been a global leader at the intersection of technology and music. He has produced and written for the likes of Death Cab for Cutie and Madonna, and has composed music for The Fast and the Furious and Smallville. He also helped pioneer production techniques such as stutter editing and granular synthesis. This spring, BT released GENESIS.JSON, a piece of software containing 24 hours of original music and visual art. It holds 15,000 unique audio and video clips, including field recordings of crickets and cicadas, and it lives on the blockchain. To my knowledge, it is the first work of its kind.
Ideas like GENESIS.JSON could be the future of original music: composers using AI and blockchain technology to create new forms of art. What makes an artist in the age of the algorithm? To find out, I spoke with BT.
What are your main interests in the interface between artificial intelligence and music?
This idea of what makes an artist fascinates me. In my common tongue, music, it's a small number of variables. There are 12 notes. We use a variety of rhythms. There is a vernacular of instruments and tones that we use, but when you add them all up it becomes a really rich data set.
It seems simple enough to make you wonder, "What's so special about an artist?" This is something I have been interested in for my entire adult life. When I first encountered research in artificial intelligence, I was immediately struck by the thought that music was low-hanging fruit.
We can now take the total output of an artist and convert it into a training set. It is a huge, multivariable training set; we don't even know all the variables. They are inferred automatically by CNNs (convolutional neural networks) and RNNs (recurrent neural networks).
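To make the "artist as training set" idea concrete, here is a minimal illustrative sketch, not BT's or Google's actual pipeline: a melody, written as MIDI pitch numbers, is flattened into (input, target) pairs for next-note prediction, the supervised framing behind most RNN-style music generators. All names here are hypothetical.

```python
# Illustrative sketch: turn a note sequence into supervised
# training pairs for a next-note prediction model.

def make_training_pairs(notes, context=4):
    """Slide a fixed-size window over a melody: each window of
    `context` notes is an input, the note that follows is the target."""
    pairs = []
    for i in range(len(notes) - context):
        pairs.append((tuple(notes[i:i + context]), notes[i + context]))
    return pairs

# A toy melody as MIDI pitch numbers (C4=60, D4=62, E4=64, ...).
melody = [60, 62, 64, 65, 67, 65, 64, 62, 60]
pairs = make_training_pairs(melody, context=2)
print(pairs[0])  # ((60, 62), 64)
```

Scaled up from one toy melody to an artist's entire catalog, pairs like these are exactly the "huge, multivariable" data set a neural network learns from.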
Such a collection of music can be used to train an algorithm to create original music, or music closely resembling what it was taught. How will musicians and music lovers respond if the genius of artists such as Coltrane and Mozart can be reduced to a training set that recreates their sound?
The closer we get, the more it becomes an uncanny-valley idea. Some might argue that music is sacred and touches fundamental aspects of our humanity. It is easy to have a spiritual conversation about music as a language, its power, and how it transcends time, culture, and race. Traditional musicians might respond, "That's impossible. It takes so much nuance, feeling, and your entire life experience to make music."
The engineer in me says, "Look at what Google has done." It's a simple MIDI-generation engine that has taken in all of Bach's works and can produce [Bach-like] fugues. Bach is a great example because he wrote so many fugues, and he is also the father of modern harmony. Musicologists have listened to some of the Google Magenta fugues and could not distinguish them from Bach's original work. This again raises the question of what it means to be an artist.
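Magenta's actual models are neural networks, but the core mechanic they share — learn note-transition statistics from a corpus, then sample new sequences from those statistics — can be sketched with a toy Markov chain. The corpus and function names below are hypothetical, purely for illustration.

```python
import random
from collections import defaultdict

def train_markov(notes):
    """Count, for each pitch, which pitches follow it in the corpus."""
    transitions = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new sequence by repeatedly picking a learned successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = transitions.get(out[-1])
        if not successors:  # dead end: no observed successor
            break
        out.append(rng.choice(successors))
    return out

# A toy "corpus" of MIDI pitches standing in for Bach's fugues.
corpus = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
model = train_markov(corpus)
print(generate(model, start=60, length=8))
```

The output is a new sequence that statistically resembles the corpus without copying it, which is, in miniature, the property that makes the Magenta fugues hard for musicologists to tell apart from Bach.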
I am both thrilled and terrified about the space we are moving into. Perhaps the question is not "Can we, and should we?" but rather "How do we do this responsibly, since it's happening?"
There are already companies scraping Spotify and YouTube to train models on the work of living artists. Companies are permitted to take the work of others and train models on it. But do we really want that? Shouldn't we speak to the artists first? Protective mechanisms must be put in place for musicians, visual artists, and programmers alike.