String and Tins
Wed, 16 May 2018 13:05:46 GMT
We’re finally at a point where if someone mentions 'artificial intelligence' we don’t immediately picture HAL 9000 or Terminator. Alexa, Siri or Cortana have been chilling in our pockets, desks and kitchens for a while, but there’s more to AI than personal assistants. What’s interesting to me is how AI can be used to extend or maybe limit our creativity. So with these clever bots regularly beating chess grandmasters and even starting to upstage doctors, are they any good at music and sound? Can they emulate emotion as well as logic?
Artificial Neural Networks (ANNs) are computing systems of nodes comparable to neurons in the brain. These systems learn and evolve by examining examples and drawing on previous tasks and experiences, much as we do. ANNs have driven the development of things like voice recognition, stock market prediction and the way search engines categorise and serve up data. Google’s ‘Deep Dream’ has been knocking out acid-trip-esque artwork since 2015, examining existing artwork and spitting out something new with its own spin.
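To make that "learning from examples" idea concrete, here's a toy sketch of a single artificial neuron that learns the logical OR function by nudging its weights whenever it gets a training example wrong. Everything here (the numbers, the names) is invented for illustration; real ANNs stack many thousands of these nodes.

```python
def step(x):
    """Fire (1) if the weighted input is positive, else stay quiet (0)."""
    return 1 if x > 0 else 0

# training examples: (inputs, expected output) for logical OR
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for _ in range(20):  # a few passes over the examples
    for (a, b), target in examples:
        output = step(weights[0] * a + weights[1] * b + bias)
        error = target - output
        # learn from the mistake by nudging each weight a little
        weights[0] += learning_rate * error * a
        weights[1] += learning_rate * error * b
        bias += learning_rate * error

print([step(weights[0] * a + weights[1] * b + bias) for (a, b), _ in examples])
# → [0, 1, 1, 1]
```

After a handful of passes the neuron reproduces OR perfectly; no one programmed the rule in, it emerged from the examples.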
Deep Style examines and interprets existing artwork to create its own
It’s not just digital pictures: ANNs can examine any data and attempt to interpret what they find. Feed a song or artist into the right code and it can actually learn what it is about the music that makes it sound the way it does to us, boiling emotions and feelings down into ones and zeroes to be replicated. If you do a little Web scouring you’ll find fairly recognisable examples of AI replicating a band’s sound, like this Beatles soundalike created by the guys at SONY CSL Research Lab.
Daddy's Car: a song composed by Artificial Intelligence - in the style of the Beatles
A Google research project called Magenta is creating a ton of tools to help you collaborate with AI, like the pretty basic but intriguing AI-Duet. Bash a few notes in and the AI will respond, either attempting to finish your musical phrase or answering with something new based on what it’s learned from the music it’s been fed behind the scenes. Some of these examples are a little rudimentary, but it’s not a huge leap to imagine ticking a few check boxes and waiting a few moments for a fresh composition built around your criteria, all based on everything that’s come before.
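AI-Duet itself uses a neural network, but the "respond based on what it's learned" loop can be sketched with something far simpler: a first-order Markov chain that counts which note tends to follow which in some training melodies, then continues whatever phrase you bash in. The melodies and note names below are made up for illustration.

```python
import random
from collections import defaultdict

training_melodies = [
    ["C", "D", "E", "F", "G", "E", "C"],
    ["C", "E", "G", "E", "D", "C"],
    ["G", "F", "E", "D", "C"],
]

# count which note follows which across everything the model has "heard"
transitions = defaultdict(list)
for melody in training_melodies:
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)

def continue_phrase(phrase, length=4, seed=0):
    """Bash a few notes in; get a continuation drawn from the counts."""
    rng = random.Random(seed)
    result = list(phrase)
    for _ in range(length):
        options = transitions.get(result[-1])
        if not options:  # never heard this note before; stop responding
            break
        result.append(rng.choice(options))
    return result

print(continue_phrase(["C", "D"]))
```

Swap the counting table for a trained neural network and you have the shape of the real thing: listen, model what tends to come next, answer in kind.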
Google’s new NSynth Super is an attempt at bundling AI neural network tech into a clean-looking, tactile, bleepy-bloopy box. NSynth Super is effectively a tub of buttons, knobs and a touchscreen with a Raspberry Pi mini PC under the hood. The synth takes two existing sounds, learns what it is about the characteristics of those sounds that makes them sound the way they do to us, then combines those characteristics to create something totally new. Want to make a car exhaust sound like a lion’s roar? A sparkle fart? A major shampoo explosion? Maybe collaborating with AI like this will allow us to create something new that we haven’t heard before.
Google’s NSynth Super
The Future of Intelligent Computers in Art
These technologies have progressed at an incredible rate over the past decade because simply knowing isn’t enough: computers need to understand and interpret that information. With this understanding, AI tools will make creation quicker, easier and automatable. Which must be a good thing, right? I just wonder if we’ll get to a point where AI takes the lead in these human-machine collaborations. If I’m stuck for a new idea, I’ll generally dive straight into a search engine to look for inspiration. That content is already curated and refined based on what the AI knows about me, tailored to my personality and my history. So maybe I’m already being pigeonholed into a certain way of thinking rather than being exposed to something broader. You know that once a particularly creative YouTube video goes viral, there’s a good chance something similar will find its way into an ad campaign of some sort in the coming months. AI, and the companies behind it, have a huge say in deciding what goes viral. Are we destined to become more and more trapped inside our refined, AI-served bubble?
With the efficiency these systems bring, we’ll be expected to achieve more, more quickly. Perhaps we need to make sure we keep time to talk, experiment and discover outside of the computer and outside of our own little worlds, digital or otherwise, to keep our creativity alive.
I think my job as a sound creator and curator is safe for now; these AI guys aren’t there just yet. I might just get Siri to set me a reminder to check again in another five years.
Lawrence Kendrick is a sound designer / composer at String and Tins
Genres: Music & Sound Design