Google’s Deep Dream project used machine learning to turn everyday images psychedelic, but what happens when that processing power is aimed at the realm of music? The answer is Google Magenta, a nascent letter of the Alphabet that will create software producing melodies that send chills down a listener’s spine.
Although it was announced this weekend at a panel at Moogfest in Durham, NC, the Magenta team is still holding its cards close. Up until an hour after the panel, when Popular Science broke the story, there was nary a single Google result for the project. The search trail is still thin, but a presentation by Google research scientist Douglas Eck shed some light on what Magenta really is, with more to be revealed on June 1st.
The two key concepts driving the music project are surprise and attention. Deep Dream excels at generative iteration, but a quick poll of the crowd revealed that most users hadn’t interacted with the software more than once. Creating a machine learning experience that truly holds users’ attention requires a more complex approach. “Can we get past generation and move onto other things that make art art?” asked Eck.
It’s not hard to create software that produces music following simple genre templates like a concerto or techno; the challenge is to make it truly original. Eck aptly recognized that musicians typically accomplish this by either starting from a traditional form and iterating in ways that listeners wouldn’t expect, or building a song from a place of ambiguity and transitioning to a familiar resolution.
The Improvising Robotic Marimba player, another project displayed at Moogfest, is an example of Georgia Tech engineers designing robotics to transcend these musical tropes. The praying mantis-like robot is armed with a deep knowledge of musical forms, four dexterous limbs, and the ability to riff off fellow musicians in real time.
Google’s approach is less robotically minded, instead incorporating Google Brain machine learning algorithms into open source tools that other researchers can use to push the boundaries of audio experimentation. Eck compared the software to Moog synthesizers, with the hope that developers would use them in creative and unexpected ways. An early test of the Magenta principles actually took place on Robert Moog’s 78th birthday, with a synthesizer Doodle that iterated on user-supplied melodies.
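To make that idea concrete, here is a minimal sketch, assuming nothing about Magenta’s actual code, of what “iterating on a user-supplied melody” can look like in its simplest form: a first-order Markov chain that learns which notes tend to follow which in the input, then samples a new tune from those statistics. Every name in it is hypothetical, invented purely for illustration.

```python
import random

# Illustrative only (not Magenta's API): learn note-to-note transitions
# from a user-supplied melody, then riff on it by sampling new notes.

def build_transitions(melody):
    """Map each note to the list of notes that follow it in the input."""
    transitions = {}
    for current, nxt in zip(melody, melody[1:]):
        transitions.setdefault(current, []).append(nxt)
    return transitions

def riff(melody, length=16, seed=None):
    """Generate a new melody by walking the learned transition table."""
    rng = random.Random(seed)
    transitions = build_transitions(melody)
    note = rng.choice(melody)
    output = [note]
    for _ in range(length - 1):
        candidates = transitions.get(note)
        if not candidates:  # dead end: restart from a random input note
            candidates = melody
        note = rng.choice(candidates)
        output.append(note)
    return output

# Example: riff on the opening of "Twinkle, Twinkle" (MIDI note numbers).
user_melody = [60, 60, 67, 67, 69, 69, 67, 65, 65, 64, 64, 62, 62, 60]
print(riff(user_melody, length=16, seed=42))
```

A real system of the kind Eck described would swap the lookup table for a trained neural network, but the generative loop (learn from an input, then sample something new) is the same shape.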
It was a neat experiment, but the results were far from harmonious. Eck is well aware that once the technology reaches a point where it’s not only listenable but compelling, it creates a series of thorny artistic dilemmas. Much of the power of song lies in the personality of the songwriter, which, when emulated too perfectly, can detract from the original art. “Once you completely get the Beatles right, you can generate the 14th Beatles album. And the 15th. But the Beatles are the Beatles because they only have so many songs,” said Eck.
So even if Google does manage to enable machines to create spine-chilling melodies in the style of the Beatles, it might be best to let the band’s catalog be.