Jun 25, 2017
NOTE: This podcast is presented in collaboration with Synthtopia.com to showcase the people designing and implementing synthesizers. You can listen to the podcast here, on the Synthtopia website (in an embedded player), or on iTunes. You can also read the article, and search its content, in the transcription available here:
I first got to know the Magenta Project at Google when I heard a podcast with Douglas Eck. I subsequently interviewed him for my podcast, where we talked about using machine learning to do interesting work with composition. This led to an invitation to meet with the team; I got a great introduction to their work at their Mountain View headquarters, and first got to meet this week's guest, Jesse Engel.
But something interesting happened a few months ago: the project blindsided me when it put up details on 'NSynth'. This effort uses machine learning for music, but not for composition; rather, for sound design. Somehow, I never saw that coming, but it really makes a lot of sense, and it produces some pretty interesting results.
As part of this series on Synthesizer Design, we've talked to people about their past work in synth design. But it is also interesting to talk to someone about the future of synthesis: how computers might be brought into play to enhance sound design, and how machine learning can drive (and/or be driven into managing) massive parameter sets.
Jesse Engel breaks things down for us in this talk, and we get a chance to learn how big the datasets are, how all of that data might be managed, and how he goes about wrangling a bunch of scientists and statisticians into working with sound. Sometimes the results are as expected (a "better violin"), and sometimes not (the "cat flute"). It's a crazy ride, and I hope you learn as much as I did about one possible future for synth design. Enjoy!