Update: 18/09/2016

A bit more experimentation with sampling and generation: the following samples are generated by sampling from the embedding space rather than from the output probabilities. They sound (at least to me) a bit better. Here are a few tracks:
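As a rough illustration of the idea (not the actual model code — the names and the noise scheme here are my own assumptions), the difference between the two sampling strategies can be sketched like this: instead of drawing a token from the softmax over output logits, one can perturb the predicted embedding vector and snap to the nearest token embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical token embedding table: vocab_size x embed_dim.
vocab_size, embed_dim = 128, 16
embeddings = rng.normal(size=(vocab_size, embed_dim))

def sample_from_probs(logits):
    """Baseline: draw a token index from the softmax over output logits."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

def sample_from_embedding(pred_embedding, noise_scale=0.1):
    """Embedding-space sampling: add Gaussian noise to the predicted
    embedding, then return the nearest token embedding (Euclidean)."""
    noisy = pred_embedding + noise_scale * rng.normal(size=pred_embedding.shape)
    dists = np.linalg.norm(embeddings - noisy, axis=1)
    return int(dists.argmin())
```

The intuition is that small perturbations in embedding space tend to land on tokens that are semantically close to the prediction, whereas softmax sampling can occasionally pick a high-entropy outlier.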

[soundcloud url="https://api.soundcloud.com/playlists/260388443" params="auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&visual=true" width="100%" height="450" iframe="true" /]


So I’ve been playing with different models for algorithmic generative music for a while as a side project. Lately, the models I’ve been building are starting to sound better than garbage.

I will write at length later about the details of the models, but suffice it to say for now that they are close to Deep Dynamic Graphical Models (what else would they be ;-)) trained on a bunch of MIDI files. The algorithm is capable of modelling multiple polyphonic instruments as a whole (meaning multiple instruments playing multiple notes with different values/durations per instrument at the same time), and hence it models the band more or less in its entirety.
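To make the "whole band" representation concrete, here is a minimal sketch of one common way to encode it (my own illustrative example, not the post's actual code): a joint piano-roll tensor of shape (timesteps, instruments, pitches), where each cell is 1 while that instrument holds that pitch.

```python
import numpy as np

# Illustrative dimensions: 8 timesteps, 4 instruments, 128 MIDI pitches.
N_STEPS, N_INSTRUMENTS, N_PITCHES = 8, 4, 128

def events_to_roll(events, n_steps=N_STEPS,
                   n_instruments=N_INSTRUMENTS, n_pitches=N_PITCHES):
    """Build a joint piano roll from (step, instrument, pitch, duration)
    tuples. Multiple instruments and multiple simultaneous notes per
    instrument are all captured in one tensor."""
    roll = np.zeros((n_steps, n_instruments, n_pitches), dtype=np.int8)
    for step, inst, pitch, dur in events:
        roll[step:step + dur, inst, pitch] = 1  # hold the note for its duration
    return roll

# Four instruments playing together at step 0, with different durations:
roll = events_to_roll([(0, 0, 40, 2),   # rhythm guitar
                       (0, 1, 52, 2),   # lead guitar
                       (0, 2, 64, 1),   # bass
                       (0, 3, 36, 1)])  # drums (kick)
```

A model over this tensor predicts the next timestep for all instruments jointly, which is what lets it capture the interplay between parts rather than each instrument in isolation.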

The following samples are generated from a heavy metal model with a drummer, a bass guitarist, and two electric guitarists – lead and rhythm. Aida and I call the band LnH 🙂

[soundcloud url="https://api.soundcloud.com/playlists/245793719" params="auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&visual=true" width="100%" height="450" iframe="true" /]