Musebots (musical robots) are algorithmic software agents that generate or process music. Typically a network of musebots form an ensemble and communicate via OSC (Open Sound Control) messages. Humans can also participate in musebot ensembles as performers or controllers.
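To give a sense of what that communication looks like on the wire, here is a minimal sketch of encoding an OSC 1.0 message in pure Python. The address `/agent/tempo` and the port number are hypothetical illustrations, not part of the musebot protocol itself (the official specification defines the actual message names).

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per OSC 1.0."""
    b += b"\x00"
    while len(b) % 4 != 0:
        b += b"\x00"
    return b

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC message supporting int (i), float (f), and string (s) arguments."""
    msg = osc_pad(address.encode("ascii"))
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, bool):
            raise TypeError("booleans are not supported in this sketch")
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)  # 32-bit big-endian int
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)  # 32-bit big-endian float
        elif isinstance(a, str):
            tags += "s"
            payload += osc_pad(a.encode("ascii"))
        else:
            raise TypeError(f"unsupported OSC argument type: {type(a)}")
    return msg + osc_pad(tags.encode("ascii")) + payload

# Hypothetical musebot-style message announcing a tempo value.
packet = osc_message("/agent/tempo", 120.0)
```

In practice a musebot would send such a packet over UDP (e.g. with `socket.sendto`) to the ensemble's hub or conductor process; libraries like python-osc or Pure Data's netsend objects handle this encoding for you.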

I’ve been working on musebot development since 2015, mainly in Pure Data (although musebots can be written in a variety of software platforms), with intensive periods of musebot music making alongside international colleagues in late 2016 and late 2017.

Official information about the musebot protocol can be found on the Musical Metacreation site.

Check out the musebot specification version 1.0 here and version 2.0 here.

Papers I have coauthored about my experiments in making music with musebots include:

  • Eigenfeldt, Arne, Andrew R. Brown, Oliver Bown, and Toby Gifford. 2017. “Distributed Musical Decision-Making in an Ensemble of Musebots: Dramatic Changes and Endings.” In Proceedings of the International Conference on Computational Creativity, 88–95. Atlanta, GA: Association for Computational Creativity.
  • Brown, Andrew R., Matthew Horrigan, Arne Eigenfeldt, Toby Gifford, Daniel Field, and Jon McCormack. 2018. “Interacting with Musebots.” In Proceedings of New Interfaces for Musical Expression. Blacksburg, VA: NIME.