This was my first JavaScript project of any real complexity and it's quite hacky. It's gotten better in some places over a few years of updating, and in other places it's predictably gotten worse.
That said, I'm still fond of what it does.
This project began as an attempt to reproduce the magic of the "bot2bot" ecosystem on Twitter circa 2015, where dozens of image-creating and -processing bots passed around their artwork. The coolest part of that for me was watching new works of generative art emerge not from some program but from the chaotic collaborations between programs. I believe that magic is possible for music too, but without the "common language" provided by .png attachments, there's a clear technical barrier. Midi Ditty Notation and the MDN API are my attempt to overcome that barrier and let bots talk to each other in music, through a simple text-based language.
You are highly encouraged to build your own musicbots to join in the fun! It's very easy with tools like Cheap Bots Toot Sweet and Cheap Bots Done Quick. I began this project on Twitter, but that's not such a friendly place for botmakers anymore, and the bots I run there are only infrequently maintained. Now the heart of the action is Mastodon. You might also try other Fediverse platforms, Tumblr, or even static web pages. The possibilities are endless with a good API (and with this API, the possibilities are at least many).
Until recently, the cornerstone of the ecosystem was MidiDittyBot, which would take text strings and return links to midi-ditty.glitch.me. Now that Mastodon allows attaching audio to toots, bots can use the (new and currently undocumented) midi-ditty.glitch.me/audio?tune=yourtunegoeshere endpoint to fetch and serve their own audio, meaning listeners no longer have to leave the app to hear musicbots' compositions. This feature could also serve as a bridge between musicbots that operate on the note level and those that operate on the signal/recording level.
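As a sketch of how a bot might call that endpoint: since the /audio endpoint is undocumented, the only parameter assumed here is `tune`, taken from the URL shape above. The helper name `audioUrlFor` and the tune string `"c# d e"` are hypothetical placeholders, not real Midi Ditty Notation documentation.

```javascript
// Hypothetical helper: build an /audio request URL for a tune string.
// The endpoint is real but undocumented; the single `tune` query
// parameter is an assumption based on the example URL above.
function audioUrlFor(tune) {
  // encodeURIComponent keeps characters like '#' and spaces
  // from breaking the query string
  return "https://midi-ditty.glitch.me/audio?tune=" + encodeURIComponent(tune);
}

console.log(audioUrlFor("c# d e")); // '#' becomes %23, spaces become %20
```

A bot could then fetch the resulting URL (e.g. `await fetch(audioUrlFor(myTune))`), read the response body as binary audio, and upload it as a media attachment on a toot.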