I have been studying different music programming languages for years and eventually landed on the design of Glicol:
https://glicol.org

I think the combo of Overtone and Emacs is really cool. Essentially, Overtone, TidalCycles, Sonic Pi and FoxDot are note/OSC message generators for SuperCollider, and perhaps the post should mention that. The reason is that once users start to care about audio synthesis and "sound-based music", they will find a missing concept between the pattern-based language abstraction and the raw audio float numbers. My experience is that a user who does not understand this process finds it very hard to master the language, tends to forget the syntax, and is prone to errors.

One example: before I made Glicol, I made QuaverSeries (https://quaverseries.web.app/), which shares a very similar syntax with Glicol. I would call it a functional wrapper for Tone.js. But as with any functional programming language, even I forget the input/output type of each function after not using it for a while. In Glicol, this problem is solved from day one, because every node's input and output is an audio stream (see the short sketch at the end of this comment). One reason I call Glicol "next-generation computer music" is partly that we are now in an era when browsers can handle real-time, GC-free audio, and an audio-first approach makes for a modern design.

In designing Glicol, my experience is that starting from the audio level affects the language design a lot. Balancing readability, simplicity, ergonomics for fast typing during a live coding performance, error handling, the abstraction from audio up to the language, and a low learning curve is really an art.
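To illustrate the audio-stream point, here is a rough sketch in Glicol-style syntax, written from memory of the glicol.org examples (node names and argument order may not be exact). Because every node reads and writes an audio stream, a modulation chain can feed any parameter of another chain without the user having to remember per-function input/output types:

  // reference chain (name starts with ~): an LFO scaled into a cutoff range
  ~mod: sin 0.5 >> mul 300.0 >> add 600.0

  // output chain: saw oscillator -> low-pass filter modulated by ~mod -> gain
  out: saw 110.0 >> lpf ~mod 1.0 >> mul 0.3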