
Functional, declarative audio applications

83 points by trypwire, almost 4 years ago

17 comments

matheist, almost 4 years ago
Very ambitious and looks like quite an accomplishment! I've been working recently on several DSP projects, including for web and embedded applications, and I definitely appreciate writing performant code without having to write C++ by hand.

I couldn't tell from your post and linked material how the runtime works — I see that the high-level graph is handed over to the runtime, but is it interpreted? Compiled? Does it require platform-specific binaries?

For my current projects I settled on Faust [1], which has the advantage of compiling to C++ source files (which can then be used with any platform/framework that supports C++), but the disadvantage that swapping components in and out (as you describe in the linked article) is not so easy.

[1] https://faust.grame.fr/
_def, almost 4 years ago
As an electronic musician and a dev who's interested in getting into DSP, React, and FP... this is awesome :) Thank you
ur-whale, almost 4 years ago
No actual sound (other than the guy's voice) played during the entire video.

Let's see (and hear) a drum module built with the framework.
stefanha, almost 4 years ago
Does this support writing audio sample processing code in JavaScript? From the examples it looks like JavaScript code connects together pre-existing audio processing nodes, but it's unclear if you can write your own nodes in JavaScript.

Also, the use of the term "pure function" confused me. Pure functions have no side effects (https://en.wikipedia.org/wiki/Pure_function). Why is this necessary? It seems like non-pure functions could generate the graph of nodes too, so restricting ourselves to pure functions seems like a limitation. Or does the term "pure function" actually mean "in functional programming style" (i.e. functions, not objects) in this blog post?
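The purity question stefanha raises can be sketched with a toy "virtual audio graph" model. This is purely illustrative (not Elementary's actual internals): the point is that if the function building the graph is pure, two calls with the same inputs produce structurally identical descriptions, which is what lets a runtime diff successive graphs and patch only what changed.

```javascript
// Toy model of a virtual audio graph: nodes are plain, immutable
// descriptions of processing, and a pure function maps parameters to
// a graph. No audio is produced here; this is just data.
const node = (kind, props, ...children) => ({ kind, props, children });

// A pure "voice": frequency and gain in, graph description out.
const voice = (freq, gain) =>
  node('mul', {}, node('const', { value: gain }), node('sine', { freq }));

// Purity makes structural comparison meaningful:
const a = voice(440, 0.5);
const b = voice(440, 0.5);
const c = voice(880, 0.5);

const same = (x, y) => JSON.stringify(x) === JSON.stringify(y);
console.log(same(a, b)); // true  -> runtime can skip re-rendering
console.log(same(a, c)); // false -> runtime patches the changed node
```

An impure builder could produce the graph too, as stefanha notes, but then identical inputs would no longer guarantee identical descriptions, and this diffing shortcut would be unsound.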
crucialfelix, almost 4 years ago
I released a similar declarative node audio library, using SuperCollider as the audio backend.

There was still work to do to bring it to its full potential. I think audio apps are a great fit for this virtual audio graph paradigm.

Congratulations on the release, looks great!

It's the dryadic components here: https://crucialfelix.github.io/supercolliderjs/#/packages/supercolliderjs/api

The examples repository has more.

Work and babies cut me short from finishing it, though.
fancy_hammer, almost 4 years ago
Nice work!

Is voice triggering with MIDI sample-accurate?

What's the story for building VST plugins? How much C++ glue code do you need to write?

The amount of effort required to build a VST plugin is very off-putting. IMHO it's hard to justify unless it's a commercial project. I think we would see more idiosyncratic and creative plugins if building one took weeks instead of years.
moritzwarhier, almost 4 years ago
Very interesting. Must admit I don't understand calls like el.lowshelf(...) in the example.

Is el related to the user interface, or does it already describe signal processing in some declarative way?

At first glance, I thought the actual processing is invoked when core.render is called.

Anyway, maybe I just have to read more about the project or watch the video.
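One plausible reading of moritzwarhier's question, consistent with this style of library: el.* calls return only descriptions of signal processing, and nothing touches the audio engine until the graph is handed to a render call. The mock below sketches that shape; the node names, argument orders, and render behavior here are assumptions for illustration, not Elementary's real API or implementation.

```javascript
// A node constructor factory: each el.* call returns a plain
// description object, never running any DSP itself.
const mkNode = (kind) => (...args) => {
  const children = args.filter((a) => typeof a === 'object');
  const params = args.filter((a) => typeof a !== 'object');
  return { kind, params, children };
};

const el = {
  cycle: mkNode('cycle'),       // sine oscillator description
  lowshelf: mkNode('lowshelf'), // shelving filter description
};

// "render" here just counts nodes, standing in for a real engine,
// which would compile or interpret the graph on the audio thread.
const countNodes = (n) =>
  1 + n.children.reduce((sum, c) => sum + countNodes(c), 0);
const core = { render: (graph) => countNodes(graph) };

// Building the graph is cheap, side-effect-free JavaScript...
const graph = el.lowshelf(200, 1.0, 6, el.cycle(440));
// ...and only render() involves the engine at all.
console.log(core.render(graph)); // 2 (lowshelf -> cycle)
```

So in this model el is unrelated to the user interface: it's a vocabulary for describing the signal chain declaratively, and moritzwarhier's first-glance intuition (processing starts at core.render) matches where the engine actually gets involved.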
otikik, almost 4 years ago
I like the idea, a lot!

My only fear is that at first glimpse it seems biased towards someone familiar with React and its quirks, down to a `state` variable that gets passed around through callbacks. I would personally prefer a more "vanilla JavaScript" approach. But what constitutes "vanilla" these days?
maxbendick, almost 4 years ago
Incredible! I have a toy Web Audio livecoding project that I've been dying to get working efficiently in VST form. Looks like Elementary could be a great way to do this.
FrankyFire, almost 4 years ago
Honest question, as I'm not really deep into this: what are the differences from the JS runtime Reaper integrates into its DAW (which can also run outside of Reaper)?
Aarvay, almost 4 years ago
Would've loved it if it wasn't a JUCE wrapper. Nevertheless, some great work here. The same approach can be applied to any non-JUCE stuff too, which would be kickass! :)
stagas, almost 4 years ago
How does the dual licensing (AGPLv3 plus commercial) work? If one creates a derivative, are the contributions also dual-licensed automatically, or how is it done?
JohnCurran, almost 4 years ago
This looks very cool, I will check it out for sure.

One note, minor typo here: "I want the signal flow through my applicatioin to look like S."

Applicatioin -> application
hootbootscoot, almost 4 years ago
Interesting. How does this compare to the various browsers' "Web Audio" implementations?
nyanpasu64, almost 4 years ago
EDIT: I didn't mean to sound so overconfident or to show off my credentials. This is my personal view given what I know, but there are things I don't know (like good vs. bad OS audio APIs, how to tune kernels and pick good audio drivers, and whether you can get away with allocations, mutexes, or a JITted/GC'd language on the audio thread). It's possible that writing audio code in JS or non-real-time C++ won't cause problems for users in practice, as long as it runs on a dedicated thread, doesn't contend with the GUI for the event loop, and processes notes synchronously and deterministically, but I haven't tried. For various opinions on this from people who have done more experimentation than me, see https://news.ycombinator.com/item?id=27128087 and http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing.

----

I've spent many years working on computer music and DSP in general, and the last two years writing interactive audio apps like DAWs and trackers (traditionally written in real-time code).

Personally I feel that not only should the signal processing be handled in compiled code, but the process of allocating voices for incoming notes and sweeping filter parameters for existing notes (like a VST/VSTi plugin) should be real-time/allocation-free and deterministic.

Additionally, for a sequencer or DAW-like program playing a scripted project file, the process of scanning through a document and determining the timing of events should also be real-time and deterministic, which requires it to be synchronous with the code driving synthesis/notes/envelopes. (Some Web Audio music demos I've seen on HN, like the 64-step drum sequencer [1] or The Endless Acid Banger [2], are not deterministic. On a slow computer, when the browser's JS engine stutters, notes stop being triggered even though existing notes sustain.) I think that requires that the "document" is stored in C/C++/Rust structs and containers, rather than GC'd objects like in JS.

(Processing user input depends on low latency without latency spikes, rather than determinism and synchronous processing, but I don't know if browsers are good at that either.)

At this point, I find it significantly easier to write a GUI in native toolkits like Qt than to learn and write bindings for a GC'd language to access native data structures. And unfortunately there is a limited selection of mature native toolkits that are not buggy (Qt is buggy) and have accessibility and internationalization, and optionally theming and native widgets. I still believe that writing a GUI in a native language *can* become a better user and developer experience than browsers, if more people invest effort and resources into better paradigms or implementations to fix native's weaknesses compared to browsers (buggy libraries, no billion-dollar megacorps funding development, and apparently people nowadays think that React using a virtual DOM and recomputing parts of the UI that don't change is a strength), while maintaining its strengths (fewer resources required, more predictable memory and CPU usage, and trivial binding to native code).

What's the current state of native desktop GUIs? Qt is nearly good enough, but is run by a company trying to antagonize the open-source community and rip off commercial users, binds you to C++ (which is less pleasant than Rust), suffers from bugs (some neglected, deep-seated API design flaws), and handles reactivity poorly (though QProperty and transactions [3] promise to improve that aspect). GTK has cross-language support, but in gtk-rs, GTK's refcounted design and inheritance and Rust's move-based design and composition fight each other. There are other, older toolkits like wxWidgets, FLTK, FOX, etc., many of which I personally dislike as a user. Flutter is promising, but still buggy; its upper layers are virtual-DOM-based (I have reservations), and it feels foreign on desktops (many missing keyboard shortcuts; one app, FluffyChat's unfinished desktop port, is missing right-click menus, has broken keybinds, and has painfully slow animations).

Is there an alternative? Where would you draw the seam between a GC'd GUI and a real-time audio engine, to minimize the boilerplate glue code and ensure that note processing and audio synthesis are real-time and deterministic (does this require them to be synchronous with each other)?

I've seen a lot of software with a non-real-time or nondeterministic audio engine/sequencer (even though I personally think it's bad for users and unacceptable when I design a program). For example, ZorroTracker has an audio processing path written in JS (an RtAudio thread calling into V8 running in parallel with the GUI thread, calling JS bindings to native .dll/.so audio generators), coupled with a (planned) sequencer written in JS and operating on JS objects, giving up on real-time. As I've mentioned, several browser-based audio projects I've seen on HN generate audio in Web Audio (real-time), but trigger notes asynchronously in non-real-time JS code.

> Eventually it dawned on me that the task of composing audio processing blocks is itself free of those realtime performance constraints mentioned above: it's only the actual rendering step that must meet such requirements.

Given my current understanding of real-time programming, I think that the task of feeding inputs into audio processing blocks is not free of real-time performance constraints. In any program with an audio sequencer, I'm not aware of how to get deterministic playback with predictable CPU runtime (which I do not think is worth compromising) without worrying about "memory management, object lifetimes, thread safety, etc.", preallocating memory for synthesis, and picking a way to communicate with the UI (e.g. lock-free queues for processing commands sequentially, and atomics or triple buffers for viewing the latest state). If you have a solution, do let me know!

[1]: https://news.ycombinator.com/item?id=27112573

[2]: https://news.ycombinator.com/item?id=26870666

[3]: https://doc-snapshots.qt.io/qt6-dev/template-typename-t-qproperty-t-proxy.html
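On the "seam" question above, one pattern nyanpasu64 already names is a lock-free queue from the GC'd GUI side to the audio thread. Below is a minimal single-producer/single-consumer ring buffer sketched with SharedArrayBuffer and Atomics (the primitives available to an AudioWorklet); the class name, command encoding, and capacity are invented for the example, and a production version would need memory-ordering care beyond this sketch.

```javascript
// SPSC ring buffer over shared memory: the GUI thread pushes
// fixed-size integer commands; the audio thread pops them without
// locks or allocation. head = next slot to read, tail = next to write.
class SpscQueue {
  constructor(capacity) {
    this.cap = capacity;
    const sab = new SharedArrayBuffer(4 * (capacity + 2));
    this.buf = new Int32Array(sab, 0, capacity);
    this.ctl = new Int32Array(sab, 4 * capacity, 2); // [head, tail]
  }
  push(value) { // producer (GUI) side
    const tail = Atomics.load(this.ctl, 1);
    const next = (tail + 1) % this.cap;
    if (next === Atomics.load(this.ctl, 0)) return false; // full: caller decides to drop or retry
    this.buf[tail] = value;
    Atomics.store(this.ctl, 1, next); // publish only after the slot is written
    return true;
  }
  pop() { // consumer (audio) side: wait-free, allocation-free
    const head = Atomics.load(this.ctl, 0);
    if (head === Atomics.load(this.ctl, 1)) return null; // empty
    const value = this.buf[head];
    Atomics.store(this.ctl, 0, (head + 1) % this.cap);
    return value;
  }
}

// GUI thread encodes a hypothetical "note on, pitch 60" as one int...
const q = new SpscQueue(64);
q.push((1 << 8) | 60);
// ...and the audio callback drains the queue at the start of each block.
console.log(q.pop()); // 316
console.log(q.pop()); // null (empty)
```

This only covers the command path; state flowing back to the GUI (meters, playhead position) is usually handled the other way nyanpasu64 mentions, with atomics or a triple buffer holding the latest snapshot.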
v-yadli, almost 4 years ago
Cool, so it's like PyTorch for audio, where the heavy lifting is done in native code?
12thwonder, almost 4 years ago
Off-topic, but the author's negative sentiment toward C++ is interesting to me, as it's quite the opposite for me. I use C++ mainly, but when I have to use JavaScript at work, I really hate it for whatever reason. I wonder if other C++ programmers feel the same way.