<i>Graceful pure JavaScript fallback when GPU is not available.</i><p>This is a bad thing, not a happy thing to advertise. Software fallback was one of the most frustrating parts of working with OpenGL circa 2010.<p>If you're making a hardware-accelerated library, stop trying to make it "gracefully" fail. Just fail! That way people can address the root problem.<p>On the other hand, as long as the library is sufficiently noisy about the fact that it's in CPU mode, it seems fine. I just hope they don't follow OpenGL's design decisions.<p>It's surprisingly hard to figure out the answer from the README: <a href="https://github.com/gpujs/gpu.js/#readme" rel="nofollow">https://github.com/gpujs/gpu.js/#readme</a> There's a "How to check what's supported" section, but it's also "mostly for unit tests."
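For what "fail loudly" could look like, here's a minimal sketch using the static support flags that README section lists (e.g. <i>GPU.isGPUSupported</i>); the error message is illustrative:

```js
// Sketch: refuse the silent CPU fallback so the root problem gets surfaced.
const { GPU } = require('gpu.js');

if (!GPU.isGPUSupported) {
  throw new Error('gpu.js: no GPU backend (WebGL / WebGL2 / HeadlessGL) available');
}
// Optional: log which backend is actually in play instead of guessing later.
console.log('webgl:', GPU.isWebGLSupported, 'webgl2:', GPU.isWebGL2Supported);
```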
For what it's worth, HN might be interested to look into how programming a simple WebGPU calculator works. I worked on one some time ago: <a href="https://laskin.live/" rel="nofollow">https://laskin.live/</a><p>Source code is here: <a href="https://github.com/Laged/laskin.live" rel="nofollow">https://github.com/Laged/laskin.live</a><p>This way you can use a "better" API like Vulkan to run programs written in the new SPIR-V intermediate representation. In general, if you are interested in GPU programming, I would definitely look into WebGPU. The API is much easier to get started with than Vulkan, and it achieves the most basic things required for simple GPGPU tasks (even though WebGPU can use Vulkan as the GPU API driver underneath).<p>Note that WebGPU needs a browser flag to be enabled, and is generally very dangerous: it's possible to kernel panic an operating system just by loading a web page, even though you're only using a web browser.
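For a taste, here's a hedged sketch of a minimal compute dispatch with today's WebGPU API (which has since moved from SPIR-V to WGSL shaders); the doubling kernel and buffer names are illustrative:

```js
// WGSL compute shader: double every element of a storage buffer in place.
const shader = `
  @group(0) @binding(0) var<storage, read_write> data: array<f32>;
  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    data[id.x] = data[id.x] * 2.0;
  }`;

async function doubleOnGpu(input) {
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();

  // Storage buffer the kernel reads and writes.
  const storage = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
  });
  device.queue.writeBuffer(storage, 0, input);

  // Staging buffer we can map on the CPU to read results back.
  const readback = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
  });

  const pipeline = device.createComputePipeline({
    layout: 'auto',
    compute: { module: device.createShaderModule({ code: shader }), entryPoint: 'main' },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer: storage } }],
  });

  // Record and submit one compute pass, then copy results to the staging buffer.
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(input.length / 64));
  pass.end();
  encoder.copyBufferToBuffer(storage, 0, readback, 0, input.byteLength);
  device.queue.submit([encoder.finish()]);

  await readback.mapAsync(GPUMapMode.READ);
  return new Float32Array(readback.getMappedRange()).slice();
}

// doubleOnGpu(new Float32Array([1, 2, 3])) resolves to Float32Array [2, 4, 6]
```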
This is cool, and as someone who works with nodejs often I am glad it exists. But if you're doing heavy server-side computations where serious parallelization actually matters, I'm a little skeptical that keeping those computations in JS is a reasonable choice in many cases outside of small hobby projects. I could be wrong, but I think the performance compared to other GPU-parallel alternatives would be very unfavorable, and because of the low performance ceiling for JS in this domain, you also don't get rich complementary libraries.
As noted by others, instead of the handcuffs of the lowest common denominator, we do GPU JS with best-of-class tools in node, where we get to play with Apache Arrow, RAPIDS, etc. We now shim nodejs to PyData for GPU tech instead of doing node-opencl/cuda to get more GPU lib access, but I'd love to add a more direct numba equivalent and a serverless layer here, as V8 is better than cpython in key ways, with only a few glaring gaps afaict (gotchas in bigint / large memory spaces / etc. are still being smoothed out; TBD for multi-GPU and network streaming).<p>GPU JS and related frameworks are really 'browser GPU JS', and we find predictability there low enough that we still handroll WebGL 1.0 instead of using them. When we shift, I'd hope it'll be to a webgl2+ (opencl2+...), but 10+ years later, I've stopped tracking the politics.
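A minimal sketch of the node-to-PyData handoff alluded to above, assuming the <i>apache-arrow</i> npm package; the file name and columns are illustrative:

```js
// Build a columnar table directly over typed arrays (no per-element boxing),
// then serialize it to the Arrow IPC stream format, which pandas/cuDF can
// read on the Python side (e.g. via pyarrow.ipc.open_stream).
const { writeFileSync } = require('fs');
const { tableFromArrays, tableToIPC } = require('apache-arrow');

const table = tableFromArrays({
  x: Float64Array.of(1, 2, 3, 4),
  y: Float64Array.of(2, 4, 6, 8),
});

writeFileSync('frame.arrow', tableToIPC(table, 'stream'));
```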
I like how easy it is to write a new GPU kernel function.<p>I have never done any GPU kernel programming, but I might give this a try.<p>I wonder if TensorFlow.js is implemented in much the same way. TensorFlow.js is pretty much awesome, largely because the examples are so well done; getting up to speed is painless.
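For reference, the matrix-multiply example from the gpu.js README shows how small a kernel can be — `this.thread.x`/`this.thread.y` index the output cell being computed:

```js
const { GPU } = require('gpu.js');
const gpu = new GPU();

// Multiply two 512×512 matrices; one kernel invocation per output cell.
const multiplyMatrix = gpu.createKernel(function (a, b) {
  let sum = 0;
  for (let i = 0; i < 512; i++) {
    sum += a[this.thread.y][i] * b[i][this.thread.x];
  }
  return sum;
}).setOutput([512, 512]);

const c = multiplyMatrix(a, b); // a, b: 512×512 arrays of numbers
```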
This could be useful for embarrassingly parallel workloads, but 2 points:<p>- the backend is OpenGL, so it won't be as performant on NVIDIA hardware as CUDA (NVIDIA don't like OpenGL)<p>- I don't see any explicit GPU memory management, so you might not be able to set up a pipeline of operations that all operate on GPU memory (aka the big pipelines you see in ML). That would be another performance hit.<p>Having said that, it looks like fun and I'm going to check it out!
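On the second point: the gpu.js docs do describe a <i>setPipeline(true)</i> mode that is supposed to keep kernel output in a GPU texture so a second kernel can consume it without a CPU round-trip. If it works as described, chaining would look roughly like this sketch (`n`, `arr1`, `arr2` assumed):

```js
const { GPU } = require('gpu.js');
const gpu = new GPU();

const add = gpu
  .createKernel(function (a, b) {
    return a[this.thread.x] + b[this.thread.x];
  })
  .setOutput([n])
  .setPipeline(true); // output stays on the GPU as a texture

const scale = gpu.createKernel(function (v) {
  return v[this.thread.x] * 2;
}).setOutput([n]);

const tex = add(arr1, arr2); // GPU-resident texture, not a JS array
const out = scale(tex);      // consumed directly by the next kernel
```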
Seems like if it's possible to accelerate JS with a GPU, it should be far more possible to do it with a many-core CPU. This bodes well for the likely future of dozens of x64 cores or even hundreds of ARM cores on a higher-end desktop/laptop chip, and hundreds to thousands on server chips.
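The many-core CPU route already works in stock node via <i>worker_threads</i>; a sketch where each worker squares its slice of a shared buffer in place (sizes and the toy workload are illustrative):

```js
const { Worker, isMainThread, workerData } = require('worker_threads');
const os = require('os');

if (isMainThread) {
  const n = 1 << 20;
  const shared = new SharedArrayBuffer(n * Float64Array.BYTES_PER_ELEMENT);
  const data = new Float64Array(shared);
  for (let i = 0; i < n; i++) data[i] = i;

  // One worker per core, each owning a contiguous slice of the buffer.
  const cores = os.cpus().length;
  const chunk = Math.ceil(n / cores);
  let pending = cores;
  for (let c = 0; c < cores; c++) {
    const worker = new Worker(__filename, {
      workerData: { shared, start: c * chunk, end: Math.min(n, (c + 1) * chunk) },
    });
    worker.on('exit', () => {
      if (--pending === 0) console.log('done, data[3] =', data[3]); // 9
    });
  }
} else {
  // Worker side: square the assigned slice; no copying, shared memory.
  const { shared, start, end } = workerData;
  const data = new Float64Array(shared);
  for (let i = start; i < end; i++) data[i] *= data[i];
}
```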
I may be confused, as I do not know much JavaScript, but the installation page mentions a dependency on Mesa. Is it relying on Mesa to perform the actual work?<p>If so, I do not quite understand the debates here about avoiding OpenGL's design and falling back to software emulation.
Since this supports doing image convolutions, I guess it can be used to do efficient CNN inference. Are there any examples doing that?<p>I see there are some other libraries to run deep learning networks in the browser (like TensorFlow.js), but it seems they have some limitations regarding use of the GPU.<p>It may be interesting to push the boundaries of this project toward an efficient, generic CNN library. They seem to not use CUDA, which may be a limitation...
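A hedged sketch of the building block such a library would need — a single 3×3 "valid" convolution as a gpu.js kernel (`gpu`, `width`, and `height` assumed; a real CNN layer adds channels, bias, and a nonlinearity on top):

```js
const convolve3x3 = gpu
  .createKernel(function (image, filter) {
    // Each thread computes one output pixel from its 3×3 neighborhood.
    let sum = 0;
    for (let ky = 0; ky < 3; ky++) {
      for (let kx = 0; kx < 3; kx++) {
        sum += filter[ky][kx] * image[this.thread.y + ky][this.thread.x + kx];
      }
    }
    return sum;
  })
  .setOutput([width - 2, height - 2]); // "valid" padding shrinks the output
```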
My point is that if you are using JS as a wrapper for a big black box of GPU computations that are close to the metal, then you are not really using JS in any meaningful sense, and you could wrap anything else that has a much better library ecosystem and better performance characteristics for everything you're not just getting the GPU to crank out in a server context (which is implied by nodejs).<p>What matters here is the speed of generating inputs, the speed of transforming and passing data at the input/output boundaries, and the ability to conveniently and performantly work with data in memory natively (i.e. without outsourcing the computations elsewhere). Is JS a great choice for any of these things? Most importantly, the last one? This library isn't using ArrayBuffers for setting up all the data or working on the data in JS, so even if those work great for performance, it seems irrelevant. And let's be honest: if it were working with ArrayBuffers directly, you would be so far away from usual JS and any convenience JS offers that you might as well not be writing JS.
Really a bummer that they require the `function` keyword so that the GPU function can access this.thread! It would have been cleaner IMO, and more modern and idiomatic, if the callback took a parameter referring to the GPU thread.<p>Otherwise this stuff is AWESOME. It would be great if we could reduce our dependency on Python
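Concretely, here is what gpu.js actually requires (a classic `function`, so it can rebind `this.thread`) versus the arrow-style API the comment wishes existed — the second form is hypothetical, not real gpu.js:

```js
const { GPU } = require('gpu.js');
const gpu = new GPU();

// Actual gpu.js API: must be a `function` expression, not an arrow.
const doubled = gpu.createKernel(function (a) {
  return a[this.thread.x] * 2;
}).setOutput([1024]);

// Hypothetical alternative (NOT real gpu.js): pass the thread index as a
// parameter so an arrow function would work.
// const doubled = gpu.createKernel((a, thread) => a[thread.x] * 2,
//                                  { output: [1024] });
```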
Do you want exploits? Because exposing more and more bare-metal OS functionality to JavaScript, and then running all the JavaScript you receive without a care in the world, is how you get exploits.<p>And then once the exploits appear, the user becomes the danger for going to those sites or installing that add-on, so control must be taken away from the user. And so on with HTTPS-only and no longer accepting self-signed certs, so everyone has to be leashed to a cert authority and ask for permission to host a visitable website.<p>No, making the browser the OS is the path to loss of control and darkness.