
WebMonkeys: parallel GPU programming in JavaScript (2016)

115 points | by surprisetalk | 8 days ago

7 comments

butokai · 5 days ago
By coincidence, I was just having a look at the same author's work on languages based on Interaction Nets. Incredibly cool work, although the main repos seem to have gone quiet in the last couple of months? This project, however, is much older and doesn't seem to follow the same approach.
Anduia · 5 days ago
The title should say 2016
kreetx · 5 days ago
Unfortunately this hasn't been maintained since 2017: https://github.com/VictorTaelin/WebMonkeys/issues/26

Are there other projects doing something similar on current browsers?
sylware · 5 days ago
Maybe the guys here know: is there a little 3D/GFX/game engine (written in plain and simple C) strapped to a JavaScript interpreter (like quickjs), without being buried inside Apple's or Google's gigantic and ultra-complex web engines?

Basically, a set of JavaScript APIs with a runtime for wayland/vulkan 3D, freetype2, and input devices.
zackmorris · 5 days ago
This is cool but doesn't actually do any heavy lifting, because it runs GLSL 1.0 code directly instead of transpiling JavaScript to GLSL internally.

Does anyone know of a JavaScript to GLSL transpiler?

My interest in this is that the world abandoned true multicore processing 30 years ago around 1995, when 3D video cards went mainstream. Had it not done that, we could have continued with Moore's law and had roughly 100-1000 CPU cores per billion transistors, along with local memories and data-driven processing using hash trees and copy-on-write provided invisibly by the runtime or even in microcode, so that we wouldn't have to worry about caching. Apple's M series is the only mainstream CPU I know of that is attempting anything close to this, albeit poorly, by still having GPU and AI cores instead of emulating single-instruction-multiple-data (SIMD) with multicore.

So I've given up on the world ever offering a 1000+ core CPU for under $1000, even though it would be straightforward to design and build today. The closest approximation would be some kind of multiple-instruction-multiple-data (MIMD) transpiler that converts ordinary C-style code to something like GLSL without intrinsics, pragmas, compiler hints, annotations, etc.

In practice, that would look like simple for-loops and other conditionals being statically analyzed to detect codepaths free of side effects, which are then auto-parallelized for a GPU. We would never deal with SIMD or copying buffers to/from VRAM directly. The code would probably end up looking like GNU Octave, MATLAB or Julia, but we could also use stuff like scatter-gather arrays and higher-order methods like map-reduce, or even green threads. Vanilla fork/join code could potentially run thousands of times faster on GPU than CPU if implemented properly.

The other reason I'm so interested in this is that GPUs can't easily do genetic programming with thousands of agents acting and evolving independently in a virtual world. So we're missing out on the dozen or so other approaches to AI which are getting overshadowed by LLMs. I would compare the current situation to using React without knowing how simple the HTTP form submit model was in the 1990s, which used declarative programming and idempotent operations to avoid build processes and the imperative hell we've found ourselves in. We're all doing it the hard way with our bare hands and I don't understand why.
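
To make the "runs GLSL 1.0 code directly" point concrete, here is roughly what WebMonkeys usage looks like — a minimal sketch adapted from the project's README; treat the exact statement syntax as approximate and check the repo for details:

    // Sketch based on the WebMonkeys README. The string passed to work()
    // is GLSL-flavored WebMonkeys syntax, not JavaScript -- nothing is
    // transpiled from JS, which is the point made above.
    const monkeys = require("WebMonkeys")();

    monkeys.set("nums", [1, 2, 3, 4, 5, 6, 7, 8]); // upload a JS array to the GPU

    // Spawn 8 parallel tasks; each writes a single output with the := operator.
    monkeys.work(8, "nums(i) := nums(i) * 2.0;");

    console.log(monkeys.get("nums")); // -> [2, 4, 6, 8, 10, 12, 14, 16]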
qoez · 5 days ago
Awesome stuff. Btw: "For one, the only way to upload data is as 2D textures of pixels. Even worse, your shaders (programs) can't write directly to them" — with WebGPU you have atomics, so you can actually write to them.
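
For anyone who hasn't used it, here is a minimal WebGPU compute sketch showing the direct-write path — assuming a browser that exposes navigator.gpu and an async context; the doubling kernel and buffer size are arbitrary illustration, not anything from WebMonkeys:

    // Minimal WebGPU compute sketch: the shader writes straight into a
    // storage buffer -- no 2D-texture round-trip as in WebGL-era approaches.
    const adapter = await navigator.gpu.requestAdapter();
    const device = await adapter.requestDevice();

    const module = device.createShaderModule({
      code: `
        @group(0) @binding(0) var<storage, read_write> data: array<f32>;

        @compute @workgroup_size(64)
        fn main(@builtin(global_invocation_id) id: vec3u) {
          data[id.x] = data[id.x] * 2.0; // direct write to the buffer
        }
      `,
    });

    const input = new Float32Array(64).fill(1);
    const buffer = device.createBuffer({
      size: input.byteLength,
      // COPY_SRC is only needed for a readback step, omitted here for brevity.
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST | GPUBufferUsage.COPY_SRC,
    });
    device.queue.writeBuffer(buffer, 0, input);

    const pipeline = device.createComputePipeline({
      layout: "auto",
      compute: { module, entryPoint: "main" },
    });
    const bindGroup = device.createBindGroup({
      layout: pipeline.getBindGroupLayout(0),
      entries: [{ binding: 0, resource: { buffer } }],
    });

    const encoder = device.createCommandEncoder();
    const pass = encoder.beginComputePass();
    pass.setPipeline(pipeline);
    pass.setBindGroup(0, bindGroup);
    pass.dispatchWorkgroups(1); // 1 workgroup x 64 invocations = 64 elements
    pass.end();
    device.queue.submit([encoder.finish()]);

Plain writes to distinct indices, as above, need no atomics; WGSL's atomicAdd and friends come in when multiple invocations may touch the same location.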
punkpeye · 5 days ago
So what are the practical use cases for this?