Nvidia Warp: A Python framework for high performance GPU simulation and graphics

490 points by jarmitage · 11 months ago

20 comments

raytopia · 11 months ago
I love how many Python-to-native/GPU code projects there are now. It's nice to see a lot of competition in the space. An alternative to this one could be Taichi Lang [0], which can use your GPU through Vulkan, so you don't have to own Nvidia hardware. Numba [1] is another alternative that's very popular. I'm still waiting on a Python project that compiles to pure C (unlike Cython [2], which is hard to port) so you can write homebrew games or other embedded applications.

[0] https://www.taichi-lang.org/

[1] http://numba.pydata.org/

[2] https://cython.readthedocs.io/en/stable/
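For readers unfamiliar with the Numba approach mentioned above, a minimal sketch of its usage pattern follows. The fallback decorator is purely illustrative, so the snippet also runs as plain Python when numba isn't installed:

```python
try:
    from numba import njit  # JIT-compiles the decorated function to native code
except ImportError:
    # Illustrative fallback: run as ordinary interpreted Python without numba.
    def njit(func):
        return func

@njit
def sum_squares(n):
    # A tight numeric loop: the kind of code numba compiles to machine code,
    # typically far faster than the CPython interpreter.
    total = 0.0
    for i in range(n):
        total += i * i
    return total

print(sum_squares(10))  # 285.0
```

With numba present, the first call triggers compilation and subsequent calls run at native speed; without it, the same source still executes correctly.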
eigenvalue · 11 months ago
I really like how Nvidia started doing more normal open source and not locking stuff behind a login to their website. It makes it so much easier now that you can just pip install all the CUDA stuff for torch and other libraries without authenticating and downloading from websites and other nonsense. I guess they realized that it was dramatically reducing engagement with their work. If it's open source anyway, you should make it as accessible as possible.
w-m · 11 months ago
I was playing around with Taichi a little bit for a project. Taichi lives in a similar space, but has more than an NVIDIA backend. Its development has stalled, though, so I'm considering switching to Warp now.

It's quite frustrating that there's seemingly no long-lived framework that allows me to write simple Numba-like kernels and try them out on NVIDIA GPUs and Apple GPUs. Even with Taichi, the Metal backend was definitely B-tier or lower: no 64-bit ints, and it would randomly crash or fail to compile things.

Here's hoping that we'll solve the GPU programming space in the next couple of years, but after ~15 years or so of waiting, I'm no longer holding my breath.

https://github.com/taichi-dev/taichi
VyseofArcadia · 11 months ago
Aren't warps already architectural elements of Nvidia graphics cards? This name collision is going to muddy search results.
marmaduke · 11 months ago
I've dredged through Julia, Numba, Jax, and Futhark, looking for a way to get good CPU performance in the absence of a GPU, and I'm not really happy with any of them. Especially given how many of them want you to lug LLVM along.

A recent simulation code, when pushed with gcc openmp-simd, matched performance on a 13900K vs jax.jit on an RTX 4090. This case worked because the overall computation can be structured into pieces that fit in L1/L2 cache, but I had to spend a ton of time writing the C code, whereas jax.jit was too easy.

So I'd still like to see something like this, but which really works for CPU as well.
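The trade-off described above, hand-written loops you control versus a library one-liner that is "too easy", can be sketched in plain NumPy. The function names here are illustrative; the vectorized form stands in for the jit path, the explicit loop for the hand-tuned kernel:

```python
import numpy as np

def pairwise_dist_loop(xs):
    # Hand-written nested loop: the style where you control memory layout
    # and cache blocking, but spend a lot of time writing the code.
    n = len(xs)
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            out[i, j] = abs(xs[i] - xs[j])
    return out

def pairwise_dist_vec(xs):
    # Vectorized one-liner: the "easy" path, analogous to jax.jit,
    # where the library decides how the work is scheduled.
    return np.abs(xs[:, None] - xs[None, :])

xs = np.arange(4.0)
assert np.allclose(pairwise_dist_loop(xs), pairwise_dist_vec(xs))
```

Both compute the same result; the question the comment raises is which one a framework lets you write while still getting near-peak CPU performance.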
TNWin · 11 months ago
Slightly related: what's this community's take on Triton? https://openai.com/index/triton/

Are there better alternatives?
owenpalmer · 11 months ago
> Warp is designed for spatial computing

What does this mean? I've mainly heard the term "spatial computing" in the context of the Vision Pro release. It doesn't seem like this was intended for AR/VR.
dudus · 11 months ago
Gotta keep digging that CUDA moat as hard and as fast as possible.
wallscratch · 11 months ago
Can anyone comment on how efficient Warp code is compared to manually written / fine-tuned CUDA?
jarmitage · 11 months ago
> What's Taichi's take on NVIDIA's Warp?

> Overall the biggest distinction as of now is that Taichi operates at a slightly higher level. E.g. implicit loop parallelization, high-level spatial data structures, direct interop with torch, etc.

> We are trying to implement support for lower-level programming styles to accommodate such things as native intrinsics, but we do think of those as more advanced optimization techniques, and at the same time we strive for easier entry and usage for beginners or people not so used to CUDA's programming model.

– https://github.com/taichi-dev/taichi/discussions/8184
jkbbwr · 11 months ago
I really wish Python would stop being the go-to language for GPU orchestration and machine learning. Having worked with it again recently for some proofs of concept, it's been a massive pain in the ass.
arvinsim · 11 months ago
As someone who is not in the simulation and graphics space, what does this library bring that current libraries do not?
jorlow · 11 months ago
Does this compete at all with OpenAI's Triton (which is sort of a higher-level CUDA without the vendor lock-in)?
bytesandbits · 11 months ago
How is this different than Triton?
beebmam · 11 months ago
Why Python? I really don't understand this choice of language other than accessibility.
BenoitP · 11 months ago
This should be seen in light of the Great Differentiable Convergence™:

NeRFs backpropagating pixel colors into the volume, but also semantic information from the image label, embedded from an LLM reading a multimedia document.

Or something like this. Anyway, wanna buy an NVIDIA GPU ;)?
nurettin · 11 months ago
How is this different from Taichi? Even the decorators look similar.
paulluuk · 11 months ago
While this is really cool, I have to say...

> import warp as wp

Can we please not copy this convention over from numpy? In the example script, you use 17 characters to write this just to save 18 characters later on in the script. Just import the warp commands you use, or if you really want, "import warp", but don't rename imported libraries, please.
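The two import styles being debated can be contrasted with a stdlib module (`math` here is just a stand-in for warp or numpy):

```python
# Style 1: aliased module import, the numpy-derived convention the comment
# objects to. Call sites read m.sqrt(...).
import math as m

# Style 2: explicit imports of only the names actually used.
# Call sites read sqrt(...).
from math import sqrt, pi

# Both styles resolve to the same underlying functions; the disagreement
# is purely about readability and namespace hygiene at the call site.
assert m.sqrt(2.0) == sqrt(2.0)
assert m.pi == pi
```

A common defense of the alias style is that a short prefix like `wp.` or `np.` makes the origin of every call visible without the verbosity of the full module name.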
jokoon · 11 months ago
Funny that some software is now hardware-dependent.

OpenCL seems like it's just obsolete.
water-your-self · 11 months ago
> GPU support requires a CUDA-capable NVIDIA GPU and driver (minimum GeForce GTX 9xx).

Very tactful of Nvidia. I have a lovely AMD GPU, and this library is worthless for it.