Using JAX to Accelerate Research

54 points by umangkeshri over 4 years ago

4 comments

jphoward over 4 years ago
I find JAX really exciting. The idea of numpy with autograd is exactly what Pythonistas want. The elephant in the room, though, is “why not Pytorch?”

Everyone knows JAX is what Google realised Tensorflow should have been when they realised how much of a joy Pytorch was to use. I actually think JAX does offer some advantages, not least true numpy interoperability. However, not mentioning *torch a single time in the blog post seems a little disingenuous for a Google-owned deep learning enterprise.
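A minimal sketch of the “numpy with autograd” idea this comment points at (the toy function, shapes, and values below are purely illustrative, not from the post): jax.numpy mirrors the NumPy API, plain NumPy arrays can be passed straight in, and jax.grad turns an ordinary Python function into a function that computes its gradient.

```python
# Illustrative only: jax.numpy as a NumPy drop-in, plus jax.grad for autograd.
import jax
import jax.numpy as jnp
import numpy as np

def loss(w, x, y):
    # Written exactly as it would be with plain NumPy.
    pred = jnp.tanh(x @ w)
    return jnp.mean((pred - y) ** 2)

grad_loss = jax.grad(loss)   # gradient with respect to the first argument, w

# Ordinary NumPy arrays interoperate directly; JAX converts them to device arrays.
x = np.random.randn(8, 3)
y = np.random.randn(8)
w = np.zeros(3)
print(grad_loss(w, x, y))    # gradient with the same shape as w
```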
6d65 over 4 years ago
The description says it's autograd + XLA, so I assumed it always compiles to GPUs via XLA.

But I had a look at the code, and JAX has cuBLAS and ROCm BLAS, and it looks like there is a flow where it uses the GPU directly, unless I'm missing something.

Definitely worth having a closer look. Autograd via function reflection should be faster than backprop, and if it's running on AMD GPUs then it's quite intriguing.
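A small sketch of the flow this comment describes, assuming a stock JAX install (the toy function is made up): outside jit, each op is dispatched eagerly to whatever backend JAX picked up (CPU, CUDA, or a ROCm build), while jax.jit traces the function and hands the whole trace to XLA to compile.

```python
# Illustrative only: eager dispatch vs. an XLA-compiled function via jax.jit.
import jax
import jax.numpy as jnp

def step(x):
    return jnp.sin(x) ** 2 + jnp.cos(x) ** 2

fast_step = jax.jit(step)      # same function, traced and compiled by XLA

x = jnp.linspace(0.0, 1.0, 1_000_000)
print(jax.devices())           # shows which backend (CPU/GPU) was picked up
print(step(x).sum())           # eager: each op dispatched to the backend directly
print(fast_step(x).sum())      # first call compiles; later calls reuse the executable
```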
taliesinb over 4 years ago
From people with more experience than me: does JAX have a good story about how to manage state cleanly, where it needs to occur?
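One common pattern that speaks to this question (a generic sketch with a made-up momentum update, not taken from the post or the replies): because JAX transformations expect pure functions, state is threaded explicitly, with parameters and optimizer state passed in as arguments and returned as updated values rather than mutated in place.

```python
# Illustrative only: explicit state threading through a jitted training step.
import jax
import jax.numpy as jnp

def loss(params, x, y):
    return jnp.mean((x @ params - y) ** 2)

@jax.jit
def train_step(params, opt_state, x, y, lr=0.1):
    grads = jax.grad(loss)(params, x, y)
    opt_state = 0.9 * opt_state + grads   # toy momentum buffer as the "state"
    params = params - lr * opt_state
    return params, opt_state              # new state is returned, never mutated

params = jnp.zeros(3)
opt_state = jnp.zeros(3)
x = jnp.ones((8, 3))
y = jnp.ones(8)
params, opt_state = train_step(params, opt_state, x, y)
```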
Jabbles over 4 years ago
Is JAX in competition with TF? Is it something you use instead of TF, or in combination with it?