
A common mistake when using NumPy's RNG with PyTorch

261 points, by sunils34, about 4 years ago

22 comments

_coveredInBees, about 4 years ago

Yeah, I'd run into this 2 years ago and ended up also reporting an issue on the CenterNet repo [1].

The solution I have in that issue adapts from the very helpful discussions in the original PyTorch issue [2]:

    worker_init_fn=lambda id: np.random.seed(torch.initial_seed() // 2**32 + id)

I will admit that this is *very* easy to mess up, as evidenced by the fact that examples in the official PyTorch tutorials and other well-known codebases suffer from it. In the PyTorch training framework I've helped develop at work, we've implemented a custom `worker_init_fn` as outlined in [1] that is the default for all "trainer" instances, which are responsible for instantiating DataLoaders in 99% of our training runs.

Also, as an aside: Holy clickbaity title, Batman! Maybe I should have blogged about this 2 years ago. Heck, every 6 months or so I think that, and then I realize that I'd rather spend time with my kids and on my hobbies when I'm not working on interesting ML stuff and/or coding. An added side benefit is not having to worry about making idiotic clickbaity titles like this to farm karma, or providing high-quality unpaid labor for Medium in order for my efforts to actually be seen by people. But it could also just be that I'm lazy :-)

[1] https://github.com/xingyizhou/CenterNet/issues/233

[2] https://github.com/pytorch/pytorch/issues/5059
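For readers who want something self-contained, a minimal sketch of that kind of default `worker_init_fn` could look like the following (the name `seed_worker` is illustrative, not from the linked issue; `torch.initial_seed()` is already distinct per worker, and folding it into the 32-bit range keeps `np.random.seed` happy):

    import random
    import numpy as np
    import torch
    from torch.utils.data import DataLoader

    def seed_worker(worker_id):
        # torch.initial_seed() returns this worker's seed; reduce it to
        # the [0, 2**32) range that np.random.seed() accepts.
        seed = torch.initial_seed() % 2**32
        np.random.seed(seed)
        random.seed(seed)

    # Usage (dataset is whatever Dataset you already have):
    # loader = DataLoader(dataset, num_workers=4, worker_init_fn=seed_worker)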
shoyer, about 4 years ago

This post is yet another example of why you should never use APIs for random number generation that rely upon and mutate hidden global state, like the functions in numpy.random. Instead, use APIs that deal with RNG state explicitly, e.g., by calling methods on an explicitly created numpy.random.Generator object. JAX takes this one step further: there are no mutable RNG objects at all, and the user has to explicitly manipulate RNG state with pure functions.

It's a little annoying to have to set and pass RNG state explicitly, but on the plus side you never hit these sorts of issues. Your code will also be completely reproducible, without any chance of spooky "action at a distance." Once you've been burned by this a few times, you'll never go back.

You might think that explicitly seeding the global RNG would solve reproducibility issues, but it really doesn't. If you call into any code you didn't write, it might also be using the same global RNG.
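As a concrete illustration of the explicit-state style (a sketch, not code from the comment above; the `augment` function is made up for the example):

    import numpy as np

    def augment(image, rng):
        # The caller owns the Generator; nothing here touches global state.
        noise = rng.normal(scale=0.1, size=image.shape)
        return image + noise

    rng = np.random.default_rng(seed=42)  # explicit, local RNG state
    batch = augment(np.zeros((8, 8)), rng)

Because every consumer receives its `rng` as an argument, each forked worker can be handed an independently seeded Generator instead of silently sharing one. JAX's jax.random module pushes the same idea further, with immutable keys split explicitly via jax.random.split.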
unityByFreedom, about 4 years ago

Great catch!

> I downloaded and analysed over a hundred thousand repositories from GitHub that import PyTorch. I kept projects that use NumPy's random number generator with multi-process data loading. Out of these, over 95% of the repositories are plagued by this problem. It's inside PyTorch's official tutorial, OpenAI's code, NVIDIA's projects, etc. [1]

[1] https://github.com/pytorch/pytorch/issues/5059
timzaman, about 4 years ago

IIRC the bug Karpathy mentioned in his tweet was actually due to the seed being the same across multi-GPU data-parallel workers! You need to account for this too, so the author hasn't solved it.

I know this because I fixed the bug. And probably caused it. Hehe.

Also, you don't just want to set your NumPy seed, but also the native Python one and the torch one.
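Putting this comment together with the `worker_init_fn` idea above, one possible sketch that also folds in the data-parallel rank (the 100_000 offset is an arbitrary illustrative mixing constant, and the rank lookup assumes torch.distributed has been initialized; substitute your own rank source otherwise):

    import random
    import numpy as np
    import torch
    import torch.distributed as dist

    def seed_worker(worker_id):
        # torch.initial_seed() already differs per DataLoader worker,
        # but typically not across DDP ranks, so mix the rank in too.
        rank = dist.get_rank() if dist.is_initialized() else 0
        seed = (torch.initial_seed() + 100_000 * rank) % 2**32
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)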
jeeeb, about 4 years ago

I always randomly log a sample of my inputs to TensorBoard to manually review what my training data *actually* looks like and (hopefully) pick up on bugs like these. Similarly, I find logging high-loss inputs very informative.

Coincidentally, I find this article timely, as I was recently reviewing the PyTorch DataLoader docs regarding random number generator seeding. It's the kind of thing unit tests don't pick up, since it only occurs when you use separate worker processes.
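A minimal sketch of that kind of audit logging with torch.utils.tensorboard (the tag name, run directory, and sampling rate are illustrative):

    import torch
    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter("runs/data-audit")

    def maybe_log_batch(images, step, p=0.01):
        # Log roughly 1% of batches so you can eyeball the augmented
        # inputs (NCHW image tensors) that the model actually sees.
        if torch.rand(()).item() < p:
            writer.add_images("train/inputs", images, global_step=step)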
Too, about 4 years ago

.NET has a similar pitfall, though not due to forking, but rather because the Random() default seed is based on the system clock. So starting several threads that each construct a new Random object, in the hope that they are unique, might in fact give you the same RNG sequences.
jandrese, about 4 years ago

Forgetting to seed your RNG is a really classic bug. IMHO RNGs should auto-seed unless explicitly set not to, but since the opposite behaviour was baked into C so many years ago, it's kind of the default. The worst part is how easy a bug this is to miss, unless you're explicitly printing out the first set of random numbers for some strange reason.
jamesfisher, about 4 years ago

Note that the official TensorFlow tutorials make the exact same mistake. I've reported it, but it hasn't been fixed. [1]

[1]: https://github.com/tensorflow/tensorflow/issues/47755
qd6pwu4, about 4 years ago

I notice that the web page of this article is beautifully justified on both sides instead of left-aligned, and lines are hyphenated at breaks. Does anyone know how to achieve this on a web page? text-align: justify seems to produce inferior results compared to this page, e.g. rivers in the text.
tsimionescu, about 4 years ago

This seems like another reason to never use fork() without exec(). Fork is a real minefield when used this way (and, by my understanding, a pretty big maintenance burden on the kernel, which has to provide the illusion of sharing read-only state with the parent process).
andrew_v4, about 4 years ago

Is there something specific about NumPy here, or would it affect any RNG?

I'm looking at some code that uses random.random() to randomly apply augmentations; I suspect that will have the same issue, right?
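The underlying mechanism is fork semantics rather than anything NumPy-specific: any RNG whose state lives in the parent process is copied into each child, and Python's random module state is inherited the same way (whether a given framework reseeds it for you varies by version). A minimal repro of the shared-state problem, assuming a platform with the fork start method (Linux/macOS):

    import multiprocessing as mp
    import numpy as np

    def draw():
        # Each forked child inherits the parent's RNG state verbatim,
        # so all four workers print the same "random" number.
        print(np.random.random())

    if __name__ == "__main__":
        np.random.seed(0)
        ctx = mp.get_context("fork")  # fork is unavailable on Windows
        procs = [ctx.Process(target=draw) for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()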
formerly_proven, about 4 years ago

Python has os.register_at_fork nowadays, so why do we still have this kind of behavior? Not reseeding after fork has been a footgun for almost as long as fork has existed.
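For reference, a sketch of what that hook looks like in application code, reseeding NumPy's global RNG in every forked child (a library or the interpreter could register something similar on import):

    import os
    import numpy as np

    # The after_in_child hook runs in each child immediately after
    # fork(): draw a fresh 32-bit seed from the OS so children diverge
    # from the parent (and from each other).
    os.register_at_fork(
        after_in_child=lambda: np.random.seed(
            int.from_bytes(os.urandom(4), "little")
        )
    )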
canjobear, about 4 years ago

I usually write my own data-handling functions rather than trying to play PyTorch's game here. I find their abstractions confusing or not useful.
etiam, about 4 years ago

I would normally refrain from upvoting this on account of the title, but the actual topic is important enough that I think it can be worth an exception.
selimthegrim, about 4 years ago

A less clickbaity title might have been: Bugs are easy to make. Here's how to make fewer bugs when modifying existing PyTorch and NumPy code.
Nimitz14, about 4 years ago

Oh wow, I've definitely made this mistake without realizing it...
ivoras, about 4 years ago

A lot of comments are criticising the frameworks or the developers, but surprisingly almost no one is criticising Python, which remains a language of the early '90s as far as parallelism is concerned.

It's a bit like Stockholm syndrome: "Python doesn't do threading" is so ingrained in its users' minds (and I'm a user) that it's not even questioned as a potential source of problems.

(No one said it's easy to do. That's why language developers and implementers are a special breed even today.)
anon_tor_12345, about 4 years ago

This is probably because I never read these kinds of blog posts, but this is one of the most flagrantly clickbait titles I've ever seen. The article doesn't even suggest ditching NumPy in favor of JAX or offer some other hot take (which would at least warrant such a bombastic title); it literally just presents one instance in which you *might* be making a mistake when using NumPy's RNG (not even something unique to NumPy). And the PyTorch team is aware of this, and hence exposes `worker_init_fn`. So the title should actually be "Using fork without understanding fork? You might be making a mistake."
ummonk, about 4 years ago

"You're making a mistake" sounds like one shouldn't use PyTorch and NumPy together, when the actual message is "there might be a mistake in your code".
nxpnsv, about 4 years ago

So clickbaity. A proper title would be: be careful when using random numbers and multiprocessing...
king_magic, about 4 years ago

Aside from the infuriating clickbait title (which I shall not dignify with an upvote), this is part of why I preprocess augmented images. I don't like too much magic in my custom derived (PyTorch) Dataset objects.
BlueTemplar, about 4 years ago

Taking the title at face value: yeah, you *are* making a mistake, using GAFAM tools like PyTorch, GitHub, or OpenAI's GPTs.