Analyzing the performance of Tensorflow training on M1 Mac Mini and Nvidia V100

226 points by briggers over 4 years ago

16 comments

volta87 over 4 years ago
When developing ML models, you rarely train "just one".

The article mentions that they explored a not-so-large hyper-parameter space (i.e. they trained multiple models, each with different parameters).

It would be interesting to know how long the whole process takes on the M1 vs the V100.

For the small models covered in the article, I'd guess that the V100 can train them all concurrently using MPS (Multi-Process Service: multiple processes can use the GPU concurrently).

In particular, it would be interesting to know whether the V100 trains all models in the same time it trains one, and whether the M1 does the same, or whether the M1 takes N times longer to train N models.

This could paint a completely different picture, particularly from the user's perspective. When I go for lunch, coffee, or home, I usually spawn jobs training a large number of models, such that when I get back, all these models are trained.

I only start training a small number of models in the later phases of development, when I have already explored a large part of the model space.

---

To make an analogy: what this article is doing is similar to benchmarking a 64-core CPU against a 1-core CPU using a single-threaded benchmark. The 64-core CPU happens to be slightly beefier and faster than the 1-core CPU, but it is more expensive and consumes more power because... it has 64x more cores. So to put things in perspective, it would make sense to also show a benchmark that can use all 64 cores, which is the reason somebody would buy a 64-core CPU, and see how the single-core one compares (typically 64x slower).

---

To me, the only news here is that Apple's GPU cores are not very far behind NVIDIA's cores for ML training, but there is much more to a GPGPU than the performance you get for small models on a small number of cores. Apple would still need to (1) catch up, and (2) massively scale up their design. They probably can do both if they set their eyes on it. Exciting times.
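
A minimal sketch of that "spawn many jobs" pattern, assuming a hypothetical `train_model` entry point; with NVIDIA's MPS daemon running, the worker processes can share a single V100 concurrently:

```python
# Sketch: one process per hyper-parameter setting. train_model is a
# hypothetical stand-in for the real training loop; with NVIDIA MPS
# enabled, these processes can execute on the same GPU concurrently.
import itertools
import multiprocessing as mp

def train_model(config):
    lr, width = config
    # Real code would build the model, train it, and save a checkpoint.
    print(f"training model with lr={lr}, width={width}")

if __name__ == "__main__":
    grid = list(itertools.product([1e-3, 1e-4], [32, 64, 128]))
    with mp.Pool(processes=len(grid)) as pool:
        pool.map(train_model, grid)
```

The interesting measurement is then the wall-clock time for the whole pool on each machine, not the time per individual model.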

mark_l_watson over 4 years ago
I had the same experience. My M1 system does well on smaller models compared to an Nvidia 1070 with 10GB of memory. My MacBook Pro only has 8GB total memory. Large models run slowly.

I found setting up Apple's M1 fork of TensorFlow to be fairly easy, BTW.

I am writing a new book on using Swift for AI applications, motivated by the "niceness" of the Swift language and Apple's CoreML libraries.
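
For reference, device selection in Apple's tensorflow_macos fork looked roughly like the sketch below at the time of this thread; the `mlcompute` module is specific to that fork (not stock TensorFlow), and the exact API may have changed since:

```python
# Sketch for Apple's tensorflow_macos fork (early 2021). The mlcompute
# module ships only with that fork; this is not a stock TensorFlow API.
import tensorflow as tf
from tensorflow.python.compiler.mlcompute import mlcompute

mlcompute.set_mlc_device(device_name="gpu")  # "cpu", "gpu", or "any"
print("TensorFlow", tf.__version__)
```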

lopuhin over 4 years ago
> I chose MobileNetV2 to make iteration faster. When I tried ResNet50 or other larger models the gap between the M1 and Nvidia grew wider.

(and that's on CIFAR-10). But why not report those results and also test on more realistic datasets? The internet is full of M1 TF benchmarks on CIFAR or MNIST; has anyone seen anything different?
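
For anyone wanting to reproduce this class of benchmark, a minimal Keras sketch of MobileNetV2 on CIFAR-10; the batch size and epoch count are illustrative assumptions, not the article's settings:

```python
# Minimal MobileNetV2-on-CIFAR-10 benchmark sketch. Hyper-parameters
# are illustrative; the article's exact settings are not reproduced here.
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0

model = tf.keras.applications.MobileNetV2(
    input_shape=(32, 32, 3), weights=None, classes=10)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=1)
```

Swapping in `tf.keras.applications.ResNet50` is enough to check the commenter's point that the gap widens on larger models.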

tbalsam over 4 years ago
This is on a model designed to run faster on CPUs. It's like dropping a bowling ball on your foot and claiming excitement that you feel bruised after a few days.

Maybe there's something interesting there, definitely, but the overhyped title takes away any credibility I'd give the publishers for research. If you find something interesting, say it, and stop making vapid generalizations for the sake of more clicks.

Remember, we only feed the AI hype bubble when we do this. It might be a good result, but we need to be at least realistic about it, or there won't be an economy of innovation for people to listen to in the future, because they'll have tuned it out with all of the crap marketing that comes/came before it.

Thanks for coming to my TED Talk!

baxter001 over 4 years ago
No, but it's pretty good at retraining the final layer of low-memory networks like MobileNet - weirdly, a workload that the V100 is very poorly suited for...
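
That final-layer workload looks roughly like the sketch below (frozen MobileNetV2 backbone, fresh classification head; the input size and class count are placeholders):

```python
# Sketch: retrain only the classification head on a frozen MobileNetV2.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False,
    weights="imagenet", pooling="avg")
base.trainable = False  # freeze the backbone; only the head will train

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),  # placeholder class count
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()  # trainable params: only the Dense layer's weights
```

With the backbone frozen, gradients and optimizer updates touch only a few thousand weights, which helps explain why a big discrete GPU shows little advantage here.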

whywhywhywhy over 4 years ago
> We can see better performance gains with the M1 when there are fewer weights to train, likely due to the superior memory architecture of the M1.

Wasn't this whole "M1 memory" thing decided to be a myth now that more technical people have dissected it?

jlouis over 4 years ago
CPUs often outperform specialized hardware on small models. This is nothing new. You'd need to go to a larger model, and then the power consumption curves change too.

procrastinatus over 4 years ago
One thing I haven't seen much mention of is getting things to run on the M1's neural engine instead of the GPU - it seems like the neural engine has ~3x more compute capacity and is specifically optimized for this type of computation.

Has anyone spotted any work allowing a mainstream tensor library (e.g. jax, tf, pytorch) to run on the neural engine?
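
As far as I know, the closest you can get is inference: coremltools can convert a trained Keras model to Core ML, and the Core ML runtime then decides on its own whether to schedule supported ops on the Neural Engine. A sketch:

```python
# Sketch: convert a Keras model to Core ML. At inference time, Core ML
# may schedule supported ops on the Neural Engine; there is no public
# API to force ANE execution, and no mainstream path for training on it.
import coremltools as ct
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
mlmodel = ct.convert(model)
mlmodel.save("MobileNetV2.mlmodel")
```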

sradman over 4 years ago
I categorize this as an exploration of how to benchmark desktop/workstation NPUs [1], similar to the exploration Daniel Lemire started with SIMD. Mobile SoC NPUs are used to deploy inference models on smartphones and IoT devices, while discrete NPUs like the Nvidia A100/V100 target cloud clusters.

We don't have apples-to-apples benchmarks like SPECint/SPECfp for the SoC accelerators in the M1 (GPU, NPU, etc.), so these early attempts are both facile and critical as we try to categorize and compare the trade-offs between the SoC/discrete and performance/perf-per-watt options available.

Power-efficient SoCs for desktops are new, and we are learning as we go.

[1] https://en.m.wikipedia.org/wiki/AI_accelerator

0x008 over 4 years ago
Well, putting out a tl;dr and then a graph that does not mention FP16/FP32 performance differences or anything related to TensorRT cannot be taken seriously if we talk about performance per watt. We need to see a comparison that includes multiple scenarios so we can determine something like a break-even point between Nvidia GPUs and the Apple M1 GPU, possibly even for several SotA models.
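
On the Nvidia side, controlling for the FP16/FP32 variable is a one-line policy change in recent TensorFlow (a sketch, assuming TF 2.4 or later):

```python
# Sketch: enable mixed precision (FP16 compute, FP32 variables) in
# TF 2.4+, so V100 numbers are comparable to a 16-bit run on the M1.
import tensorflow as tf
from tensorflow.keras import mixed_precision

mixed_precision.set_global_policy("mixed_float16")
model = tf.keras.applications.MobileNetV2(weights=None, classes=10)
print(model.dtype_policy)  # mixed_float16
```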

helsinkiandrew over 4 years ago
Can someone with more knowledge of Nvidia GPUs please say how much the V100 costs ($5-10K?) compared with the $900 Mac mini.

StavrosK over 4 years ago
I'm seeing a lot of M1 hype, and I suspect most of it is unwarranted. I looked at comparisons between the M1 and the latest Ryzens, and it looks like they're comparable? Does anyone know details? I only looked summarily.

fxtentacle over 4 years ago
"trainable_params 12,810"

*laughs*

(for comparison, GPT-3: 175,000,000,000 parameters)

Can Apple's M1 help you train tiny toy examples with no real-world relevance? You bet it can!

Plus, it looks like they are comparing Apples to Oranges ;) This seems to be 16-bit precision on the M1 and 32-bit on the V100. So the M1-trained model will most likely yield worse or unusable results, due to lack of precision.

And lastly, they are plainly testing against the wrong target. The V100 is great, but it is far from NVIDIA's flagship for training small low-precision models. At the FP16 the M1 is using, the correct target would have been an RTX 3090 or the like, which has 35 TFLOPS. The V100 only gets 14 TFLOPS because it lacks the dedicated TensorRT accelerator hardware.

So they compare the M1 against an NVIDIA model from 2017 that lacks the relevant hardware acceleration and, thus, is a whopping 60% slower than what people actually use for such training workloads.

I'm sure my bicycle would also compare very favorably against a car that is missing two wheels :p
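
Worth noting: 12,810 is exactly the size of a 1280 → 10 dense layer (1280 × 10 weights + 10 biases) on top of MobileNetV2's pooled 1280-dimensional features, which suggests only the final classifier was being trained. A sketch that reproduces the count, assuming that setup:

```python
# Sketch: a frozen MobileNetV2 base plus a 10-class head yields exactly
# 12,810 trainable parameters (1280 * 10 weights + 10 biases), matching
# the figure quoted above.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(32, 32, 3), include_top=False, weights=None, pooling="avg")
base.trainable = False
model = tf.keras.Sequential(
    [base, tf.keras.layers.Dense(10, activation="softmax")])

print(sum(tf.size(v).numpy() for v in model.trainable_variables))  # 12810
```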

SloopJon over 4 years ago
The first graph includes "Apple Intel", which is not mentioned anywhere else in the post. Any idea what hardware that was, and whether it used the accelerated TensorFlow?

tpoacher over 4 years ago
Betteridge says no.

JohnHaugeland over 4 years ago
"Can Apple's M1 do a good job? We cut things down to unrealistic sizes, turned off cores, and p-hacked as hard as we could until we found a way to pretend the answer was yes."