The P3 instances are the first widely and easily accessible machines that use the NVIDIA Tesla V100 GPUs. These GPUs are straight up scary in terms of firepower.
To give a sense of the speed-up over the P2 instances, from a research project of mine:

+ P2 (K80), single GPU: ~95 seconds per epoch

+ P3 (V100), single GPU: ~20 seconds per epoch

Admittedly this isn't exactly fair to either GPU - the K80 cards are straight up ancient now, and the Volta isn't sitting at 100% GPU utilization as it burns through the data too quickly (CUDA kernel launch and Python overhead suddenly become major bottlenecks).
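If you want to measure this yourself, a minimal timing sketch (my own illustration, not the benchmark code): CUDA kernel launches are asynchronous, so synchronize before reading the clock or the fast GPU will look even faster than it really is.

    import time
    import torch

    def time_epoch(run_epoch):
        # run_epoch is any callable that performs one epoch of training.
        # Synchronize before and after, since CUDA launches are async and
        # time.time() would otherwise return before the GPU has finished.
        torch.cuda.synchronize()
        start = time.time()
        run_epoch()
        torch.cuda.synchronize()
        return time.time() - start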
Still, it gives you an indication of what a leap this is if you're using GPUs on AWS.
Oh, and the V100 comes with 16GB of (faster) RAM compared to the K80's 12GB, so you win there too.

For anyone using the standard set of frameworks (TensorFlow, Keras, PyTorch, Chainer, MXNet, DyNet, DeepLearning4j, ...), this type of speed-up will likely require you to do nothing - except throw more money at the P3 instance :)

If you really want to get into the black magic of speed-ups, these cards also feature full FP16 support, which means you can double your TFLOPS by dropping from FP32 to FP16. You'll run into a million problems during training due to the lower precision, but these aren't insurmountable and may well be worth the pain for the additional speed-up / better RAM usage (a minimal sketch of the usual recipe follows the links below).

- Good overview of Volta's advantages compared to even the recent P100: https://devblogs.nvidia.com/parallelforall/inside-volta/

- Simple table comparing V100 / P100 / K40 / M40: https://www.anandtech.com/show/11367/nvidia-volta-unveiled-gv100-gpu-and-tesla-v100-accelerator-announced

- NVIDIA's V100 GPU architecture white paper: http://www.nvidia.com/object/volta-architecture-whitepaper.html

- The numbers above were produced with my PyTorch code at https://github.com/salesforce/awd-lstm-lm and the Quasi-Recurrent Neural Network (QRNN) at https://github.com/salesforce/pytorch-qrnn, which features a custom CUDA kernel for speed.
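To make the FP16 recipe concrete, here's a hedged sketch with toy shapes (not the actual awd-lstm-lm training code): keep an FP32 "master" copy of the weights, scale the loss so small gradients don't underflow in FP16, and take the optimizer step in FP32.

    import torch
    import torch.nn as nn

    # Toy FP16 training step: FP32 master weights + static loss scaling.
    model = nn.Linear(1024, 1024).cuda().half()   # weights/activations in FP16
    master = [p.detach().clone().float() for p in model.parameters()]
    opt = torch.optim.SGD(master, lr=0.1)
    scale = 128.0                                 # static loss scale

    x = torch.randn(64, 1024).cuda().half()
    loss = model(x).float().pow(2).mean()         # reduce the loss in FP32
    (loss * scale).backward()                     # scaled backward pass

    # Unscale gradients into the FP32 master copy, step, then copy back.
    for p, m in zip(model.parameters(), master):
        m.grad = p.grad.detach().float() / scale
    opt.step()
    with torch.no_grad():
        for p, m in zip(model.parameters(), master):
            p.copy_(m.half())

The loss scale of 128 is arbitrary; in practice you tune it (or adjust it dynamically) so gradients neither underflow nor overflow in FP16.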
Hi guys, Dillon here from Paperspace (https://www.paperspace.com). We are a cloud that specializes in GPU infrastructure and software. We launched V100 instances a few days ago in our NY and CA regions, and it's much less expensive than AWS.

Think of us as the DigitalOcean for GPUs, with simple, transparent pricing and effortless setup & configuration:

AWS: $3.06/hr per V100

Paperspace: $2.30/hr, or $980/month for dedicated (an effective hourly rate of only $1.30/hr)

Learn more here: https://www.paperspace.com/pricing

[Disclosure: I am one of the founders]
But where are the C5 instances? It's been 11 months since Amazon announced the Skylake-based C5s and we're still waiting!

https://aws.amazon.com/about-aws/whats-new/2016/11/coming-soon-amazon-ec2-c5-instances-the-next-generation-of-compute-optimized-instances/
More details in my blog post at https://aws.amazon.com/blogs/aws/new-amazon-ec2-instances-with-up-to-8-nvidia-tesla-v100-gpus-p3/
Slightly off-topic, but I'm curious: NVIDIA Volta is advertised as having "tensor cores" - what does it take for a programmer to use them? Will typical TensorFlow or Caffe code take advantage of them? Or should we wait for new optimized versions of the ML frameworks? For example, would something like the sketch below hit them automatically?
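(A toy example of what I mean; whether cuBLAS actually routes this through the tensor cores is exactly my question.)

    import torch

    # Plain FP16 matmul. My understanding is that on Volta, cuBLAS can
    # dispatch this to the tensor cores when both operands are half
    # precision and the dimensions are multiples of 8 - unverified.
    a = torch.randn(4096, 4096).cuda().half()
    b = torch.randn(4096, 4096).cuda().half()
    c = a @ b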
Hmm, just tried to spin up a p3.2xlarge in Ireland but hit the instance limit check (it's set at 0). Went to request a service limit increase, but P3 instances aren't listed in the drop-down box :(
Looks like Paperspace announced Volta support yesterday: https://blog.paperspace.com/tesla-v100-available-today/ One nice thing here is that you can do monthly plans instead of reserved instances on AWS, which require a minimum of $8-17k upfront. Really great to see cloud providers adopting modern GPUs.
An exaflop of mixed-precision compute for $250M over 3 years. That's ballpark what the HPC community is paying for their exaflop-class machines.

You'd still build your own for that money, I think, but it's an interesting data point.
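The back-of-envelope version (my assumptions: the 125 TFLOPS mixed-precision tensor-core rating per V100, 8 GPUs per p3.16xlarge, and a 3-year term):

    # ~1,000 p3.16xlarge instances reach one exaflop of mixed precision
    instances = 1e18 / (125e12 * 8)        # = 1000.0
    hours = 3 * 365 * 24                   # 3 years, ~26,280 hours
    effective_hourly = 250e6 / (instances * hours)
    print(round(effective_hourly, 2))      # ~9.51 dollars/hr per instance

That works out to roughly reserved-instance territory per instance, hence the ballpark.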