Hey peeps, full disclosure: I work as one of Linode's R&D engineers. I want to try to get to as many of these as I can.<p>One of the biggest questions is: why the Quadro RTX 6000? A few things:<p>1. Cost: it has the same performance as the 8000. The difference is 24 more GB of RAM, which comes at a steep premium. Cost is important to us, as it allows us to hit a more affordable price point.<p>2. We have all heard of or used the Tesla V100, and it's a great card. The biggest issue is that it's expensive. So one of the things that caught our eye is that the RTX 6000 has fast single-precision, Tensor, and INT8 performance. Plus, the Quadro RTX supports INT4.
<a href="https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/quadro-product-literature/quadro-rtx-6000-us-nvidia-704093-r4-web.pdf" rel="nofollow">https://www.nvidia.com/content/dam/en-zz/Solutions/design-vi...</a>
<a href="https://images.nvidia.com/content/technologies/volta/pdf/tesla-volta-v100-datasheet-letter-fnl-web.pdf" rel="nofollow">https://images.nvidia.com/content/technologies/volta/pdf/tes...</a>
Yes, these are manufacturer's numbers, but they gave us pause. As always, your mileage may vary.<p>3. RT cores. This is the first time (to my knowledge) that a cloud provider is bringing RT cores to the market. There are many use cases for RT that have yet to be explored. What will we come up with as a community?!<p>Now, with all that being said, there is a downside: FP64, aka double precision. The Tesla V100 does this very well, whereas the Quadro RTX 6000 does poorly in comparison. Although those workloads are important, our goal was to find a solution that fits the vast majority of use cases.<p>So is the marketing true that you need a Tesla to get the best performance for ML/AI/etc.? Or is the Tesla starting to show its age? Give the cards a try; I think you'll find these new RTX Quadros with the Turing architecture are not the same as the Quadros of the past.
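If you want to sanity-check those datasheet numbers yourself, a minimal PyTorch matmul benchmark like the sketch below will do it (a rough harness I'm writing from memory, not our internal tooling; sizes and iteration counts are arbitrary):

    import time
    import torch

    def bench_matmul(dtype, n=4096, iters=50):
        # Time n x n matrix multiplies at the given precision on the GPU.
        a = torch.randn(n, n, device="cuda", dtype=dtype)
        b = torch.randn(n, n, device="cuda", dtype=dtype)
        a @ b                          # warm-up (cuBLAS init, kernel selection)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            a @ b
        torch.cuda.synchronize()
        # Each matmul is roughly 2 * n**3 floating-point ops.
        return 2 * n**3 * iters / (time.time() - start) / 1e12  # TFLOP/s

    for dtype in (torch.float64, torch.float32, torch.float16):
        print(dtype, f"{bench_matmul(dtype):.1f} TFLOP/s")

You'd expect FP64 to come out at a small fraction of FP32 on the RTX 6000, and much closer to it on a V100.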
I would go with Hetzner: <a href="https://www.hetzner.com/dedicated-rootserver/ex51-ssd-gpu" rel="nofollow">https://www.hetzner.com/dedicated-rootserver/ex51-ssd-gpu</a><p>A GTX 1080 for $100 a month. Granted, it is older, but it still works for DL. Let's say you do 10 experiments a month at ~20 hours each. That's $0.50/hour, and I don't think the Linode card is 3 times faster.<p>If you then do even more training, the effective price drops even further.<p>//DISCLAIMER: I do not work for them, but I used it for DL in the past and it was definitely cheaper than GCP or AWS. If you have to run lots of experiments (>1 year's worth), go with your own hardware, but do not underestimate the convenience of >100 MByte/s when you download many big training sets.
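To make that arithmetic explicit, here's the cost model from the paragraph above as a few lines of Python (the usage numbers are the hypothetical 10 x ~20 h from my example; adjust for your workload):

    monthly_rent = 100.0    # $/month, flat dedicated-server price
    used_hours = 10 * 20    # 10 experiments x ~20 hours each
    print(f"effective rate: ${monthly_rent / used_hours:.2f}/hour")  # $0.50/hour
    # With flat rent, more usage only lowers the effective hourly rate;
    # a metered cloud at, say, $1.50/h would already cost $300 for those 200 h.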
Still way too much money when a comparably specced 2x 2080 Ti machine under my desk costs less than 2.5 months of their billing rate, and the 4x 1080 Ti servers in my garage cost about 1 month of their 4-GPU machine _and_ have more SSD storage. This pricing is totally insane, especially if it is not billed per-minute (which in Linode's case it is not) and there are no cheaper preemptible/spot instances.
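For anyone who wants to redo the break-even math with their own quotes, a sketch (all prices are hypothetical placeholders, not actual quotes):

    box_cost = 2500.0        # one-time cost of the 2x 2080 Ti box (placeholder)
    monthly_cloud = 1000.0   # comparable always-on cloud instance (placeholder)
    months = box_cost / monthly_cloud
    print(f"box pays for itself after {months:.1f} months of 24/7 rental")
    # ~2.5 months under these assumptions; power, cooling, and resale
    # value of the owned hardware are ignored on both sides.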
Looks amazing. Linode has worked really well for me over the years.<p>One thing I noticed when recently trying to get a GPU cloud instance: the high core counts are usually locked until you put in a quota-increase request. Then sometimes they want to call you.<p>So I wonder if Linode will have to do that, or if they can figure out another, more convenient way to handle it.<p>I also wonder if Linode could somehow get Windows on these? I know they generally don't do anything other than Linux, though. My graphics project, where I am trying to run several hundred ZX Spectrum libretro cores on one screen, only runs on Windows.
That pricing isn't too bad. They come with decent SSD storage too, which is key for the large datasets that make a GPU instance worthwhile.<p>Linode skews towards smaller-scale customers with many of their offerings, so I think the GPUs here make sense. The real test will be how often they upgrade them and what they upgrade them to.
Interesting to see another cloud provider go with Quadro chips. NVIDIA repackages the same silicon under several different brands (GeForce, Quadro, GRID, Tesla), and we (<a href="https://paperspace.com" rel="nofollow">https://paperspace.com</a>) have found Quadro to offer the best price/performance value. Aside from minor differences in performance characteristics, such as FP16 support in the Tesla family, Quadros can run all of the same workloads, e.g. graphics, HPC, deep learning, etc. If you're interested in a similar instance for less $/hr, check out the Paperspace P6000.
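If you're unsure whether a given Quadro instance exposes what your training stack needs, a quick capability probe is usually enough (a sketch; runs on any CUDA box with PyTorch installed):

    import torch

    print(torch.cuda.get_device_name(0))            # e.g. "Quadro RTX 6000"
    major, minor = torch.cuda.get_device_capability(0)
    print(f"compute capability {major}.{minor}")    # Turing parts report 7.5
    # Tensor cores (and fast FP16) arrived with compute capability 7.0+.
    print("tensor cores:", (major, minor) >= (7, 0))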
Isn't AWS cheaper?<p>edit: I could be wrong; I thought I read AWS was $0.65 an hour for deep-learning GPU use.
edit2: Did a quick look; the $0.65 doesn't include the actual instance, so it's around $1.80 an hour on the low end. I think this is cheaper.
Can these be used for crypto mining at any level of efficiency? I was able to mine GRLC back in the day on AWS spot instances at a VERY mild degree of profitability.
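A back-of-envelope profitability check is straightforward; every input below is a hypothetical placeholder you'd replace with live numbers from a mining calculator:

    daily_revenue_per_gpu = 1.20    # $/day the coin pays at current difficulty (placeholder)
    hourly_instance_cost = 1.50     # $/hour for the GPU instance (placeholder)
    profit = daily_revenue_per_gpu - hourly_instance_cost * 24
    print(f"daily profit per GPU: ${profit:.2f}")   # negative => not worth it
    # Spot/preemptible pricing is usually the only way this goes positive.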