I've considered setting up my own GPU server to reduce costs, but if you've got P100s for $200/month, that means you're paying off the hardware in about 4 years? I can't compete with that (my back-of-the-envelope math is in the first sketch at the end). Building my own hardware was much easier to justify against AWS: they're more expensive, so the payoff period is shorter. I was waiting on Google Cloud's pricing to make any decision there:

https://cloud.google.com/gpu/

The only thing that concerns me is that your GPU+ instances have only a single P100, while someone like Google promises to let you attach up to 8 to a single machine. So if I wanted a single powerful machine for experimental work, the cloud providers are more expensive. But I'd have the same problem buying my own hardware, because those cards are expensive.

Since you offer only a single GPU per instance, have you done any performance testing comparing consumer GPUs like the GTX 1080 with the commercial ones? I believe the commercial cards have two advantages: 1. better interconnects between multiple GPUs (e.g. NVLink) and 2. better floating-point performance at the half precision (FP16) used in deep learning. Advantage #1 shouldn't matter with only a single GPU. I think AWS only has K80s, so that's in your favor. (The second sketch at the end shows the kind of benchmark I have in mind.)

What motherboards/RAM/CPU/etc. are you using? If my estimate is right and you're pricing for a 4-year payoff just for capex, listing all of the hardware would make it an easy sell.
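
Here's the payoff math I'm doing, as a minimal sketch; the $9,600 hardware cost is my own assumption (a hypothetical all-in figure for a P100 node), not a quoted price:

    # Back-of-the-envelope payoff calculation for the "4 years" guess above.
    hardware_cost = 9600   # assumed all-in cost of a P100 node, USD -- hypothetical
    monthly_price = 200    # the $200/month price quoted above

    payoff_months = hardware_cost / monthly_price
    print(f"{payoff_months:.0f} months (~{payoff_months / 12:.1f} years)")
    # -> 48 months (~4.0 years)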
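
And this is the kind of micro-benchmark I'd run for the FP16 question, as a rough sketch; it assumes PyTorch with CUDA and uses a big matmul as a stand-in for a real training workload:

    import time
    import torch

    def bench(dtype, n=4096, iters=50):
        """Time repeated n x n matmuls on the GPU and report effective TFLOPS."""
        a = torch.randn(n, n, dtype=dtype, device="cuda")
        b = torch.randn(n, n, dtype=dtype, device="cuda")
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            a @ b
        torch.cuda.synchronize()
        elapsed = time.time() - start
        # One n x n matmul is roughly 2*n^3 floating-point operations.
        print(f"{dtype}: {2 * n**3 * iters / elapsed / 1e12:.2f} TFLOPS")

    bench(torch.float32)
    bench(torch.float16)  # P100 doubles FP16 throughput; consumer Pascal cards throttle it

On a P100 the float16 number should come out well ahead of float32, while on a GTX 1080 it should crater, which is why advantage #2 matters even with a single GPU.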