AWS and Azure offer instances with GPUs like the K80, which are expensive in themselves, and the instances are pricey too: a p2.xlarge, for example, costs $0.90 per hour. These cards are so expensive partly because of their strong double-precision capabilities.<p>However, for deep learning it is enough to use cards like the GTX 1080, which are much cheaper than something like a K80. I wonder why none of the providers like Linode or DigitalOcean have built servers with these cheaper cards and offered instances with them. I think there would be high demand for this kind of offering, given the current AI hype.<p>I've heard somewhere that Nvidia legally prohibits using their consumer cards in servers, but I couldn't find anything confirming it. The consumer cards also probably have a shorter expected lifetime than their server counterparts under 24/7 usage, but the huge difference in upfront cost could probably compensate for that.
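The lifetime-vs-upfront-cost tradeoff can be sketched with some back-of-the-envelope numbers. All prices below are illustrative assumptions, not quoted figures, and the replacement rate is a guess:

```python
# Rough break-even sketch: consumer GTX 1080 vs. server-class Tesla K80.
# Both prices are assumptions for illustration only.

GTX_1080_PRICE = 600.0   # assumed consumer retail price, USD
K80_PRICE = 4000.0       # assumed server-card price, USD

# Pessimistic assumption: under 24/7 load the consumer card fails
# and must be replaced once within the K80's service life.
consumer_cost = 2 * GTX_1080_PRICE
server_cost = 1 * K80_PRICE

print(consumer_cost, server_cost)   # 1200.0 4000.0
print(consumer_cost < server_cost)  # True
```

So even if you burn through two consumer cards per server-card lifetime, the upfront hardware cost is still several times lower, which is the core of the argument.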
Upgrading to new graphics cards has a <i>very</i> high fixed cost, which may not be offset by the decrease in variable costs from improved efficiency.