What's your licensing situation with Nvidia regarding their prohibition [1] on datacenter deployment of 'consumer' cards?<p>[1] <a href="https://news.ycombinator.com/item?id=16002068" rel="nofollow">https://news.ycombinator.com/item?id=16002068</a>
If I understand correctly, the instances on offer are containers that run on hosts' machines (i.e., the system matches hosts to guests and takes a cut).<p>Beyond being dangerous on multiple levels, there doesn't seem to be any guarantee of storage or of network bandwidth/traffic. Having a multi-TFLOP GPU to train with is hardly useful if you can't get the training data onto the device in a reasonable amount of time, or hold that data in local storage.
With more GPU-in-the-cloud offerings coming online, is there a utility to dump GPU memory to see if your cloud provider has wiped it between customers?
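A minimal sketch of such a probe, assuming a CUDA-capable instance: allocate device memory without initializing it, copy it back, and count non-zero bytes. (This is illustrative only; modern drivers may scrub allocations on their own, so an all-zero dump does not prove the provider deliberately wiped memory, and non-zero bytes are only suggestive of leftover data.)

```cuda
// Probe freshly allocated GPU memory for leftover (non-zero) contents.
// Build: nvcc probe.cu -o probe
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    const size_t kBytes = 64UL * 1024 * 1024;  // probe 64 MiB at a time

    void *dev = nullptr;
    if (cudaMalloc(&dev, kBytes) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    unsigned char *host = (unsigned char *)malloc(kBytes);
    // Deliberately no cudaMemset: we want whatever the allocator hands us.
    if (cudaMemcpy(host, dev, kBytes, cudaMemcpyDeviceToHost) != cudaSuccess) {
        fprintf(stderr, "cudaMemcpy failed\n");
        return 1;
    }

    size_t nonzero = 0;
    for (size_t i = 0; i < kBytes; ++i)
        if (host[i] != 0) ++nonzero;

    printf("%zu of %zu bytes non-zero\n", nonzero, kBytes);

    free(host);
    cudaFree(dev);
    return 0;
}
```

Repeating the allocation in a loop until the device is nearly full would sweep most of the physical memory a prior tenant could have touched.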