If I understand correctly, the instances available are containerized instances that users run (i.e., the system matches hosts to guests and takes a cut).

Beyond being dangerous on multiple levels, there doesn't seem to be any guarantee of storage capacity or network bandwidth/traffic. Having a multi-TFLOP GPU to train on is of little use if you can't get the training data onto the device in a reasonable amount of time, or hold that data in local storage.
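To make the bandwidth concern concrete, here's a rough back-of-envelope sketch; the dataset size and uplink speed below are illustrative assumptions, not figures from any actual listing:

```python
# Back-of-envelope: time to move a training dataset onto a rented GPU host.
# The 500 GB dataset and 100 Mbit/s residential uplink are assumed values.

def transfer_hours(dataset_gb: float, bandwidth_mbps: float) -> float:
    """Hours needed to transfer dataset_gb gigabytes at bandwidth_mbps megabits/s."""
    bits = dataset_gb * 8e9               # GB -> bits
    seconds = bits / (bandwidth_mbps * 1e6)  # bits / (bits per second)
    return seconds / 3600

# A 500 GB dataset over a typical 100 Mbit/s home uplink:
print(f"{transfer_hours(500, 100):.1f} hours")  # ~11.1 hours
```

At those numbers, just staging the data takes roughly half a day before any GPU cycles are used, which is the crux of the objection.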