Kudos to Google for making moves here. Having spent the last year+ tackling GPUs in the datacenter, I'm super curious how custom sizing works. It's a huge technical feat to get eight GPUs running (let alone in a virtual environment), but the real challenge is making sure the blocks/puzzle pieces all fit together so there's no idle hardware sitting around (rough sketch of that packing math at the end of this comment). There's a reason Amazon's G/P instances require that you double the RAM/CPU when you double the GPUs. Another example is Digital Ocean's linear scale-up of instance types. In any case, we'll have to see what the pricing comes out to.

Shameless plug: if you want raw access to a GPU in the cloud today, shoot me an email at daniel at paperspace.com. We have people doing everything from image analysis to genomics to a whole lot of ML/AI.
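
To make the packing point concrete, here's a toy sketch. The host and instance shapes are made up for illustration, not any provider's real specs: when every size on the menu is an exact multiple of a 1-GPU slice, any mix of purchases tiles the host cleanly; a lopsided shape strands CPU/RAM that can never be sold alongside a GPU again.

    # Toy illustration of GPU host bin-packing (hypothetical numbers).
    from dataclasses import dataclass

    @dataclass
    class Shape:
        gpus: int
        vcpus: int
        ram_gib: int

    # Hypothetical physical host.
    HOST = Shape(gpus=8, vcpus=64, ram_gib=512)

    # "Proportional" menu: each size is an exact multiple of the 1-GPU slice.
    proportional = [Shape(n, 8 * n, 64 * n) for n in (1, 2, 4, 8)]

    # A lopsided shape: 4 GPUs but only a quarter of the host's CPU/RAM.
    lopsided = Shape(gpus=4, vcpus=16, ram_gib=128)

    def stranded_after(packed):
        """Resources left idle on the host once these instances are placed."""
        return Shape(
            HOST.gpus - sum(s.gpus for s in packed),
            HOST.vcpus - sum(s.vcpus for s in packed),
            HOST.ram_gib - sum(s.ram_gib for s in packed),
        )

    # Two proportional 4-GPU instances use the host exactly.
    print(stranded_after([proportional[2], proportional[2]]))
    # -> Shape(gpus=0, vcpus=0, ram_gib=0)

    # Two lopsided instances burn all the GPUs but strand 32 vCPUs / 256 GiB,
    # which can no longer be sold with a GPU attached.
    print(stranded_after([lopsided, lopsided]))
    # -> Shape(gpus=0, vcpus=32, ram_gib=256)

That stranded capacity is exactly the idle hardware a provider has to eat, which is why instance menus tend to scale everything linearly with GPU count.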