GCE definitely has good tooling. The biggest complaint I have with their tooling, however, is the stupid business about authorizing your workstation/gcloud client with your Google account. The tool spins up a local web server and tells the Google OAuth flow to redirect you back to that local webserver address.

So you literally have to do the Google OAuth dance in a browser on the machine you're running the gcloud tool from. In my case, that was a server out on the internet. I had to open the link in lynx, for god's sake. I probably could have port-forwarded it, but still, what a pain in the ass. They need to do better. For better or for worse, just having keys (AWS) is way more straightforward: put these keys in this file, then you can use XYZ tool, etc. (Rough sketch of both flows at the end of this comment.)

Besides that, their actual performance/price ratio is on point. They've simplified the whole elastic-storage bit with persistent disks, but you're still going to pay for a bunch of IOPS if you need them. We run a 10TB disk under our master database because IOPS only scale with volume size, yet we use a few hundred GB at most. We haven't explored CloudSQL or any of their object storage. We strictly use them for compute and run all of our own class of machines.

Their firewall / addressing stuff is very simplistic compared to VPCs and security groups. A bigger gap, though not a showstopper, is that they have no split-horizon DNS like AWS does. When you see those ec2-xx-xx-xx-xx.compute-1.amazonaws.com addresses, they resolve to the private IP of the host if you query them from inside EC2, and to the public IP when queried from outside. So if you have a vanity domain you use as overlay DNS - machine2.prod.mycompany.io, or something - it's trivial with AWS: you just point it at the public DNS record given to the machine, and it works anywhere. (Quick illustration at the end.) That's not possible in GCE without doing it yourself, and when you add up split-horizon DNS with VPCs and security groups - and did I mention private DNS in Route53? - it makes for a really simple way to lay out DNS and get all your security on point. It's not a huge problem, but when we got to GCE, it was frustrating not to be able to snap that stuff together as easily as in AWS.

Beyond that, their outages / maintenance windows have historically been meh but are getting better. There have been more than a few hiccups of the "we pushed a router change accidentally, and now 33% of storage traffic is broken" variety. Zone stability used to be an issue, too: "Oh, we're going to deprecate this zone entirely, so you need to move all of your shit." Wat? It used to be just two US zones, so one of them going under for "maintenance" meant you had to situate your workload in a single zone, unless you wanted to deal with transatlantic/transpacific latency and all the fun that comes with it. It was bumpy; that's the most succinct way I can put it.

Their support is good and can be very helpful, but it's not super cheap. I don't think AWS is particularly cheap for that sort of "gold" support either, though. (I don't know specific numbers off the top of my head.) We've had their PD team (Persistent Disk, their EBS equivalent) reach out asking about our workload and why we were doing what we were doing with our resources, so they're clearly looking to improve the service based on customer needs... I just don't think they've found their rhythm, internally, to straddle customer needs and the overall stability of the platform. They haven't nailed the basics.
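To make the auth complaint concrete, here's roughly what the two setups look like on a headless box. The gcloud flag and the exact prompt text are from memory, and the AWS key values are obviously placeholders, so treat this as a sketch rather than gospel:

    # AWS: drop static keys in a file and every tool just works
    $ cat ~/.aws/credentials
    [default]
    aws_access_key_id = AKIAEXAMPLEKEY
    aws_secret_access_key = example-secret-key-goes-here

    # GCE: gcloud wants a browser-based OAuth dance. On a headless server
    # you fall back to copying a URL into a browser somewhere (or lynx,
    # in my case) and pasting a code back:
    $ gcloud auth login --no-launch-browser
    Go to the following link in your browser: https://accounts.google.com/o/oauth2/auth?...
    Enter verification code: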
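And the oversized-disk-for-IOPS thing looks something like this (disk name and zone are made up):

    # Provision 10TB purely to buy the IOPS that come with the size,
    # even though the database only uses a few hundred GB of it.
    $ gcloud compute disks create db-master-data \
        --size=10TB --type=pd-ssd --zone=us-central1-a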
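The split-horizon behavior is easiest to see with a couple of lookups (hostnames and IPs here are made up):

    # Same EC2 public hostname, different answers depending on where you ask from.
    # From outside AWS:
    $ dig +short ec2-54-12-34-56.compute-1.amazonaws.com
    54.12.34.56

    # From an instance inside EC2:
    $ dig +short ec2-54-12-34-56.compute-1.amazonaws.com
    10.0.1.23

    # So a vanity name only needs one CNAME and works from both sides:
    # machine2.prod.mycompany.io.  CNAME  ec2-54-12-34-56.compute-1.amazonaws.com.

In GCE you end up building that internal view yourself if you want the same effect.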