I am not sure about the technical merit of this link. Best of show:

"Google probably has the best networking technology on the planet."

How do we quantify this?

"This is important for several reasons. On EC2, if a node has a hardware problem, it will likely mean that you'll need to restart your virtual machine."

I would much rather build a service that can tolerate single-node outages than rely on "live migrations". I am also not sure what he meant by the SSD comparison; Amazon EBS volumes can be SSD-backed, but they are still network-mounted storage.

"Most of GCP's technology was developed internally and has high standards of reliability and performance."

Guess what AWS was developed for.

I like hand-wavy articles as much as the next guy, but it seems to me they picked GCP, wrote an article to justify it, and cooked up some numbers with single-dimension comparisons to make it look scientific. I wish I were working on single-dimension problems in real life, but it is always more complex than that. I am more interested in worst-case scenarios and SLAs than micro-benchmark results when comparing cloud vendors. Discarding Azure was purely arbitrary; in fact, Azure is more than happy to run Linux and other non-Windows operating systems, and I am not sure where he got the idea of a "Linux-second cloud".

https://azure.microsoft.com/en-us/blog/running-freebsd-in-azure/
They forgot to mention another nice feature of GCE: custom machine types. You can choose the number of vCPUs and the amount of memory, and also attach local (ephemeral, in AWS speak) storage in 375 GB increments.

This is a huge advantage. For instance, some of our jobs are computationally intensive but relatively light on memory. In GCE I can run a 32-core machine with 28 GB RAM and it will cost me $887.68/month (without any sustained use discounts).

In AWS, the closest option I have is c4.8xlarge (36 cores / 60 GB RAM), which will cost $1,226.10/mo.

And if I need local (ephemeral) storage in AWS, I'm severely limited in the instance types I can choose from, while in GCE you can attach local SSD to any instance type, including custom ones.

If you factor in per-minute billing in GCE and automatic sustained use discounts, we are talking about serious savings without any advance planning (which reserved instances require).

EC2 still has some advantages, such as GPU-equipped instances, but for our computational pipelines GCE is a clear winner for now (and Cloud Dataproc is so much nicer than EMR!).
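As a rough sketch of the math, using only the list prices quoted above (actual rates vary by region and change over time, and sustained use discounts would widen the gap):

```python
# Rough monthly cost comparison using the list prices quoted above.
# Figures are illustrative; real pricing varies by region and over time.

gce_custom_monthly = 887.68       # 32 vCPU / 28 GB custom machine type
aws_c4_8xlarge_monthly = 1226.10  # closest AWS fit: 36 vCPU / 60 GB

savings = aws_c4_8xlarge_monthly - gce_custom_monthly
print(f"Monthly difference per machine: ${savings:.2f} "
      f"({savings / aws_c4_8xlarge_monthly:.0%} cheaper on GCE)")
```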
"Azure was eliminated since its a Linux-second cloud"<p>I have a feeling this person never really dove into Azure, and just wrote it off because it had Microsoft services built in; and of course various sysadmins still have a strong bias against Microsoft, especially if they are Open Source advocates. Seems like the entire article is mostly just comparing AWS to GCP instead of giving an actual overview of the cloud landscape, just brushes off every other provider (that's not AWS or GCP) without diving into an actually reason -why.-
I see a lot of these sorts of articles, and I really have to bring this up, because I don't understand why people don't take it into account when they decide where to host their infrastructure:

Bandwidth on GCP (and AWS and most of the other providers) is really, really, really expensive: $0.12 per gigabyte, upwards of $0.19 per gigabyte for Asia. Paying $0.12 every time you send an Ubuntu ISO is crazy. A bored script kiddie could run up your bandwidth costs to thousands of dollars just for the hell of it. A DDoS could make you declare bankruptcy.

I have a server with OVH that I can theoretically push 100+ TB per month through and only pay $100. I get DDoS protection included. It may not be perfect DDoS protection, but it's not the $6,000/mo I'd need to pay Cloudflare to get the same thing with GCP (I need wildcards), plus the $0.12 per GB for anything not cached by them.

I know from people in the industry that they pay less than a cent per GB. Google, if you want to differentiate your cloud services, start charging better prices for bandwidth and do something about DDoS (Project Shield should be baked into your offerings). $0.02 would be reasonable and you'd still make a profit. That goes for all the other "great value" cloud services that are actually very expensive for anybody doing work that actually needs bandwidth on the internet.
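To put those egress rates in perspective, here is a back-of-the-envelope sketch using only the numbers quoted above (real bills use tiered pricing, so treat this as an approximation):

```python
# Back-of-the-envelope egress cost comparison; the rates are the ones quoted
# above, not an official price sheet, and real billing is tiered.

gcp_per_gb = 0.12          # USD per GB, US/EU egress
gcp_per_gb_asia = 0.19     # USD per GB, Asia egress
ovh_flat_monthly = 100.0   # flat price for a server that can push ~100 TB

monthly_tb = 100
monthly_gb = monthly_tb * 1000

print(f"GCP egress for {monthly_tb} TB: ${gcp_per_gb * monthly_gb:,.0f}")
print(f"OVH flat rate for the same traffic: ${ovh_flat_monthly:,.0f}")
print(f"Effective OVH rate: ${ovh_flat_monthly / monthly_gb:.4f}/GB")
```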
This should really be titled "A comparison of AWS and GCP."

It totally wrote off Azure (2nd in market size) because it's a "Linux second" cloud (what does that even mean in a virtualized world?).

Also, you forgot to analyze support and the SLAs around functionality. Good luck with GCP when something goes wrong or they decide to sunset a feature.
This is the best post on this subject I've read in a while. If you're building your application in a modern manner, half the stuff I see in ridiculous comparison posts shouldn't matter. Disposable infrastructure is a thing. If you're making a choice of "the best cloud" (k...) with that fact in mind, you should mostly be considering cost over time and capability to innovate on core offerings. Given unit economics and the continual drop in the price of commodity hardware, everything is going to become utility pricing, and services like Lambda will help you optimize your costs. Personally, I'd put all my chips on AWS over GCP.

Also, nice to see someone finally identify DigitalOcean as a B2C provider.
Another great article previously written by Quizlet on their Google Cloud efforts:

https://quizlet.com/blog/287-million-events-per-day-and-1-engineer-how-i-built-quizlets-data-pipeline-with-bigquery-and-go

TL;DR:
One engineer leveraged Google BigQuery's Streaming API to build a pipeline that analyzes ~300 million events per day in real time.
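Quizlet's pipeline was written in Go, but for a feel of what a streaming insert looks like, here is a minimal sketch using the google-cloud-bigquery Python client; the table and field names are made up, not theirs:

```python
# Minimal sketch of a BigQuery streaming insert via the Python client.
# Quizlet's actual pipeline was written in Go; table and fields are made up.
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.analytics.events"  # hypothetical table

rows = [
    {"event_type": "study_session_start", "user_id": 123, "ts": "2016-03-01T12:00:00Z"},
    {"event_type": "card_flip", "user_id": 123, "ts": "2016-03-01T12:00:03Z"},
]

errors = client.insert_rows_json(table_id, rows)  # streaming insert
if errors:
    print("Insert errors:", errors)
```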
I don't agree with the OP; however, GCP's sub-hour billing is nice. I need to process a lot of tasks that take longer than 5 minutes, which makes them unsuitable for AWS Lambda. With GCP, I only end up paying for a maximum of 10-15 minutes, which is a nice cost saving. Dear AWS: if you are reading this, match this and I will never ever leave you, not even for 10 minutes.

edit: typo
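A quick sketch of why per-minute billing matters for this kind of workload (the hourly rate below is an assumption for illustration, not a quoted price):

```python
# Illustration of per-minute vs. per-hour billing for a short batch task.
# The hourly rate is an assumption, not a real price sheet.

hourly_rate = 0.20   # USD/hour for a hypothetical instance
task_minutes = 12    # a task too long for Lambda's 5-minute limit

per_minute_cost = hourly_rate * task_minutes / 60  # GCE-style per-minute billing
per_hour_cost = hourly_rate * 1                    # billed as a full hour

print(f"Per-minute billing: ${per_minute_cost:.3f}")
print(f"Per-hour billing:   ${per_hour_cost:.3f}")
```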
Unless you rely on UDP!!!
https://code.google.com/p/google-compute-engine/issues/detail?id=87

We had a gold-level support ticket open about this for months, and they recently responded that they are making it a "feature request". Yes, proper UDP packet reassembly is a "feature request".
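For context, the problem bites any UDP datagram larger than the path MTU, since it has to be fragmented at the IP layer and reassembled on the receiving side. A minimal loopback sketch of sending such a datagram (it works locally because loopback has a large MTU; the linked issue is about those fragments not surviving GCE's network at the time):

```python
# Sends a 3000-byte UDP datagram. Over a typical 1500-byte Ethernet MTU this
# must be fragmented at the IP layer and reassembled by the receiver; the
# linked issue reports such fragments not being reassembled on GCE's network.
# (Loopback has a large MTU, so this example runs fine locally.)
import socket

HOST, PORT = "127.0.0.1", 9999
payload = b"x" * 3000  # larger than a 1500-byte Ethernet MTU

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind((HOST, PORT))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(payload, (HOST, PORT))

data, _ = receiver.recvfrom(65535)
print(f"received {len(data)} bytes")  # arrives only if fragments are reassembled
```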
When we were using GCP, it would live-migrate our DB almost once a day, which caused problems that were hard to diagnose. I don't believe anything comparable happens anywhere near that often on AWS. I don't know if GCP still does it as frequently, since this was about 1-2 years ago.
I wonder why Quizlet didn't just stick with Joyent but switch from SmartOS to the newer Linux-based infrastructure containers and/or Docker containers. Joyent put a lot of effort into reviving LX-branded zones on Illumos precisely to address the concern that this article raises with using an OS other than Linux.<p>Also, why dismiss DigitalOcean as a niche provider for hobbyists? The simple pricing, with lots of data transfer included, should appeal to a lot of businesses too.
My main concern with picking GCP would be that Google has a history of shutting down projects. I feel like they are more likely to shutter GCP than Amazon is with AWS.
My main concern with GCP is its unreliability. The BigQuery API randomly returns 4xx/5xx errors (that has been happening for over a year), the signed URL API returns a series of 5xxs every few days, CPU on instances goes up to 100% for some unknown reason (they stay in that state until manually rebooted), and there are many other bigger or smaller issues. And they never respond to your questions.

The UI is also weird (at least for my taste); for example, it is not possible to search instances by their addresses, it is not possible to spin up more than one instance at once, and so on. AWS has an ugly console, but it feels more productive.
Google's cloud does look extremely promising, but the one thing blocking us from migrating (which we would ultimately like, I think) is the lack of PostgreSQL support in CloudSQL. AWS's RDS is mature and pretty great.
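For what it's worth, standing up a managed Postgres on RDS is essentially one API call. A minimal boto3 sketch, where the identifier, size, and credentials are placeholders and a real setup would also configure VPC, security groups, and backups:

```python
# Minimal sketch of creating a managed PostgreSQL instance on RDS with boto3.
# Identifier, size, and credentials are placeholders; production setups also
# need VPC, security group, and backup configuration.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="example-postgres",  # hypothetical name
    Engine="postgres",
    DBInstanceClass="db.m4.large",
    AllocatedStorage=100,                     # GB
    MasterUsername="admin_user",
    MasterUserPassword="change-me-please",
)
```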
In a similar vein: http://journalofcloudcomputing.springeropen.com/articles/10.1186/s13677-015-0049-1
Does this line strike anyone else as just not true, based on their numbers?

"Quizlet is now the ~50th biggest website in the U.S."
As others have said, the simplicity of GCP has made it a pleasure to use.

It has a great UI (material design), and the UX makes sense (the dashboard shows you a summary of your resources, resources are organized by project, the notification/status icon animates when resources are changing, etc.). Going back to the AWS dashboard feels clunky.

There aren't a million different image types for each region and zone; simple, auto-updated base images are available for Ubuntu, CoreOS, etc.

It has easy-to-understand base machine types, and custom machine types with tailored specs can be created if needed. Product/service naming is clear (e.g. Compute Engine vs EC2, Cloud Storage vs S3).

Add-ons like one-click secure web SSH sessions and Cloud Shell are amazing: no more key pairs to worry about.

Google Container Engine, with a hosted Kubernetes master, is a great concept and more transparent than the closed-source AWS ECS.

Their on-demand per-minute pricing with sustained usage discounts is almost always significantly cheaper than AWS on-demand instances, and the discounts are applied automatically. Try the two calculators for yourself: Google (https://cloud.google.com/products/calculator/), AWS (https://calculator.s3.amazonaws.com/index.html).

Also, I have seen Google engineers all over HN (look at the comments on this post!) and other sites responding, commenting, and blogging; they seem actively engaged, while I have seen very little from AWS.

That is not to say GCP is without problems.
AWS IAM is still superior - it is easier to grant access to specific services for specific users, or have an account for a web server to upload to S3. Part of that is due to the fact that there is more plug-and-play tooling available for AWS today - boto comes to mind (boto GCP integration isn't as seamless as with AWS), as well as WAL-E.
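For example, with an instance profile attached, boto3 picks up credentials automatically and an upload to S3 is a couple of lines (bucket, key, and file names here are placeholders):

```python
# Sketch of the plug-and-play case: an EC2 instance with an IAM role that
# allows s3:PutObject can upload without any credentials in the code.
# Bucket, key, and file names are placeholders.
import boto3

s3 = boto3.client("s3")  # credentials come from the instance profile
s3.upload_file("/var/backups/db.dump", "example-backup-bucket", "db/db.dump")
```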
AWS's new certificate manager with free, auto-renewed SSL certs and installation on EC2 is awesome.
S3 is cheaper than Google Cloud Storage. AWS has a longer free tier.

Luckily, tools like Terraform allow us to mix and match services from each cloud.
Strange to do a whole long section on price-comparison without talking about AWS's spot instances, which are often much cheaper than the reserved instances.
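For reference, spot prices are queryable via the API, so a price-comparison section could easily have included them. A small boto3 sketch; the instance type and region are arbitrary choices:

```python
# Sketch of pulling recent spot prices with boto3, to compare against
# on-demand and reserved rates. Instance type and region are arbitrary.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_spot_price_history(
    InstanceTypes=["c4.8xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    MaxResults=5,
)
for p in resp["SpotPriceHistory"]:
    print(p["AvailabilityZone"], p["SpotPrice"], p["Timestamp"])
```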
Interesting read. Can someone explain this point to me? "Software-defined networking means that google.com appears to be one hop away"
One hop from...? Everywhere? Each server? How does that help? I understand 'traceroute' but not where that single hop to Google comes in and why that's great.
"From 2007 to 2015 Quizlet ran on Joyent, a cloud platform built on SmartOS, which is a Solaris fork (Joyent also offers Linux hosting)."

I would like to know why they made that choice.
So much drama about reserved instances. Different prices for different terms of service are a centuries-old business practice. If it's not for you, then don't buy it.