I have used it for work-related reasons and indeed the service is quite nice. But I don't use Google Cloud Run for personal projects for two reasons:<p>- No way of limiting the expenses AFAIK. I don't want the possibility of having a huge bill on my name that I cannot pay. This unfortunately applies to other clouds too.<p>- The risk of being locked out. For many, many reasons (including the above), you can get locked out of the whole ecosystem. I depend on Google for both Gmail and Android, so being locked out would be a disaster. To use Google Cloud, I'd basically need to migrate out of Google everywhere else first, which is a huge initial cost.<p>Both of those are basically risks. I'd much rather overpay $20-50/month than accept a small risk of having to pay $100k or being locked out of Gmail/my phone. I <i>cannot</i> have a $100k bill to pay; it'd destroy everything I have.<p>Also, I haven't needed it so far. I've had a Node.js project on the front page of HN, with 100k+ visits, and the Heroku hobby server used 30% of the CPU with peaks at 50%. Writing the software decently does pay off.
I won't use this for the simple reason that I bought into the Google App Engine stack in the past and it really bit me, for several reasons:<p>They force-upgraded the Java version. The problem was that their own libraries didn't work with the new version and we had to rewrite a ton of code.<p>It ended up being insanely expensive at scale.<p>We were totally locked in to their system and the way it did things. This would be fine, but they would also deprecate things we relied upon fairly regularly, so there was constant churn just to keep the system running.<p>Support was extremely weak for some parts of the system. Docs for Java were outdated compared with the Python docs.<p>Support (that we paid for) literally said to us "oh... you're still using App Engine?"<p>Finally, they can jack up the pricing at any time and there really isn't anything you can do - you can't switch to an alternative App Engine provider.<p>Certain pages in the management console were completely broken due to a JS error (on any browser). In order to use them I had to manually patch the JavaScript. Six months and several reports later, it was still broken.<p>Oh, and when we got featured on a bunch of news sites, our "scalable" site hit the billing threshold and stopped working. No problem, just update the threshold, right? Except it takes twenty-four hours (!) for the billing stats to actually update. So we were down on the one day that "unlimited scaling" actually mattered to us.<p>I'm never again choosing a single-vendor lock-in solution. Especially since it's not limited to App Engine - Google once raised the fees for the Maps API from thousands a year to eight figures (seriously) a year with barely any notice.
The thing I really want out of these services is the ability to set a payment cap. It's probably never going to be an issue, but I have anxiety, and I can't sleep easily knowing that if I fuck up, or if someone sinister abuses my application or whatever, I may be stuck with a giant bill.
I've been using Cloud Run for my GPT-2 text generation apps (<a href="https://github.com/minimaxir/gpt-2-cloud-run" rel="nofollow">https://github.com/minimaxir/gpt-2-cloud-run</a>) in order to survive random bursts, and also for small Twitter bots (<a href="https://github.com/minimaxir/twitter-cloud-run/tree/master/human-curated" rel="nofollow">https://github.com/minimaxir/twitter-cloud-run/tree/master/h...</a>) which can be invoked via Cloud Scheduler to take advantage of the efficiency benefits. It has been successful in those tasks.<p>The only complaint I have with Cloud Run now (after many usability updates since the initial release) is that there is no IP rate-limiting to prevent abuse, which has been the primary cause of unexpected costs. (Due to how Cloud Run works, IP rate-limiting has to be on Google's end; implementing it on your end via a proxy eliminates the ease-of-use benefits.)
Can't you see this article is a paid advertisement for Google Cloud? Just like those 1-hour-long videos on YouTube where they show how pilots of a specific airline fly a plane and how well it is all organized, or a 1-hour-long video of a German car factory.<p>Just reading this line makes you suspicious:
"I have built hundreds of side projects over the years "<p>really? Hundreds?<p>And then below:<p>"I am yet to have a side project go ‘viral’"<p>Out of hundreds of projects over the years, none of them went viral?<p>And if you look at his "blog" you will see it has 3 entries in total: <a href="https://alexolivier.me/" rel="nofollow">https://alexolivier.me/</a>
This article fails to mention the issue of needing a database. It doesn't matter how seamlessly your application can scale if your data backend won't scale with it.<p>They mention Cloud SQL, which is of course instance based and would run into scaling issues if your app got suddenly hammered. Not to mention, the cost isn't $0 if your app gets 0 traffic, you are going to have to pay to keep that running around the clock.<p>I realize some applications are very heavy on the app side and light on needing to hit the DB, but in my experience, that isn't very common.
As a noob I have a question: what advantage do I get from using Docker compared to a service like Heroku, where I just push the application to them and don't bother with Docker at all?<p>To me, with my limited understanding, this seems like just another step.<p>Granted, at work I use Docker for specific reasons that I understand and can configure for... but for personal projects the need just never comes up.<p>Yet on the other hand I see more and more examples that involve Docker where I'm not sure it needs to be there / what the advantage is.<p>Obviously there must be some strategic choices / advantages I'm missing.
Interesting. I've been looking at options too & opted for essentially the opposite: get a big(ish) VPS and stack everything on top of each other with Docker behind an nginx reverse proxy.<p>So far so good. Managed to get gitlab, prometheus, grafana and ghost working this weekend, which I'm pretty chuffed about.<p>Not as clean as OP's, but the intention was learning, so sacrifices on convenience are acceptable.
What’s the advantage of GCR over AWS Fargate/ECS? I’ve been running an app on ECS for a couple months now and have been pretty happy with the ease of set-up, load-balancing, auto-scaling etc, though there are still kinks I’m figuring out (SSHing into containers to perform database management, for example, or deploying updated tasks without downtime). Is the main selling point of GCR just its price? I haven’t found ECS pricing to be an issue (but I’m also not running anything at scale, and I do pay more than a few cents a month — but still under 10 bucks).
AFAIK, PaaS solutions like Heroku have a similar way of working, at least for side projects. Here, you deploy a container and Google runs it somewhere; with Heroku, your application is containerized every time you push it. Similar to here, Heroku's free hobby containers also go to sleep after ~30min of inactivity.
This is exactly the service that Azure needs and doesn't seem to have: while there is a consumption plan for functions, that's about it, and App Service is incredibly expensive for what you get.
If very few people visit a side project, that's probably bad for Google search: the containers will usually be cold, so the crawler will be seeing slow response times, and Google can penalize you for that in search results.
I was tinkering with it recently. My problem was .. support.<p>After a lot of double-checking on my part, I was finally convinced that Cloud Run messed things up (in my case: A Content-Type header was changed from ThingsISend to text/html and broke every client). The issue tracker is hard to find and more or less abandoned, SO wasn't helpful (but had people that .. love Google Cloud Run and didn't believe me) and only after tweeting a bit someone looked into it.<p>The issue is fixed now, which is nice. The way to get there was ... questionable?
Excuse my ignorance. if a container hasn't been hit in a long time, how long does it take to serve the first request back? Is it spun up or sort of hot paused?
For personal projects and client work I prefer a VPS with fixed pricing. Many have quoted Digital Ocean on this thread but you get much more for your money with a VPS from Hetzner.com. $11.75/month = 8GB RAM, 2 CPUs, 80GB SSD and 20TB of traffic. That's 4 times the RAM, 2 times the CPU and 10 times the traffic compared with a $10 Digital Ocean VPS.
Do the kind of people who have tech jobs and have side projects really need it to cost nothing to run side projects?<p>I don't understand this obsession with running projects for "nothing" and contorting software architecture to do so.<p>$5/mo for a Digital Ocean droplet or $50/month for a beefier VPS (or even dedicated hardware if you know where to look[0]) is not much compared to the normal monthly expenditure of people in tech on average.<p>If it was all for convenience/efficiency that'd be one thing, but learning "Google Cloud Run" teaches you nothing about system maintenance, limits your understanding of the full stack, and encourages a myopic view of development, all so that at some point, when Google/AWS/Azure raises the temperature of the water in the pot, everyone starts wondering "how did running software get so expensive?".
Sounds quite similar to Azure Container Instances, except ACI seems to be cheaper (at a glance). ACI is also not HTTP-only like this seems to be, but you do need to combine it with a Function App (Azure's serverless offering) if you want to trigger containers using HTTP.
We evaluated using it at work to replace App Engine Flex and unfortunately it was not ready for our use case:
1. There is no liveness/readiness check nor a way to move the traffic between versions, so you'll have downtime at every deployment
2. The only way to rollback to a previous version is to redeploy, no support from the web interface
3. There is no way to SSH to an instance (not so important)
4. You can't connect to Google Cloud MemoryStore (hosted Redis)<p>Scale to zero + instant deploys would make Cloud Run a great candidate for staging environments deployed on every pull request, but it's not quite there yet.
For my next side project, I'd like to test <a href="https://render.com/" rel="nofollow">https://render.com/</a>
It seems like a cheaper alternative to Heroku.
Any feedback on this?
There must be an added cost for a managed database or similar, right?<p>This sounds a lot like AWS Lambda (except nicer thanks to just running any container). In AWS's case, you need to pay extra for RDS, Redis, and any other persistence.
> The service will create more and more instances of your application up to the limit you defined (currently the cap is 1000 instances).<p>> As long as you have architected your application to be stateless - storing data in something like a database (eg CloudSQL) or object storage (eg Cloud Storage) - then you are good to go.<p>Won’t this just defer the scalability issue to the SQL part of the application? It’s nice that the stateless REST part can be scaled almost infinitely, but if the SQL part doesn’t offer the same scalability, what’s the point? Last time I looked, CloudSQL didn’t offer this kind of scalability.
Another option is Google Cloud AppEngine. It’s a little more limited in terms of languages that are supported, but the free tier is generous enough that I have never paid anything to run backends for side projects.
Is anyone familiar with pros/cons vs. AWS ecs?<p>I've used ECS a few times, it's pretty nice.<p><a href="https://aws.amazon.com/ecs/" rel="nofollow">https://aws.amazon.com/ecs/</a>
I often find myself wondering what "at scale" means. It is used by so many different providers and it gets confusing.<p>When all you give the service is a container, how does it know how to scale your project? I presume it gives it more CPU and more RAM automagically, but does that really provide "at scale"? I think of "at scale" as meaning a really large amount of traffic.<p>Spinning up new instances of your container is possible, but I think I would like an API where code can somehow interact with the scaling mechanism.<p>I guess "stateless" gives us some information.
I am using this and it is a much better development and deployment workflow when compared to Cloud Functions. The only thing it lacks is bigger RAM options for ML workloads.
We've been using Firebase and one of the issues we had is that methods become "cold": if it's not used for a few minutes, the latency of the next call is unpredictable and could be in tens of seconds. The way Cloud Run is described ('Only pay when your code is running') suggests that it may suffer from it too. Does anyone know if it actually has the problem?
started using this... pretty awesome vs the wall of yaml it replaces. Not suitable for all workloads (max 1cpu/2 gigs ram, 4 minute max pod startup time, can't do background work when not serving a request). But it replaces cert-manager, ingress-nginx, oauth2-proxy, k8s service, k8s deployment, k8s secret, k8s configmap, k8s hpa, k8s pdb, helm charts and cluster management.
This has already been said, but because this is Google you have no idea when it's going to get killed, and most things from Google do get killed.<p>So I would suggest AWS: API Gateway + Lambda. It's basically free for side projects, and setting it up and operating it is trivial. It also scales if you receive a lot of traffic (though at that point you're going to have to shell out real money).
As an illustration of pricing:<p>If your deployment can run in 256MB of RAM w/ 1 vCPU, handles an average request in under 250ms, transfers 200KB or less per request on average, and you get 2 requests/minute on average to your site:<p>The cost is around $2/month USD, which I feel is a more likely scenario for a side project vs the "pennies a month" the OP claims.
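For anyone who wants to sanity-check a figure like this, here is a minimal back-of-the-envelope sketch of the arithmetic in Python. The per-unit rates are my own assumptions (roughly the published Cloud Run list prices at the time, ignoring the free tier), so treat the result as an order-of-magnitude estimate rather than an exact bill:

```python
# Rough Cloud Run cost estimate for the scenario above.
# All per-unit rates are ASSUMED list prices and may have changed;
# the free tier is ignored.

VCPU_PER_SECOND = 0.000024       # USD per vCPU-second (assumed)
GIB_PER_SECOND = 0.0000025       # USD per GiB-second of memory (assumed)
PER_MILLION_REQUESTS = 0.40      # USD per million requests (assumed)
EGRESS_PER_GB = 0.12             # USD per GB of internet egress (assumed)

requests_per_month = 2 * 60 * 24 * 30        # 2 req/min for 30 days = 86,400
billed_seconds = requests_per_month * 0.25   # 250 ms average per request

cpu_cost = billed_seconds * 1 * VCPU_PER_SECOND              # 1 vCPU
mem_cost = billed_seconds * 0.25 * GIB_PER_SECOND            # 256 MB = 0.25 GiB
req_cost = requests_per_month / 1_000_000 * PER_MILLION_REQUESTS
egress_cost = requests_per_month * 200 / 1_000_000 * EGRESS_PER_GB  # 200 KB each

total = cpu_cost + mem_cost + req_cost + egress_cost
print(f"CPU ${cpu_cost:.2f} + RAM ${mem_cost:.2f} + requests ${req_cost:.2f} "
      f"+ egress ${egress_cost:.2f} = ~${total:.2f}/month")
# With these assumed rates: ~$0.52 + $0.01 + $0.03 + $2.07 = ~$2.6/month,
# dominated by egress, which is in the same ballpark as the $2 figure above.
```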
I've been looking for a service like this for a while now. The only problem I'm seeing with regards to my use case is that I need powerful GPUs for NLP inference that also scale up and down with demand. Can someone explain if this is possible with GCR, and if so, what I need to do to accomplish it?
I’m just starting to explore cheap hosting for a web app and my initial digging suggests shared php hosting (with MySQL) is promising. It seems much cheaper than ruby, node, etc. Can anyone comment if my initial hunches are correct?
Is there an alternative with similar ease of use, i.e. you just pack the files, create a Dockerfile, and push it live?<p>(For same reasons as other comments, I don't want to use another Google service for this, risk of being locked out, etc.)
What kinds of applications take no time to start up though? I'm curious about the spin-up time, 'cause my Rails applications take on the order of minutes. I guess you can schedule a keep-alive elsewhere though..
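If you do go the keep-alive route, a hypothetical minimal version is just a scheduled job that pings the service on an interval so an instance stays warm; the URL and interval below are placeholders, not anything from the article:

```python
# Hypothetical keep-alive pinger: hit the service every few minutes so the
# provider keeps at least one instance warm. Run it from any always-on box
# (or fire a single ping from a scheduler like cron / Cloud Scheduler).
import time
import urllib.request

SERVICE_URL = "https://my-side-project.example.com/healthz"  # placeholder
INTERVAL_SECONDS = 5 * 60  # ping every 5 minutes

while True:
    try:
        with urllib.request.urlopen(SERVICE_URL, timeout=10) as resp:
            print(f"keep-alive ping -> HTTP {resp.status}")
    except Exception as exc:  # a transient failure shouldn't kill the loop
        print(f"keep-alive ping failed: {exc}")
    time.sleep(INTERVAL_SECONDS)
```

The pings themselves count as (tiny) billable requests, so this doesn't eliminate costs, it just trades a few cents for avoiding the cold start.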
Can anyone shed light on how this compares with Netlify? (Pricing and tech wise).<p>Netlify has basic DB/Identity/Lambda support so I’m guessing it could replace this entirely.<p>I’m using it only to host static websites at the moment.
What I didn't see mentioned is the latency of the first request when the service is effectively scaled to zero. For getting initial traffic/customers this is very important.
Is there an easy equivalent for scheduled tasks in GCP? A background process that executes for a short period of time and then exits, only charging you for the time it ran?
This doesn't seem all that different from App Engine or any other PaaS.<p>Even the example he wrote is similar to (and probably longer than) a small guide I wrote for deploying to GAE.
Nice post! But this is a big hurdle for some, no? Are all your side projects fully stateless?<p>> As long as you have architected your application to be stateless
YES, with serverless you can really deploy your side projects at scale while paying basically nothing if they don't get visited.<p>The Serverless Framework on AWS Lambda, though, is more mature for doing that.
One of the biggest gotchas is using just one AWS Lambda function to run the entire web server instead of doing one Lambda for each endpoint.
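To make the distinction concrete, here is a hypothetical sketch (assuming API Gateway's Lambda proxy integration, where the event carries the request path and method; the routes and payloads are made up): the single-function approach routes everything itself, while the per-endpoint approach gives each route its own function.

```python
# Hypothetical single-Lambda "monolith": one function receives every request
# via API Gateway proxy integration and does its own routing on the path.
import json

def handler(event, context):
    path = event.get("path", "/")
    method = event.get("httpMethod", "GET")

    if method == "GET" and path == "/users":
        body = {"users": []}   # placeholder response
    elif method == "GET" and path == "/posts":
        body = {"posts": []}   # placeholder response
    else:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

    return {"statusCode": 200, "body": json.dumps(body)}

# The per-endpoint alternative would instead deploy separate functions, e.g.
# get_users(event, context) and get_posts(event, context), each wired to its
# own API Gateway route, so they deploy, scale, and fail independently.
```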
Out of curiosity, how are people thinking about GCP platform risk after the '2023 deadline fiasco'[1]? Is it still a good idea to use GCP at all in the aftermath of them articulating that it could experience budgets cuts or even be axed entirely (though that latter seems much less likely)?<p>[1] <a href="https://news.ycombinator.com/item?id=21815260" rel="nofollow">https://news.ycombinator.com/item?id=21815260</a>
If you already have an EC2 instance reserved on AWS for a year, you could just throw all those small projects there.<p>If they are truly stateless then the bottleneck will probably be the database, anyway.<p>For anyone starting a new app I recommend just building apps that are TRULY serverless. Then you can make them client-first, work offline, not tied to one particular domain name, support end-to-end encryption, be embarrassingly parallel and scalable, and take an <i>activist position</i> against continuing centralization.<p>A fuller exposition is here, so I don't have to write a whole mountain of text: <a href="https://qbix.com/blog/2020/01/02/the-case-for-building-client-first-web-apps/" rel="nofollow">https://qbix.com/blog/2020/01/02/the-case-for-building-clien...</a>