Efficiency is an area where a lot of cost can hide. We recently saved a lot of money by:

- Using the Kubernetes Vertical Pod Autoscaler (https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) for CPU and memory scaling, and switching to metrics like requests per second and connection count for horizontal scaling

- Collecting metrics on container CPU and memory allocation vs. utilization

- Writing scripts to auto-submit PRs to SREs with better recommended sizing based on actual usage (a rough sketch of that kind of calculation follows below)

- Tuning our VM instance sizes and autoscaler configs

A few engineers were able to save the company several times their salary with a few months of work, and they plan to 10x the savings over the next year.
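For illustration, here's a toy version of that sizing calculation, not our actual tooling: the sample data is hard-coded, the 20% headroom and the p95 threshold are assumptions, and a real script would query a metrics backend (e.g. Prometheus) and open a PR against the deployment manifests instead of printing.

```python
# Toy "recommend better CPU requests" script. The observations are
# placeholders for data that would normally come from a metrics backend.
import statistics

HEADROOM = 1.2  # assumed 20% buffer on top of observed p95 usage

# (workload, requested millicores, observed usage samples in millicores)
observations = [
    ("api", 2000, [310, 420, 380, 510, 460, 390]),
    ("worker", 1000, [880, 940, 910, 990, 950, 930]),
]

for name, requested, samples in observations:
    p95 = statistics.quantiles(samples, n=20)[18]  # 95th percentile cut point
    recommended = int(p95 * HEADROOM)
    if recommended < requested:  # only flag over-provisioned workloads
        print(f"{name}: requests {requested}m, p95 usage {p95:.0f}m "
              f"-> suggest {recommended}m")
```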
It's pretty cool that utility computing is large enough to spawn third-party companies that further increase the efficiency of the pool.

It's layers upon layers of technical progress in parallel.
I just had a conversation about this product idea a couple days ago. How long before Amazon acquires it and cripples the functionality? Could be a good exit strategy, at least.
Related: if you're looking for a service that starts/stops instances on a schedule (we find this really good for QA and development instances), check out https://www.parkmycloud.com/. You can also set an instance to "always parked" and unpark it for a certain number of hours or until a certain date/time.

(No affiliation, just a satisfied customer.)
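For anyone who'd rather roll this themselves, the schedule-based parking boils down to something like the sketch below. This is not how ParkMyCloud works internally, just the general idea; the Env tag and AWS-only scope are my assumptions. Run it from cron or a scheduled Lambda at the end of the work day, with a matching start job in the morning.

```python
# Bare-bones "park dev/QA instances on a schedule" job using boto3.
# Assumes the instances to park are tagged Env=dev or Env=qa.
import boto3

ec2 = boto3.client("ec2")

def park_instances():
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:Env", "Values": ["dev", "qa"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    ids = [inst["InstanceId"]
           for res in resp["Reservations"]
           for inst in res["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return ids

if __name__ == "__main__":
    print("parked:", park_instances())
```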
Interesting model, although I suggest looking at the GCP Cloud Run approach, which uses Knative to start and serve containers on demand.

That's the next generation of Lambda that all clouds and vendors are moving towards, and it increases developer agility with much faster cold-start times. If we could have Cloud Run today across multiple clouds and locations, with geo-load-balancing stitched together automatically, that would be valuable.
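To make that concrete: the Cloud Run/Knative contract is roughly "ship a container that listens on $PORT", and the platform scales it to zero and cold-starts it on the next request. A minimal entrypoint is just ordinary HTTP server code; nothing below is Cloud Run specific beyond reading PORT.

```python
# Minimal HTTP entrypoint of the kind Cloud Run / Knative can scale to zero.
# The platform injects PORT (defaulting here to 8080 for local runs).
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from an on-demand container\n")

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()
```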
> Hakuna Cloud is a software-as-a-service HTTPS proxy. You don't need to change existing software or infrastructure, and you don't need to install additional tools on your servers.

> Each cloud server must have an FQDN/DNS name configured as a CNAME to our load balancers.

> When your server stops receiving requests, it will be stopped. As soon as a new request arrives, Hakuna will start it.

Interesting idea. It's like a proxy that makes an instance/VM-based service act like a serverless service, without moving to containers or rewriting anything.

Seems kind of niche, but I can see the use: there are a lot of services with a time-based usage pattern (during working hours, or used interactively for a few minutes or hours sparsely through the day).

What are the cold start times like with this (at least for a typical, simple app, say ASP.NET on Windows, or something hosted via nginx on Linux)? What happens if an instance is being stopped and a new request comes in - does the request have to wait for shutdown plus startup?
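For what it's worth, my mental model of the mechanism is a TCP-level sketch like the one below: accept a connection, try the backend, and if it's down, call whatever cloud API boots the instance, wait for the port to open, then tunnel bytes. This is just my guess at the shape of it, not Hakuna's implementation; start_backend() is a placeholder, and the cold start would be roughly instance boot time plus however long the app takes to start listening.

```python
# Conceptual wake-on-request proxy: if the backend is down, boot it, wait for
# the port, then tunnel bytes both ways. start_backend() stands in for the
# cloud provider's "start instance" API call.
import socket
import threading
import time

BACKEND = ("10.0.0.5", 443)   # assumed backend address
LISTEN = ("0.0.0.0", 8443)

def start_backend():
    print("backend is down, asking the cloud API to start it...")

def connect_backend(timeout=120):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            return socket.create_connection(BACKEND, timeout=2)
        except OSError:
            time.sleep(1)
    raise TimeoutError("backend did not come up in time")

def pump(src, dst):
    # Copy bytes one way until the source closes, then tear both sockets down.
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        for s in (src, dst):
            try:
                s.shutdown(socket.SHUT_RDWR)
            except OSError:
                pass

def handle(client):
    try:
        upstream = socket.create_connection(BACKEND, timeout=2)
    except OSError:
        start_backend()              # cold start path
        upstream = connect_backend()
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

with socket.create_server(LISTEN) as server:
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```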
Now that we have CGI scripts in the cloud (lambdas), there *ought* to appear an implementation of inetd, too!

Jokes aside, I wonder when cloud providers will add something like this as a native feature.
Interesting. Why proxying, though, rather than monitoring DNS queries on the CNAME and updating it to point to the right IP once the server is live? (This could maybe help with the 10 GB base limit + $0.08/GB?)

Not trying to be an armchair coach, just trying to understand the architecture decisions and trade-offs that I must have missed.
It seems like it does exactly what (or a subset of what) Google Cloud Run already does: just shove an application into a container and scale it up/down depending on use. Other cloud providers probably have this too. So is the value add that this is less expensive, or what?
From the FAQ:

> The HTTPS trigger is intercepting all my traffic?

> No, your data are safe if your server support HTTPS protocol. All the data exchanged between your server and your clients is encrypted and not accessible by us.

Unless there's an IP allocated to each user, I don't think this is accurate. With SSL, the HTTP headers are encrypted, so there would be no way to know where to route the request without first decrypting the data, and thus having access to the data.
I had long wondered if it would be possible to have a custom autoscaler that just stopped/started instances rather than terminating and re-creating them, in order to respond to load increases more quickly than Amazon's autoscaling groups. You still pay for the EBS volumes even when an instance is stopped, and deploys involve briefly starting all of the stopped instances, but EBS is a fraction of the overall spend, and maybe in some cases the complexity is worth it?
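The start/stop part of that is pretty small with boto3; roughly the sketch below, where the Pool tag, the warm-pool convention, and the load signal that triggers it are all assumptions. The real work is deciding when to call it and how to register the started instances back into the load balancer.

```python
# Sketch of a "start pre-provisioned stopped instances instead of launching
# new ones" scaler. Assumes the warm pool is tagged Pool=warm.
import boto3

ec2 = boto3.client("ec2")

def stopped_warm_instances():
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:Pool", "Values": ["warm"]},
        {"Name": "instance-state-name", "Values": ["stopped"]},
    ])
    return [inst["InstanceId"]
            for res in resp["Reservations"]
            for inst in res["Instances"]]

def scale_up(count):
    ids = stopped_warm_instances()[:count]
    if ids:
        # Starting an already-provisioned instance skips the AMI launch and
        # most of the bootstrap, so it responds to load faster than a fresh
        # launch from an autoscaling group.
        ec2.start_instances(InstanceIds=ids)
    return ids
```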
Interesting idea. It almost brings the idea of Lambda to VMs, doesn't it?

Also, how does Hakuna work with DO? I thought DO still charges when VMs are powered off?
So is this the bring-your-own-server version of “inactive app hibernation” that you see in the free tiers of PaaS providers like Heroku? If so, that’s neat!
The image on the front page reminds me of Drupal's logo [0].
Not necessarily an infringement, just saying.

[0]: https://www.drupal.org/about/media-kit/logos
[Bikeshedding]

> Why Hakunacloud?

Having #000 headings on a blue background, and #fff text without shadows or borders on that same background, looks really amateurish.

And the "read more!" is blue text on a blue background. Barely readable.
> You don't need to change existing software or infrastructure, and you don't need to install additional tools on your servers.

then

> Install Hakuna CLI

and

> Update the DNS

That certainly sounds like installing things and making changes to your infrastructure...

It sounds like a cool idea for sure, and it could be really helpful for a lot of companies, but this claim seems like an outright lie.