With Fargate Savings Plans and Spot, running workloads on Fargate is getting substantially cheaper and, with the exception of extremely bursty workloads, much more consistently performant than Lambda. Paying to provision Lambda capacity as well as paying for the compute time on that capacity makes Fargate even more appealing for high-volume workloads.<p>The new Lambda pricing page ("Example 2") shows a 100M invocation/month workload with provisioned capacity costing $542/month. For that same cost you could run ~61 Fargate tasks (0.25 vCPU, 0.5GB RAM) 24/7, or ~160 tasks with Spot. For context, I've run a simple NodeJS workload on both Lambda and Fargate and was able to handle 100M events/mo with just 3 tasks.<p>Serverless developers take note: it's time to learn Docker and how to write a task-definition.json.
This feels like a step backwards to me, never mind how necessary it may be. The magic was paying only for what you use on super bursty workloads.<p>Now this is like throwing your hands up and saying users' bursts are too big for AWS.
AWS 2006: "Run your workloads on our EC2 instances in the cloud 24/7."<p>AWS 2014: "Run your workloads on serverless so you don't have to deal with those pesky EC2 instances 24/7 anymore."<p>AWS 2019: "Click a checkbox and you can have your serverless workloads get dedicated EC2 instances 24/7!"
Hey all, I lead developer advocacy for serverless at AWS and was part of this product launch since we started thinking about it (quite some time ago, I should say). I'm running around re:Invent this week, but will try to pop in and answer any questions I can.<p>Provisioned Concurrency (PC) is an interesting feature for us, as we've gotten so much feedback over the years about the pain point of the service overhead before your code executes (the cold start). With PC we basically remove most of that service overhead by pre-spinning up execution environments.<p>This feature is really for folks with interactive, super latency-sensitive workloads. It will bring any overhead from our side down to sub 100ms. Realistically, not every workload needs this, so don't feel like you <i>need</i> this to have well-performing functions. There are still a lot of things you need to do in your code, as well as knobs like memory, which impact function perf.<p>- Chris Munns - <a href="https://twitter.com/chrismunns" rel="nofollow">https://twitter.com/chrismunns</a>
I am a huge fan of serverless, and AWS as well.<p>I also find it deeply ironic that their solution to cold starts is to keep the function running 24/7...<p>Could I include openssh and Apache in my Lambda instance? Maybe run a Minecraft server? :P
Am I misunderstanding something here? Based on the AWS calculations on the Lambda pricing page, a single 256MB Lambda would incur a cost of $2.7902232 per month, using "provisionedConcurrency: 1". Pushing it to 3008MB, to get access to more processing power, makes that go up to $32.78 per month (EU London region).
Compare that to the standard way of warming it up by hitting the endpoint once every 5 minutes, which comes out to 8,640 calls per month and costs next to nothing.<p>Unless I am terribly mistaken, it doesn't seem like letting AWS handle this instead of doing it yourself (warmup plugin, cron job, etc.) is worth the cost.
As a seasoned AWS developer, I love this feature. However, I wonder how the increasing complexity of AWS affects new devs as they try to grok the offered services. AWS typically does a pretty good job hiding advanced features from beginners, but I wonder how long they can do that.
Lambda has always been the most expensive compute you can buy on AWS -- you could think of that as the premium for being the most "elastic". So this feature is about giving away some of that elasticity for (a) performance predictability and (b) a bit of total cost savings. Note that you can still happily "burst" into exactly as much concurrency as you could before; you'll just have cold starts.<p>People used to write cron jobs to keep their functions warm, which besides being ugly didn't even work well -- you could at best keep one instance warm with infrequent pinging, i.e. a provisioned concurrency of 1. So this feature addresses that use case in a much more systematic way.<p>There's some precedent for features like this -- provisioned IOPS and reserved instances come to mind. In both cases you trade off elasticity and get some predictability in return (performance in one case, cost in the other).
They really went out of their way to avoid using the word "server" in that article.<p>I've always hated the term "serverless", but its usage in this context is even more ridiculous.
So excited for this, between this and the removal of VPC cold start issues recently, avoiding Lambda for APIs because of latency seems to be a thing of the past.
Sorry for the stupid question, I genuinely want to know: how does this differ from firing up your function with an additional call every, idk, 5 mins? Wouldn’t it be cheaper and easier?
This is relatively easy to do with OpenFaaS and Knative on Kubernetes. If we're paying for idle, why not take a look at EKS on Fargate?<p><a href="https://www.openfaas.com" rel="nofollow">https://www.openfaas.com</a>
Request for anyone on the Lambda team who happens to read this: your API doesn’t appear to offer a way to retrieve the “last modified by” user when grabbing function metadata.<p>Very unlike other AWS APIs and very annoying.
I think this is a really good feature and has many use cases.
I also anticipate that many developers who shouldn't use Lambdas are going to use Lambdas because of provisioned concurrency.
Provisioned Concurrency is now supported in the Serverless Framework - <a href="https://github.com/serverless/serverless" rel="nofollow">https://github.com/serverless/serverless</a>
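For anyone curious what that looks like, here's a minimal sketch of a serverless.yml using the new setting (the service and function names are made up; `provisionedConcurrency` is the relevant key):

```yaml
# Hypothetical serverless.yml fragment (names are illustrative).
service: my-service

provider:
  name: aws
  runtime: nodejs12.x

functions:
  api:
    handler: handler.main
    provisionedConcurrency: 5  # keep 5 execution environments warm
```

Note that Provisioned Concurrency attaches to a published version or alias, not to $LATEST; the framework handles the version publishing for you on deploy.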
I'm still frustrated that Lambda can't have alias-specific environment variables. Aren't aliases supposed to be used for staging function versions through a release pipeline?