科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global tech news and discussion.

Resources: HackerNews API · Original Hacker News · Next.js

© 2025 科技回声 (Tech Echo). All rights reserved.

AWS Introducing Provisioned Concurrency for Lambda Functions

153 points · by marvinpinto · over 5 years ago

20 comments

reilly3000 · over 5 years ago
With Fargate Savings Plans and Spot Instances, the cost of running workloads on Fargate is getting substantially cheaper, and with the exception of extremely bursty workloads, much more consistently performant vs Lambda. The cost of provisioning Lambda capacity, as well as paying for the compute time on that capacity, means Fargate is even more appealing for high-volume workloads.

The new pricing page for Lambda ("Example 2") shows the cost for a 100M invocation/month workload with provisioned capacity at $542/month. For that same cost you could run ~61 Fargate instances (0.25 CPU, 0.5GB RAM) 24/7, or ~160 instances with Spot. For context, I have run a simple NodeJS workload on both Lambda and Fargate, and was able to handle 100M events/mo with just 3 instances.

Serverless developers take note: it's time to learn Docker and how to write a task-definition.json.
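The back-of-the-envelope math in reilly3000's comment can be sanity-checked. A minimal sketch, assuming the approximate 2019 us-east-1 Fargate on-demand rates (~$0.04048 per vCPU-hour, ~$0.004445 per GB-hour — illustrative figures, check current pricing) and a 730-hour month:

```python
# Sanity check of the "~61 Fargate instances for $542/month" claim,
# using assumed 2019 us-east-1 Fargate on-demand rates.
VCPU_HOUR = 0.04048      # USD per vCPU-hour (assumed)
GB_HOUR = 0.004445       # USD per GB-hour (assumed)
HOURS_PER_MONTH = 730    # ~24 * 365 / 12

def fargate_monthly_cost(vcpu: float, gb: float) -> float:
    """Monthly cost of one always-on Fargate task."""
    return (vcpu * VCPU_HOUR + gb * GB_HOUR) * HOURS_PER_MONTH

per_task = fargate_monthly_cost(0.25, 0.5)  # smallest task size
budget = 542.0                              # quoted Lambda PC cost
print(round(per_task, 2))                   # ~$9/month per task
print(int(budget / per_task))               # ~60 tasks for the same spend
```

The result lands at roughly 60 always-on tasks, consistent with the "~61 instances" figure in the comment.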
etaioinshrdlu · over 5 years ago
This feels like a step backwards to me, never mind how necessary it may be. The magic was paying only for what you use on super-bursty workloads.

Now this is like throwing your hands up and saying the users' bursts are too big for AWS.
scottndecker · over 5 years ago
AWS 2006: "Run your workloads on our EC2 instances in the cloud 24/7."

AWS 2014: "Run your workloads on serverless so you don't have to deal with those pesky EC2 instances 24/7 anymore."

AWS 2019: "Click a checkbox and you can have your serverless workloads get dedicated EC2 instances 24/7!"
munns · over 5 years ago
Hey all, I lead developer advocacy for serverless at AWS and was part of this product launch since we started thinking about it (quite some time ago, I should say). I'm running around re:Invent this week, but will try and pop in and answer any questions I can.

Provisioned Concurrency (PC) is an interesting feature for us, as we've gotten so much feedback over the years about the pain point of the service overhead leading up to your code execution (the cold start). With PC we basically end up removing most of that service overhead by pre-spinning up execution environments.

This feature is really for folks with interactive, super latency-sensitive workloads. This will bring any overhead from our side down to sub-100ms. Realistically not every workload needs this, so don't feel like you *need* this to have well-performing functions. There are still a lot of things you need to do in your code, as well as knobs like memory, which impact function perf.

- Chris Munns - https://twitter.com/chrismunns
nexuist · over 5 years ago
I am a huge fan of serverless, and AWS as well.

I also find it deeply ironic that their solution to cold starts is to keep the function running 24/7...

Could I include openssh and Apache in my Lambda instance? Maybe run a Minecraft server? :P
leovingi · over 5 years ago
Am I misunderstanding something here? Based on the AWS calculations on the Lambda pricing page, a single 256MB Lambda would incur a cost of $2.7902232 per month, using "provisionedConcurrency: 1". Pushing it to 3008MB, to get access to more processing power, makes that go up to $32.78 per month (EU London region). Compare that to the standard way of warming it up by hitting the endpoint once every 5 minutes, which comes out to 8,640 calls per month and costs next to nothing.

Unless I am terribly mistaken, it doesn't seem like letting AWS handle this, rather than doing it in code (warmup plugin, cron job, etc.), is worth the cost.
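leovingi's figures can be reproduced with simple arithmetic. A minimal sketch, assuming a Provisioned Concurrency rate of roughly $0.0000041667 per GB-second (the rate implied by the quoted totals; the actual per-region figure may differ) and a 31-day month:

```python
# Reproduce the quoted Provisioned Concurrency monthly costs.
# Rate and month length are assumptions inferred from the totals above.
PC_RATE_PER_GB_SECOND = 0.0000041667  # USD (assumed ~2019 rate)
SECONDS_PER_MONTH = 31 * 24 * 3600    # 2,678,400 s

def pc_monthly_cost(memory_mb: int, concurrency: int = 1) -> float:
    """Monthly cost of keeping `concurrency` environments provisioned."""
    gb = memory_mb / 1024
    return gb * concurrency * SECONDS_PER_MONTH * PC_RATE_PER_GB_SECOND

print(round(pc_monthly_cost(256), 2))   # ~2.79, matching the comment
print(round(pc_monthly_cost(3008), 2))  # ~32.78, matching the comment

# The ping-warming alternative: one call every 5 minutes, 30-day month
pings_per_month = (60 // 5) * 24 * 30
print(pings_per_month)  # 8640
```

Both quoted dollar figures and the 8,640-calls-per-month count fall out of this model, which supports the comment's point: at a provisioned concurrency of 1, a cron-style warmer is far cheaper.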
jugg1es · over 5 years ago
As a seasoned AWS developer, I love this feature. However, I wonder how the increasing complexity of AWS affects new devs as they try to grok the offered services. AWS typically does a pretty good job of hiding advanced features from beginners, but I wonder how long they can keep that up.
soamv · over 5 years ago
Lambda has always been the most expensive compute you can buy on AWS -- you could think of that as the premium for being the most "elastic". So this feature is about giving away some of that elasticity for (a) performance predictability and (b) a bit of total cost savings. Note that you can still happily "burst" into exactly as much concurrency as you could before; you'll just have cold starts.

People used to write cron jobs to keep their functions warm, which, besides being ugly, didn't even work well -- you could at best keep one instance warm with infrequent pinging, i.e. a provisioned concurrency of 1. So this feature addresses that use case in a much more systematic way.

There's some precedent for features like this -- provisioned IOPS and reserved instances come to mind. In both cases you trade off elasticity and get some predictability in return (performance in one case, cost in the other).
hn_throwaway_99 · over 5 years ago
This is a big deal. Cold starts were always the huge Achilles' heel of using Lambdas for interactive APIs. Kudos for this.
peterkelly · over 5 years ago
They really went out of their way to avoid using the word "server" in that article.

I've always hated the term "serverless", but its usage in this context is even more ridiculous.
tybit · over 5 years ago
So excited for this. Between this and the recent removal of the VPC cold-start issue, avoiding Lambda for APIs because of latency seems to be a thing of the past.
gcatalfamo · over 5 years ago
Sorry for the stupid question, I genuinely want to know: how does this differ from firing up your function with an additional call every, idk, 5 mins? Wouldn’t it be cheaper and easier?
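The DIY alternative being asked about is typically a scheduled CloudWatch Events rule that invokes the function every few minutes, with the handler short-circuiting on those pings. A minimal sketch (the event-detection check is a common heuristic, not an official contract):

```python
import json

def handler(event, context):
    # CloudWatch scheduled events carry source "aws.events";
    # treat them as warm-up pings and skip the real work.
    if event.get("source") == "aws.events":
        return {"warmed": True}

    # ... normal request handling goes here ...
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello"}),
    }
```

The catch, noted elsewhere in the thread: each ping keeps at most one execution environment warm, so this only approximates a provisioned concurrency of 1 and does nothing for concurrent bursts.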
alexellisuk · over 5 years ago
This is relatively easy to do with OpenFaaS and Knative on Kubernetes. If we're paying for idle, why not take a look at EKS on Fargate?

https://www.openfaas.com
macintux · over 5 years ago
Request for anyone on the Lambda team who happens to read this: your API doesn’t appear to offer a way to retrieve the “last modified by” user when grabbing function metadata.<p>Very unlike other AWS APIs and very annoying.
stunt · over 5 years ago
I think this is a really good feature and it has many use cases. I also anticipate that many developers who shouldn't use Lambdas are going to use Lambdas because of provisioned concurrency.
ac360 · over 5 years ago
Provisioned Concurrency is now supported in the Serverless Framework - https://github.com/serverless/serverless
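For reference, the Serverless Framework exposes this as a per-function setting. A hedged sketch of a `serverless.yml` fragment (service and function names are placeholders; consult the framework docs for your version):

```yaml
# serverless.yml (fragment) -- illustrative only
service: my-service

provider:
  name: aws
  runtime: nodejs12.x

functions:
  api:
    handler: handler.main
    provisionedConcurrency: 1  # pre-warmed execution environments
```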
NewLogic · over 5 years ago
I'm still frustrated that Lambda can't have alias-specific environment variables. Aren't aliases supposed to be used for staging function versions through a release pipeline?
k__ · over 5 years ago
At least if you build APIs, you can use VTL (API Gateway mapping templates) and avoid Lambda and its cold starts completely.
choukri060 · over 5 years ago
Ok
tkyjonathan · over 5 years ago
I am not even sure that the developers around me know how to do concurrency since moving to microservices.