Comparing Serverless Performance for CPU Bound Tasks

127 points · by bosdev · almost 7 years ago

12 comments

cremp · almost 7 years ago

Each and every one of these posts from Cloudflare is direct targeting and completely biased. It should be noted that their response times mostly reflect being far fewer hops away. Unless they can run in the same DC, or at least make the RTT fair, their "webpage response" metric is utterly useless.

Notice how they admit they don't know how Lambda really works: they switch between Lambda@Edge and region-based Lambdas and aren't consistent about it.

Java Lambdas have horrible cold start times, and I'm not seeing any of that reflected anywhere in their report.

> Our Lambda is deployed with the default 128MB of memory behind an API Gateway in us-east-1

Well, of course that Lambda is slower; it's going through API Gateway, which does authentication processing as well.

All in all, these blog posts are turning me off Cloudflare entirely, because they never even say "yeah, AWS has us beat in this case."

sreque · almost 7 years ago

Assuming the author's tests are single-threaded, I'm pretty sure 1024 MB doesn't give you a full CPU core on Lambda. I could be wrong, though; I haven't paid attention to Lambda in a long time. Last I remember, it was 1.5 GB that gave you a full core. That alone makes the comparison between a mid-range server and Lambda unfair, not to mention the differences between language runtimes.

That said, if you are using Lambda and expecting not to pay extra, you have somehow been misled. Lambda is definitely more expensive per cycle than managing your own instances, and I doubt that will change any time soon.
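
To make the "more expensive per cycle" point concrete, here is a rough per-vCPU-hour comparison. The Lambda rate, the memory size at which a function gets roughly a full vCPU, and the EC2 price are approximations and should be treated as assumptions, not quoted figures.

```python
# Back-of-the-envelope comparison; all constants below are assumptions.
LAMBDA_GB_SECOND = 0.0000166667   # USD per GB-second of Lambda compute (assumed)
FULL_VCPU_MEMORY_GB = 1.75        # memory at which Lambda grants ~1 full vCPU (assumed)
EC2_M5_LARGE_HOUR = 0.096         # USD/hour for an m5.large, 2 vCPU / 8 GB (assumed)

lambda_vcpu_hour = LAMBDA_GB_SECOND * FULL_VCPU_MEMORY_GB * 3600
ec2_vcpu_hour = EC2_M5_LARGE_HOUR / 2

print(f"Lambda: ~${lambda_vcpu_hour:.3f} per vCPU-hour")  # roughly $0.105
print(f"EC2:    ~${ec2_vcpu_hour:.3f} per vCPU-hour")     # roughly $0.048
```

Under these assumed rates, Lambda comes out at roughly twice the per-vCPU-hour price of the on-demand instance, before counting per-request charges.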

eximius · almost 7 years ago

Is it just me, or does anyone else find the documentation for AWS and related services nearly incomprehensible? Maybe it's just too "enterprise-y" and I haven't spent enough time in that environment, but it feels like all the information is squirreled away across 10,000 different pages and that I'd have to read *all* of it just to get the basics.

Also, does anyone know if there is an API for AWS to dynamically create, load, and launch EC2 and/or Lambda instances (i.e., boto - though I'm open to suggestions for something else) AND, preferably, have separate billing for each thing? Do I need multiple accounts to do separate billing? Something about IAM roles...?
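
For the API part of the question, boto3 does expose calls to launch EC2 instances and create Lambda functions; a hedged sketch of their general shape is below. The AMI ID, role ARN, bucket, and names are placeholders, and the tagging shown is only the usual cost-allocation approach within one account (fully separate bills generally mean separate accounts, e.g. under AWS Organizations).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
lam = boto3.client("lambda", region_name="us-east-1")

# Launch an EC2 instance (placeholder AMI ID).
ec2.run_instances(
    ImageId="ami-00000000000000000",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        # Cost-allocation tags let you split costs within one account's bill.
        "Tags": [{"Key": "project", "Value": "experiment-a"}],
    }],
)

# Create a Lambda function from a prebuilt deployment package in S3
# (placeholder role ARN, bucket, and runtime choice).
lam.create_function(
    FunctionName="experiment-a-fn",
    Runtime="python3.9",
    Role="arn:aws:iam::123456789012:role/lambda-exec-role",
    Handler="handler.main",
    Code={"S3Bucket": "my-deploy-bucket", "S3Key": "fn.zip"},
    Tags={"project": "experiment-a"},
)
```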

djhworld · almost 7 years ago

I remember a few years ago we tried to implement a scheduled Lambda that needed to download a bunch of files from an S3 prefix, perform some aggregation on the data, and then write the result to a database.

Our EC2 prototype of this on one of the m3-class instances could do the work in about 2 minutes, which seemed like a perfect opportunity to port to Lambda.

Even at the top memory setting at the time (1536 MB), the job just couldn't finish, timing out after 5 minutes. The code was multithreaded to parallelise the downloads, but no matter how much we tweaked it, the Lambda would just never complete in time.

As you don't have visibility into the internals, we didn't know whether this was due to CPU constraints (decompressing lots of GZIP streams), network saturation (downloading files from S3), or something else.

In the end we gave up. We didn't have the time or resources to keep digging, and just pinned the problem on the use case being a poor fit for what Lambda is designed for.

That's not an indictment of Lambda; we use it in lots of places, with a lot of critical-path code (ETL pipelines).
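
For reference, the general shape of that kind of job — list keys under a prefix, pull them down in parallel, decompress, then aggregate — is sketched below with boto3. The bucket and prefix are placeholders and the aggregation/database step is reduced to a byte count; this is not the commenter's actual code.

```python
import concurrent.futures
import gzip

import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"    # placeholder
PREFIX = "exports/2018-07/"  # placeholder

def list_keys(bucket, prefix):
    # Paginate through every object under the prefix.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            yield obj["Key"]

def fetch_and_decompress(key):
    # Network-bound download followed by the CPU-bound GZIP work.
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    return gzip.decompress(body)

def run():
    keys = list(list_keys(BUCKET, PREFIX))
    total_bytes = 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
        for data in pool.map(fetch_and_decompress, keys):
            total_bytes += len(data)  # stand-in for the real aggregation step
    return total_bytes
```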

wolf550e · almost 7 years ago

I'll copy from Twitter[1]:

"@zackbloom @jgrahamc I can't find it in the docs on AWS site, but I've read that AWS Lambda scales CPU linearly until 1.5GB, then gives you 2nd thread/core and again scales linearly until 3GB. If your PBKDF2 was single threaded, Lambda bigger than 1.5GB is wasted."

11:12 AM - 9 Jul 2018

Reply by the blog post author[2]:

"Replying to @ZTarantov @Cloudflare @jgrahamc I can't think of a way to test that within the Node code. The only option seems to be to update the C++ version (or some other language) to use multiple threads."

5:16 PM - 9 Jul 2018

1 - https://twitter.com/ZTarantov/status/1016384547364229120
2 - https://twitter.com/zackbloom/status/1016476314864312321
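
One way to probe this empirically (a separate sketch, not the post's Node benchmark): time PBKDF2 runs sequentially versus in parallel worker processes at a given memory setting. If the parallel run isn't close to twice as fast, a single-threaded workload isn't benefiting from the extra vCPU. The iteration and worker counts below are arbitrary.

```python
import hashlib
import time
from multiprocessing import Process

ITERATIONS = 200_000  # arbitrary amount of PBKDF2 work per task

def work():
    hashlib.pbkdf2_hmac("sha256", b"password", b"salt", ITERATIONS)

def timed(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def sequential(n=2):
    for _ in range(n):
        work()

def parallel(n=2):
    procs = [Process(target=work) for _ in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

if __name__ == "__main__":
    # Roughly 2x speedup in the parallel case suggests a second usable core.
    print(f"sequential: {timed(sequential):.2f}s")
    print(f"parallel:   {timed(parallel):.2f}s")
```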

handruin · almost 7 years ago

I've recently been exploring AWS Lambda in a stack that combined API Gateway with Python Flask running under Lambda, for a task I was working on. I deployed it using Zappa, and its purpose was to be a simple REST frontend for transferring files to S3.

After experimenting with uploads from Lambda to S3, I noticed that the time to upload a tiny 4MB file changed dramatically when I reconfigured the Lambda function's memory size. At 500MB it took 16 seconds to upload the file, which is pretty slow. Once I got past roughly 1500MB of memory, the performance no longer improved, and the best I could get was about 8 seconds for the same payload.

None of my tests were controlled or rigorous in any way, so take them with a grain of salt... it just surprised me how much the speed changed with the memory allocation. I'm new to Lambda, so I wasn't aware that memory size is tied to other resource performance. I'm curious whether this goes beyond CPU and also changes network bandwidth/performance? The Lambda I deployed did not write data to the temp location that is provided; it streamed directly to S3.

I've since moved on from this implementation, and now my Lambda function performs a much simpler task: generating pre-signed S3 URLs. I have noticed something else about Lambda that bothers me a little. If my function remains idle for some period of time and then I invoke it, it takes around 800ms-1000ms to execute. If I perform numerous calls right after, I get billed the minimum of 100ms because the execution time is under that. The part that bothers me is that I'm being charged a one-time cost that's about 8x-10x the normal amount because my function has gone idle and cold. I'll have to keep reading to see if this is expected. It's not a huge amount in terms of cost, but it's surprising that I'm paying for AWS to wake up from whatever state it is in.
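
For context, the pre-signed-URL approach the commenter landed on is small enough to show. A minimal sketch of such a handler (assuming an API Gateway proxy-style event and a placeholder bucket name, not the commenter's actual setup) might look like this:

```python
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "example-upload-bucket"  # placeholder

def handler(event, context):
    # Assumes an API Gateway proxy integration event shape.
    params = event.get("queryStringParameters") or {}
    key = params.get("key", "upload.bin")
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=300,  # URL validity in seconds
    )
    return {"statusCode": 200, "body": json.dumps({"url": url})}
```

The client then PUTs the file directly to S3 using the returned URL, so the Lambda itself never handles the payload.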

lucb1e · almost 7 years ago

This headline is weird. I thought it was going to be about doing computations client side, since it says "serverless", but what they mean is "without a dedicated instance running all the time" (about halfway through the article, I figured out what "lambdas" are in this context).

So if this much effort goes into calculating the cost of PBKDF2 on servers (ahem, "serverless"), why not move it to the client side? I like client-side hashing a lot because it transparently shows what security you apply, and any passive or after-the-fact attacks (think 1024-bit decryption, which will slowly move from "impossible for small governments" to "just very slow") are instantly mitigated. The server should still apply a single round of its favorite hash function (like SHA-2) with a secret value, so an attacker will not be able to log in with stolen database credentials.

But that's probably too cheap and transparent when you can also do it with a Lambda™.
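
The scheme described — expensive stretching on the client, one cheap keyed hash on the server — can be sketched roughly as follows. The client side is shown in Python only for brevity (in practice it would run in the browser, e.g. via WebCrypto), and the parameters and salt handling are illustrative, not a vetted design.

```python
import hashlib
import hmac
import os

SERVER_SECRET = os.urandom(32)  # stand-in; in practice a long-lived server-held secret

def client_side_hash(password: str, salt: bytes) -> bytes:
    # Expensive key stretching happens on the client.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def server_side_record(client_hash: bytes) -> bytes:
    # One cheap keyed round on the server ("a single round of their favorite
    # hash function with a secret value"), so a stolen database dump alone
    # cannot be replayed as a login credential.
    return hmac.new(SERVER_SECRET, client_hash, hashlib.sha256).digest()

# Registration flow sketch:
salt = os.urandom(16)
stored_value = server_side_record(client_side_hash("correct horse battery staple", salt))
```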

com2kid · almost 7 years ago

I'd love to see an honest comparison across other providers, throwing in Google's Firebase Functions and Azure Functions as well.

sudhirj · almost 7 years ago

@zackbloom, you've made your point already, but remember that these posts represent a moving target. AWS could crush CF's performance on pretty much all of these numbers with a few configuration changes, which they might well do. And you're not acknowledging the rest of the Lambda moat, like SQS integration, free S3 bandwidth, etc.

Workers has a clear advantage over Lambda@Edge, but not because of the current resource configuration differences between the two products - the advantage is your choice of V8 and adoption of the Service Worker API standard, which brilliantly outshines the Lambda@Edge API choices. Harp on that; most of what you're talking about now will likely be invalidated by the next re:Invent, and they'll make it a point to tell the world.

CupOfJava · almost 7 years ago

The x-axis is the percentile of requests with that latency or lower. You have to read the article to figure that out. Label all your axes!

dsl · almost 7 years ago

The question is: what does Amazon know now that Cloudflare will figure out in a year or so?

microcolonel · almost 7 years ago

Good work at CloudFlare. I personally figure that Amazon ought to be doing more interesting things with Lambda, like maybe starting the workers from a memory snapshot.