TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

12 requests per second: A realistic look at Python web frameworks

503 points by gilad over 4 years ago

35 comments

imperio59 over 4 years ago
My experience doing perf optimizations in real-world systems, with many, many people writing code against the same app, is that a lot of inefficiencies happen due to over-fetching data, naive use of the ORM without understanding the underlying cost of the query, and a lack of actual profiling to find where the actual bottlenecks are (usually people writing dumb code without realizing it's expensive).

Sure, the framework matters at very large scale, and the benefits from optimizing the framework become large when you're doing millions of requests a second over many thousands of servers, because it can help reduce the baseline cost of running the service.

But I agree with the author's main point, which seems to be that framework performance is pretty meaningless when comparing frameworks if you're just starting on a new project. Focus on making a product people wanna actually use first. If you're lucky enough to get to scale, you can worry about optimizing it then.
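The over-fetching pattern described here is usually the classic N+1 query problem. A minimal stdlib sketch (the schema and data are invented for illustration) contrasts what a naive ORM loop tends to issue against a single join:

```python
import sqlite3

# Toy schema for illustration only: authors and their posts.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'intro'), (2, 1, 'update'), (3, 2, 'hello');
""")

def titles_n_plus_one():
    # What a naive ORM loop often does: 1 query for the authors,
    # then 1 extra query per author (N+1 round-trips total).
    out = {}
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        rows = conn.execute(
            "SELECT title FROM posts WHERE author_id = ?", (author_id,)
        ).fetchall()
        out[name] = [title for (title,) in rows]
    return out

def titles_single_query():
    # The same result in a single round-trip with a join.
    out = {}
    query = """
        SELECT a.name, p.title
        FROM authors a JOIN posts p ON p.author_id = a.id
        ORDER BY p.id
    """
    for name, title in conn.execute(query):
        out.setdefault(name, []).append(title)
    return out

assert titles_n_plus_one() == titles_single_query()
```

In SQLite the cost difference is tiny, but against a networked database each extra round-trip adds real latency, which is exactly what profiling tends to surface.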
sriku over 4 years ago
A humble request to folks making benchmark or other graphs: please understand that thin coloured lines are not easy to visually parse, even for folks like me who aren't totally colour blind but have partial red-green colour blindness. At the least, the lines can be made thicker so it is easier to make out the colours. Even better, label the lines with an arrow and what they represent.
polyrand over 4 years ago
Related to ORMs/queries/performance, I have found the following combination really good:

* aiosql [0] to write raw SQL queries and have them available as Python functions (discussed in [1])

* asyncpg [2] if you are using Postgres

* Map asyncpg/aiosql results to Pydantic [3] models

* FastAPI [4]

Pydantic models become the "source of truth" inside the app; they are designed as a copy of the DB schema, and then functions receive and return Pydantic models in most cases.

This stack also makes me think better about my queries and the DB design. I try to make sure each endpoint makes only a couple of queries. Each query may have multiple CTEs, but it's still only a single round-trip. That also makes you think about what to prefetch or not; maybe I want to also get the data to return if the request is OK and avoid another query.

[0] https://github.com/nackjicholson/aiosql [1] https://news.ycombinator.com/item?id=24130712 [2] https://github.com/MagicStack/asyncpg [3] https://pydantic-docs.helpmanual.io/ [4] https://fastapi.tiangolo.com/
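The core idea of this stack (raw SQL kept in one named place, rows mapped onto typed models) can be sketched with the stdlib alone. Here a dict of query strings stands in for an aiosql query file, a dataclass stands in for a Pydantic model, and the schema and names are invented for illustration:

```python
import sqlite3
from dataclasses import dataclass
from typing import Optional

# Raw SQL lives in one place, the way an aiosql query file would hold it.
# Query names and schema here are made up for this sketch.
QUERIES = {
    "get_user_by_id": "SELECT id, email FROM users WHERE id = ?",
}

@dataclass
class User:
    # Stand-in for a Pydantic model mirroring the DB schema.
    id: int
    email: str

def get_user_by_id(conn: sqlite3.Connection, user_id: int) -> Optional[User]:
    # One named query, one round-trip, one typed result.
    row = conn.execute(QUERIES["get_user_by_id"], (user_id,)).fetchone()
    return User(*row) if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com')")

user = get_user_by_id(conn, 1)
assert user == User(id=1, email="a@example.com")
```

In the real stack the connection would be an asyncpg pool, the functions would be `async`, and the dataclass would be a Pydantic `BaseModel` that FastAPI serializes directly.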
ramraj07 over 4 years ago
Don't forget that you're paying a huge price using the SQLAlchemy ORM: https://docs.sqlalchemy.org/en/13/faq/performance.html

If I know an endpoint is going to be hit hard, I forgo the ORM (except maybe to get the table name from the model object so some soul can trace its usage here in the future) and directly do an engine.execute(<raw query>). Makes a huge difference. The next optimization I do is create stored procedures on the database. Only then do I start thinking about changing the framework itself.

For folks like me who want to get prototypes off the ground in hours, Flask and FastAPI are a godsend, and if that means I have to worry about serving thousands of requests a second soon, that's a happy problem for sure.
tnash over 4 years ago
Use of ORMs is often a performance choke point. Raw DB queries are often much, much faster. Almost always, the more you abstract, the worse you perform. It's great as a developer but not so great as a user.
throwdbaaway over 4 years ago
Good article, but I can't help but notice a gaping hole in the benchmark: why was there no attempt to run gunicorn in multi-threaded mode?

The article links to https://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/, but fails to mention the key takeaway from that piece:

> threaded code got the job done much faster than asyncio in every case
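For readers unfamiliar with the distinction, a sketch of the two gunicorn modes being compared; the module path `myapp:app` and the worker/thread counts are placeholders:

```shell
# Process-only mode (roughly what the article benchmarked):
# 4 worker processes, each handling one request at a time.
gunicorn --workers 4 myapp:app

# Multi-threaded mode the commenter is asking about:
# passing --threads switches to the gthread worker class, so each
# of the 4 worker processes serves requests from a pool of 8 threads.
gunicorn --workers 4 --threads 8 myapp:app
```

Threads let a worker keep serving other requests while one request blocks on the database, without the code changes that asyncio requires.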
Laminary over 4 years ago
In my benchmark testing, SSL appears to be the bottleneck; e.g., Apache vs. Nginx does not really matter. I assume the benchmarks above 10,000 RPS are using plain HTTP rather than SSL? How are people doing benchmarks at 10k-100k RPS?
qeternity over 4 years ago
As a Django shop, we've always hoped PyPy would one day be suitable for our production deployments, but in the end, due to various issues, we were never able to make the switch.

And then Pyston was re-released... and it changed everything. It was drop-in compatible for us, and we saw a 50% drop in latencies.

Source availability aside, I suggest anyone running CPython in prod take a look.
jaimex2 over 4 years ago
When you start hitting bottlenecks in your Python web framework, it's probably time to switch to a faster language, not to another framework in Python.

You're probably done with rapid prototyping by that point anyway.
ximm over 4 years ago
Maybe I am missing something, but why wasn't Sanic tested with PyPy? I expect that this combination would outperform everything else.
cryptos over 4 years ago
Why Python at all? About 10 years ago I liked Python a lot (and I still like it in principle) and felt very productive compared to, say, Java. Java was full of inconvenience: XML, bloated frameworks, and all that. But today you can use Kotlin, which is in my opinion even nicer than Python, with performant frameworks (e.g. Quarkus or Ktor) on the super-fast JVM.

I don't want to start a language war, but maybe Python is not the first choice for these requirements.
est over 4 years ago
Might as well refer to the TechEmpower benchmarks.

https://www.techempower.com/benchmarks/
KaiserPro over 4 years ago
We did an evaluation for our API. The API accepts an image upload, passes it on to the backend for processing, and returns a ~2k JSON lump.

Long story short, FastAPI was much, much faster than anything else for us. It also felt a bit like Flask. The integration with Pydantic for validating dataclasses on the fly was also great.
dfgdghdf over 4 years ago
I would question choosing Python for large server projects because the performance ceiling is so low. At least with "middle tier" performance languages such as Java / C#, you are unlikely to require a complete language switch as the project scales.
ancount over 4 years ago
I inherited a Flask queue worker, and it suffers from some major problems (like 12 req/second when it's not discarding items from the queue). I am primarily a JavaScript programmer, so I'm a little bit out of my element.

I am tempted to refactor the worker to use async features, and that would require factoring out uWSGI, which is fine; I only added it last week. The article states that Vibora is a drop-in replacement for Flask, but I guess I'm a bit skeptical, as I can't find much information beyond Vibora having a similar API. For a web service with basically one endpoint, I could refactor to another implementation fairly easily; I'm just looking for the right direction.

I thought maybe I should refactor the arch to either batch requests to the worker or to use async. Anyone have a feeling where I should go? I am just getting started researching this, but any advice would be appreciated.

Edit: at least Quart has a migration page... probably will just try it out; what can I lose? https://pgjones.gitlab.io/quart/how_to_guides/flask_migration.html

Second edit: Also might try out polyrand's stack from the comments.
robertlagrant over 4 years ago
Note: SQLAlchemy 1.4 adds asyncio support for Core and the ORM. https://docs.sqlalchemy.org/en/14/changelog/migration_14.html#asynchronous-io-support-for-core-and-orm
maxpert over 4 years ago
The fact that you are using an offset of 50,000 and complaining that it slows everything down says a lot about the benchmarks. Top it all off with an ORM query that prefetches everything, the GIL, and a shared CPU (I am guessing) that you ran the benchmark on. You see where this is headed?
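The large-OFFSET complaint is usually answered with keyset ("seek") pagination. A stdlib sketch with an invented table shows the two query shapes; in SQLite the cost difference is negligible, but on a real database the OFFSET form walks and discards every skipped row while the keyset form seeks through the index:

```python
import sqlite3

# Toy table for contrasting the two pagination styles.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"item-{i}") for i in range(1, 1001)])

# OFFSET pagination: the database materializes and throws away 500 rows.
page_offset = conn.execute(
    "SELECT id FROM items ORDER BY id LIMIT 10 OFFSET 500"
).fetchall()

# Keyset pagination: remember the last id the client saw and seek past it.
last_seen_id = 500
page_keyset = conn.execute(
    "SELECT id FROM items WHERE id > ? ORDER BY id LIMIT 10",
    (last_seen_id,),
).fetchall()

assert page_offset == page_keyset  # both pages are ids 501..510
```

Keyset pagination stays fast no matter how deep the client pages, at the cost of only supporting "next page" style navigation.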
gchamonlive over 4 years ago
I have had great experiences with Falcon for backend REST APIs, and it is supposed to be great in terms of requests per second.

How does it compare to Sanic?
hgretg3443 over 4 years ago
C#/ASP.NET is the fastest web framework now:

https://www.techempower.com/benchmarks/#section=test&runid=8ca46892-e46c-4088-9443-05722ad6f7fb&hw=ph&test=plaintext

7,000,000 requests per second.

Even Go can only achieve 4,500,000 requests per second despite being a lower-level language, as opposed to high-level C#.
oliwarner over 4 years ago
The important thing to remember is that unless you're running a massive service, requests per second is less important than seconds per request.

Getting an API hit from 300ms to 70ms, plus proper frontend caching, is far more valuable than concurrency (if you can afford to throw servers at it) because it actually affects user-perceived performance.
helsinkiandrew over 4 years ago
Since I've been a developer, there have been two changes that I feel have given major performance improvements and made backend framework improvements much less significant (at least in the apps I develop): CDNs and client-side rendering (which means more, smaller requests for data, which are better suited to being served via a CDN).

Using (for example) AWS CloudFront was a game changer in how I design webapps and view performance. Being able to "slice and dice" which requests get SSL-terminated at the CDN, cached fairly locally, served from an Amazon-managed webserver, or sent to our app server increased our performance 10-fold.

That approach isn't always practical, but I find that it's now much easier to choose the backend for developer productivity, and doubling the server CPU/memory is quicker and cheaper when needed.
Terretta over 4 years ago
Not that it matters any more, but a colleague mentions that Flask originally started as a joke about what not to do:

https://lucumr.pocoo.org/2010/4/3/april-1st-post-mortem/

The Flask author reflects on that here:

http://mitsuhiko.pocoo.org/flask-pycon-2011.pdf

Quite relevant to the conclusion in the article.
luord over 4 years ago
And here I was, living under the assumption that psycopg2 was the only option; it was probably the biggest reason I was not using PyPy. Gotta take a look at pg8000.

In general, I've always liked the idea of PyPy, so I'll try to use it more, and not just for performance. Will also donate when I can.
hendry over 4 years ago
I always assumed Python could scale because of Reddit: https://github.com/reddit-archive/reddit

Not quite sure if their current site's code is open source... anyone know?
sandGorgon over 4 years ago
TL;DR: PyPy is awesome. Don't use frameworks. Use PyPy.

Please donate. PyPy needs funds: https://opencollective.com/pypy

PyPy doesn't get a fraction of the funding that Python does.
sdevonoes over 4 years ago
Interesting. I had never heard of Japronto before. For the people working with Python: why Flask instead of Japronto?
crad over 4 years ago
I get much higher request throughput in my Tornado applications, with very low response latencies. Strange.
lrossi over 4 years ago
This benchmark was run on a laptop, which has a very small number of cores compared to the servers that usually run such apps. The author doesn't mention any attempt to tweak the number of workers, which would make sense in this case. Given that they did notice at some point that CPU usage was lower than expected, I am surprised that they did not try it.
AtlasBarfed over 4 years ago
Where is Japronto in the TechEmpower benchmarks? It's not even on there.
tubbyjr over 4 years ago
My god, the CSS and styling on that page is absolutely abysmal.
Ambix over 4 years ago
Why is it so? I've gotten 100K requests per second with PHP easily [1].

[1] https://github.com/gotzmann/comet
p5v over 4 years ago
The TL;DR you should be looking for:

> all of this emphasises the fact that unless you have some super-niche use-case in mind, it's actually a better idea to choose your framework based upon ergonomics and features, rather than speed
mlthoughts2018 over 4 years ago
I think it's been bog-standard practice to run Flask via uWSGI or gunicorn with async workers, using multiple process-based workers per deployed server unit (e.g. per pod in Kubernetes).

What matters is that the cumulative latency & throughput solve your problem, not how fast you can make one singular async worker thread.

I figure most people running complex web services in production would just roll their eyes at this post. Nobody's going to switch to PyPy for any of this.

My team at work runs several complex ML workloads, and we use the exact same container pattern for every service: gunicorn spawns X async workers per pod, and we scale pods per service to meet throughput requirements. Sometimes we also just post complex image-processing workloads to a queue and batch them to GPU workers. In all these use cases, the super-low-effort "just toss it in gunicorn running Flask" approach has worked without issue for services supporting peak loads of thousands to hundreds of thousands of requests per second.
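The "gunicorn with async workers" pattern described here boils down to a single invocation per container; the module path and counts below are placeholders, and the gevent worker class needs the extra installed (`pip install gunicorn[gevent]`):

```shell
# One pod runs several cooperative (gevent) workers, each able to juggle
# many in-flight requests; Kubernetes then scales the number of pods.
gunicorn --worker-class gevent --workers 4 --worker-connections 1000 myapp:app
```

The orchestrator handles horizontal scale, so per-worker speed only has to be good enough for the pod count to stay reasonable.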
forgotmypw17 over 4 years ago
I have to question the value of text written by someone who sets white-on-white text on their website...
ctvo over 4 years ago
It's a bit of a step back in time reading things like this.

This is stateless HTTP requests hitting a relational database. How is this dead horse still being beaten? The patterns for load balancing, horizontal scalability, and caching in this space are well documented.

What are we gaining by still profiling Django, Flask, and Ruby on Rails in 2021?