
Ask YC: Capacity planning question

8 points by goodgoblin, over 17 years ago
How many simultaneous mongrels am I going to need to serve 120K users? 120K mongrels?

Assume the mongrels are sitting at the front of a Rails app - any idea of a good metric for this? I know they're not all going to be hitting the site simultaneously, but I believe each request to Rails blocks.

So at 4 Mongrels per virtualized server I'd need 250 servers to serve 1K simultaneous users. Is there a good metric for breaking down how many simultaneous users I should plan for based on the total number of users? Comments, links, ridicule welcome. TIA
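One rough way to approach the "how many simultaneous users" part is to turn the total user count into a peak request rate first; the replies below then deal with turning that rate into a mongrel count. A back-of-envelope sketch only - the 10 requests per user per day and the 3x peak factor are made-up assumptions, not measurements:

    # Back-of-envelope: total users -> peak requests per second.
    # The per-user request count and peak factor are assumptions.
    total_users           = 120_000
    requests_per_user_day = 10      # assumed page views per user per day
    peak_factor           = 3.0     # assumed peak-hour load vs. a flat average

    requests_per_day = total_users * requests_per_user_day
    avg_rps          = requests_per_day / 86_400.0   # seconds in a day
    peak_rps         = avg_rps * peak_factor

    puts "average req/s: #{avg_rps.round(1)}"   # ~13.9
    puts "peak req/s:    #{peak_rps.round(1)}"  # ~41.7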

6 comments

bfioca, over 17 years ago
120K users a day? That's a _lot_ for a rails app. In fact, looking around, I'd be surprised if there were any existing rails applications that serve that many. 43things.com seems to max out in the 50K users per day range (on average). The first question I have to ask is do you really need to handle that kind of traffic? If you don't - don't bother trying to scale to that much yet. If you're talking hits, that's a different story... You'll definitely want to know how many concurrent hits you're getting. For reference, a TechCrunch post generates between 20-50 concurrent hits on an application that renders in ~1s/page (as of Nov 07). Plan for 2x-3x that for digg/reddit traffic. That TechCrunch load was served with 4 or 5 mongrels. 6-8 is the most I'd recommend running on a dual-core box.

I haven't seen a 1000 concurrent user app since I worked at UPS... so if you have this problem, why are you asking us? You should be hiring someone with the huge amount of money you have. ;)
blader, over 17 years ago
Here are some numbers from our application, let me know if this helps:

We have 400K daily active users, doing 200 requests a second and around 10MM page views per day. All requests are dynamic and hit the full Rails stack. We're probably easily in the top 5 Rails sites on the net based on load.

We run all of this on 5x 4-core 8GB application servers and 2x 4-core 32GB db servers in master-master replication. We run 16 mongrels on each app server for a total of 80. Our average response time per request is around 100-200ms.

We host on Softlayer and pay around $6000 a month.

Also, the number of mongrels you will need is directly dependent on how fast your requests are, and how you are load balancing across these mongrels. We use the nginx proxy with the fair load balancing patch: http://brainspl.at/articles/2007/11/09/a-fair-proxy-balancer-for-nginx-and-mongrel
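As a rough sanity check on the relationship between those figures (a sketch using only the numbers quoted above): the number of requests in flight at any instant is roughly throughput times response time, so 200 req/s at 100-200ms keeps only about 20-40 of the 80 mongrels busy, which is where the headroom comes from:

    # Little's Law: in-flight requests = throughput * average response time.
    # Throughput and response times are taken from the comment above.
    throughput_rps  = 200
    response_low_s  = 0.10
    response_high_s = 0.20

    busy_low  = (throughput_rps * response_low_s).ceil   # 20
    busy_high = (throughput_rps * response_high_s).ceil  # 40

    puts "mongrels busy at any instant: roughly #{busy_low}-#{busy_high} (80 provisioned)"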
fendale, over 17 years ago
Each request to Rails blocks a Mongrel while it serves it, but even on my home server I am getting upwards of 50 requests per second on my app (all logged-in pages, so no caching). That means you could have 50 people all requesting the page at the same time and each of them will have it returned inside a second, even with a single Mongrel.

You need to be careful that you don't tie up some of your Mongrels doing long-running tasks - if you have actions that cause tasks to run that take on the order of seconds, consider queuing them up to be serviced by some other background process (which is what I decided to do).

As someone else mentioned here, try to cache as much as possible - cached full pages take the load off Rails completely, and cached fragments reduce the time to serve a request inside Rails, so you can get more from each Mongrel. Make sure not to cache logged-in pages though!

Other general advice for a database application - hit the database as little as possible. In Rails, don't do things like:

    @user     = User.find(params[:id])
    @products = @user.products.find(:all)
    @profile  = @user.profile.find(:all)

That would result in 3 database queries, while this will do it in 1:

    @user = User.find(params[:id], :include => [:products, :profile])

etc ...
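For the caching advice above, a minimal sketch of what that looked like in Rails of that era (the controller, action, and partial names are hypothetical): page caching writes a static HTML file that Apache/nginx can serve without touching Rails at all, while fragment caching just shortens the time each Mongrel spends per request.

    class ProductsController < ApplicationController
      # Page cache: the first hit writes a static .html file under public/,
      # so later hits never reach Rails. Only safe for pages that look the
      # same for every logged-out visitor.
      caches_page :show

      # Action cache still runs before-filters (e.g. a login check) but
      # skips the action and render on a cache hit.
      caches_action :index

      def show
        @product = Product.find(params[:id])
      end
    end

    <%# In a view: cache an expensive fragment so each Mongrel spends %>
    <%# less time per request. The fragment name is hypothetical.      %>
    <% cache('popular_products_sidebar') do %>
      <%= render :partial => 'popular_products' %>
    <% end %>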
icky, over 17 years ago
> So at 4 Mongrels per virtualized server I'd need 250 to serve 1K simultaneous users.

Depends on how simultaneous they really are. Are we talking 1000 hits per second? Or are we talking about 1000 unique people viewing some portion of your site/app at a given time?

If it's the latter, you can get away with a lot less.

Also, if you have shared-anything, it will become a bottleneck long before the mongrels. Your database especially will have to be replicated (for read-mostly apps), or sharded (for heavy read-write apps).

If it's a read-mostly app, consider aggressively caching fragments or even pages. 1K users hitting static pages will just hit Apache, given the right set of mod_rewrite rules, and you can have a lot more Apache processes (or threads; is Rails threadsafe these days?) running on a given server than mongrels (which, when I last used Rails, were very resource-hungry).

Consider also ways to extend the functionality of cached/static pages. You could have mod_rewrite check to see if the user has a login cookie and only then hit the non-cached app, OR you could have client-side javascript on the static cached page check for the same cookie and only then display the login name or do an XMLHttpRequest to the server (which then may cache a static html subpage named for that username, which can then be checked by mod_rewrite as well).

Just don't trust non-signed user cookies for looking up private information, or for making any database writes. Signed cookies, however, are a great alternative to centralized sessions (just remember to encrypt anything that you want the user to store, but not see: signing just protects against tampering). Jam the user's IP address into the signed cookie text and guard against replay attacks, as well!
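A minimal sketch of that signed-cookie idea (the cookie layout, field names, and secret handling here are illustrative, not a drop-in implementation): sign the payload with an HMAC so tampering is detectable, and fold in the client IP and a timestamp to make replay harder.

    require 'openssl'

    # Server-side secret; never sent to the client. Illustrative only.
    SECRET = 'change-me'

    # Signing != encryption: the user can read the payload, but the HMAC
    # proves it has not been tampered with.
    def sign_cookie(username, ip)
      payload = [username, ip, Time.now.to_i].join('|')
      hmac    = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'), SECRET, payload)
      "#{payload}|#{hmac}"
    end

    def verify_cookie(cookie, current_ip, max_age = 3600)
      username, ip, issued_at, hmac = cookie.split('|')
      payload  = [username, ip, issued_at].join('|')
      expected = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'), SECRET, payload)
      return nil unless hmac == expected                       # tampered with
      return nil unless ip == current_ip                       # replay from another host
      return nil if Time.now.to_i - issued_at.to_i > max_age   # too old
      username
    end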
dedalus, over 17 years ago
I don't think your question has enough data to answer properly. We need to know the service time for each request (how many seconds an average request takes), what the arrival rate of your requests looks like (120K per second, or per minute?), and what the limit on your request queue is (requests put on hold until they get served). Finally, what is the tolerance level for the final response time? (You can support 120K users by serialising across 10 servers, but that drives up the response time for end users.)

Anyway, you can probably read some books here: http://www.cs.gmu.edu/faculty/menasce.html

Or, if you are in a hurry, a quick glimpse at the tactical paper at http://www.cmg.org/measureit/issues/mit04/m_4_7.html

Hope this gets you started, if it doesn't answer your question thoroughly.
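In those terms, the basic sizing relationship is utilization = (arrival rate x service time) / number of servers; queueing delay grows sharply as utilization approaches 1, so you pick the mongrel count to keep it well below that. A rough planning aid only, with made-up inputs, not a full queueing model:

    # Utilization-based sizing: rho = (arrival rate * service time) / servers.
    # All inputs here are illustrative assumptions.
    arrival_rate_rps = 40    # assumed peak requests per second
    service_time_s   = 0.3   # assumed average seconds per request
    target_rho       = 0.7   # leave headroom; waits blow up as rho -> 1

    mongrels_needed = (arrival_rate_rps * service_time_s / target_rho).ceil
    puts "mongrels needed: #{mongrels_needed}"   # 18 under these assumptions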
carpal, over 17 years ago
I don't mean to be an ass, but you are not going to have 1k simultaneous users. If you did, you would be able to pay someone who had a better understanding of how webservers work.

One decent machine running 10 mongrels on a reasonably well-designed Rails app will easily be able to handle 100 requests per second. That is more traffic than you will ever get, I guarantee you.