TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.


How to Interpret Site Performance Tests

60 points by TheMissingPiece over 7 years ago

6 comments

d0ugie over 7 years ago

Amazing how many e-commerce sites there are out there with time to first byte in excess of 3s and DOM loads over five seconds, as if they have no idea they are throwing away sales and killing their SERP ranks. And the fixes quite often don't require money or that much skill, just a little help from something like Varnish in many cases.

Is the only good way for me to use QUIC with NGINX (or any server that can deliver most of what NGINX does) to use GoQuic and quick-reverse-proxy from GitHub? I assume that, because the protocol is quite different and uses UDP, it may require a lot of from-scratch retooling by the NGINX devs to light it up (otherwise we'd have had it from them long ago, I imagine), but we need to cut down on round trips, baby. It would be nice if Google were to give https://cs.chromium.org/chromium/src/net/tools/quic/ an update with a happy .deb to get it running. Kind of a funny page for Google not to serve with QUIC; guess they like a little irony.

Further parenthetically, I am all in favor of Google making the web better, both with things like protocol development, PageSpeed, image and video work, and with search signals that give sites with HTTPS and speed a ranking boost. Money talks.
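Time to first byte is easy to spot-check yourself before reaching for any of the tools in the article. A minimal sketch in Python's standard library (the function name and structure are mine, not from the thread): it times how long a server takes to return its HTTP status line, which is roughly the TTFB the commenter is complaining about.

```python
import http.client
import time

def time_to_first_byte(host, path="/", port=80, use_tls=False):
    """Return (seconds until the status line arrives, HTTP status code)."""
    conn_cls = http.client.HTTPSConnection if use_tls else http.client.HTTPConnection
    conn = conn_cls(host, port, timeout=10)
    try:
        start = time.perf_counter()
        conn.request("GET", path)
        resp = conn.getresponse()  # returns once the status line has been read
        ttfb = time.perf_counter() - start
        resp.read()  # drain the body so the connection closes cleanly
        return ttfb, resp.status
    finally:
        conn.close()
```

Against a slow e-commerce site this should readily reproduce the 3s-plus numbers described above; note it measures one connection only, with no DNS or TLS warm-up separated out.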
josephscott over 7 years ago

I'm surprised to see webpagetest.org left out of the list of suggested tools.
teddyh over 7 years ago

See also *The Difference Between GTmetrix, PageSpeed Insights, Pingdom Tools and WebPagetest*: https://gtmetrix.com/blog/the-difference-between-gtmetrix-pagespeed-insights-pingdom-tools-and-webpagetest/
bluesmoon over 7 years ago

It's fascinating that even 10 years after people began speaking about the importance of RUM (real user monitoring) when measuring performance, the majority of articles still completely ignore its existence.
ahmetkun over 7 years ago

Which is more beneficial on today's web: serving static content from a cookieless domain at the cost of two DNS lookups, or serving all content from the same domain with only one DNS lookup?
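The trade-off in this question can be put to numbers rather than argued in the abstract. A back-of-envelope sketch (both function names and the example figures are mine, purely illustrative): measure what the extra lookup actually costs on your resolver, and estimate the upstream bytes a cookie header adds when it rides along on every same-domain asset request.

```python
import socket
import time

def dns_lookup_seconds(hostname, port=443):
    """Time one getaddrinfo call; note OS and resolver caches will skew repeats."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, port)
    return time.perf_counter() - start

def cookie_bytes_per_page(cookie_header_bytes, asset_requests):
    """Rough upload bytes spent re-sending the Cookie header with every asset."""
    return cookie_header_bytes * asset_requests
```

For example, a 400-byte cookie header across 50 asset requests costs about 20 KB of upload per page view, which you can weigh against the one extra lookup's measured latency (often amortized anyway, since browsers cache DNS results for the session).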
ianamartin over 7 years ago

One thing not explicitly mentioned here is what happens to your performance profile under load. You can have a well-designed app with solid, maintainable code, but an overall stack that isn't set up to handle simultaneous users. Whether it's node, ruby, python, or php, you still have to consider things like your web server, reverse proxy, load balancer, and database configurations.

When I'm building things that are going to see heavy use, I run regular tests at each level with the very simple Loads package in Python. I want to know the answer to the following question as each layer gets added in: what's the maximum number of requests I can handle before a complete page load gets above 50ms? 100ms? 500ms?

I start with no database. Just pure application code with hard-coded values. That gives me the practical maximum for the code as written. Is it good enough for the expected use?

Add the database and test again. How much does that hurt? Are we still in good shape? Or do we need to optimize?

Add your reverse proxy with what you expect your production settings to be (i.e. serve your static assets through nginx and not through your application framework's static-asset handling).

How are we doing now? Oftentimes you'll need to do some tweaking at this stage. I run multiple instances of my Python apps on each box, usually one per core, and reverse proxy them to nginx via a socket instead of the HTTP default, so I have to load balance them. In the simplest case you can use an iptables setup to round-robin every new connection, but that's really pretty hacky.

In the real world, I use haproxy, because the open source version has sophisticated load balancing tools and health checks, both of which are limited in the free version of nginx. So get that configured, turn it loose, and test again. You should see *better* performance than in your last run, because you have more resources. But you're probably going to have to do some tweaking here as well.

Etc., etc., until you've tested your entire delivery stack. Also, *test with TLS enabled before you launch into prod*; it adds overhead to each request. Don't get caught flat-footed when your real-world numbers don't live up to your benchmarks and your product manager wants to know why.

Loads is a pretty blunt tool, though. It gives you early insight into the maximum performance you're going to be able to get, and it alerts you when one of your layers has caused a significant decrease in performance, or hasn't improved things as much as expected.

The tools in the article are good for drilling down into where you need to look when something does go awry. But even if you use all of them and do edge testing on your CDN and all that, if you only start doing speed and load tests after your build is done, it's too late.

You have to speed test and load benchmark early and often, so you know what to expect and you know where to probe if things come up short of expectations (and they will).

The article is a good explanation of what various tools and resources mean, but it's only part of what you need to be doing to make sure your app actually performs the way you want it to in the real world.

This is a good reminder that I want to start hitting the PageSpeed site earlier in the dev process. I haven't traditionally cared all that much, because when my junk is loading in 50ms or less, I kind of don't care. But I'm probably leaving some performance on the table by ignoring that until late in the process.

It's also a reminder that I need to clean some things up and create a GitHub repo with all of this laid out, plus some starting configurations for a real deployment scenario for each part of the stack. I'll do a Show HN if I ever get around to this.
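The layer-by-layer question above ("what's the maximum load before a complete page load blows a latency budget?") can be sketched without any particular tool. This is not the Loads package the commenter uses; it is a minimal stand-in of my own, stepping concurrency up against a URL until the 95th-percentile latency crosses the budget:

```python
import concurrent.futures
import time
import urllib.request

def p95_latency(url, concurrency, requests_per_worker=5):
    """Hit `url` from `concurrency` workers; return ~95th-percentile latency in seconds."""
    def worker():
        times = []
        for _ in range(requests_per_worker):
            start = time.perf_counter()
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read()
            times.append(time.perf_counter() - start)
        return times

    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(worker) for _ in range(concurrency)]
        all_times = sorted(t for f in futures for t in f.result())
    return all_times[min(len(all_times) - 1, int(len(all_times) * 0.95))]

def max_concurrency_under(url, budget_s, steps=(1, 2, 4, 8, 16, 32)):
    """Largest tested concurrency whose p95 latency stays within budget, or 0."""
    best = 0
    for c in steps:
        if p95_latency(url, c) <= budget_s:
            best = c
        else:
            break
    return best
```

Run it after each layer is added (app only, app plus database, behind the reverse proxy, behind haproxy, with TLS on) and compare the numbers; a layer that halves `max_concurrency_under` is where to start tweaking.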