
In spite of an increase in Internet speed, webpage speeds have not improved

719 points · by kaonwarb · almost 5 years ago

122 comments

bob1029 · almost 5 years ago
It doesn't have to be this way. I am not sure when a new rule was passed in software engineering that said you shall never use server rendering again and that the client is the only device permitted to render any final views.

With server-side rendering (or just static HTML if possible), there is so much potential to amaze your users with performance. I would argue you could even do something as big as Netflix with pure server-side if you were very careful and methodical about it. Just throwing your hands up and proclaiming "but it won't scale!" is how you wind up in a miasma of client rendering, distributed state, et al., which is ultimately 10x worse than the original scaling problem you were faced with.

There is a certain performance envelope you will never be able to enter if you have made the unfortunate decision to lean on client resources for storage or compute. Distributed anything is almost always a bad idea if you can avoid it, especially when you involve your users in that picture.
anonyfox · almost 5 years ago
I just rewrote my personal website ( https://anonyfox.com ) to become statically generated (Zola, runs via GitHub Actions), so the result is just plain and speedy HTML. I even used a minimal classless „CSS framework“, and on top I am hosting everything via Cloudflare Workers Sites, so visitors should get served right from CDN edge locations. No JS or tracking included.

As snappy as I could imagine, and I hope that this will make a perceived difference for visitors.

While average internet speed might increase, I still saw plenty of people browsing websites primarily on their phone, with bad cellular connections indoors or via a shared WiFi spot, and it was painful to watch. Hence my rewrite (still ongoing).

Do fellow HNers also feel the „need for speed“ nowadays?
partiallypro · almost 5 years ago
The main culprit, imo, is JavaScript. People/clients want more and more complex things, but JavaScript and its libraries are the main culprit. Image compression, minification... it helps, but if the page needs a lot of JS, it's going to be slower.

Slightly off topic, but I have a site that fully loads in ~2 seconds, yet Google's new "PageSpeed Insights" (which is tied to webmaster tools now) gives it a lower score than a page that takes literally 45 seconds to fully load. Please, someone at Google, explain this to me. At least GTmetrix/Pingdom actually makes sense.
superkuh · almost 5 years ago
Over the last 5 years there has been a dramatic shift away from HTML web pages to JavaScript web applications on sites that have absolutely no need to be an application. They are the cause of increased load times. And among them, there's a growing proportion of URLs that simply *never* load at all unless you execute JavaScript.

This makes them large in MB, but that's not the true cause of the problem. The true cause is all the external calls for loading JS from other sites, and then the time to attempt to execute that and build the actual webpage.
bdickason · almost 5 years ago
The article doesn't dig into the real meaty topic: why are modern websites slow? My guess would be third-party advertising as the primary culprit. I worked at an ad network for years, and the number of JS files embedded which then loaded other JS files was insane. Sometimes you'd get bounced between 10-15 systems before your ad was loaded. And even then, it usually wasn't optimized well and was a memory hog. I still notice that some mobile websites (e.g. CNN) crash when loading 3p ads.

On the contrary, sites like Google/Facebook (and apps like Instagram or Snapchat) are incredibly well optimized as they stay within their own first-party ad tech.
sarego · almost 5 years ago
As someone who just recently worked on reducing page load times, these were found to be the main issues:

1. Loading large images (below the fold / hidden) on first load
2. Marketing tags: innumerable and out of control
3. Executing non-critical JS before page load
4. Loading non-critical CSS before page load

Overall we managed to get page load times down by 50% on average by taking care of these.
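For readers who want the mechanics, here is a minimal sketch of those four fixes for a plain page with no framework. The selectors, file names and the 200px margin are placeholder assumptions, not anything taken from the comment above.

```js
// Sketch: defer below-the-fold images, non-critical JS and non-critical CSS.
// Assumes <img data-src="..."> placeholders; file names are hypothetical.

// 1. Lazy-load images only when they approach the viewport.
const lazyImages = document.querySelectorAll('img[data-src]');
const io = new IntersectionObserver((entries, observer) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    img.src = img.dataset.src;        // start the real download now
    observer.unobserve(img);
  }
}, { rootMargin: '200px' });          // begin a little before it scrolls into view
lazyImages.forEach((img) => io.observe(img));

// 2 & 3. Load marketing tags and other non-critical JS only after the page is usable.
window.addEventListener('load', () => {
  const s = document.createElement('script');
  s.src = '/js/marketing-tags.js';    // placeholder bundle name
  s.async = true;
  document.body.appendChild(s);
});

// 4. Load non-critical CSS without blocking the first render.
const link = document.createElement('link');
link.rel = 'stylesheet';
link.href = '/css/below-the-fold.css';  // placeholder stylesheet
link.media = 'print';                   // non-blocking while it downloads
link.onload = () => { link.media = 'all'; };
document.head.appendChild(link);
```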
sgloutnikov · almost 5 years ago
This post reminded me of this quote:

"The hope is that the progress in hardware will cure all software ills. However, a critical observer may observe that software manages to outgrow hardware in size and sluggishness. Other observers had noted this for some time before, indeed the trend was becoming obvious as early as 1987." - Niklaus Wirth
rayiner · almost 5 years ago
It’s comical. I’ve got 2 gbps fiber on a 10 gbps internal network hooked up to a Linux machine with a 5 GHz Core i7 10700k. Web browsing is just okay. It’s not instant like my 256k SDSL was on a 300 MHz PII running NT4 or BeOS. Really, there isn’t much point having over 100 mbps for browsing. Typical web pages make so many small requests that don’t even keep a TCP connection open long enough to use the full bandwidth (due to TCP’s automatic window sizing it takes some time for the packet rate to ramp up).
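A rough back-of-the-envelope model of the slow-start effect described above. The RTT, segment size and initial window are assumed typical values, not measurements, and the model ignores congestion and receive-window limits.

```js
// Rough TCP slow-start model: the congestion window doubles every round trip.
// All numbers below are illustrative assumptions.
const rttMs = 50;                  // assumed round-trip time
const mss = 1460;                  // typical TCP segment payload in bytes
const initialCwnd = 10;            // common initial congestion window (segments)
const pageBytes = 2 * 1024 * 1024; // a 2 MB page

let sent = 0;
let cwnd = initialCwnd;
let roundTrips = 0;
while (sent < pageBytes) {
  sent += cwnd * mss;
  cwnd *= 2;                       // exponential growth during slow start
  roundTrips += 1;
}
console.log(`${roundTrips} round trips ≈ ${roundTrips * rttMs} ms spent ramping up`);
// With these numbers: ~8 round trips ≈ 400 ms, regardless of whether the link
// is 100 Mbps or 2 Gbps, because the transfer ends before the pipe is full.
```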
joncrane · almost 5 years ago
I've recently started using a Firefox extension called uMatrix, and all I can say is: install it and start using your normal web pages, and you'll very quickly see exactly why web pages take so long to load. The number and size of external assets that get loaded on many websites is literally insane.
joncrane · almost 5 years ago
I think this is a general problem with technology as a whole.

Remember when channel changes on TVs were instantaneous? Somewhere along the way the cable/TV companies introduced latency in channel changes and people just accepted it as the new normal.

Phones and computers were at one point very fast to respond, but now we tolerate odd latencies at some points. Apps for your phone have gotten much, much bigger and more bloated. Ever noticed how long it takes to kill an app and restart it? Ever noticed how much more often you have to do that, even on a 5-month-old flagship phone? It's not just web pages, it's everything. The rampant consumption of resources (memory, CPU, bandwidth, whatever) has outpaced the provisioning of new resources. I think it might just be the nature of progress, but I hate it.
MaxBarraclough · almost 5 years ago
Wirth's Law: *Software is getting slower more rapidly than hardware is becoming faster.*

https://en.wikipedia.org/wiki/Wirth%27s_law
btbuildem · almost 5 years ago
Part of the problem is analogous to traffic congestion / highway lane counts: "if you build it, they will come". More lanes get built, but more cars appear to fill them. Faster connection speeds allow more stuff to be included, and the human tolerance for latency (sub 0.1s?) hasn't changed, so we accept it.

Web sites and apps are saddled with advertising content and data-collection code; these things often get priority over actual site content. They use bandwidth and computing resources, in effect slowing everything down. Arguably, that's the price we pay for the "free internet"?

Finally (and some others have mentioned this), software development practices are partially to blame. The younger generation of devs were taught to throw resources at problems, that dev time is the bottleneck and not CPU or memory, and it shows. And that's those with some formal education; many devs are self-taught, and the artifacts of their learning manifest in the code they write. This is particularly true in the JS community, which seems hellbent on reinventing the wheel instead of standing on the shoulders of giants.
weka · almost 5 years ago
I was on AT&T's website the other day ( https://www.att.com/ ) because I am a customer, and I was just astonished at how blatantly they're abusing redirection, and at the general speed of the page (i.e. it takes 5-10 seconds to load your "myATT" profile page on 250MB up/down).

It's 2020. This should not be that hard. I've worked at a bank and know that "customer data" is top priority, but at what point does the buck stop? Just because you can doesn't mean you should.
jameslk · almost 5 years ago
Hundreds of comments, yet not one questions whether the premise of the article might be flawed. They're using "onload" event times and calling this "webpage speed" (there's no such thing as webpage speed, btw [0]). It's especially well known that onload is not a very reliable metric for visual perception of page loading [1] (visual perception of loading = what most think of as "page speed"); that's why we have paint metrics (LCP, FCP, FMP, SI, etc). Tools like PageSpeed Insights/Lighthouse don't even bother to track onload.

In fact, HTTPArchive (the source of data the article uses) has been tracking a lot of performance metrics, not just onload. Some have been falling, some have been rising, and it depends on the device/connection. Also, shaving 1 second off a metric can make a huge difference. These stats are interesting to ponder, but you can't really make any sweeping judgements about them.

It looks like people just want to use this opportunity to complain about JavaScript and third-party scripts, but for above-the-fold rendering, this isn't usually the only issue for most websites. Frequently it's actually CSS blocking rendering, or other random things like huge amounts of HTML, invisible fonts, or below-the-fold images choking the connection. Of course, this doesn't fit the narrative of server-side vs client-side dev very well, so maybe that's why there are hundreds of comments here without any of them being an ounce skeptical of the article itself.

[0]. https://developers.google.com/web/fundamentals/performance/speed-tools#myth_1

[1]. https://www.stevesouders.com/blog/2013/05/13/moving-beyond-window-onload/
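For reference, the paint metrics mentioned above can be observed directly from inside a page; a minimal sketch follows (modern Chromium assumed, since LCP reporting is not available in every browser).

```js
// Compare the classic "onload" time with the paint metrics discussed above.

// Largest Contentful Paint: reported incrementally; the last entry wins.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  console.log('LCP candidate:', Math.round(last.startTime), 'ms');
}).observe({ type: 'largest-contentful-paint', buffered: true });

// First Contentful Paint.
new PerformanceObserver((list) => {
  for (const entry of list.getEntriesByName('first-contentful-paint')) {
    console.log('FCP:', Math.round(entry.startTime), 'ms');
  }
}).observe({ type: 'paint', buffered: true });

// The onload time the article relies on, for comparison.
window.addEventListener('load', () => {
  const nav = performance.getEntriesByType('navigation')[0];
  console.log('onload:', Math.round(nav.loadEventStart), 'ms');
});
```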
speeder · almost 5 years ago
One thing that is bothering me is how browsers themselves are becoming ridiculously slow and complicated.

I made a pure HTML and CSS site, and it still takes several seconds to load no matter how much I optimize it. After I launched some in-browser profiling tools, I saw that most of the time is spent with the browser building and rebuilding the DOM and whatnot several times; the download of all the data takes 0.2 seconds, and all the rest of the time is the browser rendering stuff and tasks waiting on each other to finish.
Spearchucker · almost 5 years ago
Yeah. Because modern tech is bloat. Started on a JavaScript-based search tool the other day. ALL the JavaScript is hand-coded. No libraries, frameworks, packages. No ads. Just some HTML, bare, bare-bones CSS, and vanilla JavaScript. Data is sent to the browser in the page, where the user can filter as needed.

It's early days for sure, and lots of the code was written to work first and be efficient second, so it will grow over the next few weeks. But even when finished it will be nowhere near the !speed or size of modern web apps/pages/things.

https://www.wittenburg.co.uk/Rc/Tyres/default.html

It is possible.
zelphirkalt · almost 5 years ago
It rather has slowed down with some websites, or those websites did not exist back then, because they would not have been possible.

Just this morning, when I opened my browser profile with Atlassian tabs (Atlassian needs to be contained in its own profile), there were perhaps 7 or 8 tabs which were loaded, because they are pinned. It took approximately 15-20s on this Core i7 7th Gen, under 100% CPU usage on all cores at the same time, to render all of those tabs. Such a thing used to be unthinkable. Only in current times do we put up with such a state of affairs.

As a result I had Bitbucket show me a repository page, Jira showing me a task list, and a couple of wiki pages, which render something akin to Markdown. Wow, what an utter waste of computing time and energy for such a simple outcome. In my own wiki, which covers more or less the same amount of actually used features, that stuff would have been rendered within 1-2s and with no real CPU usage at all.

Perhaps this is an outcome of pushing more and more functionality into front-end client-side JS, instead of rendering templates on the server side. As a business customer, why would I be entitled to any computation time on their servers and a good user experience?
austincheney · almost 5 years ago
Not a surprise. Most people writing commercial front end code have absolutely no idea how to do their jobs without 300mb of framework code. That alone, able to write to the basic standards and understand simple data structures, qualifies my higher than average salary for a front end developer without having to do any real work at work.
temporama1 · almost 5 years ago
JavaScript is not the problem.

Computers are AMAZINGLY fast, EVEN running JavaScript. Most of us have forgotten how fast computers actually are.

The problem is classes calling functions calling functions calling libraries calling libraries... etc, etc.

Just look at the depth of a typical stack trace when an error is thrown. It's crazy. This problem is not specific to JavaScript. Just look at your average Spring Boot webapp: hundreds of thousands of lines of code, often to do very simple things.

It's possible to program sanely, and have your program run very fast. Even in JavaScript.
tines · almost 5 years ago
This assumes that the thing that should be held constant is complexity, and that load times will therefore decrease. On the contrary, load time itself is the thing being held (more or less) constant, and complexity is the (increasing) variable.

Progress is not being able to do the same things we used to do faster, but being able to do more in the same amount of time.

These seem to be equivalent, but they're not, because the first is merely additive, while the second is multiplicative.
draklor40 · almost 5 years ago
As a backend dev now working on frontend tasks, primarily with JavaScript and TypeScript, I think I might have an insight. Server-side engineering is, in some sense, "well-defined". Software such as the JVM and operating systems behaves in a rather well-defined manner, support for features is predictable, and by front-end standards things move slowly, providing time for the developer to understand the platform and use it to his/her best.

The browser platforms are a total mess. An insane number of APIs, a combinatorial explosion of what feature is supported on what platform. And web applications move fast. REAL fast. Features are rolled out in days, fixes in hours, and frameworks come and go out of fashion in weeks. It is no longer possible for devs to keep up with this tide of change, and they seem to end up resorting to libraries for even trivial tasks, just to get around this problem of fancy APIs and their incorrect implementations and backwards compatibility. And needless to say, every dependency comes with its own burden.

Web platforms are kinda a PITA to work with. On one hand Chrome/Google wants to redefine the web to suit their requirements, and FF, the only other big enough voice, really lags in terms of JS performance. Most devs nowadays end up simply testing on Chrome and leaving it at that. My simple browser benchmarks show anywhere between a 5-30% penalty in performance for FF vs Chrome.

Unless we slow down the pace of browser API changes and stop releasing a new version of JS every year and forcing developers to adopt them, I guess the slow web will be here to stay for a while.
CM30 · almost 5 years ago
You could probably say the exact same thing about video game consoles and loading times/download speeds/whatever. The consoles got more and more powerful, but the games still load in about the same amount of time (or more) than they used to, and take longer to download.

And the reasoning there is the same as for webpage speed or congestion on roads: the more resources/power is available for use, the more society will take advantage of it.

The faster internet connections get, the more web developers will take advantage of that speed to deliver types of sites/apps that weren't possible before (or even more tracking scripts than ever). The more powerful video game systems get, the more game developers will take advantage of that power for fancier graphics and larger worlds and more complex systems. The more road capacity we get, the more people will end up driving as their main means of transport.

There's a fancy term for that somewhere (and it's mentioned on this site all the time), but I can't think of it right now.
alkonaut · almost 5 years ago
Despite an increase in computer speed, software isn't faster. It *does more* (the good case), or it's simply sloppy, but that's not necessarily a bad thing because it means it was cheaper/faster to develop.

Same with web pages. You deliver more and/or you can be sloppier in development to save dev time and money. Shaking a dependency tree for a web app, or improving the startup time for a client-side app, costs time. That's time that can be spent either adding features or building another product entirely, both of which often have better ROI than making a snappier product.
strstr · almost 5 years ago
This is caused by induced demand. This comes up a lot for car traffic [0]. If you build wider roads, you will almost always just see an increase in traffic, up to the point where the roads are full again. The metaphor is not perfect, but I think it is fairly apt.

Expanding infrastructure increases what people can do, and so people do more things. In some cases, it just decreases the cost of engineering (you can use more abstractions to implement things more quickly, but at the cost of slower-loading sites). But in the end, you should not expect wider pipes to improve speeds.

[0] https://www.bloomberg.com/news/articles/2018-09-06/traffic-jam-blame-induced-demand
jbob2000 · almost 5 years ago
It's the marketing team's fault. I proposed a common, standardized solution for showing promotions on our website, but no... they wanted iframes so their designers could use a WYSIWYG editor to generate HTML for the promotions. This editor mostly generates SVGs, which are then loaded into the iframes on my page. Most of our pages have 5-10 of these iframes.

Can someone from Flexitive.com please call up my marketing coworkers and tell them that they aren't supposed to use that tool *for actual production code*?

Can someone also call up my VP and tell them they are causing huge performance issues by implementing some brief text and an image with iframes?

Can someone fire all of the project managers involved in this for pushing me towards this solution because of the looming deadline?
manigandham · almost 5 years ago
The reason websites have gotten worse is that they don't make performance a priority. That's all it is. Most sites optimize for ad revenue and developer time (which lowers costs) instead.
lmilcin · almost 5 years ago
And they will not.

The reason is that web designers treat newly improved performance as an excuse either to throw in more load (more graphics, higher-quality graphics, more scripts, etc.) or to let themselves produce faster at the cost of performance.

Nowadays it is not difficult to build really responsive websites. It just seems designers have other priorities.
vendiddy · almost 5 years ago
It frustrates me that the same applies to CPU power, RAM, and disk space. We have orders of magnitude more of each, but the responsiveness of apps remains the same. At least from my subjective experience.

If someone has a good explanation of what has happened, I'd love to know the cause and what can be done to fix it.

I understand that some of this has gone to programmer productivity and increased capabilities for our apps, but what we've gotten doesn't seem proportional at all.
bangonkeyboard · almost 5 years ago
I frequent one forum only through a self-hosted custom proxy. This proxy downloads the requested page HTML, parses the DOM, strips out scripts and other non-content, and performs extensive node operations of searching and copying and link-rewriting and insertion of my own JS and CSS, all in plain dumb unoptimized PHP.

Even with all of this extra work and indirection, loading and navigating pages through the proxy is still much faster than accessing the site directly.
tonymet · almost 5 years ago
I'm developing a tiny-site search engine. Upvote if you think this product would interest you. The catalog would be sites that load in < 2s with minimal JS.
mensetmanusman · almost 5 years ago
I tested content blockers on iOS.

Going to whatever random media site without them enabled is a couple of MB per page load (the size of SNES ROMs... for text!).

With content blockers enabled it was a couple of KB per page load.

Three orders of magnitude difference in webpage size due to data harvesting...

Now, imagine how much infrastructure savings we would have if suddenly web browsing used even just one order of magnitude less data. Would be fun to calculate the CO2 emission savings, ha.
paradite · almost 5 years ago
In other news:

In spite of an increase in mobile CPU speed, mobile phone startup times have not improved (in fact they became slower).

In spite of an increase in desktop CPU speed, the time taken to open AAA games has not improved.

In spite of an increase in elevator speed, the time taken to reach the median floor of a building has not improved.

My point is, the "webpage" has evolved the same way as mobile phones, AAA games and buildings: it has more content and features compared to 10 years ago. And there is really no reason or need to make it faster than it is right now (2-3 seconds is a comfortable waiting time for most people).

To put things in perspective:

The time taken to do a bank transfer is now 2-3 seconds of bank website load and a few clicks (still much to improve on) instead of physically visiting a branch / ATM.

The time taken to start editing a Word document is now 2-3 seconds of Google Drive load instead of hours of MS Office Word installation.

The time taken to start a video conference is now 2-3 seconds of Zoom/Teams load instead of minutes of Skype installation.
habosa · almost 5 years ago
Here on HN we like to complain about JS frameworks and Single Page Apps. Yes, they can be slow. But they also power some great interactive web experiences like Figma or Slack that just aren't feasible to build any other way.

The low-hanging fruit here is content websites (news, blogs, etc) which are loaded down with hundreds of tracking scripts, big ads, and tons of JS that has nothing to do with the content the user came to read.

Try loading this page (which is far from the worst): https://www.theverge.com/21351770/google-pixel-4a-review-camera-specs-price

Privacy Badger reported 30 (!!!) tracking scripts on that page. Even with PB blocking those, it still takes ~15s before the page is usable on my MacBook Pro with a fast connection.

It's just a bunch of text and some picture galleries. It loads like it's an IDE.
vlovich123 · almost 5 years ago
Parkinson's law at work.

Employees building the web pages are rewarded for doing "work". Work typically means adding code, whether it's features, telemetry, "refactoring", etc. More code is generally slower than no code.

That's why you see something like Android Go for entry-level devices & similar "lite" versions targeting those regions. These will have the same problem too over time, because even entry-level devices get more powerful over time.

The problem is that organizations don't have good tools to evaluate whether a feature is worth the cost, so there's no back pressure except for the market itself picking alternate solutions (assuming those are options; sometimes they may not be if you're looking at browsers or operating systems, where generally a "compatibility" layer has been defined that everyone needs to implement).
oblio · almost 5 years ago
While I agree with the idea and I am not happy about slow apps, the truth is, it's focused on technical details.

People don't care about speed or beauty or anything else other than the application helping them achieve their goals. If they can do more with current tech than they could with tech 10-20 years ago, they're happy.
the_gipsy · almost 5 years ago
I've made a multiplayer web game ( https://qdice.wtf ) that is under 400kb *total* download on first load [1]. Even when booting into a running game it's not much higher.

Load times and bloat are one of my pet peeves; that's why I optimized a lot for this, although there is *still* room for improvement.

Everything is self-hosted, no tracking bullshit, no external requests. I used Elm, which, apart from being nice for learning FP, has a very small footprint compared to other DOM abstractions.

[1] Last time I looked. It might have grown a tiny bit due to UGC; I don't have access to a computer rn.
throwaway0a5e · almost 5 years ago
To quote an exec at a major CDN:

"wider pipe fit more shit"

(Yes, he actually said that, to an entire department. The context was that people will fill the pipe up with junk if they're not careful, and it made more room to deliver value by not sucking.)
jrnichols · almost 5 years ago
Of course not. People remain convinced that the internet will cease to exist without advertisements all over the place. Web pages are now 10MB+ in size, making 20 different DNS calls, all of which add latency. And for what? To serve up advertisements wrapped around (or laying themselves over) the content that we came to read in the first place.

Maybe I'm just old, but I fondly remember web pages that loaded reasonably fast over a 56k modem. These days, if I put anything on the web, I try to optimize it the best I can. Text only, minimal CSS, no JavaScript if at all possible.

I hope more people start doing that.
kbuchanan · almost 5 years ago
To me it's more evidence that increased speed and reduced latency is not where our real preferences lie: we may be more interested in the *capabilities* of the technology, which have undoubtedly improved.
flyGuyOnTheSly · almost 5 years ago
Wages constantly increase due to economic prosperity (on average; I realize they have dwindled in the past 50-odd years), and every single year the majority of people have nothing in their savings accounts.

It's been that way since the dawn of time. [0]

This is a human economy problem, not a technological one, imho.

If you give a programmer a cookie, she is going to ask for a glass of milk.

[0] https://www.ancient.eu/article/1012/ancient-egyptian-taxes--the-cattle-count/
joshspankit · almost 5 years ago
With respect for the people who talk about the technologies involved, server-side vs client, bandwidth vs latency, etc, etc, etc:

I don't think any of that is *really* the core of it.

Humanity sent spaceships to the moon with way less power than a smart *watch*.

After watching tech evolve over my lifetime, the real issue feels like it's a psychological choice:

*When more power is available we fill it with either less efficient code, more layers of abstraction, or more features.*

(Besides outliers) this seems to be true no matter what the tech, and is especially obvious on the web.
ilaksh · almost 5 years ago
Here's an idea I posted on Reddit yesterday. Seemed like it was shadowbanned or just entirely ignored.

# Problem

Websites are bloated and slow. Sometimes we just want to be able to find information quickly without having to worry about the web page freezing up or accidentally downloading 50MB of random JavaScript. Etc. Note that I know that you can turn JavaScript off, but this is a more comprehensive idea.

# Idea

What if there was a network of websites that followed a protocol (basically limiting the content for performance), and you could be sure that if you stayed in that network, you would have a super fast browsing experience?

# FastWeb Protocol

* No JavaScript
* Single-file web page with CSS bundled
* No font downloads
* Maximum of 20KB of HTML per page
* Maximum of 20KB of images
* No more than 4 images
* Links to non-FastWeb pages or media must be marked with a special data attribute
* Total page transmission time < 200 ms
* Initial transmission start < 125 ms (the test has to be from a nearby server)
* (Controversial) No TLS (HTTPS for encryption), the reason being that the TLS handshake etc. takes a massive amount of time. I know this will be controversial because people are concerned about governments persecuting people who write dissenting opinions on the internet. My thought is that there is still quite a lot of information that in most cases is unlikely to be subject to this, and in countries or cases where that isn't the case, maybe another protocol (like MostlyFastWeb) could work. Or let's try to fix our horrible governments? But to me, if the primary focus is on a fast web browsing experience, requiring a whole bunch of expensive encryption handshaking etc. is too counterproductive.

# FastWeb Test

This is a simple crawler that accesses a domain or path and verifies that all pages therein follow the FastWeb Protocol. Then it records its results to a database that the FastWeb Extension can access.

# FastWeb Extension

Examines links (in a background thread) and marks those that are on domains/pages that have failed tests, or highlights ones that have passed tests.
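A toy version of the proposed "FastWeb Test" might look like the sketch below. It is a hypothetical illustration rather than part of the original proposal: it checks a single URL against a few of the proposed limits and assumes Node 18+ for the global fetch.

```js
// Toy checker for a few of the proposed "FastWeb" limits. Not a real crawler.
const LIMITS = {
  maxHtmlBytes: 20 * 1024, // max 20 KB of HTML
  maxTotalMs: 200,         // total page transmission time < 200 ms
};

async function checkFastWeb(url) {
  const started = Date.now();
  const res = await fetch(url);
  const html = await res.text();
  const elapsedMs = Date.now() - started;

  const problems = [];
  if (Buffer.byteLength(html, 'utf8') > LIMITS.maxHtmlBytes) {
    problems.push('HTML larger than 20 KB');
  }
  if (/<script\b/i.test(html)) {
    problems.push('contains JavaScript');
  }
  if (elapsedMs > LIMITS.maxTotalMs) {
    problems.push(`took ${elapsedMs} ms (> ${LIMITS.maxTotalMs} ms)`);
  }
  return { url, pass: problems.length === 0, problems };
}

// Example run against a placeholder URL.
checkFastWeb('https://example.com').then((result) => console.log(result));
```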
draaglom · almost 5 years ago
Original data here:

https://httparchive.org/reports/loading-speed?start=earliest&end=latest&view=list

The degree to which desktop load times are stable over 10 years is in itself interesting, and deserves more curiosity than just saying "JavaScript bad".

Plausible alternate hypotheses to consider for why there has been little improvement:

* Perhaps this is evidence of control theory at work, i.e. website operators are actively trading responsiveness for functionality and development speed, converging on a stable local maximum?
* Perhaps load times are primarily determined by something other than raw bandwidth (e.g. latency, which has not improved as much)?
* Perhaps this is more a measurement of the stability of the test environment than a fact about the wider world?

https://httparchive.org/faq#what-changes-have-been-made-to-the-test-environment-that-might-affect-the-data

If this list of changes is accurate, that last point is probably a significant factor: note that e.g. there's no mention of increased desktop bandwidth since 2013.
StopHammoTime · almost 5 years ago
While I don't disagree about this problem, most of the comments in this thread are lacking significant context and ignoring obvious problems with server-side rendering.

WordPress is arguably the best and most prominent example of SSR. It is horrible, and a vanilla install of WordPress generally returns content in 2-3 seconds.

While JavaScript adds bloat to the initial page load, generally it significantly reduces (or eliminates entirely) further page loads on a domain. For example, if I have a Vue app, it might take an extra second to load, but then it will never have to load any client-side assets again (technically).

The other thing is that most of these arguments are disingenuous when it comes to payloads and computing. It may take a significant amount of processing power to generate a JSON payload, but it most certainly will take an even larger amount to generate all of the scaffolding that goes with a normal HTML page. Redrawing the HTML on each page load also increases overall network traffic, duplicates load across every page on the service (see WordPress, again), and centralises front-end complexity in the backend.
seangrogg · almost 5 years ago
I do feel that this is a multifaceted problem.

On one hand, yes, end-user expectations have gone up. Back in the early naughts it was perfectly fine to wait ~8 seconds for an image to load block by block, kicking around the layout and content as it did so, and that was the status quo. It was fine. Nowadays if I don't get all icons and thumbnail-ready images near-immediately, I assume something is wrong at some layer.

Another factor is how things are going over the wire. It's easy to point to web developers and say "Why not use SSR everywhere?" while they'll point back and say "Client-side rendering lets the server scale better". As with most such complaints, the truth is often somewhere in the middle: SSR should be aggressively used for static content, but if you have a non-trivial computation that scales linearly it is worth considering offloading it to the client, especially if you're running commodity hardware.

Then there's the question of what we're doing. It very much used to be the case that most everything I did was over an unsecured connection and virtually all interactions resulted in a page navigation/refresh, never anywhere close to being below my perception. Nowadays, many actions are below my perception (or at least eagerly evaluated such that it seems they are), while non-trivial actions are often going through SSL, requests balanced across multiple app servers, tokens being authed, and eldritch horrors invoked by name in UTF-8, and somehow it all gets back to me around the same time as those page refreshes did back in the day.

This most certainly isn't to say that we don't have room for improvement: we most certainly do. But like most systems, as the capabilities improve so, seemingly, do the requirements and interactions that need to be supported over it.
sriku · almost 5 years ago
On a parallel front, it feels like something similar has happened with computers too. Laptops have gotten better (more cores, more RAM, SSDs) over the years, but still the most frequent interaction seems to be me waiting for the computer to respond, because every tiny application or website now consumes hundreds of GB of RAM... and memory pressure.
mbar84 · almost 5 years ago
The worst case of this for me was a completely static site which (sans images) loaded in under 100ms on my local machine. I inlined styles, set width and height on all images so there is no reflow when an image arrives, no chaining of CSS resources, deferred execution of a single JavaScript bundle, gzip, caching, the works. Admittedly it was a simple page, but hey, if I can't do it right there, where then?

Anyway, it all went to s**t as soon as another guy was tasked with adding share buttons (which I have never once in my life used and am not sure anybody else has ever used).

I won't optimize any pages over which I don't have complete control. Maybe if a project has a CI/CD setup that will catch any performance regressions, but other than that: too much effort, thankless anyway, and on any project with multiple frontend devs, the code is a commons and the typical tragedy is only a matter of time.
redoPop · almost 5 years ago
Load times in this article are attributed to httparchive.org, which gathers data using WebPageTest. [1] By default, WebPageTest emulates a 5Mbps down / 1Mbps up connection speed for all tests, to provide a more stable basis for performance comparisons. httparchive.org's load times therefore aren't affected by improvements in general network speed.

Am I missing something here? httparchive.org is not an appropriate source for the comparison this article makes. A large repository of RUM data would be needed for that comparison.

Counterintuitively, the stability of page load times in httparchive.org suggests that page performance hasn't improved or worsened enough to make much difference on a 5Mbps connection.

[1] https://httparchive.org/faq#how-is-the-data-gathered
mc32 · almost 5 years ago
Network latency. Bandwidth is the new MHz.
johannes1813 · almost 5 years ago
This reminds me a lot of a Freakonomics podcast episode ( https://freakonomics.com/2010/02/06/the-dangers-of-safety-full-transcript/ ) where they discuss different cases where increased safety measures just encouraged people to take more risk, resulting in the same or even increased numbers of accidents. A good example is that as football helmets have gotten more protective, players have started hitting harder and leading with their head more.

Devs have been given better baseline performance for free based on internet speeds, and adjust their thinking around writing software quickly vs. performantly accordingly, so we stay in one place from an overall performance standpoint.
greyhair · almost 5 years ago
Get over it. The modern web sucks. My 2003 ThinkPad T41 was never a raging powerhouse, but it was "usable". It is no longer usable. Nothing in the hardware has changed, but the software, the browsers, and the web at large have all changed drastically.

I do embedded system co-design. I write software for a living, but I work closely with the hardware teams, including (at my last employer) ASIC features. I have to flip hats between 'software' and 'hardware' all the time. And there are clearly times that I throw the software team under the bus.

"Hey, the last product ran in 128MB, but memory is cheap, so can we have 4GB this time? We want to split all the prior pthreads across containers!"

You think I am joking?

Browsers and web content have done the same.

"But look at all the shiny features!"

I don't want the shiny. I just want the content, and fast.
zoomablemind · almost 5 years ago
Another factor is the wider use of all sorts of CMSes (WordPress etc.) for content presentation, combined with often slower/underpowered shared hosting and script-heavy themes.

On some cheap hosters it may take a second just to start up the server instance, and that's before any of the outgoing requests are done!
bbulkow · almost 5 years ago
I am surprised not to see a product reason.

There is one.

Engagement falls off when there is a delay in experience past a certain point, usually considered around 100ms to 150ms, with an extreme drop-off at a second or higher. This has to do with human perception and can be measured through a/b analysis and similar.

Engagement does not get better if you go faster past that point. Past that point, you should have a richer experience, more things on the page, whatever you want. Or reduce cost by spending less on engineering; certainly don't spend more money on a 'feature' (speed) that doesn't return money.

Ad networks are run on deadline scheduling. Find the best ad in 50 ms, don't find any old ad as quickly as possible.

Haven't others involved with engagement analysis found the same?
rutherblood · almost 5 years ago
There's a really funny thing that happens with websites on phones. I try clicking something some time after the page loads, and just when I click it, bam! Something new loads up on the page, all the elements shift up or down, and what I just clicked on isn't in its place anymore, so I end up clicking on something completely different. All this happens in a split second, between the moment when I look and decide to click on something and before my finger actually does the tap. This is so, so common across the mobile web. Such a stupid little thing, but also highly annoying. Even if we could collectively solve this one little thing, I would consider our UI sensibility to have improved over the years.
mostlystatic · almost 5 years ago
HTTP Archive uses an emulated 3G connection to measure load times, so of course faster internet for real users won't make the reported load times go down.

Am I missing something?

https://httparchive.org/faq
mdavis6890 · almost 5 years ago
There are three main contributing factors:

- Users are not the customers, so there's little point in optimizing for their experience, except to the extent that it impacts the number of users your customers reach with ads.

- Users do not favor faster websites, so as long as you meet a minimum performance bar so they don't leave before the ads load, there's little to gain from optimizing the speed.

- For users that do care about load time, it's hard to know before visiting a page whether it's fast or not, and by that point the publisher has already been paid to show you the ads.

A helpful solution would be to show the load time in a hover-over above links, so that you can decide not to visit pages with long load times.
swyx · almost 5 years ago
Wirth's Law reborn: "Software is getting slower more rapidly than hardware is becoming faster."

https://en.wikipedia.org/wiki/Wirth%27s_law
Tomis02 · almost 5 years ago
The problem of course isn't the internet "speed" but latency. ISPs advertise hundreds of Mbps but conveniently forget to mention latency, average packet loss and other connection quality metrics.
XCSme · almost 5 years ago
Could it also be that server resources in general are lower, and that there are more clients per instance than before?

With all this virtualization, Docker containers and really cheap shared hosting plans, it feels like there are thousands of users served by a single core on a single server. Whenever I access a page that is cached by Cloudflare it usually loads really fast, even if it has a lot of JavaScript and media.

The problem with JavaScript usually occurs on low-end devices. On my powerful PC most of the loading time is spent waiting for the DNS or the server to send me the resources.
wuxb · almost 5 years ago
You will find gems just by checking the third-party cookies associated with those websites. I can see cookies from dexdem.net, doubleclick.net, and facebook.com on my chase.com account home page.
thewileyone · almost 5 years ago
I used to support a web-based enterprise system used worldwide. After business management escalated a complaint that the system was slow, I showed them, technically, that all the bells and whistles, doohickeys, widgets, etc. that they insisted on as features took up nearly 10MB of download, while Amazon, in comparison, took less than 1MB.

This was taken under advisement, and then we got new feature requests the next day that would just add more crap to the download size. But they never complained again.
rootsudo · almost 5 years ago
I remember reading a story about how engineers at YouTube were receiving more tickets/complaints about connectivity in Africa after reducing the page loading time.

They were confused, thinking that with the bandwidth limitations and previous statistics it didn't make sense; something about previously not having useful statistics.

Turns out that by reducing the page size, they finally enabled African users to load the page, but not the video.

I thought that was interesting, and it puts it right up there with that email-at-lightspeed copypasta.
climate-code · almost 5 years ago
This is an example of Jevons paradox: increases in the efficiency of use of a resource lead to increases in consumption of that resource. https://en.wikipedia.org/wiki/Jevons_paradox

I wrote about this here: https://adgefficiency.com/jevons-paradox/
ErikAugust · almost 5 years ago
Part of the reason I created Trim ( https://beta.trimread.com ) was simply the realization that I didn't want to load 2-7 MB of junk just to read an article.

Trim allows you to often chop off 50% - 99% of the page weight without using any in-browser JavaScript.

Example: https://beta.trimread.com/articles/31074
k__ · almost 5 years ago
I'm thinking about this problem often lately.

Just a few weeks ago I saw a size comparison of React and Preact pages. While Preact is touted as a super-slim React alternative, in real-life tests the problem was the big components and not the framework.

This could imply that we need to slim down code at a different level of the frontend stack. Maybe UI kits?

This could also imply that frontend devs simply don't know how to write concise code, or don't care as much as they say.
malwarebytess · almost 5 years ago
It's a hard pill to swallow that, 20 years after I started using the internet, websites perform worse on vastly superior hardware, especially on smartphones.
wilg · almost 5 years ago
Is it really that surprising that developers would rather spend a performance budget on adding new features instead of further improving performance?
ZainRiz · almost 5 years ago
This sounds a lot like the old argument about developers not being careful with how much memory/CPU they use. Engineers have been complaining about this since the 70s!

As hardware improves, developers realize that computing time is way cheaper than developer time.

Users have a certain latency that they accept. As long as the developer doesn't exceed that threshold, optimizing for dev time usually pays off.
HumblyTossed · almost 5 years ago
Because the focus is now on time to market and developer speed and anything else anyone can think of *except* the end-user experience.
b0rsuk · almost 5 years ago
The Website Obesity Crisis, a witty talk by Maciej Cegłowski (2015): https://www.webdirections.org/blog/the-website-obesity-crisis/

I would post the text version, but somehow the CDN is down.
KingOfCoders · almost 5 years ago
Same with most computers. I recently migrated from an iMac Pro to a Ryzen 3900X with Linux. Linux feels so much faster than the iMac, which, when I use it now, feels very sluggish for an 8-core Xeon 32GB machine. This just shows how computers are kept slow so you want to upgrade </foilhatoff>
EGreg · almost 5 years ago
Oh yeah, try this URL on your mobile phone:

https://yang2020.app/events

It has lazy loading of images, components, memoizing of the tabs, batching of requests, the works. Actually, it could be made a lot faster using browser caching.
jorblumesea · almost 5 years ago
The issue at its core is HTML. It was not designed for the complex, rich interfaces and interactivity that modern web users want. So JS is used, which is slow and needs to be loaded.

The heavy use of JS is basically just hacking around the core structure of the internet, the HTML/DOM problems.
sreekotay · almost 5 years ago
I think this misses the point. Latency becomes the dominant factor very, very quickly. Increases in connection speed only really help for large file downloads (and sometimes not even those) and increased user/device concurrency.

Edit: this also explains the REVERSE trend in mobile.
andirk · almost 5 years ago
"Progressive Enhancement" touts using the basic building blocks first and growing from there, but it hasn't been updated to explain when and where to offload the computations. Are there any best practices in circulation about balancing the workload?
darkhorse13 · almost 5 years ago
Part of the problem is that modern JS frameworks make it incredibly easy to mess up performance. I have seen mediocre devs (not bad, but not great) make a mess of what should be simple sites. Not blaming the frameworks, but it is still a problem to be addressed.
hpen · almost 5 years ago
I think most software is built with some level of tolerance for performance, and the stack / algorithms / features implemented are chosen to meet that tolerance. Basically, as hardware gets faster, it's seen as a way to make software cheaper.
wintorez · almost 5 years ago
I think we need to start differentiating between webpage speeds and web application speeds. Namely, a webpage would work if I disabled the JavaScript in my browser, but a web application would not. By this definition, web page speeds have improved a lot.
zzo38computer · almost 5 years ago
Try to avoid CSS, JavaScript, animations of any kind, and especially inline pictures and videos. This can reduce the time needed to load it greatly. (There are times where the things I listed are useful, but they should generally be avoided.)
fouc · almost 5 years ago
How dare you! You dang web developers developing with your fancy high-speed internets. Stop that.

Turn on network throttling, make it about 1Mbps (that's 125 KB/s, which is insanely fast!).

Also turn off asset caching. Always experience the REAL speed of your dang website!

Thanks :)
Konohamaru · almost 5 years ago
Niklaus's Law strikes again.
jungletime · almost 5 years ago
This would be a useful metric to use in ranking websites. The bloat of a web page seems to be inversely related to the value of the contents. The best websites have little in the way of graphics, but are information dense.
michaelcampbell · almost 5 years ago
What's the old adage? Software gets slower faster than hardware gets faster.
teabee89 · almost 5 years ago
There's a fancy name for this effect: Jevons paradox. https://en.wikipedia.org/wiki/Jevons_paradox
andy_ppp · almost 5 years ago
This is a phenomenon of all progress though right? The more roads you have the more cars you get, the faster a computer the slower the software. Andy’s law: the faster something gets the lazier humans can become.
MrStonedOne · almost 5 years ago
I blame TCP startup.

And the fact that Chrome hasn't added HTTP/3 in mainline, even as a flag, even though the version that their sites use has been enabled by default in mainline Chrome for years.
peetle · almost 5 years ago
Despite an increase in speed, people insist on adding more to the web.
ffggvv · almost 5 years ago
This isn't that surprising when you see they mean "throughput" and not "latency" when they talk about speed.

Webpages aren't super-large files, so it would depend more on the latency of the request, not the Mbps.
emptyparadise · almost 5 years ago
I wonder how much worse things will get when 5G becomes widespread.
calebm · almost 5 years ago
"A task will expand to consume all available resources."
dariosalvi78 · almost 5 years ago
Server side rendering: page loads super fast, cool. You click on a menu: wait for the new page to come. Click on another button: wait. Click, wait, click, wait, click, wait...
snow_mac · almost 5 years ago
Blame React, Angular or any of the other javascript libraries....
sirjaz · almost 5 years ago
The larger problem is that the web was never meant to be used the way we use it. We should be making cross-platform apps that use simple data feeds from remote sources
Jeaye · almost 5 years ago
The title should be "Despite an increase in Internet speed, webpage speeds have not improved", since webpage speeds have not acted in spite of internet speed.
azinman2 · almost 5 years ago
It’s much like the problem of induced demand in transportation: more capacity brings more traffic. More JavaScript. More ad networks. More images. More frameworks.
correct_horse · almost 5 years ago
Blinn's law says: as technology advances, rendering time remains constant. It's usually applied to computer (i.e. 3D) graphics, but it seems applicable here too.
sirjaz · almost 5 years ago
The problem is that we are trying to make websites do what they were never meant to. We should be making cross-platform apps that use simple data feeds.
bilater · almost 5 years ago
But could the argument be made that we are loading a shitload more content, so even though it feels slower you're getting a richer UX to work with?
jll29 · almost 5 years ago
As I told people in '93, it will all go downhill from here (when folks started using GIFs instead of "<hr>" tags...)
WalterBright · almost 5 years ago
Compiler speeds haven't improved much, either. The reason is simple: as computer speeds improved, we asked the compilers to do more.
JJMcJ · almost 5 years ago
There are still sites with very simple HTML/JS/CSS, and they load so fast it's almost like magic.
perfunctory · almost 5 years ago
Every time I see a headline like this one I have to think about two things - Jevons paradox and climate change.
MattGaiser · almost 5 years ago
We discuss the ever-growing size of web pages here regularly, so this is not all that surprising.
pmarreck · almost 5 years ago
In spite of massive increases in processing power, boot times have also not improved
sushshshsh · almost 5 years ago
Really? My text-only websites that I've written and hosted for myself are really snappy. I wouldn't know the feeling.

All I needed to do was spend a weekend scraping everything I needed so that I could self-host it and avoid all the ridiculous network/CPU/RAM bloat from browsing the "mainstream" web.
MangoCoffee, nearly 5 years ago
> webpage speeds have not improved

We keep abusing the web beyond what an HTML page was ever supposed to do.
dainiusse, nearly 5 years ago
The same as smartphones: CPUs improved, but the battery still only lasts a day.
tekkiweb, nearly 5 years ago
Try to reduce the size of visuals and media like images, videos, etc.
jozzy-james, nearly 5 years ago
Just for a little snark to defend client-side use: lemme know when you find a responsive bin-packing algorithm I can run server-side that doesn't choke out the DOM.
anticensor, nearly 5 years ago
Not news: Wirth's law has been known for a long time.
zwaps, nearly 5 years ago
To give a negative example I recently came across, check out https://www.scmp.com/

Now, it's pretty much a normal news website in that it shows a long list of articles, some pictures and then text.

I am running a standard laptop given to me by my company. My internet connection is pretty fast. Even with ads blocked on the entire website, that thing is slooow.

1. The pictures have an effect where they are rendered in increasing quality over time, supposedly so you see them earlier. This doesn't work, as they load much more slowly than normal HTML pictures, which load instantly given my internet connection.

2. The scrolling is more than sluggish. This is, in part, because the website only loads new content after you scroll down. So instead of a website that loads once and lets you just scroll, which would make TONS of sense for a site where you quickly want to check the headlines, you get this terrible experience where every scroll lags and induces a new "loading screen".

3. If you click on an article, it is loaded as a single-page app with an extra loading screen, which is somehow slow for some reason.

4. Once in the article, the scrolling disaster continues. But now even the text loads slowly while you scroll. How can you not just have the text load instantly? It's a news website. I want to read! I don't want to scroll, wait for the load, and then continue to read.

5. There is a second scroll bar beside my browser's scroll bar. Why? Who thought that was a good idea? The scroll bar's top button disappears behind the menu bar of the website. Why?

6. To use this website, one needs to scroll through the whole article to get it to load, then scroll back up, then read. Still, each time the menu bar changes size due to scrolling, my computer gets sluggish.

7. JavaScript pop-ups. Great.

8. Every time your mouse cursor accidentally moves over any element of the website, gigantic pop-ups show up out of nowhere and you can't continue reading. Annoying!

This website presents news. It's not better at it than earlier ones; it's worse. None of these things make the experience any better, and it gives no more benefit to reading news than older, plain HTML news websites. The reading experience is an unmitigated disaster for no reason whatsoever. Who greenlit this? Why?

If you are a web developer, you work in a business where the state of the art has notably gotten worse. A lot worse. At this stage, I would be seriously worried about the reputation of the profession if I were you. Sad!
cozzyd, nearly 5 years ago
Oh but they have... assuming you leave ublock enabled :)
nicbou, nearly 5 years ago
I spent a decent amount of time making my website how I want the web to be: fast, straightforward and unintrusive.

Making it fast was pretty easy. Remove anything that isn't directly helping the user, compress and cache everything else, and use HTTP/2 Server Push for essential resources. There were other optimisations, but that took me below the 500ms mark. At ~300ms, it starts feeling like clicking through an app - instant.

(It's https://allaboutberlin.com)

However, there's no point in serving slimy GDPR notices, newsletter prompts and SEO filler text at lightning speed. Those add a lot more friction than an extra 500ms of load time.
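For the "compress and cache everything else" part, here is a minimal sketch of aggressive cache headers in a small Flask app. Flask and the specific routes are assumptions chosen purely for illustration (the site above is static, and the same headers can come from any server or CDN); compression is usually left to the reverse proxy or CDN in front:

```python
# Minimal sketch, not the site's actual setup: long-lived caching for fingerprinted
# assets, revalidation for HTML. Assumes asset filenames change whenever content changes.
from flask import Flask, send_from_directory

app = Flask(__name__)

CACHEABLE = ("text/css", "application/javascript", "text/javascript", "image/webp")

@app.after_request
def add_cache_headers(response):
    if response.mimetype in CACHEABLE:
        # Safe to cache "forever" only because fingerprinted filenames never change meaning.
        response.headers["Cache-Control"] = "public, max-age=31536000, immutable"
    else:
        # HTML: always revalidate so new deploys show up immediately.
        response.headers["Cache-Control"] = "no-cache"
    return response

@app.route("/assets/<path:filename>")
def assets(filename):
    # conditional=True enables ETag / If-Modified-Since handling, so repeat visits get 304s.
    return send_from_directory("static", filename, conditional=True)
```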
rayrrr, nearly 5 years ago
Moore's Law + Parkinson's Law = Stasis
qazpot, nearly 5 years ago
Because the web sucks, while the Internet does not.
giantg2, nearly 5 years ago
Quite frankly, this is bigger than the server-vs-client comments I've seen. This is not some new phenomenon. The efficiency of code and architecture has declined over time for at least the last 30 years. As compute and storage costs have come down dramatically, the demand for labor has gone up. Who decides what's really important in a project? The business. That comes down to cost. If you can save money by using cheap hardware and cheap architecture, then save money by using your human resources for output vs. efficient code...
innocentoldguy, nearly 5 years ago
*cough* JavaScript *cough*
mfontani, nearly 5 years ago
Ads rule everything around me
scoot_718, nearly 5 years ago
Just block JavaScript.
jiggawatts, nearly 5 years ago
An observation I've made over decades is that people stop optimising when things get "good enough". That threshold is typically 200ms-2s, depending on the context. After that, developers or infrastructure people just stop bothering to fix issues, even if things are 1,000x slower than they "should be" or "could be".

Call this the *performance perceptibility threshold*, or PPT, for want of a better term.

There's a bunch of related problems and effects, but they all seem to come back to the PPT one way or another.

For example, languages like PHP, Ruby, and Python are all notoriously "slow", many times slower than the equivalent program written in C#, Java, or whatever. When they were first used to write websites with minimal logic, basically 90% HTML template with a few parameters pulled from a database, this was okay, because the click-to-render time was dominated by the slow internet and slow databases of the era. There was, a decade ago, an acceptable trade-off between developer-friendliness and performance. But inevitably, feature creep set in, and now enormous websites are written entirely in PHP, with 99% of the content dynamically generated. With rising internet speeds and dramatic performance improvements in databases, PHP "suddenly" became a huge performance pain point.

In that scenario, the root cause is the attitude that PHP/Python/Ruby are acceptable because lightweight code written in them falls under the PPT; that is a false economy. Eventually people will want a lot more out of them, they'll want heavyweight applications, and then having locked into the language is a mistake that cannot be unwound.

The most absurd example of this is probably Python, designed for quick and dirty lightweight scripting, now used for big data and machine learning, some of the most performance-intensive work currently done on computers.

Similarly, I see astonishingly wasteful network architectures, especially in the cloud. Wind the clock back just 10 years, and network latencies were vastly lower than mechanical drive random seek times. Practically "any" topology would work. Everything split into subnets. Routers everywhere. Firewalls between everything. Load balancers on top of load balancers. Applications broken up into tier after tier. The proxy talking to the app layer talking to a broker talking to a service talking to a database talking to remote storage. Nobody cared, because the sum fell under the PPT. I've seen apps with 1/2 second response times to a trivial query, but that's still "acceptable". Multiply that by the 5 or so round trips for TCP+TLS at every layer, because security must be end-to-end these days, and it's not uncommon to see apps starting to approach the 2 second mark.

These days, typical servers have anywhere between 20 and 400 Gbps NICs with latencies measured in tens of microseconds, yet apps respond 10,000x slower even when doing no processing. Why? Because everyone involved has their own little problem to solve, and nobody cares about the big picture as long as the threshold isn't exceeded. HTTPS was "easy" for a bunch of web devs moving into full-stack programming. Binary RPC is "hard" and they didn't bother, because for simple apps it makes "no difference", as both fall under the PPT.

Answer me this: how many HTTPS client programming libraries (not web browsers!) actually do TCP Fast Open *and* TLS 1.3 0-RTT handshakes? How many do that by default? Name a load balancer product that turns those features on by default. Name a reverse proxy that does that by default.

Nobody(1) turns on jumbo frames. Nobody does RDMA, or SR-IOV, or cut-through switching, or ECN, or whatever. Everybody has firewalls for no reason. I say no reason, because if all you're doing is some ACLs, your switches can almost certainly do that at wire rate with zero latency overhead.

It always comes back to the PPT. As long as a design, network, architecture, system, language, or product is under it, people stop caring. They stop caring even if 1,000x better performance is just a checkbox away. Even if it is something they have already paid for. Even if it's free.

1) I'm generalising, clearly. AWS, Azure, and GCP actually do most of that, but then they rate limit anyway, negating the benefits for all but the largest VM sizes.
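To feel the connection-setup cost being described, without touching TCP Fast Open or 0-RTT at all (the stock Python stack does not expose those easily), one can compare fresh connections against a reused one. The URL and request count below are placeholders, not anything from the comment:

```python
# Hedged illustration: per-request DNS + TCP + TLS setup versus reusing one connection.
# Results vary wildly by network and server; the endpoint is only an example.
import time
import requests

URL = "https://example.com/"   # placeholder endpoint
N = 20

# Fresh connection every time: pay the full handshake cost on each request.
start = time.perf_counter()
for _ in range(N):
    requests.get(URL, timeout=10)
cold = time.perf_counter() - start

# One Session: the underlying connection is kept alive and reused across requests.
start = time.perf_counter()
with requests.Session() as session:
    for _ in range(N):
        session.get(URL, timeout=10)
warm = time.perf_counter() - start

print(f"fresh connections: {cold:.2f}s   reused connection: {warm:.2f}s")
```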
pea, nearly 5 years ago
So this has gotten to the point for me where it is a big enough burning pain point that I would pay for a service which provided passably fast versions of the web-based tools I frequently have to use.

In my day-to-day as a startup founder I use these tools, where the latency of every operation makes them considerably less productive for me (this is on a 2016 i5 16GB MBP):

- Hubspot
- Gmail (with Apollo, Boomerang Calendar, and HubSpot extensions)
- Intercom (probably the worst culprit)
- Notion (love the app, but it really seems 10x slower than a desktop text editor should be, imo)
- Apollo
- LinkedIn
- GA
- Slack

The following tools I use (or have used) seem fast to me, to the point where I'd choose them over others:

- Basecamp
- GitHub (especially vs. BitBucket)
- Amplitude
- my CLI - not being facetious, but using something like https://github.com/go-jira/jira over actual Jira makes checking or creating an issue so quick that you don't need to context-switch from whatever else you were doing

I know it sounds spoiled, but when you're spending 10+ hours a day in these tools, the latency of every action _really_ adds up, and it also wears you down. You dread having to sign in to something you know is sluggish. Realistically I cannot use any of these tools with JS disabled; the best option is basically a fresh Firefox (which you can't use for a lot of Gmail extensions) with uBlock. I tried using Station/Stack, but they seemed just as sluggish as using your browser.

It's probably got a bunch of impossible technical hurdles, but I really want someone to build a tool which turns all of these into something like an old.reddit.com or Hacker News style experience, where things happen under 100ms. Maybe a stepping stone is a way to boot Electron in Gecko/Firefox (not sure what happened to Positron).

The nice thing about tools like Basecamp is that because loading a new page is so fucking fast, you can just move around different pages like you'd move around the different parts of one page in an SPA. Browsing to a new page seems to have this fixed cost in people's minds, but realistically it's often quicker than waiting for a super interactive component to pull in a bunch of data and render it. Their website is super fast, and I think their app is just a wrapper around the website, but it's still super snappy. It's exactly the experience I wish every tool I used had.

IMO there are different types of latency. I use some tools which aren't "fast" at everything but seem extremely quick and productive to use for some reason. For instance, IntelliJ/PyCharm/WebStorm is slow to boot - fine. But once you're in it, it's pretty quick to move around.

Can somebody please build something to solve this problem!
staycoolboy, nearly 5 years ago
Throughput vs latency.

If I want to download a 1GB file, I do a TLS handshake once and then send huge TCP packets. I can get almost 50MB/s from my AWS S3 bucket on my 1GB fiber, so it takes ~20 seconds.

However, if I split that 1GB up into 1,000,000 1KB files, I incur the handshake penalty 1,000,000 times, plus all of the OTHER overhead from nginx/apache and the file system or whatever is serving the request, so my bandwidth is significantly lower. I just did an SCP experiment, got an 8MB/s average download speed, and cancelled the download.

The problem here is that throughput is great with a few big files, but hasn't improved with lots of little files.
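A rough model of that comparison, with assumed numbers (20 ms RTT, about three setup round trips per connection, handshakes done serially as the worst case); it is a sketch, not a benchmark:

```python
# Back-of-the-envelope version of the comment above; every constant is an assumption.
RTT = 0.020                  # 20 ms round trip
HANDSHAKE_RTTS = 3           # TCP + TLS setup, roughly
BANDWIDTH = 1e9 / 8          # 1 Gbps link, in bytes per second
TOTAL_BYTES = 1 * 1024**3    # 1 GiB of payload either way

def transfer_time(n_files):
    """Serial worst case: every file pays its own connection setup."""
    setup = n_files * HANDSHAKE_RTTS * RTT
    payload = TOTAL_BYTES / BANDWIDTH
    return setup + payload

print(f"1 file:          {transfer_time(1):10.1f} s")
print(f"1,000,000 files: {transfer_time(1_000_000):10.1f} s")
# Roughly 8.7 s for one file versus about 60,000 s when every tiny file pays its own
# handshakes, which is why connection reuse, pipelining, and multiplexing matter.
```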