3 Problems AWS Needs to Address

136 points by aaronwhite about 13 years ago

20 comments

Smerity about 13 years ago
The ability for S3 and CloudFront to properly handle GZIP-compressed files would further encourage the use of S3 + CloudFront for static websites [1]. As a host, S3 + CloudFront offers arbitrary scalability, good performance across the globe, and pay-as-you-go pricing.

With GZIP compression, bandwidth drops, but more importantly load times can decrease significantly. "It takes several round trips between client and server before the two can communicate at the highest possible speed [and for broadband users] the number of round trips is the larger factor in determining the time required to load a web page" [2]. There was a graph depicting the non-linear impact file size increases have on load times, but I can't find it... =[

In the Google article on compression, a 175% increase in a page's size (the non-GZIP version of Facebook.com) results in a 414% increase in load time on DSL. Load time does not increase linearly with file size, which is why GZIP compression is so important for performant websites!

[1]: http://aws.typepad.com/aws/2011/02/host-your-static-website-on-amazon-s3.html
[2]: https://developers.google.com/speed/articles/use-compression
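Until S3 and CloudFront can negotiate compression themselves, the usual workaround is to upload a pre-gzipped copy of each asset with the Content-Encoding header set on the object, so the browser decompresses it transparently. A minimal sketch using boto3; the bucket name, key, and file path are illustrative assumptions:

```python
import gzip

import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

def upload_gzipped(bucket, key, path, content_type):
    """Gzip a local file and store it on S3 with headers that let
    browsers decompress it transparently."""
    with open(path, "rb") as f:
        body = gzip.compress(f.read())
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=body,
        ContentEncoding="gzip",    # the stored bytes are gzipped
        ContentType=content_type,  # e.g. "text/css"
        CacheControl="max-age=31536000",
    )

# Hypothetical usage:
# upload_gzipped("my-static-site", "css/site.css", "build/site.css", "text/css")
```

The obvious caveat, and the reason this is a workaround rather than a fix, is that the object is served gzipped to every client, including the rare ones that do not accept gzip.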
ComputerGuru about 13 years ago
It's a little-known fact that CloudFront supports GZip just fine, as long as you're pulling from a custom origin (like most people are).

You just need to configure your origin servers to serve GZip even to HTTP 1.0 clients (which is what CloudFront requests will come in as) and to set the "Vary: Accept-Encoding" header, to prevent users of old IE versions from having GZip'd content they don't support stuffed down their throats.

For example, this is my nginx configuration, which serves both GZip'd and non-GZip'd versions of the same objects via CloudFront. The second and third lines are the most important for correct AWS CloudFront GZip distribution:

```nginx
gzip on;
gzip_vary on;            # send "Vary: Accept-Encoding"
gzip_http_version 1.0;   # compress even for HTTP/1.0 requests (CloudFront's origin fetches)
gzip_comp_level 4;
gzip_proxied any;
gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/png;
gzip_disable "MSIE [1-6]\.";
```

Note that "image/png" is only in there because Google PageSpeed is very stupid and marks not GZipping PNG files as a "bug", because I can save "up to 1%" by employing GZip on PNGs.
RoboTeddy about 13 years ago
Missing support for Cross-Origin Resource Sharing (CORS) headers is a big problem for some applications. For example, drawing images from S3/CloudFront onto a canvas will unavoidably taint your canvas (https://developer.mozilla.org/en/CORS_Enabled_Image).

Right now I'm proxying image requests to S3 through nginx, which is a terrible workaround.

The AWS forums have a topic on the issue, started in 2009 (~200 replies so far...): https://forums.aws.amazon.com/thread.jspa?threadID=34281
jorgeortiz85 about 13 years ago
> S3 has eleven nines of durability.

The author will find, to his dismay, that durability is not the same thing as availability.
flyt about 13 years ago
This is less "AWS" and more "S3/CloudFront".

There are many other product features that EC2/R53/ELB/etc. could use, but calling this AWS is a little too broad.
ww520 about 13 years ago
Actually, I would like to see S3 support custom SSL certificates. That would be an awesome addition to make S3 a great static page server.
mistercow about 13 years ago
> You could break your CSS into multiple files, but this is in direct opposition to one of the tenets of website optimization: minimize the number of HTTP requests.

Am I missing something here? Your fonts were going to be in a separate file anyway, right?
akoumjian about 13 years ago
I tweeted the same thing to that account and got no response. I'm glad you did. The Access-Control-Allow-Origin header has been a heavily requested feature since 2009: https://forums.aws.amazon.com/thread.jspa?threadID=34281&start=175&tstart=0

One example of how fundamental this is: you cannot currently perform a direct AJAX upload to an S3 bucket from a web application hosted on an EC2 instance.

There is a postMessage hack that will work with small files, and of course you can use a proxy, but you'd think it would be a common scenario to want to upload files directly to S3.
diminish about 13 years ago
"...Someone monitoring the @awscloud account opened a trouble ticket to my email address asking for clarification." Support through Twitter is going mainstream. It is like praying out loud and getting a response.
yummybear about 13 years ago
The lack of CORS support has been known to Amazon for years, but they have still chosen not to fix it. There's a long-running thread on their support forums somewhere, where they start by saying they'll look into it. I believe that was years ago.
melvinmt about 13 years ago
CloudFront actually does support gzip encoding if you use a custom origin, just not with S3.
23david about 13 years ago
These issues have been known to Amazon and to serious AWS users for a long time. Why do you expect that this time they will actually do something? It will take more than a simple Twitter response from the AWS team to convince me that they will actually make changes to fix the situation...
bsimpson about 13 years ago
We've been hosting our gzipped JavaScript via S3/CloudFront, and have had no problems serving it to IE7: http://libraries.netshelter.net/javascript/netshelter/library/1.4.2.min.jgz
atechie about 13 years ago
Also, SQS should accept UTF-8 in the message body rather than a restricted set of characters.
spullara about 13 years ago
You can use S3/CloudFront for compressed assets as long as your main page is dynamic: it can generate different URLs for assets based on whether the browser supports gzip or not. See bagcheck.com for an example.
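A minimal sketch of that idea; the CloudFront domain, the asset paths, and the ".gz" naming convention are illustrative assumptions, not bagcheck's actual implementation:

```python
def asset_url(path: str, accept_encoding: str = "") -> str:
    """Pick the gzipped or plain variant of an asset URL based on the
    client's Accept-Encoding header.

    Assumes both "app.css" and "app.css.gz" were uploaded, with the
    .gz copy stored with Content-Encoding: gzip (as in the upload
    sketch earlier in the thread).
    """
    base = "https://d1234abcd.cloudfront.net/assets/"
    if "gzip" in accept_encoding.lower():
        return base + path + ".gz"
    return base + path

# Hypothetical usage inside a request handler:
# url = asset_url("app.css", request.headers.get("Accept-Encoding", ""))
```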
jwr about 13 years ago
These are valid points and the same ones I've encountered when using S3 and CloudFront. I am actually amazed that gzip encoding *still* isn't supported — people have been complaining about this for years.
driverdan about 13 years ago
On a related note, do not serve files on the web directly from S3; use CloudFront. S3's performance is highly variable and latency tends to be high. Serving files from S3 rather than CloudFront is foolish and will slow your site down.
ceejayoz about 13 years ago
I'd like to see Micro instances available in Virtual Private Cloud.

In the forums, an Amazon rep promised it'd be available within 2011. No luck, though.
malandrew about 13 years ago
They also need WebSocket support over ELBs.
hypervisor about 13 years ago
Only three problems?