Thank you for listening! The original plan was insane. The new one is sane. As I pointed out here (https://twitter.com/dvassallo/status/1125549694778691584), thousands of printed books had references to V1 S3 URLs. Breaking them would have been a huge loss. Thank you!
Still doesn't help with domain censorship. This was discussed in depth in the other thread from yesterday, but TL;DR: it's a lot harder to block https://s3.amazonaws.com/tiananmen-square-facts than https://tiananmen-square-facts.s3.amazonaws.com, because the DNS lookup happens in plaintext before HTTPS encryption kicks in, and with virtual-hosted URLs the bucket name is part of the hostname being looked up.
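If it helps, here's roughly what a network observer sees in each case (a plain-Python sketch; the bucket name is just the example from above):

```python
import socket

# Path-style: the DNS query a censor observes is for the shared
# endpoint only; the bucket name travels inside the encrypted request.
socket.getaddrinfo("s3.amazonaws.com", 443)

# Virtual-hosted style: the bucket name is a DNS label, so it shows up
# in the plaintext lookup (and again in the TLS SNI field), making
# per-bucket blocking straightforward.
socket.getaddrinfo("tiananmen-square-facts.s3.amazonaws.com", 443)
```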
This is interesting for a few reasons. IMHO, the original deprecation plan was reasonable. Not generous, but reasonable. Especially compared to what other cloud providers (e.g. Google Cloud) have done. It did seem like a departure from their normal practice of obsessively supporting old stuff for as long as possible, but it really wasn't too bad.

Responding to feedback, publicly, and explaining what they were trying to do and why they needed to do it, is incredibly refreshing.

This seems like a big PR win for AWS. I'm left trusting and liking them more, not less.
> Bucket Names with Dots – It is important to note that bucket names with “.” characters are perfectly valid for website hosting and other use cases. However, there are some known issues with TLS and with SSL certificates. We are hard at work on a plan to support virtual-host requests to these buckets, and will share the details well ahead of September 30, 2020.

I’m mystified how they’re planning on doing this. Anybody care to speculate?
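My understanding of the known issue: the wildcard certificate for *.s3.amazonaws.com matches exactly one DNS label, so a bucket like my.bucket fails hostname verification when addressed as my.bucket.s3.amazonaws.com. A minimal sketch that reproduces the failure (my.bucket is a made-up bucket name):

```python
import socket
import ssl

# "my.bucket" is hypothetical. A wildcard cert matches a single label,
# so a two-label bucket name fails the hostname check in the handshake.
ctx = ssl.create_default_context()
try:
    with socket.create_connection(("s3.amazonaws.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="my.bucket.s3.amazonaws.com"):
            pass
except ssl.SSLCertVerificationError as err:
    print("hostname mismatch:", err)
```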
For anyone still confused as to why AWS dominates the cloud market: it's because they're willing to grandfather features with a reasonable sunset horizon.
Malloc for the internet: "We launched S3 in early 2006. Jeff Bezos’ original spec for S3 was very succinct – he wanted malloc (a key memory allocation function for C programs) for the Internet. From that starting point, S3 has grown to the point where it now stores many trillions of objects and processes millions of requests per second for them. Over the intervening 13 years, we have added many new storage options, features, and security controls to S3."
It's nice to see that, instead of deprecation, support for the old paths will continue for all buckets created on or before the cut-off date of Sept 30, 2020.

So if you don't want to change, you can continue using the old paths. It just might limit access to some new features coming later that depend on the virtual-host subdomains.
This is a great step forward. Particularly changing the rules a little so that old buckets won’t break after a certain date.

Thank you for taking the time to write this up, Jeff.
Okay, probably a dumb question, but why can't they just redirect automatically from the path-style URLs to the virtual-hosted ones under the hood? People would get both options up front and could work with whichever they like.
"In this example, jbarr-public and jeffbarr-public are bucket names; /images/ritchie_and_thompson_pdp11.jpeg and /jeffbarr-public/classic_amazon_door_desk.png are object keys."<p>I think this should be:<p>"In this example, jbarr-public and jeffbarr-public are bucket names; /images/ritchie_and_thompson_pdp11.jpeg and /classic_amazon_door_desk.png are object keys."
Kind of tangential, but is Bezos a programmer type? I thought he came from banking or the big 4. I’m curious if the “malloc for the internet” bit is verbatim.
It seems to me that adding a 301 redirect from the old URLs to the new ones would not unreasonably stress the resources of AWS?
It seems perfectly reasonable to update the library access, but breaking old URLs seems unnecessary.
They could even add a second of latency to incentivise people who can update their links to do so.
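For illustration, such a redirect is only a few lines in front of the existing service; a hypothetical standalone sketch (this is not how S3's front end actually works):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class PathStyleRedirect(BaseHTTPRequestHandler):
    """Hypothetical: answer path-style requests with a 301 to the
    equivalent virtual-hosted URL."""

    def do_GET(self):
        # In a path-style URL the first path segment is the bucket.
        bucket, _, key = self.path.lstrip("/").partition("/")
        self.send_response(301)
        self.send_header("Location", f"https://{bucket}.s3.amazonaws.com/{key}")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), PathStyleRedirect).serve_forever()
```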
Pre-signed URLs still come back from the S3 SDK in the V1 path style. I'm assuming this either changes at some point, or those URLs will continue to work?
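If you're on the Python SDK (boto3), the addressing style is already configurable, so you can get virtual-hosted pre-signed URLs today; a sketch using the example bucket from the post:

```python
import boto3
from botocore.config import Config

# Ask botocore for virtual-hosted addressing instead of the legacy
# path style; the pre-signed URL then uses bucket-name.s3.amazonaws.com.
s3 = boto3.client("s3", config=Config(s3={"addressing_style": "virtual"}))
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "jbarr-public", "Key": "images/ritchie_and_thompson_pdp11.jpeg"},
    ExpiresIn=3600,
)
print(url)  # https://jbarr-public.s3.amazonaws.com/images/...?X-Amz-...
```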
I still don't get why there was such an uproar about this: Amazon should just issue a "301 Moved Permanently" and be done with it.

If your app for some arcane reason doesn't understand an HTTP status code that's been around for 20 years... your code is bad and you should feel bad.
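For what it's worth, mainstream HTTP clients follow a 301 without the application noticing. A quick Python check (github.com is used here only because it answers plain http:// with a redirect):

```python
import requests

# requests follows redirects by default; r.history records the 301
# hop and r.url is the final address.
r = requests.get("http://github.com")
print([resp.status_code for resp in r.history], "->", r.url)
```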