One important implication is that collateral freedom techniques [1] using Amazon S3 will no longer work.<p>To put it simply, right now I could put some stuff the Russian or Chinese government doesn't like (maybe an entire website) on S3 and hand out a direct link to https://s3.amazonaws.com/mywebsite/index.html. Because it's HTTPS, there is no way a man in the middle knows what people read on s3.amazonaws.com. With this change, dictators see my domain name and can block requests to it right away.<p>I don't know if they did it on purpose or just forgot about those who are less fortunate in regards to access to information, but this is a sad development.<p>This censorship circumvention technique is actively used in the wild, and losing Amazon is no good.<p>[1] <a href="https://en.wikipedia.org/wiki/Collateral_freedom" rel="nofollow">https://en.wikipedia.org/wiki/Collateral_freedom</a>
What kind of company deprecates a URL format that's still recommended by the Object URL in the S3 Management Console?<p><a href="https://www.dropbox.com/s/zzr3r1nvmx6ekct/Screenshot%202019-05-03%2019.32.48.png?dl=0" rel="nofollow">https://www.dropbox.com/s/zzr3r1nvmx6ekct/Screenshot%202019-...</a><p>There are so, SO many teams that use S3 for static assets, make the bucket public, and copy that Object URL. We've done this at my company, and I've seen these types of links in many of our partners' CSS files. These links may also be stored deep in databases, or even embedded in Markdown in databases.<p>This will quite literally cause a Y2K-level event, and since all that traffic will still head to S3's servers, it won't even solve any of their routing problems.<p>Set it as a policy for new buckets if you must, change the Object URL output, and add a giant disclaimer.<p>But don't. Freaking. Break. The. Web.
Amazon explicitly recommends naming buckets like "example.com" and "www.example.com": <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html" rel="nofollow">https://docs.aws.amazon.com/AmazonS3/latest/dev/website-host...</a><p>Now, it seems, this is a big problem. V2 resource requests will look like this: <a href="https://example.com.s3.amazonaws.com/.." rel="nofollow">https://example.com.s3.amazonaws.com/..</a>. or <a href="https://www.example.com.s3.amazonaws.com/.." rel="nofollow">https://www.example.com.s3.amazonaws.com/..</a>.<p>And, of course, this ruins HTTPS. Amazon has you covered for *.s3.amazonaws.com, but not for *.*.s3.amazonaws.com or even *.*.*.s3.amazonaws... and so on.<p>So... I guess I have to rename/move all my buckets now? Ugh.
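To illustrate the wildcard-certificate point above: a cert for *.s3.amazonaws.com covers exactly one extra DNS label, so any bucket name containing a dot falls outside it. A minimal sketch of that matching rule (simplified from the RFC 6125 rules; the function name is my own):

```python
def matches_wildcard(hostname: str, pattern: str) -> bool:
    """Simplified RFC 6125 check: '*' matches exactly one DNS label."""
    host_labels = hostname.split(".")
    pat_labels = pattern.split(".")
    if len(host_labels) != len(pat_labels):
        return False  # a wildcard never spans multiple labels
    return all(p == "*" or p == h for h, p in zip(host_labels, pat_labels))

# A plain bucket name fits under the wildcard cert:
print(matches_wildcard("mybucket.s3.amazonaws.com", "*.s3.amazonaws.com"))         # True
# A dotted bucket name like "www.example.com" does not:
print(matches_wildcard("www.example.com.s3.amazonaws.com", "*.s3.amazonaws.com"))  # False
```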
Does anyone have insight on why they're making this change? All they say in this post is "In our effort to continuously improve customer experience". From my point of view as a customer, I don't really see an experiential difference between a subdomain style and a path style - one's a ".", the other's a "/" - but I imagine there's a good reason for the change.
Does the "you are no longer logged in" screen not infuriate anyone besides me? There doesn't seem to be any purpose to it just redirecting you to the landing page when you were trying to access a forum post that doesn't even require you to be logged in.<p>Absolutely mind-boggling that, with as much as they pay people, they do something so stupid and haven't changed it after so long.
This is going to break so many legacy codebases in ways I can't even imagine.<p>Edit: Could they have found a better place to announce this than a forum post?
I wonder how they’ll handle capitalized bucket names. This seems like it will break that.<p>S3 has been around a long time, and they made some decisions early on that they realised wouldn’t scale, so they reversed them. This v1 vs v2 URL thing is one of them.<p>But another was letting you have “BucketName” and “bucketname” as two distinct buckets. You can’t name them like that today, but you could at first, and they still work (and are in conflict under v2 naming).<p>Amazon’s own docs explain that you still need to use the old v1 scheme for capitalized names, as well as names containing certain special characters.<p>It’d be a shame if they just tossed all those old buckets in the trash by leaving them inaccessible.<p>All in, this seems like another silly, unnecessary deprecation of an API that was working perfectly well. A trend I’m noticing more often these days.<p>Shame.
One of the weird peculiarities of path-style API requests was that it meant CORS headers meant nothing for any bucket pretty much. I wrote a post about this a bit ago [0].<p>I guess after this change, the cors configuration will finally do something!<p>On the flip side, anyone who wants to list buckets entirely from the client-side javascript sdk won't be able to anymore unless Amazon also modifies cors headers on the API endpoint further after disabling path-style requests.<p>[0]: <a href="https://euank.com/2018/11/12/s3-cors-pfffff.html" rel="nofollow">https://euank.com/2018/11/12/s3-cors-pfffff.html</a>
A similar removal is coming in just 2 months for V2 signatures: <a href="https://forums.aws.amazon.com/ann.jspa?annID=5816" rel="nofollow">https://forums.aws.amazon.com/ann.jspa?annID=5816</a><p>This could be just as disruptive.<p>Difficult to say whether they will actually follow through, as the only mention of this date is the random forum post I linked.
Amazon is proud that they never break backwards compatibility like this. Quotes like “the container you are running on Fargate will keep running 10 years from now.”<p>Something weird is going on if they don’t keep path-style domains working for existing buckets.
Is there a deprecation announcement that does not include the phrase "In our effort to continuously improve customer experience"?<p>Edit: autotypo
I was already planning a move to GCP, but this certainly helps. Now that cloud is beating retail in earnings, the ‘optimizations’ come along with it. That and BigQuery is an amazing tool.<p>It’s not like I’m super outraged that they would change their API; the reasoning seems sound. It’s just that if I have to touch S3 paths everywhere, I may as well move them elsewhere to gain some synergies with GCP services. I would think twice if I were heavily invested in IAM roles and S3 Lambda triggers, but that isn’t the case.
This is most likely to help mitigate the domain being abused for browser security due to the same-origin policy. This is very common when dealing with malware, phishing, and errant JS files.
`In our effort to continuously improve customer experience`: what's the actual driver here? I don't see how going from two options to one, and forcing you to change if you're on the wrong one, improves my experience.
There are millions of results for "<a href="https://s3.amazonaws.com/" rel="nofollow">https://s3.amazonaws.com/</a>" on GitHub: <a href="http://bit.ly/2GUVjDi" rel="nofollow">http://bit.ly/2GUVjDi</a>
I see a problem when using the S3 library against other services that support S3 but only offer some kind of path-style access, like MinIO or Ceph with no subdomains enabled. It will break once their Java API removes the old code.
The AWS API is an inconsistent mess. If you don't believe me, try writing a script to tag resources. Every resource type requires a different way to identify it, a different way to pass the tags, etc. You're pretty much required to write different code to handle each resource type.
Hm. I had a local testing setup using an S3 stand-in service from localstack and a Docker Compose cluster, and path-style addressing made that pretty easy to set up. Anyone else in that "bucket"? Suggestions on the best workaround?
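One workaround for the localstack setup above, assuming boto3 and localstack's default edge port (4566; adjust if yours differs), is to pin path-style addressing on the local client only and leave real AWS clients on the default virtual-hosted style:

```python
import boto3
from botocore.client import Config

# Path-style addressing only for the local stand-in; the deprecation
# applies to the real s3.amazonaws.com endpoints, not custom ones.
s3_local = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",  # localstack edge port (assumption)
    config=Config(s3={"addressing_style": "path"}),
    aws_access_key_id="test",              # localstack accepts dummy creds
    aws_secret_access_key="test",
    region_name="us-east-1",
)
```

This keeps the Compose cluster working without per-bucket DNS entries, since every request goes to the one localstack hostname.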
Commercial platform breaks things people have built on it for "the sake of continuously improving customer experience. "<p>Also: see photos of your favorite celebrity walking their dog and other news at 11.
<a href="https://github.com/search?q=%22https%3A%2F%2Fs3.amazonaws.com%2F%22&type=Code" rel="nofollow">https://github.com/search?q=%22https%3A%2F%2Fs3.amazonaws.co...</a><p>Over a million results (+250k http). This is going to be painful.
TL;DR<p>Migrate<p>from: s3.amazonaws.com/<bucketname>/key<p>to: <bucketname>.s3.amazonaws.com/key<p>no later than: September 30th, 2020
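For anyone scripting that migration, a minimal sketch in Python (handles only the global s3.amazonaws.com endpoint; regional endpoints like s3.us-west-2.amazonaws.com would need the same treatment):

```python
from urllib.parse import urlsplit, urlunsplit

def to_virtual_hosted(url: str) -> str:
    """Rewrite a path-style S3 URL to the virtual-hosted style."""
    parts = urlsplit(url)
    if parts.hostname != "s3.amazonaws.com":
        return url  # already virtual-hosted, or not S3 at all
    # First path segment is the bucket; the rest is the object key.
    bucket, _, key = parts.path.lstrip("/").partition("/")
    return urlunsplit((parts.scheme, f"{bucket}.s3.amazonaws.com",
                       "/" + key, parts.query, parts.fragment))

print(to_virtual_hosted("https://s3.amazonaws.com/mybucket/path/to/key.txt"))
# https://mybucket.s3.amazonaws.com/path/to/key.txt
```

Note this naive rewrite will produce broken HTTPS hostnames for buckets with dots in their names, so those need to be flagged rather than rewritten.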
For other folks looking for announcement feeds, see <a href="https://forums.aws.amazon.com/rss.jspa" rel="nofollow">https://forums.aws.amazon.com/rss.jspa</a> - announcements are the asterisks.
How does this impact CloudFront origin domain names? I have an S3 bucket as a CF origin, and the format the AWS CF Console auto-completes to is:<p><bucket>.s3.amazonaws.com<p>Do I need to change my origin to Origin domain name: s3.amazonaws.com, Origin path: <bucket>?<p>This is a sneaky one that will bite lots of folks, as it is NOT clear.
"In our effort to continuously improve customer experience, the path-style naming convention is being retired in favor of virtual-hosted style request format. Customers should update their applications"<p>How does forcing customers to rewrite their code to conform to this change improve customer experience?
IMO, this is an improvement - it makes it clear that the bucket is global and public, whereas with the path you could believe that it was only visible when logged into your account.<p>It also helps people understand why the bucket name is restricted in its naming.
Always confused me how they had two different ways of retrieving the same object. Glad that they're sticking with the subdomain option. It sucks to go back and check for old URLs, though. This change might break a good chunk of the web.
One way to do this without breaking existing applications would be to charge more for path-style requests for a while, then deprecate once enough people have moved away, so that fewer people are outraged by the change.
> <i>In our effort to continuously improve customer experience,</i> [feature x] <i>is being retired</i><p>In this case, the most highly improved experience I can think of would be that of sundry nefarious entities monitoring internet traffic.
Does anyone know if this will affect uploads? We are getting an upload URL using s3.createPresignedPost, and this returns (at least currently) a path-style URL...
The title is misleading. Path-style requests like "/foo/bar/file.ext" are still supported.<p>What changes is that the bucket name must be in the hostname.
this looks to be largely resolved: <a href="https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/" rel="nofollow">https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-...</a>
Does that mean people still have tons of public-by-mistake S3 buckets because of their clumsy UI, and they just gave up and are sweeping what's left under the rug?
I'm kind of shocked at some of the responses here... everything from outrage, to expressing dismay at how many things could break, to how hard this is to fix, to accusing Amazon of all kinds of nefarious things.<p>How hard is it for 99% of the developers and technical leaders here to search your codebase for s3.amazonaws.com and update your links in the next <i>18 months</i>?