I find this very light on the actual "diving deep" part promised in the title. There's a lot of self-congratulatory chest-thumping, not a lot of technical detail. Werner of course doesn't owe us any explanation whatsoever; I just don't find this particularly deep.
Recent S3 consistency improvements are welcome, but S3 still falls behind Google GCS until it supports conditional PUTs.<p>GCS allows an object to be replaced conditionally via the `x-goog-if-generation-match` header, which can be quite useful.
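The semantics of that header are essentially compare-and-swap on an object's generation number. Here's a toy model of that behavior (the classes and names are mine, not the real GCS API; in the real XML API the precondition travels as the `x-goog-if-generation-match` request header, and a mismatch fails with HTTP 412):

```python
class PreconditionFailed(Exception):
    """Models the HTTP 412 a GCS-style store returns on a generation mismatch."""

class Bucket:
    """Toy object store where every write bumps a per-object generation."""

    def __init__(self):
        self._objects = {}  # name -> (generation, data); absent objects act as gen 0

    def get(self, name):
        return self._objects[name]

    def put(self, name, data, if_generation_match=None):
        current_gen = self._objects.get(name, (0, None))[0]
        # The conditional part: only replace if the caller saw the latest generation.
        # Passing 0 means "only create if the object does not exist yet".
        if if_generation_match is not None and if_generation_match != current_gen:
            raise PreconditionFailed(
                f"expected generation {if_generation_match}, found {current_gen}"
            )
        new_gen = current_gen + 1
        self._objects[name] = (new_gen, data)
        return new_gen

bucket = Bucket()
gen = bucket.put("config.json", b"v1", if_generation_match=0)  # create-if-absent
bucket.put("config.json", b"v2", if_generation_match=gen)      # succeeds
try:
    bucket.put("config.json", b"v3", if_generation_match=gen)  # stale generation
except PreconditionFailed:
    pass  # lost the race: re-read, then retry with the fresh generation
```

This is what makes lost-update-free read-modify-write loops possible on GCS, and what the parent is saying S3 lacked.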
Here's what I take away from this post:<p>> We built automation that can respond rapidly to load concentration and individual server failure. Because the consistency witness tracks minimal state and only in-memory, we are able to replace them quickly without waiting for lengthy state transfers.<p>So this means that the "system" that contains the witness(es) is a single point of truth and failure (otherwise we would lose consistency again), but because it does not have to store a lot of information, it can be kept in memory and can be exchanged quickly in case of failure.<p>Or in other words: minimize the amount of information that is strictly necessary to keep the system consistent, then make that part its own in-memory, quickly failover-able system, which then sets the bar for the HA component.<p>Is that what they did?
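If that reading is right, I think the reason a replacement witness needs no state transfer is that it can always answer conservatively. A minimal sketch of that idea (entirely my guess at the shape, all names hypothetical, not S3 internals):

```python
# Sketch: a witness only answers "might this cached copy be stale?". A fresh
# witness that knows nothing can answer "yes, revalidate" for every key and
# still be correct -- just briefly less efficient -- which is why failover
# can skip state transfer entirely.

class Witness:
    def __init__(self):
        self.versions = {}  # key -> latest known version, held only in memory

    def record_write(self, key, version):
        self.versions[key] = version

    def cached_copy_ok(self, key, cached_version):
        known = self.versions.get(key)
        if known is None:
            # Unknown key (e.g. right after failover): force revalidation.
            # Conservative, therefore safe.
            return False
        return cached_version >= known

w = Witness()
w.record_write("obj", 7)
assert w.cached_copy_ok("obj", 7)        # cache is fresh
assert not w.cached_copy_ok("obj", 6)    # cache is stale

# "Failover": a brand-new witness with empty state is still correct; every
# answer is just the conservative "go revalidate against the real store".
replacement = Witness()
assert not replacement.cached_copy_ok("obj", 7)
```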
Anyone else still seeing consistency problems with S3 & EMR? The latest AWS re:Invent made it sound like this would be fixed, but as of yesterday I was still using EMRFS to correct S3 consistency problems.
AWS fixed S3 consistency in December 2020:<p><a href="https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-s3-now-delivers-strong-read-after-write-consistency-automatically-for-all-applications/" rel="nofollow">https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-s3...</a>
So it is both available and consistent (though perhaps only in a read-your-own-writes way?). What about resilience to network partitions, per the CAP theorem? Did they build a super-reliable global network so that this is never a real issue?
Can someone elaborate on this Witness system OP talks about?<p>I'm picturing a replicated, in-memory KV store where the value is some sort of version or timestamp representing the last time the object was modified. Cached reads can verify they are fresh by checking against this version/timestamp, which is acceptable because it's a network+RAM read. Is this somewhat accurate?
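That mental model can be sketched as a version check in front of a cache. To be clear, this is my guess at the shape the parent describes, not actual S3 internals, and every name below is made up:

```python
# Guess at the read path: a cache entry is served only if its version matches
# what the in-memory witness reports as latest; otherwise fall through to the
# authoritative (slower) store and refresh the cache.

class Store:
    def __init__(self):
        self.data = {}      # authoritative store: key -> (version, value)
        self.witness = {}   # in-memory witness: key -> latest version
        self.cache = {}     # read cache: key -> (version, value)

    def write(self, key, value):
        version = self.witness.get(key, 0) + 1
        self.data[key] = (version, value)
        self.witness[key] = version  # witness updated as part of the write path

    def read(self, key):
        latest = self.witness.get(key)
        cached = self.cache.get(key)
        if cached is not None and cached[0] == latest:
            return cached[1]  # fast path: the cheap freshness check passed
        version, value = self.data[key]  # slow path: authoritative read
        self.cache[key] = (version, value)
        return value

s = Store()
s.write("k", "old")
assert s.read("k") == "old"   # populates the cache
s.write("k", "new")           # witness version bumps; cache entry is now stale
assert s.read("k") == "new"   # staleness detected, authoritative read wins
```

The point being: the cache never serves a stale value as long as the witness is consulted on every read, and the witness itself stays tiny because it stores versions, not data.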
Would love a dive (hopefully deep) into IAM; the innards of that must be some impressive wizardry. Surprising there isn't more written about the technical workings of these foundational AWS products.
I'm confused... did you fix the caching issue in S3 or not?<p>The article seems to explain why there is a caching issue, and that's understandable, but it also reads as if you wanted to fix it. I would expect that to be the headline, in bold font, if it was actually fixed.<p>For those curious, the problem is that S3 is "eventually consistent", which is normally not a problem. But consider a scenario where you store a config file on S3, update that config file, and redeploy your app. The way things are today you can (and yes, sometimes do) get a cached version. So now there would be uncertainty about what was actually released. Even worse, some of your redeployed apps could get the new config and others the old config.<p>Personally, I would be happy if there was simply an extra fee for cache-busting S3 objects on demand. That would prevent folks from abusing it but also give the option when needed.
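The split-config hazard above is easy to simulate with a store whose replicas apply writes at different times. This is purely illustrative (nothing S3-specific, all names invented), just to show the inconsistency window:

```python
# Simulates the deploy hazard: two app instances read the same key from
# different replicas of an eventually consistent store. Until replication
# catches up, one instance sees the new config and the other the old one.

class EventuallyConsistentStore:
    def __init__(self, n_replicas=2):
        self.replicas = [{} for _ in range(n_replicas)]
        self.pending = []  # writes not yet applied to every replica

    def put(self, key, value):
        self.replicas[0][key] = value  # the write lands on one replica first
        self.pending.append((key, value))

    def get(self, key, replica):
        return self.replicas[replica].get(key)

    def replicate(self):
        # Eventually, pending writes reach all replicas.
        for key, value in self.pending:
            for r in self.replicas:
                r[key] = value
        self.pending.clear()

store = EventuallyConsistentStore()
store.put("config", "v1")
store.replicate()                       # v1 fully propagated
store.put("config", "v2")               # redeploy writes the new config
app_a = store.get("config", replica=0)  # "v2"
app_b = store.get("config", replica=1)  # "v1" -- the mixed-fleet problem
assert (app_a, app_b) == ("v2", "v1")
store.replicate()
assert store.get("config", replica=1) == "v2"  # consistent only eventually
```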