<i>sigh</i>. My team is facing all of these issues. Drowning in data. Crazy S3 bill spikes. And not just S3: Azure, GCP, Alibaba, etc., since we are a multi-cloud product.<p>Earlier, we couldn't even figure out lifecycle policies to expire objects, since naturally every PM had a different opinion on the data lifecycle. So it was old-fashioned cleanup jobs, scheduled and triggered when a byzantine set of conditions was met. Sometimes they were never met - cue bill spike.<p>Thankfully, all the new data privacy & protection regulations are a <i>life-saver</i>. Now we can blindly delete all associated data when a customer off-boards, a trial expires, or data is no longer used for its original purpose. Just tell the intransigent PMs that we are strictly following govt regulations.
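For anyone who hasn't set one up yet, a minimal expiration rule via boto3 looks roughly like the sketch below. The bucket name, prefix, and retention period are made-up placeholders; the real values are whatever your PMs (or your regulators) finally agree on.<p><pre><code>import boto3

s3 = boto3.client("s3")

# Hypothetical bucket, prefix, and 90-day retention; adjust to your own data lifecycle.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-exports",
                "Filter": {"Prefix": "exports/"},
                "Status": "Enabled",
                "Expiration": {"Days": 90},
                # Also clean up abandoned multipart uploads, a common hidden cost.
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)
</code></pre>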
I have caused billing spikes like this before those little warnings were invented, and it was always a dark day. They are really a life saver.<p>Lifecycle rules are also welcome. Writing them yourself was always a pain and tended to be expensive, with list operations eating up the API-calls line of the bill.<p>----<p>Once I supported an app that dumped small objects into S3 and begged the dev team to store the small objects in Oracle as BLOBs, to be concatenated into normal-sized S3 objects after a timeout in which no new small objects would reasonably be created. They refused (of course), and the bills for managing a bucket with millions and millions of tiny objects were just what you'd expect.<p>I then went for a compromise solution, asking if we could stitch the small objects together after a period of time so they would be eligible for things like Infrequent Access or Glacier, but, alas, "dev time is expensive you know", so the N-figure S3 bills continue as far as I know.
The AWS horror stories never cease to amaze me. It's like we're banging our heads against the wall expecting a different outcome each time. What's more frustrating, the AWS zealots are quite happy to tell you how you're doing it wrong: it's the user's fault for misusing the service. The reality is, AWS was built for a specific purpose and demographic of user. Its complexity and scale now make it unusable for newer devs. I'd argue we need a completely new experience for the next generation.
I did the back-of-the-envelope math once. You get a petabyte of storage today for $60K/year if you buy the hardware (retail disks, server, energy). It actually fits into the corner of a room. What do you get for $60K in AWS S3? Maybe a PB for 3 months (w/o egress).<p>If you replace all your hardware every year, the cloud is 4x more expensive. If you manage to use your ghetto-cloud for 5 years, you are 20x cheaper than Amazon.<p>To store one TB per person on this planet in 2022 would take a mere $500M. That's small change for a slightly bigger company these days.<p>I guess by 2030 we should be able to record everything a human says, sees, hears and speaks over an entire life, for every human on this planet.<p>And by 2040 we should be able to have machines learning all about human life, expression and intelligence, slowly making sense of all of it.
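A quick sanity check of that math in code, assuming roughly $0.021/GB-month for S3 Standard at the large-volume tier and ignoring egress, requests and replication (prices are approximate list prices, not a quote):<p><pre><code># Back-of-the-envelope only; figures are approximate.
PB_IN_GB = 1_000_000               # 1 PB in GB (decimal)
s3_standard_per_gb_month = 0.021   # ~$/GB-month at the >500 TB tier
diy_hardware_per_year = 60_000     # retail disks + server + energy, per the comment above

s3_per_month = PB_IN_GB * s3_standard_per_gb_month   # ~$21,000/month
s3_per_year = s3_per_month * 12                      # ~$252,000/year

print(f"S3 Standard, 1 PB: ~${s3_per_month:,.0f}/month, ~${s3_per_year:,.0f}/year")
print(f"replace hardware yearly: S3 is ~{s3_per_year / diy_hardware_per_year:.0f}x more expensive")
print(f"keep hardware 5 years:   S3 is ~{5 * s3_per_year / diy_hardware_per_year:.0f}x more expensive")
</code></pre>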
Your website renders as a big empty blue page in Firefox unless I disable tracking protection (and in my case, since I have noscript, I have to enable javascript for "website-files.com", a domain that sounds totally legit).
Off topic: for people with a "million billion" objects, does the S3 console just completely freeze up for you? I have some large buckets that I'm unable to even interact with via the GUI. I've always wondered if my account is in some weird state or if performance is that bad for everyone. (This is a bucket with maybe 500 million objects, under a hundred terabytes)
I had a similar issue at my last job. Whenever a user created a PR on our open source project, 1 GB artifacts consisting of hundreds of small files would be created and uploaded to a bucket. There was just no process that would ever delete anything. This went on for 7 years and resulted in a multi-petabyte bucket.<p>I wrote some tooling to help me with the cleanup. It's available on GitHub: <a href="https://github.com/someengineering/resoto/tree/main/plugins/aws/resoto_plugin_aws/cmd/" rel="nofollow">https://github.com/someengineering/resoto/tree/main/plugins/...</a>
It consists of two scripts, s3.py and delete.py.<p>It's not exactly meant for end users, but if you know your way around Python/S3 it might help. I built it for a one-off purge of old data. s3.py takes a `--aws-s3-collect` arg to create the index. It lists one or more buckets and can store the result in an SQLite file.
In my case the directory listing of the bucket took almost a week to complete and resulted in an 80 GB SQLite file.<p>I also added a very simple CLI interface (calling it a virtual filesystem would be a stretch) that loads the SQLite file and lets you browse the bucket content, summarise "directory" sizes, order by last modification date, etc. It's what starts when calling s3.py without the collect arg.<p>Then there is delete.py, which I used to delete objects from the bucket, including all versions (our horrible bucket was versioned, which made it extra painful). On a versioned bucket it has to run twice, once to delete the objects and once to delete the delete markers that creates, if I remember correctly - it's been a year since I built this.<p>Maybe it's useful for someone.
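Not the actual tooling linked above, just a minimal sketch of the same index-building idea with boto3 and SQLite, in case it helps anyone picture it (the bucket name is a placeholder):<p><pre><code>import sqlite3

import boto3

BUCKET = "example-bucket"  # placeholder

db = sqlite3.connect("s3_index.db")
db.execute("CREATE TABLE IF NOT EXISTS objects (key TEXT, size INTEGER, last_modified TEXT)")

# Page through the bucket listing and persist each page of keys.
paginator = boto3.client("s3").get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    rows = [(o["Key"], o["Size"], o["LastModified"].isoformat())
            for o in page.get("Contents", [])]
    db.executemany("INSERT INTO objects VALUES (?, ?, ?)", rows)
    db.commit()

db.close()
</code></pre>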
I'm confused about prefixes and sharding:<p>> The files are stored on a physical drive somewhere and indexed someplace else by the entire string app/events/ - called the prefix. The / character is really just a rendered delimiter. You can actually specify whatever you want to be the delimiter for list/scan apis.<p>> Anyway, under the hood, these prefixes are used to shard and partition data in S3 buckets across whatever wires and metal boxes in physical data centers. This is important because prefix design impacts performance in large scale high volume read and write applications.<p>If the delimiter is not set at bucket creation time, but rather can be specified whenever you do a list query, how can the prefix be used to influence where objects are physically stored? Doesn't the prefix depend on what delimiter you use? How can the sharding logic know what the prefix is if it doesn't know the delimiter in advance?<p>For example, if I have a path like `app/events/login-123123.json`, how does S3 know the prefix is `app/events/` without knowing that I'm going to use `/` as the delimiter?
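To make the question concrete, the delimiter only appears as a parameter on the list call itself; a sketch with made-up bucket and keys:<p><pre><code>import boto3

s3 = boto3.client("s3")

# The delimiter is purely a list-time grouping parameter; keys are flat strings.
resp = s3.list_objects_v2(
    Bucket="example-bucket",   # placeholder
    Prefix="app/events/",
    Delimiter="/",
)
# "Directories" only exist as CommonPrefixes in the response.
print([p["Prefix"] for p in resp.get("CommonPrefixes", [])])
print([o["Key"] for o in resp.get("Contents", [])])
</code></pre>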
The rationale for using cloud is so often that it saves you from complexity. It really undermines the whole proposition when you find out that the complexity it shields you from is only skin deep, and in fact you still need a "PhD in AWS" anyway.<p>But as a bonus, now you face huge risks and liabilities from single button pushes, and none of those skills you learned are transferable outside of AWS, so you'll have to learn them again for gcloud, again for Azure, again for Oracle...
DON'T PRESS THAT BUTTON.<p>The egress and early deletion fees on those "cheaper options" killed a company that I had to step in and save.
Here's an article about Shopify running into the S3 prefix rate limit too many times, and tackling it: <a href="https://shopify.engineering/future-proofing-our-cloud-storage-usage" rel="nofollow">https://shopify.engineering/future-proofing-our-cloud-storag...</a>
As a web developer who has never used anything except locally-hosted databases, can someone explain what kind of system actually produces billions or trillions of files which each need to be individually stored in a low-latency environment?<p>And couldn't that data be stored in an actual database?
I've never been in this situation, but I do wish you could query files with more advanced filters on these blob storage services.<p>- But why SageMaker?<p>- Why do some orgs choose to put almost everything in one bucket?
Can someone explain what happened in the end? From my understanding nothing happened (they deprioritized the story for fixing it) and they are still blowing through the cloud budget.
Just avoid the cloud. You can get Ceph storage with the performance of Amazon S3 at the price point of Amazon S3 Glacier, deployed in any datacenter worldwide if you want. There are companies that can help you do this.<p>Feel free to ask if you need help.
Though it doesn't address the root problem in TFA, I recommend setting up billing alerts in AWS. They wouldn't have fixed this issue, but the team would at least have known about it much sooner.
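For example, the classic CloudWatch billing alarm looks roughly like this sketch. It assumes billing alerts are already enabled for the account and that an SNS topic exists; the alarm name, threshold, and topic ARN are placeholders.<p><pre><code>import boto3

# Billing metrics only live in us-east-1 and require "Receive Billing Alerts"
# to be enabled in the billing console first.
cw = boto3.client("cloudwatch", region_name="us-east-1")

cw.put_metric_alarm(
    AlarmName="monthly-bill-over-10k",          # placeholder name and threshold
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                               # 6 hours
    EvaluationPeriods=1,
    Threshold=10000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
)
</code></pre>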
Each time a developer does something on a cloud platform, the platform stands to profit for two reasons: vendor lock-in and costs that accrue over the long term, regardless of the unit cost.<p>Anything limitless/easiest has a higher hidden cost attached.
On this topic, it's always surprising to me how few people even seem to know about the different storage classes on S3... or even Intelligent-Tiering (which I know carries a cost, but it allows AWS to manage some of this on your behalf, which can be helpful for certain use cases and teams).<p>We did an analysis of S3 storage levels by profiling 25,000 random S3 buckets a while back for a comparison of Amazon S3 and R2*, and nearly 70% of storage in S3 was StandardStorage, which just seems crazy high to me.<p>* <a href="https://www.vantage.sh/blog/the-opportunity-for-cloudflare-r2" rel="nofollow">https://www.vantage.sh/blog/the-opportunity-for-cloudflare-r...</a>
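If you want to check your own mix, the per-storage-class bucket sizes are published as daily CloudWatch metrics; a rough sketch (the bucket name is a placeholder and the StorageType list is not exhaustive):<p><pre><code>from datetime import datetime, timedelta

import boto3

cw = boto3.client("cloudwatch")
BUCKET = "example-bucket"  # placeholder

# A few common StorageType dimension values; there are more.
for storage_type in ["StandardStorage", "StandardIAStorage",
                     "IntelligentTieringFAStorage", "GlacierStorage"]:
    resp = cw.get_metric_statistics(
        Namespace="AWS/S3",
        MetricName="BucketSizeBytes",
        Dimensions=[{"Name": "BucketName", "Value": BUCKET},
                    {"Name": "StorageType", "Value": storage_type}],
        StartTime=datetime.utcnow() - timedelta(days=2),
        EndTime=datetime.utcnow(),
        Period=86400,
        Statistics=["Average"],
    )
    if resp["Datapoints"]:
        latest = max(resp["Datapoints"], key=lambda p: p["Timestamp"])
        print(storage_type, f"{latest['Average'] / 1e12:.2f} TB")
</code></pre>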
The minimum billable object size in the cheaper storage classes is 128KiB.<p>Given the article quotes $100k to run an inventory (and $100k/month in standard storage), it's likely most of your objects are smaller than 128KiB and so probably wouldn't benefit from cheaper storage options (although it's possible this is right on the cusp of the 128KiB limit and could go either way).<p>Honestly, if you have a $1.2m/year storage bill in S3, this would be the time to contact your account manager and try to work out what could be done to improve it. You probably shouldn't be paying list price anyway if just the S3 component of your bill is $1.2m/year.
I had to chuckle at this article because it reminded me of some of the things I've had to do to clean up data.<p>One time I had to write a special mapreduce with a multi-step map to convert my (deeply nested) directory tree into roughly equally sized partitions (a serial directory listing would have taken too long, and the tree was far too unbalanced to partition in one step), then a second mapreduce to map-delete all the files and reduce the errors down to a report file for later cleanup. This meant we could delete a few hundred terabytes across millions of files in 24 hours, which was a victory.
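For anyone facing a smaller version of the same problem, here's a miniature of that idea with plain boto3 instead of mapreduce: fan out pre-partitioned prefixes across workers, each listing and batch-deleting up to 1,000 keys per call (bucket and prefixes are placeholders):<p><pre><code>from concurrent.futures import ThreadPoolExecutor

import boto3

BUCKET = "example-bucket"                  # placeholder
PREFIXES = ["logs/2016/", "logs/2017/"]    # pre-partitioned "directories", placeholders

s3 = boto3.client("s3")  # boto3 clients are thread-safe

def delete_prefix(prefix):
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
        keys = [{"Key": o["Key"]} for o in page.get("Contents", [])]
        if keys:
            resp = s3.delete_objects(Bucket=BUCKET, Delete={"Objects": keys})
            # Collect failures into a report instead of aborting, as in the comment above.
            for err in resp.get("Errors", []):
                print("failed:", err["Key"], err["Code"])

with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(delete_prefix, PREFIXES))
</code></pre>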
We solved the problem of deleting old files early in our development process, as we wanted to avoid situations such as this one.<p>While developing GitFront, we were using S3 to store individual files from git repositories as single objects. Each of our users was able to have multiple repositories with thousands of files, and they needed to be able to delete them.<p>To solve the issue, we implemented a system for storing multiple files inside a single object and a proxy which allows accessing individual files transparently. Deleting a whole repository is now just a single request to S3.
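This isn't GitFront's actual implementation, just a sketch of the general pattern: pack small files into one object, remember each file's byte range, and have the proxy serve individual files with ranged GETs (all names are placeholders):<p><pre><code>import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"  # placeholder

# Pack several small files into one object and remember each file's byte range.
files = {"README.md": b"hello\n", "main.py": b"print('hi')\n"}
blob, index, offset = b"", {}, 0
for name, data in files.items():
    index[name] = (offset, offset + len(data) - 1)
    blob += data
    offset += len(data)

s3.put_object(Bucket=BUCKET, Key="repos/demo.pack", Body=blob)

# A proxy can serve a single file transparently with a ranged GET...
start, end = index["main.py"]
part = s3.get_object(Bucket=BUCKET, Key="repos/demo.pack", Range=f"bytes={start}-{end}")
print(part["Body"].read())

# ...and deleting the whole repository is a single request.
s3.delete_object(Bucket=BUCKET, Key="repos/demo.pack")
</code></pre>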
One of the biggest pains is that cloud services rarely mention what they don't do.<p>I think it's really sad, because when I don't see docs clearly stating the limits, I assume the worst and avoid the service.
I was at a presentation where HERE Technologies told us that they went from being one of the top ten (or top five) S3 users (by data stored) to getting off that list. This was obviously seen as a big deal.