If I were in your shoes I'd still host it on AWS, unless your shoes have a problem with the AWS bill, but then you run into other problems:

- Paying for physical space and facilities

- Paying people to maintain it

- Paying for DRP/BCP

- Paying periodically, since the hardware doesn't last forever and will need replacement

But if you had to move out of AWS and Azure and GCP aren't options, you can do Ceph on plain HDDs: two extra copies of every file (three replicas in total, Ceph's default), so you'd have to lose three specific drives before any given file (and only that file) suffers data loss. That does not come with versioning, full IAM-style access control or webservers for static files (all of which you get 'for free' with S3).

The HDDs don't need to sit in servers; they can live in external drive shelves connected to the servers over SAS or iSCSI. That means only a few head nodes are needed to control a large number of harddisks.

A more integrated option would be (as suggested) Backblaze pod-style enclosures, or Storinator-type top-loaders (Supermicro has those too). These are generally 4U rack units holding 40 to 60 3.5" drives, which works out to roughly 1PB per 4U. A 48U rack holds 11 such units when using side-mounted PDUs, a single top-of-rack switch and no environmental monitoring in the rack (and no electronic access control - no space!).

This means that for redundancy you'd need 3 racks of 10 units. If availability isn't a concern (1 rack down == entire service down) you can do 1 rack. If availability matters enough that you don't want downtime for maintenance, you need at least 2 racks. Cost will be about 510k USD per rack. Lifetime is about 5 to 6 years, but at that volume you'll be replacing dead drives almost every day, which means an additional ~2,000 drives over the lifespan; some RAM will fail too, and maybe one or two HBAs, NICs and a few SFPs. That's about 1,500,000 USD in spare parts over the life of the hardware, not including the racks themselves, and not including power, cooling or the physical facilities to house them.

Note: all of the figures above are 'prosumer' class and semi-DIY. There are vendors that will support you partially, but that is an additional cost.

I'm probably repeating myself (and others) here, but unless you already have most of this (say: the people, skills, experience, knowledge, facilities, money upfront and money during the lifecycle), this is a bad idea, and 10PB isn't nearly enough to do it yourself 'for cheaper'. You'd have to get into the 100PB-or-more arena before this starts to make sense once all of those externalities are covered as well (unless it happens to be your core business, which from the opening post it doesn't seem to be).

A rough S3 IA OneZone calculation shows a worst-case cost of about 150,000 USD monthly, but at that spend level you can negotiate significant discounts, and with some smart lifecycle configuration you can push it down further. That means that, comparing doing it yourself against letting AWS do it, AWS can end up about half as expensive.

Calculation as follows:

DIY: at least 3 racks to match S3 IA OneZone (you'd need 3 racks in 3 different locations, 9 racks in total, to match 3 zones, but we're not doing that as per your request). That puts the initial outlay at a minimum of 1,530,000 USD, plus a lifetime spares cost of at least 1,500,000 USD, spread over 5 years if we're lucky: about 606,000 USD per year, just for the contents of racks you already have to have.
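For anyone who wants to sanity-check that, here's the same arithmetic as a small Python sketch; every input is one of the rough estimates above (510k USD per rack, ~2,000 spare drives, 5-year lifetime), not a quote:

    # Back-of-the-envelope DIY hardware cost, using the rough figures above
    # (510k USD per rack, 1.5M USD in spares, 5-year lifetime are estimates,
    # not vendor quotes).
    USABLE_PB      = 10        # what you actually want to store
    REPLICAS       = 3         # triple replication, Ceph-style
    PB_PER_4U      = 1         # ~40-60 x 3.5" drives per 4U top-loader
    UNITS_PER_RACK = 10        # what fits next to a ToR switch in 48U

    raw_pb = USABLE_PB * REPLICAS          # 30 PB raw
    units  = raw_pb // PB_PER_4U           # 30 x 4U units
    racks  = units // UNITS_PER_RACK       # 3 racks

    RACK_COST_USD  = 510_000
    SPARES_USD     = 1_500_000             # ~2,000 drives plus HBAs, NICs, SFPs, RAM
    LIFETIME_YEARS = 5

    capex     = racks * RACK_COST_USD      # 1,530,000
    total     = capex + SPARES_USD         # 3,030,000
    per_year  = total / LIFETIME_YEARS     # ~606,000 USD/year
    per_month = per_year / 12              # ~50,500 USD/month

    print(f"{racks} racks, {units} x 4U units, {raw_pb} PB raw")
    print(f"hardware only: ~{per_year:,.0f} USD/year (~{per_month:,.0f} USD/month)")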
Adding to this, you'd have some average colocation costs, no matter whether you rent an entire room, a private cage or shared corridor space. That's at least 160U of space and at least 1,400VA per 4U (roughly 12A at 120V); that much power is what a third of a normal rack might draw on its own! Roughly, that boils down to a monthly racking cost of about 1,300 USD per 4U at one of those colocation facilities. That's another ~45k USD per month, at the very least.

So a no-personnel, colocated setup can be done, but doing all of that 'externally' is expensive, about 95,500 USD every month, with no scalability, no real security, no web services or load balancing, etc.

That means a below-par feature set gets you a rough saving of 50k USD monthly, provided you don't need any personnel and nothing breaks 'more' than usual. You'd also have to use nothing in S3 besides plain storage. And if anything lives outside the datacenter you're colocated in (e.g. an app on AWS EC2, ECS or Lambda) and you need a reasonable pipe between your storage and that app, that's another couple of thousand per month, eating into the perceived savings.
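To put the monthly comparison in one place, here's the same sum as a small sketch; the ~45k colocation figure and the 150,000 USD S3 number are the rough estimates from this comment, not quotes, and personnel is deliberately excluded:

    # Monthly DIY total vs. the rough worst-case S3 IA OneZone estimate.
    # The colo and S3 figures are the estimates from this comment; personnel,
    # bandwidth and anything breaking 'more' than usual are left out.
    HARDWARE_PER_YEAR = 606_000      # from the hardware sketch above
    COLO_MONTHLY      = 45_000       # ~1,300 USD per 4U slot x ~33 slots, rounded up
    S3_MONTHLY        = 150_000      # worst-case S3 IA OneZone, before discounts

    hardware_monthly = HARDWARE_PER_YEAR / 12           # ~50,500
    diy_monthly      = hardware_monthly + COLO_MONTHLY  # ~95,500, no personnel

    print(f"DIY colo : ~{diy_monthly:,.0f} USD/month")
    print(f"S3 IA 1Z : ~{S3_MONTHLY:,.0f} USD/month (list, before lifecycle/discounts)")
    print(f"'saving' : ~{S3_MONTHLY - diy_monthly:,.0f} USD/month before people and pipes")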