This is interesting ...<p>For the longest time we tried to convince people that they should have an off-Amazon archive of their S3 data ... we even ran an ad to that effect in 2012[1].<p>The (obvious) reason this isn't compelling is the cost of egress. It's just (relatively) too expensive to offload your S3 assets to some third party on a regular basis.<p>So if R2 is S3 with no egress, suddenly there is a value proposition again.<p>Further, unlike in 2012, in 2021 we have <i>really great tooling</i> in the form of 'rclone'[2][3], which allows you to move data from cloud to cloud without involving your own bandwidth.<p>[1] The tagline was "Your infrastructure is on AWS and your backups are on AWS. You're doing it wrong."<p>[2] <a href="https://rclone.org/" rel="nofollow">https://rclone.org/</a><p>[3] <a href="https://www.rsync.net/resources/howto/rclone.html" rel="nofollow">https://www.rsync.net/resources/howto/rclone.html</a>
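For anyone who wants to see what that cloud-to-cloud copy amounts to without rclone, the loop is simple enough to sketch with boto3. This is only an illustration: the off-site endpoint, bucket names, and credentials below are placeholders, and unlike a provider-side copy, the objects still stream through whichever machine runs the script.

```python
# Minimal sketch: mirror an S3 bucket to a second S3-compatible provider.
# The off-site endpoint, bucket names, and credentials are placeholders.
import boto3

src = boto3.client("s3")  # regular AWS credentials
dst = boto3.client(
    "s3",
    endpoint_url="https://s3.example-offsite-provider.com",  # hypothetical endpoint
    aws_access_key_id="OFFSITE_KEY",
    aws_secret_access_key="OFFSITE_SECRET",
)

paginator = src.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-aws-bucket"):
    for obj in page.get("Contents", []):
        body = src.get_object(Bucket="my-aws-bucket", Key=obj["Key"])["Body"]
        # Each object streams through the machine running this script,
        # so AWS still bills egress for it; that is exactly the pain point.
        dst.upload_fileobj(body, "my-offsite-bucket", obj["Key"])
```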
Really curious to see how this goes. If they live up to the following paragraph, that's pretty game-changing:<p>>> This cheaper price doesn’t come with reduced scalability. Behind the scenes, R2 automatically and intelligently manages the tiering of data to drive both performance at peak load and low-cost for infrequently requested objects. We’ve gotten rid of complex, manual tiering policies in favor of what developers have always wanted out of object storage: limitless scale at the lowest possible cost.<p>The amount of effort it takes to understand and account for S3 Intelligent Tiering is somewhat mind-blowing, so getting rid of all of that (and the corresponding fees) would be really nice and TheWayThingsShouldBe™ for the customer. On top of that, most users don't even know S3 Intelligent Tiering exists, so it'll be great if Cloudflare just handles that automatically.<p>We at <a href="https://vantage.sh/" rel="nofollow">https://vantage.sh/</a> (disclosure, I'm the Co-Founder and CEO) recently launched a cross-provider cost recommendation for CloudFront egress to Cloudflare, which was really popular, and I can imagine doing something similar for S3 -> R2 once it is live and we are able to vet it.
I know the Cloudflare team hangs out here, so thanks, and great job! This was absolutely necessary for my line of work. Couple of quick questions/confirmations:<p>* R2 will support the same object sizes as S3? We have 500GB+ objects and could go to 1TB per object.
* R2 will support HTTP Range GETs, right?<p>Egress bandwidth for objects on S3 is the biggest line item on the AWS bill for a company I work for, by an order of magnitude, and this would mostly just wipe that out.
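For reference, a ranged read through any S3-compatible API looks like this with boto3; the bucket and key names are placeholders, and whether R2 honours the Range header the same way is exactly the question being asked above.

```python
# Read only the first MiB of a large object via an HTTP Range GET.
# Bucket and key are placeholders.
import boto3

s3 = boto3.client("s3")
resp = s3.get_object(
    Bucket="my-bucket",
    Key="huge-dataset.bin",
    Range="bytes=0-1048575",  # first 1 MiB; a compliant server answers 206 Partial Content
)
chunk = resp["Body"].read()
print(f"fetched {len(chunk)} bytes of a much larger object")
```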
Sounds great (nearly too-good-to-be-true great). Wonder what the SLA will look like. I have been using GCS, S3, and Firestore - and their actual reliability varies significantly, while the advertised SLAs are similar. For instance, with Firestore one has to implement a pretty lenient exponential backoff in case of a timeout, and if the backoff results in the object being retrieved in, say, 2 minutes -- that's still OK as per the GCS SLA. It obviously makes it hard to use for user-facing stuff, such as chatbots, where you can't afford to wait that long. In my anecdotal experience of using Firestore for about 10 million operations per day, we will usually have a problem like that every few days, and that means user-noticeable failure. It would be great to read more on Cloudflare's approach to reliability defined as 99th-percentile max latency. Can't wait to give it a try with our workloads.
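For context on the backoff point: a generic exponential backoff with jitter is only a few lines, but the retry budget below is an arbitrary illustrative choice, not something taken from any provider's SLA.

```python
# Generic exponential backoff with full jitter around a flaky storage call.
# max_attempts, base, and cap are arbitrary illustrative values, not SLA numbers.
import random
import time

def with_backoff(fn, max_attempts=6, base=0.2, cap=30.0):
    for attempt in range(max_attempts):
        try:
            return fn()
        except TimeoutError:  # substitute your client library's timeout exception
            if attempt == max_attempts - 1:
                raise
            # Sleep for a random fraction of an exponentially growing window.
            time.sleep(min(cap, base * 2 ** attempt) * random.random())
```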
Having done hundreds of TCO analyses for customers moving object storage providers, this seems like it carves out a very interesting niche for Cloudflare. R2's higher storage costs (roughly triple) also make it a more manageable threat to specialized solutions like Storj DCS, Wasabi and Backblaze B2.<p>At Taloflow (<a href="https://www.taloflow.ai" rel="nofollow">https://www.taloflow.ai</a>), (disclosure: I'm the CEO/Cofounder) we provide buying insights for cloud object storage (and soon other IaaS/PaaS). We will definitely be adding Cloudflare R2 to the mix.
As someone who took up making travel videos as a hobby, this is definitely on my radar.<p>Video files are large, and although ~20 cents per video streamed is manageable for a small website (S3, Cloud Storage, Azure...), it's the potential for abuse driving my bill up that terrifies me, which is why I decided to stick with Hetzner VMs and their 20TB of free egress.
Not sure why this is more appealing than Wasabi? As far as I can see, Wasabi is cheaper, has great speeds, fantastic S3 compatibility, and a dashboard that's a joy to use, so what is the actual "special" thing here? I mean sure, it's a good thing to have more competition, but the way everyone here is describing the situation makes it seem as if Cloudflare is going to be the cheapest & the best.
Nice to see someone flipping the script and encroaching on AWS' territory rather than vice versa.<p>Taking the have-a-much-better-product route to siphoning usage from AWS is particularly ambitious. I hope it works out. AWS has had it a little too easy for too long.
Interesting pricing considering Backblaze is another Bandwidth Alliance member and they only charge $0.005/GB-month (vs. $0.015/GB-month). B2 + Cloudflare gives you a similar deal at a third the cost.
> <i>As transformative as cloud storage has been, a downside emerged: actually getting your data back... When they go to retrieve that data, they're hit with massive egress fees that don't correspond to any customer value — just a tax developers have grown accustomed to paying.</i><p>Strategy Letter V: commoditize your competitor's advantages!<p>> <i>We’ve gotten rid of complex, manual tiering policies in favor of what developers have always wanted out of object storage: limitless scale at the lowest possible cost.</i><p>Cloudflare has a clear strategy: be the simplest cloud platform to deploy to. It has been a breeze as a small dev shop adopting their tech. AWS started with the startups, but has long since struggled to keep up that simplicity in the face of supporting what must be a dizzying array of customer requirements. Remains to be seen how Cloudflare fares in that regard. I like my Golang better than Rust.<p>> <i>Cloudflare R2 will include automatic migration from other S3-compatible cloud storage services. Migrations are designed to be dead simple.</i><p>Taking a leaf out of AWS Database Migration Service and its free transfers from elsewhere into Redshift/RDS/Aurora/OpenSearch. Niice.<p>> <i>...we designed R2 for data durability and resilience at its core. R2 will provide 99.999999999% (eleven 9’s) of annual durability, which describes the likelihood of data loss... R2 is designed with redundancy across a large number of regions for reliability.</i><p>S3 goes up to 16 9s with cross-region replication... so I'm wondering why R2 is still at 11 9s? Maybe the multi-region tiering is just acceleration (à la S3 Transfer Acceleration) and not replication?<p>> <i>...bind a Worker to a specific bucket, dynamically transforming objects as they are written to or read from storage buckets.</i><p>This is huge, if we could open objects in append mode. That's something that's expensive to do in S3 (download -> append -> upload) even after all these years.<p>> <i>For example, streaming data from a large number of IoT devices becomes a breeze with R2. Starting with a Worker to transform and manipulate the data, R2 can ingest large volumes of sensor data and store it at low cost.</i><p>Okay, where do I sign up?<p>> <i>R2 is currently under development...</i><p>Oh.
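To make the append-mode point concrete, the workaround on S3 today really is read-everything, append, write-everything. A rough sketch (bucket and key names are placeholders):

```python
# The expensive "append" pattern on S3 today: download the whole object,
# append locally, and upload the whole thing back. Names are placeholders.
import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "sensor-data", "device-42.log"

def append_record(record: bytes) -> None:
    try:
        existing = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
    except s3.exceptions.NoSuchKey:
        existing = b""
    # Every append re-reads and re-writes the full object, so the cost grows
    # with the object's total size rather than with the appended record.
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=existing + record)
```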
Yes! This is what I was predicting about a week ago on HN: <a href="https://news.ycombinator.com/item?id=28564387" rel="nofollow">https://news.ycombinator.com/item?id=28564387</a>
Amazing, this is exactly what I was looking for: an S3-compatible provider with no egress cost ... and reliable. I can't wait to try Cloudflare R2!
Very exciting. Object storage is getting really competitive, and I love the naming scheme alliance: S3, B2 (Backblaze), and now R2. Who will do one with a “1”?<p>On a serious note, I'm wondering about the signed URLs and ACL capabilities of the Cloudflare offering, because this is something we use.<p>I'm also interested: does R2 replace S3 and CloudFront at the same time? That'd be nice, and one headache less.
Very generous offer from Cloudflare. I signed up.<p>The main question: how can Cloudflare make this into a sustainable business?<p>* cost/GB is cheaper than or the same as S3, GCP, Azure<p>* no egress charges to customers, but they still have to pay for transit when they cross an AS!<p>What is the hidden angle Cloudflare is using here?
I see the PM for this product is here; a few things we find useful with S3:<p>- Daily inventory report of the contents of the bucket in Parquet format<p>- Signed URLs<p>- Lifecycle policies based on tags (but to be honest, just a policy that isn't restricted to suffix/prefix would be amazing)<p>- Bucket policies (restrict object key structure)<p>Lots of these work well enough in AWS but are often lacking in some regards, with annoying restrictions.<p>Looks like an amazing product, good luck!
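On the signed URLs item: this is what it looks like against S3 today with boto3. The bucket and key are placeholders, and whether R2 exposes an equivalent call is one of the open questions.

```python
# Generate a time-limited download link for a private object (S3 today).
# Bucket and key are placeholders.
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "private-bucket", "Key": "reports/2021-09.parquet"},
    ExpiresIn=3600,  # link stays valid for one hour
)
print(url)
```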
>providing zero-cost egress for stored objects — no matter your request rate<p>What's the catch? Imagine a few cases. Let's assume an S3 volume rate of $50/TB.<p>- I post a 1GB video file on Reddit. 100k downloads/month: $5k<p>- I make a 1GB desktop app. I have 100k downloads/month: $5k<p>- I post a 100GB data file on GitHub. 10k downloads/month: $50k<p>Would I pay $0 on R2? And would there be throttling/rate-limiting?<p>[Edit: Added more realistic examples]
Wow, with global replication by default, this looks absolutely perfect for what I'm currently building, even before taking costs into account.<p>I'm hoping this means what I think it means, that write latencies will be minimal across the globe, since writes will be persisted and ack'd at the closest region and then eventually consistently propagated to other regions?<p>If so, curious what would happen in a scenario where a region requests an object that has been persisted at another region but not yet propagated? Will it result in a 404, or is the system smart enough to route the request to the region that has the file, at the cost of higher latency?<p>From my research so far into S3's cross-region replication, the latter behavior doesn't seem possible out of the box since requests have to specify a single region (S3 experts, please do correct me if I'm wrong), so I'm hoping Cloudflare, with its deep expertise in managing a global network, can differentiate here. Even if it's not offered out of the box, due to the lack of egress costs it's a lot more feasible to build this behavior into the application layer with R2 by just racing requests across several regions and taking the one that resolves first (or at all), so very promising regardless.<p>Also, would love to hear some numbers on what kinds of write latency to expect. From my experience so far, S3 writes for tiny files in a single region take on the order of 50ms even for clients in close physical proximity, which is serviceable for my use case, but seems higher than it needs to be (and every little bit I can shave off on latency helps tremendously for what I'm building). Really looking forward to seeing what the Cloudflare solution is capable of here.<p>Lastly, S3 didn't advertise and guarantee strong read-after-write consistency for same-region read/write until late last year. Will R2 offer this out of the gate?
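The "race several regions" idea above is easy to sketch once egress is free. The endpoints and bucket below are hypothetical; nothing here reflects an actual R2 API.

```python
# Sketch: issue the same GET against several regional endpoints and take
# the first success. Endpoints and bucket name are hypothetical.
from concurrent.futures import ThreadPoolExecutor, as_completed
import boto3

ENDPOINTS = [
    "https://us-east.objects.example.com",  # hypothetical
    "https://eu-west.objects.example.com",  # hypothetical
]

def fetch(endpoint: str, bucket: str, key: str) -> bytes:
    client = boto3.client("s3", endpoint_url=endpoint)
    return client.get_object(Bucket=bucket, Key=key)["Body"].read()

def race_get(bucket: str, key: str) -> bytes:
    with ThreadPoolExecutor(max_workers=len(ENDPOINTS)) as pool:
        futures = [pool.submit(fetch, ep, bucket, key) for ep in ENDPOINTS]
        last_err = None
        for fut in as_completed(futures):
            try:
                return fut.result()  # first endpoint to succeed wins
            except Exception as err:  # e.g. a 404 from a region the write hasn't reached yet
                last_err = err
        raise last_err
```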
Really interesting, will R2 support lifecycle rules like S3 does? We write around 90 million files per month to S3, if we could replace that with R2 and have the files automatically expire after 30 days that'd be a pretty amazing price reduction for us.
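For comparison, this is the 30-day expiration rule as it's configured on S3 today; the bucket name is a placeholder, and it's not yet known what lifecycle controls R2 will expose.

```python
# S3 lifecycle rule: automatically expire every object 30 days after creation.
# Bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="ingest-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```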
Could you provide some more details on the storage system here?<p>Is it built on Ceph's S3 compatibility?<p>Your durability numbers imply erasure coding. Is that the case?
Curious to see how this compares to Bunny.net Edge Storage (they are working on S3 support as well): <a href="https://bunny.net/edge-storage/" rel="nofollow">https://bunny.net/edge-storage/</a>
This is fantastic; now I only need a Cloudflare RDBMS to run my entire business on Cloudflare.<p>(Workers KV is great, but there are a ton of times when you just need an actual relational database)
I recently did a pricing comparison of cloud object storage services for my article "How to Create a Very Inexpensive Serverless Database" (<a href="https://aws.plainenglish.io/very-inexpensive-serverless-database-6ed6df489ab6" rel="nofollow">https://aws.plainenglish.io/very-inexpensive-serverless-data...</a>). It describes using object storage as an inexpensive serverless key-value database.<p>Although egress (outbound network) can be a significant part of object storage expenses, if you are reading and writing small objects, per-request expenses can be much bigger. Cloudflare indicates that for low request rates there won't be any request fees, but doesn't state what it will charge for high request rates.<p>My article points out that the best deal when working with high request rates is to use services that don't charge per request, such as DigitalOcean, Linode, and Vultr. If it's S3 that you want, even Amazon has recently joined the budget club with Lightsail Object Storage, which has monthly plans of $1, $3, and $5 (250 GB storage and 500 GB egress) with no per-request fees.
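The key-value idea from the article boils down to storing one small JSON document per key; here is a minimal sketch against any S3-compatible endpoint (the endpoint and bucket name are placeholders). At high request rates it's the per-request fees, not egress, that dominate the bill for a pattern like this.

```python
# Tiny key-value layer over an S3-compatible object store.
# Endpoint and bucket are placeholders.
import json
import boto3

kv = boto3.client("s3", endpoint_url="https://objects.example-provider.com")
BUCKET = "kv-bucket"

def put(key: str, value: dict) -> None:
    kv.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(value).encode())

def get(key: str) -> dict:
    return json.loads(kv.get_object(Bucket=BUCKET, Key=key)["Body"].read())

put("user:42", {"name": "Ada", "plan": "free"})
print(get("user:42"))
```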
<i>> R2 is designed with redundancy across a large number of regions for reliability. We plan on starting from automatic global distribution and adding back region-specific controls for when data has to be stored locally, as described above.</i><p>Does that mean automatic caching across regions? Low-latency read access everywhere without an extra CDN in front of it?
Am I reading this right that this is pretty much aiming to be a full competitor to S3? Or is this a more limited, special-purpose tool? I haven't really followed what Cloudflare is doing, so I only know them as a CDN. Are private buckets possible with this?<p>We all knew that the big players are really maximising their profits on the egress charges, so I can see that this left some potential for someone to step in. No egress charges at all still sounds a bit too good to be true, but it would be nice, as that's just one parameter less to think about.<p>Another interesting aspect is Cloudflare Workers. As far as I can tell, they're not a full replacement for something like AWS Lambda if, e.g., I need to do heavier work on the data in R2. Being able to do heavier processing close to the actual data would be really interesting as well.
Most cloud object storage can be a good off-site backup, but accessing your data fast and cheaply is not easy.<p>That is why SeaweedFS added a gateway to remote object stores: <a href="https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remote-Object-Storage" rel="nofollow">https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remot...</a>, which asynchronously writes local changes to the cloud. If there is enough local capacity, there should be no egress cost.<p>Hopefully this can change the common pattern and let people really treat the cloud object store as a backup.
There is one major reason S3 remains the king of storage for mobile media uploads: bucket notifications. Does R2 implement this feature? If so, I’m going to have to run some experiments with this...
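For anyone unfamiliar with the feature, this is how bucket notifications are wired up on S3 today: new uploads under a prefix push an event to an SQS queue. The bucket name and queue ARN are placeholders, and whether R2 will offer an equivalent is exactly the open question.

```python
# S3 bucket notifications: send an SQS message for every new upload under "uploads/".
# Bucket name and queue ARN are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="media-uploads",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:new-uploads",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "prefix", "Value": "uploads/"}]}
                },
            }
        ]
    },
)
```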
This is really a game changer for storage. Well done, Cloudflare.<p>How will other providers respond to this now?<p>AWS, GCP, and others don't really pay much for egress themselves; those super high egress charges are pretty ridiculous.
Will we be able to serve files from an R2 bucket through a subdomain like static.project.com/filename WITHOUT using a worker that wastes money on every request for no reason?
> Our object storage will be extremely inexpensive for infrequent access and yet capable of and cheaper than major incumbent providers at scale.<p>How frequent is infrequent? In our case it's "never, unless other backups fail", and for that S3 Glacier Deep Archive is still cheaper ($0.00099 per GB-month).
I'm not a software developer, so please pardon my ignorance: doesn't this make them basically a "core-complete modern cloud provider"? If you fully bought into architecting against them, with their Workers product and all, could you build and run everything on CF?
Nice, and it seems to be quite cheap as well.
It's unfortunate, though, that they don't talk about data residency: where are the servers located? Where will my data be copied?
I have been doing work on ensuring my app is compatible with the many Object Storage services. Would be great to get access to this early and make it compatible too.
I’m curious what read and write request prices will be.<p>Like egress pricing, S3's starting prices of $5 per million writes and $0.40 per million reads feel excessive.
For those who don't want to wait, there's DigitalOcean Spaces (<a href="https://www.digitalocean.com/products/spaces/" rel="nofollow">https://www.digitalocean.com/products/spaces/</a>).<p>Disclaimer: I haven't used it, but planning to, since I already use their VPS.
$15/TB is too expensive, especially if you want to follow a proper 3-2-1 strategy, where it'll cost you $45/TB.<p>What happens to your Object Storage buckets when Cloudflare has an outage? - <a href="https://filebase.com/blog/what-happens-when-my-cloud-goes-down/" rel="nofollow">https://filebase.com/blog/what-happens-when-my-cloud-goes-do...</a>
What's the censorship policy?<p>Is this going to be content-neutral, like Cloudflare was when fronting ISIS websites?<p>Or is this going to be fine-until-bad-PR, like when Cloudflare decided to stop serving The Daily Stormer?<p>There is a special kind of lock-in when it comes to object storage, as generally you use something like this when the data is too big to store another copy of locally or at another provider. It's not like you can easily maintain provider independence, and if Cloudflare decides one day that some of your UGC in a bucket isn't something they want to host, what happens then?<p>Is the data lost forever because your account is nuked? Is there a warning or grace period?<p>I am hesitant to put any large amount of data into a service without a crystal-clear statement on this, so that I can know up front whether or not a business needs to maintain a second, duplicate object store somewhere else for business continuity.<p>If Cloudflare in practice is going to nuke the account the moment your site ends up hosting something objectionable, this DR requirement (a second provider that also stores all objects) needs to be factored into a customer's costs. (It may be that the bandwidth savings still make it worth it to use Cloudflare even with double storage.)