One thing I really like about Cloudflare is that they seem to have people who can correctly identify friction points for developers and a solid plan for solving them. Looking forward to messing with this!
I wish we had the same pricing model for Cloudflare Images. I have never understood the CF Images pricing model [1].

1. https://www.cloudflare.com/en-gb/products/cloudflare-images/#:~:text=Images%20are%20priced%20at%20%245%20per%20100%2C000%20images%20stored%20and%20%241%20per%20100%2C000%20images%20delivered%20%E2%80%94%20with%20no%20egress%20costs%20or%20additional%20charges%20for%20resizing%20and%20optimization
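As far as I can decode that page, the math works out like this (the rates are the quoted $5 per 100,000 stored and $1 per 100,000 delivered; the traffic numbers are made up for illustration):

```python
# Rates quoted from the Cloudflare Images pricing page linked above:
# $5 per 100,000 images stored, $1 per 100,000 images delivered.
STORAGE_RATE = 5.00 / 100_000   # USD per image stored, per month
DELIVERY_RATE = 1.00 / 100_000  # USD per image delivered

def monthly_cost(images_stored: int, images_delivered: int) -> float:
    return images_stored * STORAGE_RATE + images_delivered * DELIVERY_RATE

# Hypothetical workload: 250,000 images stored, 2 million delivered per month.
print(monthly_cost(250_000, 2_000_000))  # 12.5 + 20.0 = 32.5 USD
```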
I find it much easier to reason about pure storage-based pricing than about combined storage-and-egress pricing. It's much easier to limit how much people can store in my application than to add something far harder to understand, like transfer quotas. So independent of how R2 compares purely on price, I think having a big entrant with a much simpler pricing scheme is a win already.
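Concretely, a storage quota is a single comparison at upload time, whereas an egress quota would mean metering every download. Rough sketch of what I mean (the endpoint, bucket layout, and quota below are all hypothetical):

```python
import boto3

# Hypothetical R2 endpoint; credentials come from the environment.
s3 = boto3.client("s3", endpoint_url="https://<account_id>.r2.cloudflarestorage.com")

USER_QUOTA_BYTES = 5 * 1024**3  # hypothetical 5 GiB per-user cap

def bytes_used(bucket: str, prefix: str) -> int:
    # Sum object sizes under the user's prefix (fine for modest object counts).
    total = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        total += sum(obj["Size"] for obj in page.get("Contents", []))
    return total

def can_upload(bucket: str, user_id: str, new_object_size: int) -> bool:
    # One comparison at upload time -- no need to meter every download
    # the way a transfer quota would require.
    return bytes_used(bucket, f"users/{user_id}/") + new_object_size <= USER_QUOTA_BYTES
```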
While conceptually I love the idea of not having to explicitly set the region of an object I'm storing, I feel like (especially in a distributed team or product) this could end up as a mishmash of data distributed all over the place, with a bunch of different and unpredictable access-time and latency characteristics.

Maybe the solution here is "just make sure the asset is cached on the edge", but for first access there still has to be some impact, no?

I'd love to see some tests/benchmarks on access latency for stuff uploaded by, say, a colleague or an app hosted in the EU or Asia, with me in the US.
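Something like this is what I have in mind (the object URL is a placeholder; point it at an asset a colleague uploaded from another region and run it from where you are):

```python
import time
import urllib.request

# Hypothetical public R2 object URL -- substitute an asset that was
# uploaded from the EU or Asia, then run this from the US.
URL = "https://pub-example.r2.dev/asset-uploaded-from-eu.bin"

def time_get(url: str) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

# The first request shows the cold, uncached path;
# repeats should hit the edge cache.
for i in range(5):
    print(f"request {i + 1}: {time_get(URL) * 1000:.1f} ms")
```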
Loving R2. I'm having an issue uploading larger files, though, like 100MB+. The error I get is:

Unable to write file at location: JrF3FnkA9W.webm. An exception occurred while uploading parts to a multipart upload. The following parts had errors: - Part 17: Error executing "UploadPart" on {URL}

with the message:

"Reduce your concurrent request rate for the same object."

Is this an issue on my end or Cloudflare's? I'm not doing anything aggressive, just trying to upload one video at a time using Laravel's S3 filesystem driver. It works great on smaller files.
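For what it's worth, that error usually means too many multipart parts are in flight for one object. In boto3 (which I know better than Laravel's driver) the knob looks like this; I assume the Laravel/Flysystem adapter exposes a similar concurrency option:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Hypothetical R2 endpoint and bucket; credentials come from the environment.
s3 = boto3.client("s3", endpoint_url="https://<account_id>.r2.cloudflarestorage.com")

# Upload parts one at a time instead of boto3's default of 10 concurrent
# parts -- the usual fix for "Reduce your concurrent request rate".
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,   # switch to multipart above 8 MiB
    multipart_chunksize=16 * 1024 * 1024,  # 16 MiB parts
    max_concurrency=1,                     # serialize part uploads
)

s3.upload_file("JrF3FnkA9W.webm", "my-bucket", "JrF3FnkA9W.webm", Config=config)
```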
Using the same price for read requests regardless of size feels weird to me (S3 does the same for internal use). The cost to the provider of serving a 100kB file and a 100GB file must be quite different, so why price them the same to the user?
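To put numbers on it (the per-request rate below is illustrative, not necessarily R2's actual price):

```python
# Hypothetical flat read price: $0.36 per million requests.
# A flat per-request price means the effective price per byte
# collapses as the object gets bigger.
PRICE_PER_READ = 0.36 / 1_000_000

for size_bytes, label in [(100 * 1024, "100 kB"), (100 * 1024**3, "100 GB")]:
    per_gb = PRICE_PER_READ / (size_bytes / 1024**3)
    print(f"{label} object: ${per_gb:.10f} per GB read")
# The 100 GB read is roughly a million times cheaper per byte,
# even though the provider's serving cost scales with bytes sent.
```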
The automatic region thing is problematic for many companies.

I would much rather be able to explicitly choose this and know that customers' data is where I told them it would be.
Does anyone from the R2 team happen to know if there's a roadmap ETA on this one yet?

https://community.cloudflare.com/t/r2-per-bucket-token/411050/

The fact that you can't separate data for prod and dev with a product that's in GA now is kind of nuts.
I really want to use this, but sadly the one thing that's missing is any sort of bucket access logging.

Unless I'm missing something with how this fits in with Cloudflare's other services.
I'm excited to see more details on how R2 data will be replicated across different data centers in the future. I had assumed this was already operational based on previous blog posts, so I'm a little disappointed to learn it's still TBD. It's a major reason I chose R2 over S3, as I don't want to manage moving data around for different tenants myself.