Backblaze is also a founding member of the Bandwidth Alliance, meaning egress from B2 via Cloudflare is essentially free.<p>So you are only paying for storage. (Correct me if I am wrong on this one.)<p>I wonder why <i>all</i> non-hyperscale cloud vendors, like Linode and DO, don't provide one-click third-party backup to B2. You should always store an offsite backup somewhere, and B2 is perfect for that.
Just a year ago B2 couldn't do server-side file copying.[1] If you wanted to rename or move a file you had to re-upload the whole thing (not great for large multi-gigabyte files)! That ruled them out of consideration for storing my personal backups.<p>Glad to see they've since fixed that, and with this update are clearly continuing to improve ergonomics. I'll have to give B2 a fresh look.<p>[1]: <a href="https://github.com/Backblaze/B2_Command_Line_Tool/issues/175" rel="nofollow">https://github.com/Backblaze/B2_Command_Line_Tool/issues/175</a>
Here is their reasoning for why they didn't have "s3 compatibility" before: <a href="https://www.backblaze.com/blog/design-thinking-b2-apis-the-hidden-costs-of-s3-compatibility/" rel="nofollow">https://www.backblaze.com/blog/design-thinking-b2-apis-the-h...</a>
Swank. One of the reasons I'm not using Backblaze is that I couldn't find a way to generate a private URL that allowed secure upload from the browser. As far as I can tell, it only allowed a private URL with access to an entire bucket. If they've got an S3 compatibility layer now, this problem is solved. I'm gonna invest some time on this tomorrow.
This is huge because it means you can use things like an S3 FUSE client to mount your storage (rough sketch below), which means you can use it to extend your local disk, run your own backups, or whatever.<p>Amusingly, the price to store 1.2 TB of data is the same as the cost of their backup plan, so if your disk is smaller than that, you could save a few bucks running your own backups. Until you have to restore, that is (from what I can tell, restores are free on their backup plans but downloads would cost money on the S3 side).
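<p>For the mounting part, something along these lines with s3fs-fuse should now work. This is only a sketch: the bucket name, mount point, and endpoint (e.g. s3.us-west-002.backblazeb2.com) are placeholders you'd replace with whatever your own bucket reports, and the key file holds your keyID:applicationKey pair:<p><pre><code>  # store B2 credentials where s3fs expects them
  echo 'KEY_ID:APPLICATION_KEY' > ~/.passwd-s3fs && chmod 600 ~/.passwd-s3fs
  # mount the B2 bucket like a local directory via the S3-compatible endpoint
  s3fs my-bucket /mnt/b2 -o url=https://s3.us-west-002.backblazeb2.com \
      -o passwd_file=~/.passwd-s3fs -o use_path_request_style
</code></pre>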
I migrated a client from Cloudinary ($1k+/mo) to B2, a Go+ImageMagick program running on DigitalOcean, and Cloudflare CDN, for a total of $60/mo. It's been running for two years now; B2 has been incredibly reliable.
As a developer who supports B2 (I write ExpanDrive), I think it's great that they are moving on from an API that doesn't expose any extra value.<p>That being said, I wish B2 performance were better. Throughput is dramatically slower than on S3.
We used B2 heavily until recently as an origin server for a CDN.
A few weeks ago we saw a spike in 502/504 responses.<p>When I contacted their customer support, I was pointed to the following URL, where they explain in detail how they handle these errors:
<a href="https://www.backblaze.com/blog/b2-503-500-server-error/" rel="nofollow">https://www.backblaze.com/blog/b2-503-500-server-error/</a><p>Essentially these are not considered errors, and the client is expected to retry loading the file. That approach won't work for our use case.
Huh, I thought it already had this! I must have mixed it up with a different object storage service (maybe DigitalOcean?).<p>I've been using B2 for backup storage for some personal projects. It doesn't necessarily do anything "better" than S3 from what I've seen, but never having to log into AWS's dashboard is reward enough on its own.<p>They do have a command-line client that's a quick pip install, so you can do something like:<p><pre><code> b2 upload-file bucket-name /path/to/file remote-filename
</code></pre>
Which is, of course, nice for backups.
Ooo, even more reason to set up a NextCloud instance now! Previously, it wasn't really practical to set up B2 as external storage because you'd also need to set up a compat layer.
Now that we're talking about B2: has anyone used, or is anyone using, them for latency-sensitive small-file object storage? I'm about to take the plunge and set up benchmarks. My use case is that I want to store and serve ~500k small files (30b-1MB) per day to website visitors. So far B2 support has told me that it shouldn't be a problem, and early benchmarking (below) indicates the same; just curious if anyone has stories from the trenches.
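<p>(My early benchmarking is nothing fancy, basically just timing TTFB with curl against a handful of representative objects; the download URL here is a placeholder for whatever your bucket actually serves:)<p><pre><code>  # time-to-first-byte and total time for a single small object
  curl -o /dev/null -s -w 'ttfb: %{time_starttransfer}s total: %{time_total}s\n' \
      https://f002.backblazeb2.com/file/my-bucket/sample.json
</code></pre>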
Does Backblaze offer strong consistency for files?<p>The killer feature of Google Cloud Storage in my eyes is its ability to be strongly consistent if you set the right HTTP headers. This is not possible with Amazon S3, which is always eventually consistent, making it unusable for many use cases where you need to be able to guarantee that customers will always see the newest version of a file.
I've looked at B2 from time to time, but doing database blob storage over S3 or to disk, and backing up databases and files over rsync, made us stick with our existing technology (e.g. TransIP cloud storage, which also charged 10 EUR/month per 2 TB). One thing we didn't look forward to was having to reimplement cataloging and garbage collection for all of disk, S3, and B2, so we just stuck with an rsync hardlinking solution (which makes incremental backups painless; sketched below).<p>Having access to primary storage and cheap backup storage via the same S3 API will make us reconsider that, and will probably make it worth the effort to dump our rsync-based solution for B2.
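<p>For the curious, the rsync hardlinking approach is roughly this (paths and dates are just placeholders): each day's backup is linked against the previous day's, so unchanged files become hardlinks and only changed files take up new space:<p><pre><code>  # incremental backup: files unchanged since yesterday become hardlinks
  rsync -a --delete \
      --link-dest=/backups/2020-05-03/ \
      /data/ /backups/2020-05-04/
</code></pre>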
I would absolutely love to replace my use of S3 with B2 as a backup for data stored elsewhere. Personally, I would much rather this storage go to a service that only does storage, rather than everything else that AWS does, so I don't have to worry about anything strange happening in a cloud service I don't use every day.<p>When they first launched B2, I inquired about the ability to enter into a BAA (Business Associates Agreement) for HIPAA compliance and was told that it wasn't "on the roadmap". It sounds like B2 has come a long way on the compliance side since then. It would be great if they were open to this.
Actually excited by this. I was benchmarking S3 vs. B2 vs. others two years ago, and I had to give up on B2 because implementing it with decent performance was so much more difficult (88 lines of Ruby vs. 36 for all the others).
Can Amazon actually claim IP protection for their API (per the Google vs. Oracle case) and basically prevent other vendors from providing S3-compatible APIs, so that Amazon can lock in users?<p>I am not a lawyer, so this is a genuine / dumb question.
It reminds me of a moment three years ago, when I asked Dropbox to make their API similar to Google Drive's, as they basically provide the same service. <a href="https://github.com/dropbox/dropbox-api-spec/issues/3#issuecomment-320685313" rel="nofollow">https://github.com/dropbox/dropbox-api-spec/issues/3#issueco...</a><p>It is just awful to see how everyone tries to reinvent the wheel rather than be compatible with anyone else.
This is great news... there are lots more good clients for S3 than for B2, and implementing one is far from trivial because of some special considerations B2 had in the beginning (namely: uploading directly to a pod).<p>I see this isn't available for old buckets. Is there a straightforward way to duplicate a bucket to make it compatible, or do you have to use something like rclone?
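<p>Worst case, I'm guessing it's just something like this with rclone (assuming a configured B2 remote called "b2"; the bucket names are placeholders):<p><pre><code>  # copy everything from the old bucket into a freshly created one
  rclone copy b2:old-bucket b2:new-s3-compatible-bucket -P
</code></pre>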
I'm curious what their load balancing layer looks like. There are a lot of interesting options. (Disclaimer: I've worked in the CDN and storage spaces in the past.)<p>If their load balancer is smart enough, it can call the dispatcher and make use of something like <a href="https://zaiste.net/nginx_x_accel_header/" rel="nofollow">https://zaiste.net/nginx_x_accel_header/</a> to figure out where to forward the request. Unfortunately this still requires uploads to be proxied through the dispatcher.<p>You could get crazy and involve a CDN (Akamai or Cloudflare or Fastly) that could do some smart logic, especially if you can emit your dispatcher state as a lookup table that's updated frequently. I don't know what bandwidth costs would be for that, though. Probably high.<p>It's an interesting problem space and I'd love to talk to these folks about it.
This is great. Their current API requires you to identify a unique host to send data to, so you're constantly performing a metric ton of DNS queries. Until I whitelisted the base domain, it was the #1 client of my Pi-hole installation by multiple orders of magnitude.
That's awesome, but I really want to see lightning-fast response times and TTFB... The second pain point is the number of retries needed when uploading a large batch of small files. Those are the main reasons I'm still considering migrating away. I really wish I didn't have to, as otherwise I love the pricing and the philosophy.<p>Edit: I also think DigitalOcean Spaces and B2 might be better off merging, or Spaces being a white-label B2 in disguise (both are part of the BWA).
Okay, dumb it down for Monday Me. Does this mean I can read from and write to my B2 storage using AWS S3 libraries (like the CLI, Python, and Node libs)?
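<p>If so, I'm assuming you'd just point the existing tools at B2's S3-compatible endpoint, something like the following (the endpoint/region and bucket name are placeholders; the credentials are your B2 application key ID and application key):<p><pre><code>  # tell the AWS CLI to use your B2 application key
  aws configure set aws_access_key_id YOUR_KEY_ID
  aws configure set aws_secret_access_key YOUR_APPLICATION_KEY
  # read and write against the B2 S3-compatible endpoint
  aws s3 cp backup.tar.gz s3://my-bucket/ --endpoint-url https://s3.us-west-002.backblazeb2.com
  aws s3 ls s3://my-bucket/ --endpoint-url https://s3.us-west-002.backblazeb2.com
</code></pre>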
I have a decent-sized music collection consisting of a lot of lossless vinyl rips that I've made from my record collection. It totals around 200 GB at the moment but is growing weekly. I've been looking for somewhere to back this all up in the cloud, and Backblaze is looking most promising at the moment. Anyone here have any thoughts on where I should go with this?
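<p>If I do end up going the B2 route, it looks like something as simple as this with their CLI (bucket name is a placeholder) run from a weekly cron job would cover it:<p><pre><code>  # upload anything new or changed since the last run
  b2 sync /path/to/vinyl-rips b2://my-music-backup/vinyl-rips
</code></pre>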
This is wonderful news for me. I host a video-on-demand site, Codemy.net, and all the original source videos are on Backblaze. Originally I had to write a library to connect to the Backblaze API. Now I look forward to using the existing AWS client libraries; one less thing I have to maintain.
I remember when they had all their servers in one room and the redundancy boiled down to erasure coding within single servers.<p>They've been doing incredible work in the open (storage server design, hardware reliability data, etc.) and I'm really happy they've grown to where they are today.
I was actually looking at B2 vs. S3 literally two days ago and went with S3 for the universal API. Luckily, it was a personal project and I can probably migrate everything very quickly. This is a killer feature, and I bet it will convince a lot of people to move to Backblaze.
Given all the Cloudflare discussion - Cloudflare webinar with Backblaze coming up next week: <a href="https://www.brighttalk.com/webcast/14807/405472" rel="nofollow">https://www.brighttalk.com/webcast/14807/405472</a>
Great news. Only a few days ago I was trying to figure out how to use MinIO to make Backblaze work as Mattermost cloud storage, which needs to be S3-compatible. I expect that will work directly now. Has anyone already tried this integration?
Every day I have to use something like 4 GB of data to let Backblaze sync. This is despite the fact that I might only have created/changed 100 MB worth of files since the previous day's sync.
Is there a cheap S3-compatible service that is less reliable? I don't want to pay for redundancy. E.g., it's my backups; I can handle a 3% chance that my data is lost, as long as I find out about it.
Happy customer of Backblaze. I love how transparent they are with everything (especially the hard drive statistics), and the fact that the CEO takes time to respond to a lot of questions only confirms how down to earth they are.