
Bup – towards the perfect backup

287 points by hachiya over 10 years ago

21 comments

mappu over 10 years ago

A shoutout for attic: https://attic-backup.org/

Attic is one of the new-generation hash-backup tools (like obnam, zbackup, Vembu Hive etc). It provides encrypted incremental-forever backup (unlike duplicity, duplicati, rsnapshot, rdiff-backup, Ahsay etc) with no server-side processing and a convenient CLI interface, and it *does* let you prune old backups.

All other common tools seem to fail on one of the following points:

- Incremental *forever* (bandwidth is expensive in a lot of countries)

- Untrusted remote storage (so I can hook it up to a dodgy lowendbox VPS)

- Optional: no server-side processing needed (so I can hook it up to S3 or Dropbox)

If your backup model is based on the old original + diff(original, v1) + diff(v1, v2)... scheme, then you're going to have a slow time restoring. rdiff-backup gets this right by reversing the incremental chain. However, as soon as you need to consolidate incremental images, you lose the possibility of encrypting the data (since encrypt(diff()) is useless from a diff perspective).

But with a hash-based backup system? All restore points take constant time to restore.

Duplicity, Duplicati 1.x, and Ahsay 5 don't support incremental-forever. Ahsay 6 supports incremental-forever at the expense of requiring trust in the server (server-side decrypt to consolidate images). Duplicati 2 attempted to move to a hash-based system, but they chose to use fixed block offsets rather than checksum-based offsets, so incremental detection is inefficient after an insert point.

IMO Attic gets everything right. There are patches for Windows support on their GitHub. I wrote a munin plugin for it.

Disclaimer: I work in the SMB backup industry.
Comment #8621792 not loaded
Comment #8621762 not loaded
Comment #8622117 not loaded
Comment #8622175 not loaded
Comment #8622607 not loaded
Comment #8623452 not loaded
Comment #8622358 not loaded
Comment #8621379 not loaded
Comment #8621445 not loaded
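The distinction mappu draws between fixed block offsets and checksum-based offsets is the heart of content-defined chunking. Here is a minimal Python sketch of the idea; the window size, mask, and chunk bounds are made-up illustrative parameters, not any particular tool's values:

```python
import hashlib
import random

# Hypothetical parameters for illustration: average chunk ~8 KiB.
WINDOW = 48                     # bytes in the rolling window
MASK = (1 << 13) - 1            # cut where the low 13 bits of the sum are zero
MIN_CHUNK, MAX_CHUNK = 2048, 65536

def chunk_boundaries(data: bytes):
    """Yield (offset, length, sha1) for content-defined chunks of `data`.

    Cut points depend on the bytes themselves (a rolling sum over the
    last WINDOW bytes), so inserting bytes early in a stream only shifts
    boundaries near the insert; later chunks re-align, keep their hashes,
    and deduplicate. With fixed offsets, every block after the insert
    point would change.
    """
    start = 0
    rolling = 0
    for i, b in enumerate(data):
        rolling += b
        if i >= WINDOW:
            rolling -= data[i - WINDOW]
        size = i - start + 1
        at_cut = (rolling & MASK) == 0 and size >= MIN_CHUNK
        if at_cut or size >= MAX_CHUNK or i == len(data) - 1:
            chunk = data[start:i + 1]
            yield start, len(chunk), hashlib.sha1(chunk).hexdigest()
            start = i + 1

if __name__ == "__main__":
    random.seed(0)
    base = bytes(random.randrange(256) for _ in range(200_000))
    edited = base[:100] + b"INSERTED" + base[100:]  # insert near the front
    h1 = {h for _, _, h in chunk_boundaries(base)}
    h2 = {h for _, _, h in chunk_boundaries(edited)}
    print(f"chunks shared after insert: {len(h1 & h2)} of {len(h2)}")
```

Because the cut points are a function of the content itself, the insert only perturbs the chunks around it; downstream boundaries re-align and those chunks deduplicate against the previous backup, which is exactly what a fixed-offset scheme loses.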
williamstein over 10 years ago

I've long been a huge fan of bup, and have even contributed some code. I might be by far their single biggest user, since I host 96748 bup repositories at https://cloud.sagemath.com, where the snapshots for all user projects are made using bup (and mounted using bup-fuse).

Elsewhere in this discussion people note some shortcomings of bup, namely not having its own encryption and not having the ability to delete old backups. For my applications, lack of encryption isn't an issue, since I make the backups locally on a full-disk encrypted device and transmit them for long-term storage (to another full-disk encrypted device) only over ssh. The inability to easily delete old backups is also not an issue, since (1) I don't want to delete them (I want a complete history), and (2) the approach to deduplication and compression in bup makes it extremely efficient space-wise, and it doesn't get (noticeably) slower as the number of commits gets large. This is in contrast to ZFS, where performance can degrade dramatically if you make a large number of snapshots, or to other, much less space-efficient approaches where you *have* to regularly delete backups or you run out of space.

In this discussion people also discuss ZFS and deduplication. With SageMathCloud, the filesystem all user projects use is a deduplicated ZFS-on-Linux filesystem (most on an SSD), with lz4 compression and rolling snapshots (using zfssnap). This configuration works well in practice, since projects have limited quota, so there's only a few hundred gigabytes of data (so far, less than even 1TB), but the machines have quite a lot of RAM (50+GB) since they are configured for lots of mathematics computation, running IPython notebooks, etc.
Comment #8623441 not loaded
rlpb over 10 years ago

I wrote a very similar tool before I knew about bup: ddar (https://github.com/basak/ddar, with more documentation at http://web.archive.org/web/20131209161307/http://www.synctus.com/ddar/).

Others have complained here that bup doesn't support deleting old backups. ddar doesn't have such an issue. Deleting snapshots works just fine (all other snapshots remain).

I think the underlying difference is that ddar uses sqlite to keep track of the chunks, whereas bup is tied to git's pack format, which isn't really geared towards large backups. git's pack files are expected to be rewritten, which works fine for code repositories but not for terabytes of data.
Comment #8623726 not loaded
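A minimal sketch of the index design rlpb describes, with an assumed schema (content-hash-keyed chunks with reference counts, snapshots as ordered hash lists); this is illustrative only, not ddar's actual on-disk format:

```python
import hashlib
import sqlite3

# Assumed schema for illustration; not ddar's actual format.
SCHEMA = """
CREATE TABLE IF NOT EXISTS chunks (
    hash TEXT PRIMARY KEY, data BLOB, refs INTEGER NOT NULL DEFAULT 0);
CREATE TABLE IF NOT EXISTS snapshot_chunks (
    snapshot TEXT, seq INTEGER, hash TEXT,
    PRIMARY KEY (snapshot, seq));
"""

def store_snapshot(db, name, chunks):
    """Store a snapshot as an ordered list of chunk hashes, deduplicating
    chunk bodies across all snapshots."""
    for seq, chunk in enumerate(chunks):
        h = hashlib.sha256(chunk).hexdigest()
        db.execute("INSERT OR IGNORE INTO chunks (hash, data) VALUES (?, ?)",
                   (h, chunk))
        db.execute("UPDATE chunks SET refs = refs + 1 WHERE hash = ?", (h,))
        db.execute("INSERT INTO snapshot_chunks VALUES (?, ?, ?)",
                   (name, seq, h))

def delete_snapshot(db, name):
    """Deleting a snapshot just decrements refcounts; chunks still used
    by other snapshots survive untouched."""
    for (h,) in db.execute(
            "SELECT hash FROM snapshot_chunks WHERE snapshot = ?", (name,)):
        db.execute("UPDATE chunks SET refs = refs - 1 WHERE hash = ?", (h,))
    db.execute("DELETE FROM snapshot_chunks WHERE snapshot = ?", (name,))
    db.execute("DELETE FROM chunks WHERE refs <= 0")

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.executescript(SCHEMA)
    store_snapshot(db, "mon", [b"aaa", b"bbb"])
    store_snapshot(db, "tue", [b"aaa", b"ccc"])  # b"aaa" deduplicated
    delete_snapshot(db, "mon")                   # b"aaa" survives via "tue"
    print(db.execute("SELECT count(*) FROM chunks").fetchone()[0])  # 2
```

The point is that deleting a snapshot is just decrementing reference counts and dropping unreferenced rows; nothing like rewriting a multi-gigabyte git pack file is required.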
femto over 10 years ago

Is there anything out there that does continuous incremental backups to a remote location (like obnam, attic, ...) but allows "append only" access? That is, you are only allowed to add to the backup, and the network protocol inherently does not allow past history to be deleted or modified. Pruning old backups might be allowed, but only using credentials that are reserved for special use.

Obnam, attic and similar use a normal read/write disk area, without any server-side processing, so presumably an errant/malicious user is free to delete the entire backup?
Comment #8622007 not loaded
Comment #8623290 not loaded
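One way to get the property femto is asking for is to front the storage with a narrow service whose API simply has no delete or overwrite operation. A hypothetical sketch, not the protocol of any existing tool:

```python
import os

class AppendOnlyStore:
    """Write-once object store: clients may add new objects and list or
    read existing ones, but the API exposes no delete or overwrite, so a
    compromised client cannot destroy history. Pruning would live behind
    a separate administrative credential (not shown here)."""

    def __init__(self, root: str):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def _path(self, object_id: str) -> str:
        # Content-addressed IDs (e.g. hex digests) are assumed; reject
        # anything that could escape the store directory.
        if not object_id.isalnum():
            raise ValueError("invalid object id")
        return os.path.join(self.root, object_id)

    def put(self, object_id: str, data: bytes) -> None:
        # O_CREAT | O_EXCL fails if the object already exists, so this
        # API cannot be used to replace past data either.
        fd = os.open(self._path(object_id),
                     os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o444)
        with os.fdopen(fd, "wb") as f:
            f.write(data)

    def get(self, object_id: str) -> bytes:
        with open(self._path(object_id), "rb") as f:
            return f.read()

    def list(self):
        return sorted(os.listdir(self.root))
```

Pruning old backups would then be a separate, rarely-exercised path with its own credentials, exactly as the comment suggests.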
beagle3 over 10 years ago

Haven't seen this mentioned, but since bup de-duplicates chunks (and thus may take very little space: e.g., when you back up a 40GB virtual machine, each snapshot takes little more than the actual changes inside the virtual machine), every byte of the backup is actually very important and fragile, as it may be referenced from thousands of files and of snapshots. This is of course true for all dedupping and incremental backups.

However, bup goes one step further and has built-in support for "par2", which adds error correction. In a way, it efficiently re-duplicates chunks so that whichever one (or two, or however many you decide) breaks, you can still recover the complete backup.
Comment #8623258 not loaded
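The par2 integration beagle3 mentions is exposed in bup as `bup fsck -g` (generate recovery blocks). As a standalone illustration of the same idea, here is a sketch that shells out to the par2 CLI, which must be installed; the 5% redundancy figure is an arbitrary example:

```python
import subprocess
from pathlib import Path

def add_recovery_blocks(packdir: str, redundancy_pct: int = 5) -> None:
    """For each packfile, create par2 recovery files so that up to
    roughly redundancy_pct% of the bytes can rot and still be repaired.
    Requires the `par2` command-line tool."""
    for pack in Path(packdir).glob("*.pack"):
        subprocess.run(
            ["par2", "create", f"-r{redundancy_pct}",
             f"{pack}.par2", str(pack)],
            check=True)

def verify_and_repair(packdir: str) -> None:
    """Check every protected packfile and repair it from the recovery
    blocks if verification fails."""
    for par2file in Path(packdir).glob("*.par2"):
        result = subprocess.run(["par2", "verify", str(par2file)])
        if result.returncode != 0:
            subprocess.run(["par2", "repair", str(par2file)], check=True)
```

This is why "re-duplicates" is apt: the recovery blocks spend a little extra space to protect chunks that thousands of snapshots may share.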
derekp7 over 10 years ago

I was wondering if someone's done a side-by-side comparison of the various newer open-source backup tools? Specifically, I'm looking for performance, compression, encryption, and type of deduplication (file-level vs. block-level, and dedup between generations only vs. dedup across all files), along with the specifics of the implementation, since some of the tools don't really explain that too well, and any unique features.

The reason I ask is that I had a difficult time finding a backup tool that suited my own needs, so I wrote and open-sourced my own (http://www.snebu.com), and now that some people are starting to use it in production I'd like to get a deeper peer review to ensure quality and feature completeness. (I actually didn't think I'd be this nervous about people using any of my code, but backups are kind of critical, so I'd like to ensure it is done as correctly as possible.)
Comment #8623234 not loaded
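As a concrete illustration of the file-level vs. block-level distinction derekp7 asks about, here is a small sketch using an assumed fixed 4 KiB block size (real block-level tools usually refine this with content-defined chunk boundaries, as sketched earlier in the thread):

```python
import hashlib
import random

BLOCK = 4096  # assumed block size, for illustration only

def file_level_keys(data: bytes):
    """File-level dedup: one hash per file. A single changed byte yields
    a new hash, so the whole file is stored again."""
    return {hashlib.sha256(data).hexdigest()}

def block_level_keys(data: bytes):
    """Block-level dedup: hash each block independently, so an in-place
    edit only re-stores the blocks it touches."""
    return {hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)}

if __name__ == "__main__":
    random.seed(1)
    v1 = bytes(random.randrange(256) for _ in range(BLOCK * 100))
    v2 = bytearray(v1)
    v2[0] ^= 0xFF                 # edit one byte in place
    v2 = bytes(v2)
    print(len(file_level_keys(v1) & file_level_keys(v2)))    # 0 shared
    print(len(block_level_keys(v1) & block_level_keys(v2)))  # 99 of 100
```

The generations question is orthogonal: it is whether `block_level_keys` is matched only against the previous backup of the same file, or against a global index of every block ever stored.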
uint32 over 10 years ago

Like any good hacker I got tired of other solutions that didn't quite match my needs and made my own dropbox-like backup/sync using only rsync, ssh and encfs.

https://github.com/avdd/rsyncsync

Not polished, but it's working for me.

- only runs on machines I control
- server requirement is only rsync, ssh and coreutils
- basic conflict detection
- encfs --reverse to encrypt locally, store remotely
- history is rsnapshot-style hard links
- inspect history using sshfs
- can purge old history

Shell aliases showing how I use it are in my config repository.

encfs isn't ideal, but it's the only thing that does the job. Ideally I'd use something that didn't leak so much, but it doesn't exist.
Comment #8622493 not loaded
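The rsnapshot-style hard links in the list above work by hard-linking unchanged files between snapshot directories, so every snapshot is a browsable full tree while only changed files consume new space. A rough Python equivalent of what rsync achieves with --link-dest (illustrative, not the actual rsyncsync code):

```python
import filecmp
import os
import shutil

def snapshot(src, dest, prev=None):
    """Create a snapshot of `src` at `dest`. Files unchanged since the
    `prev` snapshot are hard-linked rather than copied, so each snapshot
    looks like a full tree but only changed files take new space.
    (rsync's --link-dest does the same thing more efficiently.)"""
    for dirpath, _dirnames, filenames in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        os.makedirs(os.path.join(dest, rel), exist_ok=True)
        for name in filenames:
            srcfile = os.path.join(dirpath, name)
            destfile = os.path.join(dest, rel, name)
            prevfile = os.path.join(prev, rel, name) if prev else None
            if prevfile and os.path.exists(prevfile) and \
                    filecmp.cmp(srcfile, prevfile, shallow=False):
                os.link(prevfile, destfile)      # unchanged: share the inode
            else:
                shutil.copy2(srcfile, destfile)  # changed or new: copy

# Usage sketch:
#   snapshot("/home/me", "/backups/2014-11-18", prev="/backups/2014-11-17")
# Purging old history is then just removing a snapshot directory:
# hard-linked files survive as long as any newer snapshot references them.
```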
xorcist over 10 years ago

I tried some backup software (of the rdiff variety, not the amanda variety) last year when I set up a small backup server for friends and family.

Obnam and bup seemed to work mostly the way I wanted, but obnam was by far the most mature tool, so this is what I chose in the end.

On the plus side, it provides both push and pull modes. Encryption and expiration work. The minus points are no Windows support and some horror stories about performance. Apparently it can slow to a crawl with many files. I haven't run into that problem despite hundreds of gigs in the backup set, but most are large files.

On the whole it's been very stable and unobtrusive during the time I've used it, but I haven't used it in anger yet. So a careful recommendation for obnam from me.
franole over 10 years ago

Does anyone use zpaq[1]? It has compression, deduplication, incremental backup, encryption, and backup versioning (unlike bup, with the ability to delete old ones), and it's written in C++. But I'm not sure about its performance over a network or how it compares with bup or rsync.

[1] http://mattmahoney.net/dc/zpaq.html
mynegation over 10 years ago

The inability to delete old backups and the lack of encryption are what stopped me from using bup.
Comment #8621472 not loaded
Comment #8621406 not loaded
Comment #8622254 not loaded
jlebar over 10 years ago

Adding a plug for git-annex: https://git-annex.branchable.com/

git-annex is for more than just backups. In particular, it lets you store files on multiple machines and retrieve them at will. This lets you do backups to e.g. S3, but it also lets you e.g. store your mp3 collection on your NAS and then easily copy some files to your laptop before leaving on a trip. Any changes you make while you're offline can be synced back up when you come back online.

You can prune old files in git-annex [1], and it also supports encryption. git-annex deduplicates identical files, but unlike Attic & co. it does not have special handling of incremental changes to files; if you change a file, you have to re-upload it to the remote server.

git-annex is actively developed, and I've found the developer to be really friendly and helpful.

[1] You can prune the old files, but because the metadata history (basically, the filename-to-hash mapping) is stored in git, you can't prune that. In practice you'd need a pretty big repository with a high rate of change for this to matter.

*Edited for formatting.*
Comment #8628918 not loaded
eli over 10 years ago

Is there an easy way to have the backups encrypted at rest? That's a nice feature of Duplicity. I don't have to worry about someone hacking my backup server, or borrowing my USB drive, having access to my data.
Comment #8621415 not loaded
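For bup itself, the common answers at the time were to keep the repository on an encrypted volume (as williamstein describes above) or to encrypt archives before they leave the machine. A minimal sketch of the latter using GPG symmetric encryption via subprocess; it assumes the `gpg` CLI is installed, and gpg 2.1+ may additionally need `--pinentry-mode loopback` for `--passphrase-fd` to work:

```python
import subprocess

def encrypt_archive(path: str, passphrase: str) -> str:
    """Symmetrically encrypt a backup archive with GPG so it is opaque
    at rest: whoever holds the backup server or the USB drive still
    needs the passphrase to read it."""
    out = path + ".gpg"
    subprocess.run(
        ["gpg", "--batch", "--yes", "--symmetric",
         "--cipher-algo", "AES256",
         "--passphrase-fd", "0", "--output", out, path],
        input=passphrase.encode(), check=True)
    return out

def decrypt_archive(path: str, passphrase: str, out: str) -> None:
    """Recover the plaintext archive from its encrypted form."""
    subprocess.run(
        ["gpg", "--batch", "--yes", "--passphrase-fd", "0",
         "--output", out, "--decrypt", path],
        input=passphrase.encode(), check=True)
```

This is essentially what Duplicity automates internally; doing it by hand trades convenience for keeping the backup tool itself out of the trust boundary.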
keehun over 10 years ago

This seems like a fantastic tool, and I would love to try it out. And it's free!

My personal obstacle in using a tool like bup is the backup space. I could definitely use this for on-site/external storage devices, but I also like to keep online/cloud copies. I currently use CrashPlan for that, which affords me unlimited space. If CrashPlan would let me use their cloud with bup, wow, I would switch in a heartbeat. Perhaps cloud backup tools could learn some tricks from bup.
Comment #8621231 not loaded
zanny over 10 years ago

If you want a fantastic graphical frontend for bup, there is kup, a KDE app: http://kde-apps.org/content/show.php/Kup+Backup+System?content=147465

It is really easy to set up which folders to back up and where, and I use it whenever a backup is simply: take all files from X, do the rolling backups at Y, and done.
rcthompson over 10 years ago

If you're considering using it, keep in mind the limitations: https://github.com/bup/bup/blob/master/README.md#things-that-are-stupid-for-now-but-which-well-fix-later

The one most likely to be a showstopper seems to be: "bup currently has no way to prune old backups."
Comment #8621119 not loaded
labianchin over 10 years ago

I've been using duply (http://duply.net/) for a while. It is a simple frontend for duplicity (http://duplicity.nongnu.org/). I find it very easy to set up. It also provides encrypted backups through GPG.
konradb over 10 years ago

There's also Burp, which is worth a look: http://burp.grke.org/index.html

Looking at http://burp.grke.org/burp2/08results1.html, it seems it can outperform bup in some situations.
fragmede over 10 years ago

> That is a dataset which is already deduplicated via copy-on-write semantics (it was not using ZFS deduplication because you should basically never use ZFS deduplication).

Can someone more experienced with ZFS say why?
Comment #8621424 not loaded
Comment #8621295 not loaded
0x0 over 10 years ago

This looks very interesting as a replacement for rdiff-backup. Hopefully the missing parts aren't too far away (expire old backups, restore from remote).
jshb over 10 years ago

Can this new tool do incremental real-time disk image backups like Acronis True Image?
Comment #8622090 not loaded
greensoap over 10 years ago

Given that old backups cannot be removed, isn't backuppc a better solution?
Comment #8621528 not loaded