Install rdiff-backup and librsync on RedHat Enterprise Linux 4

We’ve just installed rdiff-backup on some of our older RedHat Enterprise Linux 64-bit servers. We decided to install from the .tar.gz distributions; you can also build your own RPMs from these source packages (I won’t cover that here). There are a few pitfalls you’ll have to avoid.

First: librsync needs to be built with the 64-bit lib directory as the standard link path, and it needs to be compiled as a shared library.

To do this, provide configure with a few hints:

./configure LDFLAGS=-L/usr/lib64 --enable-shared --enable-lib64

Then run make and make install, followed by ldconfig to be sure that your linker cache has been updated (make install should do this, but just to be sure).
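
For reference, the complete librsync build then looks something like this (run from the unpacked librsync source directory, as root or with sudo for the install and ldconfig steps):

./configure LDFLAGS=-L/usr/lib64 --enable-shared --enable-lib64
make
make install
ldconfig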

The “.. can not be used when making a shared object; recompile with -fPIC” error for librsync will occur if you build it without --enable-shared, while “/usr/lib/libpopt.so: could not read symbols: File in wrong format” will occur if you build the library with the standard 32-bit libraries.

After this, rdiff-backup compiled as it should:

python setup.py install

After rdiff-backup builds its Python module and other small items, you might have to run ldconfig again (if you get “ImportError: librsync.so.1: cannot open shared object file: No such file or directory”, try this):

ldconfig

If you still get the above error, check that the directory where librsync was installed is present in either one of the files in /etc/ld.so.conf.d/ or in /etc/ld.so.conf itself. After adding the path to the library directory (which by default will be /usr/local/lib), run ldconfig again.
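
If the path is missing, something like this should sort it out (the file name librsync.conf is just an example; any file ending in .conf in that directory is picked up):

echo /usr/local/lib > /etc/ld.so.conf.d/librsync.conf
ldconfig
ldconfig -p | grep librsync

The last command lists the contents of the linker cache, so you can verify that librsync.so.1 is actually found now.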

rdiff-backup should now (hopefully) work as expected.

Ready for 2010: Check Your Backups

As usual, this is something you should do at least once a month, or preferably every week: go into your backups and check that they’re actually running and doing what they should be doing. If you think you’re doing backups but you’re unable to restore from them, you’re not doing backups!

As Jeff Atwood can attest: DO NOT TRUST THAT OTHER PEOPLE ARE DOING YOUR BACKUPS, not even your hosting provider, even if you pay them to do it for you. At least something good came out of that story: I think quite a few folks checked their own backups in the days after the disaster happened. The only person who really has any great interest in your backup working is YOU. Sure, a provider may lose your business by ruining your backups (and they should have a great interest in making their backups work), but the person who’ll be most affected by the loss of your backup is you (and the other people caught in the same shitstorm).

So here are a couple of quick things you could do to handle this a bit better than last year:

  1. Revisit all backup scripts and run them manually to check that they actually work. Maybe your SSH keys have been invalidated (Debian blacklisted SSH keys generated with its broken key generation code) (been there, done that), your SSH daemon has moved to another port and you’ve not updated your scripts (been there, done that), or something else has changed. Run them manually first, so that you catch any errors. It is not enough to assume that cron (or Scheduled Tasks) will tell you if anything weird happened.
  2. Make your backups travel out of the physical location of the server. This might be your house, a co-location facility, your own dedicated server room, etc., but it will never be safe from forces such as a large fire, a flood, or a train or an airplane hitting the building. Get the backup out of the building, and get it out of the city. Purchase online backup storage if needed; it’s dirt cheap at the moment and will still be in the future. $5-$20 a month for keeping all your data safe? It’s not even worth spending any time thinking about it. Get it running, NOW!
  3. Check that there’s enough free disk space on the server you’re making your backups to. It doesn’t help that everything works today if you’re going to run out of space in six days. Make the server send you a notice through e-mail and SMS if it gets anywhere close to 80% of total capacity. Install Munin, Nagios and/or Cacti to keep track of your servers. I’ve become a fan of how easy Munin is to set up, so if you haven’t taken a closer look at it, do it now.
  4. Check that the warning systems that tell you a backup has failed actually work. You could create a small file with random content in the location you’re backing up, and then check the md5sums of the source and the backed-up copy after the backup job has run. Make the alarm bells go off when the sums don’t match (see the sketch after this list).
  5. Make a static backup on a USB drive. USB drives have also become dirt cheap; get a 1TB one and make a backup right now. Store the drive somewhere other than your regular backups, such as at home. Run the backup each month. This will at least make you do the backup manually, and it might save your ass when you discover that your regular backup job didn’t back up what you thought it backed up.
  6. Backup jobs only back up what you tell them to back up. If you’re not backing up a complete system (and be sure to check that the backup includes files that might be kept open by the system, such as the hard drive image files of virtualized servers), remember that during the year you might have installed new applications that contain user-generated files, new database storage locations, etc. Be sure to check that you actually back up the things that are worth something to you. Maybe your new image application stores its metadata in another location than your previous one did (Lightroom vs. Aperture, perhaps). Check it.
  7. Do NOT write an article about backup routines and backup suggestions. This will result in catastrophic hard drive failure and backups that don’t work. You WILL jinx yourself. Oh sod.
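
As promised in point 4, here is a minimal sketch of such a canary check. The paths, the mail address and the assumption that the backup destination is a readable mirror (which it is with rdiff-backup) are mine; adjust them to your own setup.

#!/bin/sh
# Hypothetical canary check - paths and address below are examples.
CANARY=/var/www/canary.txt           # lives inside the area being backed up
MIRROR=/backups/var/www/canary.txt   # the same file inside the backup mirror
ADMIN=you@example.com

# Write 1 KB of random data into the canary before the backup runs.
dd if=/dev/urandom of="$CANARY" bs=1k count=1 2>/dev/null

# ... run your normal backup job here ...

# After the backup, the mirror copy should have the same checksum.
if [ "$(md5sum < "$CANARY")" != "$(md5sum < "$MIRROR")" ]; then
    echo "Backup canary mismatch on $(hostname)" | mail -s "BACKUP FAILED" "$ADMIN"
fi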

If you need a good suggestion for backup software, take a look at rdiff-backup.
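
To show how little there is to it, here is what a typical rdiff-backup run over SSH looks like; the host name and paths are just examples:

rdiff-backup /home/user backup@backuphost::/backups/home-user
rdiff-backup --remove-older-than 30D backup@backuphost::/backups/home-user
rdiff-backup -r 3D backup@backuphost::/backups/home-user/somefile /tmp/somefile

The first line mirrors the directory and keeps reverse increments on the backup host, the second prunes increments older than 30 days, and the third restores somefile as it looked three days ago.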

I have probably forgotten something very important here, so I’m trusting you guys to be my backup (yeah, right). Leave a comment with any good advice!

Now go out and create a random file somewhere on the locations you’re backing up.

Read all the articles in the Ready for 2010 series.