Ready for 2010: Upgrade Critical Software

You might remember that WordPress installation you did a couple of years ago, and the "upgrade now" message you've been ignoring for almost as long. This "Ready for 2010" message is sponsored by the "Get Your Software Up To Date" foundation.

Before starting down this road, it's a good idea to make sure your backups actually work. :-)

Yes, you should do these things all year round, but this is at least a perfect occasion to take the time to check out that old server you installed just to test stuff at home, the server where you're hosting your private blog, your mail server, etc. We geeks have some sort of weird ability to accumulate a couple of servers in strange places, forget about them – and still use them for something on a daily basis.

Update and Upgrade Your Distribution

Spend a couple of hours getting the distribution up to date. For Linux-based servers this usually means running the package system's update manager, such as apt-get or aptitude on Debian or Ubuntu-based servers, up2date on Red Hat, the SUSE update manager, etc. On Windows-based servers you'll be running Windows Update, getting all the recent fixes and patches for the core set of tools.
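On a Debian or Ubuntu box the routine looks something like this sketch (other distributions have their own equivalents, as noted above; the -s flag makes apt-get simulate, so you can preview the changes before committing):

```shell
#!/bin/sh
# Routine package-update sketch for Debian/Ubuntu; skip politely elsewhere.
command -v apt-get >/dev/null || { echo "no apt-get on this system"; exit 0; }

apt-get -s upgrade      # -s = simulate: preview what would be upgraded
# When the preview looks sane, apply it for real (as root):
#   apt-get update && apt-get upgrade
```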

If you’re feeling a bit adventurous you should also consider upgrading your distribution to a newer version if one is available. This will make newer versions of the software you’re running available, get new features into your applications and other Good Things. It might however break a few existing features, such as the layout of certain configuration files, new default values for some settings and other, small stuff. Be sure to set aside a couple of hours for this, so don’t do this just as you’re leaving the country for a couple of months.

Find – and Upgrade – Installed Web Applications

It’s very important to keep Web applications you’ve installed, such as WordPress, up to date. As their nature makes them available through the internet, they’re often a preferred vector for automatic attacks against your server. As soon as a remote exploit has been found, you’ll start noticing attempts to break your server in your web server’s access logs. Usually the attacks require some sort of mitigating factor that requires a particular configuration, but there’s always someone who get in trouble because of just that factor. That might be you!

WordPress (and several other web applications) also contain an automagical update mode, where you can update the software simply by clicking a link in the admin interface. Be sure to spend half an hour getting the automagical update to work where it’s available, and do it now!

Upgrade Embedded Software

Our lives are filled with devices running all sorts of embedded software: your mobile phone, your digital camera, your wireless router, your TV, your media player, your game consoles, your network attached storage (NAS), etc.

Check the manufacturer's website for each device (or, if you're a nerd like .. well, me, you might have replaced the firmware with an alternative) and see whether any updates are available. Several devices are also able to update themselves, so be sure to log in to the device (and discover that you don't remember the password you set a couple of years ago) and check whether you can just click a button to make everything go Happy Happy Joy Joy.

You might also have several applications installed on your mobile phone – check for updates and any critical fixes. You do not want your mobile phone leaking private details onto the world wide web or over Bluetooth.

Any other issues one should be sure to cover when doing this? Leave a comment below!

Read all the articles in the Ready for 2010-series

Ready for 2010: Check Your Backups

As usual, this is something you should do at least once a month – or better, once a week: go into your backups and check that they're actually running and doing what they should be doing. If you think you're doing backups but you're unable to restore from them, you're not doing backups!

As Jeff Atwood can attest: DO NOT TRUST OTHER PEOPLE TO DO YOUR BACKUPS – not even your hosting provider, even if you pay them to do it for you. At least something good came out of that story – I think quite a few folks checked their own backups in the days after the disaster. The only person who really has any great interest in your backups working is YOU. Sure, a provider may lose your business by ruining your backups (and they should have a great interest in making their backups work), but the person who'll be most affected by the loss of your backup is you (and other people caught in the same shitstorm).

So here's a couple of quick things you can do to handle this a bit better than last year:

  1. Revisit all backup scripts and run them manually to check that they actually work. Maybe your SSH keys no longer work (remember when Debian blacklisted SSH keys generated with a broken key generation function?) (been there, done that), maybe your SSH daemon has moved to another port and you haven't updated your scripts (been there, done that), or any of several other causes. Run them manually first, so that you catch any errors. It is not enough to assume that cron (or Scheduled Tasks) will tell you if anything weird happens.
  2. Make your backups travel away from the physical location of the server. That might be your house, a co-location facility, your own dedicated server room, etc., but it will never be guarded against natural forces such as a large fire, a flood, or a train or airplane hitting the building. Get it out of the building, and get it out of the city. Purchase online backup storage if needed – it's dirt cheap at the moment and likely to stay that way. $5-$20 a month for keeping all your data safe? It's not even worth spending time thinking about. Get it running, NOW!
  3. Check that there's enough free disk space on the server you're backing up to. It doesn't help that everything works today if you're going to run out of space in six days. Make the server send you a notice through e-mail and SMS if it gets anywhere close to 80% of total capacity. Install Munin, Nagios and/or Cacti to keep track of your servers. I've become a fan of how easy Munin is to set up, so if you haven't taken a closer look at it, do it now.
  4. Check that the warning systems that trigger when a backup fails actually work. You could create a small file with random content in the location you're backing up, and then compare the md5sums of the files after the backup job has run. Make the alarm bells go off when things don't match.
  5. Make a static backup on a USB drive. USB drives have also become dirt cheap – get a 1TB one and make a backup right now. Store the drive somewhere other than your regular backups, such as your home. Run the backup each month. This will at least make you do a backup manually, and it might save your ass when you discover that your regular backup job didn't back up what you thought it did.
  6. Backup jobs only back up what you tell them to. If you're not backing up the complete system (and be sure to check that the backup includes files that might be kept open by the system, such as the hard drive image files of virtualized servers), you might have installed new applications during the year that contain user-generated files, new database storage locations, etc. Be sure to check that you actually back up the things that are worth something to you. Maybe your new image application stores its metadata in a different location than your previous one (Lightroom vs. Aperture, perhaps). Check it.
  7. Do NOT write an article about backup routines and backup suggestions. This will result in catastrophic hard drive failure and backups that don't work. You WILL jinx yourself. Oh sod.
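The disk-space check in step 3 can be a tiny cron-driven script. A sketch (the checked path, the 80% threshold and the notification hook are all placeholders to adjust; wire the warning into mail(1) or your SMS gateway):

```shell
#!/bin/sh
# Warn when a filesystem is filling up. Here it just prints; hook the
# message into mail(1) or an SMS gateway in real use.

disk_usage_pct() {
    # df -P: portable one-line-per-filesystem output; field 5 is "Use%".
    df -P "$1" | awk 'NR == 2 { sub(/%/, "", $5); print $5 }'
}

usage=$(disk_usage_pct /)    # point this at your backup mount instead of /
if [ "$usage" -ge 80 ]; then
    echo "WARNING: disk at ${usage}% on $(hostname)"
fi
```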
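The canary idea from step 4 fits in a few lines of shell. A sketch using throwaway directories (in real use SRC is the tree you back up, DST the backup destination, and the cp stands in for your actual backup job):

```shell
#!/bin/sh
# Backup canary: drop a random file into the backed-up tree, then
# compare checksums after the backup job has run.
SRC=$(mktemp -d)      # stands in for the directory you back up
DST=$(mktemp -d)      # stands in for the backup destination

# Create the canary with unpredictable content.
head -c 1024 /dev/urandom > "$SRC/.backup-canary"

cp "$SRC/.backup-canary" "$DST/.backup-canary"   # your backup job goes here

# Ring the alarm bells if the checksums differ.
src_sum=$(md5sum "$SRC/.backup-canary" | awk '{print $1}')
dst_sum=$(md5sum "$DST/.backup-canary" | awk '{print $1}')
if [ "$src_sum" != "$dst_sum" ]; then
    echo "BACKUP CANARY MISMATCH -- investigate now" >&2
    exit 1
fi
echo "canary ok"
```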

If you need a good suggestion for backup software, take a look at rdiff-backup.
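A minimal rdiff-backup round trip, using throwaway local directories (in real use the destination would typically be a remote host reached over SSH, e.g. backuphost::/backup/home):

```shell
#!/bin/sh
# rdiff-backup round-trip sketch; skip politely if it isn't installed.
command -v rdiff-backup >/dev/null || { echo "rdiff-backup not installed"; exit 0; }

src=$(mktemp -d)
dst=$(mktemp -d)
echo "important data" > "$src/notes.txt"

rdiff-backup "$src" "$dst"              # first run creates a full mirror
rdiff-backup --list-increments "$dst"   # later runs add increments here

# Restore the most recent state of a single file:
rdiff-backup -r now "$dst/notes.txt" "$src/notes-restored.txt"
```

The nice part is that the destination is a plain mirror of the source plus an rdiff-backup-data directory of reverse increments, so even without the tool you can copy the current state straight out of the backup.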

I have probably forgotten something very important here, so I’m trusting you guys to be my backup (yeah, right). Leave a comment with any good advice!

Now go out and create a random file somewhere on the locations you’re backing up.

Read all the articles in the Ready for 2010-series.

ssh_exchange_identification: Connection closed by remote host

Suddenly encountered the error message ssh_exchange_identification: Connection closed by remote host while ssh-ing into one of the machines facing the public side of the almighty internet today. A quick Google search turned up an article saying that the problem usually solves itself. The reason is simple: since this box is available on the general internet, a storm of SSH connection requests hits it from time to time as compromised servers attempt to break in. When that happens, sshd may go into a defensive mode and simply refuse new connections instead of trying to handle them. This is good. It's also why it "just suddenly works again": the attack subsides, or some resources get freed up.

There may of course be other reasons for this error, but if the machine is reachable through other means, answers ping and worked an hour ago, this may well be the cause. Guess it's time to move the public SSH port to something other than 22.
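For reference, the throttling described above is controlled by sshd's MaxStartups setting, and both it and the port live in sshd_config. A sketch (the numbers are illustrative; check sshd_config(5) for your version's defaults):

```
# /etc/ssh/sshd_config
# Start randomly dropping unauthenticated connections once 10 are
# pending; refuse everything beyond 100.
MaxStartups 10:30:100

# Moving off port 22 sidesteps most automated scans.
Port 2222
```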

A Redirect Does Not Stop Execution

This is just a public service announcement for all the inexperienced developers who write redirects in PHP by issuing a call to header("Location: <new url>"). I see the same mistake over and over again, so just to make sure that people actually remember this:

A Call to Header (even with Location:) Does NOT Stop The Execution of the Current Application!

A simple example that illustrates this:


if (empty($_SESSION['authed']))
{
    header("Location: /login.php");
    /* note: no exit/die here – the script keeps running! */
}

if (!empty($_POST['text']))
{
    /* insert into database */
}

/* Do other admin stuff */

The problem here is that the developer does not stop script execution after issuing the redirect. When testing this code the result will be as expected: a redirect happens when a user who is not logged in tries to access the page. There is, however, a gaping security hole here, hidden not in what's in the file, but in what's missing. Since the developer does not terminate the script after the redirect, it will continue to run and do whatever the user asks of it. If the user submits a POST request with data (by sending the request manually), the information will be inserted into the database regardless of whether the user is logged in or not. The end result will still be a redirect, but the program will have executed all its regular execution paths based on the request. The fix is simple: call exit (or die()) immediately after sending the Location header.