A Gentle Introduction to Gearman and its Concepts

Gearman (an anagram of “Manager”) is a system for farming out work units to several different servers (or several processes on one server), allowing the calling code to do something completely different while the task is performed. Gearman is not intended for inter-process communication, but is a way to tell other processes that there is work available, and to let these processes (called workers) grab a piece of work for themselves.

One of the common themes that shows up in the Gearman IRC channel on Freenode is an attempt to understand what Gearman is and how everything fits together. I’ll try to explain the different concepts and what the different responsibilities of a working Gearman infrastructure are. There’s also a “Getting Started” guide on the Gearman web site with a bit of example code and installation instructions, so you might want to keep that open in another tab. So here we go: a simple Gearman tutorial explaining the concepts and not just throwing example code your way.

There are three core components of a Gearman installation. These are a client (someone requesting a task to be performed), a worker (someone performing a task) and the server (which coordinates tasks between clients and workers). All three components need to be running for you to be able to do something useful with Gearman. It’s worth noting that I’ll use the name “task” for a single item to be performed; you’ll also see this named ‘function’ (which is the name of the actual function the task asks to be performed – a server offers several “functions” that a client can call). Some APIs might also refer to a “task” as a collection of functions to be called. I’ll use the first definition: a task is a call to a function on the server, together with the data for the task and a task identifier. Several subsequent tasks will call the same function.

I’ll go a bit more in detail about each of these components, but it’s important that you understand how everything is interconnected first. An exchange of messages between the different parts can be illustrated as follows:

client -> server: ask server to perform a task
server acknowledges the request and assigns an identifier to it
server -> all workers: tell workers registered for the task that there is work to be performed
worker -> server: I'll perform the task you just told us about
server -> worker: ok, go ahead, here's the information about the task.
worker -> server: here's the result of the task performed
server -> client: here's the result of the task you asked me to get someone to do for you

The idea behind the server telling all the workers that there is work available is to let the worker that responds fastest actually get the task, as it’s assumed that this is the worker with the least load on the server it’s running on (as it responds quickly, the server is not busy doing other things). As I wrote above, the worker is the piece of code actually doing the work – the worker performs the task that a client has submitted to the Gearman server.

You’ll find that most of Gearman is designed according to the same principle – keep stuff simple. The server only needs to keep track of which workers perform which functions, and then let the workers grab a task when it becomes available.

The Gearman Client

In Gearman the client is the piece of code that connects to the server and asks for a task to be performed. This can be a dynamic web page (running in Python, Ruby, PHP, Perl or another language with a suitable Gearman library), a standalone application that connects to Gearman, a worker (to submit a new task or to divide the current task into several smaller tasks to be performed by other workers) or a combination of the above. The important part is that this is simply a client – it has a task that needs to be handled, and it’ll ask the Gearman server to find someone who can perform the task.

The client can be run in synchronous (blocking) or asynchronous (non-blocking) mode. The first will make the client wait until the task has been performed by a worker (and if no worker is available, it’ll wait indefinitely or until it reaches a timeout in the client), while the latter will simply fire-and-forget the task to the Gearman server (the server will confirm that the task has been received) and then go on its merry way afterwards. The Gearman server will provide a task identification value which the asynchronous client can use to query the current state of the task it asked to be performed (as long as the actual worker provides such updates).

A small example of how a client might work (using PHP):

<?php
$client = new GearmanClient();
$client->addServer('localhost', 4730);

$arguments = array(
    'url' => 'http://www.example.com/',
);

$client->addTaskBackground('fetchURL', json_encode($arguments));

$client->runTasks();

This will submit a request to a Gearman server running on the same machine as the script, asking for the function “fetchURL” to be run, and including an array of arguments to the function (you could simply include just the URL, but I find that this way is easier to extend in the future – and using JSON for data exchange makes the worker code more programming language independent). This code uses addTaskBackground to submit the task to be performed in an asynchronous manner. We’re not interested in the result of this task in this particular piece of code – the worker will either provide the result through other means (storing it in a database, in memcache, call an API function telling us that it’s finished) or perhaps we’re not interested in the result at all, just that we’ve attempted to perform the task. If you’re using the synchronous interface, the data returned from the worker will be returned to your code as the return value from the client.

As you can see, the client code is very, very simple. There is no actual work being performed here, we’re just telling the server that we’d like some work to be performed for us.
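
For completeness, here’s a rough sketch of what the two submission styles might look like with the PECL gearman extension for PHP. Treat it as an outline rather than gospel; method names vary a bit between library versions:

<?php
$client = new GearmanClient();
$client->addServer('localhost', 4730);

$workload = json_encode(array('url' => 'http://www.example.com/'));

// Synchronous: blocks until a worker has finished and returns its result.
// (doNormal() is called do() in older versions of the extension.)
$pageContents = $client->doNormal('fetchURL', $workload);

// Asynchronous: returns a job handle right away; the handle can later be
// used to ask the server about the state of the task.
$handle = $client->doBackground('fetchURL', $workload);
$status = $client->jobStatus($handle); // array(known, running, numerator, denominator)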

The Gearman Worker

The Gearman worker is where all the actual work (who would have guessed) is performed. This is the application that receives a notice that it has to wake up and do a bit of hard work, and which actually goes out and does just that. What kind of work it does depends on what you’re using Gearman for, but a couple of use cases could be to resize an image into smaller sizes (such as thumbnails), to convert an uploaded video into another format for a specific device, to send notification e-mails, to update an internal search engine such as Solr, and quite a few other tasks. As long as the task is not something the application needs to wait for in order to continue (no need to wait for an e-mail to be delivered before showing a “Your information has been saved” message), Gearman (and other message queues) is a valid solution.

You’ll run each worker as its own process. A worker can perform several different functions (although you should (usually) stay away from multi-threading to perform them at the same time). This means starting several copies of the same worker if you want to allow for more than one worker performing a task at the same time (i.e., if you want to send 30 e-mails in parallel), you’ll start each worker as separate processes (30 workers in that case). There are several daemons and frameworks that can help you manage the number of processes available depending on server and task load, such as supervisord and GearmanManager (a PHP daemon). Another possible solution is to use screen to start several workers, which also will allow you to attach to the output of any worker at any time.

How the worker performs its work is up to the worker itself. In most cases you’ll have to write a bit of code to expose your code as a Gearman function (so that clients can submit tasks to perform that function), but this code will usually just instantiate the worker framework from the Gearman library you’re using, letting you register what functions you’ll be able to perform and attaching callbacks telling the library what part of your own code should be called when a request to perform a task arrives.

A simple example modified from the Gearman Getting Started guide:

addServer("localhost", 4730);
$worker->addFunction("fetchURL", "fetch_url");

while ($worker->work()); 

function fetch_url($job)
{
    // decode as an associative array so $arguments['url'] works below
    $arguments = json_decode($job->workload(), true);

    if (!empty($arguments['url']))
    {
        print("Fetching " . $arguments['url'] . "\n");
        return file_get_contents($arguments['url']);
    }
}

The $worker->work() method call will wait until a job arrives, then execute the callback defined in the addFunction call. addFunction instructs the worker to tell the Gearman server that this worker is able to perform any tasks calling the “fetchURL” function. The callback provided to the library (“call this PHP function (‘fetch_url’) when tasks want to call ‘fetchURL'”) will then receive the job object containing information about the job (task) to be performed. The workload() method returns the workload – the information we included in addition to which function to call in the client example. The server receives the workload from the client and then sends it to the worker together with the task information.

Since our client submits the task through the asynchronous interface, it will not wait for the worker to return the web page contents; had it used ->do() or one of the other foreground methods in the PHP Gearman library, the contents returned by fetch_url() would have been handed back to the client as the result.
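
If you want the asynchronous client to be able to poll for progress (the jobStatus() call mentioned earlier), the worker has to report it. Here’s a rough sketch of what that might look like with the PECL gearman extension, using a hypothetical ‘fetchURLs’ function that downloads a list of URLs:

<?php
$worker = new GearmanWorker();
$worker->addServer('localhost', 4730);

$worker->addFunction('fetchURLs', function (GearmanJob $job) {
    $urls = json_decode($job->workload(), true);
    $results = array();

    foreach ($urls as $i => $url) {
        $results[$url] = file_get_contents($url);
        // Tell the server how far we've come; clients polling jobStatus()
        // will see these numbers as numerator / denominator.
        $job->sendStatus($i + 1, count($urls));
    }

    return json_encode($results);
});

while ($worker->work());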

The Gearman Server

The Gearman server used is usually the C version. There’s also a Perl version, but these days the C server is the one being actively developed. There’s not much to say about the server: you usually just start it and let it run by itself, doing what it was supposed to do all along.

I’ve got one simple suggestion if you’re just playing around with Gearman for the first time: start the server with the -vvv option. This will make gearmand a lot noisier, and will allow you to see clients registering themselves with the server, pinging the server and getting a bit more information about what’s happening inside the server process.

You’ll also want to provide an IP address that the gearman server should bind to – by default it binds to all interfaces, and since gearmand does not have any authentication built in by default, you don’t want to expose your server to the whole world.

Here’s an example of how we start gearmand at one of our servers:

screen -d -m -S gearmand /usr/local/sbin/gearmand -L 127.0.0.1 -p 4730 -vvv

You can drop the part related to screen if you just want to play with gearmand:

/usr/local/sbin/gearmand -L 127.0.0.1 -p 4730 -vvv

If you have gearmand in your path and not in the same location as us, drop /usr/local/sbin :-) This will bind gearmand to your localhost and use the default port (earlier the default port was something other than 4730, so we provide it just in case).

Making it all come together

The easiest way to play around with Gearman is to simply open three terminal windows: one for gearmand with logging turned on, one for your worker and its output, and the last window for a client sending a task request to gearmand (you can use the ‘gearman’ binary for this, as shown below; just be sure to include any data in an appropriate format). As you submit a task for a function that the worker has registered, you should see the worker pick it up and start processing the task as soon as possible. After a while (depending on how you’ve implemented your worker and what function it performs) the result should appear in your client.
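
For the fetchURL example above, submitting a test task from the command line could look something like this (the gearman client binary sends the data as the workload for the given function and prints whatever the worker returns):

gearman -f fetchURL '{"url": "http://www.example.com/"}'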

Our production setups usually use a web application (PHP or python/django) as the client in the above scenario. The functions are usually long running tasks, such as analysing GPS paths, encoding videos and downloading files or internal web site analytics (where we just want to get things logged and not wait for the actual logging to complete). The web application submits a request to gearmand as soon as a file has been received, with a payload of the path to the file to be processed. The workers perform their function and then store the information back into the database or to disk, then usually call a web service to tell the web application that the work has been performed and any internal state can be updated to include (and show) the result of the task.

Message queues (such as Gearman) have become one of the core technologies behind many modern web applications (and non-web applications for that matter), so there’s really no reason to avoid at least playing around a bit with one and adding another possible tool to your future options.

Solr: Replication not starting?

After upgrading our Solr-servers from 1.4.1 to 4.0-trunk (to be sure we were ready for the next version), I had trouble with getting replication to start again. It worked perfectly back with 1.4.1, but after upgrading to 4.0-trunk, it simply wouldn’t start.

Since I had to upgrade the machines individually (to allow the current index to continue serving requests), I removed the replication and directed all the traffic to the slave. After updating the master (which worked once I actually remembered to clean out the old webapps from Tomcat and add a few new settings) and reindexing, most of the traffic was directed back to it, and the slave was upgraded to the new Solr version. I turned on replication again, updated the configuration file with the needed settings and started the slave. Nothing happened. Weird.

Time to debug!

On any slaves there’s a “replication.properties” file in the data directory ($SOLRHOME/data) which contain information about the current replication status. This file were created, indicating that at least the replication was attempting to run. If you open the file in a text editor (or just cat it), you should be able to read a bit of meta information about the replication state.

replicationFailedAtList=1311072270004,1311072240006..
timesFailed=11

Seems like it’s trying, but for some reason it doesn’t work. First thing to check would be to grep for replication in the log on both the master and the slave, and see if there’s any requests being made at all. There might be, but the replication still doesn’t start.

Try fetching the current state yourself to see what response the master is serving. You can do this by using GET, wget or curl to make an HTTP request from the slave to the master Solr server, using the URL from “masterUrl” in the requestHandler for /replication in solrconfig.xml:

GET http://example.com/solr/replication?command=indexversion

This should respond with something close to:

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <lst name="responseHeader">
    <int name="status">0
    <int name="QTime">0
  </lst>
  <long name="indexversion">1310994445934
  <long name="generation">2
</response>

If “indexversion” is 0, this means that the master hasn’t triggered a replication yet, which may seem weird if you’ve just started the server and the slave doesn’t have any data at all.

The reason might be that the master has not been instructed to actually trigger a replication event (and unless a replication event has been triggered, the indexversion will be 0):

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit
    <str name="replicateAfter">startup
    <str name="replicateAfter">optimize

If you only have “commit” in the above list, a replication event will not be triggered unless you’ve actually performed a commit after the slave has connected for the first time. If you add “startup”, the replication will also be triggered when the master starts up (so that any connecting slaves will start replicating right away).

To fix the issue without restarting any nodes, issue a single commit to the master and watch as the slaves start replicating. To issue a commit through curl:

curl http://example.com/solr/update -H "Content-Type: text/xml" --data-binary '<commit />'

nginx and rewriting based on GET-parameter (URL-parameters/arguments)

Update: see the comment below from Alan Orth about how to implement this in a much cleaner way now!

When rewriting URLs in Apache through mod_rewrite, you have the possibility of using RewriteCond to only apply rewrites if the original resource has been called with a particular argument in the URL (such as “/file?oid=..”).

The solution in nginx was however a bit different, but thanks to Rewriting URL-params in nginx I got on the right track from the start.

In nginx this information is available through the $args variable, which will contain the complete query string. In Will’s example he replaces the query string, but I was interested in inserting a specific parameter instead (and keeping the previous query string, so I couldn’t just do the “set $args ..” that he does in his example).

My first try was to simply use $1 in the rewrite destination, but this didn’t work, as rewrite resets the patterns captured by the previous regular expression (since the rewrite source is itself a regular expression). By introducing my own temporary variable I was able to save the value captured from the GET parameter and use it in my rewrite destination.

The following example shows how I ended up solving the issue. This will rewrite the URL only if the “oid” parameter is found at the beginning of the query string when the URL is requested, and the location = /oldURL limits the rewrite to requests for the old resource.

location = /oldURL {
    if ($args ~ "^oid=(\d+)") {
        set $key1 $1;
        rewrite ^.*$  /newURL?param1=foo&param2=bar&key1=$key1 last;
    }
}

This will rewrite a request for /oldURL?oid=123&what=cheese to /newURL?param1=foo&param2=bar&key1=123&oid=123&what=cheese. If you want to exclude the previous arguments, you can instead set $args directly to key1=$1 and use just param1=foo and param2=bar in the rewrite destination:

        set $args key1=$1;
        rewrite ^.*$  /newURL?param1=foo&param2=bar last;

This might be cleaner, depending on what you’re trying to do.

SEVERE: Error in xpath:java.lang.RuntimeException: solrconfig.xml missing luceneMatchVersion

One of the things that changed from Solr 1.4.1 to 1.5+ was the introduction of a parameter that tells Solr / Lucene which index format compatibility version should be used when creating and reading its index files.

Solr now refuses to start if you do not provide this setting (if you’re upgrading a previous installation from 1.4.1 or earlier). The fix isn’t really straightforward, and you’ll probably have to recreate your index files if you’re just arriving at the scene with Solr / Lucene 3.2 and 4.0. Solr 3.0 (1.5) might be able to upgrade the files from the 2.9 version, but if you’re jumping from Lucene 2.9 to 4.0, the easiest solution seems to be to delete the current index and reindex (set up replication, disable replication from the master, query the slave while reindexing the master, etc., and you’ll have no downtime while doing this!).

You’ll need to add a parameter to your solrconfig.xml file as well in the <config> section.

LUCENE_CURRENT 

Other valid values are LUCENE_30, LUCENE_31, LUCENE_32 and LUCENE_40. These values represent specific versions of the index structure, while LUCENE_CURRENT will use the version depending on which particular release of Lucene you’re using. The version format will be upgraded automagically between most releases, so you’ll probably be fine by using LUCENE_CURRENT. If you however are trying to load index files that are more than one version older, you may have to use one of the other values. If you want to avoid any possible surprises when updating your Solr installation, you probably want to set this to one of the versioned values.

Updating a Solr Analysis Plugin from 1.4.1 (Lucene 2.9) to Solr / Lucene 4.0 (current trunk)

Three years and a couple of weeks ago I wrote a post about how to get started writing a simple Solr analysis plugin to handle incoming tokens and modify them in place when an update is requested.

Since then the whole version number structure of Solr has changed (and is now in sync with the underlying Lucene version), and not surprisingly, the current API has also been updated. This means that a few small changes are required to get your analysis plugins running on the current trunk of Lucene and Solr.

The main change is that what was previously named TermAttribute is now named CharTermAttribute; this means that any imports will have to change:

- import org.apache.lucene.analysis.tokenattributes.TermAttribute; 
+ import org.apache.lucene.analysis.tokenattributes.CharTermAttribute; 

Any declarations of TermAttributes will need to be CharTermAttributes instead:

- private TermAttribute termAtt; 
+ private CharTermAttribute termAtt; 
  public NorwegianNameFilter(TokenStream input) 
  { 
      super(input); 
-     termAtt = (TermAttribute) addAttribute(TermAttribute.class); 
+     termAtt = input.getAttribute(CharTermAttribute.class); 
  } 

We now fetch the attribute from the current TokenStream (not sure if the old way I did it has been deprecated, but this seems to be the suggested way now). We also change any references to TermAttribute.class to CharTermAttribute.class.

The actual TermAttribute interface has also changed, meaning we’ll have to change a few of the old method calls:

- termAtt.setTermLength(this.parseBuffer(termAtt.termBuffer(), termAtt.termLength())); 
+ termAtt.setLength(this.parseBuffer(termAtt.buffer(), termAtt.length())); 

.setTermLength() => .setLength()
.termBuffer() => .buffer()
.termLength() => .length()

The methods behave in the same manner as in the previous API: .buffer() retrieves a char array (char[]) which is the current buffer of the actual term and which you can modify in place, while .length() retrieves the current length of the used part of the buffer (the buffer can be larger than the part used) and .setLength() sets the new length (if you’re collapsing characters).

The new implementation of our analysis filter skeleton:

package no.derdubor.solr.analysis;

import java.io.IOException;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class NorwegianNameFilter extends TokenFilter
{
    private CharTermAttribute termAtt;

    public NorwegianNameFilter(TokenStream input)
    {
        super(input);
        termAtt = input.getAttribute(CharTermAttribute.class);
    }

    public boolean incrementToken() throws IOException
    {
        if (this.input.incrementToken())
        {
            termAtt.setLength(this.parseBuffer(termAtt.buffer(), termAtt.length()));
            return true;
        }
        
        return false;
    }
    
    protected int parseBuffer(char[] buffer, int bufferLength)
    {
        // Modify the term characters in buffer in place here and return the
        // new length of the term. Returning bufferLength unchanged keeps the
        // token as it was.
        return bufferLength;
    }
}

Complete Example of Tracking Start and End of Drag Operations with jQuery and Table Rows

After writing my previous post about “Keeping track of indexes while dragging in jQuery UI Sortable” I received a question about how this would work for sortable table rows. The code I included in the previous post should actually be enough to make everything work (.. as long as I understood the question correctly). I’ve created a minimal example as a complete implementation of the code from the previous post, and I’ve also included it as a live demo here.

This is based on jQuery 1.5.1 and jQuery UI 1.8.13. I’ve simply used the distribution versions for this example (you probably want to build your own UI bundle using the interface on the jQuery UI website).

Here’s a live demo that you can play with.

Inline javascript and WordPress could possibly be better friends.

The complete source for the minimal (but complete) example is included here:

(The full HTML listing was embedded at this point: a small page that loads jQuery 1.5.1 and jQuery UI 1.8.13, shows two status fields, “Dragging from index: None” and “Hovering index: None”, and contains a sortable table with a “Keys / Values” header row and six data rows, Key 1 / Value 1 through Key 6 / Value 6, wired up with the start and change handlers described in the previous post.)

Getting Mac OS X-fonts to Work on Windows (Through Ubuntu)

This will be a fairly technical post, so if you’re not in the mood or don’t know what a bash script or Ubuntu is, this might not be the post for you.

Useful Ubuntu packages: fontforge (install with apt-get install fontforge)
Useful Perl libraries: Mac::AppleSingleDouble (install with cpan install Mac::AppleSingleDouble)

First: Mac OS X stores the actual font data in a secondary data stream / file attached to the font file the user can see. This means the person sending you the font has to zip the contents before sending, so that both parts are included. The interesting files will be located in the __MACOSX folder, not in the folder that appears to contain the font (that folder will only contain files with a 0 byte size). The files in __MACOSX are the ones we’re going to town with.

Doing “file” on any file from this directory will probably yield “AppleDouble encoded Macintosh file”.

You can extract the actual contents of these files with the following awesome Perl snippet from a post at Super User:

perl -MMac::AppleSingleDouble -e 'for(@ARGV) {
    $a = new Mac::AppleSingleDouble($_);
    if(open $f, ">", $_.".rsrc") {
        binmode $f;
        print $f $a->get_entry(2);
        close $f;
    }
}' FILENAME_HERE

This will create a FILENAME_HERE.rsrc file in the current directory. This is the actual file that was stored as side channel data, without the other metadata associated with it.

In my case the resulting file was indicated to be a “Mac OSX datafork font, PostScript” by “file”, and after this I tried two things – one worked, one failed.

The first that failed:
In the archive there were also two other small data files. I tried extracting these and renaming the files to .pfb and .pfm (to use them as PostScript Type 1 fonts on Windows). This did not work. I’m not really sure where it failed, as I didn’t really feel like digging myself further down into the Windows font subsystem. I also tried running the files through fondu, but didn’t get anywhere (it seems I really should run this under Mac OS X to be able to generate the proper .pfb and .afm files).

The second that worked (sort of):
fontforge is able to import a PostScript/dfont file without the associated metadata file. This might break kerning and cause a few other issues (fontforge will guess, so it won’t be half bad), but it will give you a TTF file that actually works and can be used for most of the things you’d use the font for. It was good enough for our use. Simply start fontforge, open the file and select File -> Generate Fonts afterwards.
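
If you’d rather not click through the UI, fontforge’s scripting mode should be able to do the same conversion in one go; something along these lines (treat it as a starting point, the file names are of course up to you):

fontforge -lang=ff -c 'Open($1); Generate($2)' FILENAME_HERE.rsrc FILENAME_HERE.ttf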

Hopefully I won’t have to do this any time soon again. Fonts. Prrffftttt.

Keeping track of indexes while dragging in jQuery ui.sortable

While you’re handling a drag operation in the jQuery UI Sortable handler, there’s a few small issues that might almost drive you insane. These two solutions are based on information from various places on the net (such as a few discussions on Stack Overflow).

If you want to keep track of which location the item you’re dragging originated from, you can attach to the ‘start’ event in the Sortable configuration constructor. Use this event to attach the origin index as a data element:

start: function(event, ui) {
    ui.item.data('originIndex', ui.item.index());
},

If you want to find out which index the user is currently hovering above (to change the numbering of a list instantly), you can use the same method on the placeholder element (available as ui.placeholder, so to fetch the index the user is currently hovering over, call ui.placeholder.index()). There is however a gotcha here, as the original element is still present in the list – it’s just allowed to float freely around (the dragging action). We handle this by fetching the origin index and subtracting one from the current index if we’re past the location we started at.

You can handle these events in the ‘change’ event handler:

change: function (event, ui) {
    var originIndex = ui.item.data('originIndex');
    var currentIndex = ui.placeholder.index();

    if (currentIndex > originIndex)
    {
        currentIndex -= 1;
    }

    // do magic
}

rsyslogd stuck at eating 100% (or more) CPU after upgrading to Ubuntu Natty Narwhal

This might also happen after upgrading to maverick, so don’t ignore the explanation even if you’re a version or two behind (.. or reading this at a much later time and we’ve all switched to implants).

Apparently the reason for rsyslogd getting stuck is a mismatch between what the kernel provides and what rsyslogd expects. If rsyslogd fails to get access to elements in the proc file system (/proc/kmsg was suggested in a bug thread), it locks up and spews out error messages at a great rate.

From /var/log/syslog

Apr 29 08:04:08 ubuntu kernel: Cannot read proc file system: 1 - Operation not permitted.
Apr 29 08:05:08 ubuntu kernel: last message repeated 13208405 times
Apr 29 08:06:08 ubuntu kernel: last message repeated 13297682 times
Apr 29 08:07:08 ubuntu kernel: last message repeated 14241325 times
Apr 29 08:08:09 ubuntu kernel: last message repeated 14397034 times
Apr 29 08:08:43 ubuntu kernel: last message repeated 7302035 times

Yes, that’s about 62 million error messages in less than 5 minutes. This demands quite a bit of CPU.

The reason for this is that the kernel API changed somewhere between the kernel I was running (2.6.31) and the current Ubuntu version (2.6.38 in Natty, and possibly also the one in Maverick). When rsyslogd runs under the older kernel, everything goes haywire. The solution is to make sure your kernel is upgraded to the most recent version – and that you’re actually running it.

First, stop rsyslogd to make your system a bit more responsive again:

sudo service rsyslog stop

Updating Ubuntu should already have installed the newest kernel versions, but you might have told Ubuntu to keep the existing configuration file instead of overwriting it when you updated (something I do almost automatically, which left me a couple of kernel versions behind). You can re-run this process and get grub to use an updated kernel version:

sudo update-grub

This might ask you again about whether you want to overwrite the current configuration file, and will also allow you to inspect the differences between the currently installed file and the one that update-grub wants to install. See if there are any significant changes (pay attention to information such as which partitions to use for booting), and if it looks OK, allow the file to be replaced.

update-grub will then update your boot sequence with the new configuration file, and after rebooting (press ESC if you need to see the grub menu to make any changes), your new kernel should be running smoothly and rsyslogd should hopefully behave properly again.

Ubuntu Natty and Native Linux Spotify – “There is a problem with your sound card”

After updating to the most recent version of Ubuntu, The Natty Narwhal, Spotify decided it didn’t want to play music any longer. The only message in the UI itself was the usual “There is a problem with your sound card”, which isn’t very helpful if you want to actually try to find a solution.

Starting Spotify from the command line gave a few new messages, ending with:

E [snd:298] playbackError(12)

According to a thread on Get Satisfaction, this seems to be caused by a mismatch between different versions of the offline file cache (my naive guess is that Spotify uses the kernel identifier or some other machine settings to derive the key used for encrypting the offline storage; when this suddenly doesn’t match any longer, it refuses to play any music).

Closing Spotify and deleting the ~/.cache/spotify/offline.bnk file (~ expands to your home directory) solves the issue and allows Spotify to fill your ears yet again.
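
In other words, with Spotify closed:

rm ~/.cache/spotify/offline.bnk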

Update: This also seems to fix an issue where Spotify fails to get the external IP through UPnP (at least that’s one of the error messages):

E [upnp:521] ip: error getting external ip 0
I [http:840] Result 404 Not Found