Whoisi – Social Aggregation

Just found out about whoisi.com through John Resig, and it’s quite a nifty little app. It aggregates several feeds in the context of a single person. The application does not require any login, and builds on the collection of resources that people are able to gather about one particular individual. I’ve collected the available feeds for myself over at my whoisi.com page, so you can follow my Flickr page, my Twitter and my blog from one location. If you have any other resources where I’m contributing (maybe my YouTube feed?), feel free to add them.

I also suggest playing with the “random person” feature; I’ve had quite a bit of fun with that one today.

Number one feature: I don’t have to log in at Whoisi. Amazing. I just get a personalized link that I can email to myself for safekeeping or simply bookmark in my browser (or privately on a bookmarking site). No hassle. No email. No personal information. Instant win.

You can read more about the technical implementation over at Christopher Blizzard’s blog.

Using Apache httpd as Your Caching Solution

In this article I’m going to describe a novel solution for making cached versions of dynamic content available, while attempting to strike a balance between flexibility, performance and the freshness of the dynamic content. This solution is not suited for very dynamic content (where updates are better handled by rewriting the cached version whenever the content changes), but rather for situations where the dynamic content is built on request from a very large dataset. I have two use cases detailing applications I’ve been involved in building where I have applied this strategy. The same effect could be achieved with a caching service in front of the main service, but that would require installing a custom service, extra hardware and so on.

The WMS Cache

WMS (Web Map Service) is an OGC (Open Geospatial Consortium) specification which details a common set of parameters for querying a web service that returns a raster map image (a regular png/jpg/bmp file) for an area. The parameters include the bounding box (left, bottom, right, top), the layers (roads, rivers, etc.) and the size of the resulting image. The usual approach is to add a caching layer in the WMS itself: any generated image is stored to disk, and on each request the service checks whether the file already exists on disk before retrieving the data and rendering the image (and if it does exist, just returns the image data from disk instead). This will increase the rate of requests the WMS can answer and take load off the server for the most common requests. We are still left with the overhead of parsing the request, checking for the cached file and, most notably, loading our dynamic language of choice to respond to the request. An example of such a small and naive PHP application is included below.
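A minimal sketch of such a naive cache in front of the rendering code might look like this (the cache directory and the renderWmsImage() function are hypothetical placeholders, not from a real WMS):

```php
<?php
// Naive WMS cache sketch. The cache directory and renderWmsImage()
// are hypothetical placeholders.

// Sort the GET parameters so that ?bbox=..&x=.. and ?x=..&bbox=..
// produce the same cache key.
function wmsCacheKey(array $params)
{
    ksort($params);
    return md5(serialize($params));
}

function serveMapImage(array $params, $cacheDir = '/var/cache/wms')
{
    $cacheFile = $cacheDir . '/' . wmsCacheKey($params) . '.png';

    if (!is_readable($cacheFile)) {
        // Cache miss: render the image (the expensive part) and store it.
        $image = renderWmsImage($params); // hypothetical renderer
        file_put_contents($cacheFile, $image);
    }

    // Cache hit (or freshly rendered): serve the file from disk.
    header('Content-Type: image/png');
    readfile($cacheFile);
}
```

Note that even on a cache hit, PHP is still loaded and the script parsed for every single request; getting rid of that overhead is what the rest of this article is about.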

The next request that arrives with an identical set of GET parameters will still be served with the overhead of loading PHP, parsing the PHP script (less if you have APC or a similar cache installed), sorting the GET parameters (so that bbox=..&x=.. is treated the same as x=..&bbox=..), serializing them into a cache key, checking that the file exists on disk (you could simplify this to just doing a read and checking whether it succeeded), copying the data from disk to memory and then outputting it to the client (you could also use fpassthru() and friends, which may be more optimized for simply reading and outputting data, but that's not the main point here).

To relate this to our use case of the WMS, we need to take a closer look at how map services are used today. Before Google showed the world what a good map solution could look like with modern web technology, a map application presented an image to the user, allowed the user to click or drag the image to zoom or move, and then reloaded the entire page to generate the new image. If it took 0.5s to generate the image, that was not really a problem, as the dataset is not updated very often and it is very easy to do these operations in parallel across a cluster. When Google introduced Google Maps, they loaded 9 visible images (tiles) on the first view, and then started loading other tiles in the background (so that when you scroll the map, it looks like the images are already in place). If you run an interface similar to Google Maps against a regular WMS, most WMS servers would explode and take the whole 42U rack with them. Not a very desirable situation. The easy solution, if you have an unlimited supply of resources, disk space and money, is to simply generate all the available tiles up front, the same way Google has done it. This requires disk space for all the tiles, and will not allow your users to choose which layers they want included in the map (this is changing, as map services are starting to render each layer as a separate tile and then superimposing them in the user interface).

The problem is that most of us (actually, n - 1) are not Google, but then most of us do not build map services either. For those of us who do, we needed a way of living somewhere in between rendering our complete dataset to image tiles up front and running everything through the WMS. While working with Gunnar Misund at Østfold University College, I designed a simple scheme to let compatible clients fetch cached tiles automagically, while tiles that did not exist yet were generated on the fly by the backing WMS. The idea was to let Apache httpd handle the delivery of already generated and cached content, while our WMS could serve those areas which were viewed for the very first time (or where the layer selection was new). It would not be as fast as Google Maps for non-cached content, but it wouldn't require us to render our complete dataset to images up front either.

The solution was to let the JavaScript client request images through a custom URL that encodes the map parameters (such as the layer list and the center point of the image) as path components, in a path Apache can map directly to a file on disk.
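As a sketch, a URL following such a scheme could look like this (the exact layout here is hypothetical, carrying only the layer list and the center point):

```
http://example.com/wms/roads,rivers/59.1234/10.3456/image.jpg
```

which would decompose into:

```
wms          - the prefix handled by the tile cache
roads,rivers - the layers to include in the map
59.1234      - latitude of the center point
10.3456      - longitude of the center point
image.jpg    - the file Apache looks for on disk
```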


This is all good as long as image.jpg exists in the local path provided, so that Apache can serve the image as-is from that location. Apache httpd (or lighttpd and other "serve files fast!" httpds) is able to serve these static files in large numbers (it's what they were written for, you know..) with minimal overhead. The problem is what to do when the file does not exist, which will happen each time a resource is requested for the first time and we do not have a cached copy yet. The solution lies in assigning a PHP file as the handler for any 404 error (file not found). This is a well known trick used all over the field (such as handling the direct function lookups at www.php.net/functionname). In PHP you can use $_SERVER['REQUEST_URI'] to get the complete path of the request that ended in the 404.

The .htaccess file of the application is as simple as it gets:

ErrorDocument 404 /wms/handler.php
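A sketch of what such a handler.php could look like (the URL layout matches the hypothetical example above, and the renderTile() helper stands in for the actual call to the backing WMS):

```php
<?php
// handler.php - invoked by Apache for any 404 under the site.
// The URL layout and the renderTile() helper are hypothetical placeholders.

// Split e.g. /wms/roads,rivers/59.1234/10.3456/image.jpg into its parts.
function parseTileUri($uri)
{
    $parts = explode('/', trim((string) $uri, '/'));
    if (count($parts) != 5 || $parts[0] != 'wms' || $parts[4] != 'image.jpg') {
        return false;
    }
    return array(
        'layers' => $parts[1],
        'lat'    => (float) $parts[2],
        'lon'    => (float) $parts[3],
    );
}

function handleMissingTile($uri, $docroot)
{
    $tile = parseTileUri($uri);
    if (!$tile) {
        header('HTTP/1.0 404 Not Found');
        return;
    }

    // Generate the missing tile through the backing WMS (hypothetical call).
    $image = renderTile($tile['layers'], $tile['lat'], $tile['lon']);

    // Store it where Apache will find it directly on the next request ...
    $target = $docroot . $uri;
    @mkdir(dirname($target), 0755, true);
    file_put_contents($target, $image);

    // ... and serve it now, overriding the 404 status the error handler
    // would otherwise send.
    header('HTTP/1.0 200 OK');
    header('Content-Type: image/jpeg');
    echo $image;
}

// handleMissingTile($_SERVER['REQUEST_URI'], $_SERVER['DOCUMENT_ROOT']);
```

The first request for a tile pays the rendering cost; every later request for the same URL never reaches PHP at all, since the file now exists on disk.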

I've enclosed a simple specification which was written as a description of the implementation when the project was done in 2005.

Thumbnail generation

Generating thumbnails can be transformed into the same kind of problem. In the case where you need several different sizes of thumbnails (and different rescales for different applications), you can apply the same strategy. Instead of handing all the information to a resize script with the file name etc. as arguments, simply make the xsize and the ysize part of the URL. If the file exists in the path, it's served directly with no overhead; otherwise the 404 handler is invoked as in the previous example. The thumbnail can then be generated and saved in the proper location, and the world can continue to rotate at its regular pace.

This application can then be extended by adding new parameters to the URL, such as the resize method, whether the image should be stretched or zoomed, and other options.
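A sketch of how the 404 handler could pick the sizes out of the URL (the /thumbs/ layout and the helper names are made up for this example; the resize uses GD and assumes jpeg sources):

```php
<?php
// Hypothetical thumbnail 404-handler helpers. The /thumbs/<x>x<y>/... URL
// layout is made up for this example.

// /thumbs/200x150/photos/cat.jpg => size 200x150, source /photos/cat.jpg
function parseThumbUri($uri)
{
    if (!preg_match('#^/thumbs/(\d+)x(\d+)(/.+)$#', $uri, $m)) {
        return false;
    }
    return array('xsize' => (int) $m[1], 'ysize' => (int) $m[2], 'source' => $m[3]);
}

function createThumbnail($sourceFile, $targetFile, $xsize, $ysize)
{
    // GD-based resize; assumes jpeg source images.
    $src = imagecreatefromjpeg($sourceFile);
    $dst = imagecreatetruecolor($xsize, $ysize);
    imagecopyresampled($dst, $src, 0, 0, 0, 0, $xsize, $ysize,
                       imagesx($src), imagesy($src));
    imagejpeg($dst, $targetFile);
}
```

The handler would then save the result under the requested path, exactly as in the WMS example, so Apache serves it directly from then on.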


This is a very simple scheme that does not require any custom hardware or server software, and places itself neatly between having a caching front end server between the client and the application, and the hassle of generating the same file each and every time. It removes the overhead of invoking the script (PHP in this case) for each request, which means that you can serve files at a much greater rate and let your hardware do other, more interesting things instead.

Getting Scientific

In the latest edition of Birkebeiner’n, the Norwegian magazine sent to all participants of Birkebeinerrittet, a novel way of applying the scientific method is described. The writer starts off with “[this technique] does not have a scientifically proven effect on strength”, and then follows it up one sentence later with “since over 90% of athletes use [the technique], we can conclude that it has an effect”.

So there you have it, as long as most people do it, it works.

PHP Vikinger Notes

Just a few notes from PHP Vikinger, which was arranged by Derick Rethans in Norway today. Things went mostly smoothly and people in general seemed to have a very good time. These are just some of the random notes I made during the sessions.

All in all it was a good unconference, with a friendly and laid back tone and hopefully people got what they came for. Next time I’ll try to prepare a simple presentation on some interesting and hopefully not too familiar topic and actually contribute something too. We drove from Halden and Fredrikstad to Skien in the morning and back in the evening, which worked out quite OK, except for .. well, the lack of sleep in the morning. But everyone survived and managed to stay awake, so I conclude that the trip was a great success.

To sum it all up: a banana is a fruit and a tomato is a berry. You probably had to be there for that one.

Thanks for the unconference, and hopefully I’ll be able to attend more events in the future too.

UPDATE: Derick also has a writeup online from PHP Vikinger.

Derick and Sebastian Readying a Presentation

Two Books Down, One Up

I finished Sources of Power: How People Make Decisions a week or two ago, and after a bit of a reading hiatus for a week, I finally got started on Defensive Design for the Web from some of the guys at 37signals. Both books read very well and provided good insights into their subjects, and both have loads of examples that illustrate the points they’re trying to get across. For Defensive Design for the Web, this includes at least a hundred screenshots of different sites, with comments and comparisons with successful sites in the same genre. Being a very practical book, I read the entire thing in a couple of hours, and while I’m not completely sure what I’ve taken away from it, I suggest reading it again from time to time to refresh your thoughts on the subject.

Anyways, after finishing these two books, I’ve now picked up Information Retrieval: Algorithms and Heuristics (2nd Edition) as my new reading material. This is much more algorithmic and theoretical than my previous books, so hopefully I’ll not get bored after a few chapters.

A Redirect Does Not Stop Execution

This is just a public service announcement for all the inexperienced developers who write redirects in PHP by issuing a call to header(“Location: <new url>”). I see the same mistake time and time again, so just to make sure that people actually remember this:

A Call to Header (even with Location:) Does NOT Stop The Execution of the Current Application!

A simple example that illustrates this:


<?php
if (empty($_SESSION['authed'])) {
    header('Location: http://example.com/');
}

if (!empty($_POST['text'])) {
    /* insert into database */
}

/* Do other admin stuff */

The problem here is that the developer does not stop script execution after issuing the redirect. The result when testing this code will be as expected: a redirect happens when a user who is not logged in tries to access the page. There is however a gaping security hole here, hidden not in what’s in the file, but in what’s missing. Since the developer does not terminate the execution of the script after doing the redirect, the script will continue to run and do whatever the user asks of it. If the user submits a POST request with data (by sending the request manually), the information will be inserted into the database, regardless of whether the user is logged in or not. The end result will still be a redirect, but the program will execute all regular execution paths based on the request.
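The fix is simply to terminate the script right after sending the header. A corrected sketch (wrapped in a function here only to make the flow explicit; the session and POST handling mirror the example above):

```php
<?php
// Corrected flow: stop execution immediately after issuing the redirect.
function handleAdminRequest(array $session, array $post)
{
    if (empty($session['authed'])) {
        header('Location: http://example.com/');
        // Crucial: without this exit, everything below would still run
        // for unauthenticated requests.
        exit;
    }

    if (!empty($post['text'])) {
        /* insert into database */
    }

    /* Do other admin stuff */
    return 'admin page rendered';
}
```

A die() after the header() call does the same job; the important part is that no code runs after the redirect has been decided.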

There’s A Difference Between Being Inspired By and Outright Copying

A recently launched service that has gotten way too much attention in the Norwegian press today is Qpsa.no – another “WHAT ARE YOU DOING NOW” service. Their business idea? Reimplementing Twitter, copying their look and defending it with “It’s in Norwegian!”.

First, let’s get this out of the way; I have absolutely no problem with people implementing services similar to other people's, iterating on concepts, being inspired by them and, in general, standing on the shoulders of giants. I do however have a slight problem with people directly copying other people's success stories and passing them off as “revolutionizing social networks in Norway”. And although some news items have pointed out the link to Twitter, they have all failed to point out the fact that this is a blatant ripoff of the original service.

First, let’s start by looking at the main page:

Then we browse over to our international friends and discover their main page:

This seems way too similar to be a coincidence, so their “inspiration” seems quite obvious. In particular, notice the green box and the content of the page (nothing other than sign up). They’ve even managed to get quite a few birds in there too. The only thing they’re missing is the beautiful layout and look of Twitter, but hey, you can’t have it all. Or can you? On to the next comparison:

Versus our now known international friends yet again:

Hmm. This seems quite similar (thanks to Mister Noname for getting me a screenshot of his tweets). Guess it’s not really that much about actually trying to be original, but more about just copying what other people have created.

Their defense for creating the site: Twitter is not available in Norwegian, and Twitter is slow (Twitter doesn’t scale! [two bonus memepoints]). Yes, Twitter is slow from time to time, but this is where it gets even more interesting: neither of the people behind the application is a web developer, and they obviously haven't given much thought to why Twitter is slow.

My guess is that hopefully Twitter will register a formal complaint, or the people behind qpsa.no will get wiser and change their look. Maybe they’ll even try to actually build on the idea that created Twitter, and create something that is worth checking out. The largest Norwegian community, Nettby, has over 700,000 users (if we compare that to U.S. numbers, it would mean a US site with somewhere around 47 million active users), and could probably add this feature just as quickly. With an established user base of that size, it would be a steamroller against a bird. A twittering little creature.

Bonus points for using “It’s in Norwegian!” as the main defense, then naming your service after a Spanish phrase.

Support for Solr in eZ Components’ Search

The new release of eZ Components (2008.1) has added a new Search module, and the first implementation included is an interface for sending search requests and new documents to a Solr installation. An introduction can be found over at the eZ Components Search Tutorial. The new release of eZ Components requires at least PHP 5.2.1 (.. and if you’re not already running at least 5.2.5, it’s time to get moving. The world is moving. Fast.).

Writing a Solr Analysis Filter Plugin

Update: If you’re writing a plugin for a Solr version after 1.4.1 or Lucene 3.0+, be sure to read Updating a Solr Analysis Plugin to Lucene 4.0 as well. A few of the method calls used below have changed in the new API.

As we’ve been working on getting better results out of the phonetic search we’re currently doing at derdubor, I started writing a plugin for Solr to be able to return better search results when searching for Norwegian names. We’ve been using the standard phonetic filter from Solr 1.2 so far, using the double metaphone encoder for encoding a regular token as a phonetic value. The trouble with this is that a double metaphone value is four simple letters, which means that search words such as ‘trafikkontroll’ get the same encoding as ‘Dyrvik’. The latter is a name, while the first is a regular search string which would be better served through an article view. TRAFIKKONTROLL resolves to TRFK in double metaphone, while DYRVIK resolves to DRVK. T and D are considered similar, as are V and F, and voilá, you’ve got yourself a match in the search result, but not a visual one (or a semantic one, as the words have very different meanings).

To solve this, I decided to write a custom filter plugin which we could tune to names that are in use in Norway. I’ll post about the logic behind my reasoning with regard to wording, and hopefully the complete filter function we’re applying, but I’ll leave that for another post.

First you need a factory that’s able to produce filters when Solr asks for them:


package no.derdubor.solr.analysis;

import java.util.Map;

import org.apache.solr.analysis.BaseTokenFilterFactory;
import org.apache.lucene.analysis.TokenStream;

public class NorwegianNameFilterFactory extends BaseTokenFilterFactory
{
    Map args;

    public Map getArgs()
    {
        return args;
    }

    public void init(Map args)
    {
        this.args = args;
    }

    public NorwegianNameFilter create(TokenStream input)
    {
        return new NorwegianNameFilter(input);
    }
}
To compile this example yourself, put the file in no/derdubor/solr/analysis/ (which matches no.derdubor.solr.analysis; in the package statement), and run

javac no/derdubor/solr/analysis/NorwegianNameFilterFactory.java

(you’ll need apache-solr-core.jar and lucene-core.jar in your classpath to do this)

to compile it. You’ll of course also need the filter itself (which is returned from the create-method above):

package no.derdubor.solr.analysis;

import java.io.IOException;

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

public class NorwegianNameFilter extends TokenFilter
{
    public NorwegianNameFilter(TokenStream input)
    {
        super(input);
    }

    public Token next() throws IOException
    {
        return parseToken(this.input.next());
    }

    public Token next(Token result) throws IOException
    {
        return parseToken(this.input.next());
    }

    protected Token parseToken(Token in)
    {
        /* do magic stuff with in.termBuffer() here (a char[] which can be manipulated) */
        /* set the changed length of the new term with in.setTermLength(); before returning it */
        return in;
    }
}
You should now be able to compile both files:

javac no/derdubor/solr/analysis/*.java

After compiling the plugin, create a jar file containing your plugin. This will be the “distributable” version of your plugin, and should contain the .class files of your application.

jar cvf derdubor-solr-norwegiannamefilter.jar no/derdubor/solr/analysis/*.class

Move the file you just created (derdubor-solr-norwegiannamefilter.jar in the example above) into your Solr home directory. This is where you keep your bin/ and conf/ directories (the latter containing schema.xml, etc.). Create a lib directory in the Solr home directory; this is where your custom libraries will live, so copy the jar into this directory (lib/).

Restart Solr and check that everything still works as it should. If everything still seems normal, it’s time to enable your filter. In one of your analyzer chains in schema.xml, you can simply append a <filter> element to insert your own filter into the chain:
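A minimal example of such a chain (the tokenizer and lower case filter here are illustrative; the essential line is the one referencing the factory class compiled above):

```xml
<fieldType name="text" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="no.derdubor.solr.analysis.NorwegianNameFilterFactory"/>
  </analyzer>
</fieldType>
```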


Restart Solr again, and if everything still works as it should, you’re all set! Time to index some new data (remember that you’ll need to reindex the data for things to work as you expect, since no stored data is processed when you edit your configuration files) and commit it! Do a few searches through the admin interface to see that everything works as it should. I’ve used the “debug” option to .. well, debug .. my plugin while developing it. A very neat trick is to see which terms your filter expands a query to (if you set type=”query” in the analyzer section, it will be applied to all queries against that field), which will be shown in the first debug section when looking at the result (you’ll have to scroll down to the end to see this). If you need to debug things to a greater extent, you can attach a debugger or simply use the Good Old Proven Way of println! (these will end up in catalina.out in logs/ in your Tomcat directory). Good luck!

Potential Problems and How To Solve Them

  • If you get an error about incompatible class versions, check that you’re actually running the same (or a newer) version of the JVM (java -version) on your Solr search server as the one you use on your own development machine (use -source 1.5 -target 1.5 when compiling to force 1.5 compatible class files instead of 1.6).
  • If you get an error about missing configuration or something similar, or that Solr is unable to find the method it’s looking for (generally triggered by a ReflectionException), remember to declare your classes public! public class NorwegianNameFilter is your friend! It took at least half an hour before I realized what this simple issue was…

Any comments and followups are of course welcome!

Followup on The Missing Statistics in OpenX

After my previous post about the missing OpenX statistics because of crashed MySQL tables, I got a very nice and helpful comment from one of the OpenX developers. To put it in one single word: awesome. If you’re ever going to run a company and have to look after your customers (even if you release your project as open source), simply do that. People will feel that someone is looking out for them.

Anyways, as promised, this was supposed to be a follow-up. We didn’t manage to get the impression statistics back, but the missing clicks returned after repairing the tables. The tip from Arlen didn’t help either, but I have a few suggestions for how to make the script easier to use.

I was kind of perplexed about how to give the dates for the time interval it was going to rebuild the statistics for. The trick was to change two define()-s at the top of the code. Not very user friendly, so I made a small change to use $argc and $argv instead. That way I could do:

    php regenerateAdServerStatistics.php "2008-06-01 10:00:00" "2008-06-01 10:59:59"

instead of having to edit the file and change the defines every time. After making this simple change, I could also write a small helper script that runs the regenerateAdServerStatistics.php file for all the operation intervals within a larger interval (an operation interval is an hour, while my interval was several days).

So, here it is, regenerateForPeriod.php:


    <?php
    if ($argc != 3)
    {
        exit("Usage: php regenerateForPeriod.php <start datetime> <end datetime>\n");
    }

    $start = $argv[1];
    $end = $argv[2];

    $start_ts = strtotime($start);
    $end_ts = strtotime($end);

    if (!$start_ts || !$end_ts || ($start_ts >= $end_ts))
    {
        exit("Invalid dates.\n");
    }

    // Align the start to the beginning of its operation interval (a whole hour).
    $current_ts = mktime(date('H', $start_ts), 0, 0, date('m', $start_ts), date('d', $start_ts), date('Y', $start_ts));

    while ($current_ts < $end_ts)
    {
        system('php regenerateAdServerStatistics.php "' . date('Y-m-d H', $current_ts) . ':00:00" "' . date('Y-m-d H', $current_ts) . ':59:59"');
        $current_ts += 3600;
    }

This runs the regenerateAdServerStatistics.php script for each operation interval. If your ad server uses a larger interval than 3600 seconds, change the value to something more appropriate. Before doing this, you’ll want to remove the sleep(10) and the warning in regenerateAdServerStatistics.php, so that you don’t have to wait 10 seconds for each invocation of the script. I removed the warning and the sleep altogether, but hopefully someone will commit a command line parameter to regenerateAdServerStatistics.php that removes the delay. I didn’t have time to clean up the code and submit an official patch today, but if there is interest, leave a comment and I’ll consider it.