Making Solr Requests with urllib2 in Python

When making XML requests to Solr (a fulltext document search engine) for indexing, committing, updating or deleting documents, the request is submitted as an HTTP POST containing an XML document to the server. urllib2 supports submitting POST data by using the second parameter to the urlopen() call:

f = urllib2.urlopen("http://example.com/", "key=value")

The first attempt involved simply adding the XML data as the second parameter, but that made the Solr Webapp return a “400 – Bad Request” error. The reason for Solr barfing is that the urlopen() function sets the Content-Type to application/x-www-form-urlencoded. We can solve this by changing the Content-Type header:

# updateURL points at the Solr update handler; the POST body is the update XML,
# here a plain commit request as an example
solrReq = urllib2.Request(updateURL, '<commit/>')
solrReq.add_header("Content-Type", "text/xml")
solrPoster = urllib2.urlopen(solrReq)
response = solrPoster.read()
solrPoster.close()

Other XML-based Solr requests, such as adding and removing documents from the index, will also work once the Content-Type header is set this way.
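
As a rough sketch of what this looks like in practice (the URL, document ID and field names below are made up for the example), adding a document and then committing it is just a matter of posting the corresponding update XML with the same Content-Type trick:

import urllib2

updateURL = "http://localhost:8080/solr/update"

# post an <add> request containing a single example document
addXML = '<add><doc><field name="id">42</field><field name="title">An example document</field></doc></add>'
addReq = urllib2.Request(updateURL, addXML)
addReq.add_header("Content-Type", "text/xml")
print urllib2.urlopen(addReq).read()

# commit to make the new document searchable
commitReq = urllib2.Request(updateURL, '<commit/>')
commitReq.add_header("Content-Type", "text/xml")
print urllib2.urlopen(commitReq).read()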

The same approach will also allow you to use urllib2 to submit SOAP requests, XML-RPC requests and other protocols that require you to control the complete POST body of the request.

Modifying a Lucene Snowball Stemmer

This post is written for advanced users. If you do not know what SVN (Subversion) is, or if you're not ready to get your hands dirty, there might be something more interesting to read on Wikipedia. As usual. This is an introduction to getting a Lucene development environment running, then a Solr environment, and lastly creating your own Snowball stemmer. Read on if that seems interesting. The recipe for regenerating the Snowball stemmer (I’ll get back to that…) assumes that you’re running Linux. Please leave a comment if you’ve generated the stemmer class under another operating system.

When indexing data in Lucene (a fulltext document search library) and Solr (which uses Lucene), you may provide a stemmer (a piece of code responsible for “normalizing” words to their common form: horses => horse, indexing => index, etc.) to give your users better and more relevant results when they search. The default stemmer in Lucene and Solr uses a library named Snowball, which was created to do just this kind of thing. Snowball uses a small definition language of its own to generate parsers that other applications can embed to provide proper stemming.

By using Snowball, Lucene is able to provide a nice collection of default stemmers for several languages, and these work as they should in most cases. I did however have an issue with the Norwegian stemmer, as it fails to handle a complete category of words where the base form ends in the same letters as the plural endings of other words. An example:

one: elektriker
several: elektrikere
those: elektrikerene

The base form is “elektriker”, while “elektrikere” and “elektrikerene” are plural versions of the same word (the word means “electrician”, btw).

Let’s compare this to another word, such as “buss” (bus):

one: buss
several: busser
those: bussene

Here the base form is “buss”, while the two others are plural. Let’s apply the same rules to all six words:

buss => buss
busser => buss [strips “er”]
bussene => buss [strips “ene”]

elektrikerene => “elektriker” [strips “ene”]
elektrikere => “elektriker” [strips “e”]

So far everything has gone as planned: we’re able to search for ‘elektrikerene’ and get hits that say ‘elektrikere’. All is not perfect, though. We’ve forgotten one word, and evil forces will say that I forgot it on purpose:

elektriker => ?

The problem is that “elektriker” (which is the single form of the word) ends in -er. The rule defined for a word in the class of “buss” says that -er should be stripped (and this is correct for the majority of words). The result then becomes:

elektriker => “elektrik” [strips “er”]
elektrikere => “elektriker” [strips “e”]
elektrikerene => “elektriker” [strips “ene”]

As you can see, there’s a mismatch between the form that the plurals get chopped down to and the singular word.

My solution, while not perfect in any way, simply adds a few more terms so that we’re able to strip all these words down to the same form:

elektriker => “elektrik” [strips “er”]
elektrikere => “elektrik” [strips “ere”]
elektrikerene => “elektrik” [strips “erene”]

I decided to go this route as it’s a lot easier than building a large selection of words where no stemming should be performed. It might give us a few false positives, but the most important part is that it provides the same results for the singular and plural versions of the same word. When the search results differ for such basic items, the user gets a real “WTF” moment, especially when the two plural versions of the word are considered identical.

To solve this problem we’re going to change the Snowball parser and build a new version of the stemmer that we can use in Lucene and Solr.

Getting Snowball

To generate the Java class that Lucene uses when attempting to stem a phrase (such as the NorwegianStemmer, EnglishStemmer, etc), you’ll need the Snowball distribution. This distribution also includes example stemming algorithms (which have been used to generate the current stemmers in Lucene).

You’ll need to download the application from the snowball download page – in particular the “Snowball, algorithms and libstemmer library” version [direct link].

After extracting the file you’ll have a directory named snowball_code, which contains, among other files, the snowball binary and a directory named algorithms. The algorithms directory holds all the different default stemmers, and this is where you’ll find a good starting point for the changes you’re about to make.

But first, we’ll make sure we have the development version of Lucene installed and ready to go.

Getting Lucene

You can check out the current SVN trunk of Lucene by doing:

svn checkout http://svn.apache.org/repos/asf/lucene/java/trunk lucene/java/trunk

This will give you the bleeding edge version of Lucene, available for a bit of toying around. If you decide to build Solr 1.4 from SVN (as we’ll do further down), you do not have to build Lucene 2.9 from SVN, as it is already included pre-built.

If you need to build the complete version of Lucene (and all contribs), you can do that by moving into the Lucene trunk:

cd lucene/java/trunk/
ant dist (this will also create .zip and .tgz distributions)

If you already have Lucene 2.9 (.. or whatever version you’re on when you’re reading this), you can get by with just compiling the snowball contrib to Lucene, from lucene/java/trunk/:

cd contrib/snowball/
ant jar

This will create (if everything works as it should) a file named lucene-snowball-2.9-dev.jar (.. or another version number, depending on your version of Lucene). The file will be located in a subdirectory of the build directory at the root of the Lucene checkout (.. and the path will be shown after you’ve run ant jar): lucene/java/trunk/build/contrib/snowball/.

If you got the lucene-snowball-2.9-dev.jar file compiled, things are looking good! Let’s move on to getting the bleeding edge version of Solr up and running (if you have an existing Solr version that you’re using and do not want to upgrade, skip the following steps .. but be sure to know what you’re doing .. which, coincidentally, you should also know if you’re building stuff from SVN as we are. Oh the joy!).

Getting Solr

Getting and building Solr from SVN is very straightforward. First, check it out from Subversion:

svn co http://svn.apache.org/repos/asf/lucene/solr/trunk/ solr/trunk/

And then simply build the war file for your favourite container:

cd solr/trunk/
ant dist

Voilà – you should now have an apache-solr-1.4-dev.war (or something similar) in the build/ directory. You can test that this works by replacing your regular Solr installation (.. make a backup first ..) and restarting your application server.

Editing the stemmer definition

After extracting the snowball distribution, you’re left with a snowball_code directory, which contains the algorithms directory and, under that, norwegian (in addition to several other stemmer languages). My example here expands the definition used in the Norwegian stemmer, but the approach will work with all the included stemmers.

Open up one of the files (I chose the iso-8859-1 version, but I might have to adjust this to work for UTF-8/16 later. I’ll try to post an update in regards to that) and take a look around. The Snowball language is interesting, and you can find more information about it at the Snowball site.

I’ll not include a complete dump of the stemming definition here, but the interesting part (for what we’re attempting to do) is the main_suffix function:

define main_suffix as (
    setlimit tomark p1 for ([substring])
    among(
        'a' 'e' 'ede' 'ande' 'ende' 'ane' 'ene' 'hetene' 'en' 'heten' 'ar'          
        'er' 'heter' 'as' 'es' 'edes' 'endes' 'enes' 'hetenes' 'ens'
        'hetens' 'ers' 'ets' 'et' 'het' 'ast' 
            (delete)
        's'
            (s_ending or ('k' non-v) delete)
        'erte' 'ert'
            (<-'er')
    )
)

This simply means that any word ending matching one of the suffixes in the first three lines will be deleted (as given by the (delete) command after the definitions). The problem, given our example above, is that none of the lines will capture an "ere" or "erene" ending, which is what we need to actually solve the problem.

We simply add them to the list of defined endings:

    among(
        ... 'hetene' 'en' 'heten' 'ar' 'ere' 'erene' 'eren'
        ...
        ...
            (delete)

I made sure to add the definitions before the shorter versions (such as 'er'), but I don't think that's actually required.
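
Pieced together, the modified main_suffix block then reads like this (this is just the definition quoted above, with the three new endings spliced in before 'er'):

define main_suffix as (
    setlimit tomark p1 for ([substring])
    among(
        'a' 'e' 'ede' 'ande' 'ende' 'ane' 'ene' 'hetene' 'en' 'heten' 'ar'
        'ere' 'erene' 'eren'
        'er' 'heter' 'as' 'es' 'edes' 'endes' 'enes' 'hetenes' 'ens'
        'hetens' 'ers' 'ets' 'et' 'het' 'ast'
            (delete)
        's'
            (s_ending or ('k' non-v) delete)
        'erte' 'ert'
            (<-'er')
    )
)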

Save the file under a new file name so you still have the old stemmers available.

Compiling a New Version of the Snowball Stemmer

After editing and saving your stemmer, it's now time to generate the Java class that Lucene will use to produce the base forms of words. After extracting the snowball archive, you should have a binary file named snowball in the snowball_code directory. If you simply run this file with snowball_code as your current working directory:

./snowball

You'll get a list of options that Snowball can accept when generating the stemmer class. We're only going to use three of them:

-j[ava] Tell Snowball that we want to generate a Java class
-n[ame] Tell Snowball the name of the class we want generated
-o <filename> The filename of the output file. No extension.

So to compile our NorwegianExStemmer from our modified file, we run:

./snowball algorithms/norwegian/stem2_ISO_8859_1.sbl -j -n NorwegianExStemmer -o NorwegianExStemmer

(pardon the excellent file name stem2...). This will give you one new file in the current working directory: NorwegianExStemmer.java! We've actually built a stemming class! Woohoo! (You may do a few dance moves here. I'll wait.)

We're now going to insert the new class into the Lucene contrib .jar-file.

Rebuild the Lucene JAR Library

Copy the new class file into the version of Lucene you checked out from SVN:

cp NorwegianExStemmer.java <lucenetrunk>/contrib/snowball/src/java/org/tartarus/snowball/ext/

Then we simply have to rebuild the .jar file containing all the stemmers:

cd <lucenetrunk>/contrib/snowball/
ant jar

This will create lucene-snowball-2.9-dev.jar in <lucenetrunk>/build/contrib/. You now have a library containing your stemmer (and all the other default stemmers from Lucene)!

The last part is simply getting the updated stemmer library into Solr, and this will be a simple copy and rebuild:

Inserting the new Lucene Library Into Solr

From the build/contrib directory in Lucene, copy the jar file into the lib/ directory of Solr:

cp lucene-snowball-2.9-dev.jar lib/

Be sure to overwrite any existing files (.. and if you have another version of Lucene in Solr, do a complete rebuild and replace all the Lucene related files in Solr). Rebuild Solr:

cd solr/trunk/
ant dist

Copy the new apache-solr-1.4-dev.war (check the correct name in the directory yourself) from the build/ directory in Solr to your application server's home as solr.war (.. if you use another name, use that). This is webapps/ if you're using Tomcat. Remember to back up the old .war file, just to be sure you can restore everything if you've borked something.

Add Your New Stemmer In schema.xml

After compiling and packaging the stemmer, it's time to tell Solr that it should use the newly created stemmer. Remember that a stemmer works both when indexing and querying, so we're going to need to reindex our collection after implementing a new stemmer.

The usual place to add the stemmer is the definition of your text fields under the <analyzer>-sections for index and query (remember to change it BOTH places!!):
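
A minimal sketch of what this could look like (the field type name, tokenizer and lowercase filter here are just assumptions from a typical setup; the SnowballPorterFilterFactory lines are the ones to add or change):

<fieldType name="text_no" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SnowballPorterFilterFactory" language="NorwegianEx"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SnowballPorterFilterFactory" language="NorwegianEx"/>
  </analyzer>
</fieldType>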


Change NorwegianEx into the name of your class (without the Stemmer part; Lucene adds that for you automagically). After changing both locations (or more, if you have custom data types with their own indexing or query steps), you're ready to restart.

Restart Application Server and Reindex!

If you're using Tomcat as your application server this might simply be (depending on your setup and distribution):

cd /path/to/tomcat/bin
./shutdown.sh
./startup.sh

Please consult the documentation for your application server for information about how to do a proper restart.

After you've restarted the application server, you're going to need to reindex your collection before everything works as planned. You can, however, check that your stemmer behaves the way you intended already at this stage. Log into the Solr admin interface, select the extended / advanced query view, enter your query (which should now be stemmed differently than before), check the "debug" box and submit your search. The resulting XML document will show you the parsed version of your query in the parsedquery element.
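
If you'd rather hit the search handler directly than go through the admin form, a request along these lines (host, port and the query word are just examples from a local test setup) returns the same debug information:

http://localhost:8080/solr/select?q=elektrikere&debugQuery=on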

Download the Generated Stemmer

If you're just looking for an improved stemmer for Norwegian words (with the very, very simple changes outlined above, which might cause problems with UTF-8 (.. please leave a comment if that's the case)), you can simply download NorwegianExStemmer.java. Follow the guide above for adding it to your Lucene / Solr installation.

Please leave a comment if something is confusing or if you want free help. Send me an email if you're looking for a consultant.

Support for Solr in eZ Components’ Search

The new release of eZ Components (2008.1) has added a new Search module, and the first implementation included is an interface for sending search requests and new documents to a Solr installation. An introduction can be found over at the eZ Components Search Tutorial. The new release of eZ Components requires at least PHP 5.2.1 (.. and if you’re not already running at least 5.2.5, it’s time to get moving. The world is moving. Fast.).


Writing a Solr Analysis Filter Plugin

Update: If you’re writing a plugin for a Solr-version after 1.4.1 or Lucene 3.0+, you should be sure to read Updating a Solr Analysis Plugin to Lucene 4.0 as well. A few of the method calls used below has changed in the new API.

As we’ve been working on getting a better result out of the phonetic search we’re currently doing at derdubor, I started writing a plugin for Solr to be able to return better search results when searching for norwegian names. We’ve been using the standard phonetic filter from Solr 1.2 so far, using the double metaphone encoder for encoding a regular token as a phonetic value. The trouble with this is that a double metaphone value is four simple letters, which means that searchwords such as ‘trafikkontroll’ would get the same meaning as ‘Dyrvik’. The latter being a name and the first being a regular search string which would be better served through an article view. TRAFIKKONTROLL resolves to TRFK in double metaphone, while DYRVIK resolves to DRVK. T and D is considered similiar, as is V and F, and voilá, you’ve got yourself a match in the search result, but not a visual one (or a semantic one, as the words have very different meanings).

To solve this, I decided to write a custom filter plugin which we could tune to names that are in use in Norway. I’ll post about the logic behind my reasoning in regards to wording later and hopefully post the complete filter function we’re applying, but I’ll leave that for another post.

First you need a factory that’s able to produce filters when Solr asks for them:

NorwegianNameFilterFactory.java:

package no.derdubor.solr.analysis;

import java.util.Map;

import org.apache.solr.analysis.BaseTokenFilterFactory;
import org.apache.lucene.analysis.TokenStream;

public class NorwegianNameFilterFactory extends BaseTokenFilterFactory
{
    Map<String, String> args;

    public Map<String, String> getArgs()
    {
        return args;
    }

    public void init(Map<String, String> args)
    {
        this.args = args;
    }

    public NorwegianNameFilter create(TokenStream input)
    {
        return new NorwegianNameFilter(input);
    }
}

To compile this example yourself, put the file in no/derdubor/solr/analysis/ (which matches the package statement, no.derdubor.solr.analysis), and run

javac -source 1.6 -target 1.6 no/derdubor/solr/analysis/NorwegianNameFilterFactory.java

(you’ll need apache-solr-core.jar and lucene-core.jar in your classpath to do this)

to compile it. You’ll of course also need the filter itself (which is returned from the create-method above):

package no.derdubor.solr.analysis;

import java.io.IOException;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

public class NorwegianNameFilter extends TokenFilter
{
    public NorwegianNameFilter(TokenStream input)
    {
        super(input);
    }

    public Token next() throws IOException
    {
        return parseToken(this.input.next());
    }

    public Token next(Token result) throws IOException
    {
        return parseToken(this.input.next());
    }

    protected Token parseToken(Token in)
    {
        /* do magic stuff with in.termBuffer() here (a char[] which can be manipulated) */
        /* set the changed length of the new term with in.setTermLength(); before returning it */
        return in;
    }
}
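
The parseToken above is just a stub. Purely as an illustration (the real name-normalization rules aren't part of this post), a version that lower-cases the term buffer in place could look like the sketch below; note the null check, since input.next() returns null at the end of the token stream:

    protected Token parseToken(Token in)
    {
        if (in == null)
        {
            // end of the token stream; pass the null straight through
            return null;
        }

        char[] buffer = in.termBuffer();
        int length = in.termLength();

        // placeholder transformation: lower-case the term in place
        for (int i = 0; i < length; i++)
        {
            buffer[i] = Character.toLowerCase(buffer[i]);
        }

        // the length didn't change here, but set it explicitly as a reminder
        in.setTermLength(length);
        return in;
    }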

You should now be able to compile both files:

javac -source 1.6 -target 1.6 no/derdubor/solr/analysis/*.java

After compiling the plugin, create a jar file which contains your plugin. This will be the “distributable” version of your plugin, and should contain the .class files of your application.

jar cvf derdubor-solr-norwegiannamefilter.jar no/derdubor/solr/analysis/*.class

Create a lib directory in your Solr home directory. The Solr home directory is where you keep your bin/ and conf/ directories (the latter contains schema.xml, etc.). This lib/ directory is where your custom libraries will live, so move the file you just created (derdubor-solr-norwegiannamefilter.jar in the example above) into it.

Restart Solr and check that everything still works as it should. If everything still seems normal, it’s time to enable your filter. In one of your <filter>-chains, you can simply append a <filter> element to insert your own filter into the chain:
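
A sketch of what such a chain could look like (the field type name, tokenizer and lowercase filter are just placeholders from a typical setup; the last filter line is the one that matters):

<fieldType name="text_name" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="no.derdubor.solr.analysis.NorwegianNameFilterFactory"/>
  </analyzer>
</fieldType>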

Restart Solr again, and if everything still works as it should, you’re all set! Time to index some new data (remember that you’ll need to reindex the data for things to work as you expect, since no stored data is processed when you edit your configuration files) and commit it! Do a few searches through the admin interface to see that everything works as it should. I’ve used the “debug” option to .. well, debug .. my plugin while developing it. A very neat trick is to see what terms your filter expands to (if you set type=”query” in the analyzer section, it will be applied to all queries against that field), which will be shown in the first debug section when looking at the result (you’ll have to scroll down to the end to see this). If you need to debug things to a greater extent, you can attach a debugger or simply use the Good Old Proven Way of println! (these will end up in catalina.out in logs/ in your Tomcat directory). Good luck!

Potential Problems and How To Solve Them

  • If you get an error about incompatible class versions, check that you’re actually running the same (or a newer) version of the JVM (java -version) on your Solr search server as the one you use on your own development machine (use -source 1.5 -target 1.5 to force 1.5-compatible class files instead of 1.6 when compiling).
  • If you get an error about missing config or something similar, or that Solr is unable to find the method it’s searching for (generally triggered by a ReflectionException), remember to declare your classes public! public class NorwegianNameFilter is your friend! It took at least half an hour before I realized what this simple issue was…

Any comments and followups are of course welcome!

David Cummins on Fulltext Search as a Webservice

David Cummins has a neat little post up about replicating some of Solr’s features in a PHP-based solution. His post “Fulltext search as a webservice” should sound familiar to Solr’s approach from the title, and David describes how they built a similar solution on top of Zend_Search_Lucene (Solr also uses Lucene in the backend). Seems like it would be easier to just set up a dedicated Solr cluster instead, but hey, how often has “it would be easier to do something else” sparked innovation?

I’d also like to note that the coming Solr 1.3 supports php serialization as an output format, so you can just unserialize() the response from Solr. Should provide for even easier integration between PHP and Solr in the future. While on the subject, I’d like to suggest reading Stemming in Zend_Search_Lucene too, an introduction to adding filters to Zend_Search_Lucene. Also worth a look is the Search Tools in PHP presentation from phplondon.

Solving UTF-8 Problems With Solr and Tomcat

Came across an issue with searching for UTF-8 characters in Solr today; the search worked just as it should (probably since we’re using a phonetic field to search), but our facets and limitations didn’t work as they should. This happened as soon as we had a value containing a character outside ASCII (a code point above 127), in our case the Norwegian letters Æ, Ø or Å.

The solution was presented by Charlie Jackson on the solr-user mailing list and is quite simply to add URIEncoding="UTF-8" to the appropriate connector in the Tomcat server.xml file. This is also documented on the Solr on Tomcat page in the Solr wiki.
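
For reference, the connector entry then looks something like this (the port and the other attributes here are just the Tomcat defaults; URIEncoding is the important part):

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           URIEncoding="UTF-8" />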

Using Solrj – A short guide to getting started with Solrj

As Solrj – the Java interface for Solr – is slated to be released together with Solr 1.3, it’s time to take a closer look! Solrj is the preferred and easiest way of talking to a Solr server from Java (unless you’re using Embedded Solr). This way you get everything in a neat little package, and can avoid parsing and working with XML etc. directly. Everything is tucked neatly away under a few classes, and since the web generally lacks a good example of how to use Solrj, I’m going to share a small class I wrote for testing the data we were indexing at work. As Solr 1.2 is currently the most recent version available at apache.org, you’ll have to take a look at the Apache Solr nightly builds website and download the latest version. The documentation is also contained in the archive, so if you’re going to do any serious Solrj development, that’s the place to go.

Oh well, enough of that, let’s cut to the chase. We start by creating a CommonsHttpSolrServer instance, which we provide with the URL of our Solr server as the only argument in the constructor. You may also provide your own parsers, but I’ll leave that for those who need it. I don’t. By default your Solr installation is running on port 8080 and under the solr directory, but you’ll have to adjust this to your own setup. I’ve included the complete source file for download.

// imports needed by the snippets below (package names as of the solrj 1.3 tree)
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.FacetField;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrDocumentList;

class SolrjTest
{
    public void query(String q)
    {
        CommonsHttpSolrServer server = null;

        try
        {
            server = new CommonsHttpSolrServer("http://localhost:8080/solr/");
        }
        catch(Exception e)
        {
            e.printStackTrace();
        }

The next thing we’re going to do is to actually create the query we’re about to ask the Solr server about, and this means building a SolrQuery object. We simply instanciate the object and then start to set the query values to what we’re looking for. The setQueryType call can be dropped to use the default QueryType-handler, but as we currently use dismax, this is what I’ve used here. You can then also turn on Facet-ing (to create navigators/facets) and add the fields you want for those.

        SolrQuery query = new SolrQuery();
        query.setQuery(q);
        query.setQueryType("dismax");
        query.setFacet(true);
        query.addFacetField("firstname");
        query.addFacetField("lastname");
        query.setFacetMinCount(2);
        query.setIncludeScore(true);

Then we simply query the server by calling server.query, which takes our parameters, builds the query URL, sends it to the server and parses the response for us.

        try
        {
            QueryResponse qr = server.query(query);

This result can then be fetched by calling .getResults(); on the QueryResponse object; qr.

            SolrDocumentList sdl = qr.getResults();

We then output the information fetched in the query. You can change this to print all fields or other stuff, but as this is a simple application for searching a database of names, we just collect the fields for each hit and print the display name and phone number of each entry. Before we do that, we print a small header containing information about the query, such as the number of elements found and which element we started on.

            System.out.println("Found: " + sdl.getNumFound());
            System.out.println("Start: " + sdl.getStart());
            System.out.println("Max Score: " + sdl.getMaxScore());
            System.out.println("--------------------------------");

            ArrayList<HashMap<String, Object>> hitsOnPage = new ArrayList<HashMap<String, Object>>();

            for(SolrDocument d : sdl)
            {
                HashMap<String, Object> values = new HashMap<String, Object>();

                for(Iterator<Map.Entry<String, Object>> i = d.iterator(); i.hasNext(); )
                {
                    Map.Entry<String, Object> e2 = i.next();
                    values.put(e2.getKey(), e2.getValue());
                }

                hitsOnPage.add(values);
                System.out.println(values.get("displayname") + " (" + values.get("displayphone") + ")");
            }

After this we output the facets and their information, just so you can see how you’d go about fetching this information from Solr too:

            List<FacetField> facets = qr.getFacetFields();

            for(FacetField facet : facets)
            {
                List<FacetField.Count> facetEntries = facet.getValues();

                for(FacetField.Count fcount : facetEntries)
                {
                    System.out.println(fcount.getName() + ": " + fcount.getCount());
                }
            }
        }
        catch (SolrServerException e)
        {
            e.printStackTrace();
        }
    }

    public static void main(String[] args)
    {
        SolrjTest solrj = new SolrjTest();
        solrj.query(args[0]);
    }
}

And there you have it, a very simple application to test the interface against Solr. You’ll need to add the jar files from the lib/ directory in the solrj archive (and the solrj jar itself) to your classpath to compile and run the example.

Download: SolrTest.java