Apache 2.4 Failing to Recognize Virtual Hosts

After upgrading from Apache 2.2 to 2.4 in a Windows development environment, all my virtual hosts stopped working. The configuration files were clearly being read: adding syntax errors to them made Apache refuse to start, and invalid document roots produced notices, but the virtual host server names just wouldn't take effect.

After removing the Include conf/extra/httpd-vhosts.conf line, things suddenly started working! Weird. The reason seems to be that the default vhost in httpd-vhosts.conf uses _default_ instead of * as the VirtualHost address. I've used * in all my configuration files, and apparently Apache ignores the *-based virtual hosts if it encounters a _default_ entry in the VirtualHost configuration first. That seems weird, so if anyone has more information about what's causing this, I'm very interested.
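
For reference, the difference boils down to something like this (a minimal sketch; the addresses, server names and document roots are placeholders, not my actual configuration):

# The default vhost shipped in httpd-vhosts.conf:
<VirtualHost _default_:80>
    DocumentRoot "C:/www/default"
</VirtualHost>

# My own virtual hosts are all defined like this:
<VirtualHost *:80>
    ServerName dev.example.com
    DocumentRoot "C:/www/dev"
</VirtualHost>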

My setup now works again, so I’m not going to start digging into the source to find the reason for this just yet. :-)

AH01753: access check of ‘127.0.0.1’ to /xxx/ failed, reason: unable to get the remote host name

This error message can be caused by using an IP address instead of a hostname in a Require host statement in Apache 2.4+. After porting some old access rules to Apache 2.4 I had used Require host 127.0.0.1 instead of the correct Require ip 127.0.0.1. Switched it, and ahoy! It now works.
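
In context the change looks like this (a minimal sketch, assuming the rule lives in a Location or Directory block for the /xxx/ path from the error message):

<Location /xxx/>
    # Wrong: "host" makes Apache try a reverse DNS lookup, which fails for a bare IP
    # Require host 127.0.0.1

    # Correct: match the client IP address directly
    Require ip 127.0.0.1
</Location>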

svn: OPTIONS of ‘‘: SSL handshake failed: SSL error: A TLS warning alert has been received. ()

If your svn client suddenly starts complaining about something similar to

svn: OPTIONS of '...': SSL handshake failed: SSL error: A TLS warning alert has been received. (...)

The reason might be that the host name in the URL (https://example.com/ => example.com) doesn't match the ServerName setting of the SSL virtual host on your web server. You might not have configured ServerName at all; for Apache, add:

ServerName example.com

.. and restart the server. It might just work again!
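
The directive belongs inside the SSL virtual host itself (a minimal sketch; the certificate paths are placeholders):

<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/example.com.key
    # ... the rest of your SSL configuration
</VirtualHost>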

Solr: Replication not starting?

After upgrading our Solr servers from 1.4.1 to 4.0-trunk (to be sure we were ready for the next version), I had trouble getting replication to start again. It worked perfectly back with 1.4.1, but after upgrading to 4.0-trunk, it simply wouldn't start.

Since I had to upgrade the machines individually (to allow the current index to keep serving requests), I removed the replication and directed all traffic to the slave. After updating the master (which worked once I actually remembered to clean out the old webapps from Tomcat and add a few new settings) and reindexing, most of the traffic was directed back to it, and the slave was upgraded to the new Solr version. I turned replication back on, updated the configuration file with the needed settings and started the slave. Nothing happened. Weird.

Time to debug!

On any slave there's a "replication.properties" file in the data directory ($SOLRHOME/data) which contains information about the current replication status. This file had been created, indicating that replication was at least attempting to run. If you open the file in a text editor (or just cat it), you should be able to read a bit of meta information about the replication state.

replicationFailedAtList=1311072270004,1311072240006..
timesFailed=11

Seems like it's trying, but for some reason it doesn't work. The first thing to check is to grep for replication in the logs on both the master and the slave, and see whether any requests are being made at all. There might be, and replication still doesn't start.
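
For example, on a standard Tomcat install (the log location and file name depend on your setup):

grep -i replication $CATALINA_HOME/logs/catalina.out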

Try fetching the current state yourself to see what response the master is serving. You can do this by using "GET", "wget" or "curl" from the slave to make an HTTP request against the URL given as "masterUrl" in the /replication requestHandler in solrconfig.xml:

GET http://example.com/solr/replication?command=indexversion

This should respond with something close to:

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">0</int>
  </lst>
  <long name="indexversion">1310994445934</long>
  <long name="generation">2</long>
</response>

If “indexversion” is 0, this means that the master hasn’t triggered a replication yet, which may seem weird if you’ve just started the server and the slave doesn’t have any data at all.

The reason might be that the master has not been instructed to actually trigger a replication event (and unless a replication event has been triggered, the indexversion will be 0):

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="replicateAfter">startup</str>
    <str name="replicateAfter">optimize</str>
  </lst>
</requestHandler>

If you only have “commit” in the above list, a replication event will not be triggered unless you’ve actually performed a commit after the slave has connected for the first time. If you add “startup”, the replication will also be triggered when the master starts up (so that any connecting slaves will start replicating right away).
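
For completeness, the slave side only needs to point back at the master (a minimal sketch; the master URL and poll interval are placeholders for whatever your setup uses):

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://example.com/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>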

To fix the issue without restarting any nodes, issue a single commit to the master and watch as the slaves start replicating. To issue a commit through curl:

curl http://example.com/solr/update -H "Content-Type: text/xml" --data-binary '<commit />'

nginx and rewriting based on GET-parameter (URL-parameters/arguments)

Update: see the comment below from Alan Orth about how to implement this in a much cleaner way now!

When rewriting URLs in Apache through mod_rewrite, you can use RewriteCond to only apply a rewrite if the original resource was requested with a particular argument in the URL (such as "/file?oid=..").

The solution in nginx was however a bit different, but thanks to Will's post "Rewriting URL-params in nginx" I got on the right track from the start.

In nginx this information is available through the $args variable, which contains the complete query string. In Will's example he replaces the whole query string, but I was interested in inserting a specific parameter instead (while keeping the previous query string), so I couldn't just do the "set $args .." that he does in the example.

My first try was to simply use $1 in the rewrite destination, but this didn't work, as rewrite resets the captures from the previous regular expression (since the rewrite source is itself a regular expression). By introducing my own temporary variable I was able to save the value captured from the GET parameter and use it in my rewrite destination.

The following example shows how I ended up solving the issue. This will rewrite the URL only if the “oid” parameter is found at the beginning of the query string when the URL is requested, and the location = /oldURL limits the rewrite to requests for the old resource.

location = /oldURL {
    if ($args ~ "^oid=(\d+)") {
        set $key1 $1;
        rewrite ^.*$  /newURL?param1=foo&param2=bar&key1=$key1 last;
    }
}

This will rewrite a request for /oldURL?oid=123&what=cheese to /newURL?param1=foo&param2=bar&key1=123&oid=123&what=cheese. If you want to exclude the previous arguments, you can instead set $args directly to key1=$1 and use just param1=foo and param2=bar in the rewrite destination:

        set $args key1=$1;
        rewrite ^.*$  /newURL?param1=foo&param2=bar last;

This might be cleaner, depending on what you’re trying to do.
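
Presumably the cleaner approach referenced in the update above relies on nginx's built-in $arg_<name> variables, which expose each query parameter directly (a sketch, not the exact configuration from the comment):

location = /oldURL {
    if ($arg_oid != "") {
        rewrite ^.*$ /newURL?param1=foo&param2=bar&key1=$arg_oid last;
    }
}

Since $arg_oid holds the value of the oid parameter on its own, there's no need for the temporary variable or the manual regular expression match.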

mod_jk and Internal Server Error (HTTP 500)

We've extended our previous single Solr node to a few nodes in a cluster. This allows us to run queries against one node while updating or configuring another, distribute the load across several servers (although we're not there yet load-wise) and handle any out-of-memory or other critical errors.

While Solr supports querying several cores or distributing queries internally, we decided to move the load balancing and handling of failed nodes higher up in the hierarchy. We're now doing simple load balancing with mod_jk in our existing Apache-based environment, which also handles failed servers without any administrator interaction. We were already using mod_jk for our main web frontend, and since we use Tomcat as our application container for Solr, things should be a breeze!

Well, no. After copying our existing mod_jk setup, configuring our new workers and restarting Apache, all I got was the well-known 500 INTERNAL SERVER ERROR. Here's the worker configuration file:

worker.list=loadbalancer,status

worker.solr1.port=8009
worker.solr1.host=10.0.0.4
worker.solr1.type=ajp13
worker.solr1.lbfactor=1
worker.solr1.cachesize=10

worker.solr2.port=8009
worker.solr2.host=10.0.0.5
worker.solr2.type=ajp13
worker.solr2.lbfactor=4
worker.solr2.cachesize=10

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=solr1,solr2
worker.loadbalancer.sticky_session=0

worker.status.type=status

This gives us two Solr workers and one status worker (the status worker provides a simple web interface for enabling, disabling and viewing the status of the other workers), configured with 1:4 load balancing (the second server has quite a bit more memory available for Solr).

I provided the configuration of the workers through the JkWorkersFile configuration setting (in a VirtualHost block… don’t do that):

JkWorkersFile conf/workers.properties

I also enabled debug logging to try to find the problem (still in a VirtualHost block):

JkLogFile logs/mod_jk.log
JkLogLevel debug
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"

Other mod_jk settings (in the VirtualHost block) were:

JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkRequestLogFormat "%w %V %T"
JkShmFile logs/jk.shm
JkMount /* loadbalancer

<Location /jkstatus>
    JkMount status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>

Still no solution. Peeking at the log files mod_jk provided, I was able to deduce the following:

[debug] map_uri_to_worker::jk_uri_worker_map.c (525): Attempting to map context URI '/jkstatus'
[debug] map_uri_to_worker::jk_uri_worker_map.c (550): Found an exact match status -> /jkstatus
[debug] jk_handler::mod_jk.c (1920): Into handler jakarta-servlet worker=status r->proxyreq=0
[debug] wc_get_worker_for_name::jk_worker.c (111): did not find a worker status
[info]  jk_handler::mod_jk.c (2071): Could not find a worker for worker name=status

This indicates that mod_jk was unable to find a worker matching the name I provided in the JkMount statement above: status. Weird. I added some garbage characters to the "JkWorkersFile" setting, and Apache complained that it was unable to find the workers file. Changed it back, reloaded, and still nothing. It was apparently unable to find the worker. The URI mapping did however work, since it tried to launch a worker.

Looking back at the startup sequence of mod_jk, I found the following in the log:

[debug] build_worker_map::jk_worker.c (236): creating worker ajp13
[debug] wc_create_worker::jk_worker.c (141): about to create instance ajp13 of ajp13
[debug] wc_create_worker::jk_worker.c (154): about to validate and init ajp13
[debug] ajp_validate::jk_ajp_common.c (1922): worker ajp13 contact is 'localhost:8009'
[debug] ajp_init::jk_ajp_common.c (2047): setting endpoint options:
[debug] ajp_init::jk_ajp_common.c (2050): keepalive:        0
[debug] ajp_init::jk_ajp_common.c (2054): timeout:          -1
[debug] ajp_init::jk_ajp_common.c (2058): buffer size:      0
[debug] ajp_init::jk_ajp_common.c (2062): pool timeout:     0
[debug] ajp_init::jk_ajp_common.c (2066): connect timeout:  0
[debug] ajp_init::jk_ajp_common.c (2070): reply timeout:    0
[debug] ajp_init::jk_ajp_common.c (2074): prepost timeout:  0
[debug] ajp_init::jk_ajp_common.c (2078): recovery options: 0
[debug] ajp_init::jk_ajp_common.c (2082): retries:          2
[debug] ajp_init::jk_ajp_common.c (2086): max packet size:  8192
[debug] ajp_create_endpoint_cache::jk_ajp_common.c (1959): setting connection pool size to 1 with min 0

It took a bit of time, but I realized that this tells me that mod_jk created _a default_ worker named ajp13. Apparently it was not reading my workers file at all, yet it still complained if I changed the file name. You'd think that a setting which complains when the file doesn't exist would actually load the file when it does. But .. well. After an hour of trying to figure out why the workers didn't load, reducing the workers file to a minimal example and trying with just a single status worker, I concluded that my workers file was correct, and that mod_jk obviously found it when it attempted to load it.

Then I suddenly discovered the small notice in the mod_jk configuration manual:

JkWorkersFile: This directive is only allowed once. It must be put into the global part of the configuration.

JkWorkersFile cannot be defined in a <VirtualHost> section. It will NOT complain if you do, it will simply never define any workers. It will, however, complain if the file doesn't exist, even though it never actually tries to load it.

Confusing.

Moving the JkWorkersFile statement out of the <VirtualHost> block and into the global server configuration (next to the LoadModule statement) solved the issue. The same applies to JkWorkerProperty.
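
In practice the global part of the configuration ends up looking something like this (a sketch; the module and file paths depend on your installation):

# httpd.conf - global scope, outside any <VirtualHost> block
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties

# JkMount, JkLogFile and the other per-site settings can stay inside the <VirtualHost> blocks.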

Solr, Tomcat and HTTP/1.1 505 HTTP Version Not Supported

During today's hacking about I came across the above error from our Solr query library. The error indicates that some part of Tomcat was unable to parse the "GET / HTTP/1.1" request line, more precisely that it could not determine the "HTTP/1.1" part. A problem like this could be introduced by an unescaped space in the query string, so that the request becomes "GET /?a=b c HTTP/1.1". After running both the working and the non-working query through ngrep and wireshark, this did however not seem to be the problem. My spaces were properly escaped using plus signs (GET /?a=b+c HTTP/1.1).

There does however seem to be a problem (at least with our version of Tomcat, 6.0.20) where the plus signs are decoded before the request is handed off to the code that parses the request line, so even though the space is properly escaped with "+", it still barfs.

The solution:

Use %20 to escape spaces instead of + signs; simply adding str_replace(" ", "%20", ..); in our query layer solved the problem.
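
In PHP terms the same thing can be expressed like this (a sketch; our actual query layer looks different):

<?php
$value = "a b";

// urlencode() escapes the space as "+", which this Tomcat version mishandles:
echo urlencode($value) . "\n";               // a+b

// rawurlencode() (RFC 3986) uses %20 instead, which works:
echo rawurlencode($value) . "\n";            // a%20b

// ...as does replacing the spaces by hand, as mentioned above:
echo str_replace(" ", "%20", $value) . "\n"; // a%20b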

SOLR: java.io.FileNotFoundException: no segments* file found

While playing around with one of my development SOLR installations (this time under Windows), I suddenly got a weird error message when feeding data to one of the fresh cores.


SEVERE: java.lang.RuntimeException: java.io.FileNotFoundException: no segments* file found in org.apache.lucene.store.SimpleFSDirectory@C:\temp\solr\*\data\index: files:

Taking a look at the contents of the index\ directory, it was in fact empty. Seems weird, but my initial guess was that Lucene / SOLR would treat this as a new installation and create the files.

Turns out the issue is that it won’t – as long as the index directory exists, Lucene / SOLR goes looking for the segment files.

Thanks to an old post to the solr-dev list by Yonik, the easiest fix is to simply delete the index directory and restart your servlet container (Tomcat in this case).
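
On Windows that boils down to something like the following (the core path is a hypothetical example matching the error message above; adjust it to your own data directory), followed by a Tomcat restart:

rmdir /s /q C:\temp\solr\corename\data\index

On a Unix-like system the equivalent would be rm -rf $SOLR_HOME/data/index.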

Porting SOLR Token Filter from Lucene 2.4.x to Lucene 2.9.x

I had trouble getting our current token filter to work after recompiling against the nightly builds of SOLR, which seemed to stem from the recently adopted upgrade to Lucene 2.9.0 (not released yet, but SOLR nightly is bleeding edge!). There's functionality added for backwards compatibility, and while that might have worked, things didn't really come together as they should (somewhere or other). So I decided to port our filter over to the new model, where incrementToken() is the New Way ™ of doing stuff. Helped by the current lowercase filter in the SVN trunk of Lucene, I made it all the way through.

Our old code:

    public NorwegianNameFilter(TokenStream input)
    {
        super(input);
    }

    public Token next() throws IOException
    {
        return parseToken(this.input.next());
    }
 
    public Token next(Token result) throws IOException
    {
        return parseToken(this.input.next());
    }

Compiling this with Lucene 2.9.0 gave me a new warning:

Note: .. uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.

To the internet mobile!

Turns out next() and next(Token) have been deprecated in the new TokenStream implementation, and the New True Way is to use the incrementToken() method instead.

Our new code:

    private TermAttribute termAtt;

    public NorwegianNameFilter(TokenStream input)
    {
        super(input);
        termAtt = (TermAttribute) addAttribute(TermAttribute.class);
    }

    public boolean incrementToken() throws IOException
    {
        if (this.input.incrementToken())
        {
            termAtt.setTermLength(this.parseBuffer(termAtt.termBuffer(), termAtt.termLength()));
            return true;
        }
        
        return false;
    }

A few gotchas along the way: incrementToken needs to be called on the input token stream, not on the filter itself (super.incrementToken() will give you a stack overflow). This moves the token stream one step forward. We also decided to move the buffer handling into the parse function, and remember to include the length of the "live" part of the buffer (the buffer will be larger, but only the content up to termLength is valid).

The return value from our parseBuffer function is the actual amount of usable data in the buffer after we’ve had our way with it. The concept is to modify the buffer in place, so that we avoid allocating or deallocating memory.
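
Our parseBuffer does name-specific normalization, but the general shape (a hypothetical sketch, not our actual implementation) is to mutate the char[] in place and return the new valid length:

    // Hypothetical example: remove apostrophes in place and return the number
    // of characters that are still valid in the buffer.
    private int parseBuffer(char[] buffer, int length)
    {
        int validLength = 0;
        for (int i = 0; i < length; i++)
        {
            if (buffer[i] != '\'')
            {
                buffer[validLength++] = buffer[i];
            }
        }
        return validLength;
    }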

Hopefully this will help other people with the same problem!

Transparent Remapping of / For Struts Actions

I've been trying to find a solution to this issue for a couple of hours: we have several Struts actions in our Java-based webapp, all neatly mapped through different .do action handlers. I wanted to switch our handling of the root / index URL (http://www.example.com/) from being a static redirect to the actual action to instead presenting the content right there, without the useless redirect. This apparently proved to be harder than I thought; after searching a lot through Google, reading through mailing lists and the official documentation, it seems that there is no way in Struts to specify an action to handle these requests by default. There may be one, but I could not for the life of ${deity} find it.

As simply doing it in Struts was out of the question, I turned to an old friend of mine, the ever so helpful mod_rewrite. mod_rewrite is capable of rewriting URLs internally in Apache before they get handled at other levels. The problem was that mod_jk seemed to grab the request before the replacements were made, but a few resources pointed me in the right direction.

After a bit of debugging with the RewriteLog, everything came together. This is how it ended up:

RewriteEngine On
RewriteLog /tmp/rewrite.log
RewriteLogLevel 3
RewriteRule ^/?$ /destinationfile [PT,NE]

The PT (passthrough) flag is what makes this play nicely with mod_jk: it hands the rewritten URI back to Apache's URI mapping so the JkMount for the destination still applies, and NE keeps the rewritten URL from being escaped again. RewriteLog says that the rewrite progress should be logged to /tmp/rewrite.log, and RewriteLogLevel 3 is the most detailed level; use 0, 1 or 2 for less debugging information. Remember to comment out these lines when things work. DO NOT leave RewriteLog enabled when you're not actually debugging the rewrites.