We’ve extended our previously single Solr node into a small cluster. This allows us to run queries against one node while updating or configuring another, distribute the load across several servers (although we’re not there yet load-wise), and survive out-of-memory conditions or other critical errors on a single node.
While Solr can query several cores or distribute queries internally, we decided to move load balancing and failover higher up in the stack. We now handle both with mod_jk in our existing Apache-based environment; mod_jk also routes around failed servers without any administrator intervention. We were already using mod_jk for our main web frontend, and since we use Tomcat as the application container for Solr, things should be a breeze!
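For reference, each Tomcat instance has to expose an AJP connector for mod_jk’s ajp13 workers to talk to. A minimal sketch of the relevant server.xml line, assuming the default AJP port 8009 and otherwise stock settings:

<!-- server.xml: AJP/1.3 connector that the ajp13 workers connect to -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />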
Well, no. After copying our existing mod_jk setup, configuring our new workers and restarting Apache, all I got was the well-known 500 INTERNAL SERVER ERROR. Here’s the worker configuration file:
worker.list=loadbalancer,status

worker.solr1.port=8009
worker.solr1.host=10.0.0.4
worker.solr1.type=ajp13
worker.solr1.lbfactor=1
worker.solr1.cachesize=10

worker.solr2.port=8009
worker.solr2.host=10.0.0.5
worker.solr2.type=ajp13
worker.solr2.lbfactor=4
worker.solr2.cachesize=10

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=solr1,solr2
worker.loadbalancer.sticky_session=0

worker.status.type=status
This gives us two Solr workers and one status worker (the status worker provides a simple web interface for enabling, disabling and inspecting the state of the other workers), with a 1:4 load-balancing ratio (the second server has quite a bit more memory available for Solr).
I pointed mod_jk at the worker configuration through the JkWorkersFile directive (in a VirtualHost block… don’t do that):
JkWorkersFile conf/workers.properties
I also enabled debug logging to try to track down the problem (still in the VirtualHost block):
JkLogFile logs/mod_jk.log
JkLogLevel debug
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
Other mod_jk settings (in the VirtualHost block) were:
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkRequestLogFormat "%w %V %T"
JkShmFile logs/jk.shm
JkMount /* loadbalancer

<Location /jkstatus>
    JkMount status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
Still no solution. Peeking at the log files mod_jk produced, I was able to deduce the following:
[debug] map_uri_to_worker::jk_uri_worker_map.c (525): Attempting to map context URI '/jkstatus'
[debug] map_uri_to_worker::jk_uri_worker_map.c (550): Found an exact match status -> /jkstatus
[debug] jk_handler::mod_jk.c (1920): Into handler jakarta-servlet worker=status r->proxyreq=0
[debug] wc_get_worker_for_name::jk_worker.c (111): did not find a worker status
[info] jk_handler::mod_jk.c (2071): Could not find a worker for worker name=status
This indicates that mod_jk was unable to find a worker matching the name I used in the JkMount statement above: status. Weird. I added some garbage characters to the JkWorkersFile setting, and Apache complained that it was unable to find the workers file. I changed it back, reloaded, and still nothing; it was apparently unable to find the worker. The URI mapping did work, though, since it tried to hand the request to the status worker.
Looking back at the startup sequence of mod_jk, I found the following in the log:
[debug] build_worker_map::jk_worker.c (236): creating worker ajp13
[debug] wc_create_worker::jk_worker.c (141): about to create instance ajp13 of ajp13
[debug] wc_create_worker::jk_worker.c (154): about to validate and init ajp13
[debug] ajp_validate::jk_ajp_common.c (1922): worker ajp13 contact is 'localhost:8009'
[debug] ajp_init::jk_ajp_common.c (2047): setting endpoint options:
[debug] ajp_init::jk_ajp_common.c (2050): keepalive: 0
[debug] ajp_init::jk_ajp_common.c (2054): timeout: -1
[debug] ajp_init::jk_ajp_common.c (2058): buffer size: 0
[debug] ajp_init::jk_ajp_common.c (2062): pool timeout: 0
[debug] ajp_init::jk_ajp_common.c (2066): connect timeout: 0
[debug] ajp_init::jk_ajp_common.c (2070): reply timeout: 0
[debug] ajp_init::jk_ajp_common.c (2074): prepost timeout: 0
[debug] ajp_init::jk_ajp_common.c (2078): recovery options: 0
[debug] ajp_init::jk_ajp_common.c (2082): retries: 2
[debug] ajp_init::jk_ajp_common.c (2086): max packet size: 8192
[debug] ajp_create_endpoint_cache::jk_ajp_common.c (1959): setting connection pool size to 1 with min 0
It took a bit of time, but I realized this meant mod_jk had created _a default_ worker named ajp13. It apparently wasn’t reading my workers file at all, yet it still complained if I changed the file name. You’d think that if the directive that loads the configuration file complains when the file doesn’t exist, it would also actually load it. But… well. After an hour of trying to figure out why the workers didn’t load, cutting the workers file down to a minimal example and trying with just a single status worker, I concluded that my workers file was correct, and that mod_jk obviously found it when it attempted to load it.
Then I suddenly discovered the small notice in the mod_jk configuration manual:
JkWorkersFile: This directive is only allowed once. It must be put into the global part of the configuration.
JkWorkersFile cannot be defined in a <VirtualHost> section. It will NOT complain if you do it; it’ll just never define any workers. It will, however, complain if the file doesn’t exist, even though it never actually tries to load it.
Confusing.
Moving the JkWorkersFile statement out of the <VirtualHost> block and into the global configuration (next to the LoadModule statement) solved the issue. The same applies to JkWorkerProperty.
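To sum up, here’s a rough sketch of how the relevant parts of the configuration ended up, with the module path and the VirtualHost address being illustrative:

LoadModule jk_module modules/mod_jk.so

# Global part: JkWorkersFile (and JkWorkerProperty) must live here,
# not inside a <VirtualHost> block.
JkWorkersFile conf/workers.properties
JkShmFile    logs/jk.shm
JkLogFile    logs/mod_jk.log
JkLogLevel   info

<VirtualHost *:80>
    # Mounts and per-vhost options can stay inside the VirtualHost.
    JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
    JkMount /* loadbalancer

    <Location /jkstatus>
        JkMount status
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>
</VirtualHost>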