Android: Changing the Title of an Activity – setTitle works – android:label does not?

To change the title of an activity (to show a more specific title for a certain activity than the application's title), you can either call setTitle("foo") on the activity or set android:label="@string/price_after_rebate" for the activity in XML.

The problem was that the latter didn’t work, while the first one did. I try to keep any static definitions related to the activities outside of the code itself, but that’s hard when it doesn’t work as expected.

Turns out that if there’s a title set in the AndroidManifest.xml file (located under app/manifests/ in the standard Android Studio project layout), it’ll override any title set elsewhere in the definitions. You can change the specific titles by setting android:label="@string/price_after_rebate" on the activity definitions in the manifest instead of in the activity XML file:

<activity
    android:name=".xyz.Foo"
    android:parentActivityName=".MainActivity"
    android:label="@string/xyz_foo_activity_title"
>
    <meta-data
        android:name="android.support.PARENT_ACTIVITY"
        android:value="xyz.MainActivity" />
</activity>
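
For completeness, the programmatic route that did work is a one-liner in the activity itself. A minimal sketch (the class name here is a placeholder, reusing the string resource from the manifest example above):

import android.app.Activity;
import android.os.Bundle;

public class Foo extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Overrides whatever title the manifest or theme set for this activity.
        setTitle(R.string.xyz_foo_activity_title);
    }
}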

NetBeans with NBandroid, Emulator Never Shows After Building/Running With F6

Trying to build my first (or second, I tend to forget) Android project under NetBeans, I ran into an issue where the emulator would never show up when I built and ran the project. Turns out I even got a null pointer exception, which I thought was generated somewhere else in NetBeans (next time: read the actual exception and don’t assume).

The solution to the emulator never showing up? Update the currently installed version of the JDK (thanks to a Stack Overflow thread for hinting in the correct direction). Remember that NetBeans might be tied to a particular version of the JDK, either through its command line arguments or in netbeans.conf in the etc/ directory of the NetBeans installation directory. I uninstalled the older version, which gave me an error about the value of jdkhome being wrong, asking if I wanted to use the default path instead. That worked, but the error showed up each time. Comment out the jdkhome line in netbeans.conf and it’ll guess automagically each time (if guessing works for you), or, if guessing doesn’t work, add the new path to the JDK in netbeans.conf.
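
For reference, the line in question in netbeans.conf looks something like this (the paths here are just examples, not from my setup):

# Comment the line out to let NetBeans guess the JDK location:
#netbeans_jdkhome="/usr/lib/jvm/jdk1.6.0"

# .. or point it at the newly installed JDK:
netbeans_jdkhome="/usr/lib/jvm/jdk1.6.0_21"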

Replacement for Deprecated / Removed BaseTokenFilterFactory

When writing token filter plugins for Solr you’d previously extend BaseTokenFilterFactory, but at some point since I last built trunk that changed to TokenFilterFactory – which now lives in Lucene’s analysis util package instead.

Diff:

- import org.apache.solr.analysis.BaseTokenFilterFactory; 
+ import org.apache.lucene.analysis.util.TokenFilterFactory; 

...

- public class xxxxxFilterFactory extends BaseTokenFilterFactory 
+ public class xxxxxFilterFactory extends TokenFilterFactory 
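
For reference, a bare-bones factory after the change might look something like the following. xxxxxFilter stands in for your own TokenFilter implementation, and depending on how recent your trunk checkout is, the factory may also be expected to take a Map of init arguments:

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.util.TokenFilterFactory;

// xxxxxFilter is assumed to be your own TokenFilter implementation.
public class xxxxxFilterFactory extends TokenFilterFactory
{
    @Override
    public TokenStream create(TokenStream input)
    {
        return new xxxxxFilter(input);
    }
}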

Solr, Memory Usage and Dynamic Fields

One of the many great things about Solr is that it allows you to add dynamic fields – you define a naming pattern that a field has to follow, and documents can then use any field name that matches that pattern.

We’ve been using one such dynamic field to add a sort field for our documents:

xxx_Category_Subcategory: 300

This would allow us to sort by this field to get the priority of our documents in this particular category and subcategory. A document would contain somewhere between 1 and 15 such fields. The total selection of unique field names is somewhere around 1200 across all documents.
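
The dynamic field itself is declared in schema.xml with a wildcard in the field name. Something along these lines (the exact pattern and field type here are guesses, not our actual schema):

<!-- "sint" stands in for whatever sortable integer type your schema defines -->
<dynamicField name="xxx_*" type="sint" indexed="true" stored="false"/>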

Be small, be happy

As long as our collection was quite small (<10k documents) this scheme worked great. When our collection grew to around 500k documents, we started seeing out of memory errors quite often. At the worst we got an out of memory exception every 30 minutes and had to restart the Solr server. Performance didn't suffer, but obviously we couldn't keep restarting servers until we got bored. After ruling out a few other possible causes (such as our stable random sort) I was rather stumped that things didn't improve.

The total amount of data in our dynamic fields was rather low, somewhere around 2.5 - 3.5m integers, or possibly around 50-70MB in total. The JVM should be able to fit everything about these fields in memory and query them for the fields we're trying to find, but a heap dump of the JVM just before it hit the out of memory exception revealed quite a few GBs of Lucene's FieldCache objects. These objects cache the value of a field for the total set of documents available in the index, and you're sadly not able to tune this cache through the Solr configuration (at least not in 1.4, as far as I could find).

Less Dynamic Fields, More Manual Labor

After pondering this issue a bit I came to the conclusion that our problem was related to the dynamic fields we had, and the fact that we used them for sorting. Lucene / Solr keeps a field cache for each field that is used for sorting, to avoid having to do duplicate work later. For us, this meant that each time we sorted by a new field, an array had to be created with the size of the total document set. As long as we just had 10k documents, these arrays were small enough that we had enough memory available – when the document set grew to almost 500k documents, not so much.

This means that the total memory required for field caches grows with DocumentsInIndex * FieldsSortedBy. As long as our DocumentsInIndex was just 10k, the memory available to the JVM was enough to sort by as many fields as we did. When the number of documents grew, the memory usage grew by the same factor and we got our OutOfMemoryException.
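
A rough back-of-the-envelope calculation shows why this blows up, assuming the cache keeps a four-byte integer entry per document for every field that gets sorted by (the real per-entry overhead may be higher), and that most of our ~1200 field names end up being sorted by at some point:

 10,000 documents * 1,200 fields * 4 bytes ≈   48 MB
500,000 documents * 1,200 fields * 4 bytes ≈ 2,400 MB

The latter is in the same ballpark as the FieldCache usage we saw in the heap dump.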

The Solution

Our solution could probably be more elegant, but currently we’ve moved the sorting to our application layer instead of the data provider layer. We’re requesting the complete set of hits from the Solr-server in the category anyway, so we’re able to sort it in the application – and by using a response format other than XML we’re also doing it rather quickly. This means that we’re not using sorting at all, and are only querying against one multivalued field to see if the category key is present there at all.

Note: Another solution we considered was to divide our index into several Solr cores. This would allow us to keep the number of documents in each core low, and therefore also keep the field cache size in check. We know that each category could very well live on just one core, as we won’t be mixing it with data from the other cores (and for that we could keep a separate core with all the documents, just not use it for searching across dynamic fields). We dropped this plan because of the rather worrying increase in complexity of our Solr installation. It could however help in your own case. :-)

Java and NetBeans: Illegal escape character

When defining strings in programming languages, they’re usually delimited by quotation marks (“ and ”), such as “This is a string” and “Hello World”. The immediate question is what you do when the string itself should contain a quotation mark. “Hello “World”” is hard to read and practically impossible to parse for the compiler (which tries to make sense out of everything you’ve written). To solve this (and similar issues) people started using escape characters: special characters that tell the parser to pay attention to the following character(s) (some escape sequences may contain more than one character after the escape character).

Usually the escape character is \, and rewriting our example above we’ll end up with “Hello \”World\””. The parser sees the \, telling it that it should parse the next characters in a special mode, and then inserts the ” into the string itself instead of using it as a delimiter. In Java, C, PHP, Python and several other languages there are also special versions of the escape sequences that do something other than just insert the character following the escape character.

\n – Inserts a new line.
\t – Inserts a tab character.
\xNN – Inserts a byte with the byte value provided (\x13, \xFF, etc) in C, PHP and Python; Java uses \uNNNN Unicode escapes instead.

A list of the different escape sequences that PHP supports can be found in the PHP manual.

Anyways, the issue is that Java found an escape sequence that it doesn’t know how to handle. Attempting to define a string such as “! # \ % &” will trigger this message, as the compiler sees the escape character \ and then attempts to parse the following character – which is a space (” “). “\ ” is not a valid escape sequence in the Java language specification, and the parser (or NetBeans or Eclipse) is trying to tell you that this is probably not what you want.

The correct way to define the string above would be to escape the escape character (now we’re getting meta): “! # \\ % &”. This defines a string with just a single backslash in it.
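
In Java that looks like the following trivial sketch (the class and variable names are of course arbitrary):

public class EscapeExample
{
    public static void main(String[] args)
    {
        // String broken = "! # \ % &";   // illegal escape character: "\ " is not a valid escape
        String fixed = "! # \\ % &";       // a single backslash in the resulting string
        String quoted = "Hello \"World\""; // escaped delimiters inside the string
        System.out.println(fixed);
        System.out.println(quoted);
    }
}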

Porting SOLR Token Filter from Lucene 2.4.x to Lucene 2.9.x

I had trouble getting our current token filter to work after recompiling with the nightly builds of SOLR, which seemed to stem from the recently adopted upgrade to 2.9.0 of Lucene (not released yet, but SOLR nightly is bleeding edge!). There’s functionality added for backwards compatibility, and while that might have worked, things didn’t really come together as they should (somewhere or other). So I decided to port our filter over to the new model, where incrementToken() is the New Way ™ of doing stuff. Helped by the current lowercase filter in the SVN trunk of Lucene, I made it all the way through.

Our old code:

    public NorwegianNameFilter(TokenStream input)
    {
        super(input);
    }

    public Token next() throws IOException
    {
        return parseToken(this.input.next());
    }
 
    public Token next(Token result) throws IOException
    {
        return parseToken(this.input.next());
    }

Compiling this with Lucene 2.9.0 gave me a new warning:

Note: .. uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.

To the internet mobile!

Turns out next() and next(Token) have been deprecated in the new TokenStream implementation, and the New True Way is to use the incrementToken() method instead.

Our new code:

    private TermAttribute termAtt;

    public NorwegianNameFilter(TokenStream input)
    {
        super(input);
        termAtt = (TermAttribute) addAttribute(TermAttribute.class);
    }

    public boolean incrementToken() throws IOException
    {
        if (this.input.incrementToken())
        {
            termAtt.setTermLength(this.parseBuffer(termAtt.termBuffer(), termAtt.termLength()));
            return true;
        }
        
        return false;
    }

A few gotchas along the way: incrementToken needs to be called on the input token stream, not on the filter (super.incrementToken() will give you a stack overflow). This moves the token stream one step forward. We also decided to move the buffer handling into the parse function, and remember to include the length of the “live” part of the buffer (the buffer will be larger, but only the content up to termLength is valid).

The return value from our parseBuffer function is the actual amount of usable data in the buffer after we’ve had our way with it. The concept is to modify the buffer in place, so that we avoid allocating or deallocating memory.
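
For illustration only (the actual NorwegianNameFilter logic isn’t shown in this post), a parseBuffer with this contract could look like the following, modifying the term in place and returning how much of the buffer is still valid:

    // Hypothetical stand-in for the real parsing: lowercase the live part of
    // the buffer in place and return the still-valid length.
    private int parseBuffer(char[] buffer, int length)
    {
        for (int i = 0; i < length; i++)
        {
            buffer[i] = Character.toLowerCase(buffer[i]);
        }
        return length;
    }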

Hopefully this will help other people with the same problem!

The Results of Our Recent Python Competition

Last week we had yet another competition where the goal is to create the smallest program that solves a particular problem. This time the problem to solve was a simple XML parsing routine with a few extra rules to make the parsing itself easier to implement (the complete rule set). Python was the required language for the submissions.

The winning contribution from Helge:

from sys import stdin
p=0
s=str.split
for a in s(stdin.read(),'<'):
 a=s(a,'>')[0];b=a.strip('/');p-=a

The contribution from Tobias:

from sys import stdin
i=stdin.read()
s=x=t=0
k=i.find
while x")
        elif i[x+1]=="/":s-=1
        else:
            u=0
            while u",x)-1]=="/":t=1
            else:t=0;s+=1
            print i[x+1:k(">",x)-t].strip()
    x+=1

The contribution from Harald:

from sys import stdin
l=stdin.read()
e,p,c,x=0,0,0,0
r=""
for i in l:
       if l[e:e+2]==']>'or l[e:e+2]=='->':
               c=0
       if l[e:e+2]=='':
               p=0
               if i=='/' and l[e+1]=='>':
                       x-=1
       if p and not c:
               r+=i
       if not c and i=='<'and l[e+1]!='/':
               r+="\n"+(' '*4)*x
               x+=1
               p=1
       if i=='<'and l[e+1]=='/':
               x-=1
       e+=1

If any of the contributors want to provide a better description of their solutions, feel free to leave a comment!

Thanks to all the participants!

Modifying a Lucene Snowball Stemmer

This post is written for advanced users. If you do not know what SVN (Subversion) is or if you’re not ready to get your hands dirty, there might be something more interesting to read on Wikipedia. As usual. This is an introduction to how to get a Lucene development environment running, a Solr environment and, lastly, how to create your own Snowball stemmer. Read on if that seems interesting. The recipe for regenerating the Snowball stemmer (I’ll get back to that…) assumes that you’re running Linux. Please leave a comment if you’ve generated the stemmer class under another operating system.

When indexing data in Lucene (a fulltext document search library) and Solr (which uses Lucene), you may provide a stemmer (a piece of code responsible for “normalizing” words to their common form (horses => horse, indexing => index, etc)) to give your users better and more relevant results when they search. The default stemmer in Lucene and Solr uses a library named Snowball which was created to do just this kind of thing. Snowball uses a small definition language of its own to generate parsers that other applications can embed to provide proper stemming.

By using Snowball, Lucene is able to provide a nice collection of default stemmers for several languages, and these work as they should in most situations. I did however have an issue with the Norwegian stemmer, as it ignores a complete category of words where the base form ends in the same letters as plural versions of other words. An example:

one: elektriker
several: elektrikere
those: elektrikerene

The base form is “elektriker”, while “elektrikere” and “elektrikerene” are plural versions of the same word (the word means “electrician”, btw).

Let’s compare this to another word, “buss” (bus):

one: buss
several: busser
those: bussene

Here the base form is “buss”, while the other two are plural. Let’s apply the same rules to all six words:

buss => buss
busser => buss [strips “er”]
bussene => buss [strips “ene”]

elektrikerene => “elektriker” [strips “ene”]
elektrikere => “elektriker” [strips “e”]

So far everything has gone as planned. We’re able to search for ‘elektrikerene’ and get hits that say ‘elektrikere’. All is not perfect, though. We’ve forgotten one word, and evil forces will say that I forgot it on purpose:

elektriker => ?

The problem is that “elektriker” (which is the singular form of the word) ends in -er. The rule defined for a word in the class of “buss” says that -er should be stripped (and this is correct for the majority of words). The result then becomes:

elektriker => “elektrik” [strips “er”]
elektrikere => “elektriker” [strips “e”]
elektrikerene => “elektriker” [strips “ene”]

As you can see, there’s a mismatch between the form that the plurals get chopped down to and the singular word.

My solution, while not perfect in any way, simply adds a few more terms so that we’re able to strip all these words down to the same form:

elektriker => “elektrik” [strips “er”]
elektrikere => “elektrik” [strips “ere”]
elektrikerene => “elektrik” [strips “erene”]

I decided to go this route as it’s a lot easier than building a large list of words where no stemming should be performed. It might give us a few false positives, but the most important part is that it provides the same results for the singular and plural versions of the same word. When the search results differ for such basic items, the user gets a real “WTF” moment, especially when the two plural versions of the word are considered identical.

To solve this problem we’re going to change the Snowball parser and build a new version of the stemmer that we can use in Lucene and Solr.

Getting Snowball

To generate the Java class that Lucene uses when attempting to stem a phrase (such as the NorwegianStemmer, EnglishStemmer, etc), you’ll need the Snowball distribution. This distribution also includes example stemming algorithms (which have been used to generate the current stemmers in Lucene).

You’ll need to download the application from the snowball download page – in particular the “Snowball, algorithms and libstemmer library” version [direct link].

After extracting the file you’ll have a directory named snowball_code, which contains among other files the snowball binary and a directory named algorithms. The algorithms-directory keeps all the different default stemmers, and this is where you’ll find a good starting point for the changes you’re about to do.

But first, we’ll make sure we have the development version of Lucene installed and ready to go.

Getting Lucene

You can check out the current SVN trunk of Lucene by doing:

svn checkout http://svn.apache.org/repos/asf/lucene/java/trunk lucene/java/trunk

This will give you the bleeding edge version of Lucene available for a bit of toying around. If you decide to build Solr 1.4 from SVN (as we’ll do further down), you do not have to build Lucene 2.9 from SVN – as it already is included pre-built.

If you need to build the complete version of Lucene (and all contribs), you can do that by moving into the Lucene trunk:

cd lucene/java/trunk/
ant dist (this will also create .zip and .tgz distributions)

If you already have Lucene 2.9 (.. or whatever version you’re on when you’re reading this), you can get by with just compiling the snowball contrib to Lucene, from lucene/java/trunk/:

cd contrib/snowball/
ant jar

This will create (if everything works as it should) a file named lucene-snowball-2.9-dev.jar (.. or another version number, depending on your version of Lucene). The file will be located in a sub directory of the build directory on the root of the lucene checkout (.. and the path will be shown after you’ve run ant jar): lucene/java/trunk/build/contrib/snowball/.

If you got the lucene-snowball-2.9-dev.jar file compiled, things are looking good! Let’s move on getting the bleeding edge version of Solr up and running (if you have an existing Solr version that you’re using and do not want to upgrade, skip the following steps .. but be sure to know what you’re doing .. which coincidentally you also should be knowing if you’re building stuff from SVN as we are. Oh the joy!).

Getting Solr

Getting and building Solr from SVN is very straight forward. First, check it out from Subversion:

svn co http://svn.apache.org/repos/asf/lucene/solr/trunk/ solr/trunk/

And then simply build the war file for your favourite container:

cd solr/trunk/
ant dist

Voilà – you should now have an apache-solr-1.4-dev.war (or something similar) in the build/ directory. You can test that this works by replacing your regular Solr installation (.. make a backup first ..) and restarting your application server.

Editing the stemmer definition

After extracting the snowball distribution, you’re left with a snowball_code directory, which contains an algorithms directory with a norwegian subdirectory (in addition to several other stemmer languages). My example here expands the definition used in the Norwegian stemmer, but the approach works with all the included stemmers.

Open up one of the files (I chose the iso-8859-1 version, but I might have to adjust this to work for UTF-8/16 later. I’ll try to post an update in regards to that) and take a look around. The snowball language is interesting, and you can find more information about it at the Snowball site.

I’ll not include a complete dump of the stemming definition here, but the interesting part (for what we’re attempting to do) is the main_suffix function:

define main_suffix as (
    setlimit tomark p1 for ([substring])
    among(
        'a' 'e' 'ede' 'ande' 'ende' 'ane' 'ene' 'hetene' 'en' 'heten' 'ar'          
        'er' 'heter' 'as' 'es' 'edes' 'endes' 'enes' 'hetenes' 'ens'
        'hetens' 'ers' 'ets' 'et' 'het' 'ast' 
            (delete)
        's'
            (s_ending or ('k' non-v) delete)
        'erte' 'ert'
            (<-'er')
    )
)

This simply means that any word ending in one of the suffixes in the first three lines will have that suffix deleted (given by the (delete) command behind the definitions). The problem, given our example above, is that none of these lines will capture an "ere" or "erene" ending - which we'll need to actually solve the problem.

We simply add them to the list of defined endings:

    among(
        ... 'hetene' 'en' 'heten' 'ar' 'ere' 'erene' 'eren'
        ...
        ...
            (delete)

I made sure to add the definitions before the shorter versions (such as 'er'), but I'm not sure it's actually required (.. I don't think it is).

Save the file under a new file name so you still have the old stemmers available.

Compiling a New Version of the Snowball Stemmer

After editing and saving your stemmer, it's now time to generate the Java class that Lucene will use to produce the base forms of the words. After extracting the snowball archive, you should have a binary file named snowball in the snowball_code directory. If you simply run this file with snowball_code as your current working directory:

./snowball

You'll get a list of options that Snowball can accept when generating the stemmer class. We're only going to use three of them:

-j[ava] Tell Snowball that we want to generate a Java class
-n[ame] Tell Snowball the name of the class we want generated
-o <filename> The filename of the output file. No extension.

So to compile our NorwegianExStemmer from our modified file, we run:

./snowball algorithms/norwegian/stem2_ISO_8859_1.sbl -j -n NorwegianExStemmer -o NorwegianExStemmer

(pardon the excellent file name stem2...). This will give you one new file in the current working directory: NorwegianExStemmer.java! We've actually built a stemming class! Woohoo! (You may do a few dance moves here. I'll wait.)

We're now going to insert the new class into the Lucene contrib .jar-file.

Rebuild the Lucene JAR Library

Copy the new class file into the version of Lucene you checked out from SVN:

cp NorwegianExStemmer.java <lucenetrunk>/contrib/snowball/src/java/org/tartarus/snowball/ext/

Then we simply have to rebuild the .jar file containing all the stemmers:

cd <lucenetrunk>/contrib/snowball/
ant jar

This will create lucene-snowball-2.9-dev.jar in <lucenetrunk>/build/contrib/. You now have a library containing your stemmer (and all the other default stemmers from Lucene)!

The last part is simply getting the updated stemmer library into Solr, and this will be a simple copy and rebuild:

Inserting the new Lucene Library Into Solr

From the build/contrib directory in Lucene, copy the jar file into the lib/ directory of Solr:

cp lucene-snowball-2.9-dev.jar lib/

Be sure to overwrite any existing files (.. and if you have another version of Lucene in Solr, do a complete rebuild and replace all the Lucene related files in Solr). Rebuild Solr:

cd solr/trunk/
ant dist

Copy the new apache-solr-1.4-dev.war (check the correct name in the directory yourself) from the build/ directory in Solr to your application server's deployment directory as solr.war (.. if you use another name, use that). This is webapps/ if you're using Tomcat. Remember to back up the old .war file, just to be sure you can restore everything if you've borked something.

Add Your New Stemmer In schema.xml

After compiling and packaging the stemmer, it's time to tell Solr that it should use the newly created stemmer. Remember that a stemmer works both when indexing and querying, so we're going to need to reindex our collection after implementing a new stemmer.

The usual place to add the stemmer is the definition of your text fields, under the <analyzer>-sections for index and query (remember to change it in BOTH places!!):
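
In Solr the Snowball stemmers are wired up through solr.SnowballPorterFilterFactory, so the filter entry will look something like this (a sketch; keep whatever tokenizer and other filters your schema already has around it):

<filter class="solr.SnowballPorterFilterFactory" language="NorwegianEx"/>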


Change NorwegianEx to the name of your class (without the Stemmer part; Lucene adds that for you automagically). Do this in both locations (or more, if you have custom data types with their own index or query analyzers).

Restart Application Server and Reindex!

If you're using Tomcat as your application server this might simply be (depending on your setup and distribution):

cd /path/to/tomcat/bin
./shutdown.sh
./startup.sh

Please consult the documentation for your application server for information about how to do a proper restart.

After you've restarted the application server, you're going to need to reindex your collection before everything works as planned. You can however check that your stemmer works as planned already at this stage. Log into the Solr admin interface, select the extended / advanced query view, enter your query (which should now be stemmed differently than before), check the "debug" box and submit your search. The resulting XML document will show you how your query was parsed in the parsedquery element.

Download the Generated Stemmer

If you're just looking for an improved stemmer for Norwegian words (with the very, very simple changes outlined above, which might cause problems with UTF-8 (.. please leave a comment if that's the case)), you can simply download NorwegianExStemmer.java. Follow the guide above for adding it to your Lucene / Solr installation.

Please leave a comment if something is confusing or if you want free help. Send me an email if you're looking for a consultant.

New Adventures in Reverse Engineering

Before I go into the gory details of this post, I’ll start by saying that this method is probably not the right solution for you. This is not something you want to do if you have ready access to any source code or if you have an existing relationship with the 3rd party that provided the library you’re using. Do not do this. This is not for you.

With that out of the way, this is the part for those who actually are interested in getting down and dirty with Java, and maybe solving a problem that’s hard to solve otherwise.

The setting: we have a library for interfacing with another internal web service, where the library was provided in binary form by a 3rd party as part of the agreement when the service was delivered to us. The problem is that due to some unknown matter, this library is perfectly capable of understanding UTF-8, both as input from us and as input from the web service, but all web related methods in the result class return data encoded as ISO-8859-1. The original solution to this was to keep two different parts of the query string: the original query in one particular key, and the key for the library in ISO-8859-1. This needs loads of special casing, manually handling that single parameter, etc. It works to a certain degree as long as the library is the only component in the mix. The troubles really began to surface when we started querying other services based on the same query. We’d then have to special case all methods that were used in URLs, as they returned ISO-8859-1 while all other libraries and encodings use UTF-8.

The library has since been made into a separate product with a hefty price tag, so upgrading the library was not an acceptable solution for us. Another solution had to be found, and this is where things start to get interesting.

Writing a proxy class to handle the encoding issue transparently

This was the solution we attempted first, but it requires us to implement quite a few methods, to add additional code to the method that provides access to the library, and to extend and embrace parts of the object. This could have been done quite easily by simply overriding one method of the class to call super.methodName() and re-encode the result, but we would have to change several classes (these objects live 3-4 levels down in the result object from the library), which adds both developer and runtime overhead. Not good.

Decompiling the library

The next step was to decompile the library to see how the code of the library actually worked. This proved to be a good way to find out how we could possibly solve the issue. We could try to fix the issue in the code and then recompile the library, but some of the class files were too new for jad to decompile them completely. The decompilation did however show the problem with the code:

    if (encoding != null)
    {
        return encoding.toString();
    }

    return "ISO-8859-1";

This was neatly located in a helper method that ran on every property used when generating a query string. The encoding variable is retrieved from a global settings object, only accessible in the same library. This object is empty in our version of the library, so not much help there. But here’s the little detail that leads into the next part, and actually made this hack possible: “ISO-8859-1” is a constant. This means that it gets neatly tucked away as a UTF-8 string in the constant pool when the class file is generated. Let’s get down and dirty.

Binary patching the encoding in the class file

We’ll start by taking a look at the hexdump in our class file, after searching for the string “ISO” in the ASCII representation (“ISO” in UTF-8 is identical to the ASCII representation):

[Image: Binary Patching a Java Class – hex dump of the class file with the “ISO-8859-1” constant highlighted]

I’ve highlighted the interesting part where “ISO-8859-1” is stored in the file. This is where we want to do our surgical incision and make the method return the string “UTF-8” instead. There is one important thing you should be aware of if you’ve never done any hex editing of files before, and that is the fact that the byte offset of parts of the file may be very important. Sadly, the strings “UTF-8” and “ISO-8859-1” have different lengths, and as such, would require us to either delete bytes following “UTF-8” or put spaces there instead (“UTF-8     “). The first method might leave the rest of the file skewed, the latter might not work if the method used for encoding the value doesn’t trim the string first.

To solve this issue, we turn to our good friend VM Spec: The class File Format, which contains all the details of how the class file format is designed. Interesting parts:

In the ClassFile structure:

cp_info constant_pool[constant_pool_count-1];

As we’re looking at a constant, this is where it should be stored. The cp_info is defined as:

cp_info {
    u1 tag;
    u1 info[];
}

The tag contains the type of constant, and the info[] array varies depending on the type of the constant. If we take a look at the table in Chapter 4.4, we see that the identifier for a UTF-8 string constant is:

CONSTANT_Utf8 	1

So we should have the value 1 in the byte describing this constant (as the actual byte value, not the ASCII character). If the value is one, the complete structure is:

    CONSTANT_Utf8_info {
    	u1 tag;
    	u2 length;
    	u1 bytes[length];
    }

The tag should be 1 as the byte value, the length should be two bytes describing the length of the actual string saved (since we’re storing the length in two bytes (u2), it can be a maximum of 2^16 bytes in total). After that identifier, we should have length number of bytes with UTF-8 data.

If we go back to our hex dump, we can now make more sense of the data we’re seeing:

[Image: hex dump showing the tag and length bytes in front of the “ISO-8859-1” constant]

The byte shown as 0x01 in hex is the value 1 for the tag of the structure. The following 0x00 0x0A are the two bytes making up the length of the string:

    0000 0000 0000 1010 binary = 10 decimal

    ISO-8859-1
    1234567890 

This shows that the length of our string “ISO-8859-1” is 10 bytes in UTF-8, which is the same value that is stored in the two bytes showing the length of the string in the structure.

Heading back to our original goal: changing the length of the string stored. We change the string bytes to “UTF-8”, which is five bytes. We then change the stored length of the string:

    00 0A becomes
    00 05

We save our changes and re-create the jar file, with all the previous classes and our changed one.

After inserting our new JAR-file into our maven repository as a new build and updating our local repository, we now have complete UTF-8 support from start to finish. Yey!

Communicating The Right Thing Through Code

While trying to fix a larger bug in a module I had never touched before, I came across the following code. While technically correct (it does what it’s supposed to do, and there is currently no way to get it to do something wrong (.. an update on just that)), it does have a serious flaw:

$result = get_entry($id);
if(is_bool($result))
{
    die('bailing out');
}

Hopefully you can see the error in what the code communicates; namely that the return type of the function is used to decide what should be considered an error.

While this works, since the only way the function can return a boolean value is by returning false, the person reading the code at a later date will wonder what it is supposed to do; he or she might not have any knowledge of how the method works. Maybe the method just sets up some resource, a global variable (.. no, don’t do that. DON’T.), etc, but the code does not communicate what we really expect.

As PHP is dynamically typed, checking the type before comparing is perfectly OK, as long as you’re not counting “no returned elements” as an error. The following code more clearly communicates its intent (=== in PHP compares both type and content, which means that both the type of the variable and its content have to match; 0 === “0” will be considered false):

$result = get_entry($id);
if($result === false)
{
    die('bailing out');
}

Or, if you’re interested in knowing whether the returned value simply evaluates to false (such as an empty array, an empty string, etc), just drop one of the equal signs:

$result = get_entry($id);
if($result == false)
{
    die('bailing out');
}

I’m also not fond of using die() as a method for stopping a faulty request, as that should be properly logged and dealt with in the best manner possible, but I’ll leave that for a later post.