Solr Upgrade Surprise and Using Kill To Debug It

At work, we’ve recently upgraded to the latest and greatest stable version of Solr (3.6) and moved from the dismax query parser to the edismax parser. Solr’s initial performance in our environment was very poor, and we ended up removing the initial set of search features we had planned to deploy while trying to get CPU utilization under control.

Once we finally rolled back a set of features, Solr seemed to be behaving optimally. Below is what we were seeing on our search servers’ CPU:
Solr CPU Usage Pre and Post Fix
Throughout the day we had periods with large CPU spikes, but they didn’t really seem to affect the server’s throughput or average latency. Nonetheless, we suspected there was still an issue and started looking for a root cause.
 

Kill -3 To The Rescue

 
If you’ve never used kill -3, it’s perhaps one of the most useful Java debugging utilities around. It tells the JVM to produce a full thread dump, which it then prints to the STDOUT of the process. I became familiar with this while trying to hunt down threads in a Tomcat container that were blocking the process from exiting. Issuing kill -3 would give you enough information to find the problematic thread and work with development to fix it.
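
As a rough sketch (the pid is a placeholder and the log path is just an example; the dump lands wherever the process’s stdout happens to be redirected):

jps -l                             # list running JVMs and their pids (jps ships with the JDK)
kill -3 <pid>                      # SIGQUIT asks the JVM for a full thread dump; it does not terminate the process
tail -n 300 /path/to/catalina.out  # read the dump from the process's stdout, e.g. catalina.out under Tomcat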

In this case, I was hunting for a hint as to what went wrong with our search. I issued kill -3 during a spike, and got something like this:

2012-07-27_16:52:01.54871 2012-07-27 16:52:01
2012-07-27_16:52:01.54873 Full thread dump Java HotSpot(TM) 64-Bit Server VM (20.5-b03 mixed mode):
2012-07-27_16:52:01.54874
2012-07-27_16:52:01.54874 "JMX server connection timeout 1663" daemon prio=10 tid=0x0000000040dee800 nid=0x192c in Object.wait() [0x00007f1a24327000]
2012-07-27_16:52:01.54874 java.lang.Thread.State: TIMED_WAITING (on object monitor)
2012-07-27_16:52:01.54999 at java.lang.Object.wait(Native Method)
2012-07-27_16:52:01.55000 - waiting on <0x00007f7c189ff118> (a [I)
2012-07-27_16:52:01.55001 at com.sun.jmx.remote.internal.ServerCommunicatorAdmin$Timeout.run(ServerCommunicatorAdmin.java:150)
2012-07-27_16:52:01.55001 - locked <0x00007f7c189ff118> (a [I)
2012-07-27_16:52:01.55002 at java.lang.Thread.run(Thread.java:662)
2012-07-27_16:52:01.55002
...
2012-07-27_16:52:01.55458 "1565623588@qtp-1939768105-762" prio=10 tid=0x00007f7314537800 nid=0x120c runnable [0x00007f1a24c2f000]
2012-07-27_16:52:01.55459 java.lang.Thread.State: RUNNABLE
2012-07-27_16:52:01.55459 at org.apache.lucene.util.PriorityQueue.downHeap(PriorityQueue.java:239)
2012-07-27_16:52:01.55459 at org.apache.lucene.util.PriorityQueue.pop(PriorityQueue.java:176)
2012-07-27_16:52:01.55459 at org.apache.lucene.index.DirectoryReader$MultiTermEnum.next(DirectoryReader.java:1129)
2012-07-27_16:52:01.55460 at org.apache.lucene.search.FilteredTermEnum.next(FilteredTermEnum.java:77)
2012-07-27_16:52:01.55460 at org.apache.lucene.search.FilteredTermEnum.setEnum(FilteredTermEnum.java:56)
2012-07-27_16:52:01.55461 at org.apache.lucene.search.FuzzyTermEnum.<init>(FuzzyTermEnum.java:121)
2012-07-27_16:52:01.55461 at org.apache.lucene.search.FuzzyQuery.getEnum(FuzzyQuery.java:135)
2012-07-27_16:52:01.55462 at org.apache.lucene.search.MultiTermQuery$RewriteMethod.getTermsEnum(MultiTermQuery.java:74)
2012-07-27_16:52:01.55462 at org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:34)
2012-07-27_16:52:01.55463 at org.apache.lucene.search.TopTermsRewrite.rewrite(TopTermsRewrite.java:58)
2012-07-27_16:52:01.55463 at org.apache.lucene.search.MultiTermQuery.rewrite(MultiTermQuery.java:312)
2012-07-27_16:52:01.55463 at org.apache.lucene.search.vectorhighlight.FieldQuery.flatten(FieldQuery.java:114)
2012-07-27_16:52:01.55464 at org.apache.lucene.search.vectorhighlight.FieldQuery.flatten(FieldQuery.java:104)
2012-07-27_16:52:01.55464 at org.apache.lucene.search.vectorhighlight.FieldQuery.flatten(FieldQuery.java:98)
2012-07-27_16:52:01.55465 at org.apache.lucene.search.vectorhighlight.FieldQuery.flatten(FieldQuery.java:98)
2012-07-27_16:52:01.55465 at org.apache.lucene.search.vectorhighlight.FieldQuery.flatten(FieldQuery.java:98)
2012-07-27_16:52:01.55466 at org.apache.lucene.search.vectorhighlight.FieldQuery.<init>(FieldQuery.java:69)
2012-07-27_16:52:01.55466 at org.apache.lucene.search.vectorhighlight.FastVectorHighlighter.getFieldQuery(FastVectorHighlighter.java:97)
2012-07-27_16:52:01.55466 at org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:388)
2012-07-27_16:52:01.55467 at org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:131)
2012-07-27_16:52:01.55467 at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:186)
2012-07-27_16:52:01.55468 at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
2012-07-27_16:52:01.55468 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1376)
....

 

Looking at the output, I noticed that we had a lot of threads calling FuzzyTermEnum. That struck me as strange, and sounded like an expensive way to search. I talked with the developer, and we expected the tilde character to be ignored by edismax, or at the very least escaped by our library, since it was included in the characters to escape. I checked the request logs, and we had people searching for exact titles that contained ~. On an index our size, that turned a 300ms query into one that timed out. Further inspection of the thread dump revealed that we were also allowing * in query terms; terms like *s ended up being equally problematic.
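
As a hypothetical illustration (the host, port, and terms are assumptions, not our actual traffic), these are the kinds of requests that hit the expensive code paths above:

curl 'http://localhost:8983/solr/select?defType=edismax&qf=title&q=trailer~'   # the trailing ~ is parsed as a FuzzyQuery
curl 'http://localhost:8983/solr/select?defType=edismax&qf=title&q=*s'         # *s expands as a wildcard over the term dictionary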
 

A Solr Surprise

 
We hadn’t sufficiently tested edismax, and were surprised that it still ran ~, +, ^, and * as operators even when they were escaped. I didn’t find any documentation that stated this directly, but I didn’t really expect to. We double checked our Solr library to confirm that it was properly escaping the special characters in the query, but they were still being processed by Solr. On a hunch we tried double escaping the characters, which resolved the issue.
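
Roughly, the workaround amounts to the difference between these two requests (hypothetical; the URL and term are assumptions):

curl 'http://localhost:8983/solr/select' --data-urlencode 'defType=edismax' --data-urlencode 'q=trailer\~'    # single escape: Solr still ran the fuzzy query
curl 'http://localhost:8983/solr/select' --data-urlencode 'defType=edismax' --data-urlencode 'q=trailer\\~'   # double escape: the ~ finally behaved as a literal character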

I’m not sure whether this is a well known problem with edismax, but if you’re seeing odd CPU spikes it is definitely worth checking for. In addition, when trying to get to the root of a tough problem, kill -3 can be a great shortcut. It saved me a bunch of painful debugging and eliminated almost all of my guesswork.


Comments

One response to “Solr Upgrade Surprise and Using Kill To Debug It”

  1. Adam S.

    Thanks Geoffrey for this article; it is really helpful, as I am going to do a migration to 4.0 myself.

    A note about kill -3. First of all, it does not kill the process; it just sends a “signal” to the process. Many times people are afraid to use it because they think it is going to cause the process to terminate.

    Second, it sends the output to the standard out of the process you are sending the signal to, i.e. your java/tomcat process, which may be redirected somewhere, like catalina.out.

    Third, instead of kill -3 you can use jstack. Jstack is bundled with the JDK, although I am not sure whether it is standard across different JDK implementations or specific to Oracle’s.

    Jstack is different in that it prints the thread dump to its own standard out, i.e. the current shell, which is usually more convenient.
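
    For example, a minimal invocation might look like this (the pid is just a placeholder):

    jstack <pid> > solr-threads.txt    # same dump as kill -3, but written to your shell instead of the JVM's stdout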