Wednesday, August 03, 2005

org.jboss.mq.SpyJMSException

org.jboss.mq.SpyJMSException: Could not store message: 2 msg=1 hard 
NOT_STORED PERSISTENT queue=QUEUE.teema_email_queue priority=4 
lateClone=false hashCode=28390332; - nested throwable: 
(java.sql.SQLException: Io exception: Connection reset)

The above exception is thrown when trying to enqueue an object into a JBoss JMS queue backed by an Oracle database.

The cryptic error message is not much help, and neither is Google. Unable to find a stock answer, I started laying down the facts to piece together the puzzle. The first clue is that the error appears only in one particular use case, but not in others. What is the difference, I wondered. One noticeable difference is that the object to be enqueued in the failing use case is quite large compared to those in the other use cases. Maybe the size of the object is the deciding factor here. A quick test case confirmed my suspicion.
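A quick test case along these lines reproduces the failure. This is a hypothetical sketch: the JNDI names and the 4000-byte threshold for the payload sizes are assumptions based on the error message and the explanation below, not code from the original post.

```java
// Hypothetical reproduction test: enqueue payloads on either side of the
// ~4000-byte boundary. JNDI names are assumptions for illustration.
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.InitialContext;

public class BlobSizeTest {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        QueueConnectionFactory factory =
            (QueueConnectionFactory) ctx.lookup("ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("queue/teema_email_queue");

        QueueConnection conn = factory.createQueueConnection();
        QueueSession session =
            conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        QueueSender sender = session.createSender(queue);

        // Small payload: stored inline by Oracle, enqueues fine.
        sender.send(session.createObjectMessage(new byte[1000]));

        // Large payload: stored as a LOB, fails with SpyJMSException
        // on the old driver.
        sender.send(session.createObjectMessage(new byte[10000]));

        conn.close();
    }
}
```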

As I know JBoss saves JMS messages as blobs, this looks like a problem in handling blob data. Googling "oracle blob size" turns up both an explanation and a solution. Apparently the older Oracle thin JDBC drivers require a non-JDBC-compliant approach to handle blob data properly. In other words, calling setBlob() doesn't work as expected, which is exactly what the JBoss JMS implementation appears to be doing. When the data is less than 4000 bytes, Oracle stores it inline, directly in the column, so setBlob() works for small data. For larger data, Oracle stores it as a LOB, and that is when setBlob() breaks and our problem begins. Upgrading to the Oracle 10g driver, which is compatible with an Oracle 9 database, fixes the problem. According to its release notes, the 10g driver adds direct support for lob/clob/blob at the JDBC level.
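The difference between the two drivers can be sketched roughly as follows. The table and column names are made up for illustration, and this is the commonly documented pre-10g workaround rather than JBoss's actual persistence code:

```java
// Sketch of writing a large BLOB with the old 9i thin driver vs. the
// 10g driver. Table/column names (JMS_MESSAGES, ID, DATA) are assumptions.
import java.io.OutputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BlobWrite {

    // Old 9i thin driver: plain setBlob()/setBytes() silently breaks past
    // ~4000 bytes. The workaround is to insert an EMPTY_BLOB() locator,
    // re-select it FOR UPDATE, and stream the bytes through Oracle's
    // proprietary extension class.
    static void writeWith9iDriver(Connection conn, long id, byte[] data)
            throws Exception {
        conn.setAutoCommit(false);

        PreparedStatement ins = conn.prepareStatement(
            "INSERT INTO JMS_MESSAGES (ID, DATA) VALUES (?, EMPTY_BLOB())");
        ins.setLong(1, id);
        ins.executeUpdate();
        ins.close();

        PreparedStatement sel = conn.prepareStatement(
            "SELECT DATA FROM JMS_MESSAGES WHERE ID = ? FOR UPDATE");
        sel.setLong(1, id);
        ResultSet rs = sel.executeQuery();
        rs.next();
        oracle.sql.BLOB blob = (oracle.sql.BLOB) rs.getBlob(1);
        OutputStream out = blob.getBinaryOutputStream();
        out.write(data);
        out.close();
        rs.close();
        sel.close();

        conn.commit();
    }

    // 10g driver: the standard JDBC call works regardless of size,
    // which is why upgrading the driver fixes JBoss without code changes.
    static void writeWith10gDriver(Connection conn, long id, byte[] data)
            throws Exception {
        PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO JMS_MESSAGES (ID, DATA) VALUES (?, ?)");
        ps.setLong(1, id);
        ps.setBytes(2, data); // no 4000-byte limit with the 10g driver
        ps.executeUpdate();
        ps.close();
    }
}
```

Since JBoss calls the standard JDBC API, it effectively depends on the driver behaving like the second method, which is exactly what the 10g driver delivers.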


5 comments:

Sebastiano Pilla said...

I'm not so sure about the Oracle JDBC driver, as I routinely read and write much larger blobs with it. My guess is that it is bad client code, or not reading the docs, as reading or writing blobs in that version is different from reading or writing ints... It may very well be that in the 10g version Oracle made blob handling easier just to end the nonsense (like the koders.com page mentioned in the TSS thread).

Anonymous said...

Thank you.
Your proposed solution was excellent!

Anonymous said...

Thanks a lot...
Very very useful solution!

Anonymous said...

It's a very, very good solution. Thanks a lot.

Anonymous said...

The author is correct; I could not insert blobs into Oracle larger than 4000 bytes. As soon as I upgraded ojdbc14.jar to the latest version, my code started working.