Message Queues
The approach I settled on is one that uses message queues between the OSGi services. This is roughly the equivalent of inter-process message queues in UNIX System V or Unix domain sockets in BSD.
When the messages themselves are immutable objects, this avoids almost all of the race conditions and deadlocks that shared objects can cause. The trade-off is speed: even with non-blocking queues, copying or moving large numbers of objects (or, worse, using any form of object serialization) is much slower than using shared memory. If the message objects are big, it may be better to skip serialization and accept the risk of two services referring to the same object at the same time.
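To illustrate why immutability sidesteps the race conditions: a message whose fields are all final can be handed to another service without locking, since no one can change it after construction. This is a hypothetical sketch (the class name and fields are my own, not from the original design):

```java
// Hypothetical immutable message: all fields are final and there are
// no setters, so once constructed it can safely cross thread boundaries
// without any synchronization.
public final class StatusMessage {
    private final String source;
    private final long timestamp;

    public StatusMessage(String source, long timestamp) {
        this.source = source;
        this.timestamp = timestamp;
    }

    public String getSource() { return source; }
    public long getTimestamp() { return timestamp; }
}
```

Because the object can never change, passing a reference through a queue is as good as passing a copy, which is what makes the in-JVM approach cheap compared to serialization.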
As for which library to use to implement those queues, there are many high-performance options to choose from, though few of them offer a way of running in “embedded mode” within the same JVM that runs the OSGi container.
There’s Open Message Queue for Java, which can run in embedded mode, but it’s quite big, complex and “Enterprise-y”. There’s also Kestrel, a small, minimalist implementation that can likewise run embedded, but it requires Scala, which I won’t even try running within an OSGi module.
But these libraries are network-oriented, which not only makes their implementation more complex but also slower in the special case where all the messages remain within the same JVM. So, in the end, I just used the java.util.concurrent.BlockingQueue built into Java 5 and made my own simple message-passing interface around it.
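A minimal sketch of such a wrapper might look like the following. The class and method names here are my own invention, not the actual interface from the post; it simply shows how a BlockingQueue (here a LinkedBlockingQueue) can back a tiny publish/take API between two services:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical message channel between two OSGi services, backed by a
// Java 5 BlockingQueue. One service publishes, the other takes; both
// operations block, so no explicit locking is needed.
public final class MessageChannel<T> {
    private final BlockingQueue<T> queue = new LinkedBlockingQueue<T>();

    // Called by the producing service; blocks if the queue is full
    // (never, for an unbounded LinkedBlockingQueue).
    public void publish(T message) throws InterruptedException {
        queue.put(message);
    }

    // Called by the consuming service; blocks until a message arrives.
    public T take() throws InterruptedException {
        return queue.take();
    }
}
```

Since everything stays inside one JVM, messages are passed by reference with no serialization, which is exactly the fast path the network-oriented libraries can’t take advantage of.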
Published on August 17, 2010 at 14:03 EDT