Apache Camel is a pretty full-featured EIP implementation framework. It has several existing strategies for load-balancing right out of the box. Round Robin, Random, Sticky, Weighted Round Robin, Weighted Random,… the list goes on and on. But being that it’s a very well written and pluggable framework, it also gives you the ability to drop in your own custom strategies should you find that none of the existing ones meet your specific needs. So for this post, I created a custom Camel Load Balancer implementation utilizing an Infinispan cache to dynamically discover and load-balance between destination endpoints.
The sample code for this post can be found at https://github.com/joshdreagan/infinispan-discovery
Why would I do such a thing? Well… there are a few good reasons.
First, all of the existing load-balancer strategies work on a static list. So if I know all of my endpoints ahead of time, no problem. I just code them into my Camel route. But what if my list of endpoints changes between environments? Maybe I could use properties. Well… only if the number of endpoints is static. Which brings me to the next reason…
All of the existing load-balancer strategies are configured at startup. So what do I do if my list changes dynamically at runtime? Let’s say that I want to do service discovery and load-balance between the currently active/registered backend services. If you’re familiar with Camel, you might be thinking “Why not just use the Camel Fabric Component? It does dynamic load-balancing and service discovery. Problem solved, right?” If all of my services are running in containers that are managed by Fabric8, that is a viable solution. But what if I want to discover some endpoints that are running on JBoss EAP instances? Or what if I’m not running a Fabric8 ensemble at all?
Finally, the most important reason… Because I can. :)
Creating a custom Camel load balancer implementation is fairly straightforward. You just create a class and implement the LoadBalancer interface. There’s even a base class (LoadBalancerSupport) that you can extend, which takes care of some of the boilerplate coding for you. Then you just fill in the details of how it picks the next endpoint from its internal list. Pretty simple, right?
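To make that concrete, here’s a minimal sketch of what a custom strategy can look like. This assumes the Camel 2.x API, where QueueLoadBalancer is a LoadBalancerSupport subclass that handles the bookkeeping and just asks you to choose the next processor; the class name and (trivial) strategy below are made up for illustration.

```java
import java.util.List;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.processor.loadbalancer.QueueLoadBalancer;

// A trivial custom strategy: always pick the first processor in the list.
// All of the list management is inherited from the base class.
public class FirstAvailableLoadBalancer extends QueueLoadBalancer {

    @Override
    protected Processor chooseProcessor(List<Processor> processors, Exchange exchange) {
        return processors.get(0);
    }
}
```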
In my case, however, I’m not actually coming up with my own strategy for how to pick endpoints. I’m really just augmenting some existing strategy with a dynamic list of endpoints. So to be more specific, I’m not interested in implementing my own flavor of the Random, Round Robin, Sticky, … strategies. No need to reinvent that wheel. Instead, I just want to decorate those existing strategies and provide them with some additional capabilities. So I use the decorator pattern. That allows me to ignore all the tomfoolery of the load-balancing itself and concentrate on the portion that I really want: the dynamic service discovery.
Here’s my custom load-balancer class (or at least the important parts):
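The full class lives in the sample repository; the sketch below just shows the shape of the decorator. The class and field names are illustrative (not necessarily the ones in the repo), and it assumes the Camel 2.x API, where LoadBalancer extends AsyncProcessor.

```java
import java.util.List;

import javax.cache.Cache;

import org.apache.camel.AsyncCallback;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.processor.loadbalancer.LoadBalancer;

// Decorates an existing LoadBalancer (round robin, random, ...) and keeps
// its processor list in sync with endpoint URIs discovered in a JCache cache.
public class CacheDiscoveryLoadBalancer implements LoadBalancer {

    private final LoadBalancer delegate;       // e.g. a RoundRobinLoadBalancer
    private final Cache<String, String> cache; // fully-configured JCache instance

    public CacheDiscoveryLoadBalancer(LoadBalancer delegate, Cache<String, String> cache) {
        this.delegate = delegate;
        this.cache = cache;
        // Registration of the cache listener (see LookupCacheListener below)
        // is omitted here for brevity.
    }

    // Structural methods pass straight through to the decorated strategy.
    @Override
    public void addProcessor(Processor processor) { delegate.addProcessor(processor); }

    @Override
    public void removeProcessor(Processor processor) { delegate.removeProcessor(processor); }

    @Override
    public List<Processor> getProcessors() { return delegate.getProcessors(); }

    // The load-balancing itself is delegated too; the decorator adds no
    // routing logic of its own.
    @Override
    public void process(Exchange exchange) throws Exception {
        delegate.process(exchange);
    }

    @Override
    public boolean process(Exchange exchange, AsyncCallback callback) {
        return delegate.process(exchange, callback);
    }
}
```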
You can see that it’s just delegating most of the methods (e.g., the removeProcessor methods) to whatever existing implementation it’s decorating. The actual methods that do the load-balancing (i.e., the process methods) do a little bit of work, but end up just delegating as well. So I didn’t actually have to do any algorithm work, and I still get to use all the existing strategies. Pretty neat!
In addition to a delegate LoadBalancer implementation, this class expects that you will give it a fully-configured JCache instance. In my example, I used Infinispan, but I could just as easily have used any other spec-compliant implementation. Here’s my Infinispan configuration:
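I won’t reproduce the exact file here, but a replicated cache with a cluster transport along these lines is the general idea. Element names and schema details vary across Infinispan versions, so treat this as a sketch rather than the repo’s actual config:

```xml
<infinispan>

  <!-- Cluster transport so the Camel and WildFly sides can see each other -->
  <jgroups>
    <stack-file name="udp" path="default-configs/default-jgroups-udp.xml"/>
  </jgroups>

  <cache-container default-cache="lookup">
    <transport stack="udp"/>
    <!-- The cache that holds the registered endpoint URIs -->
    <replicated-cache name="lookup" mode="SYNC"/>
  </cache-container>

</infinispan>
```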
Now let’s get to the part that’s actually doing some work. The LookupCacheListener class just implements the various CacheEntryListener interfaces from the JCache API. If it gets any events on the cache entries containing our endpoints, it simply updates the delegate’s internal list of processors. So as services come and go, they can register their URIs in the cache, our listener will be notified, and our list of available load-balancer endpoints will be updated.
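A sketch of what such a listener can look like. This assumes one cache entry per endpoint URI; the exact key/value layout and the class internals are in the sample repo.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.cache.event.CacheEntryCreatedListener;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryListenerException;
import javax.cache.event.CacheEntryRemovedListener;

import org.apache.camel.CamelContext;
import org.apache.camel.Processor;
import org.apache.camel.processor.loadbalancer.LoadBalancer;

// Keeps the delegate load balancer's processor list in sync with the
// endpoint URIs registered in the cache.
public class LookupCacheListener implements
        CacheEntryCreatedListener<String, String>,
        CacheEntryRemovedListener<String, String> {

    private final CamelContext camelContext;
    private final LoadBalancer delegate;
    // Remember which Processor was created for each URI so it can be removed later.
    private final Map<String, Processor> processors = new ConcurrentHashMap<>();

    public LookupCacheListener(CamelContext camelContext, LoadBalancer delegate) {
        this.camelContext = camelContext;
        this.delegate = delegate;
    }

    @Override
    public void onCreated(Iterable<CacheEntryEvent<? extends String, ? extends String>> events)
            throws CacheEntryListenerException {
        for (CacheEntryEvent<? extends String, ? extends String> event : events) {
            try {
                // Resolve the registered URI to a producer and add it to the delegate.
                Processor producer = camelContext.getEndpoint(event.getValue()).createProducer();
                processors.put(event.getValue(), producer);
                delegate.addProcessor(producer);
            } catch (Exception e) {
                throw new CacheEntryListenerException(e);
            }
        }
    }

    @Override
    public void onRemoved(Iterable<CacheEntryEvent<? extends String, ? extends String>> events)
            throws CacheEntryListenerException {
        for (CacheEntryEvent<? extends String, ? extends String> event : events) {
            Processor producer = processors.remove(event.getValue());
            if (producer != null) {
                delegate.removeProcessor(producer);
            }
        }
    }
}
```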
The final piece to discuss for this load-balancer implementation is the UriPreProcessor. This is an interface I created to allow an implementation to customize a URI in some way before it’s added to the list. The idea is that the services registering themselves might not know that they’re going to be invoked from a Camel endpoint, so they likely won’t add options like bridgeEndpoint=true to the URI. An implementation of this interface lets you add such options on their behalf. Here’s the interface itself:
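The method name and signature below are my guess at the shape from the description above; the actual interface is in the sample repo.

```java
// A sketch of the UriPreProcessor contract: take the raw URI that a service
// registered in the cache and return the URI Camel should actually use.
public interface UriPreProcessor {
    String preProcess(String uri);
}
```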
And here’s a sample implementation that adds the options:
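Something along these lines. The class name is illustrative, and the interface is restated here (package-private) so the snippet stands alone; the repo’s real implementation may add further options.

```java
// Assumed shape of the UriPreProcessor interface described above.
interface UriPreProcessor {
    String preProcess(String uri);
}

// Appends bridgeEndpoint=true so Camel's HTTP component acts as a pure
// proxy when invoking the discovered service.
public class HttpBridgeUriPreProcessor implements UriPreProcessor {

    @Override
    public String preProcess(String uri) {
        // Append with '?' or '&' depending on whether a query string already exists.
        String separator = uri.contains("?") ? "&" : "?";
        return uri + separator + "bridgeEndpoint=true";
    }
}
```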
Now all that’s left is to actually use it in my Camel routes. To do so, I declare it just like any other bean. Then I use the custom element in my route’s loadBalance definition and ref it. Looks something like this:
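A sketch of the Spring XML wiring. The bean ids, class name, consumer endpoint, and delegate/cache beans here are illustrative; the sample repo has the real configuration.

```xml
<bean id="lookupLoadBalancer" class="org.example.CacheDiscoveryLoadBalancer">
  <!-- Decorates a stock strategy and watches the JCache for endpoints -->
  <constructor-arg ref="roundRobinDelegate"/>
  <constructor-arg ref="lookupCache"/>
</bean>

<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="jetty:http://0.0.0.0:9090/proxy"/>
    <loadBalance>
      <!-- ref points at the bean declared above -->
      <custom ref="lookupLoadBalancer"/>
    </loadBalance>
  </route>
</camelContext>
```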
That’s it for the Camel side of things. Now let’s discuss how to get some services registered.
In my example, I just created a simple JAX-WS service in JBoss WildFly. Here’s the code so you can see how simple it is:
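Something like the following (the real class is in the repo; the service and method names here are illustrative):

```java
import javax.jws.WebMethod;
import javax.jws.WebService;

// WildFly publishes this endpoint automatically when the WAR is deployed.
@WebService
public class EchoService {

    @WebMethod
    public String echo(String message) {
        return message;
    }
}
```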
For this service, I created a ServletContextListener to register/unregister its URI to/from the JCache.
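Sketched out, it looks something like this. The hard-coded host/port, the cache name, and the one-entry-per-URI layout are assumptions for illustration; the sample repo derives these properly.

```java
import javax.cache.Cache;
import javax.cache.Caching;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Registers this service's URI in the shared cache on deploy and removes it
// on undeploy, so the Camel side discovers it via cache events.
@WebListener
public class RegistrationListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        String uri = serviceUri(sce);
        lookupCache().put(uri, uri); // assumed layout: one entry per endpoint URI
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        lookupCache().remove(serviceUri(sce));
    }

    private String serviceUri(ServletContextEvent sce) {
        // Assumption: host/port are known up front; the real sample would
        // derive them from the container configuration.
        return "http://localhost:8080" + sce.getServletContext().getContextPath() + "/EchoService";
    }

    private Cache<String, String> lookupCache() {
        // Standard JCache lookup of the Infinispan-backed cache.
        return Caching.getCachingProvider().getCacheManager()
                .getCache("lookup", String.class, String.class);
    }
}
```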
So now when my ServletContext is started, my JAX-WS service URI will be registered, and when it is stopped, the URI will be removed. Since I configured my Infinispan cache the same way on both the JBoss WildFly and Camel sides, the local cache instances are connected and will receive events and updates.
That’s it! If you want to give it a go, check out the full source code at https://github.com/joshdreagan/infinispan-discovery. Hopefully it’s useful…