Decommissioning JBoss A-MQ Brokers

There are many reasons why someone might need to decommission a JBoss A-MQ broker. Perhaps you are taking a server down for maintenance. Maybe you’re trying to do an upgrade. Or maybe you’ve scaled up during a peak period and now need to scale back down. In any case, you likely don’t want the messages persisted in that broker’s store to be stuck until you bring things back online. And if you don’t plan to bring things back online, you certainly don’t want them to be lost. So what do you do?

One strategy that I see a lot of people employ is to stop all the producers, and then wait until all the messages get processed by the consumers. This works fine for a lot of customers. And if you’re currently doing it this way (and it’s working), don’t worry. You’re certainly not doing anything wrong. However, this requires a lot of coordination and planning.

It requires coordination and planning because A-MQ (at the broker level) doesn’t really have the ability to stop the production of messages without also stopping the consumption of messages. That’s because the default configuration (which is what most people will use) opens only one transport connector (listener), which services both producers and consumers. You can disconnect individual clients, but if they decide to reconnect, there’s nothing that’s going to stop them. Most people control this on the application side: they shut down all of their producer applications (or at least the initial ingress ones) and wait for the consumers to fully process the existing messages. Like I said, there’s nothing wrong with this approach if it’s working for you. But I’ve effectively shut my entire system down even though I’m only decommissioning a single broker. What if I can’t have that much downtime?

Maybe I could get creative with my clustering and partition my load (i.e., multiple networks of brokers that are separated from each other). Then I’d only affect a single partition of my cluster at a time. My producer clients could fail over and reconnect to another partition during the downtime, so it would seem as if I’m still operational. Then I could swap them back when I’m done, if desired. Definitely a step in the right direction, but I’m still taking down a whole partition of brokers just to decommission one.

Another option would be to open two separate transport connectors (listeners) and have producers connect to one and consumers to the other. I’ve complicated my client code a bit, but maybe that’s ok. It’s not too bad after all… And now I have the ability to shut down the producer transport connector separately from the consumer transport connector on a single broker, thus ensuring that no more messages will be produced to my broker while still allowing them to be consumed. But what if I have a network of brokers set up? I’ll also need to disable my network connectors so that messages don’t get forwarded to me. Ok… we’re getting better… One outstanding problem is that I now have to rely on the locally connected consumers to successfully process all of my messages. How long will I need to wait? How many consumers are even connected to my broker?
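As a sketch, the two-connector setup could look something like this in the broker’s activemq.xml. The connector names and ports here are my own assumptions, not anything A-MQ mandates:

```xml
<!-- Hypothetical example: separate listeners for producers and consumers. -->
<transportConnectors>
    <!-- Producer applications point at 61616; stop this connector to halt ingress. -->
    <transportConnector name="producers" uri="tcp://0.0.0.0:61616"/>
    <!-- Consumer applications point at 61617 and keep draining messages. -->
    <transportConnector name="consumers" uri="tcp://0.0.0.0:61617"/>
</transportConnectors>
```

With something like this in place, you can stop just the “producers” connector (for example, through the broker’s JMX connector MBeans) while the “consumers” connector stays up.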

This brings us to the final solution (and the best, in my opinion). I can take advantage of the fact that ActiveMQ is really just a very flexible set of libraries, and I can create a “message drainer” application to purge my persistent store. What do I mean? Well, first I would create a simple Java app that spins up an embedded broker. I can point that embedded broker at a KahaDB persistent store. Then I can start consuming messages from it (like I would from any broker) and send them off to another broker. And since my embedded broker is local (i.e., inside my JVM), I can just connect to it using the VM transport. So I don’t even have to worry about remote clients. They can’t even see my broker. They will simply fail over to another active broker as soon as I take mine down, with no knowledge that I’m even connected and draining the messages.

Neat! Now I don’t have to worry about coordinating my applications, separating my transport connectors, or bringing down brokers unnecessarily. I can simply bring down the broker that I wish to decommission, run my “message drainer” application, point it at the KahaDB store of my downed broker, and give it the URL of an active broker that I’d like to send my messages off to. Once I’ve finished draining the messages, I can get rid of my broker and its persistent store. Or, if I was just doing maintenance, I can bring the broker back online and it will see its store with no messages, so there’s no need to worry about duplicates. This solution is simple, requires no unnecessary downtime, and can be used in any situation from performing maintenance to down-scaling. Here’s some sample source code to get you started. Enjoy!
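Here is a minimal sketch of what such a drainer might look like, assuming the ActiveMQ broker and JMS client libraries are on the classpath. The KahaDB path, the active broker’s URL, and the queue name are all placeholders you’d replace with your own:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

public class MessageDrainer {
    public static void main(String[] args) throws Exception {
        // Boot an embedded broker over the decommissioned broker's data
        // directory (the KahaDB store lives underneath it). Path is a placeholder.
        BrokerService broker = new BrokerService();
        broker.setBrokerName("drainer");
        broker.setDataDirectory("/path/to/old-broker/data");
        broker.setPersistent(true);
        broker.setUseJmx(false);
        broker.start();

        // Connect locally over the VM transport -- no remote client can see us.
        ConnectionFactory localFactory =
                new ActiveMQConnectionFactory("vm://drainer?create=false");
        // Forward to an active broker (hypothetical URL).
        ConnectionFactory remoteFactory =
                new ActiveMQConnectionFactory("tcp://active-broker:61616");

        Connection localCon = localFactory.createConnection();
        localCon.start();
        Connection remoteCon = remoteFactory.createConnection();

        // CLIENT_ACKNOWLEDGE locally, a transacted session remotely, so a
        // message is only removed from the old store after the forward commits.
        Session localSession = localCon.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        Session remoteSession = remoteCon.createSession(true, Session.SESSION_TRANSACTED);

        String queueName = "MY.QUEUE"; // assumed queue name
        MessageConsumer consumer = localSession.createConsumer(
                localSession.createQueue(queueName));
        MessageProducer producer = remoteSession.createProducer(
                remoteSession.createQueue(queueName));

        // Drain until the store is empty, using a receive timeout as the
        // stop signal (gives at-least-once delivery to the active broker).
        Message msg;
        while ((msg = consumer.receive(1000)) != null) {
            producer.send(msg);
            remoteSession.commit(); // make the forward durable first...
            msg.acknowledge();      // ...then remove it from the local store
        }

        localCon.close();
        remoteCon.close();
        broker.stop();
    }
}
```

In a real drainer you’d likely loop over every destination in the store rather than a single named queue, but the structure is the same.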


Josh Reagan
