by R.I. Pienaar | Dec 13, 2011 | Code
This is an ongoing post in a series of posts about Middleware for Stomp users; please read parts 1, 2 and 3 of this series first before continuing below.
Back in Part 2 we wrote a little system to ship metrics from nodes into Graphite via Stomp. This solved the problem as stated then, but now let's see what to do when our needs change.
Graphite is like RRD in that it summarizes data over time and eventually discards old data. Contrast that with OpenTSDB, which never summarizes or deletes data and can store billions of data points. Imagine we want to use Graphite as a short term reporting service but we also need to store the data long term without losing anything. So we really want to store the data in 2 locations.
We have a few options open to us:
- Send the metric twice from every node, once to Graphite and once to OpenTSDB.
- Write a software router that receives metrics on one queue and then route the metric to 2 other queues in the middleware.
- Use facilities internal to the middleware to do the routing for us
The first option is an obviously bad idea and should just be avoided – it would be the worst case scenario for data collection at scale. The 3rd seems like the natural choice here, but first we need to understand the facilities the middleware provides. Today's article will explore what ActiveMQ can do for you in this regard.
The 2nd seems an odd fit, but as you'll see below the capabilities for internal routing at the middleware layer aren't all that exciting – useful in some cases, but I think most projects will reach for some kind of message router in code sooner or later.
Virtual Destinations
If you think back to part 2 you'll remember we have a publisher that publishes data into a queue and any number of consumers that consume from the queue. The queue will load balance the messages for us, thus helping us scale.
In order to also create OpenTSDB data we essentially need to double up the consumer side into 2 groups. Ideally each set of consumers will be horizontally scalable, and both sets should get a copy of all the data – in other words we need 2 queues that each hold all the data, one for Graphite and one for OpenTSDB.
You will also remember that Topics have the behavior of duplicating the data they receive to all consumers of the topic. So really what we want is to attach 2 queues to a single topic. This way the topic will duplicate the data and the queues will be used for the scalable consumption of the data.
ActiveMQ provides a feature called Virtual Topics that solves this exact problem by convention. You publish messages to a predictably named topic and then you can create any number of queues that will all get a copy of that message.
The convention works like this:
- Publish to /topic/VirtualTopic.metrics
- Create consumers for /queue/Consumer.Graphite.VirtualTopic.metrics
Create as many of these consumer queues as you want, swapping Graphite for some unique name; each of the resulting queues will behave like a normal queue with all the load balancing, storage and other queue-like behaviors, but all the queues will get a copy of all the data.
You can customize the name pattern of these queues by changing the ActiveMQ configuration files. I really like this approach vs the approaches found in other brokers, since it is all done by convention and you do not need to change your code to set up a bunch of internal structures that describe the routing topology. I consider routing topology that lives in consumer code to be a form of hard coding. Using this approach all I need to do is make sure the names of the destinations to publish to and consume from are configurable strings.
Our Graphite consumer would not need to change, other than the name of the queue it reads from, and ditto for the producer.
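To make this concrete, here is a minimal sketch using the Ruby Stomp gem – connection setup is omitted as in the earlier parts, and the metric body reuses the format from part 2:

# the producer publishes each metric exactly once, to the virtual topic
stomp.publish("/topic/VirtualTopic.metrics", "devco.net.load1 0.1 1323597139")

# Graphite consumers subscribe to their queue, OpenTSDB consumers to theirs;
# each named queue receives a copy of every message, load balanced across
# the subscribers of that queue
stomp.subscribe("/queue/Consumer.Graphite.VirtualTopic.metrics")
stomp.subscribe("/queue/Consumer.OpenTSDB.VirtualTopic.metrics")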
If we found that we simply could not change the code for the consumers/producer, or if the destination names just were not configurable settings, we could still achieve this behavior using something called Composite Destinations in ActiveMQ, which can describe this behavior purely in the config file with arbitrarily named queues and topics.
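As a rough illustration – treat this as a sketch and check the ActiveMQ documentation for your version – a composite topic that fans out to two arbitrarily named queues is declared in activemq.xml along these lines:

<destinationInterceptors>
  <virtualDestinationInterceptor>
    <virtualDestinations>
      <compositeTopic name="metrics">
        <forwardTo>
          <queue physicalName="metrics.graphite" />
          <queue physicalName="metrics.opentsdb" />
        </forwardTo>
      </compositeTopic>
    </virtualDestinations>
  </virtualDestinationInterceptor>
</destinationInterceptors>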
Selective Consumers
Imagine we wish to give each one of our thousands of servers a unique destination on the middleware so that we can send machines a command directly. You could simply create queues like /queue/nodes.web1.example.com and keep creating queues per server.
The problem with this approach is that internally to ActiveMQ each queue is a thread. So you’d be creating thousands of threads – not ideal.
As we saw before in Part 3 messages can have headers – there we used the reply-to header. Below you’ll find some code that sets an arbitrary header:
stomp.publish("/queue/nodes", "service restart httpd", {"fqdn" => "web1.example.com"}) |
stomp.publish("/queue/nodes", "service restart httpd", {"fqdn" => "web1.example.com"})
We are publishing a message with the text service restart httpd to a queue and we are setting an fqdn header.
Now if every server in our estate subscribed to this one queue then, with the knowledge of Queues you have at this point, this restart request would be sent to some random one of our servers – not ideal!
The JMS specification allows for something called selectors to be used while subscribing to a destination:
stomp.subscribe("/queue/nodes", {"selector" => "fqdn = 'web1.example.com'"}) |
stomp.subscribe("/queue/nodes", {"selector" => "fqdn = 'web1.example.com'"})
The selector header sets the logic applied to every message to decide whether you get the message on your subscription or not. The selector language is defined by the SQL 92 standard and you can generally apply logic to any header in the message.
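Since it is SQL 92 you can build richer expressions too; a hedged example, assuming the producer also set a role header on each message:

# assumes a "role" header was set on publish; LIKE and AND are standard SQL 92
stomp.subscribe("/queue/nodes", {"selector" => "role = 'webserver' AND fqdn LIKE '%.example.com'"})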
This way we set up a queue for all our servers without the overhead of 1000s of threads.
The choice between a queue like this and a traditional queue comes down to weighing the overhead of validating all the SQL statements against that of creating all the threads. There are also some side effects if you have a cluster of brokers – the queue traffic gets duplicated to all cluster brokers, whereas with traditional queues the traffic only gets sent to a broker if that broker actually has subscribers interested in the data.
So you need to carefully consider the implications and do some tests with your work load, message sizes, message frequencies, amount of consumers etc.
Conclusion
There is a 3rd option that combines these 2 techniques: you'd create queues sourcing from the topic, with JMS Selectors deciding what data hits which queue. You would set up this arrangement in the ActiveMQ config file.
This, as far as I am aware, covers all the major areas internal to ActiveMQ that you can use to apply some routing and duplication of messages.
These methods are useful and solve some problems, but as I pointed out they're not really that flexible. In a later part of this series I will look into software routers built with tools like Apache Camel, and at how to write your own.
From a technology choice point of view, future self is now thanking past self for building the initial metrics system on MOM: rather than going back to the drawing board when our needs changed, we were able to solve our problems because we built on a flexible foundation using well known patterns – and without changing much, if any, actual code.
This series continues in part 5.
by R.I. Pienaar | Dec 12, 2011 | Code
Yesterday I showed a detailed example of an Asynchronous system using MOM. Please read part 1 and part 2 of this series first before continuing below.
The system shown yesterday was Asynchronous since there is no coupling, no conversation or time constraints. The Producer does not know or care what happens to the messages once Published or when that happens. This is a kind of nirvana for distributed systems but sadly it’s just not possible to solve every problem using this pattern.
Today I'll show how to use MOM technologies to solve a different kind of problem. Specifically I will show how large retailers scale their web properties using these technologies to create web sites that are more resilient to failure, easier to scale and easier to manage.
Imagine you are just hitting buy on some online retailer's web page; perhaps they have implemented a 1-click buying system where the very next page is a Thank You page showing some details of your order and also recommendations of other things you might like. It would have some personalized menus and in some cases even a personalized look and feel.
By the time you see this page your purchase is not complete – it is still going on in the background – but you got a fast acknowledgement back and are immediately being enticed to spend more money on relevant products.
To achieve this in a PHP or Rails world you would typically have a page that runs top down and does things like generate your CSS, generate your personalized menu, write some record into a database – perhaps for a system like delayed job to process the purchase later on – and finally run a bunch of SQL queries to find the related items.
This approach is very difficult to scale: all the hard work happens in your front controller, which has to communicate with every technology you chose in the backend, and you end up with a huge monolithic chunk of code that can rapidly become a nightmare. If you need more capacity to render the recommendations you have no choice but to scale up the entire stack.
The better approach is to decouple all of the bits needed to generate a web page. Taking the narrative above, you would have small single purpose services that do the following:
- Take a 1-click order request, save it and provide an order number back. Start an Asynchronous process to fulfill the order.
- Generate CSS for the custom look and feel for user X
- Generate Menu for logged in user X
- Generate recommendation for product Y based on browsing history for user X
Here we have 4 possible services that could exist on the network and that do not really relate to each other in any way. They are decoupled, do not share state with each other and can do their work in parallel independently from each other.
Your front controller now becomes something that simply publishes requests to the MOM for each of the 4 services, providing just the information each service needs, and then waits for the responses. Once all 4 responses are received the page is assembled and rendered. If some response does not arrive in time a graceful failure can be done – render a generic menu, or drop the recommendations and only show the Thank You text.
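A rough sketch of that conversation with the Ruby Stomp gem – the queue names, the request_for helper and the 2 second budget are all illustrative assumptions, and a real implementation would also correlate responses to requests:

require 'timeout'

# subscribe once to a temporary queue that all replies will arrive on
stomp.subscribe("/temp-queue/page_replies")

# fan the requests out to the 4 hypothetical services;
# request_for is a hypothetical helper building each service's payload
["order", "css", "menu", "recommendations"].each do |service|
  stomp.publish("/queue/#{service}", request_for(service),
                {"reply-to" => "/temp-queue/page_replies"})
end

responses = []

# gather replies until all 4 arrive or the time budget runs out
begin
  Timeout::timeout(2) do
    responses << stomp.receive.body while responses.size < 4
  end
rescue Timeout::Error
  # graceful failure: render generic fallbacks for the missing fragments
end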
There are many benefits to this approach, I’ll highlight some I find compelling below:
- You can scale each Service independently based on performance patterns – more order processors as this requires slower ACID writes into databases etc.
- You can use different technology where appropriate. Your payment systems might be .Net while your CSS generation is in Node.JS and recommendations are in Java
- Each system can be thought of as a simple standalone island with its own tech stack, monitoring etc; thinking about and scaling small components is much easier than doing so for a monolithic system
- You can separate your payment processing from the rest of your network for PCI compliance by only allowing the middleware to connect into a private subnet where all actual credit information lives
- Individual services can be upgraded or replaced with new ones much more easily than in a monolithic system, making the lives of Operations much better and lowering risk in ongoing maintenance of the system.
- Individual services can be reused – the recommendation engine isn't just an engine that gets called at the end of a sale but also while browsing through the store; the same service can serve both types of request
This pattern is often known as Request Response or similar terms. You should only use it when absolutely needed, as it increases coupling and effectively turns your service into a Synchronous system, but it does have its uses and advantages as seen above.
Sample Code
I'll show 2 quick samples of how this conversation flow works in code and expand a bit into the details with regard to the ActiveMQ JMS Broker. The examples will just have the main working part of the code, not the bits that set up connections to the brokers etc; look in part 2 for some of that.
My example will create a service that generates random data using OpenSSL; maybe you have some reason to create a very large number of these and you need to distribute the work across many machines so you do not run out of entropy.
As this is basically a Client / Server relationship I will use these terms. First the client part – the part that requests a random number from the server:
stomp.subscribe("/temp-queue/random_replies")
stomp.publish("/queue/random_generator", "random", {"reply-to" => "/temp-queue/random_replies"})
Timeout::timeout(2) do
msg = stomp.receive
puts "Got random number: #{msg.body}"
end |
stomp.subscribe("/temp-queue/random_replies")
stomp.publish("/queue/random_generator", "random", {"reply-to" => "/temp-queue/random_replies"})
Timeout::timeout(2) do
msg = stomp.receive
puts "Got random number: #{msg.body}"
end
This is pretty simple; the only new thing here is that we first subscribe to a Temporary Queue that we will receive the responses on, and we send the request including this queue name. More detail on temp queues and temp topics follows below. The timeout part is important: you need it to handle the case where all of the number generators have died, or where the service is just too overloaded to service the request.
Here is the server part; it gets a request, generates the number and replies to the reply-to destination.
require 'openssl'

stomp.subscribe("/queue/random_generator")

loop do
  begin
    msg = stomp.receive
    number = OpenSSL::Random.random_bytes(8).unpack("Q").first
    # reply to the temporary queue named in the request's reply-to header
    stomp.publish(msg.headers["reply-to"], number.to_s)
  rescue
    puts "Failed to generate random number: #{$!}"
  end
end
You could start instances of this code on 10 servers and the MOM will load share the requests across the workers thus spreading out the available entropy across 10 machines.
When run this will give you nice big random numbers like 11519368947872272894. The web page in our example would follow a very similar process only it would post the requests to each of the services mentioned and then just wait for all the responses to come in and render them.
Temporary Destination
The big thing here is that we're using a Temporary Queue for the replies. The behavior of temporary destinations differs from broker to broker, and how the Stomp library needs to be used also changes. For ActiveMQ the behavior, special headers etc can be seen in their docs.
When you subscribe to a temporary destination as the client code above does, ActiveMQ internally sets up a queue with your connection as its exclusive subscriber. Internally the name will be something else entirely from what you gave it; it is unique and exclusive to you. Here is an example of a Temporary Queue set up on a remote broker:
/remote-temp-queue/ID:stomp1.us.xx.net-39316-1323647624072-3:3005:1
If you were to puts the contents of msg.headers[“reply-to”] in the server code you would see the translated queue name as above. The broker does this transparently for you.
Other processes can write to this unique destination, but your connection is the only one able to consume messages from it. As soon as your connection closes or you unsubscribe from it, the broker will free the queue and delete any messages on it, and anyone else trying to write to it will get an exception.
Temporary queues and this magical translation work even across a cluster of brokers, so you can spread this out geographically and it will still work.
Setting up a temporary queue and informing a network of brokers about it is a costly process, so you should always set up a temporary queue early in the process lifetime and reuse it for all the work you have to do.
If you need to correlate responses to their requests you should use the correlation-id header: set it on the request, and when constructing the reply read it from the request and set it again on the reply.
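A minimal sketch of that flow – the id generation scheme here is just illustrative:

# client: tag the request with a unique correlation-id
id = "#{Socket.gethostname}-#{$$}-#{Time.now.to_f}"
stomp.publish("/queue/random_generator", "random",
              {"reply-to" => "/temp-queue/random_replies", "correlation-id" => id})

# server: copy the correlation-id from the request onto the reply
stomp.publish(msg.headers["reply-to"], number.to_s,
              {"correlation-id" => msg.headers["correlation-id"]})

# client: match each reply to the request it answers
reply = stomp.receive
puts "Reply to #{reply.headers['correlation-id']}: #{reply.body}"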
This series continues in part 4.
by R.I. Pienaar | Dec 11, 2011 | Code
Yesterday I gave a quick intro to the basics of Message Orientated Middleware; today we'll build something kewl and useful.
Graphite is a fantastic statistics-as-a-service package for your network. It can store, graph, slice and dice your time series data in ways that were only imaginable in the dark days of just having RRD files. The typical way to get data into it is to just talk to its socket and send some metric. This mostly works great but has some issues:
- You have a huge network and so you might be able to overwhelm its input channel
- You have strict policies about network connections and are not allowed to have all servers open a connection to it directly
- Your network is spread over the globe and sometimes the connections are just not reliable, but you do not wish to lose metrics during this time
Graphite already solves this by having an AMQP input channel, but for the sake of seeing how we might solve these problems ourselves I'll show how to build your own Stomp based system to do this.
We will allow all our machines to Produce messages into the Queue and we will have a small pool of Consumers that read the queue and speak to Graphite using the normal TCP protocol. We’d run Graphite and the Consumers on the same machine to give best possible availability to the TCP connections but the Middleware can be anywhere. The TCP connections to Graphite will be persistent and be reused to publish many metrics – a connection pool in other words.
Producer
So first the Producer side of things; this is a simple CLI tool that takes a metric and value on the command line and publishes it.
#!/usr/bin/ruby

require 'rubygems'
require 'socket'
require 'timeout'
require 'stomp'

raise "Please provide a metric and value on the command line" unless ARGV.size == 2
raise "The metric value must be numeric" unless ARGV[1] =~ /^[\d\.]+$/

msg = "%s.%s %s %d" % [Socket.gethostname, ARGV[0], ARGV[1], Time.now.utc.to_i]

begin
  Timeout::timeout(2) do
    stomp = Stomp::Client.new("", "", "stomp.example.com", 61613)
    stomp.publish("/queue/graphite", msg)
    stomp.close
  end
rescue Timeout::Error
  STDERR.puts "Failed to send metric within the 2 second timeout"
  exit 1
end
This is really all there is to sending a message to the middleware; you'd just run it like this:
producer.rb load1 `cat /proc/loadavg|cut -f1 -d' '`
Which would result in a message being sent with the body
devco.net.load1 0.1 1323597139
Consumer
The consumer part of this conversation is not a whole lot more complex; you can see it below:
#!/usr/bin/ruby

require 'rubygems'
require 'socket'
require 'stomp'

# reuse one persistent TCP connection to Graphite for all metrics
def graphite
  @graphite ||= TCPSocket.open("localhost", 2003)
end

client = Stomp::Connection.new("", "", "stomp.example.com", 61613, true)
client.subscribe("/queue/graphite")

loop do
  begin
    msg = client.receive
    graphite.puts msg
  rescue
    STDERR.puts "Failed to receive from queue: #{$!}"
    sleep 1
    retry
  end
end
This subscribes to the queue and loops forever, reading messages that then get sent to Graphite using a normal TCP socket. This should be made a bit more robust using the transaction properties I mentioned, since a crash here will lose a single message.
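As a sketch of one way to close that window – assuming ActiveMQ's client acknowledgement mode, where the broker redelivers any message that was not acknowledged before a crash:

# subscribe in client-ack mode so the broker keeps each message
# until we explicitly acknowledge it
client.subscribe("/queue/graphite", {"ack" => "client"})

loop do
  msg = client.receive
  graphite.puts msg
  # only acknowledge once Graphite has the data; if we crash before
  # this point the broker will redeliver the message
  client.ack(msg.headers["message-id"])
end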
Results
So that is really all there is to it! You'd totally want to make the receiving end a bit more robust: make it a daemon, perhaps using the Daemons or Dante Gems, and add some logging. You'd agree though that this is extremely simple code that anyone could write and maintain.
This code has a lot of non-obvious properties though, simply because we use the Middleware for communication:
- It’s completely decoupled, the Producers don’t know anything about the Consumers other than the message format.
- It’s reliable because the Consumer can die but the Producers would not even be aware or need to care about this
- It’s scalable – by simply starting more Consumers you can consume messages from the queue quicker and in a load balanced way. Contrast this with perhaps writing a single multi threaded server with all that entails.
- It’s trivial to understand how it works and the code is completely readable
- It protects my Graphite from the Thundering Herd Problem by using the middleware as a buffer and only creating a manageable pool of writers to Graphite
- It’s language agnostic, you can produce messages from Perl, Ruby, Java etc
- The network layer can be made resilient without any code changes
You wouldn't think these 44 lines of code could have all these properties, but they do, and this is why I think this style of coding is particularly well suited to Systems Administrators. We are busy people, we do not have time to implement from scratch our own connection pooling, buffers, spools and everything else you would need to duplicate these points. We have 20 minutes and we just want to solve our problem. Languages like Ruby and technologies like Message Orientated Middleware let you do this.
I'd like to expand on one aspect a bit – I mentioned that the network topology can change without the code being aware of it and that we might have restrictive firewalls preventing everyone from communicating with Graphite. Our 44 lines of code solve these problems with the help of the MOM.
By using the facilities the middleware provides to create complex networks we can distribute our connectivity layer globally as below:
Here we have producers all over the world and our central consumer sitting in the EU somewhere. The queuing and storage characteristics of the middleware are present in every region. The producers in each region only need the ability to communicate with their regional Broker.
The middleware layer is reliably connected in a Mesh topology but in the event that transatlantic communications are interrupted the US broker will store the metrics till the connection problem is resolved. At that point it will forward the messages on to the EU broker and finally to the Consumer.
We can deploy brokers in an HA configuration regionally to protect against failure there. This is very well suited for multi DC deployments, and for deployments in the cloud where you have machines in different Regions and Availability Zones etc.
This is also an approach you could use to allow your DMZ machines to publish metrics without needing the ability to connect directly to the Graphite service. The middleware layer is very flexible in how it's clustered, who makes the connections etc, so it's ideal for that.
Conclusion
So in the end, once we've invested in the underlying MOM technology and deployed it, just a bit of work has solved a bunch of very complex problems using very simple techniques.
While this was done with reliability and scalability in mind, for me possibly the bigger win is that we now have a simple network-wide service for creating metrics. You can write to the queue from almost any language, you can easily allow your developers to emit metrics from their Java code, and you can emit metrics from the system side, perhaps by reusing Munin.
Using code not a lot more complex than this I have been able to gather tens of thousands of Munin metrics into Graphite in a very short period of time. I was able to up my collection frequency to once every minute instead of the traditional 5 minutes, and to do that with a load average below 1 versus below 30 for Munin. This is probably more to do with Graphite being superior than anything else, but the other properties outlined above make this very appealing. Nodes push their statistics as soon as they are built and I never need to edit a Munin config file anymore to tell it where my servers are.
This enabling of all parties in the organization to quickly and easily create metrics without having an operations bottleneck is a huge win and at the heart of what it means to be a DevOps Practitioner.
Part 3 has been written, please read that next.
by R.I. Pienaar | Dec 11, 2011 | Code
As most people who follow this blog know I’m quite a fan of messaging based architectures. I am also a big fan of Ruby and I like the simplicity of the Stomp Gem to create messaging applications rather than some of the more complex options like those based on Event Machine (which I am hard pressed to even consider Ruby) or the various AMQP ones.
So I wanted to do a few blog posts on basic messaging principles and patterns and how to use those with the Stomp gem in Ruby. I think Ruby is a great choice for systems development – it's not perfect by any stretch – but it's a great replacement for all the things systems people tend to reach for Perl for.
Message Orientated Middleware represents a new way of inter-process communication, different from previous approaches that were in-process or reliant on file system sockets or even TCP or UDP sockets. While consumers and producers connect to the middleware using TCP, you simply cannot explain how messaging works in terms of TCP. It's a new transport that brings with it its own concepts, addressing and reliability.
There are some parallels to TCP/IP – reliability as per TCP, unreliability as per UDP – but that's really where it ends; messaging based IPC is very different and it's best to learn its semantics. TCP is to middleware as Ethernet frames are to TCP: it's just one of the possible ways middleware brokers can communicate, it's at a much lower level, and it works very differently.
Why use MOM
There are many reasons but it comes down to promoting a style of applications that scales well. Mostly it does this by a few means:
- Promotes application design that breaks complex applications into simple single function building blocks that are easy to develop, test and scale.
- Application building blocks aren't tightly coupled, don't maintain state and can scale independently of other building blocks
- The middleware layer implementation is transparent to the application – network topologies, routing, ACLs etc can change without application code change
- The brokers provide a lot of the patterns you need for scaling – load balancing, queuing, persistence, eventual consistency, etc
- Mature brokers are designed to be scalable and highly available – very complex problems that you really do not want to attempt to solve on your own
There are many other reasons but for me these are the big ticket items – especially the 2nd one.
A note of warning though: while mature brokers are fast, scalable and reliable, they are not some magical silver bullet. You might be able to handle hundreds of thousands of messages a second on commodity hardware, but brokers have limits and trade offs. Enable persistence or reliable messaging and that number drops drastically.
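For example, with ActiveMQ's Stomp support persistence is requested per message with a header – a one line sketch, reusing the connection and metric body from the Graphite example earlier on this page:

# ask the broker to write this message to its store rather than hold it in memory
stomp.publish("/queue/graphite", msg, {"persistent" => "true"})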
Even without enabling reliability or persistence you can easily do dumb things that overwhelm your broker – they do not scale infinitely and each broker has design trade offs. Some dedicate single threads to topics/queues that can become a bottleneck, others do not replicate queues across a cluster and so you end up with SPOFs that you might not have expected.
Message passing might appear to be instantaneous but it does not defeat the speed of light; it's only fast relative to the network distance, network latencies and the latencies in your hardware and OS kernel.
If you wish to design a complex application that relies heavily on your middleware for HA and scaling you should expect to spend as much time learning, tuning, monitoring, trending and recovering from crashes as you might with your DBMS, Web Server or any other big complex component of your system.
Types of User
There are basically 2 terms used to describe actors in a message orientated system. You have software that produces messages, called Producers, and software that consumes them, called Consumers. Your application might be both a producer and a consumer, but these are the terms I'll use to describe the roles of various actors in a system.
Types of Message
In the Stomp world there really are only two types of message destination. Destinations have names like /topic/my.input or /queue/my.input. Here we have 2 message sources – one is a Topic and the other is a Queue. The format of these names might even change between broker implementations.
There are some riffs on these 2 types of message source – you get short lived private destinations, queues that vanish as soon as all subscribers are gone, topics that behave like queues and so forth. The basic principles you need to know are just Topics and Queues; detail on these can be seen below, and the rest builds on them.
Topics
A topic is basically a named broadcast zone. If I produce a single message into a topic called /topic/my.input and there are 10 consumers of that topic then all 10 will get a copy of the message. Messages are not stored when you aren’t around to read them – it’s a stream of messages that you can just tap into as needed.
There might be some buffers involved, which means if you're a bit slow to consume messages you will have a few hundred or thousand waiting there depending on your settings, but this is just a buffer, not really a store, and it shouldn't be relied on. If your process crashes, the buffer is lost. If the buffer overflows, messages are lost.
The use of topics is often described as having Publish and Subscribe semantics, since consumers Subscribe and every subscriber gets the messages that are Published.
Topics are often used in cases where you do not need reliable handling of your data. A stock symbol or a high frequency status message from your monitoring system might go over topics. If you miss the current stock price, soon enough the next update will arrive and supersede the previous one, so why queue them? A perfect use for a broadcast based system.
Queues
Instead of broadcasting, queues store messages if no-one is around to consume them, and a queue will load balance the workload across consumers.
This style of message is often used to create async workers that do some kind of long running task.
Imagine you need to convert many documents from MS Word to PDF – maybe after someone uploads them to your site. You would create each job request in a queue and your converter processes consume the queue. If the converter is too slow or you need more capacity you simply add more consumers – perhaps even on different servers – and the middleware will ensure the traffic is load shared across the consumers.
You can therefore focus on a single function process – convert document to PDF – and the horizontal scalability comes at a very small cost on the part of the code since the middleware handles most of that for you. Messages are stored in the broker reliably and if you choose can even survive broker crashes and server reboots.
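A minimal sketch of such a worker using the Ruby Stomp gem – the queue name and the converter are illustrative stand-ins:

#!/usr/bin/ruby

require 'rubygems'
require 'stomp'

# stand-in for the real Word to PDF converter
def convert_to_pdf(document)
  puts "converting #{document}"
end

client = Stomp::Connection.new("", "", "stomp.example.com", 61613, true)
client.subscribe("/queue/pdf_jobs")

loop do
  msg = client.receive        # blocks until the broker hands us a job
  convert_to_pdf(msg.body)    # the single purpose, long running task
end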
Additionally queues generally have a transaction metaphor: you start a transaction when you begin to process the document, and if you crash mid-processing the message will be requeued for later processing. To avoid an infinite loop where a bad message crashes all Consumers, brokers also have a Dead Letter Queue where messages that have been retried too many times go to sit in limbo for an administrator to investigate.
These few basic features enable you to create software that is resilient to failure, scalable and not susceptible to thundering herd problems. You can easily monitor the size of queues and know if your workers are not keeping up, so you can provision more worker capacity – or retire unneeded capacity.
Demonstration
Playing with these concepts is very easy: you need a middleware broker and the Stomp library for Ruby. Follow the steps below to install both in a simple sandbox that you can delete when you're done. I'll assume you installed Ruby, Rubygems and Python with your OS package management.
Note I am using CoilMQ here instead of the Ruby Stompserver since Stompserver has some bug with queues – they just don’t work right at all.
$ export GEM_HOME=/tmp/gems
$ export PYTHONPATH=/tmp/python
$ gem install stomp
$ mkdir /tmp/python; easy_install -d /tmp/python CoilMQ
$ /tmp/python/coilmq
At this point you have a working Stomp server listening on port 61613; you can just ^C it when you are done. If you want to do the stuff below using more than one machine then add -b 0.0.0.0 to the broker command line and make sure port 61613 is open on your machine. The exercises below will work fine on one machine or twenty.
To test topics we first create multiple consumers. I suggest you do this in Screen with multiple terminals; for each terminal set GEM_HOME as above.
Start 2 or 3 of these; they are consumers on the topic:
$ STOMP_HOST=localhost STOMP_PORT=61613 /tmp/gems/bin/stompcat /topic/my.input
Connecting to stomp://localhost:61613 as
Getting output from /topic/my.input
Now we'll create 1 producer and send a few messages – just type into the console; I typed 1, 2, 3 (ignore the warnings about deprecation):
$ STOMP_HOST=localhost STOMP_PORT=61613 /tmp/gems/bin/catstomp /topic/my.input
Connecting to stomp://localhost:61613 as
Sending input to /topic/my.input
1
2
3
You should see these messages showing up on each of your consumers at roughly the same time and all your consumers should have received each message.
Now try the same with /queue/my.input instead of the topic and you should see that the messages are distributed evenly across your consumers.
You should also try to create messages with no consumers present and then subscribe consumers to the queue or topic; you'll notice the difference in persistence behavior between topics and queues right away.
When you’re done you can ^C everything and just rm /tmp/python and /tmp/gems.
That's it for today; I'll post several follow-up posts soon.
UPDATE: part 2 has been published.
by R.I. Pienaar | Oct 8, 2011 | Code
I love Graphite, I think it's amazing, and I specifically love that it's essentially Stats as a Service for your network since you can get hold of the raw data to integrate into other tools.
I’ve started pushing more and more things to it on my network like all my Munin data as per my previous blog post.
What’s missing though is a very simple to manage dashboard. Work is ongoing by the Graphite team on this and there’s been a new release this week that refines their own dashboard even more.
I wanted a specific kind of dashboard though:
- The graph descriptions should be files that you can version control
- Graphs should have meta data that's visible to people looking at the graphs for context, shown as a popup activated by hovering over a graph.
- Easy bookmarkable URLs
- Works in common browsers and resolutions
- Allow graphs to be added/removed/edited on the fly without any heavy restarts required using something like Puppet/Chef – graphs are just text files in a directory
- Dashboards and graphs should be separate files that can be shared and reused
I wrote such a dashboard with the very boring name – GDash – that you can find in my GitHub. It only needs Sinatra and uses the excellent Twitter bootstrap framework for the visual side of things.
The project is set up to be hosted in any Rack server like Passenger, but it will also just work on Heroku; hosted there it would create URLs to your private Graphite install. To get it going on Heroku just follow their QuickStart Guide. Their free tier should be enough for a decent sized dashboard. Deploying the app into Heroku once you are signed up and set up locally is just 2 commands.
You should only need to edit the config.ru file to optionally enable authentication, point it at your Graphite and give it a name. After that you can add graphs; an example one is in the sample directory.
More detail about the graph DSL used to describe graphs can be found at GitHub; I know the docs for the DSL need to be improved and will do so soon.
I have a few plans for the future:
- As I am looking to replace Munin I will add a host view that will show common data per host. It will show all the data there and you can give it display hints using the same DSL
- Add a display mode suitable for big monitors – wider layout, no menu bar
- Some more configuration options for example to set defaults that apply to all graphs
- Add a way to use dygraphs to display Graphite data
Ideas, feedback and contributions welcome!