by R.I. Pienaar | Mar 11, 2012 | Code
I’ve wanted to play with the GLI gem from Dave Copeland for a while after seeing his presentation on writing CLI tools in Ruby – he also has a book due on the subject that I’m quite keen to get.
I hate all task trackers, todo lists and everything like them. I've spent lots of money buying all sorts of tools with syncing and reminders and whatever, and never end up seeing it through. I've always attacked the problem from the desktop, phone or tablet front, but I realize I don't actually use any of those – I use the CLI. I've had some ideas, like using sub-shells to track progress on tasks while working in the Unix CLI where I live, so I thought I'd give it a go.
So this weekend I finally sat down to play with GLI and wrote a little task tracker. GLI is quite nice; it's not perfect and there are a few things I'd have done differently, but it saves so much time and brings all-important consistency. There really isn't enough wrong with it to put you off using it, and the benefits are huge.
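To give a flavour of why it saves time: a GLI app is mostly declarations. Here is a minimal sketch in the style of GLI's DSL – this follows the current GLI 2.x syntax and is the general shape of such an app, not my task tracker's actual code:
#!/usr/bin/env ruby
require 'gli'

include GLI::App

program_desc 'A tiny task tracker'

desc 'Add a task'
command :add do |c|
  c.action do |global_options, options, args|
    # GLI has already parsed flags and switches by the time this runs
    puts "added: #{args.join(' ')}"
  end
end

exit run(ARGV)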
The task tracker I wrote is designed for the CLI. It has the usual create, edit and close workflow, so I won't go into much of that; see below:
$ alias t=gwtf
$ t a this is a sample task
3 (open): this is a sample task
$ t ls
3 this is a sample task
Items: 1 / 2
$ t done 3
3 (closed): this is a sample task
Apart from these basic things, each task has a worklog where you can record activities associated with it:
$ t log 3 did some work on this sample task --hour=2
Logged 'did some work on this sample task' against item 3 for 2 hours 0 minutes 0 seconds
So this just tracks that 2 hours of work was done on the item, which is nice if the work isn't CLI based. But if it is CLI based:
$ t shell 3
Starting work on item 3, exit to record the action and time
$ tmux attach-session -t foo
# do your work
$ exit
Optional description for work log: Added some code feature
Recorded 30 seconds of work against item: 3 (open,2): this is a sample task
$ t show 3
ID: 3
Subject: this is a sample task
Status: open
Time Worked: 2 hours 0 minutes 30 seconds
Created: 03/11/12 10:30
Work Log:
03/11/12 10:33 did some work on this sample task (2 hours 0 minutes 0 seconds)
03/11/12 10:36 Added some code feature (30 seconds)
So here I went into a sub-shell for this task, did some work in a tmux session and used gwtf to record the time spent – 30 seconds. When viewing the item all the work done is added up and a total is shown above.
In the shell you’ll find environment variables for the current task ID, description and project I use this to give myself some visual feedback that I am working on a task as below
It supports projects so you can group related tasks into a project. All the data is stored in simple JSON files in your home directory and it makes backups of your data on every change.
Hopefully someone finds this useful, I’ll probably be tweaking it a lot to see if I can finally create a solution to this problem I like.
The code is on GitHub and you can just install the Gem.
by R.I. Pienaar | Feb 26, 2012 | Code
I have a number of mail servers where mail enters, gets spam scanned etc and is then forwarded to mailbox servers. This used to be customer facing with web interfaces and statistics, but I am now scaling it all down to manage just my own and some friends' domains.
Rather than maintain all the web interfaces that I really could not care for I’d rather manage this with Puppet, my ideal end result would be:
exim::route{"devco.net":
nexthop => "my.mailbox.server",
spamthreshold => 10,
spamdestination => ":blackhole:",
has_greylist => 1,
has_spam_check => 1,
has_whitelist => 1
} |
exim::route{"devco.net":
nexthop => "my.mailbox.server",
spamthreshold => 10,
spamdestination => ":blackhole:",
has_greylist => 1,
has_spam_check => 1,
has_whitelist => 1
}
This should add all the required configuration to deliver mail arriving at the mail relay for devco.net to the server my.mailbox.server. It will set up SpamAssassin scans and send all mail that scores more than 10 to the Exim-specific destination :blackhole:, which simply deletes the mail. I could specify any valid mail destination here, like a file or another email address. I won't be covering the has_* entries in this guide; they just control various policies in my ACLs on a per-domain basis.
I’ll first cover the Exim side of things, clearly I do not want to be editing exim.conf each time so I will read the domain information from a file stored on the server. These files will be stored in /etc/exim/routes/devco.net and look like:
nexthop: my.mailbox.server
spamthreshold: 10
spamdestination: :blackhole:
In order to accept mail for a domain, Exim needs a list of valid domains it will accept mail for. As our route files are named after the domains, we can just leverage that to build the list:
domainlist mw_domains = dsearch;/etc/exim/routes
Next we pull the various settings we store there out of the file:
MW_NEXTHOP = ${lookup{nexthop}lsearch{/etc/exim/routes/${domain}}}
DOMAINREJECTSCORE = ${eval:10*${lookup{spamthreshold}lsearch{/etc/exim/routes/${domain}}}}
DOMAINSPAMDEST = ${lookup{spamdestination}lsearch{/etc/exim/routes/${domain}}}
ACL_SPAMSCORE = acl_m3
This creates handy macros that we can just use in our routers and spam configuration. I won't go into the actual setup of SpamAssassin scanning as that's pretty standard stuff better documented elsewhere; in the SpamAssassin ACLs, just store your $spam_score_int in ACL_SPAMSCORE.
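Something along these lines in the DATA ACL does that – a sketch only, assuming your SpamAssassin scanning is already wired up:
# after the spam scan; the :true suffix makes the condition match
# regardless of the score, so the variable is always populated
warn
  spam = nobody:true
  set acl_m3 = $spam_score_int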
To deliver the mail either to the specific spam destination or to the next hop, we just need to add 2 routers to the routers section. These are order dependent, so they should be in the order below:
spamblock:
driver = redirect
condition = ${if >= {$ACL_SPAMSCORE}{DOMAINREJECTSCORE}{true}{false}}
data = DOMAINSPAMDEST
headers_add = X-MW-Note: Redirecting mail to domain spam destination
domains = +mw_domains
no_verify
Here we’re just doing a quick if check over the stored spam score to see if its bigger or equal to the threshold stored in DOMAINREJECTSCORE and then set the data of the route – where the mail should go – to the configured address from DOMAINSPAMDEST. This router will only be active for domains that this Exim server is a relay for and it adds a little debug note as a header.
The actual mail delivery router, used in place of the normal dnslookup router, is here:
mw_domains:
driver = manualroute
transport = remote_smtp
domains = +mw_domains
user = root
headers_add = "X-MW-Recipient: ${local_part}@${domain}\n\
X-MW-Sender: $sender_address\n\
X-MW-Server: $primary_hostname"
route_data = MW_NEXTHOP
This router is also restricted to our relay domains; it adds some headers for debug purposes and finally sets the route_data of the email to the next hop from MW_NEXTHOP, thus delivering the mail to its destination.
That’s all there is to do on the Exim side, it’s pretty standard stuff. Next up the Puppet define:
define exim::route($nexthop, $spamthreshold, $spamdestination, $ensure = "present") {
file{"/etc/exim/routes/${name}":
ensure => $ensure,
content => template("exim/route.erb")
}
}
And the template for this define is also extremely simple:
nexthop: <%= nexthop %>
spamthreshold: <%= spamthreshold %>
spamdestination: <%= spamdestination %>
I could stop here and just create a bunch of exim::route resources, but that would mean code changes; I prefer just changing data. So I am going to create a JSON file called mailrelay.json and store it with my Hiera data:
{
"relay_domains": {
"devco.net": {
"nexthop": "my.mailbox.server",
"spamdestination": ":blackhole:",
"spamthreshold": 10,
"has_dkim": 1
},
"another.com": {
"nexthop": "ASPMX.L.GOOGLE.COM.",
"spamdestination": ":blackhole:",
"spamthreshold": 10
}
}
}
I assign all my incoming mail servers a single class that would look roughly like this:
class roles::mailrelay {
include exim
include exim::mailrelay
$routes = hiera("relay_domains", "", "mailrelay")
$domains = keys($routes)
exim::routemap{$domains:
routes => $routes
}
}
The call to Hiera fetches the entire hash from the mailrelay.json file and stores it in $routes. I then use the keys function from puppetlabs-stdlib to extract just the list of domains into an array and pass that into a define called exim::routemap, which iterates the list and builds up individual exim::route resources.
The routemap define is below. I've shortened it a fair bit, as I also have validation logic in here to make sure I pass valid data in the hash from Hiera; the stdlib module has various validator functions that are really handy for this:
define exim::routemap($routes) {
exim::route{$name:
nexthop => $routes[$name]["nexthop"],
spamthreshold => $routes[$name]["spamthreshold"],
spamdestination => $routes[$name]["spamdestination"]
}
if ($routes[$name]["has_dkim"] == 1) {
exim::dkim_domain{$name: }
} else {
exim::dkim_domain{$name: ensure => absent}
}
}
And that’s about it, now my mail routing setup, DKIM signing and other policies are managed in a simple JSON file in my Puppet Manifests.
by R.I. Pienaar | Jan 28, 2012 | Code
Earlier this week I wrote about Oldskool, a gem-extendable search tool. Today I want to show how to create a plugin for it that queries some custom source.
We’ll build a plugin that shows Puppet Type references, you can see how it will look in the image, click for a larger version.
The end result is that I can just search for "type exec" to get the correct documentation for my Puppet install. I'll go quite quickly through the various bits here; the complete working plugin is in my GitHub.
The nice thing about rendering the type references locally is that you can choose exactly which version to render the docs for, and you could possibly also render docs for locally written types that are not part of Puppet – though I have not tried to see how you might render custom types.
Plugins are made up of a few things that should have predictable names. In the case of our Puppet plugin I decided to call it puppet, which means we need a class called Oldskool::PuppetHandler that does the work for that plugin. You can see the one here; it goes in lib/oldskool/puppet_handler.rb in your gem:
module Oldskool
class PuppetHandler
def initialize(params, keyword, config)
@params = params
@keyword = keyword
@config = config
self
end
def plugin_template(template)
File.read(File.expand_path("../../../views/#{template}.erb", __FILE__))
end
def handle_request(keyword, query)
type = Puppetdoc.new(query)
menu = [{:title => "Type Reference", :url => "http://docs.puppetlabs.com/references/stable/type.html"},
{:title => "Function Reference", :url => "http://docs.puppetlabs.com/references/stable/function.html"},
{:title => "Language Guide", :url => "http://docs.puppetlabs.com/guides/language_guide.html"}]
{:template => plugin_template(:type), :type => type.doc, :topmenu => menu}
end
end
end
The initialize and plugin_template methods will rarely change; handle_request is where the magic happens. It gets called with the keyword and the query, so I set this up to respond to searches like type exec. If you needed any kind of configuration data from the main Oldskool config file, you'd just add it to that YAML file and the data would be available in @config.
The keyword would be type and the query would be exec. The idea is that we could route, for example, both the type and function keywords into the plugin and then do different things with the query string.
I wrote a class called Puppetdoc that takes care of the Puppet introspection. I won't go into the details but you can see it here; it just returns a hash with all the Markdown for each parameter, meta parameter and the type itself.
We then create a simple menu that’s just an array of title and url pairs that will be used to build the top menu that you see in the screenshot.
And finally we just return a hash. The hash that you return must include a :template key, the rest is optional. I override the meaning of the word template a bit – probably should have chosen a better name (see the sketch after this list):
- If it’s a string it’s assumed the string is a ERB template, Sinatra will just render that
- When it’s the symbol :redirect then your hash must have a :url item in it, this will just redirect the user to another url
- When it’s the symbol :error or just nil you can optionally add a :error key that will be shown to the user
You can see in the code above that I passed the menu in as :topmenu. You could also pass one back as :sidemenu, which will create buttons down the side of the page; you can use both menus and buttons at the same time.
This takes care of creating the data to display, but not yet the displaying aspect. The call to plugin_template(:type) will read the contents of type.erb in the plugin's views directory and return it. The Oldskool framework will then render that template, making your returned hash available in @result.
Here’s the first part of the view in question, you can see the whole thing here:
<% unless @error %>
<h2><%= @result[:type][:name].to_s.capitalize %> version <%= @result[:type][:version] %></h2>
<% end %>
Your view can check if @error is set to show some text to the user in the case of exceptions etc., otherwise just display the results. You can see here that the @result variable is the data handle_request returned.
Finally there’s a convention for gem names – this one would be oldskool-puppet so you should create a similarly named Ruby file to load the various bits, place this in lib/oldskool-puppet.rb
:
require 'oldskool/puppetdoc'
require 'oldskool/puppet_handler'
From there you just need to build the gem; the Rakefile below does that:
require 'rubygems'
require 'rake/gempackagetask'
spec = Gem::Specification::new do |spec|
spec.name = "oldskool-puppet"
spec.version = "0.0.3"
spec.platform = Gem::Platform::RUBY
spec.summary = "oldskool-1assword"
spec.description = "description: Generate documentation for Puppet types"
spec.files = FileList["lib/**/*.rb", "views/*.erb"]
spec.executables = []
spec.require_path = "lib"
spec.has_rdoc = false
spec.test_files = nil
spec.add_dependency 'puppet'
spec.add_dependency 'redcarpet'
spec.extensions.push(*[])
spec.author = "R.I.Pienaar"
spec.email = "rip@devco.net"
spec.homepage = "http://devco.net/"
end
Rake::GemPackageTask.new(spec) do |pkg|
pkg.need_zip = false
pkg.need_tar = false
end
% rake gem
mkdir -p pkg
WARNING: no rubyforge_project specified
Successfully built RubyGem
Name: oldskool-puppet
Version: 0.0.3
File: oldskool-puppet-0.0.3.gem
mv oldskool-puppet-0.0.3.gem pkg/oldskool-puppet-0.0.3.gem
% gem push pkg/oldskool-puppet-0.0.3.gem
If your gem command is set up for it, this will publish the gem to rubygems.org, ready for use. In this case all I did was add it to the Gemfile for my web app:
gem 'puppet', '2.6.9'
gem 'facter'
gem 'oldskool-puppet', '>= 0.0.3'
And after using Bundler to update my site, everything worked.
by R.I. Pienaar | Jan 26, 2012 | Code
Back in the day The Well had a text based conference system: you dialled in, then used telnet and later ssh to reach their server, and interacted with other members through a text system called PicoSpan. Eventually things moved to the web and it became a lot more forum-like. The thing that I really loved was that the web version of the forums had a command line. You could type many of the same commands into the web CLI as you would into the Unix one and have the same effect: posting, searching, jumping through conferences. It was the web, with the CLI power for those who wanted it.
The browser is more and more our interface to all things online, and frankly it sux a bit; I want CLI speed for accessing the web sites I like. Ages ago I created a PHP system I called cmd that simply routed a command like "guk greenwich" to the Google UK search engine with results restricted to those from the UK. There are of course various online tools that do the same, but I found that their 'book' keyword would search Amazon US while I wanted UK, so I just made one that I can tweak to my liking.
Recently, thanks to Google's widely hated changes to their search UI, simply redirecting to Google searches with keywords filled in was not enough anymore. I want web search back the way it was before they made it suck. So I did what hackers do and wrote a Ruby based pluggable search system. You can see a screenshot of it here showing a Google search.
What you’re seeing here is the oldskool-gcse plugin in action. It uses the Google JSON API to query a Google Custom Search Engine and format the results in a way that does not suck. The Custom Search Engines are quite nice as you can customize all sorts of things in them like which sites to exclude, which to favor, limit results to certain countries or languages allowing you to really customize your search experience. The only down side to the GCSE approach is that Google limits API calls to 100 a day, for me that’s enough for searching but ymmv.
Using this method of searching can have some privacy wins. Google recently announced merging all their online accounts into one and will have all your online activity influence your searches. I wasn't too worried, since by then I had already written Oldskool and can simply use a different Google account to access their search API than the one I use to read my work mail, for example. Simple, effective win.
My default search in oldskool is a GCSE that resembles a normal Google search, but I can also search for "puppet exec" and oldskool will route that request to a specific GCSE that bumps the official Puppet Labs docs to the top, excludes some annoying things and so forth. Oldskool being a single entry point to many different GCSE backends is quite powerful.
As I said it’s plugable and I’ve written one other plugin that uses my Passmakr gem to generate random passwords. I can just search for pass 10 to get a 10 character password:
Writing your own plugins is very easy and I hope to see ones that query Redmine instances or other internal databases that you might have, using the Oldskool framework to display all the data in one handy place.
It retains the most basic feature of simple keyword-based redirects, so I can search for book reamde to get Amazon UK book results instantly.
Config is through a simple YAML file:
---
:google_api_key: your.key
:username: http_auth_user
:password: http_auth_pass
:keywords:
- :type: :gcse
:cx: you_gcse
:keywords:
- :default
- :type: :gcse
:cx: your_gcse
:keywords:
- puppet
- :type: :url
:url: http://amazon.co.uk/exec/obidos/search-handle-url/index=books-uk&field-keywords=%Q%
:keywords:
- book
- books
- :type: :password
:keywords: pass
This sets up 2 GCSE searches – one marked as my default search – plus the mentioned book search and one that uses the password plugin shown above.
It needs no writable access to the webserver it runs on and it’s all managed by Bundler and Sinatra – perfect for hosting on the free Heroku tier.
As this is effectively my web CLI I want it integrated in as many places as possible. I use a lot of desktops – 3 regularly – so the browser is my unified UI to all of this. Your instance will publish OpenSearch metadata, which makes it integrate seamlessly into Firefox, Chrome, IE, Gnome DO, Gnome Shell and many many other places.
Here’s Firefox search box the first time you browse to a new instance:
And here is Chrome: you do not even have to add it; just start typing the URL of your instance and press tab, and the URL bar magically transforms into an Oldskool search box. You can add it permanently and make it the default by right-clicking the URL bar and choosing Edit Search Engines….
The code is in my GitHub – Oldskool, Oldskool GCSE and Oldskool Password. I will blog again tomorrow or on another day about creating your own plugins etc.
by R.I. Pienaar | Dec 14, 2011 | Code
This is a post in a series about Middleware for Stomp users; please read the preceding parts, starting at part 1, before continuing below.
Today I am changing things around a bit: not so much talking about using Stomp from Ruby, but rather about how to monitor ActiveMQ. The ActiveMQ broker has a statistics plugin that you can interact with over Stomp, which is particularly nice – you interrogate the broker over the same protocol you use to talk to it.
I’ll run through some basic approaches to monitor:
- The size of queues
- The memory usage of persisted messages on a queue
- The rate of messages through a topic or a queue
- Various memory usage statistics for the broker itself
- Message counts and rates for the broker as a whole
These are the standard kinds of things you need to know about a running broker, in addition to things like monitoring the length of garbage collections, which is standard fare when dealing with Java applications.
Keeping an eye on your queue sizes is very important. I’ve focused a lot on how Queues help you scale by facilitating horizontally adding consumers. Monitoring facilitates the decision making process for how many consumers you need – when to remove some and when to add some.
First you’re going to want to enable the plugin for ActiveMQ, open up your activemq.xml and add the plugin as below and restart when you are done:
<plugins>
<statisticsBrokerPlugin/>
</plugins>
A quick word about the output format of the messages you'll see below: they are a serialized JSON (or XML) representation of a data structure. Unfortunately it isn't immediately usable without some pre-parsing into a real data structure; the Nagios and Cacti plugins you will see below have a method in them for converting this structure into a normal Ruby hash.
The basic process for requesting stats is a Request Response pattern as per part 3.
stomp.subscribe("/temp-topic/stats", {"transformation" => "jms-map-json"})
# request stats for the random generator queue from part 2
stomp.publish("/queue/ActiveMQ.Statistics.Destination.random_generator", "", {"reply-to" => "/temp-topic/stats"})
puts stomp.receive.body |
stomp.subscribe("/temp-topic/stats", {"transformation" => "jms-map-json"})
# request stats for the random generator queue from part 2
stomp.publish("/queue/ActiveMQ.Statistics.Destination.random_generator", "", {"reply-to" => "/temp-topic/stats"})
puts stomp.receive.body
First we subscribe to a temporary topic, which you first saw in Part 2, and we specify that while ActiveMQ outputs a JMS Map it should convert it for us into a JSON document rather than Java structures.
We then request Destination stats for the random_generator queue and finally wait for the response and print it. What you'll get back can be seen below:
{"map":{"entry":[{"string":"memoryUsage","long":0},{"string":"dequeueCount","long":13},{"string":"inflightCount","long":0},{"string":"messagesCached","long":0},
{"string":"averageEnqueueTime","double":0.46153846153846156},{"string":["destinationName","queue:\/\/mcollective.nodes"]},{"string":"size","long":0},
{"string":"memoryPercentUsage","int":0},{"string":"producerCount","long":0},{"string":"consumerCount","long":56},{"string":"minEnqueueTime","double":0},
{"string":"maxEnqueueTime","double":1},{"string":"dispatchCount","long":13},{"string":"expiredCount","long":0},{"string":"enqueueCount","long":13},
{"string":"memoryLimit","long":83886080}]}} |
{"map":{"entry":[{"string":"memoryUsage","long":0},{"string":"dequeueCount","long":13},{"string":"inflightCount","long":0},{"string":"messagesCached","long":0},
{"string":"averageEnqueueTime","double":0.46153846153846156},{"string":["destinationName","queue:\/\/mcollective.nodes"]},{"string":"size","long":0},
{"string":"memoryPercentUsage","int":0},{"string":"producerCount","long":0},{"string":"consumerCount","long":56},{"string":"minEnqueueTime","double":0},
{"string":"maxEnqueueTime","double":1},{"string":"dispatchCount","long":13},{"string":"expiredCount","long":0},{"string":"enqueueCount","long":13},
{"string":"memoryLimit","long":83886080}]}}
Queue Statistics
Queue sizes are basically as you saw above: hit the Stats Plugin at /queue/ActiveMQ.Statistics.Destination.<queue name> and you get stats back for the queue in question.
The table below lists the meaning of these values as I understand them – it's quite conceivable that I am wrong about the specifics of ones like enqueueTime, so I'm happy to be corrected in the comments:
destinationName: The name of the queue in JMS URL format
enqueueCount: Number of messages that were sent to the queue and committed to it
inflightCount: Messages sent to consumers but not yet consumed – they might be sat in the prefetch buffers
dequeueCount: The opposite of enqueueCount – messages sent from the queue to consumers
dispatchCount: Like dequeueCount but includes messages that might have been rolled back
expiredCount: Messages can have a maximum life; these are the ones that expired
maxEnqueueTime: The maximum amount of time a message sat on the queue before being consumed
minEnqueueTime: The minimum amount of time a message sat on the queue before being consumed
averageEnqueueTime: The average amount of time a message sat on the queue before being consumed
memoryUsage: Memory used by messages stored in the queue
memoryPercentUsage: Percentage of available queue memory used
memoryLimit: Total amount of memory this queue can use
size: How many messages are currently in the queue
consumerCount: Consumers currently subscribed to this queue
producerCount: Producers currently producing messages
I have written a Nagios plugin that can check the queue sizes:
$ check_activemq_queue.rb --host localhost --user nagios --password passw0rd --queue random_generator --queue-warn 10 --queue-crit 20
OK: ActiveMQ random_generator has 1 messages
You can see there’s enough information about the specific queue to be able to draw rate of messages, consumer counts and all sorts of useful information. I also have a quick script that will return all this data in a format suitable for use by Cacti:
$ activemq-cacti-plugin.rb --host localhost --user nagios --password passw0rd --report exim.stats
size:0 dispatchCount:168951 memoryUsage:0 averageEnqueueTime:1629.42897052992 enqueueCount:168951 minEnqueueTime:0.0 consumerCount:1 producerCount:0 memoryPercentUsage:0 destinationName:queue://exim.stats messagesCached:0 memoryLimit:20971520 inflightCount:0 dequeueCount:168951 expiredCount:0 maxEnqueueTime:328585.0
Broker Statistics
Getting stats for the broker is more of the same: just send a message to /queue/ActiveMQ.Statistics.Broker and tell it where to reply to. You'll get a message back with these properties; I am only listing ones not seen above, and the meanings are the same except that in the broker stats they are totals for all queues and topics.
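The request mirrors the queue example from earlier; only the destination changes:
stomp.subscribe("/temp-topic/stats", {"transformation" => "jms-map-json"})
stomp.publish("/queue/ActiveMQ.Statistics.Broker", "", {"reply-to" => "/temp-topic/stats"})
puts stomp.receive.body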
storePercentUsage: Total percentage of storage used for all queues
storeLimit: Total storage space available
storeUsage: Storage space currently used
tempLimit: Total temporary space available
brokerId: Unique ID for this broker that you will see in Advisory messages
dataDirectory: Where the broker is configured to store its data for queue persistence etc.
brokerName: The name this broker was given in its configuration file
Additionally there will be a value for each of your connectors, listing its URL including protocol and port.
Again I have a Cacti plugin to get these values out in a format usable in Cacti data sources:
$ activemq-cacti-plugin.rb --host localhost --user nagios --password passw0rd --report broker
stomp+ssl:stomp+ssl storePercentUsage:81 size:5597 ssl:ssl vm:vm://web3 dataDirectory:/var/log/activemq/activemq-data dispatchCount:169533 brokerName:web3 openwire:tcp://web3:6166 storeUsage:869933776 memoryUsage:1564 tempUsage:0 averageEnqueueTime:1623.90502285799 enqueueCount:174080 minEnqueueTime:0.0 producerCount:0 memoryPercentUsage:0 tempLimit:104857600 messagesCached:0 consumerCount:2 memoryLimit:20971520 storeLimit:1073741824 inflightCount:9 dequeueCount:169525 brokerId:ID:web3-44651-1280002111036-0:0 tempPercentUsage:0 stomp:stomp://web3:6163 maxEnqueueTime:328585.0 expiredCount:0
You can find the plugins mentioned above in my GitHub account.
In the same location is a generic checker that publishes a message and waits for its return within a specified number of seconds – a good turnaround test for your broker.
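The concept is easy to sketch – the queue name below is made up and the real checker handles options and thresholds properly:
require 'stomp'
require 'timeout'

# publish to a queue we are subscribed to ourselves and fail
# if the message does not come back in time
stomp = Stomp::Connection.new("nagios", "passw0rd", "localhost", 61613)
stomp.subscribe("/queue/nagios.turnaround")
stomp.publish("/queue/nagios.turnaround", "ping")

begin
  Timeout.timeout(2) { stomp.receive }
  puts "OK: round trip completed within 2 seconds"
rescue Timeout::Error
  puts "CRITICAL: no round trip within 2 seconds"
end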
I don’t really have good templates to share but you can see a Cacti graph I built below with the above plugins.