{"id":2242,"date":"2011-10-02T10:56:15","date_gmt":"2011-10-02T09:56:15","guid":{"rendered":"http:\/\/www.devco.net\/?p=2242"},"modified":"2011-10-02T10:56:15","modified_gmt":"2011-10-02T09:56:15","slug":"interact-with-munin-node-from-ruby","status":"publish","type":"post","link":"https:\/\/www.devco.net\/archives\/2011\/10\/02\/interact-with-munin-node-from-ruby.php","title":{"rendered":"Interact with munin-node from Ruby"},"content":{"rendered":"
I've blogged a lot about a new kind of monitoring, but what I didn't point out is that I do actually like the existing toolset.

I quite like Nagios. Its configuration is horrible, yes, the web UI is near useless, and it throws away useful information like perfdata. It is, though, a good poller: it's solid, never crashes, doesn't use too many resources and has a fairly decent plugin protocol (except for its perfdata representation).

I am in two minds about munin. I like munin-node and the plugin model, I love that there are hundreds of plugins available already, and I love the introspection that lets machines discover their own capabilities. But I hate everything about the central munin poller that's supposed to scale to query all your servers and pre-create graphs. It simply doesn't work; even on a few hundred machines it's a completely broken model.

So I am trying to find ways to keep these older tools – and their collective thousands of plugins – around, but improve things to bring them into the fold of my ideas about monitoring.

For munin I want to get rid of the central poller. I'd rather have each node produce its data and push it somewhere; in my case I want to put the data onto a middleware queue and process it later into an archive, graphite or some other system like OpenTSDB. I had a look around for Ruby / Munin integrations and came across a few, of which I only investigated two.

Adam Jacob has a nice little munin-to-graphite script that simply talks straight to graphite; that might be enough for some of you, so check it out. I also found munin-ruby from Dan Sosedoff, which is what I ended up using.

Using the munin-ruby code is really simple:
<pre>
#!/usr/bin/ruby

require 'rubygems'
require 'munin-ruby'

# connect to munin-node on localhost
munin = Munin::Node.new("localhost", :port => 4949)

# get each service and print its metrics
munin.services.each do |service|
  puts "Metrics for service: #{service}"

  munin.service(service).params.each_pair do |k, v|
    puts "   #{k} => #{v}"
  end

  puts
end
</pre>
This creates output like this:
<pre>
Metrics for service: entropy
   entropy => 174

Metrics for service: forks
   forks => 7114853
</pre>
So from here it's not far to go to get these events onto my middleware; I turn them into JSON blobs like these (the last one is a stat about the collector itself):
\r\n{\"name\":\"munin\",\"text\":\"entropy\",\"subject\":\"devco.net\",\"tags\":{},\"metrics\":{\"entropy.entropy\":\"162\"},\"origin\":\"munin\",\"type\":\"metric\",\"event_time\":1317548538,\"severity\":0}\r\n{\"name\":\"munin\",\"text\":\"forks\",\"subject\":\"devco.net\",\"tags\":{},\"metrics\":{\"forks.forks\":\"7115300\"},\"origin\":\"munin\",\"type\":\"metric\",\"event_time\":1317548538,\"severity\":0}\r\n{\"name\":\"munin\",\"text\":\"\",\"subject\":\"devco.net\",\"tags\":{},\"metrics\":{\"um_munin.time\":3.722587,\"um_munin.services\":27,\"um_munin.metrics\":109,\"um_munin.sleep\":4},\"origin\":\"munin\",\"type\":\"metric\",\"event_time\":1317548538,\"severity\":0}\r\n<\/pre>\n