{"id":2898,"date":"2013-01-06T19:28:27","date_gmt":"2013-01-06T18:28:27","guid":{"rendered":"http:\/\/www.devco.net\/?p=2898"},"modified":"2013-01-07T21:58:59","modified_gmt":"2013-01-07T20:58:59","slug":"solving-monitoring-state-storage-problems-using-redis","status":"publish","type":"post","link":"https:\/\/www.devco.net\/archives\/2013\/01\/06\/solving-monitoring-state-storage-problems-using-redis.php","title":{"rendered":"Solving monitoring state storage problems using Redis"},"content":{"rendered":"
Redis<\/a> is an in-memory key-value data store that provides a small number of primitives suitable to the task of building monitoring systems. As a lot of us are hacking in this space I thought I’d write a blog post summarizing where I’ve been using it in a little Sensu like monitoring system I have been working on on and off. <\/p>\n There’s some monitoring related events coming up like MonitoringLove<\/a> in Antwerp and Monitorama<\/a> in Boston – I will be attending both and I hope a few members in the community will create similar background posts on various interesting areas before these events.<\/p>\n I’ve only recently started looking at Redis but really like it. It’s a very light weight daemon written in C with fantastic documentation detailing things like each commands performance characteristics and most documantation pages are live in that they have a REPL right on the page like the SET<\/a> page – note you can type into the code sample and see your changes in real time. It is sponsored by VMWare and released under the 3 clause BSD license.<\/p>\n All the keys support things like expiry based on time and TTL calculation. Additionally it also supports PubSub.<\/p>\n At first it can be hard to imagine how you’d use a data store with only these few data types and capable of only storing strings for monitoring but with a bit of creativity it can be really very useful.<\/p>\n The full reference about all the types can be found in the Redis Docs: Data Types<\/a><\/p>\n Status tracking is essentially transient data. If you loose your status view it’s not really a big deal it will be recreated quite quickly as new check results come in. Worst case you’ll get some alerts again that you recently got. 
This fits well with Redis, which doesn’t always commit data as soon as it receives it – it flushes from memory to disk roughly every second.<\/p>\n Redis does not provide much by way of SSL or strong authentication so I tend to consider it a single node IPC system rather than, say, a generic PubSub system. I feed data into a node using a system like ActiveMQ and then for comms and state tracking on a single node I’ll use Redis.<\/p>\n I’ll show how it can be used to solve the following monitoring related storage\/messaging problems:<\/p>\n In my example a check result looks more or less like this:<\/p>\n <\/code><\/p>\n This is standard stuff and the most boring part – you might guess this goes into a Hash and you’ll be right. Note the count<\/em> item there – Redis has special handling for counters and I’ll show that in a minute.<\/p>\n By convention Redis keys are namespaced by a :<\/em> so I’d store the check status for a specific node + check combination in a key like status:example.net:load<\/em><\/p>\n Updating or creating a new hash is really easy – just write to it:<\/p>\n <\/code><\/p>\n Here I assume we have an object that represents a check result called check<\/em> and we’re more or less just fetching\/updating data in it. I first retrieve the previously saved exitcode and last state change time and save those into the object. 
The object will do some internal state management to determine if the current check result represents a changed state – OK to WARNING etc – based on this information.<\/p>\n The @redis.multi<\/em> starts a transaction; everything inside the block will be written atomically by the Redis server, ensuring we do not have any half-baked state while other parts of the system might be reading the status of this check.<\/p>\n As I said, the check<\/em> determines if the current result is a state change when I set the previous exitcode, so the code will either set the count to 1 if it’s a change or just increment the count if not. We use the internal Redis counter handling via HINCRBY to avoid having to first fetch the count, update it and save it again – this saves a round trip to the database.<\/p>\n You can now just retrieve the whole hash with the HGETALL command, even on the command line:<\/p>\n <\/code><\/p>\n References: Redis Hashes<\/a>, MULTI<\/a>, HSET<\/a>, HINCRBY<\/a>, HGET<\/a>, HGETALL<\/a><\/p>\n This is where we really start using some of the Redis features to save us time. We need to track when last we saw a specific node and then be able to quickly find all nodes not seen within a certain amount of time, like 120 seconds.<\/p>\n We could retrieve all the check results and check their last updated times to figure it out but that’s not optimal.<\/p>\n This is what Sorted Sets are for. Remember Sorted Sets have a weight (the score) and order their members by it – if we use the timestamp we last received data at for a host as the weight it means we can very quickly fetch a list of stale hosts.<\/p>\n <\/code><\/p>\n When we call this code like update_host_last_seen(“dev2.devco.net”, Time.now.utc.to_i)<\/em> the host will either be added to or updated in the Sorted Set based on the current UTC time. 
We do this every time we save a new result set with the code in the previous section.<\/p>\n Getting a list of hosts that we have not seen in the last 120 seconds is really easy now:<\/p>\n <\/code><\/p>\n If we call this with an age like 120<\/em> we’ll get an array of nodes that have not had any data within the last 120 seconds. <\/p>\n You can do the same check on the CLI; this shows all the machines not seen in the last 60 seconds:<\/p>\n <\/code><\/p>\n Reference: Sorted Sets<\/a>, ZADD<\/a>, ZRANGEBYSCORE<\/a><\/p>\n We don’t know or care who those interested parties are, we only care that there might be some – it might be something writing to Graphite or OpenTSDB, or both at the same time, or something alerting to Opsgenie or PagerDuty. This is a classic use case for PubSub and Redis has a good PubSub subsystem that we’ll use for this.<\/p>\n I am only going to show the metrics publishing – problems and state changes are very similar:<\/p>\n <\/code><\/p>\n This is pretty simple stuff, we’re just publishing some JSON to a named destination like overwatch:metrics:dev1.devco.net:load<\/em>. 
We can now write small standalone single-function tools that consume this stream of metrics and send it wherever we like – like Graphite or OpenTSDB.<\/p>\n We publish similar events for any incoming check result that is not OK and also for any state transition like CRITICAL to OK; these would be consumed by alerter handlers that might feed pagers or SMS.<\/p>\n We’re publishing these alerts to destinations that include the host and specific check – this way we can very easily create individual host views of activity by doing pattern based subscribes.<\/p>\n Reference: PubSub<\/a>, PUBLISH<\/a><\/p>\n Leading on from the previous section we’d just consume the problem and state change PubSub channels and react on messages from those:<\/p>\n A possible consumer of this might look like this:<\/p>\n <\/code><\/p>\n This subscribes to the 2 channels and passes the incoming events to a notifier. Note we’re using patterns here to catch all alerts and changes for all hosts.<\/p>\n The problem here is that without any special handling this is going to fire off alerts every minute, assuming we check the load<\/em> every minute. This is where Redis expiry of keys comes in.<\/p>\n We’ll need to track which messages we have sent and when, and on any state change clear the tracking, thus restarting the counters.<\/p>\n So we’ll just add keys called “alert:dev2.devco.net:load:3” to indicate an UNKNOWN<\/em> state alert for load<\/em> on dev2.devco.net<\/em>:<\/p>\n <\/code><\/p>\n This takes an expire time which defaults to 2 hours and tells Redis to just remove the key when its time is up.<\/p>\n With this we need a way to figure out if we can send again:<\/p>\n <\/code><\/p>\n This will return the number of seconds until the next alert, or a negative value if we are ready to send again (older Redis returned -1 for a missing key; Redis 2.8 and later return -2).<\/p>\n And finally on every state change we need to just purge all the tracking for a given node + check combo. 
The reason for this is that if we notified on CRITICAL a minute ago, the service then recovered to OK but soon went CRITICAL again, that most recent CRITICAL alert would otherwise be suppressed as part of the previous cycle of alerts.<\/p>\n <\/code><\/p>\n So now I can show the two methods that will actually publish the alerts:<\/p>\n The first notifies of issues but only every @interval<\/em> seconds and it uses the alert_ttl<\/em> helper above to determine if it should or shouldn’t send:<\/p>\n <\/code><\/p>\n The second will publish recovery<\/em> notices – we’d always want those and they will not repeat; here we clear all the previous alert tracking to avoid incorrect alert suppressions:<\/p>\n <\/code><\/p>\n References: SET<\/a>, EXPIRE<\/a>, SUBSCRIBE<\/a>, TTL<\/a>, DEL<\/a><\/p>\n Using its facilities saved me a ton of effort while working on a small monitoring system. It is fast and lightweight and enables cross-language collaboration that I’d have found hard to replicate in a performant manner without it.<\/p>\n","protected":false},"excerpt":{"rendered":" Redis is an in-memory key-value data store that provides a small number of primitives suitable to the task of building monitoring systems. 
As a lot of us are hacking in this space I thought I’d write a blog post summarizing where I’ve been using it in a little Sensu like monitoring system I have been […]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","footnotes":""},"categories":[7],"tags":[121,85,64,13,96],"_links":{"self":[{"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/posts\/2898"}],"collection":[{"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/comments?post=2898"}],"version-history":[{"count":42,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/posts\/2898\/revisions"}],"predecessor-version":[{"id":2932,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/posts\/2898\/revisions\/2932"}],"wp:attachment":[{"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/media?parent=2898"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/categories?post=2898"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.devco.net\/wp-json\/wp\/v2\/tags?post=2898"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}Redis Data Types<\/H3>
\nRedis provides a few common data structures: Strings, Lists, Sets, Sorted Sets and Hashes.<\/p>\n\n
Monitoring Needs<\/H3>
\nMonitoring systems generally need a number of different types of storage. These are configuration<\/em>, event archiving<\/em> and status and alert tracking<\/em>. There are more but these are the big ticket items; of the 3 I am only going to focus on the last one – Status and Alert Tracking – here.<\/p>\n\n
Check Status<\/H4>
\nThe check<\/em> is generally the main item of monitoring systems. Something configures a check like load<\/em> and then every node gets check results for this item; the monitoring system has to track the status of the checks on a per-node basis.<\/p>\n<\/p>\n
\r\n{\"lastcheck\" => \"1357490521\", \r\n \"count\" => \"1143\", \r\n \"exitcode\" => \"0\", \r\n \"output\" => \"OK - load average: 0.23, 0.10, 0.02\", \r\n \"last_state_change\"=> \"1357412507\",\r\n \"perfdata\" => '{\"load15\":0.02,\"load5\":0.1,\"load1\":0.23}',\r\n \"check\" => \"load\",\r\n \"host\" => \"dev2.devco.net\"}\r\n<\/pre>\n
<\/p>\n
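Since Redis hash values are plain strings, structured data like the perfdata field above is stored as a JSON string and parsed again on read. A quick pure-Ruby illustration using the example values from the hash above:

```ruby
require "json"

# Redis hash fields only hold strings, so structured data like perfdata is
# serialised to JSON on write and parsed back on read
perfdata = '{"load15":0.02,"load5":0.1,"load1":0.23}'
metrics  = JSON.parse(perfdata)

metrics["load1"]   # => 0.23
metrics["load15"]  # => 0.02
```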
\r\ndef save_check(check)\r\n key = \"status:%s:%s\" % [check.host, check.check]\r\n \r\n check.last_state_change = @redis.hget(key, \"last_state_change\")\r\n check.previous_exitcode = @redis.hget(key, \"exitcode\")\r\n\r\n @redis.multi do\r\n @redis.hset(key, \"host\", check.host)\r\n @redis.hset(key, \"check\", check.check)\r\n @redis.hset(key, \"exitcode\", check.exitcode)\r\n @redis.hset(key, \"lastcheck\", check.last_check)\r\n @redis.hset(key, \"last_state_change\", check.last_state_change)\r\n @redis.hset(key, \"output\", check.output)\r\n @redis.hset(key, \"perfdata\", check.perfdata)\r\n\r\n unless check.changed_state?\r\n @redis.hincrby(key, \"count\", 1)\r\n else\r\n @redis.hset(key, \"count\", 1)\r\n end\r\n end\r\n\r\n check.count = @redis.hget(key, \"count\")\r\nend\r\n<\/pre>\n
<\/p>\n
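The check object's changed_state? method is not shown in the post; a minimal sketch, assuming it simply compares the previous exit code loaded from Redis (a string) with the current one, might look like this:

```ruby
# Minimal sketch of the state tracking the check object could do; the real
# class is not shown in the post. A result counts as a state change when the
# exit code differs from the previously stored one, or when there is no
# previous result at all. HGET returns strings, hence the Integer() coercion.
class CheckResult
  attr_accessor :previous_exitcode, :exitcode

  def changed_state?
    previous_exitcode.nil? || Integer(previous_exitcode) != Integer(exitcode)
  end
end
```

With a stored exitcode of "0" and a new result of 2 this reports a change; two consecutive OK results do not.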
\r\n% redis-cli hgetall status:dev2.devco.net:load\r\n 1) \"check\"\r\n 2) \"load\"\r\n 3) \"host\"\r\n 4) \"dev2.devco.net\"\r\n 5) \"output\"\r\n 6) \"OK - load average: 0.00, 0.00, 0.00\"\r\n 7) \"lastcheck\"\r\n 8) \"1357494721\"\r\n 9) \"exitcode\"\r\n10) \"0\"\r\n11) \"perfdata\"\r\n12) \"{\\\"load15\\\":0.0,\\\"load5\\\":0.0,\\\"load1\\\":0.0}\"\r\n13) \"last_state_change\"\r\n14) \"1357412507\"\r\n15) \"count\"\r\n16) \"1178\"\r\n<\/pre>\n
Staleness Tracking<\/H4>
\nStaleness Tracking here means we want to know when last we saw any data about a node; if the node is not providing information we need to go and see what happened to it. Maybe it’s up but the data sender died, or maybe it’s crashed. <\/p>\n<\/p>\n
\r\ndef update_host_last_seen(host, time)\r\n @redis.zadd(\"host:last_seen\", time, host)\r\nend\r\n<\/pre>\n
<\/p>\n
\r\ndef get_stale_hosts(age)\r\n @redis.zrangebyscore(\"host:last_seen\", 0, (Time.now.utc.to_i - age))\r\nend\r\n<\/pre>\n
<\/p>\n
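To see why the score range 0 to now minus age selects exactly the stale hosts, here is a pure-Ruby illustration of the same query over an ordinary Hash (the hosts and timestamps are made-up examples):

```ruby
# Illustration only, no Redis involved: members are hosts, scores are the
# last seen timestamps, and ZRANGEBYSCORE key 0 (now - age) returns every
# member whose score falls inside that range
now = 1_357_494_721                    # example epoch timestamp
last_seen = {
  "dev1.devco.net" => now - 300,       # quiet for 5 minutes
  "dev2.devco.net" => now - 10,        # reported recently
}

age   = 120
stale = last_seen.select { |_host, seen| seen.between?(0, now - age) }.keys
stale  # => ["dev1.devco.net"]
```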
\r\n% redis-cli zrangebyscore host:last_seen 0 $(expr $(date +%s) - 60)\r\n 1) \"dev1.devco.net\"\r\n<\/pre>\n
Event Notification<\/H4>
\nWhen a check result enters the system that is either a state change, a problem or has metrics associated with it, we’d want to send it on to other pieces of code.<\/p>\n<\/p>\n
\r\ndef publish_metrics(check)\r\n if check.has_perfdata?\r\n msg = {\"metrics\" => check.perfdata, \"type\" => \"metrics\", \"time\" => check.last_check, \"host\" => check.host, \"check\" => check.check}.to_json\r\n publish([\"metrics\", check.host, check.check], msg)\r\n end\r\nend\r\n\r\ndef publish(type, message)\r\n target = [\"overwatch\", Array(type).join(\":\")].join(\":\")\r\n @redis.publish(target, message)\r\nend\r\n<\/pre>\n
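The channel name the publish helper above constructs is just "overwatch" prefixed to the colon-joined type parts; extracted on its own so the naming scheme is easy to see:

```ruby
# Same name building logic as the publish method above, pulled out in
# isolation: "overwatch" joined with the colon separated type parts
def publish_target(type)
  ["overwatch", Array(type).join(":")].join(":")
end

publish_target(["metrics", "dev1.devco.net", "load"])
# => "overwatch:metrics:dev1.devco.net:load"
```

Because of Array(), a plain string works too: publish_target("state_change") gives "overwatch:state_change".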
Alert Tracking<\/H4>
\nAlert Tracking means keeping track of which alerts we’ve already sent and when we’ll need to send them again – for example only after 2 hours of the same problem, not on every check result, which might come in every minute.<\/p>\n<\/p>\n
\r\n@redis.psubscribe(\"overwatch:state_change:*\", \"overwatch:issues:*\") do |on|\r\n on.pmessage do |channel, message|\r\n event = JSON.parse(message)\r\n \r\n case event[\"type\"]\r\n when \"issue\"\r\n sender.notify_issue(event[\"issue\"][\"exitcode\"], event[\"host\"], event[\"check\"], event[\"issue\"][\"output\"])\r\n when \"state_change\"\r\n if event[\"state_change\"][\"exitcode\"] == 0\r\n sender.notify_recovery(event[\"host\"], event[\"check\"], event[\"state_change\"][\"output\"])\r\n end\r\n end\r\n end\r\nend\r\n<\/pre>\n
<\/p>\n
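The exact message format is not spelled out in the post; this hypothetical issue event is inferred purely from the fields the consumer above reads (type, host, check and the nested hash keyed by the type):

```ruby
require "json"

# Hypothetical event, shaped after what the consumer reads: a "type" field
# plus a nested hash keyed by that type holding exitcode and output
msg = {
  "type"  => "issue",
  "host"  => "dev2.devco.net",
  "check" => "load",
  "issue" => {"exitcode" => 2, "output" => "CRITICAL - load average: 10.20, 9.85, 9.51"},
}.to_json

event = JSON.parse(msg)
event["issue"]["exitcode"]  # => 2
```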
\r\ndef record_alert(host, check, status, expire=7200)\r\n key = \"alert:%s:%s:%d\" % [host, check, status]\r\n\r\n # the two calls could also be combined into a single atomic SETEX\r\n @redis.set(key, 1)\r\n @redis.expire(key, expire)\r\nend\r\n<\/pre>\n
<\/p>\n
\r\ndef alert_ttl(host, check, status)\r\n key = \"alert:%s:%s:%d\" % [host, check, status]\r\n @redis.ttl(key)\r\nend\r\n<\/pre>\n
<\/p>\n
\r\ndef clear_alert_ttls(host, check)\r\n # fetch only the matching keys rather than the whole keyspace; KEYS still\r\n # scans server side so on very large databases SCAN would be preferable\r\n keys = @redis.keys(\"alert:#{host}:#{check}:*\")\r\n @redis.del(keys) unless keys.empty?\r\nend\r\n<\/pre>\n
<\/p>\n
\r\ndef notify_issue(exitcode, host, check, output)\r\n # TTL returns a negative value when no alert is being tracked: -1 on old\r\n # Redis, -2 since Redis 2.8, so test for < 0 rather than == -1\r\n if (ttl = @storage.alert_ttl(host, check, exitcode)) < 0\r\n subject = \"%s %s#%s\" % [status_for_code(exitcode), host, check]\r\n message = \"%s: %s\" % [subject, output]\r\n\r\n send_alert(message, subject, @recipients)\r\n\r\n @storage.record_alert(host, check, exitcode, @alert_interval)\r\n else\r\n Log.info(\"Not alerting %s#%s due to interval restrictions, next alert in %d seconds\" % [host, check, ttl])\r\n end\r\nend\r\n<\/pre>\n
<\/p>\n
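notify_issue above calls a status_for_code helper the post does not show; a sketch assuming the common Nagios-style exit code convention, which matches the post using 0 for OK and 3 for UNKNOWN:

```ruby
# Hypothetical helper assumed by notify_issue; the mapping follows the usual
# Nagios plugin convention, consistent with the post treating exitcode 0 as
# OK and 3 as UNKNOWN
def status_for_code(code)
  {0 => "OK", 1 => "WARNING", 2 => "CRITICAL"}.fetch(Integer(code), "UNKNOWN")
end

status_for_code(2)  # => "CRITICAL"
status_for_code(3)  # => "UNKNOWN"
```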
\r\ndef notify_recovery(host, check, output)\r\n subject = \"RECOVERY %s#%s\" % [host, check]\r\n message = \"%s: %s\" % [subject, output]\r\n\r\n send_alert(message, subject, @recipients)\r\n \r\n @storage.clear_alert_ttls(host, check)\r\nend\r\n<\/pre>\n
Conclusion<\/H3>
\nThis covered a few Redis basics but it’s a very rich system that can be used in many areas, so if you are interested spend some quality time with its docs.<\/p>\n