
Puppet Search Engine with MCollective

The currently supported way of making a Puppet node aware of the state of other nodes is Exported Resources. There is some movement in this space, see for example a recent thread on the developers list.

I’ve personally never found exported resources to be a good fit for me. I have many masters spread over long network distances and they often operate in a split brain scenario where the masters can’t see each other. This makes using something like MySQL as the backing database hard, since you’d effectively need a single write point and replication is point to point.

There are other problems too. Consider a use case where you have modules for phpbb, wordpress, rails apps, phpmyadmin and several others that all need to talk to the master database. Maybe you also want to show a resource list for customers in the motd.

Every one of these modules has configuration parameters that differ in some way or another. The exported resources model kind of expects you to export, from the MySQL master, the configs for all the apps using the database. This isn’t viable when you don’t know in advance everything that will talk to this MySQL server, and the resource abstraction doesn’t really work well when the resulting config needs to look like this:

$write_database = "db1.your.net";
$read_databases = "db2.your.net", "db3.your.net", "db4.your.net";

Filling in that array with exported resources is pretty hard.

So I favor searching for nodes and walking the results. I tend to use extlookup for this and just maintain the host lists by hand; this works, but hand maintenance doesn’t scale and there has to be a better way.

I wrote an mcollective registration system that puts all node data into MongoDB instances.

Simply using mcollective to manage the data store has several advantages:

  • You can run many instances of the registration receiver, one per puppet master. As writes are not done by the master but by MCollective, there’s no need for a crazy database replication system; mcollective gives you that for free
  • Using ActiveMQ’s meshed network architecture you can achieve better resiliency and more up to date data than with simple point to point database replication. Routing around internet outages is much easier this way.
  • If you bootstrap machines via mcollective your registration data will be there before you even do your first Puppet run

And storing the data in a document database – specifically Mongo rather than some of the other contenders – has its own advantages:

  • Writes are very fast – milliseconds per record, since each update is written as a single document and not as 100s or 1000s of rows
  • Mongo has a query language to query the documents; other NoSQL systems insist on map reduce queries, which would be a bad fit for use inside Puppet manifests without writing a lot of boilerplate map reduce functions, something I don’t have time for.

The mcollective registration system stores the following data per node:

  • A list of all the Agents
  • A list of all the facts
  • A list of all the Puppet classes assigned to this node
  • FQDN, time of most recent update and a few other bits
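
A trimmed example of the kind of document that ends up in Mongo for a node might look roughly like this (the exact field names depend on the registration agent you run):

{"fqdn"      => "web1.my.net",
 "lastseen"  => 1283011600,
 "agentlist" => ["discovery", "package", "nrpe", "registration"],
 "classes"   => ["apache", "mysql::server", "motd"],
 "facts"     => {"customer"    => "acme",
                 "environment" => "production",
                 "ipaddress"   => "192.168.1.10"}}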

You can now use this in Puppet manifests to fill some of our stated needs; the code below uses the search engine to find all the machines with class “mysql::server” for a specific customer and then displays this data in the MOTD:

# Connect to the database, generally do this in site.pp
search_setup("localhost", "puppet", "nodes")
 
# A define to add the hostname and ip address of the MySQL server to motd
define motd::register_mysql {
   $node = load_node($name)
 
   motd::register{"MySQL Server:  ${node['fqdn']} / ${node['facts']['ipaddress']}": }
}
 
$nodes = search_nodes("{'facts.customer' => '$customer', 'classes' => 'mysql::server'}")
 
motd::register_mysql{$nodes: }

Here is a template that does a search using this too; it builds a comma separated list of MySQL slave IP addresses for a customer:

$read_databases = <%= 
nodes = Puppet::Util::MongoQuery.instance.find_nodes({"facts.customer" => "#{customer}", "classes" => "mysql::slave"})
 
nodes.map {|n| '"' + n["facts"]["ipaddress"] + '"' }.join(", ")
%>

This code can probably be cleaned up a bit; it would be helpful if Puppet gave us better ways to extend it to enable this kind of thing.

I find this approach much less restrictive and easier to get my head around than exported resources. I obviously see the need for tracking actual state across an infrastructure, and exported resources will solve that in future once you have the ability to require a database on another node to be up before starting a webserver process, but that seems to be a long way off for Puppet. And I think even then a combination system like this will still have value.

A note about the code and how to install it: Puppet has a very limited extension system at present; you cannot add Puppet::Util classes via pluginsync and there’s no support for any kind of resource pooling from plugins. So this code needs to be installed into your Ruby libdir directly, and I use a Singleton to maintain a database connection per Puppetmaster process. This is not ideal; I’ve filed feature enhancement requests with Puppet Labs and hopefully things will become easier in time.
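
To give an idea of the shape of the thing, here is a minimal sketch of the kind of Puppet::Util::MongoQuery singleton I am talking about – the real code is in GitHub (linked below); this version just assumes the mongo gem and the puppet database with a nodes collection used earlier:

require 'rubygems'
require 'singleton'
require 'mongo'

module Puppet
  module Util
    class MongoQuery
      include Singleton

      # Set up a single connection per puppetmaster process
      def setup(host = "localhost", db = "puppet", collection = "nodes")
        @connection = Mongo::Connection.new(host)
        @collection = @connection.db(db).collection(collection)
      end

      # Return an array of node documents matching a MongoDB query hash
      def find_nodes(query)
        setup unless @collection
        @collection.find(query).to_a
      end
    end
  end
end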

The code to enable all of the above is in my GitHub account.

Puppet 2.6.1 and extlookup

NOTE: As of version 2.6.1 of Puppet this function is part of the core functionality provided from Puppet Labs.

I wrote a data store for puppet called extlookup and blogged about it before. With the release of Puppet 2.6.1 today extlookup is now fully integrated upstream and the code is owned by Puppet Labs.

Bug reports and so forth should go to their bugtracker, and full documentation for the function now exists in the main project docs.
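
If you haven’t used it before, basic usage looks roughly like this – the datadir, precedence and key here are just examples:

# site.pp
$extlookup_datadir = "/etc/puppet/extdata"
$extlookup_precedence = ["%{fqdn}", "common"]

# in a manifest, with an optional default as the second argument
$smtp_server = extlookup("smtp_server", "smtp.example.net")

# /etc/puppet/extdata/common.csv would then contain:
# smtp_server,mail1.example.net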

Very happy about this, looking forward to YAML and JSON support being added in the near future.

I’ve also just tested Puppet 2.6.1 on a number of my machines and so far no show stoppers and the basics all work with MCollective still. I’ll do some more thorough testing soon.

Visiting the US and Puppet Camp

I’ll be visiting the US later this month. I’ll be in San Francisco on and off from the 24th of September to 20th October and the rest of the time I’ll be in Portland.

I’ll be attending Puppet Camp and giving a talk about Marionette Collective for Puppet Users.

It would be excellent to meet a lot of people I speak with on IRC and Twitter while there, and if you’re a Puppet user in the area you really should come to Puppet Camp; I’ve been to the EU one and can totally recommend it.

Effective adhoc commands in clusters

Last night I had a bit of a mental dump on Twitter about structured and unstructured data when communicating with a cluster of servers – Twitter fails at this kind of discussion, so I figured I’d follow up with a blog post.

I started off asking for a list of tools in the cluster admin space and got some great pointers which I am reproducing here:

fabric, cap, func, clusterssh, sshpt, pssh, massh, clustershell, controltier, rash (related), dsh, chef knife ssh, pdsh+dshbak and of course mcollective. I was also sent a list of ssh related tools which is awesome.

The point I feel needs to be made is that in general these tools just run commands on remote servers. They are not aware of the command’s output structure, what denotes pass or fail in the context of the command, and so on. Basically the commands people run were designed long ago to be looked at by human eyes and parsed by a human mind. Yes, they are easy to pipe and grep and chop up, but ultimately they were always designed to be run on one server at a time.

The parallel ssh’ers run these commands in parallel and you tend to get a mash of output. The output is mixed STDOUT and STDERR, and often output from different machines is multiplexed together, so you get a stream of text that is hard to decipher even on 2 machines, never mind 200 at once.

Take as an example a simple yum command to install a package:

% yum install zsh
Loaded plugins: fastestmirror, priorities, protectbase, security
Loading mirror speeds from cached hostfile
372 packages excluded due to repository priority protections
0 packages excluded due to repository protections
Setting up Install Process
Package zsh-4.2.6-3.el5.i386 already installed and latest version
Nothing to do

When run on one machine you pretty much immediately know what’s going on: the package was already there so nothing got done. Now let’s see cap invoke:

# cap invoke COMMAND="yum -y install zsh"
  * executing `invoke'
  * executing "yum -y install zsh"
    servers: ["web1", "web2", "web3"]
    [web2] executing command
    [web1] executing command
    [web3] executing command
 ** [out :: web2] Loaded plugins: fastestmirror, priorities, protectbase, security
 ** [out :: web2] Loading mirror speeds from cached hostfile
 ** [out :: web3] Loaded plugins: fastestmirror, priorities, protectbase
 ** [out :: web3] Loading mirror speeds from cached hostfile
 ** [out :: web3] 495 packages excluded due to repository priority protections
 ** [out :: web2] 495 packages excluded due to repository priority protections
 ** [out :: web3] 0 packages excluded due to repository protections
 ** [out :: web3] Setting up Install Process
 ** [out :: web2] 0 packages excluded due to repository protections
 ** [out :: web2] Setting up Install Process
 ** [out :: web1] Loaded plugins: fastestmirror, priorities, protectbase
 ** [out :: web3] Package zsh-4.2.6-3.el5.x86_64 already installed and latest version
 ** [out :: web3] Nothing to do
 ** [out :: web1] Loading mirror speeds from cached hostfile
 ** [out :: web1] Install       1 Package(s)
 ** [out :: web2] Package zsh-4.2.6-3.el5.x86_64 already installed and latest version
 ** [out :: web2] Nothing to do
 ** [out :: web1] 548 packages excluded due to repository priority protections
 ** [out :: web1] 0 packages excluded due to repository protections
 ** [out :: web1] Setting up Install Process
 ** [out :: web1] Resolving Dependencies
 ** [out :: web1] --> Running transaction check
 ** [out :: web1] ---> Package zsh.x86_64 0:4.2.6-3.el5 set to be updated
 ** [out :: web1] --> Finished Dependency Resolution
 ** [out :: web1]
 ** [out :: web1] Dependencies Resolved
 ** [out :: web1]
 ** [out :: web1] ================================================================================
 ** [out :: web1] Package      Arch            Version                Repository            Size
 ** [out :: web1] ================================================================================
 ** [out :: web1] Installing:
 ** [out :: web1] zsh          x86_64          4.2.6-3.el5            centos-base          1.7 M
 ** [out :: web1]
 ** [out :: web1] Transaction Summary
 ** [out :: web1] ================================================================================
 ** [out :: web1] Install       1 Package(s)
 ** [out :: web1] Upgrade       0 Package(s)
 ** [out :: web1]
 ** [out :: web1] Total download size: 1.7 M
 ** [out :: web1] Downloading Packages:
 ** [out :: web1] Running rpm_check_debug
 ** [out :: web1] Running Transaction Test
 ** [out :: web1] Finished Transaction Test
 ** [out :: web1] Transaction Test Succeeded
 ** [out :: web1] Running Transaction
 ** [out :: web1] Installing     : zsh                                                      1/1
 ** [out :: web1]
 ** [out :: web1]
 ** [out :: web1] Installed:
 ** [out :: web1] zsh.x86_64 0:4.2.6-3.el5
 ** [out :: web1]
 ** [out :: web1] Complete!
    command finished
zlib(finalizer): the stream was freed prematurely.
zlib(finalizer): the stream was freed prematurely.
zlib(finalizer): the stream was freed prematurely.

Most of this stuff scrolled off my screen and at the end all I had was the last bit of output. I could scroll up and still figure out what was going on – 2 of the 3 already had it installed and one got it. Now imagine 100 or 500 of these machines with their output all mixed in. Just parsing this output would be prone to human error and you’re likely to miss that something failed.

So here is my point: your cluster management tool needs to provide an API around everyday commands like package management, process listing and so on. It should return structured data, and you can use that structured data to create tools better suited to working on large numbers of machines. Because the output is standardized, it can also ship generic tools that just do the right thing out of the box.
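
To make that concrete, a structured reply for a package action only needs to carry the few fields you actually care about per host – something along these lines (the field names here are illustrative):

{:sender     => "web1.my.net",
 :statuscode => 0,
 :statusmsg  => "OK",
 :data       => {:ensure => "4.2.6-3.el5"}}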

With the package example above, the pages of stuff 500 machines spew out while installing aren’t important; you just want to know the result in a nice way. Here’s what mcollective does:

$ mc-package install zsh
 
 * [ ============================================================> ] 3 / 3
 
web2.my.net                      version = zsh-4.2.6-3.el5
web3.my.net                      version = zsh-4.2.6-3.el5
web1.my.net                      version = zsh-4.2.6-3.el5
 
---- package agent summary ----
           Nodes: 3 / 3
        Versions: 3 * 4.2.6-3.el5
    Elapsed Time: 16.33 s

In the case of a package you just want to know the version after the event and a summary of the status. Just by looking at the summary I know the desired result was achieved; if different versions were listed I could very quickly identify the problem machines.

Here’s another example – NRPE this time:

% mc-rpc nrpe runcommand command=check_disks
 
 * [ ============================================================> ] 47 / 47
 
 
dev1.my.net                      Request Aborted
   CRITICAL
          Exit Code: 2
   Performance Data:  /=4111MB;3706;3924;0;4361 /boot=26MB;83;88;0;98 /dev/shm=0MB;217;230;0;256
             Output: DISK CRITICAL - free space: / 24 MB (0% inode=86%);
 
 
Finished processing 47 / 47 hosts in 766.11 ms

Notice here that I didn’t use an NRPE specific mc- command; I just used the generic rpc caller, and the caller knows that I am only interested in seeing the results from machines that are in WARNING or CRITICAL state. If you ran this on your console you’d see that the ‘Request Aborted’ is red and the ‘CRITICAL’ is yellow, immediately pulling your eye to the important information. Also note how the result shows human friendly field names like ‘Performance Data’.

The formatting, the highlighting, the knowledge to only show failing resources and the human friendly headings all happen automatically. No programming of client side UI is required; you get this for free simply because mcollective focuses on putting structure around outputs.

Here’s the earlier package install example with the standard rpc caller rather than the specialized package frontend:

% mc-rpc package install package=zsh
Determining the amount of hosts matching filter for 2 seconds .... 47
 
 * [ ============================================================> ] 47 / 47
 
Finished processing 47 / 47 hosts in 2346.05 ms

Everything worked: all 47 machines have the package installed and your desired action was taken. So there is no point in spamming you with pages of junk; who wants to see all the Yum output? Had an install failed you’d have had a usable error message just for the host that failed. The output would be equally usable on one or a thousand hosts, with very little margin for human error in knowing the result of your request.

This happens because mcollective has a standard structure for responses; each response has an absolute success value that tells you whether the request failed or not, and by using this you can build generic CLI, web and other tools that display large amounts of data from a network of hosts in a way that is appropriate and context aware.

For reference here’s the response as received on the client:

{:sender=>"dev1.my.net",
 :statuscode=>1,
 :statusmsg=>"CRITICAL",
 :data=>
  {:perfdata=>
    " /=4111MB;3706;3924;0;4361 /boot=26MB;83;88;0;98 /dev/shm=0MB;217;230;0;256",
   :output=>"DISK CRITICAL - free space: / 24 MB (0% inode=86%);",
   :exitcode=>2}}
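
Because every reply follows this structure, a generic client can decide what to display without knowing anything about NRPE. A rough sketch of the kind of logic involved, assuming an array of reply hashes like the one above:

# show only the hosts that did not report success, whatever the agent was
replies.each do |reply|
  next if reply[:statuscode] == 0

  puts "#{reply[:sender]}: #{reply[:statusmsg]}"
  puts "   Output: #{reply[:data][:output]}" if reply[:data][:output]
end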

Only by thinking about CLI and admin tasks in this way do I believe we can take the Unix utilities we call on remote hosts and turn them into something appropriate for large scale parallel use that doesn’t overwhelm the human at the other end with information. Additionally, since this is a computer friendly API, it makes those tools usable in many other places like code deployers – for example enabling continuous deployment through robust use of Unix tools via such an API.

There are many other advantages to this approach. Requests are authorized at a very fine level and requests are audited. API wrappers are versioned code that can be tested in development, which makes the margin for error much smaller than running random Unix commands ad hoc. Finally, whether you’re using the code ad hoc on a CLI as above or in your continuous deployer, you share the same code that you’ve already tested and trust.

Marionette Collective version 0.4.8

I just released version 0.4.8 of mcollective. It’s a small maintenance release fixing a few bugs and adding a few features. I wasn’t planning on another 0.4.x release before the big 1.0.0, but I want to keep 1.0.0 as close as possible to something that’s been out there for a while.

The only major feature it introduces is custom reports of your infrastructure.

It supports two types of scriptlet for building reports. The first is a little DSL that uses printf style format strings:

inventory do
    format "%s:\t\t%s\t\t%s"
 
    fields { [ identity, facts["serialnumber"], facts["productname"] ] }
end

Which does something like this:

$ mc-inventory --script hardware.mc
web1:           KKxxx1H         IBM eServer BladeCenter HS20 -[8832M1X]-
rep1:           KKxxx5Z         IBM eServer BladeCenter HS20 -[8832M1X]-
db4:            KDxxxZY         IBM System x3655 -[794334G]-
man2:           KDxxxR0         eserver xSeries 336 -[88372CY]-
db2:            KDxxxGD         IBM System x3655 -[79855AG]-

The other – perhaps uglier – uses a Perl like format method. To use this you need the formatr gem installed, and a report might look like this:

formatted_inventory do
    page_length 20
 
    page_heading <<TOP
 
            Node Report @<<<<<<<<<<<<<<<<<<<<<<<<<
                        time
 
Hostname:         Customer:     Distribution:
-------------------------------------------------------------------------
TOP
 
    page_body <<BODY
 
@<<<<<<<<<<<<<<<< @<<<<<<<<<<<< @<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
identity,    facts["customer"], facts["lsbdistdescription"]
                                @<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
                                facts["processor0"]
BODY
end

And the resulting report is something like this:

$ mc-inventory --script hardware.mc
            Node Report Fri Aug 20 21:49:39 +0100
 
Hostname:         Customer:     Distribution:
-------------------------------------------------------------------------
 
web1              rip           CentOS release 5.5 (Final)
                                Intel(R) Xeon(R) CPU           L5420  
 
web2              xxxxxxx       CentOS release 5.5 (Final)
                                Intel(R) Xeon(R) CPU           X3430

The report will be paged at 20 nodes per page. The result is very pleasing even if the report format is a bit grim, but it would be much worse to write yet another reporting DSL!

See the full release notes for details on bug fixes and other features.

Making machine metadata visible

I’m quite a fan of data, metadata and querying these to interact with my infrastructure rather than interacting by hostname, and I wanted to show how far down this route I am.

This is more an iterative ongoing process than a fully baked idea at this point, since the concept of hostnames is so heavily embedded in our sysadmin culture. Today I can’t yet fully break away from it, due to tools like Nagios still relying heavily on the hostname as the index, but these are things that will improve in time.

The background is that in the old days we attempted to capture a lot of metadata in hostnames, domain names and so forth. This was kind of OK since we had static networks with relatively small numbers of hosts. Today we do ever more complex work on our servers and we have more and more servers. The advent of cloud computing has also brought with it the whole new pain of unpredictable hostnames, rapidly changing infrastructures and a much bigger emphasis on role based computing.

My metadata about my machines comes from 3 main sources:

  • My Puppet manifests – the classes and modules that get put on a machine
  • Facter facts with the ability to add many per machine easily
  • MCollective stores the metadata in MongoDB and lets me query the network in real time

Puppet manifests based on query

When setting up machines I keep some data like database master hostnames in extlookup, but in many cases I am now moving to a search based approach to finding resources. Here’s a sample manifest that will find the master database for a customer’s development machines:

$masterdb = search_nodes("{'facts.customer': '${customer}', 'facts.environment': '${environment}', 'classes': 'mysql::master'}")

This is a MongoDB query against my infrastructure database; it finds the name of a node that has the class mysql::master on it for the given customer and environment – by convention there should be only one per customer in my case. When using it in a template I can get back full objects with all the metadata for a node. Hopefully with Puppet 2.6 I can get full hashes into Puppet too!

Making Metadata Visible

With machines doing a lot of work and filling a lot of roles, and with more and more machines around, you need to be able to tell immediately what machine you are on.

I do this in several places, first my MOTD can look something like this:

   Welcome to Synchronize Your Dogmas 
            hosted at Hetzner, Germany
 
        Puppet Modules:
                - apache
                - iptables
                - mcollective member
                - xen dom0 skeleton
                - mw1.xxx.net virtual machine

I build this up using snippets from my concat module; each important module like apache can just put something like this in:

motd::register{"Apache Web Server": }

Because it is managed by my snippet library, if you remove the include line from the manifests the MOTD will automatically update.
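
The register define itself is just a thin wrapper around the concat library; something along these lines, assuming a concat::fragment style define for the snippets:

# each module registers one line that gets assembled into /etc/motd
define motd::register {
   concat::fragment{"motd_${name}":
      target  => "/etc/motd",
      content => "                - ${name}\n"
   }
}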

With the big block of welcome done, I also need to be able to show in my prompts what a machine does, who it’s for and, importantly, what environment it is in.

Above is a shot of 2 prompts in different environments; you see customer name, environment and major modules. As with the motd, I have a prompt::register define that modules use to register into the prompt.

SSH Based on Metadata

With all this metadata in place, mcollective rolled out and everything integrated, it’s now very easy to find and access machines based on it.

MCollective does real time resource discovery, so keeping with the mysql example above from puppet:

$ mc-ssh -W "environment=development customer=acme mysql::master"
Running: ssh db1.acme.net
Last login: Thu Jul 29 00:22:58 2010 from xxxx

$

Here I am ssh’ing to a server based on a query; if more than one machine matched the query a menu would be presented offering me a choice.

Monitoring Based on Metadata

Finally, setting up monitoring and keeping it in sync with reality can be a big challenge, especially in dynamic cloud based environments. Again I deal with this through discovery based on metadata:

$ check-mc-nrpe -W "environment=development customer=acme mysql::master"  check_load
check_load: OK: 1 WARNING: 0 CRITICAL: 0 UNKNOWN: 0|total=1 ok=1 warn=0 crit=0 unknown=0 checktime=0.612054

Summary

This is really the tip of the iceberg. There is a lot more that I already do – like scheduling Puppet runs on groups of machines based on metadata – but also a lot more still to do; these really are early days down this route. I am very keen to hear from others who are struggling with the shortcomings of hostname based approaches and how they deal with them.