by R.I. Pienaar | Jul 16, 2010 | Front Page
I was invited to QCon London this year to give a talk. I chose to talk about how I’ve helped to build a startup heavily favoring the scenario where developers do support, rollouts and maintenance of their code directly in production.
My talk goes into the approaches I took while thinking about networks, boxes, operating systems, team structure, monitoring and so forth to attain these goals in a way that does not compromise the traditional goals that sysadmins have as a team and profession.
You can watch the talk – roughly 50 minutes – at the InfoQ site.
I should add I was feeling a bit rough on the day and coming down with a cold, but mostly I think I remained more or less conscious during the talk :)
by R.I. Pienaar | Jul 14, 2010 | Code
The problem of getting EC2 images to do what you want is quite significant; mostly I find the whole thing a bit flaky, with too many moving parts.
- When and what AMI to start
- Once started, how do you configure it from base to functional? Especially in a way that doesn’t become vendor lock-in.
- How do you manage the massive sprawl of instances, inventory them and track your assets
- Monitoring and general life cycle management
- When and how do you shut them down, and what cleanup is needed? Being billed by the hour means this has to be a consideration
These are significant problems and just the tip of the iceberg. All of the traditional aspects of infrastructure management – like Asset Management, Monitoring, Procurement – are totally useless in the face of the cloud.
A lot of work is being done in this space by tools like Pool Party, Fog and Opscode, and by many other players like the countless companies launching control panels, clouds overlaying other clouds and so forth. As a keen believer in Open Source, I don’t find many of these options appealing.
I want to focus here on the second step above and show how I pulled together a number of my Open Source projects to automate it. I built a generic provisioner that is hopefully expandable and usable in your own environments. The provisioner deals with all the interactions between Puppet on nodes, the Puppet Master, the Puppet CA and the administrators.
<rant> Sadly the activity in the Puppet space is a bit lacking in the area of making it really easy to get going on a cloud. There are suggestions on the level of monitoring syslog files from a cronjob and signing certificates based on that. Really. It’s a pretty sad state of affairs when that’s the state of the art.
Compare the ease of using Chef’s Knife with a lot of the suggestions currently out there for using Puppet in EC2 like these: 1, 2, 3 and 4.
I’m not trying to have a general Puppet-bashing session here, but I think it’s quite telling of the two user bases that cloud readiness is such an afterthought so far in Puppet and its community. </rant>
My basic needs are that instances all start in the same state: I just want one base AMI that I massage into the desired final state. Most of this work has to be done by Puppet so it’s repeatable, and MCollective will drive the process.
I bootstrap the EC2 instances using my EC2 Bootstrap Helper and use that to install MCollective with just a provision agent, configure it and hook it into my collective.
From there I have the following steps that need to be done (a rough sketch of driving them over MCollective follows the list):
- Pick a nearby Puppet Master, perhaps using EC2 Region or country as guides
- Set up the host – perhaps using /etc/hosts – to talk to the right master
- Revoke and clean any old certs for this hostname on all masters
- Instruct the node to create a new CSR and send it to its master
- Sign the certificate
- Run my initial bootstrap Puppet environment; this sets up some hard-to-do things like the facts my full build needs
- Run the final Puppet run in my normal production environment.
- Notify me using XMPP, Twitter, Google Calendar, Email, Boxcar and whatever else I want of the new node
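To give a feel for what driving this with MCollective looks like, here is a rough sketch of a client walking through some of those steps using the SimpleRPC client API. The provision and puppetca agent and action names below are assumptions made purely for illustration – the real provisioner publishes its own DDL and picks masters by configurable facts:

#!/usr/bin/env ruby
require 'mcollective'

include MCollective::RPC

abort("Please specify a node to provision") if ARGV.empty?
node = ARGV.shift

# Find a nearby Puppet Master using discovery, here simply by a country fact
masters = rpcclient("rpcutil")
masters.class_filter "puppetmaster"
masters.fact_filter "country", "uk"
master = masters.discover.sort.first

# Clean any old certs for this hostname, then point the node at its chosen
# master and have it submit a new CSR (hypothetical agents and actions)
ca = rpcclient("puppetca")
ca.clean(:certname => node)

provision = rpcclient("provision")
provision.identity_filter node
provision.set_puppet_host(:master => master)
provision.request_certificate

# Sign the waiting certificate, then do the bootstrap and final Puppet runs
ca.sign(:certname => node)

provision.bootstrap_puppet
provision.run_puppet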
This is a lot of work to be done on every node. And more importantly it’s a task that involves many other nodes like puppet masters, notifiers and so forth. It has to adapt dynamically to your environment and not need reconfiguring when you get new Puppet Masters. It has to deal with new data centers, regions and countries without needing any configuration or even a restart. It has to happen automatically without any user interaction so that your auto scaling infrastructure can take care of booting new instances even while you sleep.
The provisioning system I wrote does just this. It follows the above logic for any new node and is configurable for which facts to use to pick a master and how to notify you of new systems. It adapts automatically to your ever-changing environments thanks to discovery of resources. The actions to perform on the node are easily pluggable by just creating an agent that complies with the published DDL, like the sample agent.
You can see it in action in the video below. I am using Amazon’s console to start the instance; you’d absolutely want to automate that for your needs. You can also see it directly on blip.tv here. For best effect – and to be able to read the text – please fullscreen.
In case the text is unreadable in the video, a log file similar to the one in the video can be seen here and an example config here.
Past this point my Puppet runs are managed by my MCollective Puppet Scheduler.
While this is all done using EC2, nothing prevents you from applying these same techniques to your own data center or non-cloud environment.
Hopefully this shows that, with MCollective, you can wrap all the logic needed for very complex interactions with systems that are perhaps not known for their good reusable APIs in simple-to-understand wrappers, exposing those systems to the network at large with APIs that can be used to reach your goals.
The various bits of open source I used here are MCollective, Puppet, my EC2 Bootstrap Helper, the MCollective Puppet Scheduler and the provisioning agent itself.
by R.I. Pienaar | Jul 12, 2010 | Uncategorized
I’ve been working a bit on streamlining the builds I do on EC2 and wanted a better way to provision my machines. I use CentOS and the options for nicely built EC2 images are pretty rough to non-existent. I’ve used the Rightscale ones till now and, while they’re nice, they are also full of code copyrighted by Rightscale.
What I really wanted was something as full featured as Ubuntu’s CloudInit, but I also didn’t feel much like touching any Python. I hacked up something that more or less does what I need. You can get it on GitHub. It’s written and tested on CentOS 5.5.
The idea is that you’ll have a single multi-purpose AMI that you can easily bootstrap onto your puppet/mcollective infrastructure using this system. Some details below.
I prepare my base CentOS AMI with the following mods:
- Install Facter and Puppet – but not enabled
- Install the EC2 utilities
- Setup the usual getsshkeys script
- Install the ec2-boot-init RPM
- Add a custom fact that reads /etc/facts.txt – see later for why. Get one here, or see the rough sketch just after this list.
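In case that link goes away, here is a minimal sketch of what such a fact could look like. It assumes /etc/facts.txt contains simple key=value lines; the one linked above may differ:

require 'facter'

# Turn every key=value line in /etc/facts.txt into a Facter fact so data
# passed in at boot time becomes visible to Puppet and MCollective
if File.exist?("/etc/facts.txt")
  File.readlines("/etc/facts.txt").each do |line|
    fact, value = line.chomp.split("=", 2)
    next if fact.nil? || value.nil? || fact.empty?

    Facter.add(fact) do
      setcode { value }
    end
  end
end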
With this in place you need to create some Ruby scripts that you will use to bootstrap your machines. Examples would be installing mcollective and configuring it to find your current activemq, or setting up puppet and doing your initial run.
We host these scripts on any webserver – ideally S3 – so that when a machine boots it can grab the logic you want to execute on it. This way you can bug fix your bootstrapping without having to make new AMIs as well as add new bootstrap methods in future to existing AMIs.
Here’s a simple example that just runs a shell command:
newaction("shell") do |cmd, ud, md, config|
if cmd.include?(:command)
system(cmd[:command])
end
end |
newaction("shell") do |cmd, ud, md, config|
if cmd.include?(:command)
system(cmd[:command])
end
end
You want to host this on any webserver in a file called shell.rb. Now create a file list.txt in the same location that just has this:

shell.rb
You can list as many scripts as you want. Now when you boot your instance, pass it data like this:
---
:facts:
  role: webserver
:actions:
  - :url: http://your.net/path/to/actions/list.txt
    :type: :getactions
  - :type: :shell
    :command: date > /tmp/test
The above will fetch the list of actions – our shell.rb – from http://your.net/path/to/actions/list.txt and then run the command date > /tmp/test using the shell action. The actions are run in order, so you probably always want getactions to happen first.
Other actions that this script will take:
- Cache all the user and meta data in /var/spool/ec2boot
- Create /etc/facts.txt with all your facts that you passed in as well as a flat version of the entire instance meta data – see the example after this list.
- Create a MOTD that shows some key data like AMI ID, Zone, Public and Private hostnames
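Purely as an illustration – the exact keys come from whatever facts you passed in plus the flattened meta data, so the names and values below are made up – /etc/facts.txt could end up looking something like this:

role=webserver
instance_id=i-0000abcd
placement_availability_zone=eu-west-1b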
The boot library provides a few helpers that make it easier to write scripts for this environment, specifically around fetching files and logging:
["rubygems-1.3.1-1.el5.noarch.rpm",
"rubygem-stomp-1.1.6-1.el5.noarch.rpm",
"mcollective-common-#{version}.el5.noarch.rpm",
"mcollective-#{version}.el5.noarch.rpm",
"server.cfg.templ"].each do |pkg|
EC2Boot::Util.log("Fetching pkg #{pkg}")
EC2Boot::Util.get_url("http://foo.s3.amazonaws.com/#{pkg}", "/mnt/#{pkg}")
end |
["rubygems-1.3.1-1.el5.noarch.rpm",
"rubygem-stomp-1.1.6-1.el5.noarch.rpm",
"mcollective-common-#{version}.el5.noarch.rpm",
"mcollective-#{version}.el5.noarch.rpm",
"server.cfg.templ"].each do |pkg|
EC2Boot::Util.log("Fetching pkg #{pkg}")
EC2Boot::Util.get_url("http://foo.s3.amazonaws.com/#{pkg}", "/mnt/#{pkg}")
end
This code fetches a bunch of files from an S3 bucket and saves them into /mnt. Each fetch gets logged to console and syslog. Using this GET helper has the advantage that sane retrying is built in for you already.
It’s fairly early days for this code but it works and I am using it. I’ll probably be adding a few more features soon; let me know in the comments if you need anything specific or even if you just find it useful.
by R.I. Pienaar | Jul 7, 2010 | Code
Some time ago I wrote about how to reuse Puppet providers in your Ruby scripts. I’ll take that a bit further here and show you how to create any kind of resource.
Puppet works based on resources and catalogs. A catalog is a collection of resources, and Puppet applies a catalog to a machine. So you can do as before and call a type’s methods directly, but if you want to build up a resource and just say ‘do it’, you need to go via a catalog.
Here’s some code. I don’t know if this is the best way to do it; I dug around the code for ralsh to figure this out:
require 'puppet'

# The resource's properties, exactly as you would write them in a manifest
params = { :name     => "rip",
           :comment  => "R.I.Pienaar",
           :password => '......' }

# Build a user resource, put it in a single-resource catalog and apply it
pup = Puppet::Type.type(:user).new(params)

catalog = Puppet::Resource::Catalog.new
catalog.add_resource pup
catalog.apply
That’s really simple and doesn’t require you to know much about the inner workings of a type, you’re just mapping the normal Puppet manifest to code and applying it. Nifty.
The natural progression – to me anyway – is to put this stuff into an MCollective agent and build a distributed ralsh.
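As a rough illustration of what such an agent might look like – the class, action and reply fields below are assumptions for the sake of the example, not the actual plugin – the body of the action is essentially the catalog code above wrapped in SimpleRPC:

module MCollective
  module Agent
    # Sketch of a "puppetral" agent: take a resource type plus its
    # properties from the request and apply it through the Puppet RAL
    class Puppetral < RPC::Agent
      action "do" do
        require 'puppet'

        properties = request.data.clone
        type = properties.delete(:type)

        resource = Puppet::Type.type(type.to_sym).new(properties)

        catalog = Puppet::Resource::Catalog.new
        catalog.add_resource resource
        catalog.apply

        reply[:result] = "applied #{type}[#{properties[:name]}]"
      end
    end
  end
end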
Here’s a sample use case: I wanted to change my user’s password everywhere:
$ mc-rpc puppetral do type=user name=rip password='$1$xxx'
And that will go out, find all my machines and use the Puppet RAL to change my password for me. You can do anything Puppet can: manage /etc/hosts, add and remove users, manage packages and services, and even use your own custom types. Distributed and in parallel over any number of hosts.
Some other examples:
Add a user:
$ mc-rpc puppetral do type=user name=foo comment="Foo User" managehome=true
Run a command using exec, with the magical creates option:
$ mc-rpc puppetral do type=exec name="/bin/date > /tmp/date" user=root timeout=5 creates="/tmp/date"
Add an aliases entry:
$ mc-rpc puppetral do type=mailalias name=foo recipient="rip@devco.net" target="/etc/aliases"
Install a package:
$ mc-rpc puppetral do type=package name=unix2dos ensure=present
by R.I. Pienaar | Jul 3, 2010 | Code
A very typical scenario I come across on many sites is the requirement to monitor something like Puppet across 100s or 1000s of machines.
The typical approaches are to add a central check on your Puppet Master, or to check using NRPE or NSCA on every node. For this example the option exists to easily check on the master and get one check, but that isn’t always achievable.
Think for example about monitoring mail queues on all your machines to make sure things like root mail aren’t getting stuck. In those cases you are forced to do per-node checks, which inevitably results in huge notification storms if your mail server goes down and stops receiving mail from the many nodes.
MCollective has had a plugin that can run NRPE commands for a long time. I’ve now added a nagios plugin that uses this agent to combine results from many hosts.
Sticking with the Puppet example, here are my needs:
- I want to know if any Puppet machine, anywhere, isn’t successfully doing runs.
- I want to be able to do puppetd --disable and not get alerts for those machines.
- I do not want to change any configs when I am adding new machines, it should just work.
- I want the ability to do monitoring on subsets of machines on different probes
This is a pretty painful set of requirements for nagios on its own to achieve. Easy with the help of MCollective.
Ultimately, I just want this:
OK: 42 WARNING: 0 CRITICAL: 0 UNKNOWN: 0
Meaning 42 machines – only the ones currently enabled – are all running happily.
The NRPE Check
We put the NRPE logic on every node. A simple check command in /etc/nagios/nrpe.d/check_puppet_run.cfg:
command[check_puppet_run]=/usr/lib/nagios/plugins/check_file_age -f /var/lib/puppet/state/state.yaml -w 5400 -c 7200
In my case I just want to know successful runs are happening; if I wanted to know the code is actually compiling correctly, I’d monitor the local cache age and size.
Determining if Puppet is enabled or not
Currently this is a bit hacky; I’ve filed tickets with Puppet Labs to improve it. The way to determine if Puppet is disabled is to check whether the lock file exists and whether it is 0 bytes. If it’s not zero bytes, it means a puppetd is currently doing a run – there will be a pid in it – or the puppetd crashed and a stale pid is preventing other runs.
To automate this and integrate it into MCollective I’ve made a fact, puppet_enabled. We’ll use this in MCollective discovery to only monitor machines that are enabled. Get this onto all your nodes, perhaps using Plugins in Modules.
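The fact itself boils down to the lockfile logic above. Here is a minimal sketch, assuming the usual /var/lib/puppet/state/puppetdlock location – adjust the path for your vardir:

require 'facter'

# puppet_enabled is 0 when puppetd has been disabled (the lockfile exists
# and is empty) and 1 otherwise
Facter.add(:puppet_enabled) do
  setcode do
    lockfile = "/var/lib/puppet/state/puppetdlock"

    if File.exist?(lockfile) && File.size(lockfile) == 0
      0
    else
      1
    end
  end
end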
The MCollective Agent
You want to deploy the MCollective NRPE Agent to all your nodes. Once you’ve got it right you can test it easily using something like this:
% mc-nrpe -W puppet_enabled=1 check_puppet_run
* [ ============================================================> ] 47 / 47
Finished processing 47 / 47 hosts in 395.51 ms
OK: 47
WARNING: 0
CRITICAL: 0
UNKNOWN: 0
Note we’re restricting the run to only enabled hosts.
Integrating into Nagios
The last step is to add this to Nagios. I create SSL certs and a specific client configuration for Nagios and put these in its home directory.
The check-mc-nrpe plugin works best with Nagios 3 as it returns subsequent lines of output indicating which machines are in what state, so you get the details hidden behind the aggregation in alerts. It also outputs performance data for total nodes, each status, and how long the check took.
The nagios command would be something like this:
define command{
        command_name    check_mc_nrpe
        command_line    /usr/sbin/check-mc-nrpe --config /var/log/nagios/.mcollective/client.cfg -W $ARG1$ $ARG2$
}
And finally we need to make a service:
define service{
        host_name               monitor1
        service_description     mc_puppet-run
        use                     generic-service
        check_command           check_mc_nrpe!puppet_enabled=1!check_puppet_run
        notification_period     awakehours
        contact_groups          sysadmin
}
Here are a few other command examples I use:
All machines with my Puppet class “pki”, check the age of certs:
check_command check_mc_nrpe!pki!check_pki
All machines with my Puppet class “bacula::node”, make sure the FD is running:
check_command check_mc_nrpe!bacula::node!check_fd
…and that they were backed up:
check_command check_mc_nrpe!bacula::node!check_bacula_main
Using this I removed hundreds of checks from my monitoring platform, saving on resources and making sure I can do my critical monitoring tasks better.
Depending on the quality of your monitoring system you might even get a graph showing the details hidden behind the aggregation:
The above is a graph showing a series of servers where the backup ran later than usual. I had only 2 alerts; before aggregation I would have had more than 30.
Restrictions for Probes
The last remaining requirement I had was to be able to do checks on different probes and restrict them. My Collective is one big one spread all over the world, which means things are sometimes a bit slow discovery-wise.
So I have many nagios servers doing local checks. Using MCollective discovery I can now easily restrict checks. For example, if I only wanted to check machines in the USA and I had a fact country, I’d only have to change my command line in the service declaration:
check_command check_mc_nrpe!puppet_enabled=1 country=us!check_puppet_run
This will then, via MCollective discovery, monitor only the machines in the US.
What to monitor this way
As this style of monitoring is done using discovery, you need to think carefully about what you monitor this way. It’s totally conceivable that if a node is under high CPU load it won’t respond to discovery commands in time, and so won’t get monitored!
You would, for example, not want to monitor things like load averages or really critical services this way. But we all have a lot of peripheral things like zombie process counts, and many other places where aggregation makes a lot of sense; in those cases by all means consider this approach.