by R.I. Pienaar | Jul 31, 2015 | Code
Puppet 4 has been out a while, but given the nature of the update – new packaging requiring new modules to manage it, etc. – I've been reluctant to upgrade and did not really have the time. Ditto for CentOS 7. But Docker will stop supporting CentOS 6 Soon Now, so this meant I had to look into both a bit closer.
Puppet 4 really is a whole new thing: it maintains backward compatibility, but in terms of actually using its features I think you'd be better off just starting fresh. I am moving the bulk of my services out of CM anyway, so my code base will be tiny and it's not a big deal for me to just throw it all out and start over.
I came across a few really interesting new things among its many features and wanted to highlight some of them. This is by no means an exhaustive list, just a whirlwind tour of a few things I picked up on.
The Forge
Not really a Puppet 4 thing per se, more a general ecosystem comment. I have 23 modules in my new freshly minted Puppet repo, with 13 of them coming from the forge. To my mind that is a really impressive figure; it makes the new starter experience so much better.
Things I still do on my own: exim, iptables, motd, pki, roles/profiles of course and users.
In the case of exim I have almost no config, it's just a package/config/service and all it does is set up a local config that talks to my smart relays. It does use my own CA though, which is why I also have my own PKI module to configure the CA and distribute certs and keys and such. The big one is iptables really, and I just haven't had the time to really consider a replacement – whatever I choose needs to play well with docker, and that's probably going to be a tall order.
Anyway, big kudos to the forge team, and shout outs to forge users: puppetlabs, jfryman, saz and garethr.
Still some things to fix – the puppet module tool is pretty grim wrt errors and feedback, and I think there's work left to do on discoverability of good modules and on finding ways to encourage people to invest time in making better ones – but this is a big change from 2 years ago for sure.
Puppet 4 Type System
Puppet 4 has a data type system. It's kind of optional, which is weird as these things go, but you can almost think of it as a built-in way to do validate_hash and friends. The implications of having it, though, are huge – it means down the line there will be a lot fewer edge cases with things just behaving weirdly.
Data used to go from Hiera to manifests and end up as strings even when the data was Boolean; now Puppet knows about actual Booleans and does not mess them up. Things will become pretty consistent and solid, and it will be easy to write well-behaved code.
For now though it’s the opposite, there are many more edge cases as a result of it.
In particular, functions that previously took a number might have assumed the number was really a string with a number in it. Now they get an actual number, and this causes breakage. There are a few of these in stdlib but they are getting fixed – expect this to catch out many templates and functions, so there will be a settling-in period, but it's well worth the effort.
Here’s an example:
define users::user(
  ...
  Enum["present", "absent"] $ensure = "present",
  Optional[String] $ssh_authorized_file = undef,
  Optional[String] $email = undef,
  Optional[Integer] $uid = undef,
  Optional[Integer] $gid = undef,
  Variant[Boolean, String] $sudoer = false,
  Boolean $setup_shell = false,
  Boolean $setup_rbenv = false
) {
  ...
}
If I passed ensure => bob to this I get:
Error: Expected parameter 'ensure' of 'Users::User[rip]' to have type Enum['present', 'absent'], got String
Pretty handy, though the errors can improve a lot – something I know is on the road map already.
You can get pretty complex with this – for example, describing the entire structure of a hash so that Puppet ensures any hash you receive matches it. Doing this would have been really hard even with all the stuff in old stdlib:
Struct[{mode => Enum[read, write, update],
        path => Optional[String[1]],
        NotUndef[owner] => Optional[String[1]]}]
I suggest you spend a good amount of time with the docs About Values and Data Types, Data Types: Data Type Syntax and Abstract Data Types. There are many interesting types like ones that do Pattern matching etc.
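For instance, Pattern takes one or more regular expressions and accepts any string matching one of them. A small illustrative sketch, with a hypothetical parameter of my own:

define users::key(
  # only accept values that look like OpenSSH key types
  Pattern[/^ssh-(rsa|dss|ed25519)$/] $keytype = "ssh-rsa"
) {
  ...
}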
Case statements and Selectors have also become type aware, as have normal expressions that test equality etc:
$enable_real = $enable ? {
  Boolean => $enable,
  String  => str2bool($enable),
  Numeric => num2bool($enable),
  default => fail('Illegal value for $enable parameter'),
}

if 5 =~ Integer[1,10] {
  notice("it's a number between 1 and 10")
}
It's not all wonderful though; I think the syntax choices are pretty poor. I scan parameter lists: a) to discover module features, b) to remind myself of the names, c) to find things to edit. With the type preceding the variable name, every single use case I have for reading module code has become worse, and I fear I'll have to resort to lots of indentation to make the variable names stand out from the type definitions. I cannot think of a single case where I will want to know a variable's data type before knowing its name. So from a readability perspective this is not great at all.
Additionally I cannot see myself using a Struct like the above in an argument list – to which Henrik says they are looking to add a typedef thing to the language so you can give complex Structs a more convenient name and use that. This will help a lot. Something like this:
type MyData = Struct[{ .... }]

define foo(MyData $bar) {
}
That'll be handy, and Henrik says this is high on the priority list – it's pretty essential from a usability perspective.
UPDATE: As of 4.4.0 this has been delivered, see Puppet 4 Type Aliases
Native data merges
You can merge arrays and hashes easily:
$ puppet apply -e '$a={"a" => "b"}; $b={"c" => "d"}; notice($a+$b)'
Notice: Scope(Class[main]): {a => b, c => d}

$ puppet apply -e 'notice([1,2,3] + [4,5,6])'
Notice: Scope(Class[main]): [1, 2, 3, 4, 5, 6]
And yes you can now use a ; instead of awkwardly making new lines all the time for quick one-liner tests like this.
Resource Defaults
There’s a new way to do resource defaults. I know this is a widely loathed syntax but I quite like it:
file {
  default:
    mode   => '0600',
    owner  => 'root',
    group  => 'root',
    ensure => file,

  '/etc/ssh_host_key':
    ;

  '/etc/ssh_host_dsa_key.pub':
    mode => '0644',
}
The specific mode on /etc/ssh_host_dsa_key.pub will override the defaults, pretty handy. And it addresses a problem with old-style defaults, which would leak across scopes and make a mess of things; this is confined to just these files.
Accessing resource parameter values
This is something people often ask for. It seems exciting, but I don't think it will be of much practical use because it's order dependent, just like defined().
notify{"hello": message => "world"}
$message = Notify["hello"]["message"] # would be 'world'
So this fetches another resource's parameter value.
You can also fetch class parameters this way, but that seems redundant. There are several ordering caveats, so test your code carefully.
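For instance, the evaluation-order dependence means referencing a resource before it has been added to the catalog fails. A contrived sketch:

$message = Notify["hello"]["message"] # fails: Notify["hello"] is not in the catalog yet
notify{"hello": message => "world"}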
Loops
This doesn’t really need comment, perhaps only OMFG FINALLY is needed.
["puppet", "facter"].each |$file| {
  file{"/usr/bin/${file}":
    ensure => "link",
    target => "/opt/puppetlabs/bin/${file}"
  }
}
More complex things like map, select and reduce exist too:
$domains = ["foo.com", "bar.com"]

$domain_definition = $domains.reduce({}) |$memo, $domain| {
  $memo + {$domain => {"relay" => "mx.${domain}"}}
}
This yields a new hash made up of all the parts:
{
  "foo.com" => {"relay" => "mx.foo.com"},
  "bar.com" => {"relay" => "mx.bar.com"}
}
See Iterating in Puppet for more details on this.
. syntax
If you're from Ruby this might be a bit more bearable; it seems you can use any function in either form interchangeably:
$x = join(["a", "b"], ",")
$y = ["a", "b"].join(",")
Both result in a,b
Default Ordering
By default it now does manifest ordering. This is a big deal – I've had to write no ordering code at all. None. Not a single require or ordering arrow. It just does things top down by default, while parameters like notify and specific requires still influence it. Such an amazingly massive time saver. Good times when things that were always obviously dumb ideas go away.
It’s clever enough to also do things in the order they are included. So if you had:
class myapp {
  include myapp::install
  include myapp::config
  include myapp::service
}
Ordering will magically be right. Containment is still an issue though.
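The containment issue being the usual one: an outside class that depends on Class["myapp"] does not automatically wait for the resources in the included classes, so the anchor pattern or the contain keyword is still needed. A sketch:

class myapp {
  contain myapp::install
  contain myapp::config
  contain myapp::service
}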
Facts hash
Ever since the first contributor summit I've been campaigning for $facts["foo"], and it's gone all round with people wanting to invent some new hash-like construct and worse, but finally we now have a facts hash enabled by default.
Unfortunately we are still stuck with $settings::vardir but hopefully some new hash will be created for that.
It’s a reserved word everywhere so you can safely just do $facts[“location”] and not even have to worry about $::facts, though you might still do that in the interest of consistency.
Facter 3
Facter 3 is really fast:
$ time facter
facter 0.08s user 0.03s system 44% cpu 0.248 total
This makes everything better. It also returns structured data, but that is still a bit awkward in Puppet:
$x = $facts["foo"]["bar"]["baz"]
There seems to be no elegant way to handle a missing 'foo' or 'bar' key; things just fail badly in ways you can't catch or recover from. On the CLI you can do facter foo.bar.baz, so we're already careful not to have "." in a key. We need some function to extract data from hashes like:
$x = $facts.fetch("foo.bar.baz", "default")
It’ll make it a lot easier to deal with.
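Until something like that exists, a rough approximation is possible with the native language functions covered later in this post. Note site::getpath is a hypothetical name of my own, not a built-in:

function site::getpath(
  Hash $data,
  String $path,
  $default = undef
) {
  # walk the hash one dot separated key at a time, giving up at any missing level
  $found = split($path, '[.]').reduce($data) |$memo, $key| {
    if $memo =~ Hash {
      $memo[$key]
    } else {
      undef
    }
  }

  if $found =~ Undef {
    $default
  } else {
    $found
  }
}

$x = site::getpath($facts, "foo.bar.baz", "default")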
Hiera 3
Hiera 3 is out and at first I thought it didn’t handle hashes well, but it does:
:hierarchy:
  - "%{facts.fqdn}"
  - "location_%{facts.location}"
  - "country_%{facts.country}"
  - common
That's how you'd fetch values out of hashes, and it's pretty great. Notice I didn't do ::facts; that's because facts is reserved, so there'll be no scope layering issues.
Much better parser
You can use functions almost everywhere:
$ puppet apply -e 'notify{hiera("rsyslog::client::server"): }'
Notice: loghost.example.net
There are countless small improvements over things the old parser did badly. It's now really nice to use; things just work the way I'd expect them to from other languages.
Even horrible stuff like this works:
$x = hiera_hash("something")["foo"]
Which previously needed an intermediate variable.
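That is, the old parser forced a two-step version of the same thing:

$something = hiera_hash("something")
$x = $something["foo"]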
puppet apply --test
A small thing, but --test in apply now works like it does in agent – colors, verbose output etc. Really handy.
Data in Modules
I did a PoC to enable Hiera in modules a few years ago and many people loved the idea. This has finally landed in recent Puppet 4 versions and it's pretty awesome. It lets you have a data directory and a hiera.yaml in your module, which goes some way towards replacing what is currently done with params.pp.
I wrote a blog post about it: Native Puppet 4 Data in Modules. An additional blog post that covers this is params.pp in Puppet 4 which shows how it ties together with some other new things.
Native create_resources
create_resources is a hack that exists because it was easier to hack up than fix Puppet. Puppet has now been fixed, so this is the new create_resources.
each($domains) |$name, $domain| {
  mail::domain{$name:
    * => $domain
  }
}
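For context, $domains here is a hash of parameter hashes, like the one the reduce example built earlier, and the splat assigns each inner hash as resource parameters. A matching define (hypothetical, as mail::domain is my own) would look something like:

$domains = {
  "foo.com" => {"relay" => "mx.foo.com"},
  "bar.com" => {"relay" => "mx.bar.com"}
}

define mail::domain(
  String $relay
) {
  # set up mail routing for this domain using $relay
  ...
}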
See Iterating in Puppet for extensive examples.
No more hiera_hash() and hiera_array()
There’s a new function called lookup() that’s pretty powerful. When combined with the new Data in Modules feature you can replace these functions AND have your automatic parameter lookups do merges.
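A quick sketch of a call: the arguments are the key, an expected type, a merge strategy and a default. Here ntp::servers is a made-up key:

$servers = lookup("ntp::servers", Array[String], "unique", [])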
See Puppet 4 data lookup strategies for an extensive look at these
Native language functions
You can now write namespaced functions using the Puppet DSL:
function site::fetch(
  Hash $data,
  String $key,
  $default
) {
  if $data[$key] {
    $data[$key]
  } else {
    $default
  }
}
And you’ll use this like any function really:
$h = {}
$item = site::fetch($h, "thing", "default") # $item is now 'default'
It also has a splat argument handler:
function site::thing(*$args) {
  $args
}

$list = site::thing("one", "two", "three") # $list becomes ["one", "two", "three"]
AIO Packaging
Of course by now almost everyone knows we're getting omnibus-style packaging. I am a big supporter of this direction; the new bundled ruby is fast and easy to get onto older machines.
The execution of this is unspeakably bad though. It's half baked and leaves much to be desired.
Here’s a snippet from the current concat module:
if defined('$is_pe') and str2bool("${::is_pe}") { # lint:ignore:only_variable_string
  if $::kernel == 'windows' {
    $command_path = "${::env_windows_installdir}/bin:${::path}"
  } else {
    $command_path = "/opt/puppetlabs/puppet/bin:/opt/puppet/bin:${::path}"
  }
} elsif $::kernel == 'windows' {
  $command_path = $::path
} else {
  $command_path = "/opt/puppetlabs/puppet/bin:${::path}"
}

exec{"...":
  path => $command_path
}
There are no words. Without this abomination it would try to use the system ruby to run the #!/usr/bin/env ruby script. Seriously, if something ships that causes this kind of code to be written by users, you've failed. Completely.
Things like the OS not being properly set up with symlinks into /usr/bin – I can kind of understand that to avoid conflicts with an existing Puppet, but the RPM conflicts with the puppet package anyway, so it's not that. It just makes the whole thing feel unpolished, as if it comes without batteries included.
The file system choices are completely arbitrary:
# puppet apply --configprint vardir
/opt/puppetlabs/puppet/cache
This is intuitive to exactly no-one who has ever used any unix or windows or any computer.
Again, I totally support the AIO direction, but the UX of this is so poor that, while I've been really positive about Puppet 4 up to now, I'd say this makes the entire thing Alpha quality. The team absolutely must go back to the drawing board and consider how this is done from the perspective of usability by people who have likely used Unix before.
Users have decades of experience to build on, and the system as a whole needs to be coherent and complement that – it should be a natural and comfortable fit. This and many other layout choices just do not make sense. Sure, the location is arbitrary; it makes no technical difference whether it's /opt/puppetlabs/puppet/cache or some other directory.
It DOES though make a massive cognitive difference to users who see the option vardir, think of their entire career's experience of what that means, and then cannot for the life of them find where these files go without having to invest effort in finding it, and then remember it as an odd one out. Even knowing things are in $prefix you still can't find this dir, because it's been arbitrarily renamed to cache, and instead of using well-known tools like find I now have to completely context switch.
Not only is this a senseless choice, but frankly it's insulting that this software seems to think it's so special that I have to remember its paths differently from the 100s of other programs out there. It's not; it's just terrible and makes it a nightmare to use. Sure, put the stuff in /opt/puppetlabs, but don't just go and make things up and deviate from what we've learned over years of supporting Puppet. It's an insult.
Your users have invested countless hours in learning the software, countless hours in supporting others, and in some cases paid for this knowledge. Arbitrarily changing vardir to mean cache trivialises that investment and puts an unneeded toll on those of us who support others in the community.
Conclusion
There's a whole lot more to show about Puppet 4 – I've only been at it for a few nights after work – but overall I am super impressed by the work done on Puppet core. The packaging lets the effort down, though, and I'd be wary of recommending anyone go to Puppet 4 as a result. It's a curiosity to investigate in your spare time while, hopefully, things improve on the packaging front to the level of a production-usable system.
by R.I. Pienaar | Mar 30, 2015 | Uncategorized
I recently blogged about my workflow improvements realised by using docker for some services. Like for everyone else, the full story about running containers in production is a bit of an unknown. I am running 7 or 8 things in containers at the moment, but I have a lot of outstanding questions.
I could go the route of a private PaaS where you push an image or Dockerfile into it and forget about it, hoping you never have to debug anything or dive deep into finding out why something is not performant, as those tend to be very much closed systems. Some, like deis, are just Docker underneath, but others, like the recently hyped lattice.cf, unpack the Docker container and transform it into something else entirely that is much harder to interact with from a debug perspective. As a bit of an old-school sysadmin, this fire-and-hope-for-the-best approach leaves me a bit cold: I do not want to lose the ability to carefully observe my running containers using traditional tools if I have to. It's great to strive for never having to do that – never touching a running app with anything but your monitoring SaaS, always being able to just scale out horizontally – but personally I feel I need to be a bit closer to the bits at times. Aim for that goal and you get a much better overall system, but until you've reached this nirvana-like state you're going to want to get at your running apps with strace if it comes to that.
So having ruled out just running one of the existing crop of private PaaS offerings locally, I started thinking about what a container really is. I consider them analogous to a package, so we first need to explore what packages are. In its simplest form a package is just a bunch of files packaged up. So what makes it better than a tarball?
- Metadata like name, version, build time, build host, dependencies, descriptions, licence, signature and urls
- Built in logic like pre/post install scripts but also companion scripts like init system scripts, monitoring logic etc
- An API to interact with this – the rpm or apt/deb commands – but, as in the case of Yum, also libraries for interacting with these
All of the above combines to bring the biggest and ultimate benefit of a package: a strong set of companion tools to build, host, deploy, validate, update and inspect those packages. You cannot have the main benefit of packages without mature implementations of the preceding points.
To really put it in perspective, the Puppet or Chef package resources only work because of the combination of the above 3 points. Without them they would fail, which is why the daily attempts by people on #puppet, for example, to reinvent packaging with an exec running wget and make end up failing and yield the predictable answer of packaging up your software instead.
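The anti-pattern in question looks something like this sketch (URL and paths made up): an exec that has none of the metadata, dependency handling or upgrade logic a package resource gets for free:

exec{"build-foo":
  command  => "wget http://example.net/foo-1.0.tar.gz && tar xzf foo-1.0.tar.gz && cd foo-1.0 && make install",
  cwd      => "/tmp",
  path     => ["/usr/bin", "/bin"],
  provider => shell,
  creates  => "/usr/local/bin/foo", # a crude stand-in for the state a package manager tracks properly
}

# the predictable answer: package the software and let the package resource do its job
package{"foo":
  ensure => "1.0",
}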
When I look at the current state of a docker container and the published approaches for building them, I am left a bit wanting when I compare them to a mature package manager wrt the 3 points above. This means that without improvement I am going to end up with an unsatisfactory set of tools and interactions with my running apps.
So to address this I started standardising my builds and creating a framework for building containers the way I like to, thinking about what information I could make available to create the tooling I think is needed. I do this using a base image that has a script called container in it that can introspect metadata about the image. Any image downstream from this base image can just add more metadata and hook into the life cycle my container management script provides. It's not OS dependent, so I wouldn't be forcing any group into an OS choice, and I can still gain a lot of the advantages Docker brings wrt making heterogeneous environments less painful. My build system embeds the metadata into any container it builds as JSON files.
Metadata
There is a lot going on in this space – Kubernetes has labels and Docker is getting metadata – but these are tools to enable metadata; it is still up to users to decide what to do with it.
The reason you want to be able to really interact with and introspect packages comes down to things like auditing them: where do you have outdated SSL versions and the like. Likewise, I want to know things about my containers and images:
- Where and when was it built and why
- What were its ancestor images
- How do I start, validate, monitor and update it
- What git repo is being built, what hash of that git repo was built
- What are all the tags this specific container is known as at time of build
- What’s the project name this belongs to
- The ability to attach arbitrary user-supplied rich metadata
All of that should be visible from the inside and outside of the container, and kept for every ancestor of the container. Given this I can create rich generic management tools: tools that need no configuration to start, update and validate the functionality of any container, or to monitor it and extract metrics, without any hard-coded logic.
Here’s an example:
% docker exec -ti rbldnsd container --metadata|json_reformat
{
  "validate_method": "/srv/support/bin/validate.sh",
  "start_method": "/srv/support/bin/start.sh",
  "update_method": "/srv/support/bin/update.sh",
  "validate": true,
  "build_cause": "TIMERTRIGGER",
  "build_tag": "jenkins-docker rbldnsd-55",
  "ci": true,
  "image_tag_names": [
    "hub.my.net/ripienaar/rbldnsd"
  ],
  "project": "rbldnsd",
  "build_time": "2015-03-30 06:02:10",
  "build_time_stamp": 1427691730,
  "image_name": "ripienaar/rbldnsd",
  "gitref": "e1b0a445744fec5e584919711cafd8f4cebdee0e"
}
Missing from this are the monitoring and metrics related bits, as those are still a work in progress, but you can see metadata here for a lot of the stuff I mentioned. Images I build embed this into the image, which means when I FROM one of my images I get a history that I can examine:
% docker exec -ti rbldnsd container --examine
Container first started at 2015-03-30 05:02:37 +0000 (1427691757)

Container management methods:
   Container supports START method using command /srv/support/bin/start.sh
   Container supports UPDATE method using command /srv/support/bin/update.sh
   Container supports VALIDATE method using command /srv/support/bin/validate.sh

Metadata for image centos_base
   Names:
      Project Name: centos_base
      Image Name: ripienaar/centos_base
      Image Tag Names: hub.my.net/ripienaar/centos_base
   Build Info:
      CI Run: true
      Git Hash: fcb5f3c664b293c7a196c9809a33714427804d40
      Build Cause: TIMERTRIGGER
      Build Time: 2015-03-24 03:25:01 (1427167501)
      Build Tag: jenkins-docker centos_base-20
   Actions:
      START: not set
      UPDATE: not set
      VALIDATE: not set

Metadata for image rbldnsd
   Names:
      Project Name: rbldnsd
      Image Name: ripienaar/rbldnsd
      Image Tag Names: hub.my.net/ripienaar/rbldnsd
   Build Info:
      CI Run: true
      Git Hash: e1b0a445744fec5e584919711cafd8f4cebdee0e
      Build Cause: TIMERTRIGGER
      Build Time: 2015-03-30 06:02:10 (1427691730)
      Build Tag: jenkins-docker rbldnsd-55
   Actions:
      START: /srv/support/bin/start.sh
      UPDATE: /srv/support/bin/update.sh
      VALIDATE: /srv/support/bin/validate.sh
This is the same information as above, but also showing the ancestor of this rbldnsd image – the centos_base image. I can see when they were built, why, from what repository hashes, and how I can interact with these containers. From here I can audit or manage their life cycle quite easily.
I'd like to add to this a bunch of run-time information, like when it was deployed, why, and to what node, and will leverage the docker metadata when that becomes available or hack something up with ENV variables.
Solving this problem has been key to getting to grips with the operational concerns I had with Docker and feeling I can get back to the level of maturity I had with packages.
Management
You can see from above that the metadata supports specifying START, UPDATE and VALIDATE actions. Future ones might be MONITOR and METRICS.
UPDATE requires some explaining. Of course the trend is toward immutable infrastructure, where every change is a rebuild, and this is a pretty good approach. But I host things like a DNS based RBL and these update all the time; I'd like to do so quicker and with less resource usage than a full rebuild and redeploy, without ending up in a place where a rebuild loses my changes.
So the typical pattern is to make the data directories for these images git checkouts, using deploy keys on my git server. The build process always takes the latest git, and the update process fetches the latest git and reloads the running config. This is a good middle ground between immutability and rapid change. I rebuild and redeploy all my containers every night, so this covers the few hours in between.
Here’s my DNS server:
% sudo docker exec bind container --update
>> Fetching latest git checkout
From https://git.devco.net/ripienaar/docker_bind
 * branch            master     -> FETCH_HEAD
Already up-to-date.
>> Validating configuration
>> Checking named.conf syntax in master mode
>> Checking named.conf syntax in slave mode
>> Checking zones..
>> Reloading name server
server reload successful
There were no updates, but you can see it would fetch the latest, validate that it passes inspection, and then reload the server if everything is ok. And here is the main part of the script implementing this action:
echo ">> Fetching latest git checkout"
git pull origin master

echo ">> Validating configuration"
container --validate

echo ">> Reloading name server"
rndc reload
This way I just need to orchestrate these standard container --update execs – webhooks do this in my case.
VALIDATE is interesting too. In this case validate uses the usual named-checkconf and named-checkzone commands to check the incoming config files, but my more recent containers use serverspec and infrataster to validate the full end-to-end functionality of a running container.
% sudo docker exec -ti rbldnsd container --validate
.............................

Finished in 6.86 seconds (files took 0.39762 seconds to load)
29 examples, 0 failures
My dev process revolves around this like TDD would: my build process runs these steps at the end of every build in a running instance of the container, and my deploy process runs them post deploy of anything it deploys. Operationally, if anything is not working right my first port of call is just this command; it often gets me right down to the part that went wrong – if I have good tests, that is, otherwise this is feedback into the dev cycle leading to improved tests. I mentioned I rebuild and redeploy the entire infrastructure daily – it's exactly the investment in these tests that means I can do so while getting a good night's sleep.
Monitoring will likewise be extended around standardised, introspectable commands, so that a single method can extract status and metric information out of any container built this way.
Outcome
I'm pretty happy with where this got me. I found it much easier to build tooling around containers given rich metadata and standardised interaction models. I kind of hoped this was what I would get from Docker itself, but it's either too early or what it provides is too low level – understandable, as from its perspective it would want to avoid being too prescriptive or supporting limited sets of data on limited operating systems. I think, though, that a team who wants to build and deploy a solid infrastructure on Docker needs to invest in something along these lines.
Thus my containers no longer contain just their files and dependencies; more and more, their operational life cycle is part of the container. Containers can be asked for their health, they can update themselves, and eventually they will emit detailed reusable metrics and statuses. The API to do all of this is standardised, and I can run this anywhere with the confidence gained from having these introspective abilities and metadata everywhere. Like the huge benefit I got from an improved workflow, I find this embedded operational life cycle equally large, and something that was hard to achieve in my old traditional CM based approach.
I think PaaS systems need to get a bit more of this kind of thing in their pipelines. I'd like to be able to ask my PaaS to just run my validate steps regularly or on demand, or to have standardised monitoring status and metrics output so that the likes of Datadog can deliver agents providing in-depth application monitoring without configuration, by just sitting in a container next to a set of these containers. Today the state of the art for PaaS health checks seems to be to just hit the exposed port, but real-life management of services is much more intricate than that. If they had that, I could adopt one of them and spare myself a lot of pain.
For now though this is what my systems will do and hopefully some of the ideas become generally accepted.
by R.I. Pienaar | Feb 24, 2015 | Code
I've moved a number of my more complex infrastructure components from being Puppet managed to being Docker managed. There are many reasons for this, the main one being that my Puppet code is ancient, and faced with a rewrite to make it Puppet 4 like or just rethinking things, I'm leaning towards rethinking. I don't think CM is solving the right problem for certain aspects of my infrastructure, and new approaches can bring more value for my use case.
There are a lot of posts around talking about Docker that concentrate on the image building side of it, or just on the running of a container – which I find quite uninteresting and in fact pretty terrible. The real benefit for me comes from the workflow, the API, the events out of the daemon and the container stats. People look at the image and container aspects in isolation and go on about how this is not new technology, but that's missing the point.
Mainly a workflow problem
I'll look at an example moving rbldnsd from Puppet to Docker managed and what I gain from that. Along the way I'll also throw in some examples of a more complex migration I did for my bind servers. In case you don't know, rbldnsd is a daemon that maintains DNS based RBLs using config files that look something like this:
$DATASET dnset senderhost
.digitalmarketeer.com :127.0.0.2:Connection rejected after user complaints.
You can then query it using the usual ways your MTA supports and decide policy based on that.
The life cycle of this service is typical of the ones I am targeting:
- A custom RPM had to be built and maintained and served from yet another piece of infrastructure.
- The basic package, config, service triplet. So vanilla it’s almost not worth looking at the code, it looks like all other package, config, service code.
- Requires ongoing data management – I add/remove hosts from the blacklists constantly. But this overlaps with the config part above.
- Requires the ability to test DNS queries work in development before even committing the change
- Requires rapid updating of configuration data
The last 3 points here deserve some explanation. When I am editing these configuration files I want to be able to test them right there in my shell, without even committing them to git. This means starting up an rbldnsd instance and querying it with dig. This is pretty annoying to do with the puppet workflow, which I won't go into here as it's a huge subject on its own; suffice to say it doesn't work for me and ends up not being production like at all.
When I am pushing these config files to the running service, there's a daemon that loads them into its running memory. I need to be pretty sure the daemon I am testing on is identical to what's in production now – ideally bit for bit identical. Again this is pretty hard, as many/most dev environments tend to be a few steps ahead of production. I need a way to say: give me the bits running production, throw this config at them, and do an end-to-end test with no hassle, in 5 seconds.
I need a way to orchestrate that config data update to happen when I need it to happen – not when Puppet next runs – and ideally it has to be quick, not at the pace Puppet manages my 600 resources. Services should let me introspect them to figure out how to update their data, and a generic updater should be able to update any of my services that match this flow.
I've never really solved the last 3 points with my Puppet workflows for anything I work on; it's a fiendishly complex problem to solve correctly. People do it with Vagrant instances or ever more complex environments, or they make their change, commit it, make sure there is test coverage, and only get feedback later when something like Beaker ran. This is way too slow for me in this scenario – I just want to block 1 annoying host. Vagrant especially does not work for me, as I refuse to run things on my desktop or laptop; I develop on VMs that are remote, so Vagrant isn't an option. Additionally, Vagrant environments become so complex – basically a whole new environment, yet built in annoyingly different ways, so that keeping parity with production can be a challenge, or just prohibitively slow if you're building them out with Puppet. So you end up, again, not testing in an environment that's remotely production like.
These are pretty major things that I've never been able to solve to my liking with Puppet. I've moved a bunch of my web sites first, then bind and now rbldnsd to Docker, and I think I've managed to come up with a workflow and toolchain that solves this for me.
Desired outcome
So maybe to demonstrate what I am after I should show what I want the outcome to look like. Here's an rbldnsd dev session. I want to block *.mailingliststart.com – specifically, I saw sh8.mailingliststart.com in my logs – and I want to test that the hosts are going to be blocked correctly before pushing to prod or even committing to git. It's so embarrassing to make fix commits for obvious dumb things :)
So I add to the zones/bl file:
.mailingliststart.com :127.0.0.2:Excessive spam from this host
$ vi zones/bl
$ rake test:host
Host name to test: sh8.mailingliststart.com
Testing sh8.mailingliststart.com

Starting the rbldnsd container...

>>> Testing black list
docker exec rbldnsd dig -p 5301 +noall +answer any sh8.mailingliststart.com.senderhost.bl.rbl @localhost
sh8.mailingliststart.com.senderhost.bl.rbl. 2100 IN A 127.0.0.2
sh8.mailingliststart.com.senderhost.bl.rbl. 2100 IN TXT "Excessive spam from this host"

>>> Testing white list
.
.
.

Removing the rbldnsd container...

$ git commit zones -m 'block mailingliststart.com'
$ git push origin master
Here I added the bits to the config file and want to be sure the hostname I saw in my logs/headers will actually be blocked:
- It prepares the latest container by default and mounts my working directory into the container with -v ${PWD}:/service.
- The container starts up just like it would in production, using the same bits that run production – but reads the new uncommitted config
- It uses dig to query the running rbldnsd and runs any built-in validation steps the service has (this container has none yet)
- Cleans up everything
The whole thing takes about 4 seconds on a virtual machine running in virtualbox on a circa 2009 Mac. I saw the host was blacklisted and not somehow also whitelisted – looks good, commit and push.
Once pushed, a webhook triggers my update orchestration and the running containers get just the new config files. The whole edit, test and deploy process takes less than a minute. The data, though, is in git, which means tonight when my containers get rebuilt from fresh they will get this change baked in and rolled out as new instances.
There’s one more pretty mind blowing early feedback story I wanted to add here. My bind zones used to be made with puppet defines:
bind::zone{"foo.com": owner => "Bob", masterip => "1.2.3.4", type => $server_type}
I had no idea what this actually did by reading that line of code. I could guess, sure, but you only know with certainty when you run Puppet in production, since no matter what the hype says, you'll only see the diff against the actual production file when it hits the production box using Puppet. Not OK. You also learn nothing this way. It's always bothered me that Puppet ends up being a crutch, like a calculator: with all these abstractions, a junior using this define might never even know what it does or learn how bind works. Desirable in some cases, not for me.
In my Docker bind container I have a YAML file:
zones:
  Bob:
    options:
      masterip: 1.2.3.4
    domains:
      - foo.com
It's the same data I had in my manifest, just structured a bit differently. Same basic problem though: I have no idea what this does by looking at it. In the docker world, however, you need to bake this YAML into bind config, and this has to be done during development so that a docker build can get to the final result. So I add a new domain, bar.com:
$ vi zones.yaml
$ rake construct:files
Reading specification file buildsettings.yaml
Reading scope file zones.yaml
Rendering conf/named_slave_zones with mode 644 using template templates/slave_zones.erb
Rendering conf/named_master_zones with mode 644 using template templates/master_zones.erb
 conf/named_master_zones | 10 ++++++++++
 conf/named_slave_zones  |  9 +++++++++
 2 files changed, 19 insertions(+)

$ git diff
+// Bob
+zone "bar.com" {
+  type slave;
+  file "/srv/named/zones/slave/bar.com";
+  masters {
+    1.2.3.4;
+  };
+};
The rake construct:files task just runs a bunch of ERB templates over the zones hash – basically identical to the templates I had in Puppet, with just a few variable name changes and slightly different looping, no more or less complex.
This is the actual change that will hit production. No ifs or buts – that's what will change in prod. When I rake test here, without committing this, this actual production change is tested against the actual bits in the named binary that runs production today.
$ time rake test
docker run -ti --rm -v /home/rip/work/docker_bind:/srv/named -e TEST=1 ripienaar/bind
>> Checking named.conf syntax in master mode
>> Checking named.conf syntax in slave mode
>> Checking zones..
rake test  0.18s user 0.33s system 7% cpu 3.858 total
Again my work dir is mounted into the container version currently running in production, and my uncommitted change is tested using the bit-for-bit identical version of bind currently in prod. This is a massive confidence boost, and the feedback cycle is nearly instant.
Implementation Details
I won't go into all the Dockerfile details; it's just normal stuff. The image building and running of containers is not exciting. The layout of the services is something like this:
/service/bin/start.sh
/service/bin/update.sh
/service/bin/validate.sh
/service/zones/{bl,gl,wl}
/opt/rbldnsd-0.997a/rbldnsd
What is exciting is that I can introspect a running container. The Dockerfile has lines like this:
ENV UPDATE_METHOD /service/bin/update.sh
ENV VALIDATE_METHOD /service/bin/validate.sh
And an external tool can find out how this container likes to be updated or validated – and later monitored:
$ docker inspect rbldnsd
.
.
    "Env": [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "UPDATE_METHOD=/service/bin/update.sh",
        "VALIDATE_METHOD=/service/bin/validate.sh",
        "GIT_REF=fa9dd19d93e6d6cb7d5b2ebdc57f99cd2906df6f"
    ],
My update webhook basically just does this:
mco rpc docker runtime_update container=rbldnsd -S container("rbldnsd").present=1 --batch 1
So I use mcollective to target an update operation on all machines that run the rbldnsd container – 1 at a time. The mcollective agent uses docker inspect to introspect the container; once it knows how the container wants to be updated, it calls that command using docker exec.
Outcome Summary
For me this turned out to be a huge win. I had to do a lot of work on the image building side of things, the orchestration, deployment etc – things I had to do with Puppet too anyway. But this basically ticks all the boxes I had at the beginning of this post, and quite a few more:
- A reasonable facsimile of the package, config, service triplet that yields idempotent builds
- A comfortable way to develop and test my changes locally with instant feedback, like I would with unit tests for normal code, but as integration tests of infrastructure components using the same bits as in production.
- Much better visibility over what’s actually going to change, especially in complex cases where config files are built using templates
- An approach where my services are standalone and each has to think about its run, update and validation cadences, with those being introspectable and callable from the outside.
- My services are standalone artefacts and versioned as a whole. Not spread around the place on machines, in package repos, in data and in CM code that attempts to tie it all together. It’s one thing, from one git repo, stored in one place with a version.
- With validation built into the container and the container being a runnable artefact, I get to do this during CI before rolling anything out, just like I do on my CLI. And always with the actual bits in use, or proposed to be used, in production.
- Overall I have a lot more confidence in my production changes now than I had with the Puppet workflow.
- Changes can be rolled out to running containers very rapidly – less than 10 seconds and not at the slow Puppet run pace.
- My dev environment is hugely simplified yet much more flexible as I can run current, past and future versions of anything. With less complexity.
- A very nice middle ground between immutable servers and the need for updating content. Containers are still rebuilt and redeployed every night on schedule, and they are still disposable, but not at the cost of day-to-day updates.
I've built this process into a number of containers now – some, like this one, are services, and some are web ones like my wiki, where I edit markdown files and they get rolled out to the running containers immediately on push.
I still have some way to go with monitoring, and these services are standalone rather than complex multi-component ones, but I don't foresee huge issues with those.
I couldn't have achieved all these outcomes without a rapid way to stand up and destroy production-like environments that are isolated from the machine I am developing on – especially when the final service is some loosely coupled combination of parts from many different sources. I'd love to talk to people who think they have something approaching this without using Docker or similar, and be proven wrong, but for now this is a huge step forward for me.
So Puppet and CM tools are irrelevant now?
Getting back to the Puppet part of this post: I could come up with some way to mix Puppet in here too, but there are other interesting aspects of the Docker life cycle, which I might blog about later, that I think make it a bit of a square peg in a round hole to combine these two tools. In particular, I think people today who want to use Puppet to build or configure containers are a bit misguided and missing out. I hope they keep working on that and get somewhere interesting – because omfg Dockerfiles – but I don't think the current attempts are interesting.
It kind of gets back to the old observation that Puppet is not a good choice to manage deployments of Applications but is ok for Infrastructure. I am reconsidering what is infrastructure and what are applications.
So I chose to rethink things from the ground up: how would a nameserver service look if I considered it an Application and not Infrastructure, and what should an Application development life cycle around that service look like?
This is not a new realisation for me. I've often wished, and expressed the desire, that Puppet Labs would focus a lot more on the workflow and the development cycle, providing tools and hooks for that and thinking about how to make it better, and I don't think that's really happened. So the conclusion for me was that for this Application or Service development and deployment life cycle, Puppet was the wrong tool. I also realise I don't even remotely resemble their paying target audience.
I am also not saying Puppet or other CM tools are irrelevant because of Docker – that's just madness. I think there's a place where the 2 worlds meet, and I am starting to notice that a lot of what I thought was Infrastructure is actually Applications, and these have different development and deployment needs which CM, and Puppet especially, does not address.
Soon there will not be a single mention of DNS related infrastructure in my Puppet code. The container and related files are about equal in complexity and lines of code to what was in Puppet, the final outcome is about the same, and it's as configurable across my environments. The workflow, though, is massively improved, because now I have the advantages that Application developers have had, for this piece of Infrastructure. Finally a much larger part of the Infrastructure As Code puzzle is falling together, and it actually feels like I am working on code, with the same feedback cycles and single verifiable artefact outcomes. And that's pretty huge. Infrastructure is still CM managed – I just hope to have a radically reduced Infrastructure footprint.
The big takeaway here isn't that Docker is some magical technological bullet killing off vast parts of the existing landscape or destroying a subset of tools like CM completely. It brings workflow and UX improvements that are pretty unique and well worth exploring – and this is exactly a part the CM folk have basically not focused on. The single biggest win is probably the single-artefact aspect, as it enables everything I mentioned here.
It also brings a lot of other things on the daemon side – the API, the events, the stats etc – that I didn't talk about here, and those are very big deals too wrt the future work they enable. But that's for future posts.
Technically I have a lot of bad things to say about almost every aspect of Docker, but those are outweighed by this rapid feedback and the increased overall confidence in making change at the pace I would like to.