by R.I. Pienaar | Feb 3, 2016 | Code
I recently wrote about the new Data in Modules support in Puppet 4. There's another new feature that goes hand in hand with it to finally rid us of functions like hiera_hash().
Up to now we’ve had to do something ugly like this to handle merged class parameters:
class users($local = hiera_hash("users::local", {})) {
  ...
}
This is functional but quite ugly and ties your module to having Hiera. These days that's a reasonably safe assumption, but with the ability to specify different environment data sources it will not always be the case. For example there's a new kid on the block called Jerakia that lives in this world, so having Hiera-specific calls in modules is a limiting strategy.
A much safer abstraction is to rely on the automatic parameter lookup feature – but it had no way of knowing that a given item should be a hash merge, so the functions were used as above.
Worse, things like merge strategies were set globally; a module could not say one key should be deep merged and others shallow merged, and if a module required a specific behaviour it had no control over it.
A solution for this problem landed in recent Puppet 4 releases via a special merged hash called lookup_options. This is documented quite lightly in the official docs so I thought I'd put up an example here.
lookup() function
To understand how this works you first have to understand the lookup() function; it's documented here. This is basically the replacement for the various hiera() functions and it has a matching puppet lookup CLI tool.
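The CLI is handy for quickly testing lookups from a shell. A short sketch – these flags are from the Puppet 4 lookup CLI and may vary slightly between versions:

# do a deep merge lookup and explain where each value came from
puppet lookup users::local --merge deep --explain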
If you wanted to do a hiera_hash() lookup with the old deeper hash merge behaviour, you'd do something like:
$local = lookup("users::local", Hash, {"strategy" => "deep", "merge_hash_arrays" => true})
This merges just this one key rather than, say, setting the merge strategy to deeper globally in Hiera, and it's something the module author can control. The Hash above describes the data type the result should match, and it supports all the various complex composite type definitions, so you can describe the desired result data in great detail – almost like a schema.
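For instance – a sketch, the Struct layout here is invented purely for illustration – you could require that every value in the hash is itself a hash with specific typed keys:

$local = lookup("users::local",
  Hash[String, Struct[{'uid' => Integer, 'shell' => Optional[String]}]],
  {"strategy" => "deep", "merge_hash_arrays" => true})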
There is much more to the lookup function and its CLI; they're both pretty awesome and you can now see where data comes from and so on. I guess there's a follow up blog post about that coming.
lookup_options hiera key
We saw above how to instruct the lookup() function to do a hiera_hash(), but wouldn't it be great if we could somehow tell Puppet that a specific key should always be merged in this way? That way a simple lookup("users::local") would do the merge and, crucially, so would the automatic parameter lookups – even across backends and data providers.
We just want:
class users(Hash $local = {}) {
  ...
}
For this to make sense the users module must be able to indicate this in the data layer, and since we now have data in modules there's an obvious place to put it.
If you set up the users module here to use the hiera data service for data in modules as per my previous blog post you can now specify the merge strategy in your data:
# users/data/common.yaml
lookup_options:
  users::local:
    merge:
      strategy: deep
      merge_hash_arrays: true
Note how this matches exactly the following lookup():
$local = lookup("users::local", Hash, {"strategy" => "deep", "merge_hash_arrays" => true})
The data type validation is done on the class parameters – where it also validates explicitly specified data – while the strategy for processing the data lives at the module data level.
The way this works is that Puppet does a lookup_options lookup across the data sources and merges the results together – so you could set this at site level as well – but there is a check to ensure a module can only set keys for itself, so it cannot change the behaviour of other modules.
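To illustrate the site-level case – a sketch, with an illustrative site data path – the same options could live in your site data:

# site data, e.g. environments/production/data/common.yaml (path is illustrative)
lookup_options:
  users::local:
    merge:
      strategy: deep
      merge_hash_arrays: true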
At this point a simple lookup("users::local") will do the merge and therefore so will this code:
class users(Hash $local = {}) {
  ...
}
No more hiera_hash() here. The old hiera() function is not aware of this – it's a lookup() feature – but with this in place we'll hopefully never see hiera*() functions being used in Puppet 4 modules.
This is a huge win and really shows what can be done with the Data in Modules features – something that's been impossible before. It brings the automatic parameter lookup feature a huge way forward and is, for me, one of the most compelling features of Puppet 4.
I am not sure who proposed this behaviour, the history is a bit muddled but if someone can tweet me links to mailing list threads or something I’ll link them here for those who want to discover the background and reasoning that went into it. UPDATE: Henrik informs me that Rob Nelson was the driving force on this – it’s something they wanted to do for a while but really without Rob sticking with it and working with the devs it would not have been done.
Wishlist
The lookup function and the options are a great move forward, however I find the UX of the various lookup options and merge strategies quite bad. It's really hard to go from reading the documentation to knowing what a certain option will do with my data – in fact I still have no idea what some of these do. The only way to discover it seems to be spending time playing with them, which I haven't had; it would be great for new users to get some more clarity there.
Some doc updates that provide a translation from old Hiera terms to new strategies would be great and maybe some examples of what these actually do.
by R.I. Pienaar | Jan 8, 2016 | Code
Back in August 2012 I requested an enhancement to the general data landscape of Puppet and a natural progression on the design of Hiera to enable it to be used in modules that are shared outside of your own environments. I called this Data in Modules. There was lots of community interest in this but not much movement, eventually I made a working POC that I released in December 2013.
The basic idea around the feature is that we want to be able to use Hiera to model internal data found in modules as well as site specific data, and that these two sets of data coexist and complement each other. Full details of this can be found in my post titled Better Puppet Modules Using Hiera Data and some more background can be found in The problem with params.pp. These posts are a bit old now and some things have moved on but they're good background reading.
It's taken a while, but as part of the Puppet 4 rework effort the data ingesting mechanisms have also been rewritten and finally, in Puppet 4.3.0, native data in modules has arrived. The original Jira for this is 4474. It's really pretty close to what I had in mind in my proposals and my POC and I am really happy with it. Along the way a new function called lookup() has been introduced to replace the old collection of hiera(), hiera_array() and hiera_hash().
The official docs for this feature can be found at the Puppet Labs Docs site. Here I’ll more or less just take my previous NTP example and show how you could use the new Data in Modules to simplify it as per the above mentioned posts.
This is the very basic Puppet class we’ll be working with here:
class ntp (
  String $config,
  String $keys_file
) {
  ...
}
In the past these variables would have needed to interact with the params.pp file, like $config = $ntp::params::config, but now it's just a simple class. At this point it won't yet use any data in the module; to do that you have to activate it in the metadata.json:
# ntp/metadata.json
{
  ...
  "data_provider": "hiera"
}
At this point Puppet knows you want to use the hiera data in the module. But key to the feature – and really the whole reason it exists – is that a module needs to be able to specify its own hierarchy. Imagine you want to set $keys_file here: you'll have to be sure the hierarchy in question includes the OS family and you must have control over that data. In the past, with the hierarchy controlled completely by the site hiera.yaml, this was not possible at all, and the outcome was that if you wanted to share a module outside of your environment you had to go the params.pp route as that was the only portable solution.
So now your modules can have their own hiera.yaml. It's slightly different from the past but should be familiar to past hiera users; it goes in your module, so this would be ntp/hiera.yaml:
---
version: 4
datadir: data
hierarchy:
  - name: "OS family"
    backend: yaml
    path: "os/%{facts.os.family}"

  - name: "common"
    backend: yaml
This is the new format for the hiera configuration; it's more flexible, and a future version of hiera will have some changed semantics that improve quite nicely on the original design I came up with, so you have to use the new format here.
Here you can see the module has its own OS family tier as well as a common tier. Let's see the ntp/data/common.yaml:
---
ntp::config: "/etc/ntp.conf"
ntp::keys_file: "/etc/ntp.keys"
These are sane defaults to use for any operating system without specific support.
Below are examples for AIX and Debian:
# data/os/AIX.yaml
---
ntp::config: "/etc/ntpd.conf"
# data/os/Debian.yaml
---
ntp::keys_file: "/etc/ntp/keys"
At this point the need for params.pp is gone – at least in this simplistic example – and this data cohabits really nicely with your environment specific or site specific data. If you specified any of these data items in your site Hiera data, your site data will override the module's. The advantages of this might not be immediately obvious; I have a very long list of advantages over params.pp in my Better Puppet Modules Using Hiera Data post, be sure to read that for background.
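To make the precedence concrete – a sketch, with an illustrative site data path – this single site-level entry would beat the module's common.yaml default, while the module still supplies $keys_file:

# site hieradata/common.yaml (path is illustrative)
ntp::config: "/etc/ntp/ntp.conf"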
There's an alternative approach where you write a Puppet function that returns a hash of data, and the data system will fetch keys from there. This is really powerful and might end up being an interesting solution to something along the lines of a module specific custom Hiera backend – but a lighter weight version of that. I might write that up later, this post is already a bit long.
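For the curious, a minimal sketch of that approach – assuming the function-based data provider as it shipped in Puppet 4.3, where you set "data_provider": "function" in metadata.json and supply a function named after the module that returns a hash:

# ntp/functions/data.pp
function ntp::data() {
  {
    "ntp::config"    => "/etc/ntp.conf",
    "ntp::keys_file" => "/etc/ntp.keys",
  }
}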
The remaining problem is data that needs to be merged, as traditionally Hiera and Puppet have no idea you want that to happen when you do a basic lookup – hence the annoying hiera_hash() functions etc. There's a solution for this and I'll post a blog post about it next week once the next Puppet 4 release is out, as it fixes a bug I found that makes it unusable.
This feature is a great addition to Puppet and I am really glad to finally see it land. My hacky data in modules code was used quite extensively, with 72 000 downloads from the forge, but I was never really happy with it and was desperate to see this land natively. This is a big step forward and I hope it sees wide adoption in the community.
A note about the old ripienaar-module_data module
As seen above the new built in feature is great and a very close match to what I had envisioned when creating the proof of concept module.
It would not be a good idea to support both these methods on Puppet 4, and it turns out to be quite difficult anyway because both use the hiera.yaml file in the module but with small differences in format. So the transition period will no doubt be a bit painful, especially for those attempting to use this while supporting both Puppet 3 and 4 users.
Further the old module actually broke the Puppet 4 feature for a while in a way that was really difficult to debug. Puppet Labs kindly reached out and notified me of this and helped me fix it in MODULES-3102. So there is now a new release of the old module that works again on Puppet 4 BUT it warns very loudly that this is a bad idea.
The old module is now deprecated and unsupported. You should stop using it – and imho stop using Puppet 3 – but whatever you do, stop using it on Puppet 4. I wish the metadata.json supported a Puppet version requirement so I could force this, but alas it doesn't, so I can't.
I will after a few months make a release that will raise an error on Puppet 4 and refuse to work there. You should move forward and adopt the excellent native implementation of this feature.
by R.I. Pienaar | Dec 16, 2015 | Code
Iteration in Puppet has been a long standing pain point; Puppet 4 addresses this by adding blocks, loops, etc. Here I capture the various approaches to working with some complex data in Puppet before and after Puppet 4.
To demonstrate this I'll take some data from a previous blog post and see how to deal with it. Here's the data that will be in $domains in the examples below:
{
  "x.net": {
    "nexthop": "70.x.x.x",
    "spamdestination": "rip@devco.net",
    "spamthreshold": 1500,
    "enable_antispam": 1
  },
  "x.co.uk": {
    "nexthop": "70.x.x.x",
    "spamdestination": "rip@devco.net",
    "spamthreshold": 1500,
    "enable_antispam": 1
  }
}
First we're going to need a defined type that can create an individual domain. We'll call it mail::domain; its actual code isn't really important here, but a purely illustrative sketch follows.
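This is not the real define, just a hypothetical shape that matches the data above:

define mail::domain (
  $nexthop,
  $spamdestination,
  $spamthreshold   = 1500,
  $enable_antispam = 1,
) {
  # manage the mail routing configuration for one domain here,
  # e.g. a file or concat fragment built from these parameters
}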
Puppet 3 + stdlib
The first approach I'll show is your basic Puppet 3 approach. The idea here is to get a list of domains and use the array iteration Puppet has always had on the resource name.
The trick here is to get the domain names using the keys() function and then pass all the data into every instance of the define – each instance fetches its own data from the data passed in.
$domain_names = keys($domains)

mail::domains{$domain_names:
  domains => $domains
}

define mail::domains($domains) {
  $domain = $domains[$name]

  mail::domain{$name:
    nexthop => $domain["nexthop"]
    .
    .
  }
}
Puppet 3 + create_resources
A hacky riff on eval() was added during Puppet 3 to make it a bit easier to deal with data from Hiera or similar. It takes some data in a standard format and creates instances of a defined type:
create_resources("mail::domain", $domains, {"spamthreshold" => 1500, "enable_antispam" => 1})
This replaces all the code above plus adds some default handling for the case where the data is not uniform. Some people love it, some hate it; I think it's a bit too magical so I prefer to avoid it.
Puppet 4 – each loop
This is the approach you'd probably want to use in Puppet 4; it uses a simple each loop over the data:
$domains.each |$name, $domain| {
  mail::domain{$name:
    nexthop => $domain["nexthop"]
    .
    .
  }
}
It's quite readable and obvious what's happening here. It's more typing than the create_resources example, but I think this is the preferred way due to its clarity.
Below this we get into the more academic solutions to the problem, mainly showing off some Puppet 4 features.
Puppet 4 – wildcard shortcut
If listing every key as above is tedious, and if you know your hashes map 1:1 to the defined type parameters, you can short circuit things a bit. This is quite close to the create_resources convenience:
each($domains) |$name, $domain| {
  mail::domain{$name:
    * => $domain
  }
}
The splat operator takes all the data in the hash and maps it right onto the parameters of the defined type – quite handy.
Puppet 4 – wildcard and defaults
Your data might not all be complete, so you'd want to merge in some defaults. This is something create_resources also supports, so here is how you'd do it without create_resources:
$defaults = {
  "spamthreshold" => 1500,
  "enable_antispam" => 1
}

$domains.each |$name, $domain| {
  mail::domain{$name:
    * => $defaults + $domain # + merges the hashes
  }
}
Puppet 4 – wildcard and resource defaults
An alternative to the above that’s a bit more verbose but might be more readable can be seen below:
$defaults = {
  "spamthreshold" => 1500,
  "enable_antispam" => 1
}

$domains.each |$name, $domain| {
  mail::domain{
    default:
      * => $defaults;
    $name:
      * => $domain
  }
}
Puppet 4 – Native DSL create_resources()
Puppet 4 supports functions written in the native DSL, which means you can take the above, generalize it a bit and end up with a reimplementation of create_resources. Not sure I'd recommend this, but it does show some related techniques:
function my::create_resources (
  String $type,
  Hash $instances,
  Hash $defaults = {}
) {
  $instances.each |$r_name, $r_properties| {
    Resource[$type] {$r_name:
      * => $defaults + $r_properties
    }
  }
}
The magic here is the Resource[$type] that lets you reference a type programmatically. It also works for classes.
So this is, as close as I can tell, an equivalent to create_resources.
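Calling it then looks just like the built-in; for example, with the $domains data and defaults from earlier:

my::create_resources("mail::domain", $domains, {"spamthreshold" => 1500, "enable_antispam" => 1})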
Conclusion
That’s about it, there are many more iteration tricks in Puppet 4 but this shows you how to achieve what you did with create_resources in the past and a couple of possible approaches to solving that problem.
Not sure which I’d recommend, but I suspect the choice comes down to personal style and situation.
by R.I. Pienaar | Sep 24, 2015 | Code
I find myself currently writing a lot of orchestration code that manages hardware. This is very difficult because I like doing little test.rb scripts or testing things out in irb or pry to see if APIs are comfortable to use.
The problem with hardware is that in order to properly populate my objects I need to query things like the iDRACs or gather inventories from all my switches to figure out where a piece of hardware is, and this takes a lot of time and requires constant access to my entire lab.
Of course my code has unit tests, so all the objects that represent servers and switches etc are already designed to be somewhat comfortable to load stub data into and easy to mock. So I ended up using rspec as my test.rb environment of choice.
I figured there has to be a way to use mocha in a non-rspec environment, and it turns out there is and it's quite easy.
The magic here is line 1 and line 5: including Mocha::API extends Object and Class with all the stubbing and mocking methods. I'd avoid using expectations and instead use stubs in this scenario.
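The embedded snippet did not survive in this copy of the post; a minimal reconstruction along the lines described – the Service model and fixture path are stand-ins, not the original code – would be:

require 'mocha/api'   # line 1: load mocha without any test framework
require 'pry'
require 'json'

include Mocha::API    # line 5: mixes the stubbing and mocking methods in everywhere

# load a previously captured hardware snapshot into the models (names are stand-ins)
service = Service.new(JSON.parse(File.read("fixtures/service.json")))
binding.pry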
At this point I’d be dropped into a pry shell loaded up with the service fixture in my working directory where I can quickly interact with my faked up hardware environment. I have many such hardware captures for each combination of hardware I support – and the same data is used in my unit tests too.
Below I iterate over all the servers, find the MAC addresses of the primary interfaces in each partition, and then find all the switches they are connected to. Behind the scenes in real life this would walk all my switches looking for the port each MAC is connected to and so forth – quite a time consuming operation that would require me to dedicate this lab hardware to me. Now I can just snapshot the hardware and load up my models later, and it's really quick.
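The iteration snippet is also missing here; roughly, with entirely hypothetical model methods, it did something like:

# hypothetical model APIs standing in for the real hardware objects
service.servers.each do |server|
  server.partitions.each do |partition|
    mac = partition.primary_interface.mac
    switch, port = service.locate_mac(mac) # in real life this walks every switch
    puts "%s: %s -> %s port %s" % [server.hostname, mac, switch.name, port]
  end
end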
I found this incredibly handy and will be using it pretty much all the time now, so thought it worth sharing.
by R.I. Pienaar | Aug 13, 2015 | Code
Webhooks are great and so many services now support them, but I found actually doing anything with them a pain: there are no standards for what goes in them, and any 3rd party service you wish to integrate with has to support the particular hooks you are producing.
For instance I want to use SignalFX for my metrics and events but they have very few integrations. A translator could take an incoming hook and turn it into a SignalFX event and pass it onward.
For a long time I've wanted to build a translator but never got around to doing it, because I did not feel like self hosting it and writing a whole bunch of supporting infrastructure. With the release of AWS API Gateway this has become quite easy and really convenient, as there are no infrastructure or instances to manage.
I'll show a bit of a walk through of how I built a translator that sends events to SignalFX. Note I do not do any kind of queueing or retrying on the gateway at present, so it's lossy and best effort.
AWS Lambda runs stateless functions on demand. At launch it only supported ingesting its own events, but the recently launched API Gateway lets you front it with a REST API of your own design, which makes this a lot easier.
For the rest of this post I assume you’re over the basic hurdles of signing up for AWS and are already familiar with the basics, so some stuff will be skipped but it’s not really that complex to get going.
The Code
To get going you need some JS code to handle the translation. Here's a naive method to convert a GitHub push notification into a SignalFX event:
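The original code block is missing from this copy of the post (the real thing is in the GitHub repo mentioned below); a reconstruction in the spirit of the description – the ingest endpoint, header name and config module are assumptions – might look like:

var https = require('https');
var config = require('./config'); // hypothetical: holds the SignalFX API token

// turn a GitHub push notification into a SignalFX event with dimensions
function gitHubPushToEvent(hook) {
  return {
    category: 'USER_DEFINED',
    eventType: 'github push',
    dimensions: {
      repository: hook.repository.name,
      branch: hook.ref.replace('refs/heads/', ''),
      pusher: hook.pusher.name
    }
  };
}

// POST the event to SignalFX, including the authentication header
function sendToSignalFX(event, callback) {
  var payload = JSON.stringify([event]);

  var req = https.request({
    host: 'ingest.signalfx.com',      // assumed endpoint
    path: '/v2/event',
    method: 'POST',
    headers: {
      'X-SF-TOKEN': config.sfx_token, // hypothetical config key
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(payload)
    }
  }, function(res) {
    callback(null, res.statusCode);
  });

  req.on('error', callback);
  req.write(payload);
  req.end();
}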
This is the meat of the processing, and it includes a bit of code to create a request using the https module, including the SignalFX authentication header.
Note this adds dimensions to the event being sent; you can think of them as a kind of key=val tag for the event. In the SignalFX UI I can then select events by these dimensions.
Any other added dimension can be used too. Events show up as little diamonds on graphs, so if I am graphing a service using these dimensions I can pick out events that relate to the branches and repositories that influence the data.
This is called as below:
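The handler snippet is missing here too; a hedged reconstruction matching the entry point named below, using the Node 0.10-era Lambda context API and the helper functions sketched above:

// only the entry point name comes from the post; the body is an assumption
exports.handleGitHubPushNotifications = function(hook, context) {
  sendToSignalFX(gitHubPushToEvent(hook), function(err, status) {
    if (err) { return context.fail(err); }
    context.succeed('event sent: HTTP ' + status);
  });
};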
There's some stuff not shown here for brevity; it's all in GitHub. The entry point here is handleGitHubPushNotifications – this is the Lambda function that will be run. I can put many different ones in here, alongside the previous code, and share this same zip file across many functions. All I have to do is tell Lambda to run handleGitHubPushNotifications or handleOpsGeniePushNotifications etc., so this is a library of functions. See the next section for how.
Setting up the Lambda functions
We have to create a Lambda function. For now I'll use the console, but you can use terraform for this and it helps quite a lot.
As this repo is made up of a few files, your only option is to zip it up. You'll have to clone it and make your own config.js based on the sample prior to creating the zip file.
Once you have it just create a Lambda function which I’ll call gitHubToSFX and choose your zip file as source. While setting it up you have to supply a handler. This is how Lambda finds your function to call.
In my case I specify index.handleGitHubPushNotifications – this uses the handleGitHubPushNotifications function found in index.js.
Once created you can test it right there if you have a sample GitHub commit message.
The REST End Point
Now we need to create somewhere for GitHub to send the POST request to. Gateway works with resources and methods. A resource is something like /github-hook and a method is POST.
I've created the resource and method and told it to call the Lambda function created earlier.
You have to deploy your API – just hit the big Deploy API button and follow the steps. You can create stages like development, staging and production and deploy APIs through such a life cycle; I just went straight to prod.
Once deployed it gives you a URL like https://12344xnb.execute-api.eu-west-1.amazonaws.com/prod and your GitHub hook would be configured to hit https://12344xnb.execute-api.eu-west-1.amazonaws.com/prod/github-hook .
Conclusion
That's about it – once you've configured GitHub you'll start seeing events flow through.
Both Lambda and API Gateway can write logs to CloudWatch, and from the JS side you can do something like console.log("hello") and it will show up in the CloudWatch logs to help with debugging.
I hope to start gathering a lot of translations like these. I am still learning Node, so I'm not really sure yet how to make packages or classes, but so far this seems really easy to use.
Cost wise it’s really cheap. You’d pay $3.50 per million API calls received on the Gateway and $0.09/GB for the transfer costs, but given the nature of these events this will be negligible. Lambda is free for the first 1 million requests and you’ll pay some tiny amount for the time used. They are both eligible for the free tier too in case you’re new to AWS.
There are many advantages to this approach:
- It’s very cheap as there are no instances to run, just the requests
- Adding webhooks to many services is a clickfest hell. This gives me an API whose underlying logic I can change without updating GitHub etc.
- Today I use SignalFX but its event feature is pretty limited; I can move all the events elsewhere on the backend without any API changes
- I can use my own domain and SSL certs
- As the REST API is pretty trivial I can later move it in-house if I need, again without changing any 3rd parties – assuming I set up my own domain
I have 2 outstanding issues to address:
- How to secure it; API Gateway supports tokens in headers but this is not something webhooks tend to support
- Monitoring it; I do not want some webhook sender to get into a loop and send hundreds of thousands of requests without it being noticed