About a month ago I blogged about Choria Playbooks – a way to write a series of actions – MCollective, Shell, Slack, Web Hooks and others – contained within a YAML script with inputs, node sets and more.
Since then I have added quite a few tweaks, features and docs; it’s well worth a visit to choria.io to check it out.
Today I want to blog about a major new integration I added to them and a major step towards version 1 for Choria.
Overview
In the context of a playbook – or even a script calling out to other systems – there are many reasons to have a Data Source. A playbook designed to manage distributed systems has some special needs of its Data Source – needs that tools like Consul and etcd fulfil specifically.
So today I released version 0.0.20 of Choria, which includes a Memory and a Consul Data Source; below I will show how these integrate into the Playbooks.
I think using a distributed data store is important in this context, rather than expecting to pass variables around from the Playbook as on the CLI: the business of dealing with consistency, locking and so forth is handled for you. And while I can’t know all the systems you wish to interact with, if they can speak to Consul you can prepare an execution environment for them.
For those who don’t agree, there is a Memory Data Source that exists within the memory of the Playbook process. Your playbook should remain the same apart from the Data Source you declare.
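A sketch of what the in-memory declaration might look like, assuming it mirrors the Consul declaration shown below with just the type swapped:

```yaml
data_stores:
  pb_data:
    type: memory
```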
Using Consul
Defining a Data Source
Like with Node Sets you can have multiple Data Sources and they are identified by name:
```yaml
data_stores:
  pb_data:
    type: consul
    timeout: 360
    ttl: 20
```
This creates a Consul Data Source called pb_data; you need to have a local Consul agent already set up. I’ll cover the timeout and ttl a bit later.
Playbook Locks
You can create locks in Consul, and by their nature they are distributed across the Consul network. This means you can ensure a playbook is only executed once per Consul DC, or – by giving a custom lock name – coordinate any group of related playbooks, or even other systems that can take Consul locks.
```yaml
---
locks:
  - pb_data
  - pb_data/custom_lock
```
This will create two locks in the pb_data Data Store – one called custom_lock and another called choria/locks/playbook/pb_name, where pb_name is the name from the metadata.
It will try to acquire a lock for up to timeout seconds – 360 here; if it can’t, the playbook run fails. The associated session has a TTL of 20 seconds, and Choria will renew the session around 5 seconds before the TTL expires.
The TTL ensures that should the playbook die, crash, or the machine go away entirely, the lock will be released after 20 seconds.
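The timing rules above can be sketched as follows. Note that try_acquire, renewal_times and all the parameters are illustrative stand-ins for how I understand the behaviour – Choria does this internally via Consul sessions:

```python
import time

# Sketch of the lock timing described above (hypothetical helper names,
# not Choria's internals).

TIMEOUT = 360     # keep retrying acquisition for up to this many seconds
TTL = 20          # session TTL; the lock self-releases if not renewed
RENEW_MARGIN = 5  # renew roughly this many seconds before the TTL expires

def acquire_lock(try_acquire, timeout=TIMEOUT, interval=1,
                 clock=time.monotonic, sleep=time.sleep):
    """Retry try_acquire() until it succeeds or timeout seconds pass."""
    deadline = clock() + timeout
    while clock() < deadline:
        if try_acquire():
            return True
        sleep(interval)
    return False  # could not lock in time: the playbook run fails

def renewal_times(run_seconds, ttl=TTL, margin=RENEW_MARGIN):
    """Seconds (from acquisition) at which the session gets renewed."""
    step = ttl - margin  # renew ~5s before the 20s TTL -> every 15s
    return list(range(step, run_seconds, step))
```

So for a 60 second run the session would be renewed roughly at 15, 30 and 45 seconds in, and if the process vanishes the missed renewal lets the lock lapse within the 20 second TTL.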
Binding Variables
Playbooks already have a way to bind CLI arguments to variables called Inputs. Data Sources extend inputs with extra capabilities.
We now have two types of Input. A static input is one where you give the data on the CLI and the data stays static for the life of the playbook. A dynamic input is one bound against a Data Source and the value of it is fetched every time you reference the variable.
```yaml
inputs:
  cluster:
    description: "Cluster to deploy"
    type: "String"
    required: true
    data: "pb_data/choria/kv/cluster"
    default: "alpha"
```
Here we have an input called cluster bound to the choria/kv/cluster key in Consul. It starts life as a static input: if you give a value on the CLI it will never use the Data Source.
If however you do not specify a CLI value it becomes dynamic and will consult Consul. If there is no such key in Consul the default is used, but the input remains dynamic and will continue to consult Consul on every access.
You can force an input to be dynamic with the dynamic: true property on the Input; it will then not show up on the CLI and will only speak to the Data Source.
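The static/dynamic behaviour can be sketched in Python. The class and method names here are illustrative, not Choria’s actual implementation – a plain map stands in for the Consul or Memory store:

```python
class DataSource:
    """Stand-in for a Consul or Memory Data Source: a key/value map."""
    def __init__(self):
        self.kv = {}

    def read(self, key):
        return self.kv.get(key)

class PlaybookInput:
    """Illustrative model of an Input bound to a Data Source key."""
    def __init__(self, source, key, default=None, dynamic=False):
        self.source = source
        self.key = key
        self.default = default
        self.dynamic = dynamic
        self.static_value = None

    def from_cli(self, value):
        # A CLI-supplied value makes the input static for the whole run,
        # unless it was forced dynamic (then it never appears on the CLI)
        if not self.dynamic:
            self.static_value = value

    def resolve(self):
        if self.static_value is not None:
            return self.static_value        # static: never consults the store
        value = self.source.read(self.key)  # dynamic: fetched on every access
        return value if value is not None else self.default
```

With this model, resolving before the key exists yields the default, and resolving again after someone writes the key picks up the new value immediately – which is the whole point of binding inputs to a shared store.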
Writing and Deleting Data
Of course if you can read data you should be able to write and delete it, so I’ve added tasks to let you do this:
```yaml
locks:
  - pb_data

inputs:
  cluster:
    description: "Cluster to deploy"
    type: "String"
    required: true
    data: "pb_data/choria/kv/cluster"
    default: "alpha"
    validation: ":shellsafe"

hooks:
  pre_book:
    - data:
        action: "delete"
        key: "pb_data/choria/kv/cluster"

tasks:
  - shell:
      description: Deploy to cluster {{{ inputs.cluster }}}
      command: /path/to/script --cluster {{{ inputs.cluster }}}

  - data:
      action: "write"
      value: "bravo"
      key: "pb_data/choria/kv/cluster"

  - shell:
      description: Deploy to cluster {{{ inputs.cluster }}}
      command: /path/to/script --cluster {{{ inputs.cluster }}}
```
Here I have a pre_book task list that ensures there is no stale data, the lock ensures no other Playbook will mess around with the data while we run.
I then run a shell command that uses the cluster input; with nothing in the store it uses the default and so deploys cluster alpha. It then writes a new value and deploys cluster bravo.
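Under the same illustrative assumptions, the data flow of this playbook can be modeled with a plain dict standing in for the Consul KV store – the helper names here are hypothetical, Choria’s data tasks do this against the real Data Source:

```python
# Model of the playbook above: pre_book delete, default fallback, write.

store = {}  # stands in for the Consul KV store
KEY = "choria/kv/cluster"
DEFAULT = "alpha"

def data_delete(key):
    """The pre_book data task: clear any stale value."""
    store.pop(key, None)

def data_write(key, value):
    """The data write task between the two deploys."""
    store[key] = value

def resolve_cluster():
    """Dynamic input: the store value if present, else the default."""
    return store.get(KEY, DEFAULT)

data_delete(KEY)
first = resolve_cluster()   # nothing in the store, so the default "alpha"
data_write(KEY, "bravo")
second = resolve_cluster()  # the second deploy sees the new value "bravo"
```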
This is a bit verbose; I hope to add the ability to have arbitrarily named task lists that you can branch to. Then you could have one deploy task list and use the main task list to set up variables for it and call it repeatedly.
Conclusion
That’s quite a mouthful, but the possibilities of this are quite amazing. On one hand we have a really versatile data store in the Playbooks, but more significantly we have expanded the integration possibilities by quite a bit: you can now have other systems manage the environment your playbooks run in.
I will soon add task level locks and of course Node Set integration.
For now only Consul and Memory are supported; I can add others if there is demand.