I recently blogged about the workflow improvements I realised by using Docker for some services. Like everyone else, though, the full story of running containers in production is still a bit of an unknown to me. I am running 7 or 8 things in containers at the moment but have a lot of outstanding questions.
I could go the route of a private PaaS where you push an image or Dockerfile into it and forget about it, hoping you never have to debug anything or dig into why something is not performant, as those tend to be very much closed systems. Some, like deis, are just Docker underneath, but others, like the recently hyped lattice.cf, unpack the Docker container and transform it into something else entirely that is much harder to interact with from a debugging perspective. As a bit of an old school sysadmin this fire-and-hope-for-the-best approach leaves me a bit cold. I do not want to lose the ability to carefully observe my running containers using traditional tools if I have to. It’s great to strive for never having to do that – never having to touch a running app with anything but your monitoring SaaS, always being able to just scale out horizontally – but personally I feel I need closer-to-the-bits interaction at times. Aim for that goal and you get a much better overall system, but while you’ve not yet reached this nirvana-like state you’re going to want to get at your running apps with strace if it comes to that.
So having ruled out just running one of the existing crop of private PaaS offerings locally, I started thinking about what a container really is. I consider containers to be analogous to packages, so we first need to explore what packages are. In its simplest form a package is just a bunch of files packaged up. So what makes it better than a tarball?
- Metadata like name, version, build time, build host, dependencies, descriptions, licence, signature and urls
- Built in logic like pre/post install scripts but also companion scripts like init system scripts, monitoring logic etc
- An API to interact with this – the rpm or apt/deb commands – and, as in the case of Yum, also libraries for interacting with these
All of the above combines to bring the biggest and ultimate benefit of a package: a strong set of companion tools to build, host, deploy, validate, update and inspect those packages. You cannot have this main benefit without mature implementations of the preceding points.
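As a quick illustration – nothing to do with my own tooling, just stock RPM – the inspection side of a mature package manager already gives you all of this for free:

# query the metadata a package carries: name, version, build host,
# licence, URLs, description and so on
rpm -qi openssl

# list the dependencies it declares
rpm -qR openssl

# show the embedded pre/post install scripts
rpm -q --scripts openssl

# verify the installed files against the package database
rpm -V openssl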
To really put it in perspective, the Puppet or Chef package resources only work because of the combination of the above 3 points. Without them they would fail, which is why the daily attempts on #puppet, for example, to reinvent packaging with an exec running wget and make end up failing and yield the predictable answer of packaging up your software instead.
When I look at the current state of a Docker container and the published approaches for building them, I am left a bit wanting when I compare them to a mature package manager with respect to the 3 points above. This means that, without improvement, I am going to end up with an unsatisfactory set of tools and interactions with my running apps.
So to address this I started standardising my builds and creating a framework for building containers the way I like to, thinking about what information I could make available to create the tooling I think is needed. I do this using a base image that has a script called container in it that can introspect metadata about the image. Any image downstream from this base image can add more metadata and hook into the life cycle my container management script provides. It’s not OS dependent, so I wouldn’t be forcing any group into an OS choice and can still gain a lot of the advantages Docker brings with respect to making heterogeneous environments less painful. My build system embeds the metadata into any container it builds as JSON files.
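I won’t go into the full build system here, but the embedding step is simple enough to sketch. The snippet below is a rough approximation, not my actual build script: the metadata file name, the CI environment variables and the registry name are stand-ins for whatever your own CI provides.

# rough sketch of the metadata embedding step of a CI build;
# PROJECT_NAME, BUILD_TAG and BUILD_CAUSE are hypothetical stand-ins
# for variables your CI system exposes
set -e

cat > metadata.json <<EOF
{
  "project": "${PROJECT_NAME}",
  "image_name": "ripienaar/${PROJECT_NAME}",
  "gitref": "$(git rev-parse HEAD)",
  "build_time": "$(date -u '+%Y-%m-%d %H:%M:%S')",
  "build_time_stamp": $(date +%s),
  "build_tag": "${BUILD_TAG}",
  "build_cause": "${BUILD_CAUSE}",
  "ci": true
}
EOF

# the project's Dockerfile COPYs metadata.json to a well known path
# where the container script knows to look for it
docker build -t "hub.my.net/ripienaar/${PROJECT_NAME}" .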
Metadata
There is a lot going on in this space: Kubernetes has labels and Docker is getting metadata support, but these are tools to enable metadata – it is still up to users to decide what to do with it.
The reason you want to be able to really interact with and introspect packages comes down to things like auditing them – where do you have outdated SSL versions and the like. Likewise I want to know things about my containers and images:
- Where and when was it built and why
- What were its ancestor images
- How do I start, validate, monitor and update it
- What git repo is being built, what hash of that git repo was built
- What are all the tags this specific container is known as at time of build
- What’s the project name this belongs to
- Have the ability to have arbitrary user supplied rich metadata
All of that should be visible to the inside and outside of the container and kept for every ancestor of the container. Given this I can create rich, generic management tools: tools that do not require configuration to start, update and validate the functionality, as well as monitor and extract metrics of any container, without any hard coded logic.
Here’s an example:
% docker exec -ti rbldnsd container --metadata|json_reformat
{
    "validate_method": "/srv/support/bin/validate.sh",
    "start_method": "/srv/support/bin/start.sh",
    "update_method": "/srv/support/bin/update.sh",
    "validate": true,
    "build_cause": "TIMERTRIGGER",
    "build_tag": "jenkins-docker rbldnsd-55",
    "ci": true,
    "image_tag_names": [
        "hub.my.net/ripienaar/rbldnsd"
    ],
    "project": "rbldnsd",
    "build_time": "2015-03-30 06:02:10",
    "build_time_stamp": 1427691730,
    "image_name": "ripienaar/rbldnsd",
    "gitref": "e1b0a445744fec5e584919711cafd8f4cebdee0e"
}
Missing from this are the monitoring and metrics related bits as those are still a work in progress. But you can see metadata for a lot of the stuff I mentioned. Images I build embed this into the image, which means when I FROM one of my images I get a history that I can examine:
% docker exec -ti rbldnsd container --examine
Container first started at 2015-03-30 05:02:37 +0000 (1427691757)

Container management methods:
  Container supports START method using command /srv/support/bin/start.sh
  Container supports UPDATE method using command /srv/support/bin/update.sh
  Container supports VALIDATE method using command /srv/support/bin/validate.sh

Metadata for image centos_base
  Names:
    Project Name: centos_base
    Image Name: ripienaar/centos_base
    Image Tag Names: hub.my.net/ripienaar/centos_base
  Build Info:
    CI Run: true
    Git Hash: fcb5f3c664b293c7a196c9809a33714427804d40
    Build Cause: TIMERTRIGGER
    Build Time: 2015-03-24 03:25:01 (1427167501)
    Build Tag: jenkins-docker centos_base-20
  Actions:
    START: not set
    UPDATE: not set
    VALIDATE: not set

Metadata for image rbldnsd
  Names:
    Project Name: rbldnsd
    Image Name: ripienaar/rbldnsd
    Image Tag Names: hub.my.net/ripienaar/rbldnsd
  Build Info:
    CI Run: true
    Git Hash: e1b0a445744fec5e584919711cafd8f4cebdee0e
    Build Cause: TIMERTRIGGER
    Build Time: 2015-03-30 06:02:10 (1427691730)
    Build Tag: jenkins-docker rbldnsd-55
  Actions:
    START: /srv/support/bin/start.sh
    UPDATE: /srv/support/bin/update.sh
    VALIDATE: /srv/support/bin/validate.sh
This is the same information as above but it also shows the ancestor of this rbldnsd image – the centos_base image. I can see when they were built, why, which git hashes were built and how I can interact with these containers. From here I can audit or manage their life cycle quite easily.
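As an example of the kind of zero-configuration tooling this enables, here is a rough sketch – not part of my actual tool set – of an audit loop that asks every running container for its metadata and reports which git ref it was built from. It assumes jq is on the host and that every container descends from my base image and so has the container script:

# hypothetical audit sketch: report project, git ref and build time
# from the embedded metadata of every running container
for c in $(docker ps -q); do
  meta=$(docker exec "$c" container --metadata 2>/dev/null) || continue
  project=$(echo "$meta" | jq -r .project)
  gitref=$(echo "$meta" | jq -r .gitref)
  built=$(echo "$meta" | jq -r .build_time)
  echo "$c: $project built from $gitref at $built"
done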
I’d like to add to this a bunch of run-time information – like when it was deployed, why, and to what node – and will leverage the Docker metadata feature when that becomes available or hack something up with ENV variables.
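The ENV route could be as simple as the sketch below – again just an illustration with made-up variable names, not something my containers do today:

# hypothetical: record deployment information as environment variables
# at run time so tooling inside the container could report on it later
docker run -d --name rbldnsd \
  -e DEPLOYED_AT="$(date -u '+%Y-%m-%d %H:%M:%S')" \
  -e DEPLOYED_BY="$(whoami)" \
  -e DEPLOYED_TO="$(hostname -f)" \
  hub.my.net/ripienaar/rbldnsd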
Solving this problem has been key to getting to grips with the operational concerns I had with Docker and to feeling I can get back to the level of maturity I had with packages.
Management
You can see from above that the metadata supports specifying START, UPDATE and VALIDATE actions. Future ones might be MONITOR and METRICS.
UPDATE requires some explaining. Of course the trend is toward immutable infrastructure where every change is a rebuild, and this is a pretty good approach. But I host things like a DNS based RBL and these tend to update all the time; I’d like to apply those updates quicker and with less resource usage than a full rebuild and redeploy – but without ending up in a place where a rebuild loses my changes.
So the typical pattern I do this with is to make the data directories for these images be git checkouts using deploy keys on my git server. The build process will always take latest git and the update process will fetch latest git and reload the running config. This is a good middle ground somewhere between immutability and rapid change. I rebuild and redeploy all my containers every night so this covers the few hours in between.
Here’s my DNS server:
% sudo docker exec bind container --update
>> Fetching latest git checkout
From https://git.devco.net/ripienaar/docker_bind
 * branch            master     -> FETCH_HEAD
Already up-to-date.
>> Validating configuration
>> Checking named.conf syntax in master mode
>> Checking named.conf syntax in slave mode
>> Checking zones..
>> Reloading name server
server reload successful
There were no updates this time, but you can see it would fetch the latest checkout, validate that it passes inspection and then reload the server if everything is ok. And here is the main part of the script implementing this action:
echo ">> Fetching latest git checkout" git pull origin master echo ">> Validating configuration" container --validate echo ">> Reloading name server" rndc reload |
This way I just need to orchestrate these standard container --update execs – webhooks do this in my case.
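What the webhook side looks like depends on your git server and webhook receiver, but the end of the chain is only ever a docker exec. Something like this rough, hypothetical handler – the project-to-container matching here is a stand-in for however you map repositories to containers:

#!/bin/bash
# hypothetical webhook handler: receives a project name from the
# webhook receiver and runs the standard update action on the
# matching container
project="$1"

for c in $(docker ps -q); do
  image=$(docker inspect --format '{{.Config.Image}}' "$c")

  if [[ "$image" == *"$project"* ]]; then
    docker exec "$c" container --update
  fi
done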
VALIDATE is interesting too. In this case validate uses the usual named-checkconf and named-checkzone commands to check the incoming config files, but my more recent containers use serverspec and infrataster to validate the full end to end functionality of a running container.
% sudo docker exec -ti rbldnsd container --validate
.............................

Finished in 6.86 seconds (files took 0.39762 seconds to load)
29 examples, 0 failures
My dev process revolves around this much like TDD would: my build process runs these steps at the end of every build against a running instance of the container, and my deploy process runs them after anything it deploys. Operationally, if anything is not working right my first port of call is just this command. It often gets me right down to the part that went wrong – if I have good tests, that is; otherwise this is feedback into the dev cycle leading to improved tests. I mentioned I rebuild and redeploy the entire infrastructure daily – it’s exactly the investment in these tests that means I can do so while getting a good night’s sleep.
Monitoring will likewise be extended around standardised, introspectable commands so that a single method can be used to extract status and metric information out of any container built this way.
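That part does not exist yet, but the shape I have in mind is something like the sketch below: a hypothetical container --metrics action emitting flat JSON that a single host-side poller forwards to whatever metrics system you run. Everything here – the flag, the JSON layout, the statsd destination – is speculative:

# purely hypothetical poller: ask each container for metrics and
# forward them as statsd gauges over UDP
for c in $(docker ps -q); do
  docker exec "$c" container --metrics 2>/dev/null | \
    jq -r 'to_entries[] | "\(.key):\(.value)|g"' | \
    nc -u -w1 localhost 8125
done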
Outcome
I’m pretty happy with where this got me; I found it much easier to build tooling around containers given rich metadata and standardised interaction models. I kind of hoped this was what I would get from Docker itself, but it’s either too early or what it provides is too low level – understandable, as from its perspective it would want to avoid being too prescriptive or supporting only limited sets of data on limited operating systems. I do think, though, that as a team wanting to build and deploy a solid infrastructure on Docker you need to invest in something along these lines.
Thus my containers now do not just contain their files and dependencies – more and more of their operational life cycle is part of the container. Containers can be asked for their health, they can update themselves and will eventually emit detailed, reusable metrics and statuses. The API to do all of this is standardised, and I can run these containers anywhere with the confidence gained from having these introspective abilities and this metadata available everywhere. Like the huge benefit I got from an improved workflow, I find this embedded operational life cycle equally valuable, and something that I found hard to achieve in my old traditional CM based approach.
I think PaaS systems need to get a bit more of this kind of thing into their pipelines. I’d like to be able to ask my PaaS to just run my validate steps regularly or on demand. Or to have standardised monitoring status and metrics output so that the likes of Datadog can deliver agents that provide in-depth application monitoring without configuration, just by sitting in a container next to a set of these containers. Today the state of the art for PaaS health checks seems to be to just hit the exposed port, but real life management of services is much more intricate than that. If they had this I could adopt one of those and spare myself a lot of pain.
For now though this is what my systems will do and hopefully some of the ideas become generally accepted.