Configuration Management Tools Still Fall Short

I have a gripe with almost every configuration management tool I’ve used. I’m most familiar with Chef, but I’ve used Puppet a bit, so I apologize in advance to the fine people at Opscode, since my examples will be Chef-based.

The Cake is a Lie

Every time I run Chef I tell myself a lie: my system will be in a known state when Chef finishes running. The spirit of the DevOps movement is that we are building repeatable processes and tools, freeing our companies from unknown, undocumented production environments, but in practice we may be making it worse.

The One Constant is Change

This should surprise no one, but occasionally broken recipes get checked in and run. Sometimes these affect state, sometimes they don’t; it really depends on the content of your recipe. Sometimes recipes run and are later removed. This is the natural cycle, since environments change over time. We remove and fix these recipes cavalierly: to eliminate unneeded packages, to cut run times, and simply to keep our configuration management tool working.

The Server is an Accumulator Pattern Without Scope

What we generally forget is that servers are JavaScript: one global scope accumulating state. We intend for all of our changes to modify the system in a known way, but since (particularly with persistent images) we may have run several generations of scripts, we may not know our starting state. From the moment we have an instance/server/image, we are accumulating changes that our configuration management utilities rely on to operate. Long-forgotten recipes may still be haunting your server with an old package or config file that, unknowingly, you are now depending on. A new instance may be equally hard to recreate because, despite your assumptions about the base image, every Chef run has modified state, and you’ve been relying on those side effects in every run since.
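The accumulator-without-scope idea can be sketched in a few lines of plain Ruby. This is a toy model, not real Chef code — the "recipes" here are just lambdas mutating a shared hash — but it shows how deleting a recipe from the run list does nothing to undo the state it already left behind:

```ruby
# Toy model of configuration runs as an accumulator without scope:
# each "recipe" mutates shared server state, and dropping a recipe
# from the run list does NOT undo the changes it already made.
# (Hypothetical recipe names, not real Chef resources.)

server_state = {}

install_foo   = ->(state) { state["foo_pkg"]  = "1.0" }
configure_bar = ->(state) { state["bar_conf"] = "reads foo_pkg" }

# Generation 1: both recipes run.
[install_foo, configure_bar].each { |recipe| recipe.call(server_state) }

# Generation 2: install_foo has been deleted from the cookbook...
[configure_bar].each { |recipe| recipe.call(server_state) }

# ...but its side effect lingers, and configure_bar silently depends on it.
# server_state => {"foo_pkg"=>"1.0", "bar_conf"=>"reads foo_pkg"}
```

The second generation "works", but only because the first generation's leftovers are still on the box — exactly the haunting described above.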

Is it your Mise en Place or Chef’s?

Chef doesn’t clean up; it leaves that to you. You have to be the disciplined one and make sure your workspace is clean. If you have physical hardware this is more challenging than with virtual instances, but if you persist images you can suffer from the same problems as well.
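Keeping the workspace clean means saying so explicitly in your recipes. A minimal sketch using Chef’s standard package and file resources (the resource names here are illustrative, not from any real cookbook):

```ruby
# Chef won't remove a package or file just because you stopped
# declaring it; the recipe has to ask for the cleanup explicitly.
package "old_unused_pkg" do
  action :remove
end

file "/etc/old_unused.conf" do
  action :delete
end
```

And of course these cleanup resources become cruft of their own: once every node has converged, someone has to remember to delete them, too.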

What’s Missing?

All of these tools lack state verification. I’d love for these tools to be transactional, but I’m realistic: that will never happen. When a run is completed, I would like to verify that some state condition is met, rather than merely knowing that all my commands succeeded. Unfortunately, I’m not sure this is realistic.
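What state verification could look like, sketched in plain Ruby: declare the conditions you care about and check them after the tool finishes, instead of trusting that every command exited zero. The file name and checks below are hypothetical, and a temp directory stands in for the node:

```ruby
require "tmpdir"
require "fileutils"

workdir  = Dir.mktmpdir
app_conf = File.join(workdir, "app.conf")

# Stand-in for the state a chef run was supposed to leave behind.
File.write(app_conf, "port=8080\n")

# Declared post-run conditions: did the run actually produce the state
# we wanted, regardless of which commands happened to succeed?
checks = {
  "config file exists" => -> { File.exist?(app_conf) },
  "config sets a port" => -> { File.read(app_conf).match?(/^port=\d+$/) },
}

failures = checks.reject { |_desc, check| check.call }.keys
all_ok   = failures.empty?

FileUtils.remove_entry(workdir)
```

The point is the shape, not the checks: a verification pass owns a list of assertions about end state, and a run only counts as green when the assertions pass.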

Protect Your Neck

So, given that we have these accumulators, my preferred solution is to zero them out: reinstall early and often, or start from new images whenever you can. The only known state is a clean install, so when you make major changes, reinstall.

One comment

  • March 6, 2012 - 5:13 am

    Good post. In the system we are establishing at work, we do exactly what you mention at the end: clean installs, starting from provisioning the VM. Besides helping with configuration drift, it gives you a level of assurance that you can reestablish your server if disaster strikes.

    As far as verifying state goes, you can run automated smoke tests following a deployment. It’s not a perfect indicator, but at least it tells you whether your app is up and running.
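    The smoke-test idea above can be sketched with Ruby’s standard library. A throwaway in-process TCPServer stands in for the freshly deployed app; in real use you would point Net::HTTP at the app’s actual health-check URL (the /health path here is a hypothetical example):

```ruby
require "socket"
require "net/http"

# Throwaway server standing in for the deployed app.
server = TCPServer.new("127.0.0.1", 0)
port   = server.addr[1]

Thread.new do
  client = server.accept
  client.readpartial(4096) # consume the request
  client.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
  client.close
end

# The smoke test itself: is the app up and answering?
response = Net::HTTP.get_response(URI("http://127.0.0.1:#{port}/health"))
smoke_ok = response.is_a?(Net::HTTPSuccess) && response.body == "ok"
```

    As the commenter says, a passing smoke test doesn’t prove the whole converged state is correct — only that the app came up at the end of it.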
