Monthly Archives: February 2012

DevOps EC2

A few things you should know about EC2

Availability Zones are Randomized Between Accounts

I had someone from Amazon tell me this, so I assume it to be true. To prevent people from gaming availability and over-allocating instances in a single AZ, zone IDs are randomized across customers. So for any two accounts, us-east-1a != us-east-1a. Amazon promises that availability zones are separate within your account; it makes no promises about keeping them consistent across accounts. If you’re using multiple accounts, don’t assume you can choose the same availability zone.

No Instance is Single Tenant

We all want to game the system, and I’ve heard rumors that XL and 4XL instances are single tenant, one VM per hardware host. I’ve come to believe that no EC2 instances are single tenant, not even the cluster compute instances. It’s a fair bet that systems can easily be purchased with 96GB+ of memory, so AWS has likely been using configurations like this for the past 2+ years. It’s always possible to have a noisy neighbor; don’t assume you can buy your way out.

Micro Instances Aren’t Good for Production Use

If you do anything at any kind of scale, don’t use micro instances. Their performance is variable, and you shouldn’t rely on them for anything production-facing.

EBS Should only be Allocated 1TB at a Time

This is one area where it seems you can game the system. Many people have reported better performance from 1TB volumes. The conventional wisdom is that you are allocating a whole drive, or at least most of one. So don’t skimp; over-allocate if you need EBS.
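
If you want to script this, allocating the full size is a one-liner with the EC2 API tools (a sketch, assuming the tools are installed and your credentials are configured; the zone is just an example):

# allocate a full 1TB (1024GB) volume rather than a small slice of a drive
ec2-create-volume -s 1024 -z us-east-1a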

DevOps

Configuration Management Tools Still Fall Short

I have a gripe with almost every configuration management tool I’ve used. I’m most familiar with Chef, but I’ve used Puppet a bit, so I apologize in advance to the fine people at Opscode, since my examples will be Chef-based.

The Cake is a Lie

Every time I run Chef I tell myself a lie: my system will be in a known state when Chef finishes running. The spirit of the DevOps movement is that we are building repeatable processes and tools, freeing our companies from unknown, undocumented production environments, but in practice we may be making it worse.

The One Constant is Change

This should surprise no one, but occasionally broken recipes get checked in and run. Sometimes these affect state, sometimes they don’t; it really depends on the text of your recipe. Sometimes recipes run and are then removed. This is the natural cycle, since environments change over time. We remove and fix these recipes cavalierly, to eliminate unneeded packages, to cut run times, and to make our configuration management tool work.

The Server is an Accumulator Pattern Without Scope

What we generally forget is that servers are like JavaScript: one big global scope that accumulates whatever you throw at it. We intend for all of our changes to modify the system in a known way, but since (particularly with persistent images) we may have run several generations of scripts, we may not know our starting state. From the moment we have an instance/server/image, we are accumulating changes that our configuration management utilities rely on to operate. Long-forgotten recipes may still be haunting your server with an old package or config file that you are now unknowingly using. A new instance may be equally hard to recreate because, despite your base assumption, every Chef run modified state, and you’ve been relying on those side effects in every run since.

Is it your Mise en Place or Chef’s?

Chef doesn’t clean up; it leaves that to you. You have to be the disciplined one and make sure your workplace is clean. If you have physical hardware, this is more challenging than with virtual instances, but if you persist images you can suffer from the same problems as well.

What’s Missing?

All of these tools lack state verification. I’d love for these tools to be transactional, but I’m realistic: that will never happen. When a run completes, I would like to verify that some state condition is met, rather than just knowing that all my commands succeeded. Unfortunately, I’m not sure this is realistic.
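
Until then, I bolt the verification on myself. Here is a minimal sketch of what I mean, run after the configuration run completes; the package, service, and file names are hypothetical, and a Debian-style host is assumed:

#!/bin/bash
# verify-state.sh - crude post-run state verification (hypothetical checks)
fail() { echo "FAILED: $1" >&2; exit 1; }

# the package the run was supposed to install
dpkg -s nginx > /dev/null 2>&1 || fail "nginx package not installed"

# the service it was supposed to start
pgrep -x nginx > /dev/null || fail "nginx not running"

# the config file it was supposed to manage
[ -f /etc/nginx/nginx.conf ] || fail "nginx.conf missing"

echo "state verification passed"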

Protect Your Neck

So, given that we have these accumulators, my preferred solution is to zero them out: reinstall early and often, or start from new images whenever you can. The only known state is a clean install, so when you make major changes, reinstall.

DevOps Linux

SSH Do’s and Don’ts

Do Use SSH Keys

Whenever you can, use a key for SSH. Once you create it, you can distribute the public side widely to enable access wherever you need it. Generating one is easy:


ssh-keygen -t dsa

Don’t Use a Blank Passphrase on Your Key

This key is now your identity. Protect it. Select a sufficiently strong passphrase, and enter it when prompted. This is basic security, and it also allows you to “safely” move your keys between hosts without compromising their security.

Do Use Multiple Keys

It’s probably best to use a few keys when setting up access from different hosts. This makes it possible to shut down one key without locking yourself out.
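
For example (the filenames and comments here are only illustrative):

# one key per client machine, named so you can revoke them independently
ssh-keygen -t dsa -f ~/.ssh/laptop_id_dsa -C "laptop key"
ssh-keygen -t dsa -f ~/.ssh/desktop_id_dsa -C "desktop key"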

Don’t Copy Your Private Key Around

Remember, this is your identity and your authorization to access systems. It’s never a good idea to copy it from system to system.

Do Use SSH Agents

Enabling the SSH agent on your laptop or desktop can save you from the tedium of passphrase entry. Launching the agent is easy; then you just need to add key files to it.


# starts the agent, and sets up your environment variables
exec ssh-agent bash
# add your identities to the agent by using ssh-add
ssh-add

Don’t Leave Your Agents Running After You Log Out

Leaving your agent running is like leaving your keys in a running car. Anyone who can gain access to your agent can assume your identity.
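
One way to make this automatic, if you start an agent per login shell as above, is to kill it from your logout script (a sketch; adjust for your shell):

# ~/.bash_logout
# kill the agent this session started so the keys don't outlive the login
if [ -n "$SSH_AGENT_PID" ]; then
  eval "$(ssh-agent -k)"
fi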

Do Make A Custom ~/.ssh/config

You’ll find from time to time that you need special settings. You have a few options, like entering a very long command string, or creating a custom ~/.ssh/config file. I use this for short hostnames when I’m on a VPN, or when my username on my local system doesn’t match my account on the remote system.


# A wildcard quick example
Host *.production
    User geoffp
    IdentityFile ~/.ssh/prod_id_dsa
    ForwardAgent yes

# Shortening a host's name
# so "ssh my-short-name" will work
Host my-short-name
    User gpapilion
    ForwardAgent yes
    HostName my.fully.qualified.hostname.com

Do Use ForwardAgent

This approximates single sign-on using SSH keys. As long as agent requests are forwarded back to your original host, you should never be prompted for a password. I set this in my ~/.ssh/config, but I will also use ssh -A on remote systems to keep from re-entering password information.

*** EDIT ***

I’ve received a lot of feedback about this point. Some people have pointed out that it should not be used on untrusted systems. Essentially, your agent will always respond to an agent-forwarding request with the answer to a challenge. If an attacker has compromised the system, or the file system’s enforcement of permissions is poor, your credential can be used in a sophisticated man-in-the-middle attack.

Basically, don’t ever SSH to non-trusted systems with this option enabled, and I’d extend this and say don’t ever log in to non-trusted systems at all.

This article does a good job of explaining how agent forwarding works. This article on Wikipedia explains the security issue.

Don’t Only Keep Online Copies of Your Keys

Keep an offline backup. You may need to get access to a private key, and it’s always good to keep an offline copy for emergencies.

DevOps

Technical Debt Is Better Than Not Doing It

It’s time to admit that sometimes it’s okay to incur technical debt, particularly when it comes to getting things done. So many times, I’ve run into places with constipated operations environments or automation processes because something is hard to do automatically.

If you can’t automate it, don’t block all other tasks because of one issue. It’s better to have a partially automated solution than none at all. Just make sure you document it, and come back later when you have more time. Don’t let your tools be your excuse for not doing it; it only makes you look bad.

DevOps

User Acceptance Testing for Successful Failovers

Things fail; we all know that. What most people don’t take into account is that things fail in combination and in unexpected ways. We spend time and effort planning redundancy and failover schemes to seamlessly continue operations, but often neglect to fully test these plans before rolling services and equipment into production. What inevitably happens is that the service fails, because the failover plan never worked, or never considered what issues might arise while failing over. So, borrowing the concept of User Acceptance Testing (UAT) from software development, we can develop a system of tests that gives us confidence our redundancy plans will work when we need them.

Test Cases

Build a test plan; it’s that simple. Start by identifying the dependent components of your system, then look at all the typical failure scenarios that may happen in those components. If you have two switches, what happens if one dies? Bonded network interfaces: what happens if you lose an uplink on one of your switches?

After you identify the failure scenarios, specify the expected behavior for each scenario. If a switch dies, network traffic should continue to be sent through the remaining switch. If interface one loses its ability to route traffic, interface two should become the primary interface in the bond.

Combining the two pieces should give you a specification of how you expect the system to behave in the case of these failures. You can organize these any way you want, but I typically use a user-story-like format to describe the failure and expected outcome.

Example Test case:

  • Switch 1 stops functioning
    • Switch 2 takes over VRRP address
    • Switch 2 passes traffic with minimal interruption, within 3 seconds.
    • Nagios alerts that switch 1 has failed
  • App server loses DB connection
    • load-balancer detects error, and removes host
    • load-balancer continues to pass traffic to other app-servers
    • Nagios alerts that app-server has failed

Once you’ve completed your plan, get buy-in for it. You’ll want a few of your peers to review it and look for any failures you may have missed. Once you have agreement that this is the right test set, it’s time for the next step.

Writing Artificial Tests

Start brainstorming ways to test failure modes. Simple, non-destructive tests are best: emulate a switch failure by unplugging a switch. A host’s network interface fails? Block its port on the switch. A system freezes? Block the load balancer from connecting to it via a host-level firewall. You may want to take things a step farther, like pulling a disk to test RAID recovery.
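
As a sketch of the firewall-based test (the load-balancer address and application port here are placeholders):

# simulate a frozen app server by dropping load-balancer traffic at the host firewall
iptables -I INPUT -s 10.0.0.5 -p tcp --dport 8080 -j DROP

# ...observe the load balancer removing the host, and record the results...

# return to the known state by removing the rule
iptables -D INPUT -s 10.0.0.5 -p tcp --dport 8080 -j DROP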

Remember, you’re trying to test your failover plans, and you shouldn’t be terribly concerned if you break a configuration in the process, because this may happen when something really goes down. Write down all the steps for each test, and it’s also a good idea to write down how you get back to the known state.

Review your test cases, and make sure you have tests that address each failure mode. If it’s impossible to test a scenario, note it and exclude it from your UAT. Once you’ve done that, you’re ready to test.

Performing the Tests

Anyone involved in day-to-day technical operations should be able to run through the tests. It’s not a bad idea to have the whole team participate, so that people get used to seeing how the system behaves when components are failing. Step through the tests methodically, and record whether each test passed or failed, and how the system behaved during the process. For example, if you’re testing the failure of an app server, did any errors show up on HTTP clients, and if so, for how long?

Failing

This is going to happen, and when it does, it’s time to figure out why. First, was this a configuration error, or an artifact of a previous test? If so, fix it, update your test plan, and start testing again. Did your redundancy plan have a fatal flaw? That’s okay too; that’s why we test. If you missed something in your plan, address the issue and restart the tests from scratch. You’re much better off catching problems in UAT than after you’ve pushed the service to production.

Passing

Keep a copy of the UAT somewhere, so if questions come up later you can discuss it. I use wikis for this, but any document will do. Once you have that sorted, you can roll your fancy new service into production.

Summary

UAT is a useful concept in software development, and it’s also useful for production environments. Take your time and develop a good plan, and you’ll end up with longer uptimes and met SLA requirements. As an added bonus, you gain experience seeing how your equipment and instances behave when something has gone wrong.

DevOps solr

Solr Query Change Beats JVM Tuning

I’ve been spending the last few days at work trying to improve our search performance, and have been banging my head against the dismax query target and parser in Solr. For those not familiar with dismax, it’s a simplified parser for Solr that eliminates the complexity of the standard query parser. Instead of search terms like “field_name:value” you can simply enter “value”, but you can no longer search for a specific term in a specific field.

Our search index has grown by 20% in the last few months, and our JVM and Solr setups were beginning to groan under the weight of the data. I went through a few rounds of JVM tuning, which reduced garbage collection time to less than 2%, and with some Solr configuration options managed to bring our typical query back under 5 seconds. This felt like a major win, until I adjusted the query.

Looking at our query parameters, I noticed we were using the “fq” parameter to specify the ID of the particular site we were looking for. These queries were taking anywhere from 5-15 seconds across our 360GB index, and I suspected we were pulling data into the JVM only to filter it away. The garbage collection graphs seemed to indicate this as well, since we had a very slowly growing heap, and our eden space was emptying very quickly even with 20G allocated to it. When I changed from dismax to the standard target and specified the site ID, my search time went from 5 seconds to .06 seconds, so I started reading and came across an article on nested queries. My idea was that this would allow me to apply a constraint to the initial set of data, using the standard search target, and then perform a full-text search using dismax and achieve the same results.

Original Query (grossly simplified):
http://search-server/solr/select?fl=title%2Csite_id%2Ctext&qf=title%5E7+text&qt=dismax&fq=site_id:147&timeAllowed=2500&q=SearchTerm+&start=0&rows=20

Becomes the following nested query:

http://search-server/solr/select?fl=title%2Csite_id%2Ctext&qf=title%5E7+text&timeAllowed=2500&q=site_id:147+_query_:%22{!dismax}SearchTerm%22&start=0&rows=20
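
For readability, here is the same nested query with the URL encoding spelled out, as a curl sketch (the hostname and values are the placeholders from the example above):

curl -G "http://search-server/solr/select" \
  --data-urlencode "fl=title,site_id,text" \
  --data-urlencode "qf=title^7 text" \
  --data-urlencode "timeAllowed=2500" \
  --data-urlencode "q=site_id:147 _query_:\"{!dismax}SearchTerm\"" \
  --data-urlencode "start=0" \
  --data-urlencode "rows=20"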

Original Query Time : 5 seconds
Nested Query Time : 87 milliseconds

Both return identical results. So, if you’re querying a large index and want to use dismax, try a nested search. You’ll likely see much better performance, particularly if you’re filtering on a facet. It also gives you a relatively easy way to specify the value of a field while still using a dismax query.

Uncategorized

Language Importance for DevOps Engineers

First and foremost, this is a biased article. These are all my opinions, and they come from my working experience.

Bash (or POSIX shell)

Importance 10/10

If you’re working with *nix and can’t toss together a simple init.d script in 5 minutes, you haven’t done enough bash. It’s everywhere, and it should still be your first automation choice. It has a simple syntax and is designed specifically to execute programs in a non-interactive manner. You’ll be annoyed that it lacks unit tests and complex error handling, but it’s purpose-built to automate administrative tasks.
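
For reference, a bare-bones init.d skeleton looks something like this (a sketch assuming a Debian-style system with start-stop-daemon; the daemon name and paths are hypothetical):

#!/bin/bash
# /etc/init.d/mydaemon - a minimal init script skeleton
# ("mydaemon" and its paths are placeholders)

DAEMON=/usr/local/bin/mydaemon
PIDFILE=/var/run/mydaemon.pid

case "$1" in
  start)
    echo "Starting mydaemon"
    start-stop-daemon --start --background --make-pidfile --pidfile "$PIDFILE" --exec "$DAEMON"
    ;;
  stop)
    echo "Stopping mydaemon"
    start-stop-daemon --stop --pidfile "$PIDFILE"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  status)
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
      echo "mydaemon is running"
    else
      echo "mydaemon is not running"
      exit 1
    fi
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status}"
    exit 1
    ;;
esac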

Perl

Importance 9/10

This is the language you will run into if you work in operations. There will be backup scripts, Nagios checks, and a large collection of digital duct tape written by co-workers, all doing very important jobs. Its syntax is ugly, and you may find yourself writing an eval to handle exceptions, but it’s everywhere. CPAN makes it fairly easy to get things done, and you can’t beat it for string handling.

C/C++

Importance 5/10

This is the Latin of the *nix world, and is basically portable assembly language. I refrain from writing C whenever possible, since I rarely need the raw performance, and the security and stability consequences are pretty severe. You should understand the syntax (it’s ALGOL-derived, after all) and be able to read a simple application. It would be great if you could submit a patch to an open-source project, but I would never turn down an ops hire because they didn’t know C well enough.

PHP

Importance 7/10

PHP more important than C?! Yep. Like Perl, it’s everywhere; people use it for prototype webapps and full-blown production systems. It’s another ALGOL-syntax language, except you can put together a simple web page in 2-3 minutes; it’s almost as magical as the Twilio API. You’ll find yourself poking at it on more than one occasion, so you might as well know what you’re doing.

Ruby

Importance 6/10

Doing something with Puppet or Chef? You should probably know some Ruby; in fact, it’s probably more important to know Ruby than Chef or Puppet. It’s relatively easy to pick up, and so many of the automation tools people love are written in it. As an extra bonus, you can write Rails and Sinatra apps. It’s good to have in your back pocket.

Python

Importance 4/10

People love to love Python, but the truth is that it’s a bit of a diva. It’s a language that favors reading over writing, and has a very bloated standard library with lots of broken components (which is the right HTTP library to use?). It wants to be a simpler Perl, but I never find it as useful, and it always takes longer. I know a lot of companies say they want to use it as their “scripting” language, but in practice I’ve not seen the value (I still want to rewrite everyone’s code).

Chef/Puppet

Importance 2/10

These are DSLs for configuration management. They are supposed to be simple to learn, and if you can’t figure them out with a web browser and a few minutes, they are failing.

Java

Importance 6/10

More ALGOL syntax, and more prevalent in high-scale web applications. Minimally you should be able to read the language, but it’s useful to be able to pound out a few lines of Java. It has many rich frameworks, and you’ll likely find it sneaking into your stack where you need something done fast. It’s also really useful when it comes time to tune the JVM.

Haskell

Importance 0/10

When I run into it running someplace serious, I’ll update its score.

Javascript

Importance 8/10

I hate this language, but I can’t deny its growing importance. It’s most commonly seen in the web browser, but it’s starting to creep into the backend with things like node.js. If you understand JavaScript, you can help resolve whether an issue is a frontend or backend problem; you will have total stack awareness.

SQL

Importance 10/10

You have to know SQL. You will work with SQL databases, and you will want to move things in and out of them. You may want to know a dialect like MySQL’s very well, but at a minimum you should understand the basics and be able to join a few tables or write an aggregate query.

Uncategorized

Stupid Bash Expansion Trick

I got asked a question regarding filename expansion in bash the other day, and was stumped. It turns out to be something I should have considered a long time ago, and will always keep in mind when writing a script.

Question 1:

What does the following script do if there is a file abc in the current directory?

#!/bin/bash
for i in a*
do
  echo $i
done

Answer:

The pattern a* matches abc and expands to abc, so the script outputs:
abc

Question 2:

What if you run the same script in a directory without any files?

Answer:

The script outputs:

a*

Why?

According to The Bash Reference Manual:

Bash scans each word for the characters ‘*’, ‘?’, and ‘[’. If one of these characters appears, then the word is regarded as a pattern, and replaced with an alphabetically sorted list of file names matching the pattern. If no matching file names are found, and the shell option nullglob is disabled, the word is left unchanged.

So bash will output ‘a*’, because that is how filename expansion works.

Question 3:

What if you run the following script in a directory with no filenames beginning with a?


#!/bin/bash
for i in a*
do
  echo /usr/bin/$i
done

Answer:

The script outputs:


/usr/bin/a2p /usr/bin/a2p5.10.0 /usr/bin/a2p5.8.9 /usr/bin/aaf_install /usr/bin/aclocal /usr/bin/aclocal-1.10 /usr/bin/addftinfo /usr/bin/afconvert /usr/bin/afinfo /usr/bin/afmtodit /usr/bin/afplay /usr/bin/afscexpand /usr/bin/agvtool /usr/bin/alias /usr/bin/allmemory /usr/bin/amavisd /usr/bin/amavisd-agent /usr/bin/amavisd-nanny /usr/bin/amavisd-release /usr/bin/amlint /usr/bin/ant /usr/bin/applesingle /usr/bin/appletviewer /usr/bin/apply /usr/bin/apr-1-config /usr/bin/apropos /usr/bin/apt /usr/bin/apu-1-config /usr/bin/ar /usr/bin/arch /usr/bin/as /usr/bin/asa /usr/bin/at /usr/bin/atos /usr/bin/atq /usr/bin/atrm /usr/bin/atsutil /usr/bin/autoconf /usr/bin/autoheader /usr/bin/autom4te /usr/bin/automake /usr/bin/automake-1.10 /usr/bin/automator /usr/bin/autoreconf /usr/bin/autoscan /usr/bin/autospec /usr/bin/autoupdate /usr/bin/auval /usr/bin/auvaltool /usr/bin/awk

Why?

Because you’re re-evaluating ‘/usr/bin/$i’, which is now ‘/usr/bin/a*’, and that expands to the sorted list above due to shell filename expansion rules. If you want to avoid this, you need to protect your variables with quotes. Here is the safe version of the script:


#!/bin/bash
for i in a*
do
  echo /usr/bin/"$i"
done

Just something simple to think about when writing your bash scripts. Expect to enter loops on globs that don't match anything, always protect your variables, and consider setting the failglob option in your scripts.
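
A quick sketch of those two options: with nullglob, an unmatched pattern simply disappears, so the loop body never runs; with failglob, it becomes an error instead of a literal a*.

#!/bin/bash
shopt -s nullglob    # unmatched globs expand to nothing
# shopt -s failglob  # or: treat unmatched globs as an error
for i in a*
do
  echo /usr/bin/"$i"
done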