Stop apologizing for bugs

Look out, honey, ’cause I’m using technology

Ain’t got time to make no apology

The Stooges, “Search and Destroy”

For the last year or so, I’ve made a conscious effort to stop apologizing for bugs in my code.

Apologizing for bugs is very tempting. I used to do it a lot. When my code was involved in a failure that screwed up a coworker’s day or caused a user-facing problem, I’d say “Whoops! Sorry! I should’ve thought about that.”

The motivation for apologizing is sound: you want to do your best for the team. In principle, you could have prevented a problem if you’d just done something slightly different. Through this lens, apologizing for bugs may seem innocuous. But it contributes to a bunch of cultural anti-patterns:

  • It reinforces the idea that any one person or piece of code can be blamed for a given failure. Short of malice, this is never the case.
  • It gives the impression that, when you wrote the code, you should have written it better. This is a counterfactual that rarely holds up to examination.
  • It positions shame as the correct emotion to feel about bugs in your code: if you were a better engineer – a better teammate – the bug wouldn’t exist.
  • If you’re a more senior engineer on your team, the effects of these anti-patterns are magnified: people see you apologizing for bugs, so they think that they should be striving to write bug-free code. They may feel ashamed if their code has bugs.

Even if you don’t intellectually believe any of these fallacies about bugs, the act of apologizing reinforces them. Your teammates can’t know what you really believe; they can only know what you say and do.

Everyone knows that all code has bugs. Code is written under constraints. Deadlines. Goals other than quality. Imperfect knowledge of the future. Even your own skill as an engineer is a constraint. If we all tried to write perfect, bugless code, we’d never accomplish anything. So how does it make sense to apologize for bugs?

This rule I’ve made for myself forces me to distinguish between problems caused by constraints and problems caused by my own faults. If I really think I caused a problem through some discrete action (or lack of action), then that’s something I’ll apologize for. But if I wrote code that got something done, and it just so happens that it didn’t work in a given situation, then I have nothing to apologize for. There was always bound to be something.

Make a resolution not to apologize for bugs. Especially if you’re in a leadership position. It’s a simple way to tweak attitudes about mistakes and failure in a positive way.

Do-nothing scripting: the key to gradual automation

Every ops team has some manual procedures that they haven’t gotten around to automating yet. Toil can never be totally eliminated.

Very often, the biggest toil center for a team at a growing company will be its procedure for modifying infrastructure or its procedure for provisioning user accounts. Partial instructions for the latter might look like this:

  1. Create an SSH key pair for the user.
  2. Commit the public key to Git and push to master.
  3. Wait for the build job to finish.
  4. Find the user’s email address in the employee directory.
  5. Send the user their private key via 1Password.

This is a relatively short example. Sometimes there are 20 steps in the process. Sometimes there are branches and special cases to keep track of as you go. Over time, these procedures can become unmanageably large and complex.

Procedures like this are frustrating because they’re focus-intensive yet require very little thought. They demand our full attention, but our attention isn’t rewarded with interesting problems or satisfying solutions – just another checkbox checked. I have a word for a procedure like this: a slog.

We know that this procedure is ripe for automation. We can easily see how to automate any given step. And we know that a computer could carry out the instructions with far greater speed and accuracy than we can, and with less tendency toward practical drift.

However, automating slogs sometimes feels like an all-or-nothing proposition. Sure, we could write a script to handle step 2, or step 5. But that wouldn’t really make the procedure any less cumbersome. It would lead to a proliferation of single-purpose scripts with different conventions and expectations, and you’d still have to follow a documented multi-step procedure for using those scripts.

This perception of futility is the problem we really need to solve in order to escape from these manual slogs. I’ve found an approach that works pretty reliably: do-nothing scripting.

Do-nothing scripting

Almost any slog can be turned into a do-nothing script. A do-nothing script is a script that encodes the instructions of a slog, encapsulating each step in a function. For the example procedure above, we could write the following do-nothing script:

import sys

def wait_for_enter():
    input("Press Enter to continue: ")

class CreateSSHKeypairStep:
    def run(self, context):
        print("Run:")
        print(f"   ssh-keygen -t rsa -f ~/{context['username']}")
        wait_for_enter()

class GitCommitStep:
    def run(self, context):
        print(f"Copy ~/{context['username']}.pub into the `user_keys` Git repository, then run:")
        print(f"    git commit {context['username']}")
        print("    git push")
        wait_for_enter()

class WaitForBuildStep:
    build_url = "http://example.com/builds/user_keys"

    def run(self, context):
        print(f"Wait for the build job at {self.build_url} to finish")
        wait_for_enter()

class RetrieveUserEmailStep:
    dir_url = "http://example.com/directory"

    def run(self, context):
        print(f"Go to {self.dir_url}")
        print(f"Find the email address for user `{context['username']}`")
        context["email"] = input("Paste the email address and press enter: ")

class SendPrivateKeyStep:
    def run(self, context):
        print("Go to 1Password")
        print(f"Paste the contents of ~/{context['username']} into a new document")
        print(f"Share the document with {context['email']}")
        wait_for_enter()

if __name__ == "__main__":
    context = {"username": sys.argv[1]}
    procedure = [
        CreateSSHKeypairStep(),
        GitCommitStep(),
        WaitForBuildStep(),
        RetrieveUserEmailStep(),
        SendPrivateKeyStep(),
    ]
    for step in procedure:
        step.run(context)
    print("Done.")

This script doesn’t actually do any of the steps of the procedure. That’s why it’s called a do-nothing script. It feeds the user a step at a time and waits for them to complete each step manually.

At first glance, it might not be obvious that this script provides value. Maybe it looks like all we’ve done is make the instructions harder to read. But the value of a do-nothing script is immense:

  • It’s now much less likely that you’ll lose your place and skip a step. This makes it easier to maintain focus and power through the slog.
  • Each step of the procedure is now encapsulated in a function, which makes it possible to replace the text in any given step with code that performs the action automatically.
  • Over time, you’ll develop a library of useful steps, which will make future automation tasks more efficient.

A do-nothing script doesn’t save your team any manual effort. It lowers the activation energy for automating tasks, which allows the team to eliminate toil over time.
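Because every step exposes the same run(context) interface, steps can be automated one at a time. For example, the key-generation step might eventually be replaced with something like the sketch below. The key type and empty passphrase are illustrative assumptions, not policy:

```python
import os
import subprocess

def keygen_command(username):
    # The same command the manual step asked the operator to run.
    # The key type and empty passphrase are illustrative assumptions.
    keyfile = os.path.expanduser("~/" + username)
    return ["ssh-keygen", "-t", "rsa", "-f", keyfile, "-N", ""]

class CreateSSHKeypairStep:
    def run(self, context):
        # Same run(context) interface as the manual step, so this class
        # drops straight into the `procedure` list with no other changes.
        subprocess.check_call(keygen_command(context["username"]))
```

The rest of the procedure is untouched: the operator still presses Enter through the manual steps, but this one now runs itself.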

An Incident Command Training Handbook

One of the first jobs I took on at Hashicorp was to create a training document for our corps of Incident Commanders. It was a super interesting task, because it gave me an opportunity to synthesize a whole bunch of thoughts I’ve been exposed to during my many years of responding to incidents.

Below is the training document I wrote, mostly unedited. I hope it can be of some use to you, whether you’re defining an incident response policy or just noodling on incident response in general. Enjoy!

So you want to be an Incident Commander.

Our incident response protocol indicates that every incident must have an Incident Commander. Incident command is a rewarding job. But it takes skills that you probably haven’t exercised before – skills that you may not even have recognized as skills!

This document explains what being an Incident Commander entails. It then presents an overview of things that commonly go wrong during an incident, and some strategies for dealing with them.

To be an Incident Commander, you need to do these things:

  1. Read this document
  2. Familiarize yourself with the Incident Commander reference sheet (I’ll try to get permission to publish this reference sheet in a future post. It’s basically a step-by-step runbook that walks through some formal procedures)
  3. Shadow an incident commander on a real incident
  4. Get added to the @inccom Slack group

Once you’ve completed these steps: congratulations! You’re a member of the Incident Command Team. Next time somebody pings the @inccom group, you can go ahead and volunteer. Just remember to refer to the Incident Commander reference sheet, which you can bring up in Slack by typing start incident in any channel.

What does an Incident Commander do?

An Incident Commander’s job is to keep the incident moving toward resolution. But an Incident Commander’s job is not to fix the problem.

As Incident Commander, you shouldn’t touch a terminal or search for a graph or kick off a deploy unless you’re absolutely the only person available to do it. This may feel uncomfortable, especially if your background is in engineering. It will probably feel like you’re not doing enough to help. What you need to remember is this: whatever your usual job, when you’re the Incident Commander, your job is to be the Incident Commander.

How are you supposed to keep an incident moving toward resolution without fixing things? Well, incidents move forward when a team of people works together. The Incident Commander’s job is to form that team and keep the team on the same page. It’s a demanding task that will take your full attention, and it comprises three main pieces:

  1. Populating the incident hierarchy
  2. Being the ultimate decision maker
  3. Facilitating the spread of information

We’ll talk about each of these pieces in turn.

Populating the incident hierarchy

In an incident, everybody who’s working on the problem has a specific role. As a corollary, anyone who hasn’t been explicitly assigned to a role should not work on the problem. It’s critical that everyone involved knows what they’re accountable for and who they’re accountable to.

There are four main roles in incident response, which should be filled as soon as possible and remain filled until the incident is closed. As detailed in the Incident Commander reference sheet, the Incident Commander is responsible for keeping the topic of the Slack channel updated with an accounting of who holds each of the main roles.

The main roles are as follows:

  • Incident Commander – you!
  • Primary SME (Subject Matter Expert) – in charge of technical investigation into the problem
  • External Liaison – in charge of communicating with customers about the incident
  • Scribe – responsible for taking notes in Slack about the incident and keeping track of follow-up items

At the very beginning of an incident, you may, as Incident Commander, need to handle more than one of these responsibilities. But at the earliest opportunity, you should assign others to these roles. Or, if it happens that you’re the person with the best shot at fixing the problem, you should assign someone else to be the Incident Commander and make yourself the Primary SME.

When you assign a person to a role, it’s good to be assertive. Instead of asking “Can anyone volunteer to serve as scribe?” try picking a specific person and saying “Name, can you be the Scribe for this incident?”

It’s very common for incident response to require more than just four people. But like we said above, it’s critical that everyone involved know what they’re accountable for and who they’re accountable to. For this reason, whenever someone new wants to start work on the incident, you must either assign them one of the main roles or make them subordinate to someone who already has a role. For example, if the External Liaison has their hands full doing customer communications and someone volunteers to help, you can say something to the effect of “Name, you are now a Deputy External Liaison. You report to the External Liaison. Please confirm.”

Everyone in an incident should look to the Incident Commander for the final word on who’s responsible for what. It’s a part of being the ultimate decision maker.

Flexibility is your prerogative

The predefined incident hierarchy is designed to fit most scenarios. Sometimes, there will be incidents that aren’t well served by it. As the Incident Commander, you have the ability to modify the hierarchy on a case-by-case basis so it best fits the incident at hand. It’s important that your effectiveness is not hampered by a strict adherence to formula. You are empowered to temporarily change the system as you see fit.

For example, it may make sense to have multiple Primary SMEs. If an incident impacts multiple systems, you might want to pick an SME for each one. The procedure for assigning multiple SMEs would be the same as in the case of a single Primary SME, as would their responsibilities. You might want to say, “Name, you are now the Primary SME for Area of Responsibility. Please confirm.”

Being the ultimate decision maker

When we say the Incident Commander (IC) is the “ultimate decision maker,” we don’t mean that they’re somehow expected to make better decisions. What we mean is that everyone involved in an incident treats the IC’s decisions as final and binding. What matters is not so much that you always make the correct decision; what matters is that you make a decision.

Having an Incident Commander available to make decisions enables others to act in the way an incident demands. Rather than second-guessing themselves and spending lots of time weighing pros and cons, people can surface important decisions to you. Given the information available, you can then choose what seems like the best path forward. And since your decision is final and binding, the decision itself keeps everyone on the same page.

There’s another, less obvious benefit to the Incident Commander’s role as ultimate decision maker. When people are actively digging into production problems or liaising with customers, they’re constantly building context that others don’t have. In order to have the IC make important decisions, those people have to explain enough of their mental context to make the decision tractable. This is one of the ways we make sure that information gets spread around during an incident.

Facilitating the spread of information

Managing information flow is the single most important responsibility of the Incident Commander.

When we think about information during an incident, we’re usually thinking about the data that comes out of our telemetry systems or the output of commands that we run. That’s the kind of information we tend to spread around most readily. But as the Incident Commander, this concrete, sought-out information is not the only kind of information you need to be concerned about.

There’s also information inside the heads of all the people involved in incident response. Everyone has a different perspective on the incident. And, in general, people don’t know what pieces of the picture they have that others are missing. Therefore, the IC should always be looking for opportunities to get useful context out of people’s heads and into the sphere of shared knowledge.

The key to facilitating the spread of information is to manage signal-to-noise ratio. “Signal” means information that can be used to move incident resolution forward, and “noise” is information that can’t. So when there’s a piece of information that needs to get to somebody, the Incident Commander’s job is to boost the signal and make sure it gets to the right place. Conversely, when there’s an information stream that nobody can use – for example, someone in the channel posting updates on some irrelevant system’s behavior, or someone in the video call asking for duplicate status updates – your job is to suppress that noise.

To put it simply, the IC is responsible for making sure all incident communication channels remain high-signal, low-noise environments.

How incidents get off track

Every incident is different, but it’s useful to know some common ways in which incident response can go astray. When you recognize these anti-patterns, try applying the tactics described below to get the team back on track.

Thematic vagabonding

Perhaps the most common incident response anti-pattern is thematic vagabonding. This is when responders keep moving from one general area of investigation to another. When thematic vagabonding is happening, you’ll notice that:

  • Responders look for clues in various places without stating any specific idea about what could be wrong.
  • Ideas about the nature of the problem remain vague, like “something could be wrong at the API layer.” There doesn’t seem to be any momentum toward developing those vague ideas into actionable theories.
  • It’s hard to follow the Primary SME’s train of thought.

Thematic vagabonding is a source of noise. It generates a lot of information, but that information doesn’t get used in any coherent way.

When you notice thematic vagabonding, a good thing to do is to start asking the Primary SME to elaborate on their motivation for each action they perform. For example, if they say “I’m looking through the database error logs,” you might reply “What did you see that makes you think there’d be database errors?” Challenge them to explain how database issues could cause the problem under investigation, and why database error logs seem likely to lead them to the root cause.

If the thematic vagabonding originates not from the Primary SME but from others, it might be good to redirect those people’s attention to whatever the Primary SME is looking into. For example, if the Primary SME is investigating load balancer anomalies and someone suggests “I’m going to look at recent deploys to see if anything big was changed,” you might say “Before you do that, I want to make sure we’ve got enough eyes on these load balancer anomalies. <Primary SME>, can you use help interpreting the weird log entries you found?”

Tunnel vision

Tunnel vision is, in a way, the opposite of thematic vagabonding. It’s when responders get stuck on a particular idea about what might be wrong, even though that idea is no longer productive. Tunnel vision happens when investigators fail to get a signal that should push them on to the next phase of investigation.

Despite having opposite symptoms from those of thematic vagabonding, tunnel vision can be addressed with a similar approach: asking investigators to elaborate on their motivations. Sometimes simply repeating back their own explanation is all it takes to make them realize that they’re going down a rabbit hole.

Another useful tactic for putting an end to tunnel vision is to get responders to seek out disconfirming evidence. For example, if the Primary SME is stuck on the idea that a particular code change is responsible for the problem under investigation, but that idea doesn’t seem to be bearing fruit, you might ask them “If we wanted to prove that this change was not the cause of the problem we’re seeing, how could we prove that?” By making this conceptual shift, investigators are forced to engage with ideas outside their tunnel, and this will often allow them to start making progress again.

Inconsistent mental models

In order to collaborate effectively, incident responders need to have a shared set of ideas about how the problem under investigation could be caused. These ideas are called hypotheses.

When hypotheses are in short supply or insufficiently communicated, incidents tend to stall out. As Incident Commander, part of your job is to make sure that responders are all on the same page about which hypotheses are being entertained and which hypotheses have already been disproven. In addition, it’s a good idea to keep track of what hypothesis is motivating each investigative action. If you don’t fully understand why the Primary SME is digging into queueing metrics, maybe you should ask them to explain their thought process before continuing.

Sometimes progress on an incident will slow to a halt because there are no clear hypotheses left to investigate. Unless this situation is addressed, the response can devolve into thematic vagabonding or tunnel vision. When you become aware of hypothesis scarcity, it can be useful to call a pause to any active investigation while the group brainstorms new hypotheses. You may get some push-back because it will feel to some like a waste of time. But sometimes, in order to move forward with concrete break-fix work, you need to do some abstract ideation work first.
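If the Scribe wants a lightweight way to keep this bookkeeping honest, the status of each hypothesis can be tracked explicitly in their notes. This is a minimal sketch of the idea, not part of our formal process:

```python
class HypothesisLog:
    """Track which hypotheses are being entertained and which are disproven."""

    def __init__(self):
        self.status = {}  # hypothesis text -> "open" or "disproven"

    def propose(self, hypothesis):
        self.status[hypothesis] = "open"

    def disprove(self, hypothesis):
        self.status[hypothesis] = "disproven"

    def open_hypotheses(self):
        # What's left to investigate. An empty result is the signal to
        # pause active investigation and brainstorm new hypotheses.
        return [h for h, s in self.status.items() if s == "open"]
```

However it’s kept, the point is that hypothesis scarcity should be noticed explicitly rather than discovered through stalled progress.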

Disconnect between IC and Primary SME

The worst incident train wrecks happen when the Incident Commander and the Primary SME get out of sync. Of all the relationships that make up an incident response effort, theirs is the most important. It’s so important, in fact, that we have a process just for ensuring that the relationship between IC and Primary SME stays solid. It’s called the hands-off status update.

At the very beginning of an incident – as soon as IC and Primary SME are both assigned and present in the video call – the IC should ask for a hands-off status update. The “hands off” means that, until the update is over, neither person should be typing or clicking or reading. Both should be focused entirely on communicating with each other.

The hands-off status update consists of five questions:

  • Are you ready for a hands-off status update? This question serves as a reminder that the hands-off status update is beginning, and that both the IC and the Primary SME should be focused on it. If the Primary SME says they’re not ready for a hands-off status update, ask them if they can be ready 60 seconds from now.
  • What’s your best guess at impact? It won’t always be clear how many customers are affected by the problem under investigation, or how badly the customer experience is disrupted. But it’s always useful for the Primary SME to venture a guess.
  • What possible root causes are you thinking about? This question helps obviate thematic vagabonding and tunnel vision. When the Primary SME states their thought process out loud, everyone in the video call – including the Primary SME themself – gets a clearer sense of the path forward.
  • What’s your next move? While the possible root causes are still fresh in everyone’s mind, we take an opportunity to establish the next step of problem solving. As the IC, it’s your job to make sure that the Primary SME’s next move makes sense in the context set by the answers to the previous two questions.
  • Is there any person you would like brought in? Finally, we give the Primary SME an opportunity to consider whether there are specific individuals whose skills would be useful in moving incident response forward. If they name anyone, you should do your best to bring that person into the incident response channel and the video call and – if possible – assign them an SME role subordinate to the Primary SME.

Once you’re confident that you understand the Primary SME’s answers to all of these questions, the last step of the hands-off status update is to schedule the next hands-off status update. Pick a time between 5 and 20 minutes from now, and tell the Primary SME “I’ll ask you for another hands-off status update in <that many> minutes.” Finally, set a timer to remind yourself. Repeat this cycle until the incident is resolved.

The Incident Commander reference sheet

To ensure consistent handling of roles and information during incidents, we have defined some standard procedures in the Incident Commander reference sheet. You should review the reference sheet before you sign up to be an Incident Commander. As you read it, remember the three main responsibilities of the IC:

  1. Populating the incident hierarchy
  2. Being the ultimate decision maker
  3. Facilitating the spread of information

Recommended resources

  • 📄 Common Ground and Coordination in Joint Activity (Klein, Feltovich, Woods 2004). This paper analyzes joint cognition – which is what we do when we work together to resolve incidents – from the perspective of “common ground.” It describes one of the ways in which this “common ground” most frequently falls apart. The authors call this the Fundamental Common Ground Breakdown, and by understanding it and recognizing it, you can become a more effective Incident Commander.
  • 🎬 How to Create a Differential Diagnosis. What we do in incident response has a lot in common with what doctors do when they’re trying to make a diagnosis. In both cases, the investigator is faced with a highly complex system, a ticking clock, and a limited arsenal of tactics for obtaining explanations of the system’s behavior. Although this video is targeted at medical students rather than software engineers, Incident Commanders can benefit enormously from learning the principles of differential diagnosis and applying the formalism to incident response.

Believing in the value of conceptual labor

Ideas are funny things. It can take hours or days or months of noodling on a concept before you’re even able to start putting your thoughts into a shape that others will understand. And by then, you’ve explored the contours of the problem space enough that the end result of your noodling doesn’t seem interesting anymore: it seems obvious.

It’s a lot easier to feel like you’re doing real work when you’re writing code or collecting data or fixing firewall rules. These are tasks with a visible, concrete output that correlates roughly with the amount of time you put into them. When you finish writing a script, you can hold it up to your colleagues and they can see how much effort you put into it.

But as you get into more senior-type engineering roles, your most valuable contributions start to take the form not of concrete labor, but of conceptual labor. You’re able to draw on a rich mental library of abstractions, synthesizing and analyzing concepts in a way that only someone with your experience can do.

One of the central challenges in growing into a senior engineer is recognizing and acknowledging the value of conceptual labor. We have to learn to identify and discard discouraging self-talk, like:

  • Time spent thinking is time spent not doing. When you’re a senior engineer, thinking is often the most valuable kind of doing you can do.
  • It’s not worth telling anyone about this connection I thought of. As a senior engineer, you will make conceptual connections that no one else will make. If they seem worthwhile to you, then why shouldn’t it be worthwhile to communicate them to others?
  • My coworkers are all grinding through difficult work and I’m just over here ruminating. Working with concepts is difficult work, and it’s work whose outcome can be immensely beneficial to your team and your org, but only if you follow through with it. Don’t talk yourself down from it.

Perhaps for the first time in your career, one of the skills you have to develop is faith: faith in the value of your conceptual labor. When an idea looks important to you, have faith that your experience is leading you down a fruitful path. Find that idea’s shape, see how it fits into your cosmos, and put that new knowledge – knowledge that you created by the sweat of your own brow – out into the world.

No observability without theory

Imagine you’re an extremely bad doctor. Actually, chances are you don’t even have to imagine. Most people are extremely bad doctors.

[Image: a beautiful dog dressed as a vet. I love dogs, but they make bad doctors.]

But imagine you’re a bad doctor with a breathtakingly thorough knowledge of the human body. You can recite chapter and verse of your anatomy and physiology textbooks, and you’re always up to date on the most important research going on in your field. So what makes you a bad doctor? Well, you never order tests for your patients.

What good does your virtually limitless store of medical knowledge do you? None at all. Without data from real tests, you’ll almost never pick the right interventions for your patients. Every choice you make will be a guess.

There’s another way to be an extremely bad doctor, though. Imagine you don’t really know anything about how the human body works. But you do have access to lots of fancy testing equipment. When a patient comes in complaining of abdominal pain and nausea, you order as many tests as you can think of, hoping that one of them will tell you what’s up.

This rarely works. Most tests just give you a bunch of numbers. Some of those numbers may be outside of normal ranges, but without a coherent understanding of how people’s bodies behave, you have no way to put those numbers into context with each other. They’re just data – not information.

In medicine, data is useless without theory, and theory is useless without data. Why would we expect things to be any different in software?

Observability as signal and theory

The word “observability” gets thrown around a lot, especially in DevOps and SRE circles. Everybody wants to build observable systems, then make their systems more observable, and then get some observability into their observability so they can observe while they observe.

But when we look for concrete things we can do to increase observability, it almost always comes down to adding data. More metrics, more logs, more spans, more alerts. Always more. This makes us like the doctor with all the tests in the world but no bigger picture to fit their test results into.

Observability is not just data. Observability comprises two interrelated and necessary properties: signal and theory. The relationship between these two properties is as follows:

  • Signal emerges from data when we interpret it within our theory about the system’s behavior.
  • Theory reacts to signal, changing and adapting as we use it to process new information.

In other words, you can’t have observability without both a rich vein of data and a theory within which that data can be refined into signal. Not enough data and your theory can’t do its job; not enough theory and your data is meaningless. Theory is the alchemy that turns data into knowledge.

What does this mean concretely?


It’s all well and good to have a definition of observability that looks nice on a cocktail napkin. But what can we do with it? How does this help us be better at our job?

The main takeaway from the understanding that observability consists of a relationship between data and theory, rather than simply a surfeit of the former, is this: a system’s observability may be constrained by deficiencies in either the data stream or our theory. This insight allows us to make better decisions when promoting observability.

Making better graph dashboards

However many graphs it contains, a metric dashboard only contributes to observability if its reader can interpret the curves they’re seeing within a theory of the system under study. We can facilitate this through many interventions, a few of which are to:

  • Add a note panel to the top of every dashboard which gives an overview of how that dashboard’s graphs are expected to relate to one another.
  • Add links to dashboards for upstream and downstream services, so that data on the dashboard can be interpreted in a meaningful context.
  • When building a dashboard, start with a set of questions you want to answer about a system’s behavior, and then choose where and how to add instrumentation; not the other way around.
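The interventions above can be expressed as dashboard-as-code. Here’s a minimal sketch; the schema (the `note`, `links`, and `graph` panel types, and the field names) is hypothetical, not any particular dashboarding tool’s API:

```python
# Sketch: a dashboard definition that leads with theory, not just graphs.
# The schema here is hypothetical, not a real dashboarding tool's format.

def build_dashboard(title, overview, related_dashboards, questions_to_panels):
    """Build a dashboard dict whose first panel explains how the graphs
    are expected to relate, and which links to upstream/downstream views."""
    panels = [{"type": "note", "text": overview}]                # theory first
    panels.append({"type": "links", "targets": related_dashboards})
    # Each graph panel exists to answer a stated question about the system.
    for question, metric_query in questions_to_panels.items():
        panels.append({"type": "graph", "question": question,
                       "query": metric_query})
    return {"title": title, "panels": panels}

dash = build_dashboard(
    title="checkout-service",
    overview="Latency should track utilization only when saturation > 0; "
             "error rate should follow request volume.",
    related_dashboards=["payments-service", "inventory-service"],
    questions_to_panels={
        "Are requests queueing?": "sum(queued_requests)",
        "Is latency tied to load?": "p99(request_latency)",
    },
)
```

The point of the structure is the ordering: a reader hits the theory (the note panel) and the surrounding context (the links) before any curve.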

Making better alerts

Alerts are another form of data that we tend to care about. And like all data, they can only be transmogrified into signal by being interpreted within a theory. To guide this transmogrification, we can:

  • Present alerts along with links to corresponding runbooks or graph dashboards.
  • Document a set of alerts that, according to our theory, provides sufficient coverage of the health of the system.
  • Delete any alerts whose relevance to our theory can’t be explained succinctly.
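One way to operationalize that last bullet is to treat the alert’s relevance to our theory as a required field, and prune alerts that lack it. A minimal sketch, with hypothetical field names rather than any real alerting system’s schema:

```python
# Sketch: treating "relevance to our theory" as a required field on every
# alert. The field names (runbook_url, rationale) are hypothetical.

def prune_alerts(alerts):
    """Keep only alerts that carry a runbook link and a one-line rationale
    explaining their place in our theory of the system's health."""
    kept = []
    for alert in alerts:
        rationale = alert.get("rationale", "")
        has_context = bool(alert.get("runbook_url"))
        succinct = 0 < len(rationale) <= 140  # "succinct" is a judgment call
        if has_context and succinct:
            kept.append(alert)
    return kept

alerts = [
    {"name": "HighSaturation",
     "runbook_url": "https://wiki.example.com/runbooks/saturation",
     "rationale": "Queued requests mean latency SLO risk in this cluster."},
    {"name": "LegacyDiskCheck"},  # no runbook, no rationale: pruned
]
print([a["name"] for a in prune_alerts(alerts)])  # ['HighSaturation']
```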

Engaging in more effective incident response

When there’s an urgent issue with a system, an intuitive understanding of the system’s behavior is indispensable to the problem solving process. That means we depend on the system’s observability. The incident response team’s common ground is their theory of the system’s behavior – in order to make troubleshooting observations meaningful, that theory needs to be kept up to date with the data.

To maintain common ground over the course of incident response, we can:

  • Engage in a regular, structured sync conversation about the meaning of new data and the next steps.
  • Seek out data only when we can explicitly state how that data will relate to our theory (e.g. “I’m going to compare these new log entries with the contents of such-and-such database table because I think the latest deploy might have caused an inconsistency”).
  • Maintain an up-to-date, explicit record of the current state of problem solving, and treat it as the ultimate source of truth.

Delivering meaning

Data is just data until theory makes it signal.

The next time you need to build an observable system, or make a system more observable, take the time to consider not just what data the system produces, but how to surface a coherent theory of the system’s workings. Remember that observability is about delivering meaning, not just data.

The paradox of the bloated backlog

I want to point out a paradox you may not have noticed.

A team of software engineers or SREs invariably has more good ideas than time. We know this very well. Pick any system we own, and we’ll come up with a list of 10 things we could do to make it better. Off the top of our head.

On the other hand, when our team is confronted with the opportunity to purge some old features or enhancements out of the backlog, there’s resistance. We say, “we might get around to this some day,” or “this still needs to get done.”

These two beliefs, taken together, reveal a deep lack of team self-confidence.

If our team always has more good ideas than time, then we’re never going to implement all the good ideas in our backlog. If we add more people, we’ll just get more good ideas, and the backlog will just get more bloated.

Why are we reluctant to ruthlessly remove old tickets, then? We know that we’re constantly generating good ideas. In fact, the ideas we’re generating today are necessarily (on average) better than most of the ideas in the backlog:

  1. We have more information about the system now than we used to, so our new ideas are more aligned with real-world facts, and
  2. We have more experience as engineers now, so we have developed better intuition about what kind of interventions will create the most value.

Seen in this light, a hesitance to let go of old ideas is revealed as a symptom of a deep pathology. It means we don’t believe in our own creativity and agency. If we did, we would have easy answers to the questions we always ask when we consider closing old tickets:

  • What if we forget about this idea? That’s okay. We’ll have plenty of other, better, more relevant ideas. We never stop generating them.
  • What if this problem gets worse over time? If the risk is enough that we should prioritize this ticket over our other work, then let’s do that now. Otherwise, we can cross that bridge if we ever get to it.
  • Will the reporter of this ticket even let us close it? Nobody owns our backlog but us. When we decide to close a ticket, all we owe the reporter is an honest explanation of why.

Leave behind the paradox of the bloated backlog and start believing in your team’s own agency and creativity. Hell, maybe even cap your backlog. A team with faith in its competence is a team unleashed.

Latency- and Throughput-Optimized Clusters Under Load

It’s good to have accurate and thorough metrics for our systems. But that’s only where observability starts. In order to get value out of metrics, we have to focus on the right ones: the ones that tell us about outcomes that matter to our users.

In The Latency/Throughput Tradeoff, we talked about why a given cluster can’t be optimized for low latency and high throughput at the same time. We concluded that separate clusters should be provisioned for latency-centric use cases and throughput-centric use cases. And since these different clusters are optimized to provide different outcomes, we’ll need to interpret their metrics in accordingly different ways.

Let’s consider an imaginary graph dashboard for each type of cluster. We’ll walk through the relationships between metrics in each cluster, both under normal conditions and under excessive load. And then we’ll wrap up with some ideas about evaluating the capacity of each cluster over the longer term.

Metrics for comparison

In order to contrast our two types of clusters, we’ll need some common metrics. I like to employ the USE metrics at the top of my graph dashboards (more on that in a future post). So let’s use those:

  • Utilization: The total CPU usage for all hosts in the cluster.
  • Saturation: The number of queued requests.
  • Error rate: The rate at which lines are being added to hosts’ error logs across the cluster.

In addition to these three metrics, we want to see a fourth metric. For the latency-optimized cluster, we want to see latency. And for the throughput-optimized cluster, we want to see throughput.

One of the best things about splitting out latency-optimized and throughput-optimized clusters is that the relationships between metrics become much clearer. It’s hard to tell when something’s wrong if you’re not sure what kind of work your cluster is supposed to be doing at any given moment. But separation of concerns allows us to develop intuition about our systems’ behavior and stop groping around in the dark.

Latency-optimized cluster

Let’s look at the relationships between these four important metrics in a latency-optimized cluster under normal, healthy conditions:

[Chart: latency-optimized cluster, healthy]

Utilization will vary over time depending on how many requests are in flight. Saturation, however, should stay at zero. Any time a request is queued, we take a latency hit.

Error rate would ideally be zero, but come on. We should expect it to be correlated to utilization, following the mental model that all requests carry the same probability of throwing an error.

Latency, then – the metric this cluster is optimized for – should be constant. It should not be affected by utilization. If latency is correlated to utilization, then that’s a bug. There will always be some wiggle in the long tail (the high percentiles), but for a large enough workload, the median and the 10th percentile should pretty much stay put.
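We can check this claim against latency samples directly. Here’s a sketch using synthetic data: a mostly-constant service time plus a handful of slow outliers, to show that the low percentiles and the median hug the service time while only the tail moves. The numbers are illustrative:

```python
# Sketch: in a healthy latency-optimized cluster, p10 and p50 should "stay
# put" while the long tail (high percentiles) wiggles. Synthetic samples.
import random
import statistics

random.seed(0)
# Mostly-constant ~20ms service time...
samples = [20 + random.random() for _ in range(1000)]
# ...plus a few slow outliers forming the long tail.
samples += [200 + 50 * random.random() for _ in range(10)]

cuts = statistics.quantiles(samples, n=100)  # 99 percentile cut points
p10, p50, p99 = cuts[9], cuts[49], cuts[98]
print(f"p10={p10:.1f}ms  p50={p50:.1f}ms  p99={p99:.1f}ms")
# p10 and p50 sit near the 20ms service time; p99 is pulled up by the tail.
```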

Now let’s see what happens when the cluster is overloaded:

[Chart: latency-optimized cluster under load]

We start to see plateaus in utilization, or at least wider, flatter peaks. This means that the system is low on “slack”: the idle resources that it should always have available to serve new requests. When all resources are busy, saturation rises above zero.

Error rate may still be correlated with utilization, or it may start to do wacky things. That depends on the specifics of our application. In a latency-optimized cluster, saturation is always pathological, so it shouldn’t be surprising to see error rates climb or spike when saturation rises above zero.

Finally, we start to see consistent upward trends in latency. The higher percentiles are affected first and most dramatically. Then, as saturation rises even higher, we can see the median rise too.

Throughput-optimized cluster

The behavior of our throughput-optimized cluster, on the other hand, is pretty different. When it’s healthy, it looks like this:

[Chart: throughput-optimized cluster, healthy]

Utilization – which, remember, we’re measuring via CPU usage – is no longer a fluffy cloud. Instead, it’s a series of trapezoids. When no job is in progress, utilization is at zero. When a job starts, utilization climbs up to 100% (or as close to 100% as reality allows it to get), and then stays there until the job is almost done. Eventually, the number of active tasks drops below the number of available processors, and utilization trails off back to zero.

Saturation (the number of queued tasks) follows more of a sawtooth pattern. A client puts a ton of jobs into the system, bringing utilization up to its plateau, and then as we grind through jobs, we see saturation slowly decline. When saturation reaches zero, utilization starts to drop.
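A toy discrete-time model makes the trapezoid-and-sawtooth relationship concrete. This is a deliberately simplified sketch (fixed worker count, one batch of uniform tasks, one task per worker per tick), not a real scheduler:

```python
# Sketch: toy model of a throughput-optimized cluster. A client dumps a
# batch of tasks at t=0; each tick, up to WORKERS queued tasks complete.

WORKERS = 4

def simulate(job_batch_size, ticks):
    """Return per-tick utilization and saturation for one batch of tasks."""
    queued = job_batch_size
    utilization, saturation = [], []
    for _ in range(ticks):
        running = min(queued, WORKERS)
        queued -= running
        utilization.append(running / WORKERS)  # plateaus at 1.0, trails off
        saturation.append(queued)              # declines steadily: sawtooth
    return utilization, saturation

util, sat = simulate(job_batch_size=10, ticks=5)
print(util)  # [1.0, 1.0, 0.5, 0.0, 0.0]  <- the trapezoid
print(sat)   # [6, 2, 0, 0, 0]            <- one tooth of the sawtooth
```

Utilization sits pinned at 100% exactly as long as saturation is positive, and only trails off once the queue empties, which is the coupling described above.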

Unlike that of a latency-optimized cluster, the error rate in a throughput-optimized cluster shouldn’t be sensitive to saturation. Nonzero saturation is an expected and desired condition here. Error rate is, however, expected to follow utilization. If it bumps around at all, it should plateau, not sawtooth.

And finally, in the spot where the other cluster’s dashboard had a latency graph, we now have throughput. This we should measure in requests per second per job queued! What our customers really care about is not how many requests per second our cluster is processing, but how many of their requests per second. We’ll see why this matters so much in a bit, when we talk about this cluster’s behavior under excessive load.

Throughput should be tightly correlated to utilization and nothing else. If it also seems to exhibit a negative correlation to saturation, then that’s worth looking into: it could mean that queue management is inappropriately coupled to job processing.

Now what if our throughput-optimized cluster starts to get too loaded? What does that look like?

[Chart: throughput-optimized cluster under load]

Utilization by itself doesn’t tell us anything about the cluster’s health. Qualitatively, it looks just like before: big wide trapezoids. Sure, they’re a bit wider than they were before, but we probably won’t notice that – especially since it’s only on average that they’re wider.

Saturation is where we really start to see the cracks. Instead of the sawtooth pattern we had before, we start to get more sawteeth-upon-sawteeth. These are jobs starting up on top of jobs that are already running. A few of these are to be expected even under healthy circumstances, but if they become more frequent, it’s an indication that there may be too much work for the cluster to handle without violating throughput SLOs.

Error rate may not budge much in the face of higher load. After all, this cluster is supposed to hold on to big queues of requests. As far as it’s concerned, nothing is wrong with that.

And that’s why we need to measure throughput in requests per second per job queued. If all we looked at was requests per second, everything would look hunky dory. But when there are two jobs queued and they’re constrained to use the same pool of processors, somebody’s throughput is going to suffer. On this throughput graph, we can see that happen right before our eyes.
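The arithmetic behind that metric is simple, which is exactly why it works. A sketch, with illustrative numbers:

```python
# Sketch: why "requests per second per job queued" exposes contention that
# raw requests per second hides. All numbers are illustrative.

def per_job_throughput(total_rps, jobs_queued):
    """Cluster-wide throughput divided among the jobs sharing the workers."""
    return total_rps / max(jobs_queued, 1)

# One job has the cluster to itself: raw and per-job throughput agree.
print(per_job_throughput(total_rps=1000, jobs_queued=1))  # 1000.0
# Two jobs contend for the same worker pool: raw rps looks unchanged,
# but each job's effective throughput has halved.
print(per_job_throughput(total_rps=1000, jobs_queued=2))  # 500.0
```

Raw cluster throughput is constant across both cases; only the per-job view shows that somebody’s job just got twice as slow.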

Longer-term metrics

Already, we’ve seen how decoupling these two clusters gives us a much clearer mental model of their expected behavior. As we get used to the relationships between the metrics we’ve surfaced, we’ll start to build intuition. That’s huge.

But we don’t have to stop there! Having a theory (which is what we built above) also allows us to reason about long-term changes in the observable properties of these clusters as their workloads shift.

Take the throughput-optimized cluster, for instance. As our customers place more and more demand on it, we expect to see more sawteeth-upon-sawteeth and longer intervals between periods of zero saturation. This latter observation is key, since wait times grow asymptotically as utilization approaches 100%. So, if we want to evaluate the capacity needs of our throughput-optimized cluster, we should start producing these metrics on day one and put them on a dashboard:

  • Number of jobs started when another job was already running, aggregated by day. This is the “sawteeth-upon-sawteeth” number.
  • Proportion of time spent with zero saturation, aggregated by day.
  • Median (or 90th percentile) time-to-first-execution for jobs, aggregated by day. By this I mean: for each customer job, how much time passed between the first request being enqueued and the first request starting to be processed. This, especially compared with the previous metric, will show us how much our system is allowing jobs to interfere with one another.
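That third metric is easy to compute from per-job timestamps. Here’s a sketch; the job records and their field names are hypothetical, and the timestamps are seconds since the epoch:

```python
# Sketch: daily median time-to-first-execution from per-job timestamps.
# Job records are hypothetical; timestamps are seconds since the epoch.
from collections import defaultdict
from datetime import datetime, timezone
from statistics import median

jobs = [
    # "enqueued": first request enqueued; "started": first request running
    {"enqueued": 1_600_000_000, "started": 1_600_000_005},
    {"enqueued": 1_600_000_100, "started": 1_600_000_160},
    {"enqueued": 1_600_100_000, "started": 1_600_100_002},
]

by_day = defaultdict(list)
for job in jobs:
    day = datetime.fromtimestamp(job["enqueued"], tz=timezone.utc).date()
    by_day[day].append(job["started"] - job["enqueued"])

daily_median_ttfe = {day: median(waits) for day, waits in sorted(by_day.items())}
for day, ttfe in daily_median_ttfe.items():
    print(day, f"median time-to-first-execution: {ttfe}s")
```

Swap `median` for a 90th-percentile function if the long tail is what worries you; the aggregation scaffolding stays the same.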

An analogous thought process will yield useful capacity evaluation metrics for the latency-optimized cluster.

TL;DR


Separating latency- and throughput-optimized workloads doesn’t just make it easier to optimize each. It carries the added benefit of making it easier to develop a theory about our system’s behavior. And when you have a theory that’s consistent with the signal and a signal that’s interpretable within your theory, you have observability.