Believing in the value of conceptual labor

Ideas are funny things. It can take hours or days or months of noodling on a concept before you’re even able to start putting your thoughts into a shape that others will understand. And by then, you’ve explored the contours of the problem space enough that the end result of your noodling doesn’t seem interesting anymore: it seems obvious.

It’s a lot easier to feel like you’re doing real work when you’re writing code or collecting data or fixing firewall rules. These are tasks with a visible, concrete output that correlates roughly with the amount of time you put into them. When you finish writing a script, you can hold it up to your colleagues and they can see how much effort you put into it.

But as you get into more senior-type engineering roles, your most valuable contributions start to take the form not of concrete labor, but of conceptual labor. You’re able to draw on a rich mental library of abstractions, synthesizing and analyzing concepts in a way that only someone with your experience can do.

One of the central challenges of growing into a senior engineer is learning to recognize and acknowledge the value of conceptual labor. We have to learn to identify and discard discouraging self-talk, like:

  • Time spent thinking is time spent not doing. When you’re a senior engineer, thinking is often the most valuable kind of doing you can do.
  • It’s not worth telling anyone about this connection I thought of. As a senior engineer, you will make conceptual connections that no one else will make. If they seem worthwhile to you, then why shouldn’t they be worth communicating to others?
  • My coworkers are all grinding through difficult work and I’m just over here ruminating. Working with concepts is difficult work, and it’s work whose outcome can be immensely beneficial to your team and your org, but only if you follow through with it. Don’t talk yourself down from it.

Perhaps for the first time in your career, one of the skills you have to develop is faith: faith in the value of your conceptual labor. When an idea looks important to you, have faith that your experience is leading you down a fruitful path. Find that idea’s shape, see how it fits into your cosmos, and put that new knowledge – knowledge that you created by the sweat of your own brow – out into the world.

No observability without theory

Imagine you’re an extremely bad doctor. Actually, chances are you don’t even have to imagine. Most people are extremely bad doctors.

[Image: a beautiful dog dressed as a vet]
I love dogs, but they make bad doctors.

But imagine you’re a bad doctor with a breathtakingly thorough knowledge of the human body. You can recite chapter and verse of your anatomy and physiology textbooks, and you’re always up to date on the most important research going on in your field. So what makes you a bad doctor? Well, you never order tests for your patients.

What good does your virtually limitless store of medical knowledge do you? None at all. Without data from real tests, you’ll almost never pick the right interventions for your patients. Every choice you make will be a guess.

There’s another way to be an extremely bad doctor, though. Imagine you don’t really know anything about how the human body works. But you do have access to lots of fancy testing equipment. When a patient comes in complaining of abdominal pain and nausea, you order as many tests as you can think of, hoping that one of them will tell you what’s up.

This rarely works. Most tests just give you a bunch of numbers. Some of those numbers may be outside of normal ranges, but without a coherent understanding of how people’s bodies behave, you have no way to put those numbers into context with each other. They’re just data – not information.

In medicine, data is useless without theory, and theory is useless without data. Why would we expect things to be any different in software?

Observability as signal and theory

The word “observability” gets thrown around a lot, especially in DevOps and SRE circles. Everybody wants to build observable systems, then make their systems more observable, and then get some observability into their observability so they can observe while they observe.

But when we look for concrete things we can do to increase observability, it almost always comes down to adding data. More metrics, more logs, more spans, more alerts. Always more. This makes us like the doctor with all the tests in the world but no bigger picture to fit their test results into.

Observability is not just data. Observability comprises two interrelated and necessary properties: signal and theory. The relationship between these two properties is as follows:

  • Signal emerges from data when we interpret it within our theory about the system’s behavior.
  • Theory reacts to signal, changing and adapting as we use it to process new information.

In other words, you can’t have observability without both a rich vein of data and a theory within which that data is interpretable as signal. Not enough data and your theory can’t do its job; not enough theory and your data is meaningless. Theory is the alchemy that turns data into knowledge.

What does this mean concretely?

It’s all well and good to have a definition of observability that looks nice on a cocktail napkin. But what can we do with it? How does this help us be better at our job?

The main takeaway from the understanding that observability consists of a relationship between data and theory, rather than simply a surfeit of the former, is this: a system’s observability may be constrained by deficiencies in either the data stream or our theory. This insight allows us to make better decisions when promoting observability.

Making better graph dashboards

However many graphs it contains, a metric dashboard only contributes to observability if its reader can interpret the curves they’re seeing within a theory of the system under study. We can facilitate this through many interventions, a few of which are to:

  • Add a note panel to the top of every dashboard that gives an overview of how that dashboard’s graphs are expected to relate to one another.
  • Add links to dashboards for upstream and downstream services, so that data on the dashboard can be interpreted in a meaningful context.
  • When building a dashboard, start with a set of questions you want to answer about a system’s behavior, and then choose where and how to add instrumentation; not the other way around. (A sketch of what this can look like follows the list.)
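
To make that last point concrete, here’s a rough sketch of what a question-driven dashboard definition might look like. The structure, field names, and URLs are hypothetical rather than any particular tool’s schema; the point is that the notes, the links, and the question-driven panels travel together with the data.

```python
# A hypothetical dashboard definition that carries theory alongside data.
# Field names and URLs are made up for illustration.

dashboard = {
    "title": "checkout-service",
    # Note panel first: how the graphs below are expected to relate to one another.
    "notes": (
        "Utilization and error rate should track request volume. "
        "Saturation should stay near zero; if it rises, expect p99 latency to rise with it."
    ),
    # Links to upstream and downstream dashboards, so readers can interpret this one in context.
    "related_dashboards": [
        "https://grafana.example.com/d/storefront-frontend",  # upstream
        "https://grafana.example.com/d/payments-service",     # downstream
    ],
    # Start from questions, then decide what to instrument -- not the other way around.
    "panels": [
        {"question": "Are we keeping up with incoming requests?", "metric": "request_queue_depth"},
        {"question": "Are users seeing errors?", "metric": "http_5xx_rate"},
        {"question": "How long do users wait?", "metric": "request_latency_p50_p99"},
    ],
}
```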

Making better alerts

Alerts are another form of data that we tend to care about. And like all data, they can only be transmogrified into signal by being interpreted within a theory. To guide this transmogrification, we can:

  • Present alerts along with links to corresponding runbooks or graph dashboards.
  • Document a set of alerts that, according to our theory, provides sufficient coverage of the health of the system.
  • Delete any alerts whose relevance to our theory can’t be explained succinctly. (A sketch of this kind of audit follows the list.)
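
Here’s a rough sketch of what that kind of alert hygiene could look like in practice. The alert names, URLs, and the 30-word limit on rationales are illustrative assumptions, not a real catalog.

```python
# A hypothetical alert catalog plus a small audit that flags alerts we should fix or delete.

ALERTS = [
    {
        "name": "checkout_error_rate_high",
        "rationale": "Errors above 1% mean users can't pay; our theory says this is never benign.",
        "runbook": "https://wiki.example.com/runbooks/checkout-errors",
        "dashboard": "https://grafana.example.com/d/checkout-service",
    },
    {
        "name": "host_cpu_above_80_percent",
        "rationale": "",  # nobody can say what this tells us about the system's health
        "runbook": None,
        "dashboard": None,
    },
]

def audit(alerts, max_rationale_words=30):
    """Yield (alert name, problem) pairs for alerts that don't pull their weight."""
    for alert in alerts:
        words = alert["rationale"].split()
        if not words or len(words) > max_rationale_words:
            yield alert["name"], "rationale missing or not succinct: candidate for deletion"
        elif not (alert["runbook"] or alert["dashboard"]):
            yield alert["name"], "no runbook or dashboard to interpret it with"

for name, problem in audit(ALERTS):
    print(f"{name}: {problem}")
```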

Engaging in more effective incident response

When there’s an urgent issue with a system, an intuitive understanding of the system’s behavior is indispensable to the problem solving process. That means we depend on the system’s observability. The incident response team’s common ground is their theory of the system’s behavior – in order to make troubleshooting observations meaningful, that theory needs to be kept up to date with the data.

To maintain common ground over the course of incident response, we can:

  • Engage in a regular, structured sync conversation about the meaning of new data and the next steps.
  • Seek out data only when you can explicitly state how the data will relate to our theory (e.g. “I’m going to compare these new log entries with the contents of such-and-such database table because I think the latest deploy might have caused an inconsistency”).
  • Maintain an up-to-date, explicit record of the current state of problem solving, and treat it as the ultimate source of truth. (A sketch of such a record follows the list.)
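
That record doesn’t have to be fancy. Here’s a minimal sketch of one, with made-up incident details; what matters is that each observation sits right next to the piece of theory it supports or contradicts, and that there’s exactly one place to look.

```python
# A hypothetical problem-solving record for an incident. Names and details are invented.

incident_state = {
    "incident": "INC-1234: elevated checkout errors (hypothetical)",
    "current_theory": "The 14:05 deploy introduced a schema mismatch between the API and the DB.",
    "supporting_observations": [
        "Error rate rose within two minutes of the 14:05 deploy.",
        "Application logs show serialization failures on the new field.",
    ],
    "contradicting_observations": [],
    "next_steps": [
        {"action": "Diff current rows against the pre-deploy snapshot", "owner": "alice"},
        {"action": "Prepare a rollback of the 14:05 deploy", "owner": "bob"},
    ],
    "last_synced_at": "14:32 UTC",
}
```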

Delivering meaning

Data is just data until theory makes it signal.

The next time you need to build an observable system, or make a system more observable, take the time to consider not just what data the system produces, but how to surface a coherent theory of the system’s workings. Remember that observability is about delivering meaning, not just data.

The paradox of the bloated backlog

I want to point out a paradox you may not have noticed.

A team of software engineers or SREs invariably has more good ideas than time. We know this very well. Pick any system we own, and we’ll come up with a list of 10 things we could do to make it better. Off the top of our heads.

On the other hand, when our team is confronted with the opportunity to purge some old features or enhancements out of the backlog, there’s resistance. We say, “we might get around to this some day,” or “this still needs to get done.”

These two beliefs, taken together, reveal a deep lack of team self-confidence.

If our team always has more good ideas than time, then we’re never going to implement all the good ideas in our backlog. If we add more people, we’ll just get more good ideas, and the backlog will just get more bloated.

Why are we reluctant to ruthlessly remove old tickets, then? We know that we’re constantly generating good ideas. In fact, the ideas we’re generating today are necessarily (on average) better than most of the ideas in the backlog:

  1. We have more information about the system now than we used to, so our new ideas are more aligned with real-world facts, and
  2. We have more experience as engineers now, so we have developed better intuition about what kind of interventions will create the most value.

Seen in this light, a hesitance to let go of old ideas is revealed as a symptom of a deep pathology. It means we don’t believe in our own creativity and agency. If we did, we would have easy answers to the questions we always ask when we consider closing old tickets:

  • What if we forget about this idea? That’s okay. We’ll have plenty of other, better, more relevant ideas. We never stop generating them.
  • What if this problem gets worse over time? If the risk is enough that we should prioritize this ticket over our other work, then let’s do that now. Otherwise, we can cross that bridge if we ever get to it.
  • Will the reporter of this ticket even let us close it? Nobody owns our backlog but us. When we decide to close a ticket, all we owe the reporter is an honest explanation of why.

Leave behind the paradox of the bloated backlog and start believing in your team’s own agency and creativity. Hell, maybe even cap your backlog. A team with faith in its competence is a team unleashed.

Latency- and Throughput-Optimized Clusters Under Load

It’s good to have accurate and thorough metrics for our systems. But that’s only where observability starts. In order to get value out of metrics, we have to focus on the right ones: the ones that tell us about outcomes that matter to our users.

In The Latency/Throughput Tradeoff, we talked about why a given cluster can’t be optimized for low latency and high throughput at the same time. We concluded that separate clusters should be provisioned for latency-centric use cases and throughput-centric use cases. And since these different clusters are optimized to provide different outcomes, we’ll need to interpret their metrics in accordingly different ways.

Let’s consider an imaginary graph dashboard for each type of cluster. We’ll walk through the relationships between metrics in each cluster, both under normal conditions and under excessive load. And then we’ll wrap up with some ideas about evaluating the capacity of each cluster over the longer term.

Metrics for comparison

In order to contrast our two types of clusters, we’ll need some common metrics. I like to employ the USE metrics at the top of my graph dashboards (more on that in a future post). So let’s use those:

  • Utilization: The total CPU usage for all hosts in the cluster.
  • Saturation: The number of queued requests.
  • Error rate: The rate at which lines are being added to hosts’ error logs across the cluster.

In addition to these three metrics, we want to see a fourth metric. For the latency-optimized cluster, we want to see latency. And for the throughput-optimized cluster, we want to see throughput.

One of the best things about splitting out latency-optimized and throughput-optimized clusters is that the relationships between metrics become much clearer. It’s hard to tell when something’s wrong if you’re not sure what kind of work your cluster is supposed to be doing at any given moment. But separation of concerns allows us to develop intuition about our systems’ behavior and stop groping around in the dark.

Latency-optimized cluster

Let’s look at the relationships between these four important metrics in a latency-optimized cluster under normal, healthy conditions:

[Chart: latency-optimized cluster, healthy]

Utilization will vary over time depending on how many requests are in flight. Saturation, however, should stay at zero. Any time a request is queued, we take a latency hit.

Error rate would ideally be zero, but come on. We should expect it to be correlated to utilization, following the mental model that all requests carry the same probability of throwing an error.

Latency, then – the metric this cluster is optimized for – should be constant. It should not be affected by utilization. If latency is correlated to utilization, then that’s a bug. There will always be some wiggle in the long tail (the high percentiles), but for a large enough workload, the median and the 10th percentile should pretty much stay put.

Now let’s see what happens when the cluster is overloaded:

[Chart: latency-optimized cluster under load]

We start to see plateaus in utilization, or at least wider, flatter peaks. This means that the system is low on “slack”: the idle resources that it should always have available to serve new requests. When all resources are busy, saturation rises above zero.

Error rate may still be correlated with utilization, or it may start to do wacky things. That depends on the specifics of our application. In a latency-optimized cluster, saturation is always pathological, so it shouldn’t be surprising to see error rates climb or spike when saturation rises above zero.

Finally, we start to see consistent upward trends in latency. The higher percentiles are affected first and most dramatically. Then, as saturation rises even higher, we can see the median rise too.
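
To turn those relationships into something checkable, here’s a minimal sketch of a health check for a latency-optimized cluster. The thresholds (90% utilization, a 20% latency gap between quiet and busy samples) are illustrative assumptions, not universal constants.

```python
def latency_cluster_findings(utilization, saturation, p50_latency):
    """Each argument is a list of samples covering the same time window."""
    findings = []
    if any(s > 0 for s in saturation):
        findings.append("saturation above zero: requests are queueing, so latency is taking a hit")
    if max(utilization) > 0.9:
        findings.append("utilization near 100%: not enough slack left to absorb new requests")
    # Crude check for the latency/utilization correlation that shouldn't exist:
    # is median latency noticeably higher in the busiest half of the samples?
    paired = sorted(zip(utilization, p50_latency))
    half = len(paired) // 2
    quiet = [lat for _, lat in paired[:half]]
    busy = [lat for _, lat in paired[half:]]
    if half and sum(busy) / len(busy) > 1.2 * sum(quiet) / len(quiet):
        findings.append("median latency tracks utilization: that's a bug, not a fact of life")
    return findings or ["looks healthy"]
```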

Throughput-optimized cluster

The behavior of our throughput-optimized cluster, on the other hand, is pretty different. When it’s healthy, it looks like this:

[Chart: throughput-optimized cluster, healthy]

Utilization – which, remember, we’re measuring via CPU usage – is no longer a fluffy cloud. Instead, it’s a series of trapezoids. When no job is in progress, utilization is at zero. When a job starts, utilization climbs up to 100% (or as close to 100% as reality allows it to get), and then stays there until the job is almost done. Eventually, the number of active tasks drops below the number of available processors, and utilization trails off back to zero.

Saturation (the number of queued tasks) follows more of a sawtooth pattern. A client puts a ton of jobs into the system, bringing utilization up to its plateau, and then as we grind through jobs, we see saturation slowly decline. When saturation reaches zero, utilization starts to drop.

Unlike that of a latency-optimized cluster, the error rate in a throughput-optimized cluster shouldn’t be sensitive to saturation. Nonzero saturation is an expected and desired condition here. Error rate is, however, expected to follow utilization. If it bumps around at all, it should plateau, not sawtooth.

And finally, in the spot where the other cluster’s dashboard had a latency graph, we now have throughput. This we should measure in requests per second per job queued! What our customers really care about is not how many requests per second our cluster is processing, but how many of their requests per second. We’ll see why this matters so much in a bit, when we talk about this cluster’s behavior under excessive load.

Throughput should be tightly correlated to utilization and nothing else. If it also seems to exhibit a negative correlation to saturation, then that’s worth looking into: it could mean that queue management is inappropriately coupled to job processing.

Now what if our throughput-optimized cluster starts to get too loaded? What does that look like?

[Chart: throughput-optimized cluster under load]

Utilization by itself doesn’t tell us anything about the cluster’s health. Qualitatively, it looks just like before: big wide trapezoids. Sure, they’re a bit wider than they were before, but we probably won’t notice that – especially since it’s only on average that they’re wider.

Saturation is where we really start to see the cracks. Instead of the sawtooth pattern we had before, we start to get more sawteeth-upon-sawteeth. These are jobs starting up on top of jobs that are already running. A few of these are to be expected even under healthy circumstances, but if they become more frequent, it’s an indication that there may be too much work for the cluster to handle without violating throughput SLOs.

Error rate may not budge much in the face of higher load. After all, this cluster is supposed to hold on to big queues of requests. As far as it’s concerned, nothing is wrong with that.

And that’s why we need to measure throughput in requests per second per job queued. If all we looked at was requests per second, everything would look hunky dory. But when there are two jobs queued and they’re constrained to use the same pool of processors, somebody’s throughput is going to suffer. On this throughput graph, we can see that happen right before our eyes.
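
A made-up walkthrough makes the effect stark: the overall number stays flat while each customer’s experience degrades, and only the per-job number notices. The 500 requests per second matches the theoretical capacity from The Latency/Throughput Tradeoff.

```python
# Raw requests/sec vs. requests/sec per queued job, as more bulk jobs pile up.

cluster_rps = 500  # total requests/sec the cluster sustains at full utilization

for jobs_queued in (1, 2, 4):
    per_job = cluster_rps / jobs_queued
    print(f"{jobs_queued} job(s) queued: {cluster_rps} req/s overall, "
          f"but only {per_job:.0f} req/s per job")

# 1 job(s) queued: 500 req/s overall, but only 500 req/s per job
# 2 job(s) queued: 500 req/s overall, but only 250 req/s per job
# 4 job(s) queued: 500 req/s overall, but only 125 req/s per job
```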

Longer-term metrics

Already, we’ve seen how decoupling these two clusters gives us a much clearer mental model of their expected behavior. As we get used to the relationships between the metrics we’ve surfaced, we’ll start to build intuition. That’s huge.

But we don’t have to stop there! Having a theory (which is what we built above) also allows us to reason about long-term changes in the observable properties of these clusters as their workloads shift.

Take the throughput-optimized cluster, for instance. As our customers place more and more demand on it, we expect to see more sawteeth-upon-sawteeth and longer intervals between periods of zero saturation. This latter observation is key, since wait times grow asymptotically as utilization approaches 100%. So, if we’re going to want to evaluate the capacity needs of our throughput-optimized cluster, we should start producing these metrics on day one and put them on a dashboard (a sketch of how to compute them follows the list):

  • Number of jobs started when another job was already running, aggregated by day. This is the “sawteeth-upon-sawteeth” number.
  • Proportion of time spent with zero saturation, aggregated by day.
  • Median (or 90th percentile) time-to-first-execution for jobs, aggregated by day. By this I mean: for each customer job, how much time passed between the first request being enqueued and the first request starting to be processed. This, especially compared with the previous metric, will show us how much our system is allowing jobs to interfere with one another.
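
Here’s a minimal sketch of how those three numbers might be computed from raw job and saturation records. The record formats are assumptions; adapt them to whatever your job system actually emits.

```python
from statistics import median

def daily_capacity_metrics(jobs, saturation_samples):
    """
    jobs: list of dicts with 'enqueued_at' and 'first_request_started_at'
          (timestamps in seconds) and 'started_while_other_job_running' (bool).
    saturation_samples: list of (timestamp, queued_request_count) pairs,
          assumed to be evenly spaced over the day.
    """
    overlapping_starts = sum(1 for j in jobs if j["started_while_other_job_running"])
    zero_saturation = sum(1 for _, queued in saturation_samples if queued == 0)
    time_to_first_execution = median(
        j["first_request_started_at"] - j["enqueued_at"] for j in jobs
    )
    return {
        "sawteeth_upon_sawteeth": overlapping_starts,
        "proportion_of_time_at_zero_saturation": zero_saturation / len(saturation_samples),
        "median_time_to_first_execution_s": time_to_first_execution,
    }
```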

An analogous thought process will yield useful capacity evaluation metrics for the latency-optimized cluster.

TL;DR

Separating latency- and throughput-optimized workloads doesn’t just make it easier to optimize each. It carries the added benefit of making it easier to develop a theory about our system’s behavior. And when you have a theory that’s consistent with the signal and a signal that’s interpretable within your theory, you have observability.

The Latency/Throughput Tradeoff: Why Fast Services Are Slow And Vice Versa

Special thanks to the graceful and cunning Ben Ng for consulting on this post.

I’m finally getting around to reading that DevOps* book everybody’s been raving about, Site Reliability Engineering: How Google Runs Production Systems. My verdict so far: it’s pretty good.

Here’s one of the first passages to jump out to me, from Chapter 3: Embracing Risk:

The low-latency user wants Bigtable’s request queues to be (almost always) empty so that the system can process each outstanding request immediately upon arrival. (Indeed, inefficient queuing is often a cause of high tail latency.) The user concerned with offline analysis is more interested in system throughput, so that user wants request queues to never be empty. To optimize for throughput, the Bigtable system should never need to idle while waiting for its next request.

This is a profound and general insight. When I read this passage, my last decade of abject suffering suddenly came into focus for me.

When I say “abject suffering,” I’m of course talking about ElasticSearch administration. When a storage system like ElasticSearch has to serve both low-latency and high-throughput workloads, it is guaranteed to get ugly. This fact is super important, which is why I’m devoting this blog post to exploring the relationships among latency, throughput, and capacity from a queueing perspective. I hope I can make these relationships stick in your mind like they’ve stuck in mine.

* Go ahead. Tell me DevOps and SRE aren’t the same thing. I dare you.

The tradeoff between throughput and latency

Consider a service that responds to requests. As an example, let’s say it’s a service that takes as input a picture of a dog and returns a picture of that dog wearing a silly hat.

[Image: a dog wearing a hat]
Artist’s rendering

Like almost any service (exception: Tourbillon), our service can only handle a certain number of requests per second [to put hats on dogs (RPSTPHOD)]. We’ll call this number its capacity. If we have 200 processes devoted to dog-hatting, and dogs take on average 400 milliseconds to haberdash, then the theoretical capacity of the system is

(200) / (0.4s) = 500 hats per second

Now let’s consider the two types of users that depend on our service:

  • On-the-spot dog hatters. At any given time, these users have a single dog picture that requires a hat as soon as possible. Perhaps they’re using our service to support a website that generates a single dog-hat picture per page load, and they want their page to load quickly. These users are interested primarily in how quickly they can get a hat on a dog. In a word: latency.
  • Bulk dog-hatters. These users tend to have massive data sets that they want processed as quickly as possible. The most obvious example would be a law enforcement agency wanting to compare their large database of pet photos to surveillance footage of a particular dog robbing a bank while wearing a hat. Bulk dog-hatters care not about the latency of any individual dog-hatting, but about the throughput they can achieve. In other words: how close they can get to our service’s theoretical capacity of 500 hats per second.

But here’s the problem: no single cluster of dog-hatting servers can be optimal for both types of users. And the better we make the service for one kind of user, the worse we make it for the other.

The needs of on-the-spot users

In order to minimize latency for our on-the-spot users (without dropping any of their requests), we need to make sure that there’s always a processor idle when their request comes in. If we fail to make sure of this, then new requests will have to be queued while we wait for a spot to open up, thus inflating latency. The system needs some “slack.”

[Chart: low-latency workload]

Since we need slack, we don’t ever want throughput to approach capacity. The closer we get to our system’s capacity, the more drastically latencies will balloon, like I talked about in this post.
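
To see just how drastically, here’s a deliberately oversimplified illustration: treat the whole cluster as a single M/M/1 queue whose service rate equals our 500-hats-per-second capacity. Real clusters aren’t M/M/1, but the shape of the curve is the point. Average time in the system is 1/(μ − λ), which blows up as the arrival rate approaches capacity.

```python
# An illustrative (and deliberately oversimplified) queueing calculation: treat the
# whole cluster as a single M/M/1 queue with service rate mu = 500 requests/sec.
# Average time in system is W = 1 / (mu - lambda), which explodes as lambda -> mu.

MU = 500.0  # service rate, requests/sec (our theoretical capacity)

for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    arrival_rate = utilization * MU
    avg_time_in_system_ms = 1000.0 / (MU - arrival_rate)
    print(f"{utilization:>4.0%} of capacity -> ~{avg_time_in_system_ms:6.1f} ms per request")

#  50% of capacity -> ~   4.0 ms per request
#  80% of capacity -> ~  10.0 ms per request
#  90% of capacity -> ~  20.0 ms per request
#  95% of capacity -> ~  40.0 ms per request
#  99% of capacity -> ~ 200.0 ms per request
```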

The needs of bulk users

Our bulk dog-hatters, on the other hand, don’t care so much about request latency. Some of their individual requests might take seconds, or minutes, or even hours to complete. What they care about is how quickly our service can process their entire data set. In other words, they care about getting throughput as close as possible to capacity.

This means that, whenever a job is running, bulk dog-hatters want there to be (virtually) zero slack. Every processor should be active at all times. Consequently, our queue sizes will explode as soon as the job starts, and our queues will stay occupied until the job is almost done.

[Chart: high-throughput workload]

In this case, we want our queues to be full whenever there’s a bulk job running. Anything else would give sub-optimal throughput.

Splitting the cluster up

The needs of on-the-spot and bulk users are incompatible. One group needs minimal latency, while the other group needs maximal throughput.

If both of these groups are using the same cluster, we’re going to have serious problems. On-the-spot users’ latencies will vary widely depending on whether there’s currently a bulk job in progress, and bulk users’ job times will vary depending on the number of on-the-spot users currently using the system. No matter how much we scale or tweak tuning parameters, neither group will get what they need. And what’s worse, we’ll be stuck in a perpetual tug-of-war between the priorities of these two groups.

So let’s split our cluster in two: a “low latency” cluster and a “high throughput” cluster. And let’s let our users pick the right one for their use case. This way, we’ll have much clearer expectations about the performance and scaling characteristics of our service, and we’ll avoid the frustrating priority tug-of-war that characterized our mixed-use cluster.

The split doesn’t have to be complete. Instead of having two wholly separate clusters, we could have some kind of load balancer that reserves a certain portion of our fleet for low-latency traffic and slots bulk jobs onto dedicated segments of the cluster. The details of every solution will vary. What matters is that on-the-spot and bulk dog-hatters aren’t drawing on the same pool of resources.
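
As a sketch of that idea (the pool sizes and the routing attribute are made up), the front door can stay shared as long as the two worker pools never compete for the same resources:

```python
# A hypothetical router: clients declare their use case, and bulk jobs never
# eat the slack that on-the-spot requests depend on.

LOW_LATENCY_POOL = {"name": "low-latency", "workers": 50}        # kept deliberately under-utilized
HIGH_THROUGHPUT_POOL = {"name": "high-throughput", "workers": 150}  # kept as busy as possible

def route(request):
    if request.get("use_case") == "bulk":
        return HIGH_THROUGHPUT_POOL
    return LOW_LATENCY_POOL

print(route({"use_case": "bulk", "dog_pictures": 1_000_000})["name"])  # high-throughput
print(route({"use_case": "on-the-spot", "dog_pictures": 1})["name"])   # low-latency
```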

Once we do split up our cluster, then, what should we expect the performance characteristics of the new clusters to be? What will their graph dashboards look like when they’re healthy, or near capacity, or over capacity? In an upcoming post, I’ll use some more queueing reasoning to answer these questions. So get hype for that!

[UPDATE: It’s here!]

What makes a good alert?

Ever since the late 2000s, I’ve been implementing “alert review” processes on ops teams. As a team, we go through all the alerts we’ve received in the last week, and identify those that are bad.

But what makes an alert bad? How do we distinguish the good ones from the bad?

I use a simple framework wherein a good alert has the following properties:

  • Actionable: Indicates a problem for which the recipient is well placed to take immediate corrective action.
  • Investigable: (yes, I made this word up) Indicates a problem whose solution is not yet known by the organization.

These properties can be present or absent independently, so this framework identifies four types of alerts:

[Figure: the four alert types, by actionability and investigability]
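
In code form, the quadrants and the response each one calls for might look something like this (the labels are just shorthand, not canonical names):

```python
# The four alert types, as a function of the two independent properties.

def classify_alert(actionable: bool, investigable: bool) -> str:
    if actionable and investigable:
        return "good alert: page a human"
    if actionable and not investigable:
        return "known fix: automate the response instead of paging"
    if not actionable and investigable:
        return "interesting but not pageable: surface it on a dashboard or in a report"
    return "noise: delete it"

print(classify_alert(actionable=True, investigable=True))
```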

Actionability

Actionability has been widely touted as a necessary condition for good alerts, and for good reason. Why send an alert about something you can’t fix? Just to make the recipient anxious?

Referring to my definition of an actionable alert:

Indicates a problem for which the recipient is well placed to take immediate corrective action.

we can see three main ways in which non-actionability shows up:

  • Someone is well placed to take immediate corrective action, but not the recipient. For example, an alert that indicates a problem with the Maple Syrup service instead pages the Butter team, which can’t do anything about it. In cases like these, the fix is often simple: change the alert’s recipient. Sometimes, though, you’ll first have to refactor your config to let it distinguish between Maple Syrup problems and Butter problems.
  • There is an action to take, but it can’t be taken immediately. For example, Apache needs to be restarted, but the recipient of the alert isn’t sure whether this will cause an outage. This type of non-actionable alert often calls for either improved documentation (e.g. a “runbook” indicating the steps to perform) or an explicit team decision about what the corrective action should be. Another example might be a disk space alert that has been slowly climbing for a while and just crossed a threshold: action can’t be taken immediately, because the team needs to agree upon an action to take.
  • There is no action to take. For example, “CPU utilization” or “Packet loss.” These are your classic FYI alerts. Instead of alerting, these things should appear on a dashboard for use in troubleshooting when a problem is already known to exist.

Investigability

An alert is non-investigable if its implications are obvious at first glance. Here are the two most common types of non-investigable alerts:

  • “Chief O’Brien” alerts. If you look at your phone and instantly know the commands you have to run to fix it, that’s a “Chief O’Brien” alert. There’s no need to bother a human to fix the issue; the response should be automated.
  • Redundant alerts. Sometimes you get an alert for increased error rates from one of your services, and by the time you get up and get to your laptop, you’ve gotten 8 more similar alerts. The first one might well have been a perfectly good alert, but the other 8 are likely non-investigable. Whatever you learn in investigating the first one will apply to the rest in exactly the same way. The correct response to alerts like these is to use dependencies or grouping. (A minimal grouping sketch follows the list.)
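
Here’s a minimal sketch of that kind of grouping. The five-minute window and the per-service grouping key are assumptions to tune for your own alerting setup.

```python
from collections import defaultdict

GROUP_WINDOW_SECONDS = 300

def group_alerts(alerts):
    """alerts: list of dicts with 'service', 'timestamp' (seconds), and 'message'."""
    last_paged = {}
    pages, suppressed = [], defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["timestamp"]):
        key = alert["service"]
        if key in last_paged and alert["timestamp"] - last_paged[key] < GROUP_WINDOW_SECONDS:
            suppressed[key].append(alert)  # redundant: nothing new to investigate
        else:
            pages.append(alert)            # the investigable one
            last_paged[key] = alert["timestamp"]
    return pages, dict(suppressed)
```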

What to do with this framework

Like I said, I like to have my team go through all the alerts they’ve received in the last week and discuss them. Imagine a spreadsheet where each alert is a row and there are columns labeled Actionable? and Investigable?

Actually, don’t bother. I imagined one for you:

[Image: an example alert-review spreadsheet]

This actionability/investigability framework helps the team identify bad alerts and agree on the precise nature of their badness. And as a bonus, the data in these spreadsheets can shine a light on trends in alert quality over time.
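
For example, if the review spreadsheet gets exported to CSV each week (the column names here are assumptions), a few lines of code can track the fraction of good alerts over time:

```python
import csv
from collections import defaultdict

def alert_quality_by_week(path):
    good, total = defaultdict(int), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            week = row["week"]
            total[week] += 1
            if row["actionable"].lower() == "yes" and row["investigable"].lower() == "yes":
                good[week] += 1
    return {week: good[week] / total[week] for week in total}

# e.g. {'2019-W18': 0.4, '2019-W19': 0.65} -- hopefully trending upward
```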

I’ve had a lot of success with this framework throughout the years, and I’d like to hear how it works for others. Let me know if you try it out, or if you have a different model for addressing the same sorts of questions!

Ticket Cutoff Ages Are Silly

CHARLIE:   There are too many tickets in our backlog!
JULIET:       I know, right? So many of them are old-ass feature requests that’ll never get done.
MIKE:          I’ve got an idea! Why don’t we auto-delete tickets older than 6 months?

(Everyone looks at each other in shock. Confetti wipe. MIKE is being carried through the office to thunderous applause.)

A maximum ticket age seems like a good idea. Big ticket backlogs are demoralizing, and it’s impossible to get any useful information out of them. Furthermore, we know with certainty that most things in the backlog are never getting done.

The submitter of a ticket knows more about why a piece of work is important than you do, right? Why not rely on that knowledge for triage?

No. Ticket cutoff ages are silly.

Ticket cutoff ages are silly because they rest on the assumption that a reporter will notice a ticket has been closed, read the notification, evaluate the ticket’s importance in the appropriate context, and reopen the ticket if – and only if – it’s important. This assumption is wildly flawed, as anyone with an inbox full of JIRA updates can attest. The evaluation step is especially absurd: how can ticket reporters be expected to consistently place requested work in the context of your team’s ever-shifting priorities and constraints?

Ticket cutoff ages are silly because the age of a ticket is a terrible proxy metric for value. There’s a crucial difference between urgent and important. Tasks that get resolved at a young age tend to be urgent ones, which are usually associated with achieving someone else’s goals. By enforcing a maximum age, your team will naturally focus on urgent tickets to the exclusion of important ones: generally those that advance the team’s goals. A ticket’s age has little connection with its importance.

Ticket cutoff ages are silly because they don’t even solve the problem they set out to solve: that demoralizing, disorganized pile of work that’s always in your peripheral vision. Sure, some tickets get closed, but you can never explain why. Important tasks disappear according to the whims of entropy, and unimportant tasks are always mixed in with everything else. Each time you try to understand your team’s commitments, you must reevaluate the merits of every ticket in the queue.

This method of constraining a ticket backlog is nothing short of an abdication of ownership. Instead of respecting our own time and that of the ticket reporter by making an explicit, up-front decision about a ticket’s fate, we let chance and frustration shape our product.

Wouldn’t it be better to take matters into our own hands and cap the backlog?