Cap Your Ticket Backlog

Unbounded ticket backlogs are one of the most pernicious influences facing an engineering team. Consider:

  • A team’s capacity (the amount of work it’s capable of doing in a given week) is limited. As the ticket backlog grows, so grows the number of tasks that will never be finished.
  • Keeping tickets that will never be finished amounts to a tacit denial of the team’s limited capacity. It’s pure fantasy to imagine that every ticket in the backlog is one that the team might eventually get around to.
  • Keeping tickets that will never be finished hides information about the team’s capacity and agenda, making it near impossible to reason about priorities. A giant backlog offers no information about what the team considers important, and it invites stakeholders to share in the delusion that all work will eventually get done.
  • A large ticket backlog is just demotivating. No one even contemplates digging into the backlog because of how Sisyphean that would feel.
  • To top it all off, a large, unbounded ticket backlog perpetuates itself. The more tickets you have, the less empowered you feel to say no.

Why ticket backlogs grow

Like any queueing system, a ticket backlog will grow without bound if tasks arrive faster (on average) than they leave.

Every engineering team has more ideas than time. There are many things that your team should probably do, but you’ll never have enough time to do them all. Add to this the firehose of feature requests, bug reports, and maintenance tasks, and it’s easy to see why tickets arrive in a backlog like… well… a firehose.

But we already know where tickets come from. That’s only one of two important variables. Why don’t tickets ever leave faster than they arrive?

Because we refuse to close them, of course! When a ticket represents a neat idea our team had, we don’t want to close it because the idea is still neat. And when it represents a request from a different team, we don’t feel empowered to close it because we feel it’s our team’s job to fulfill that request.

So tickets enter our system faster than we get rid of them. This usually leads to one of two situations:

  1. Growth without bound. Your ticket queue has 4000 tickets in it, and it’s become a running joke on the team that “oh sure, we’ll get around to it when we finish the rest of the ticket queue.” Sound familiar?
  2. Automatic closing. You write a cron job to auto-close tickets that haven’t been touched in a year, on the assumption that “if it’s important, they’ll re-open it.” Instead of making informed, thoughtful decisions about what work is most important to your team, you leave the problem to the vagaries of ticket reporters and the march of entropy.

It doesn’t have to be like this. But the solution is so obvious that most people will dismiss it out of hand.

Cap your ticket backlog.

Letting your backlog grow without bound is a form of information hiding. It serves only to obfuscate. So why not just cap its size? Say “from now on, there can’t be more than 10 tickets sitting in our backlog.”

Think of all the benefits:

  • You’ll force yourself to make decisions about which tasks are critical to your team’s success.
  • You’ll develop a sense of how much work you can realistically commit to.
  • You’ll know at a glance which types of tasks are in most dire need of streamlining.
  • You’ll understand, all at once and without summarizing, what’s in the backlog.
  • You’ll start looking at the backlog and thinking “I can put a dent in that.”

At first it will be hard to come to terms with the fact that most of the things in your backlog are never going to get done. But that’s a fact. The sooner you accept it, the better for everyone: not just your team, but also the people who make requests of you. It doesn’t serve anyone to pretend that you can finish every piece of work that would be nice to do.

Putting it into practice

My team at Etsy has started capping its backlog at 10 issues.

Each Tuesday morning, we meet in Slack for 15 minutes. We look at a JIRA filter that shows all unassigned tickets that haven’t been updated in the last 2 days. If there are more than 10, we prune the list.

There are a bunch of different ways we can remove a ticket from the backlog, including:

  • Mark for clobbering. Apply the label clobbering-time to an issue. This adds it to the list of issues that we’ll draw from during a Clobbering Time session. Clobbering Time is a monthly event where we try to knock out as many non-urgent tasks as possible. As such, Clobbering Time issues should be resolvable by a single engineer within an hour.
  • Move to another project. If it’s not a thing our team needs to do, move the ticket to the appropriate project.
  • Assign to someone. If someone’s ready to work on the issue immediately, assign it to that person.
  • Request more details. If the issue isn’t clearly defined enough to begin work, assign it to the creator and request more details. Make sure to un-assign the issue once enough details are received.
  • Close. Just close outright, with a comment explaining that we don’t have enough bandwidth.

At first, we found a lot of low-hanging fruit. There were many clobbering-friendly tickets we could remove immediately, and there were many others that simply were no longer relevant.

As we use this process more, we’re going to start having to make tougher decisions. We’ll have to choose between tasks that seem equally essential. It won’t always be pleasant, but I believe it’s necessary and we’ll be glad we did it, because ignoring those difficult decisions is worse.

Let’s Stop Pretending Estimates Are Exact

Some days I feel like estimates and plans are worthless. Other days I find myself idly speculating that if only we were better at planning and estimating, everything would be beautiful.

In an engineering organization, we want realistic plans. Realistic plans and roadmaps will (hypothetically) help us use our capacity effectively and avoid overcommitting. And since our plans are usually built on estimates, we believe that “correct” estimates are more likely to produce a realistic plan. Therefore, we believe that correct estimates are useful for planning.

On the other hand, if you’ve ever worked with engineers, you also believe that estimates are never correct. Something always gets in the way, or we pad the estimate out to avoid being held responsible for missing a deadline, or (occasionally) the task ends up being much simpler than we expected. The time we estimate is never the time it actually takes to do the job.

These two statements seem to be in contradiction, but they are not. There is a key to holding both these beliefs in your head at once. And I believe that if an entire engineering org were to grasp this key together, it would unlock enormous potential.

The key is to treat uncertainty as a first-class citizen.

Why This Is Hard

Developing organizational intuition about uncertainty in estimates is really hard. It is in fact so hard that I’m not aware of any organization that has done it. I see two big reasons for this:

  1. We don’t have good awareness of the amount of uncertainty in our estimates.
  2. We pay lip service to the idea that estimates aren’t perfect, but we plan as if they are.

1. Uncalibrated estimates

When we give estimates, those estimates aren’t calibrated to a particular uncertainty level. Everybody has different levels of confidence in their estimates, and those levels are neither discussed nor adjusted. As a result, engineers and project managers find it impossible to turn estimates into meaningful plans.

2. Planning with incorrect use of uncertainty

You’ve surely noticed uncertainty being ignored. It usually goes something like this:

PM: How soon do you think you can have the power converters re-triangulated?
ENG: I’m not sure. It could take a day, or it could take five.
PM: I understand. What do you think is most likely? Can we say three days?
ENG: Sure, three or four.
PM: Okay. I’ll put it down for four days, just to be safe.

Both sides acknowledge that there is a great deal of uncertainty in this estimate. But a number needs to get written down, so they write down 4. In this step, information about uncertainty is lost. So it shouldn’t be a surprise later when re-triangulating the power converters takes 7 days and the whole plan gets thrown off.

Example

Suppose 5 teams are working independently on different parts of a project. Each team has said they’re 80% confident that their part will be done by the end of the year. The project manager will want to know: how likely is it that all five parts will be done by the end of the year?

When we answer questions like this, we don’t usually think in terms of probability. And even if we do think in terms of probability, it’s easy to get it wrong. If you ask non-mathy people this question (and most project managers I’ve worked with are not mathy), their intuition will be: 80%.

But that intuition is extremely misleading. What we’re dealing with is a set of 5 independent events, each with 80% probability. Imagine rolling five 5-sided dice. Or, since 5-sided dice are not really a thing, imagine spinning five 5-sided dreidels (which are also not really a thing, but are at least geometrically possible). You spin your five dreidels and if any of them comes up ש‎, the project fails to meet its end-of-year deadline. The probability of success, then, is:

(80%)⁵ = 80% ⋅ 80% ⋅ 80% ⋅ 80% ⋅ 80% = 32.8%

Awareness of the probabilistic nature of estimates makes it obvious that a bunch of high-confidence estimates do not add up to a high-confidence estimate.
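
The compounding gets worse the more independent parts you add. Here’s a quick back-of-the-envelope sketch in Python (the 80% figure and the team counts are just for illustration):

# Probability that N independent parts, each done on time with probability p,
# ALL land on time.
def all_on_time(p, n):
    return p ** n

for n in (1, 2, 5, 10):
    print(f"{n} team(s) at 80% confidence each -> {all_on_time(0.8, n):.1%} overall")

# 1 team(s) at 80% confidence each -> 80.0% overall
# 2 team(s) at 80% confidence each -> 64.0% overall
# 5 team(s) at 80% confidence each -> 32.8% overall
# 10 team(s) at 80% confidence each -> 10.7% overall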

To make matters even worse, most organizations don’t even talk about how confident they are. In real life, we wouldn’t even know the uncertainties associated with our teams’ estimates (this is problem 1: uncalibrated estimates). Nobody would even be talking about probabilities. They’d be saying “All five teams said they’re pretty sure they can be done by the end of the year, so we can be pretty sure we’ll hit an end-of-year target for the project.” But they’d be dead wrong.

Why This Is Worth Fixing

I believe that poor handling of uncertainty is a major antagonist not just of technical collaboration, but of social cohesion at large. When uncertainty is not understood:

  • Project managers feel let down. Day in and day out they ask for reasonable estimates, and they’re left explaining to management why the estimates were all incorrect and the project is going to be late.
  • Engineers feel held to unreasonable expectations. They have to commit to a single number, and when that number is too low they have to work extra hard to hit it anyway. It feels like being punished for something that’s no one’s fault.
  • Managers feel responsible for failures. Managers are supposed to help their teams collaborate and plan, but planning without respect for uncertainty leads to failure every time. And what’s worse, if you don’t understand the role of uncertainty, you can’t learn from the failure.
  • Leadership makes bad investments. Plans may look certain when they’re not at all. Return on an investment often degrades quickly with delays, making what looked like a good investment on paper turn out to be a dud.

Fostering an intuition for uncertainty in an organization could help fix all these problems. I think it would be crazy not to at least try.

How To Fix It

Okay, here’s the part where I admit that I don’t quite know what to do about this problem. But I’ve got a couple ideas.

1. Uncalibrated estimates

To solve the problem of uncalibrated estimates (estimates whose uncertainty is wrong or never acknowledged), what’s needed is practice. Teams need to make 80%-confidence estimates for everything – big and small – and record those estimates somewhere. Then when the work is actually done, they can review their estimates and compare to the real-world completion time. If they didn’t get about 80% of their estimates correct, they can learn to adjust their confidence.

Estimating with a certain level of confidence is a skill that can be learned through practice. But practice, as always, takes a while and it needs to be consistent. I’m not sure how one would get an organization to put in the work, but I do believe it would pay off if they did.
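
As a (completely made-up) illustration of what that review might look like: suppose the log is just a list of pairs, each one an 80%-confidence estimate in days next to how long the work actually took. A few lines of Python are enough to check the team’s calibration:

# Hypothetical log of (80%-confidence estimate in days, actual days taken).
history = [
    (3, 2.5), (5, 7.0), (1, 0.5), (8, 6.0), (2, 4.0),
    (4, 3.5), (6, 5.0), (3, 9.0), (2, 1.5), (5, 4.0),
]

hits = sum(actual <= estimated for estimated, actual in history)
hit_rate = hits / len(history)

print(f"Finished within the estimate {hit_rate:.0%} of the time (target: 80%)")
if hit_rate < 0.8:
    print("These '80%' estimates are overconfident; they should be wider.")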

2. Planning with incorrect use of uncertainty

Managers and project managers should get some training in how to manipulate estimates with uncertainty values attached.

Things get tricky when tasks need to be done one after the other instead of in parallel. But I like to think we’re not adding complexity to the process of planning; rather we’re just revealing the ways in which the current process is broken. And recognizing the limits of our knowledge is a great way to keep our batch sizes small and our planning horizons close.
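
One way to do that arithmetic once estimates carry uncertainty is to stop adding single numbers and simulate instead. Here’s a minimal Monte Carlo sketch for three sequential tasks; the medians, the spreads, and the lognormal shape are all assumptions for illustration, not a recommended model:

import math
import random

random.seed(1)

# Three sequential tasks: (median_days, spread).
tasks = [(3, 0.5), (5, 0.4), (2, 0.8)]

def one_run():
    return sum(random.lognormvariate(math.log(median), spread)
               for median, spread in tasks)

runs = sorted(one_run() for _ in range(100_000))
for q in (0.5, 0.8, 0.95):
    print(f"{q:.0%} chance of finishing within {runs[int(q * len(runs))]:.1f} days")

The sum of the three medians is 10 days, but the date you can commit to with 80% confidence lands noticeably later. That gap is exactly the information a single-number plan throws away.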

I think we need to get used to calibrating estimates before we can see our way clear to address the planning challenges.

Can It Be Fixed?

It strikes me as super hard to fix the problems I’ve outlined here in a company that already has its old habits. Perhaps a company needs to be built from the ground up with a culture of uncertainty awareness. But I hope not.

If there is a way to make uncertainty a first-class citizen, I truly believe that engineering teams will benefit hugely from it.

Falsifiability: why you rule things out, not in

This June, I had the honor of speaking at O’Reilly Velocity 2016 in Santa Clara. My topic was Troubleshooting Without Losing Common Ground, which I’ve written about here before (more than once).

I was pretty happy with my talk, especially the Star Trek: The Next Generation vignette in the middle. It was a lot of ideas to pack into a single talk, but I think a lot of people got the point. However, I did give a really unsatisfactory answer (30m46s) to the first question I received. The question was:

In the differential diagnosis steps, you listed performing tests to falsify assumptions. Are you borrowing that from medicine? In tech are we only trying to falsify assumptions, or are we sometimes trying to validate them?

I didn’t have a real answer at the time, so I spouted some bullshit and moved on. But it’s a good question, and I’ve thought more about it, and I’ve come up with two (related) answers: a common-sense answer and a pretentious philosophical answer.

The Common Sense Answer

My favorite thing about differential diagnosis is that it keeps the problem-solving effort moving. There’s always something to do. If you’re out of hypotheses, you come up with new ones. If you finish a test, you update the symptoms list. It may not always be easy to make progress, but you always have a direction to go, and everybody stays on the same page.

But when you seek to confirm your hypotheses, rather than to falsify others, it’s easy to fall victim to tunnel vision. That’s when you fixate on a single idea about what could be wrong with the system. That single idea is all you can see, as if you’re looking at it through a tunnel whose walls block everything else from view.

Tunnel vision takes that benefit of differential diagnosis – the constant presence of a path forward – and negates it. You keep running tests to try to confirm your hypothesis, but you may never prove it. You may just keep getting test results that are consistent with what you believe, but that are also consistent with an infinite number of hypotheses you haven’t thought of.

A focus on falsification instead of verification can be seen as a guard against tunnel vision. You can’t get stuck on a single hypothesis if you’re constrained to falsify other ones. The more alternate hypotheses you manage to falsify, the more confident you get that you should be treating for the hypotheses that might still be right.

Now, of course, there are times when it’s possible to verify your hunch. If you have a highly specific test for a problem, then by all means try it. But in general it’s helpful to focus on knocking down hypotheses rather than propping them up.

The Pretentious Philosophical Answer

I just finished Karl Popper’s ridiculously influential book The Logic of Scientific Discovery. If you can stomach a dense philosophical tract, I would highly recommend it.

[Photo: Karl "Choke Right On It, Logical Positivism" Popper]

Published in 1959 – but based on Popper’s earlier book Logik der Forschung from 1934 – The Logic Of Scientific Discovery makes a then-controversial [now widely accepted (but not universally accepted, because philosophers make cats look like sheep, herdability-wise)] claim. I’ll paraphrase the claim like so:

Science does not produce knowledge by generalizing from individual experiences to theories. Rather, science is founded on the establishment of theories that prohibit classes of events, such that the reproducible occurrence of such events may falsify the theory.

Popper was primarily arguing against a school of thought called logical positivism, whose subscribers assert that a statement is meaningful if and only if it is empirically testable. But what matters to our understanding of differential diagnosis isn’t so much Popper’s absolutely brutal takedown of logical positivism (and damn is it brutal), as it is his arguments in favor of falsifiability as the central criterion of science.

I find one particular argument enlightening on the topic of falsification in differential diagnosis. It hinges on the concept of self-contradictory statements.

There’s an important logical precept named – a little hyperbolically – the Principle of Explosion. It asserts that any statement that contradicts itself (for example, “my eyes are brown and my eyes are not brown”) implies all possible statements. In other words: if you assume that a statement and its negation are both true, then you can deduce any other statement you like. Here’s how:

  1. Assume that the following two statements are true:
    1. “All cats are assholes”
    2. “There exists at least one cat that is not an asshole”
  2. Therefore the statement “Either all cats are assholes, or 9/11 was an inside job” (we’ll call this Statement A) is true, since the part about the asshole cats is true.
  3. However, if the statement “there exists at least one cat that is not an asshole” is true too (which we’ve assumed it is) and 9/11 were not an inside job, then Statement A would be false, since neither of its two parts would be true.
  4. So the only way left for Statement A to be true is for “9/11 was an inside job” to be a true statement. Therefore, 9/11 was an inside job.
  5. Wake up, sheeple.
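
If you’d rather check that logic mechanically than argue about cats, a quick truth-table sweep in Python confirms that a contradiction implies any statement Q:

from itertools import product

# "(P and not P) implies Q", written as "(not (P and not P)) or Q",
# is true for every assignment of P and Q.
print(all((not (p and not p)) or q for p, q in product([False, True], repeat=2)))
# True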


The Principle of Explosion is the crux of one of Popper’s most convincing arguments against the Principle of Induction as the basis for scientific knowledge.

It was assumed by many philosophers of science before Popper that science relied on some undefined Principle of Induction which allowed one to generalize from a finite list of experiences to a general rule about the universe. For example, the Principle of Induction would let one pass from enough statements like "I dropped a ball and it fell" and "My friend dropped a wrench and it fell" to the general rule "When things are dropped, they fall." But Popper argued against the existence of the Principle of Induction. In particular, he pointed out that:

If there were some way to prove a general rule by demonstrating the truth of a finite number of examples of its consequences, then we would be able to deduce anything from such a set of true statements.

Right? By the Principle of Explosion, a self-contradictory statement implies the truth of all statements. If we accepted the Principle of Induction, then the same evidence that proves “When things are dropped, they fall” would also prove “All cats are assholes and there exists at least one cat that is not an asshole,” which would prove every statement we can imagine.

So what does this have to do with falsification in differential diagnosis? Well, imagine you’ve come up with these hypotheses to explain some API slowness you’re troubleshooting:

Hypothesis Alpha: contention on the table cache is too high, so extra latency is introduced for each new table opened

Hypothesis Bravo: we’re hitting our IOPS limit on the EBS volume attached to the database server

There are many test results that would be compatible with Hypothesis Alpha. But unless you craft your tests very carefully, those same results will also be compatible with Hypothesis Bravo. Without a highly specific test for table cache contention, you can’t prove Hypothesis Alpha through a series of observations that agree with it.

What you can do, however, is try to quickly falsify Hypothesis Bravo by checking some graphs against some AWS configuration data. And if you do that, then Hypothesis Alpha is your best remaining guess. Now you can start treating for table cache contention on the one hand, and attempting the more time-consuming process (especially if it’s correct!) of falsifying Hypothesis Alpha on the other.

Isn’t this kind of abstract?

Haha OMG yes. It’s the most abstract. But that doesn’t mean it’s not a useful idea.

If it’s your job to troubleshoot problems, you know that tunnel vision is very real. If you focus on generating alternate hypotheses and falsifying them, you can resist tunnel vision’s allure.

A Moral Thought Experiment That Breaks My Brain

Note: This blog post is not about computers or math or DevOps. I like those things, and I write about them usually. But not today.

Sometimes I read something and I’m like “that can’t be right,” but then I think about it for a while and I can’t figure out why it’s not right. This happens to me especially often with arguments about moral intuition.

We humans make moral judgements and decisions all day, every day, without even thinking about it. It’s a central part of what makes us us.

Try this: pick a moral belief that you hold very firmly. For example, I went with, “It’s wrong to treat people with less respect because of their sexual orientation.” Now, try to imagine no longer believing that thing. Imagine that everything about your mind is the same, except you no longer hold to that one belief or its consequences. It’s hard, isn’t it? It really feels like you wouldn’t be the same person.

We know that our moral beliefs change over the course of our lives, and so that feeling of our selves being tightly coupled to those beliefs must be an illusion. But still, it’s very disturbing when a well constructed thought experiment forces us to reevaluate our basic moral intuitions.

What follows is a thought experiment that has such a brain-breaking effect on me. We can’t all be moral geniuses who cut right through the Trolley Problem like this kid:

Making People Happy, or Making Happy People?

On this week’s episode of Sam Harris’s Waking Up podcast, the Scottish moral philosopher and effective altruism wunderkind Will MacAskill gave a very brief but very brain-breaking argument. It’s had me scratching my chin intensely for a couple days now.

It seems intuitive to me that we, as humans, have no moral obligation to make sure that more humans come into existence in the future. After all, it’s not like you owe anything to hypothetical people who will never come into existence. There seems to me to be no way that such a moral obligation could arise. I think a lot of people would agree with this intuition.

Will MacAskill’s argument (and I’m not sure it’s his originally, but I heard it from him) goes like this. Imagine what we’ll call World A. World A has some number of people in it, living their lives. And let’s also imagine World B, which has all the same people as World A, plus another person named Harry. Everybody in World B is exactly as well-off as their counterparts in World A, and Harry’s well-being is at a 6 out of 10. He’s moderately well-off.

[Figure: Worlds A and B. Photo credit: Brett Swanson]

According to my intuition, there is no moral reason to prefer World B, in which Harry exists, to World A, in which Harry never came into being. If we say the total moral value of World A is a and the moral value of World B is b, I believe that a = b.

Alright, MacAskill says, now let’s introduce World C. This world is identical to World A, except it includes a person named Harry whose well-being is an 8 out of 10. He’s very happy almost all the time!

[Figure: World C]

Now, by the same logic I used before, letting World C’s total moral value equal c, I have to say that a = c. This world with a Harry at well-being level 8 is not preferable to a world in which Harry never existed.

This puts me in a bit of a pickle. Because, by straight-up math, we know that if a = b and a = c then b = c. In other words, there’s no moral reason to prefer World C to World B. But come on! World C is exactly like World B except that Harry is better off. Obviously it’s an objectively preferable world.

I feel reductio ad absurdum‘d and it makes me very uncomfortable. Does this mean I have an obligation to have kids if I think I can give them happy lives? I don’t believe that, but I’m not sure what to believe now.

[Photo: concert line (Félix Pagaimo via Flickr)]

The most important thing to understand about queues

You only need to learn a little bit of queueing theory before you start getting that ecstatic “everything is connected!” high that good math always evokes. So many damn things follow the same set of abstract rules. Queueing theory lets you reason effectively about an enormous class of diverse systems, all with a tiny number of theorems.

I want to share with you the most important fundamental fact I have learned about queues. It’s counterintuitive, but once you understand it, you’ll have deeper insight into the behavior not just of CPUs and database thread pools, but also grocery store checkout lines, ticket queues, highways – really just a mind-blowing collection of systems.

Okay. Here’s the Important Thing:

As you approach maximum throughput, average queue size – and therefore average wait time – approaches infinity.

“Wat,” you say? Let’s break it down.

Queues

A queueing system is exactly what you think it is: a processor that processes tasks whenever they’re available, and a queue that holds tasks until the processor is ready to process them. When a task is processed, it leaves the system.

[Animation: a queueing system with a single queue and a single processor. Tasks (yellow circles) arrive at random intervals and take different amounts of time to process.]

The Important Thing I stated above applies to a specific type of queue called an M/M/1/∞ queue. That blob of letters and numbers and symbols is a description in Kendall’s notation. I won’t go into the M/M/1 part, as it doesn’t really end up affecting the Important Thing I’m telling you about. What does matter, however, is that ‘∞’.

The ‘∞’ in “M/M/1/∞” means that the queue is unbounded. There’s no limit to the number of tasks that can be waiting in the queue at any given time.

Like most infinities, this doesn’t reflect any real-world situation. It would be like a grocery store with infinite waiting space, or an I/O scheduler that can queue infinitely many operations. But really, all we’re saying with this ‘∞’ is that the system under study never hits its maximum occupancy.

Capacity

The processor (or processors) of a queueing system have a given total capacity. Capacity is the number of tasks per unit time that can be processed. For an I/O scheduler, the capacity might be measured in IOPS (I/O operations per second). For a ticket queue, it might be measured in tickets per sprint.

Consider a simple queueing system with a single processor, like the one depicted in the GIF above. If, on average, 50% of the system’s capacity is in use, that means that the processor is working on a task 50% of the time. At 0% utilization, no tasks are ever processed. At 100% utilization, the processor is always busy.

Let’s think about 100% utilization a bit more. Imagine a server with all 4 of its CPUs constantly crunching numbers, or a dev team that’s working on high-priority tickets every hour of the day. Or a devOp who’s so busy putting out fires that he never has time to blog.

When a system has steady-state 100% utilization, its queue will grow to infinity. It might shrink over short time scales, but over a sufficiently long time it will grow without bound.

Why is that?

If the processor is always busy, that means there’s never even an instant when a newly arriving task can be assigned immediately to the processor. That means that, whenever a task finishes processing, there must always be a task waiting in the queue; otherwise the processor would have an idle moment. And by induction, we can show that a system that’s forever at 100% utilization will exceed any finite queue size you care to pick:

  • An idle processor with an empty queue? No good: the processor is idle, so utilization can’t be 100%.
  • A busy processor with an empty queue? No good: as soon as the current task finishes, we’re back to the case above.
  • A busy processor with one task waiting in the queue? No good: as soon as the current task finishes, we’re back to the case above, which we’ve already decided is no good.
  • You get the idea.

So basically, the statements “average capacity utilization is 100%” and “the queue size is growing without bound” are equivalent. And remember: by Little’s Law, we know that the average time a task spends in the system is directly proportional to the average queue size. That means that a queueing system with 100% average capacity utilization will have wait times that grow without bound.
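
To make “directly proportional” concrete: Little’s Law says L = λW, where L is the average number of tasks in the system, λ is the average arrival rate, and W is the average time a task spends in the system. If tickets arrive at 10 per week and there are 40 tickets in the system on average, then the average ticket spends W = L/λ = 4 weeks in the system. If L keeps growing while λ stays put, W grows right along with it.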

If all the developers on a team are constantly working on critical tickets, then the time it takes to complete tickets in the queue will approach infinity. If all the CPUs on a server are constantly grinding, load average will climb and climb. And so on.

Now you may be saying to yourself, "100% capacity utilization is a purely theoretical construct. There’s always some amount of idle capacity, so queues will never grow toward infinity forever." And you’re right. But you might be surprised at how quickly things start to look ugly as you get closer to maximum throughput.

What about 90%?

The problems start well before you get to 100% utilization. Why?

Sometimes a bunch of tasks will just happen to show up at the same time. These tasks will get queued up. If there’s not much capacity available to crunch through that backlog, then it’ll probably still be around when the next random bunch of tasks shows up. The queue will just keep getting longer until there happens to be a long enough period of low activity to clear it out.

I ran a qsim simulation to illustrate this point. It simulates a single queue that can process, on average, one task per second. I ran the simulation for a range of arrival rates to get different average capacity usages. (For the wonks: I used a Poisson arrival process and exponentially distributed processing times). Here’s the plot:

[Plot: average wait time vs. capacity utilization, for utilizations up to 96%]

Notice how things start to get a little crazy around 80% utilization. By the time you reach 96% utilization (the rightmost point on the plot), average wait times are already 20 seconds. And what if you go further and run at 99.8% utilization?

[Plot: the same simulation, extended to 99.8% utilization]

Check out that Y axis. Crazy. You don’t want any part of that.
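
If you want to poke at this behavior yourself, here’s a tiny simulation in the same spirit. It isn’t qsim, just a rough sketch with the same assumptions (Poisson arrivals, exponentially distributed processing times, a single processor):

import random

def average_time_in_system(arrival_rate, service_rate=1.0, num_tasks=200_000, seed=42):
    """Simulate a single-processor queue and return the average time a task
    spends waiting plus being processed."""
    rng = random.Random(seed)
    arrival = 0.0            # arrival time of the current task
    processor_free_at = 0.0  # when the processor finishes what it's working on
    total = 0.0

    for _ in range(num_tasks):
        arrival += rng.expovariate(arrival_rate)    # Poisson arrivals
        service = rng.expovariate(service_rate)     # exponential processing time
        start = max(arrival, processor_free_at)     # wait if the processor is busy
        processor_free_at = start + service
        total += processor_free_at - arrival

    return total / num_tasks

for utilization in (0.5, 0.8, 0.9, 0.96, 0.998):
    print(f"{utilization:.1%} utilization: "
          f"{average_time_in_system(arrival_rate=utilization):.1f} seconds on average")

For a queue like this, with a processing rate of one task per second, the theoretical average time in the system is 1/(1 - utilization) seconds. That denominator is where the hockey stick comes from.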

This is why overworked engineering teams get into death spirals. It’s also why, when you finally log in to that server that’s been slow as a dog, you’ll sometimes find that it has a load average of like 500. Queues are everywhere.

What can be done about it

To reiterate:

As you approach maximum throughput, average queue size – and therefore average wait time – approaches infinity.

What’s the solution, then? We only have so many variables to work with in a queueing system. Any solution for the exploding-queue-size problem is going to involve some combination of these three approaches:

  1. Increase capacity
  2. Decrease demand
  3. Set an upper bound for the queue size

I like number 3. It forces you to acknowledge the fact that queue size is always finite, and to think through the failure modes that a full queue will engender.

Stay tuned for a forthcoming blog post in which I’ll examine some of the common methods for dealing with this property of queueing systems. Until then, I hope I’ve helped you understand a very important truth about many of the systems you encounter on a daily basis.

3 Things That Make Encryption Easier

Almost everyone (especially in ops) knows they should be better about encrypting secret data. And yet most organizations have at least a few passwords and secret keys checked into Git somewhere.

The ideal solution would be for everyone at your company to use PGP all the time, but that is a huge pain. Encryption tools are annoying to use, and a significant time investment is required to learn to use them correctly. And if security is hard, people will always find a way to avoid it.

In the last few months, I’ve adopted 3 new technologies that make secure storage and exchange of secret information at least bearable.

1: Blackbox

StackExchange’s blackbox tool makes it easy to store encrypted data in a Git repository. First you need to import into your personal keyring all the PGP keys you want to grant access to. Then you initialize the blackbox directory structure:

dan@george:/tmp/secrets$ blackbox_initialize
Enable blackbox for this git repo? (yes/no) yes
VCS_TYPE: git
NEXT STEP: You need to manually check these in:
git commit -m'INITIALIZE BLACKBOX' keyrings /private/tmp/secrets/.gitignore
dan@george:/tmp/secrets$ git commit -m'INITIALIZE BLACKBOX' keyrings /private/tmp/secrets/.gitignore
[master 695d29a] INITIALIZE BLACKBOX
2 files changed, 3 insertions(+)
create mode 100644 .gitignore
create mode 100644 keyrings/live/blackbox-files.txt


Once you’ve initialized blackbox, you can start adding administrators, which are keys that will be granted access to the secret data in the repository:

dan@george:/tmp/secrets$ blackbox_addadmin dan@danslimmon.com
gpg: /private/tmp/secrets/keyrings/live/trustdb.gpg: trustdb created
gpg: key A9FD8CCF: public key "Dan Slimmon " imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
NEXT STEP: You need to manually check these in:
git commit -m'NEW ADMIN: dan@danslimmon.com' keyrings/live/pubring.gpg keyrings/live/trustdb.gpg keyrings/live/blackbox-admins.txt
dan@george:/tmp/secrets$ git commit -m'NEW ADMIN: dan@danslimmon.com' keyrings/live/pubring.gpg keyrings/live/trustdb.gpg keyrings/live/blackbox-admins.txt
[master (root-commit) 94108f6] NEW ADMIN: dan@danslimmon.com
3 files changed, 2 insertions(+)
create mode 100644 keyrings/live/blackbox-admins.txt
create mode 100644 keyrings/live/pubring.gpg
create mode 100644 keyrings/live/trustdb.gpg


Now you can start adding secrets securely:

dan@george:/tmp/secrets$ echo "nuclear launch code: SashaMalia44" > launchcode.txt
dan@george:/tmp/secrets$ blackbox_register_new_file launchcode.txt
========== PLAINFILE launchcode.txt
========== ENCRYPTED launchcode.txt.gpg
========== Importing keychain: START
gpg: Total number processed: 1
gpg: unchanged: 1
========== Importing keychain: DONE
========== Encrypting: launchcode.txt
========== Encrypting: DONE
========== Adding file to list.
========== CREATED: launchcode.txt.gpg
========== UPDATING REPO:
NOTE: "already tracked!" messages are safe to ignore.
[master 2a5eb65] registered in blackbox: launchcode.txt
2 files changed, 1 insertion(+)
create mode 100644 launchcode.txt.gpg
========== UPDATING VCS: DONE
Local repo updated. Please push when ready.
git push


I really like how this tool gives my team a distributed, version-controlled repository of secret information. We can even give other teams access to the repository without worrying about exposing secrets!

My team uses this tool for shared passwords and SSL private keys, and it works great. Check it out.

2: Salt

At my company, we use Salt for config management. Like most config management systems, Salt lets you decouple the values in a config file from the file itself. You make a template of the config file that will appear on the node, and you put the values in a pillar (equivalent to a Chef databag, or a Puppet… whatever it’s called in Puppet).

So instead of storing a config file like this:

[myapp]
username = app_user
password = hunter2


You store a template like this:

[myapp]
username = {{ pillar['myapp']['username'] }}
password = {{ pillar['myapp']['password'] }}


and a pillar (which is just a YAML file) like this:

myapp:
  username: app_user
  password: hunter2


Now suppose you don’t want to commit that super-secure password directly to your Salt repository. Instead, you can create a PGP keypair, and give the private key to your Salt server. Then you can encrypt the password with that key. Your pillar will now look like this:

#!yaml|gpg
myapp:
  username: app_user
  password: |
    -----BEGIN PGP MESSAGE-----
    -----END PGP MESSAGE-----


When processing your template on the target node, Salt will seamlessly decrypt the password for you.

I love that I can give non-admins access to our Salt repo, and let them submit pull requests, without worrying about leaking passwords. To learn more about this Salt functionality, you can read the documentation for salt.renderers.gpg.

3: SecretShare

Salt’s GPG renderer and blackbox are great ways to store shared secret data, but what about transmitting secrets to particular people? In most organizations, when passwords and such need to be transmitted from employee to employee, insecure methods are used. Email, chat, and Google docs are very common media for transmitting secrets. They’re all saved indefinitely, meaning that an attacker who gains access to your account can gain access to all the secret info you’ve ever sent or received.

To make transmitting secrets as easy and secure as possible, my teammate Alex created secretshare. It lets you transmit arbitrary secret data to others in your organization, and it has immense advantages over other systems:

  • Secrets are never transmitted or stored in the clear, so a snooper can’t even read them if they manage to compromise the Amazon S3 bucket in which they’re stored.
  • Secrets are deleted from S3 after 24-48 hours, so a snooper can’t go back through the recipient’s or sender’s communication history later and retrieve them.
  • Secrets are encrypted with a one-time-use key, so a snooper can’t use the key from one secret to steal another.
  • Users don’t need Amazon AWS credentials, so a snooper can’t steal those credentials from a user.

Right now, secretshare only exists as a command-line utility, but we’re very close to having a web UI as well, which will make it even easier for non-technical people to use.

 

Security’s worst enemy is bad UX. It’s critical to make the most secure path also the easiest path. That’s what these three solutions aim to do, and they’ve made me feel much more comfortable with the security of secret data at my company. I hope they can do the same for you.

Start a Paper Club!

A few months ago, a work friend and I were commiserating about how we never make time to read any research. There’s all this fascinating, challenging stuff being written, we agreed, and we’re missing it all.

When a few more coworkers chimed in, saying they’d also like to push themselves to read more academic literature, we realized we were onto something. The next day, the Exosite Paper Club was born.

It’s been really fun to organize and participate in Paper Club these last few months. I’ve learned a lot, not just about the fields on which our reading material focused, but also about my coworkers and my company. Now I want to share some of my excitement about this project and encourage you to start a Paper Club at your own company.

What Paper Club is and does

Paper Club is a group that meets every other week over lunch. We all prepare by reading a particular academic paper, and we discuss it in a free-form group session. Sometimes we teach each other about concepts from the paper, sometimes we brainstorm ways to apply ideas to our work at Exosite, and sometimes we just chat.

The paper for each session is selected by a participant from the previous session, and can be about any topic from project management to statistics to psychology to medicine. Readers of any skill level in the paper’s subject matter are warmly welcomed at meetings, but you can always skip a session if the topic doesn’t interest you. All we ask is that, when the material is hard for you, you push yourself and try to grasp it anyway.

We want a wide variety of participants, from DevOps to UI to sales & marketing, but we also want to read papers that are detailed enough to be challenging. To these ends, we have three guidelines for submitting a paper:

  1. Papers should be accessible. Try to pick papers at a technical level such that at least some of your peers will be able to understand most of the content. You want the session to be a discussion, not a lecture.
  2. Papers should be challenging. While accessibility is important, you don’t want your peers to have too easy a time. The best conversations happen when people are forced to push themselves a bit. It’s okay to suggest papers that are only accessible to those with a particular academic background, as long as you think the paper will create a good discussion with some subset of your coworkers.
  3. Papers should be deep. The best discussions tend to come from papers that dive pretty deep into a topic. Reviews and textbook chapters can be interesting, but we tend to prefer papers that go into detail on a specific topic.

What we’ve read so far

We’ve been doing Paper Club at Exosite since mid-November, and we’ve discussed 8 papers to date. I thought we’d be reading mostly computer science papers, but I couldn’t be happier with the variety we’ve gotten!

Here are a few of the papers that produced the most interesting discussions:

  • Confidence in Judgment: Persistence of the Illusion of Validity. This classic behavioral study from 1978 starts from the well-established observation that people (even, or especially, experts) are usually more confident in their judgments than they should be. The authors build a simple but powerful model of the mental processes responsible for this overconfidence, and present some suggestions for systematically curtailing it.
  • On Bullshit. If you’re like most engineers, you’re utterly allergic to bullshit. But have you ever thought about what makes bullshit bullshit? How it’s different from outright lying, and how it’s different from normal speech? This famous philosophical essay tries to answer these questions, and it provides some valuable insights for someone trying to excise bullshit from their life.
  • Why Johnny Can’t Encrypt. This 1999 analysis of PGP 5.0 usability raises some points that are crucial for anyone trying to design intuitive user experiences. The paper led us into a super productive critique of the metaphorical structure used in our own company’s documentation.

How it’s going

I have been really happy with the intellectual diversity of our Paper Club participants. Even when the paper under discussion is an especially wonky one, we get project managers, devs from all different software teams, salespeople, marketers, managers, and occasionally even company executives. Seeing all these people engage in thoughtful dialogue on such a wide variety of topics is inspiring. It takes me out of my DevOps bubble and reminds me that I work with some very interesting and smart folks.

I’d like to see how far we can stretch ourselves. Selfishly, I’d like us to read a math or linguistics paper some time, and help each other through it. I think that would be very rewarding.

Overall, I’m very glad we have a Paper Club here. I’d recommend it to anyone who likes people and/or learning. If you start one at your company, let me know how it goes!

The 10 Best Books I Read In 2015

I read 56 books in 2015, which is more than I’d read in the previous 5 years combined. Turns out books are pretty cool. Who knew?

Here are the 10 books I liked the most this year, in no particular order.

1. Metaphors We Live By (1980)

I got real into linguistics this year, and this book offers an interesting perspective on semantics. We usually think of metaphor as a poetic device. But this book argues that the whole human conceptual system is based on metaphor! According to George Lakoff, every concept we understand (short of concepts corresponding to our direct experience) is understood by analogy with a more concrete concept.

There are some very intriguing ideas in here. I didn’t necessarily buy (or even understand) them all, but I’m really glad I read this book.

2. The One World Schoolhouse: Education Reimagined (2012)

This thought-provoking book on education was written by the Khan Academy guy. He presents a lot of research pointing toward the hypothesis that self-paced “mastery learning” is much more broadly effective than the contemporary American model of arbitrarily delineated, one-speed-fits-all classes.

Discussing this book with my friends (who tend to be smart academic underachievers) really brought home the point that our education system underserves anyone who understands a concept more slowly, or more quickly, than the rest of the class.

3. The Orphan Master’s Son (2012)

I’m endlessly fascinated by North Korea, and this novel stoked my fascination. That alone would probably have been enough, but it’s also super well written. I found it beautiful and sad and gripping the whole way through.

4. The Immortal Life of Henrietta Lacks (2010)

The author of this book tracked down the family of the long-deceased woman whose incredibly robust tumor cells became the most widely studied strain of human tissue in the world. Her cells have been used by scientists to make countless discoveries in genetics and immunology.

Through interviews with Henrietta Lacks’ descendants, all of whom still live in abject poverty, Rebecca Skloot raises important and nuanced questions about the interplay between science and culture and race. It’d be hard to read this book and still think of science as “pure” or “objective.” Scientists aspire to objectivity, but they’re just as boxed in by their cultural preconceptions as anyone else.

5. Red Rising (2014)

This is a super fun young-adult novel about badass vicious teens trying to kill each other. The premise is pretty similar to that of The Hunger Games, but I found the character development and storytelling way better. And the second book in the trilogy, Golden Son, is awesome too! The third one is coming out very soon, and I can’t wait.

Don’t expect any deep truths or transcendent prose. This book is just really fun to read.

6. The Road (2006)

Hey, speaking of books that are really fun to read, this book is not one. It’s a painfully stark novel about a father and son trying to survive in post-apocalyptic North America. I definitely wouldn’t call it a “feel-good” book.

But despite the author’s unremittingly bleak vision, the relationship between the man and the boy (those are the only names given for the characters) is very touching. A friend of mine claims to use this book as a parenting handbook of sorts. I don’t know if I’d go that far, but I do see what he means.

7. History in Three Keys: The Boxers as Event, Experience, and Myth (1997)

My favorite non-fiction book of the year. On one level, this is a history book about the Boxer Rebellion: a grimly compelling episode in its own right. But, moreover, this book is about history itself. Paul A. Cohen describes the Boxer Rebellion through 3 different lenses – experience, history, and myth – each of which represents part of the way we engage with the past.

I came away from this book with a newfound appreciation for the role of the historian in creating history, and a newfound skepticism about the idea that history is composed of objective facts and dates.

[I did a lightning talk (transcript in the first slide’s speaker notes) relating what I learned from this book to post-mortem analysis in DevOps.]

8. The Martian Chronicles (1950)

I tried to read The Martian Chronicles in high school, and I was like “Pfft! This isn’t science fiction. The science doesn’t make any sense.” I’ve read a lot more sci-fi now, so I decided to give this classic another shot.

What I’ve learned since high school is that the sci-fi I like most isn’t about cool technology or mind-bending thought experiments (although those can add some nice seasoning to a story). It’s about humanity: what continues to define us as human, even when the things we think of basic to our humanity – Earth, language, gender, our bodies – are stripped away?

Bradbury knew exactly what he was on about. After reading these stories with the human question in mind, I finally understand why he’s been so influential in science fiction.

9. The New Jim Crow: Mass Incarceration in the Age of Colorblindness (2009)

I already had strong objections to the War on Drugs on account of the senseless imprisonment of people who’ve done nothing harmful. But this book shows how drug arrest quotas, mandatory minimum sentences, and felon profiling work together as an insidious system to maintain white supremacy in America.

I’d recommend this book to any American. We all need to understand that racial oppression didn’t go away on July 2, 1964; it just adopted more clever camouflage.

10. Anathem (2008)

This is now my favorite Neal Stephenson book. It brings his trademark mathyness and detail to a truly engaging story in a richly-imagined universe.

Like most Neal Stephenson books, it’s not for everyone. A lot of time is spent on epistemological ruminations and physics. I love that shit, so I loved this book all the way from page 1 to page 937.

 

Welp, those are some books I loved this year.

But I want to see what you’re reading, so I can find more books to love. Friend-poke me on GreatBooks!

Troubleshooting On A Distributed Team Without Losing Common Ground

I work on a team that fixes complex systems under time pressure. My teammates have different skill sets, different priorities, and different levels of expertise. But we all have to troubleshoot and solve problems together.

This is really hard to do effectively. Fortunately for us in the relatively new domain of DevOps, situations like ours have been studied extensively in the last couple decades. We can use the results of this research to inform our own processes and automation for troubleshooting.

One of the most important concepts to emerge from recent teamwork research, common ground, helps us understand why collaborative troubleshooting breaks down over time. This breakdown leads to wasted effort and mistakes, even if the team maintains constant communication in a chat room. But if we extend ChatOps by drawing on some ideas from medical diagnosis, we can make troubleshooting way easier without losing the benefits of fluid team conversation.

Common Ground

Ergonomics researchers D.D. Woods and Gary Klein (the latter of whom I wrote about in What makes an expert an expert?) published a phenomenally insightful paper in 2004 called Common Ground and Coordination in Joint Activity. In it, they describe a particular kind of failure that occurs when people engage in joint cognition: the Fundamental Common Ground Breakdown. Once you learn about the Fundamental Common Ground Breakdown, you see it everywhere. Here’s how the Woods/Klein paper describes the FCGB:

  • Party A believes that Party B possesses some knowledge
  • Party B doesn’t have this knowledge, and doesn’t know they’re supposed to have it.
  • Therefore, they don’t request it.
  • This lack of a request confirms to Party A that Party B has the knowledge.

When this happens, Party A and Party B lose common ground, which Woods & Klein define as “pertinent knowledge, beliefs and assumptions that are shared among the involved parties.” The two parties start making incorrect assumptions about each other’s knowledge and beliefs, which causes their common ground to break down further and further. Eventually they reach a coordination surprise, which forces them to re-synchronize their understanding of the coordinated activity:

[Figure from Common Ground and Coordination in Joint Activity]

Seriously, the FCGB is everywhere. Check out the paper.

I’m especially interested in one particular area where an understanding of common ground can help us do better teamwork: joint troubleshooting.

Common Ground Breakdown in Chatroom Troubleshooting

Everybody’s into ChatOps these days, and I totally get it. When a critical system is broken, it’s super useful to get everybody in the same room and hash it out. ChatOps allows everybody to track progress, coordinate activities, and share results. And it also helps to have lots of different roles represented in the room:

  • Operations folks, to provide insight into the differences between the system’s normal behavior and its current state
  • Software engineers, who bring detailed knowledge of the ways subsystems are supposed to work
  • Account managers and product managers and support reps: not just for their ability to translate technical jargon into the customer’s language for status reporting, but also because their understanding of customer needs can help establish the right priorities
  • Q.A. engineers, who can rule out certain paths of investigation early with their intuition for the ways in which subsystems tend to fail

The process of communicating across role boundaries isn’t just overhead: it helps us refine our own understanding, look for extra evidence, and empathize with each other’s perspectives.

But ChatOps still offers a lot of opportunities for common ground breakdown. The FCGB can occur whenever different people interpret the same facts in different ways. Interpretations can differ for many different reasons:

  • Some people have less technical fluency in the system than others. A statement like “OOM killer just killed Cassandra on db014” might change an ops engineer’s whole understanding of the problem, but such a shift could fly under the radar of, say, a support engineer.
  • Some people are multitasking. They may have a stake in the troubleshooting effort but be unable to internalize every detail from the chat room in real time.
  • Some people are co-located. They find it easier to discuss the problem using mouth words or by physically showing each other graphs, thereby adjusting their own shared understanding without transmitting these adjustments to the rest of the team.
  • Some people enter the conversation late, or leave for a while and come back. These people will miss common ground changes that happen during their absence.

These FCGB opportunities all become more pronounced as the troubleshooting drags on and folks become tired, bored, and confused. And when somebody says they’ve lost track of common ground, what do we do? Two main things: we provide a summary of recent events and let the person ask questions until they feel comfortable; or we tell them to read the backlog.

The Q&A approach has serious drawbacks. First of all, it requires somebody knowledgeable to stop what they’re doing and summarize the situation. If people are frequently leaving and entering the chat room, you end up with a big distraction. Second of all, it leaves lots of room for important information to get missed. The Fundamental Common Ground Breakdown happens when somebody doesn’t know what to ask, so fixing it with a Q&A session is kind of silly.

The other way people catch up with the troubleshooting effort is by reading the backlog. This is even more inefficient than Q&A. Here’s the kind of stuff you have to dig through when you’re reading a chat backlog:

[Screenshot: 18 messages of a troubleshooting conversation in HipChat]

There’s a lot to unpack there – and that’s just 18 messages! Imagine piecing together a troubleshooting effort that’s gone on for hours, or days. It would take forever, and you’d still make a lot of mistakes. It’s just not a good way to preserve common ground.

So what do we need?

Differential Diagnosis as an Engine of Common Ground

I’ve blogged before about how much I love differential diagnosis. It’s a formalism that doctors use to keep the diagnostic process moving in the right direction. I’ve used it many times in ops since I learned about it. It’s incredibly useful.

In differential diagnosis, you get together with your team in front of a whiteboard – making sure to bring together people from a wide variety of roles – and you go through a cycle of 3 steps:

  1. Identify symptoms. Write down all the anomalies you’ve seen. Don’t try to connect the dots just yet; just write down your observations.
  2. Generate hypotheses. Brainstorm explanations for the symptoms you’ve observed. This is where it really helps to have a good cross-section of roles represented. The more diverse the ideas you write down, the better.
  3. Test hypotheses. Now that you have a list of things that might be causing the problem, you start narrowing down that list by coming up with a test that will prove or disprove a certain hypothesis.

Once you’re done with step #3, you can cross out a hypothesis or two. Then you head back to step #1 and repeat the cycle until the problem is identified.
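
To make that concrete, here’s roughly what the whiteboard amounts to as a data structure. This is just an illustration in Python, not tied to any particular tool, and the example entries are invented:

    # The whole whiteboard is three lists. The team agrees to treat this,
    # not the chat scrollback, as the canonical state of the investigation.
    board = {
        "symptoms": [],     # step 1: raw observations, no interpretation yet
        "hypotheses": [],   # step 2: candidate explanations for those symptoms
        "tests": [],        # step 3: checks that will prove or disprove a hypothesis
    }

    def add(kind, text):
        board[kind].append({"text": text, "crossed_out": False})

    def cross_out(kind, index):
        # A falsified hypothesis or finished test stays visible, but it's out of play.
        board[kind][index]["crossed_out"] = True

    # One turn of the cycle:
    add("symptoms", "API latency tripled at 14:05")
    add("hypotheses", "database connection pool exhausted")
    add("tests", "check pool utilization on the primary database")
    cross_out("hypotheses", 0)   # the test ruled it out; back to step 1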

A big part of the power of differential diagnosis is that it’s written down. Anybody can walk into the room, read the whiteboard, and understand the state of the collaborative effort. It cuts down on redundant Q&A, because the most salient information is summarized on the board. It eliminates inefficient chat log reading – the chat log is still there, but you use it to search for specific pieces of information instead of reading it like a novel. But, most importantly, differential diagnosis cuts down on fundamental common ground breakdowns, because everybody has agreed to accept what’s on the whiteboard as the canonical state of troubleshooting.

Integrating Differential Diagnosis with ChatOps

We don’t want to lose the off-the-cuff, conversational nature of ChatOps. But we need a structured source of truth to provide a point-in-time understanding of the effort. And we (read: I) don’t want to write a whole damn software project to make that happen.

My proposal is this: use Trello for differential diagnosis, and integrate it with the chat through a Hubot plugin. I haven’t written this plugin yet, but it shouldn’t take long (I’ll probably fork hubot-trello and start from there). That way people could update the list of symptoms, hypotheses, and tests on the fly, and they’d always have a central source of common ground to refer to.

In the system I envision, the chat room conversation would be peppered with statements like:

Geordi: hubot symptom warp engine going full speed, but ship not moving

Hubot: Created (symp0): warp engine going full speed, but ship not moving

Beverly: hubot falsify hypo1

Hubot: Falsified (hypo1): feedback loop between graviton emitter and graviton roaster

Geordi: hubot finish test1

Hubot: Marked (test1) finished: reboot the quantum phase allometer

And the resulting differential diagnosis board, containing the agreed-upon state of the troubleshooting effort, would just be three lists of cards – symptoms, hypotheses, and tests – with falsified hypotheses and finished tests labeled to indicate that they’re no longer in play.
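
Since the plugin doesn’t exist yet, here’s only a rough sketch of the glue logic, in plain Python rather than a real Hubot script. None of this uses the actual Trello or hubot-trello APIs, and the names are illustrative; the point is how little state the bot has to keep:

    import re

    # Hypothetical chat-command handling for a differential diagnosis bot.
    # Card IDs (symp0, hypo1, test1, ...) index into three lists, mirroring
    # the three columns of the board.
    board = {"symp": [], "hypo": [], "test": []}
    KINDS = {"symptom": "symp", "hypothesis": "hypo", "test": "test"}

    def handle(message):
        m = re.match(r"hubot (symptom|hypothesis|test) (.+)", message)
        if m:
            kind = KINDS[m.group(1)]
            board[kind].append(m.group(2))
            return "Created (%s%d): %s" % (kind, len(board[kind]) - 1, m.group(2))
        m = re.match(r"hubot (falsify|finish) (symp|hypo|test)(\d+)", message)
        if m:
            verb, kind, i = m.group(1), m.group(2), int(m.group(3))
            label = "Falsified" if verb == "falsify" else "Marked finished"
            return "%s (%s%d): %s" % (label, kind, i, board[kind][i])

    print(handle("hubot symptom warp engine going full speed, but ship not moving"))
    print(handle("hubot hypothesis feedback loop between graviton emitter and graviton roaster"))
    print(handle("hubot falsify hypo0"))

Cards never get deleted when they’re falsified or finished; they just get labeled, the same way you’d cross something out on a whiteboard without erasing it.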

What do you think?

Let me know if your organization already has something like this, or has tried a formal differential diagnosis approach before. I’d love to read some observations about your team’s process in the comments. Also, VictorOps has a pretty neat suite of tools that approaches what I have in mind, but I still think a more conceptually structured (not to mention free) solution could be very useful.

Automation is most effective when it’s a team player. By using automation to preserve common ground, we can solve problems faster and more thoroughly, with less frustration and less waste. And that all sounds pretty good to me.

Minding Your Pees And Queues

A couple weeks ago, my wife & I went to Basilica Block Party, a local music festival. It was a good time, and OH MANS you have to see Fitz & The Tantrums live. Their sax player is a hero unit.

Anyhoo, we walked over to the porta-potties between sets. The lines were about 8–10 people long. And my spouse suggested an intriguing strategy for minimizing her wait time. I will call this strategy The Strategy:

The Strategy: All else being equal, get in the line with the most men.

The reasoning behind The Strategy is obvious: women take longer to pee than men, so if 2 queues are the same length, then the faster-moving queue should be the one with fewer women. It’s intuitive, but due to my current obsession with queueing theory, I became intensely interested in the strategy’s implications. In particular, I started to wonder things like:

  • How much can you expect to shave off your wait time by following The Strategy?
  • How does the effectiveness of The Strategy vary with its popularity? Little’s Law tells us that the overall average wait time won’t be affected, but is The Strategy still effective if 10% of the crowd is using it? 25%? 90%?
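
(If Little’s Law is new to you: it says that, for a stable queueing system, the long-run average number of people in the system equals the arrival rate times the average time each person spends in it, L = λW. The intuition is that a queue-choosing rule changes neither how fast people arrive nor how long anyone takes to pee, so it can’t shrink the overall average wait; at best it shuffles the waiting around between people.)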

And then I thought to myself “I could answer these questions through the magic of computation!”

qsim

Lately I’ve been seeing queueing systems everywhere I go, so I figured it’d be worthwhile to write a generic queueing system simulator to satisfy my curiosity. That’s what I did. It’s called qsim.

To quote the README,

A queueing system in qsim processes arbitrary jobs and is composed of 5 pieces:

  • The arrival process controls how often jobs enter the system.
  • The arrival behavior defines what happens when a new job arrives. When the arrival process generates a new job, the arrival behavior either sends it straight to a processor or appends it to a queue.
  • Queues are simply holding pens for jobs. A system may have many queues associated with different processors.
  • A queueing discipline defines the relationship between queues and processors. It’s responsible for choosing the next job to process and assigning that job to a processor.
  • Processors are the entities that remove jobs from the system. A processor may take differing amounts of time to process different jobs. Once a job has been processed, it leaves the queueing system.

qsim provides a framework for implementing these building blocks and putting them together, and it also provides hooks that can be used to gather data about a simulation as it runs. I’m really looking forward to using qsim to gain insight into all sorts of different systems.
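
To give a feel for how those pieces fit together, here’s a toy single-queue, single-processor system in plain Python. This is not qsim’s actual API (qsim is more general than a single FIFO queue); it’s just the shape of the thing:

    import random

    def interarrival():                      # arrival process: time between job arrivals
        return random.expovariate(1 / 30.0)  # one arrival every 30 seconds on average

    def service_time():                      # processor: time to process a single job
        return max(1.0, random.gauss(25, 5))

    def simulate(n_jobs):
        clock = 0.0              # current time
        processor_free_at = 0.0  # when the lone processor finishes its current job
        waits = []
        for _ in range(n_jobs):
            clock += interarrival()          # a new job arrives...
            # ...and the arrival behavior, queue, and FIFO queueing discipline all
            # collapse into one line here: the job waits until the processor is free.
            start = max(clock, processor_free_at)
            waits.append(start - clock)      # time the job spent sitting in the queue
            processor_free_at = start + service_time()
        return sum(waits) / len(waits)

    print("average wait: %.1f seconds" % simulate(100000))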

For now: porta-potties.

The porta-potty simulation

You’ll recall The Strategy:

The Strategy: All else being equal, get in the line with the most men.

To determine the effectiveness of The Strategy, I implemented PortaPottySystem using qsim. Here are some of the assumptions I made:

  • People arrive very frequently, but if all the queues are too long (8 people) they leave.
  • There are 15 porta-potties, each with its own queue. Once a person enters a queue, they stay in that queue until the corresponding porta-potty is vacant.
  • Shockingly, I couldn’t find any reliable data on the empirical distribution of pee times by sex, so I chose a normal distribution with a mean of 40 seconds for men and 60 seconds for women.
  • Most people just pick a random queue to join (as long as it’s no longer than the shortest queue), but some people use The Strategy of getting into the queue with the highest man:woman ratio (again, as long as it’s no longer than the shortest queue).
  • Nobody’s going number 2 because that’s gross.
  • Everyone is either a man or a woman. I know all about gender being a spectrum, and if you want to submit a pull request that smashes the gender binary, please do.
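
The queue-choosing rule is the interesting part, so here’s a standalone sketch of it. This isn’t the actual PortaPottySystem code and the names are made up; it’s just the decision a person makes when they walk up:

    import random

    # Each queue is a list of the people waiting in it; a person is just "m" or "f".
    def pick_queue(queues, uses_strategy):
        shortest = min(len(q) for q in queues)
        # Everybody restricts themselves to queues no longer than the shortest one,
        # i.e. the queues currently tied for shortest.
        candidates = [q for q in queues if len(q) == shortest]
        if uses_strategy:
            # The Strategy: among those, join the queue with the most men. Since the
            # candidates are all the same length, that's also the highest man:woman ratio.
            return max(candidates, key=lambda q: q.count("m"))
        return random.choice(candidates)

    # Example: three queues tied at length 3; a Strategy user joins the one with two men.
    queues = [["f", "f", "m"], ["m", "f", "m"], ["f", "m", "f"]]
    print(pick_queue(queues, uses_strategy=True))   # -> ['m', 'f', 'm']

To vary The Strategy’s popularity, you just turn uses_strategy on for each arriving person with some probability; that probability is the knob the experiments below turn.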

The first question I wanted to answer was: how does using The Strategy affect your wait time?

To answer this question, I ran a simulation where the probability of a given person deciding to use The Strategy is 1%. The other 99% of people simply join one of the shortest queues without regard to its man:woman ratio. I ran 20 simulations, each for the equivalent of 2 weeks, and came up with these wait time distributions:

[Chart: wait time distributions for people using The Strategy vs. people who aren’t]

More of a box-plot person? I hear ya:

[Chart: the same wait time data as box plots]

The Strategy is definitely not a huge win here. On average, your wait time will be reduced by about 10–15 seconds (4–6%) if you use The Strategy. Still, it’s not nothing, right?

Now how does the benefit of using The Strategy vary with its popularity? This is actually really interesting. I never would have guessed it. But the data shows that you should always use The Strategy, even if everybody else is using it too.

Here I’ve charted average wait times against the proportion of people using the strategy. Colors are more prominent where the data set in question is large (and therefore heavily influences the overall average):

[Chart: average wait time vs. the proportion of people using The Strategy]

You’ll notice that the overall average (dark green) does not vary with Strategy popularity. This is good news, because otherwise we’d be violating Little’s Law, which would probably just mean our simulation was broken.

The interesting thing here is that the benefit of using The Strategy shrinks roughly linearly as its popularity increases, but at the same time a penalty accrues for not using it. If everybody else in the system is using The Strategy, and you come along and decide not to, you can expect to wait about 10 seconds longer than everybody else. Therefore using The Strategy is unequivocally better than not using The Strategy.

Unless of course you don’t really care about those 10 seconds, in which case you should do whatever you want.