Falsifiability: why you rule things out, not in

This June, I had the honor of speaking at O’Reilly Velocity 2016 in Santa Clara. My topic was Troubleshooting Without Losing Common Ground, which I’ve written about and written about before that too.

I was pretty happy with my talk, especially the Star Trek: The Next Generation vignette in the middle. It was a lot of ideas to pack into a single talk, but I think a lot of people got the point. However, I did give a really unsatisfactory answer (30m46s) to the first question I received. The question was:

In the differential diagnosis steps, you listed performing tests to falsify assumptions. Are you borrowing that from medicine? In tech are we only trying to falsify assumptions, or are we sometimes trying to validate them?

I didn’t have a real answer at the time, so I spouted some bullshit and moved on. But it’s a good question, and I’ve thought more about it, and I’ve come up with two (related) answers: a common-sense answer and a pretentious philosophical answer.

The Common Sense Answer

My favorite thing about differential diagnosis is that it keeps the problem-solving effort moving. There’s always something to do. If you’re out of hypotheses, you come up with new ones. If you finish a test, you update the symptoms list. It may not always be easy to make progress, but you always have a direction to go, and everybody stays on the same page.

But when you seek to confirm your hypotheses, rather than to falsify others, it’s easy to fall victim to tunnel vision. That’s when you fixate on a single idea about what could be wrong with the system. That single idea is all you can see, as if you’re looking at it through a tunnel whose walls block everything else from view.

Tunnel vision takes that benefit of differential diagnosis – the constant presence of a path forward – and negates it. You keep running tests to try to confirm your hypothesis, but you may never prove it. You may just keep getting test results that are consistent with what you believe, but that are also consistent with an infinite number of hypotheses you haven’t thought of.

A focus on falsification instead of verification can be seen as a guard against tunnel vision. You can’t get stuck on a single hypothesis if you’re constrained to falsify other ones. The more alternate hypotheses you manage to falsify, the more confident you get that you should be treating for the hypotheses that might still be right.

Now, of course, there are times when it’s possible to verify your hunch. If you have a highly specific test for a problem, then by all means try it. But in general it’s helpful to focus on knocking down hypotheses rather than propping them up.

The Pretentious Philosophical Answer

I just finished Karl Popper’s ridiculously influential book The Logic of Scientific Discovery. If you can stomach a dense philosophical tract, I would highly recommend it.

Karl “Choke Right On It, Logical Positivism” Popper

Published in 1959 – but based on Popper’s earlier book Logik der Forschung from 1934 – The Logic Of Scientific Discovery makes a claim that was controversial at the time and is now widely (though not universally, because philosophers make cats look like sheep, herdability-wise) accepted. I’ll paraphrase the claim like so:

Science does not produce knowledge by generalizing from individual experiences to theories. Rather, science is founded on the establishment of theories that prohibit classes of events, such that the reproducible occurrence of such events may falsify the theory.

Popper was primarily arguing against a school of thought called logical positivism, whose subscribers assert that a statement is meaningful if and only if it is empirically testable. But what matters to our understanding of differential diagnosis isn’t so much Popper’s absolutely brutal takedown of logical positivism (and damn is it brutal), as it is his arguments in favor of falsifiability as the central criterion of science.

I find one particular argument enlightening on the topic of falsification in differential diagnosis. It hinges on the concept of self-contradictory statements.

There’s an important logical precept named – a little hyperbolically – the Principle of Explosion. It asserts that any statement that contradicts itself (for example, “my eyes are brown and my eyes are not brown”) implies all possible statements. In other words: if you assume that a statement and its negation are both true, then you can deduce any other statement you like. Here’s how:

  1. Assume that the following two statements are true:
    1. “All cats are assholes”
    2. “There exists at least one cat that is not an asshole”
  2. Therefore the statement “Either all cats are assholes, or 9/11 was an inside job” (we’ll call this Statement A) is true, since the part about the asshole cats is true.
  3. However, if the statement “there exists at least one cat that is not an asshole” is true too (which we’ve assumed it is) and 9/11 were not an inside job, then Statement A would be false, since neither of its two parts would be true.
  4. So the only way left for Statement A to be true is for “9/11 was an inside job” to be a true statement. Therefore, 9/11 was an inside job.
  5. Wake up, sheeple.
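
If you prefer symbols to cats, here's the same derivation in compact form, where P is "all cats are assholes" and Q is "9/11 was an inside job" (this is standard propositional logic, nothing specific to Popper):

  1. P (assumed)
  2. not-P (assumed)
  3. P or Q (from 1, by or-introduction; this is Statement A)
  4. Q (from 2 and 3, by disjunctive syllogism)

Since Q can be any statement whatsoever, the contradiction "proves" everything.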


The Principle of Explosion is the crux of one of Popper’s most convincing arguments against the Principle of Induction as the basis for scientific knowledge.

It was assumed by many philosophers of science before Popper that science relied on some undefined Principle of Induction which allowed one to generalize from a finite list of experiences to a general rule about the universe. For example, the Principle of Induction would allow one to pass from enough statements like “I dropped a ball and it fell” and “My friend dropped a wrench and it fell” to the rule “When things are dropped, they fall.” But Popper argued against the existence of the Principle of Induction. In particular, he pointed out that:

If there were some way to prove a general rule by demonstrating the truth of a finite number of examples of its consequences, then we would be able to deduce anything from such a set of true statements.

Right? By the Principle of Explosion, a self-contradictory statement implies the truth of all statements. If we accepted the Principle of Induction, then the same evidence that proves “When things are dropped, they fall” would also prove “All cats are assholes and there exists at least one cat that is not an asshole,” which would prove every statement we can imagine.

So what does this have to do with falsification in differential diagnosis? Well, imagine you’ve come up with these hypotheses to explain some API slowness you’re troubleshooting:

Hypothesis Alpha: contention on the table cache is too high, so extra latency is introduced for each new table opened

Hypothesis Bravo: we’re hitting our IOPS limit on the EBS volume attached to the database server

There are many test results that would be compatible with Hypothesis Alpha. But unless you craft your tests very carefully, those same results will also be compatible with Hypothesis Bravo. Without a highly specific test for table cache contention, you can’t prove Hypothesis Alpha through a series of observations that agree with it.

What you can do, however, is try to quickly falsify Hypothesis Bravo by checking some graphs against some AWS configuration data. And if you do that, then Hypothesis Alpha is your best remaining guess. Now you can start treating for table cache contention on the one hand, and on the other, attempting the more time-consuming process (especially time-consuming if it’s correct!) of falsifying Hypothesis Alpha.
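
To make that concrete, here's a rough sketch of the kind of quick check that could falsify Hypothesis Bravo. It's Python with boto3, the volume ID is made up, and the one-hour window, five-minute buckets, and "nowhere near the limit" cutoff are all arbitrary; treat it as an illustration, not a recipe.

# Sketch: compare recent EBS IOPS usage against the volume's provisioned IOPS.
from datetime import datetime, timedelta, timezone

import boto3

VOLUME_ID = "vol-0123456789abcdef0"  # hypothetical

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

# Provisioned IOPS, if the volume reports one.
volume = ec2.describe_volumes(VolumeIds=[VOLUME_ID])["Volumes"][0]
provisioned_iops = volume.get("Iops")

# Busiest 5-minute bucket of read ops and of write ops over the last hour.
end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)
peak = {}
for metric in ("VolumeReadOps", "VolumeWriteOps"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EBS",
        MetricName=metric,
        Dimensions=[{"Name": "VolumeId", "Value": VOLUME_ID}],
        StartTime=start,
        EndTime=end,
        Period=300,
        Statistics=["Sum"],
    )
    peak[metric] = max((dp["Sum"] for dp in stats["Datapoints"]), default=0.0)

# Rough upper bound on observed IOPS: busiest read bucket plus busiest write bucket.
peak_iops = (peak["VolumeReadOps"] + peak["VolumeWriteOps"]) / 300

print("provisioned IOPS:", provisioned_iops, "| peak observed IOPS:", round(peak_iops))
if provisioned_iops and peak_iops < 0.5 * provisioned_iops:
    print("Nowhere near the limit; Hypothesis Bravo looks falsified.")

If the peak is hugging the provisioned number, Bravo survives and deserves the attention; if it isn't, you've knocked it down cheaply and can move on.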

Isn’t this kind of abstract?

Haha OMG yes. It’s the most abstract. But that doesn’t mean it’s not a useful idea.

If it’s your job to troubleshoot problems, you know that tunnel vision is very real. If you focus on generating alternate hypotheses and falsifying them, you can resist tunnel vision’s allure.

A Moral Thought Experiment That Breaks My Brain

Note: This blog post is not about computers or math or DevOps. I like those things, and I write about them usually. But not today.

Sometimes I read something and I’m like “that can’t be right,” but then I think about it for a while and I can’t figure out why it’s not right. This happens to me especially often with arguments about moral intuition.

We humans make moral judgements and decisions all day, every day, without even thinking about it. It’s a central part of what makes us us.

Try this: pick a moral belief that you hold very firmly. For example, I went with, “It’s wrong to treat people with less respect because of their sexual orientation.” Now, try to imagine no longer believing that thing. Imagine that everything about your mind is the same, except you no longer hold to that one belief or its consequences. It’s hard, isn’t it? It really feels like you wouldn’t be the same person.

We know that our moral beliefs change over the course of our lives, and so that feeling of our selves being tightly coupled to those beliefs must be an illusion. But still, it’s very disturbing when a well constructed thought experiment forces us to reevaluate our basic moral intuitions.

What follows is a thought experiment that has such a brain-breaking effect on me. We can’t all be moral geniuses who cut right through the Trolley Problem like this kid:

Making People Happy, or Making Happy People?

On this week’s episode of Sam Harris’s Waking Up podcast, the Scottish moral philosopher and effective altruism wunderkind Will MacAskill gave a very brief but very brain-breaking argument. It’s had me scratching my chin intensely for a couple days now.

It seems intuitive to me that we, as humans, have no moral obligation to make sure that more humans come into existence in the future. After all, it’s not like you owe anything to hypothetical people who will never come into existence. There seems to me to be no way that such a moral obligation could arise. I think a lot of people would agree with this intuition.

Will MacAskill’s argument (and I’m not sure it’s his originally, but I heard it from him) goes like this. Imagine what we’ll call World A. World A has some number of people in it, living their lives. And let’s also imagine World B, which has all the same people as World A, plus another person named Harry. Everybody in World B is exactly as well-off as their counterparts in World A, and Harry’s well-being is at a 6 out of 10. He’s moderately well-off.

[Figure: Worlds A and B. Photo credit: Brett Swanson]

My intuition says there is no moral reason to prefer World B, in which Harry exists, to World A, in which Harry never came into being. If we say the total moral value of World A is a and the moral value of World B is b, I believe that a = b.

Alright, MacAskill says, now let’s introduce World C. This world is identical to World A, except it includes a person named Harry whose well-being is an 8 out of 10. He’s very happy almost all the time!

[Figure: World C]

Now, by the same logic I used before, letting World C’s total moral value equal c, I have to say that a = c. This world with a Harry at well-being level 8 is not preferable to a world in which Harry never existed.

This puts me in a bit of a pickle. Because, by straight-up math, we know that if a = b and a = c then b = c. In other words, there’s no moral reason to prefer World C to World B. But come on! World C is exactly like World B except that Harry is better off. Obviously it’s an objectively preferable world.

I feel reductio ad absurdum‘d and it makes me very uncomfortable. Does this mean I have an obligation to have kids if I think I can give them happy lives? I don’t believe that, but I’m not sure what to believe now.

3 Things That Make Encryption Easier

Almost everyone (especially in ops) knows they should be better about encrypting secret data. And yet most organizations have at least a few passwords and secret keys checked into Git somewhere.

The ideal solution would be for everyone at your company to use PGP all the time, but that is a huge pain. Encryption tools are annoying to use, and a significant time investment is required to learn to use them correctly. And if security is hard, people will always find a way to avoid it.

In the last few months, I’ve adopted 3 new technologies that make secure storage and exchange of secret information at least bearable.

1: Blackbox

StackExchange’s blackbox tool makes it easy to store encrypted data in a Git repository. First you need to import into your personal keyring all the PGP keys you want to grant access to. Then you initialize the blackbox directory structure:

dan@george:/tmp/secrets$ blackbox_initialize
Enable blackbox for this git repo? (yes/no) yes
VCS_TYPE: git
NEXT STEP: You need to manually check these in:
git commit -m'INITIALIZE BLACKBOX' keyrings /private/tmp/secrets/.gitignore
dan@george:/tmp/secrets$ git commit -m'INITIALIZE BLACKBOX' keyrings /private/tmp/secrets/.gitignore
[master 695d29a] INITIALIZE BLACKBOX
2 files changed, 3 insertions(+)
create mode 100644 .gitignore
create mode 100644 keyrings/live/blackbox-files.txt


Once you’ve initialized blackbox, you can start adding administrators, which are keys that will be granted access to the secret data in the repository:

dan@george:/tmp/secrets$ blackbox_addadmin dan@danslimmon.com
gpg: /private/tmp/secrets/keyrings/live/trustdb.gpg: trustdb created
gpg: key A9FD8CCF: public key "Dan Slimmon <dan@danslimmon.com>" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
NEXT STEP: You need to manually check these in:
git commit -m'NEW ADMIN: dan@danslimmon.com' keyrings/live/pubring.gpg keyrings/live/trustdb.gpg keyrings/live/blackbox-admins.txt
dan@george:/tmp/secrets$ git commit -m'NEW ADMIN: dan@danslimmon.com' keyrings/live/pubring.gpg keyrings/live/trustdb.gpg keyrings/live/blackbox-admins.txt
[master (root-commit) 94108f6] NEW ADMIN: dan@danslimmon.com
3 files changed, 2 insertions(+)
create mode 100644 keyrings/live/blackbox-admins.txt
create mode 100644 keyrings/live/pubring.gpg
create mode 100644 keyrings/live/trustdb.gpg


Now you can start adding secrets securely:

dan@george:/tmp/secrets$ echo "nuclear launch code: SashaMalia44" > launchcode.txt
dan@george:/tmp/secrets$ blackbox_register_new_file launchcode.txt
========== PLAINFILE launchcode.txt
========== ENCRYPTED launchcode.txt.gpg
========== Importing keychain: START
gpg: Total number processed: 1
gpg: unchanged: 1
========== Importing keychain: DONE
========== Encrypting: launchcode.txt
========== Encrypting: DONE
========== Adding file to list.
========== CREATED: launchcode.txt.gpg
========== UPDATING REPO:
NOTE: "already tracked!" messages are safe to ignore.
[master 2a5eb65] registered in blackbox: launchcode.txt
2 files changed, 1 insertion(+)
create mode 100644 launchcode.txt.gpg
========== UPDATING VCS: DONE
Local repo updated. Please push when ready.
git push


I really like how this tool gives my team a distributed, version-controlled repository of secret information. We can even give other teams access to the repository without worrying about exposing secrets!

My team uses this tool for shared passwords and SSL private keys, and it works great. Check it out.

2: Salt

At my company, we use Salt for config management. Like most config management systems, Salt lets you decouple the values in a config file from the file itself. You make a template of the config file that will appear on the node, and you put the values in a pillar (equivalent to a Chef databag, or a Puppet… whatever it’s called in Puppet).

So instead of storing a config file like this:

[myapp]
username = app_user
password = hunter2


You store a template like this:

[myapp]
username = {{ pillar['myapp']['username'] }}
password = {{ pillar['myapp']['password'] }}


and a pillar (which is just a YAML file) like this:

myapp:
  username: app_user
  password: hunter2


Now suppose you don’t want to commit that super-secure password directly to your Salt repository. Instead, you can create a PGP keypair, and give the private key to your Salt server. Then you can encrypt the password with that key. Your pillar will now look like this:

#!yaml|gpg
myapp:
  username: app_user
  password: |
    -----BEGIN PGP MESSAGE-----
    -----END PGP MESSAGE-----
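
To produce that encrypted blob in the first place, you encrypt the secret to the Salt master's public key. Here's a rough sketch (Python shelling out to gpg; the recipient key ID is made up, and depending on your keyring's trust settings you may need extra flags like --trust-model):

# Sketch: encrypt a secret to the Salt master's key for pasting into a pillar.
import subprocess

SALT_MASTER_KEY = "salt-master@example.com"  # hypothetical recipient key

def encrypt_for_pillar(secret: str) -> str:
    result = subprocess.run(
        ["gpg", "--armor", "--encrypt", "--recipient", SALT_MASTER_KEY],
        input=secret.encode(),
        capture_output=True,
        check=True,
    )
    return result.stdout.decode()

print(encrypt_for_pillar("hunter2"))  # paste the output under "password: |"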


When processing your template on the target node, Salt will seamlessly decrypt the password for you.

I love that I can give non-admins access to our Salt repo, and let them submit pull requests, without worrying about leaking passwords. To learn more about this Salt functionality, you can read the documentation for salt.renderers.gpg.

3: SecretShare

Salt’s GPG renderer and blackbox are great ways to store shared secret data, but what about transmitting secrets to particular people? In most organizations, when passwords and such need to be transmitted from employee to employee, insecure methods are used. Email, chat, and Google docs are very common media for transmitting secrets. They’re all saved indefinitely, meaning that an attacker who gains access to your account can gain access to all the secret info you’ve ever sent or received.

To make transmitting secrets as easy and secure as possible, my teammate Alex created secretshare. It lets you transmit arbitrary secret data to others in your organization, and it has immense advantages over other systems:

  • Secrets are never transmitted or stored in the clear, so a snooper can’t even read them if they manage to compromise the Amazon S3 bucket in which they’re stored.
  • Secrets are deleted from S3 after 24-48 hours, so a snooper can’t go back through the recipient’s or sender’s communication history later and retrieve them.
  • Secrets are encrypted with a one-time-use key, so a snooper can’t use the key from one secret to steal another.
  • Users don’t need Amazon AWS credentials, so a snooper can’t steal those credentials from a user.
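
To be clear about what "one-time-use key" buys you: every secret gets its own fresh key, the ciphertext goes into shared storage, and the key only ever travels inside the link you hand the recipient. Here's the general shape of the idea, sketched with Python's cryptography library (this is not secretshare's actual code):

# Sketch of the one-time-key pattern (not secretshare's actual implementation).
from cryptography.fernet import Fernet

def wrap_secret(plaintext: bytes):
    key = Fernet.generate_key()                # fresh key, used for this one secret only
    ciphertext = Fernet(key).encrypt(plaintext)
    return key, ciphertext                     # ciphertext goes to S3; key goes in the share link

def unwrap_secret(key: bytes, ciphertext: bytes) -> bytes:
    return Fernet(key).decrypt(ciphertext)

key, blob = wrap_secret(b"the launch codes")
assert unwrap_secret(key, blob) == b"the launch codes"

A snooper who compromises the S3 bucket gets a pile of ciphertexts and no keys; a snooper who compromises one link gets exactly one secret.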

Right now, secretshare only exists as a command-line utility, but we’re very close to having a web UI as well, which will make it even easier for non-technical people to use.

 

Security’s worst enemy is bad UX. It’s critical to make the most secure path also the easiest path. That’s what these three solutions aim to do, and they’ve made me feel much more comfortable with the security of secret data at my company. I hope they can do the same for you.

Start a Paper Club!

A few months ago, a work friend and I were commiserating about how we never make time to read any research. There’s all this fascinating, challenging stuff being written, we agreed, and we’re missing it all.

When a few more coworkers chimed in, saying they’d also like to push themselves to read more academic literature, we realized we were onto something. The next day, the Exosite Paper Club was born.

It’s been really fun to organize and participate in Paper Club these last few months. I’ve learned a lot, not just about the fields on which our reading material focused, but also about my coworkers and my company. Now I want to share some of my excitement about this project and encourage you to start a Paper Club at your own company.

What Paper Club is and does

Paper Club is a group that meets every other week over lunch. We all prepare by reading a particular academic paper, and we discuss it in a free-form group session. Sometimes we teach each other about concepts from the paper, sometimes we brainstorm ways to apply ideas to our work at Exosite, and sometimes we just chat.

The paper for each session is selected by a participant from the previous session, and can be about any topic from project management to statistics to psychology to medicine. Readers of any skill level in the paper’s subject matter are warmly welcomed at meetings, but you can always skip a session if the topic doesn’t interest you. All we ask is that, when the material is hard for you, you push yourself and try to grasp it anyway.

We want a wide variety of participants, from DevOps to UI to sales & marketing, but we also want to read papers that are detailed enough to be challenging. To these ends, we have three guidelines for submitting a paper:

  1. Papers should be accessible. Try to pick papers at a technical level such that at least some of your peers will be able to understand most of the content. You want the session to be a discussion, not a lecture.
  2. Papers should be challenging. While accessibility is important, you don’t want your peers to have too easy a time. The best conversations happen when people are forced to push themselves a bit. It’s okay to suggest papers that are only accessible to those with a particular academic background, as long as you think the paper will create a good discussion with some subset of your coworkers.
  3. Papers should be deep. The best discussions tend to come from papers that dive pretty deep into a topic. Reviews and textbook chapters can be interesting, but we tend to prefer papers that go into detail on a specific topic.

What we’ve read so far

We’ve been doing Paper Club at Exosite since mid-November, and we’ve discussed 8 papers to date. I thought we’d be reading mostly computer science papers, but I couldn’t be happier with the variety we’ve gotten!

Here are a few of the papers that produced the most interesting discussions:

  • Confidence in Judgment: Persistence of the Illusion of Validity. This classic behavioral study from 1978 starts from the well-established observation that people (especially experts) are usually more confident in their judgments than they should be. The authors build a simple but powerful model of the mental processes responsible for this overconfidence, and present some suggestions for systematically curtailing it.
  • On Bullshit. If you’re like most engineers, you’re utterly allergic to bullshit. But have you ever thought about what makes bullshit bullshit? How it’s different from outright lying, and how it’s different from normal speech? This famous philosophical essay tries to answer these questions, and it provides some valuable insights for someone trying to excise bullshit from their life.
  • Why Johnny Can’t Encrypt. This 1999 analysis of PGP 5.0 usability raises some points that are crucial for anyone trying to design intuitive user experiences. The paper led us into a super productive critique of the metaphorical structure used in our own company’s documentation.

How it’s going

I have been really happy with the intellectual diversity of our Paper Club participants. Even when the paper under discussion is an especially wonky one, we get project managers, devs from all different software teams, salespeople, marketers, managers, and occasionally even company executives. Seeing all these people engage in thoughtful dialogue on such a wide variety of topics is inspiring. It takes me out of my DevOps bubble and reminds me that I work with some very interesting and smart folks.

I’d like to see how far we can stretch ourselves. Selfishly, I’d like us to read a math or linguistics paper some time, and help each other through it. I think that would be very rewarding.

Overall, I’m very glad we have a Paper Club here. I’d recommend it to anyone who likes people and/or learning. If you start one at your company, let me know how it goes!

The 10 Best Books I Read In 2015

I read 56 books in 2015, which is more than I’d read in the previous 5 years combined. Turns out books are pretty cool. Who knew?

Here are the 10 books I liked the most this year, in no particular order.

1. Metaphors We Live By (1980)

I got real into linguistics this year, and this book offers an interesting perspective on semantics. We usually think of metaphor as a poetic device. But this book argues that the whole human conceptual system is based on metaphor! According to George Lakoff, every concept we understand (short of concepts corresponding to our direct experience) is understood by analogy with a more concrete concept.

There are some very intriguing ideas in here. I didn’t necessarily buy (or even understand) them all, but I’m really glad I read this book.

2. The One World Schoolhouse: Education Reimagined (2012)

This thought-provoking book on education was written by the Khan Academy guy. He presents a lot of research pointing toward the hypothesis that self-paced “mastery learning” is much more broadly effective than the contemporary American model of arbitrarily delineated, one-speed-fits-all classes.

Discussing this book with my friends (who tend to be smart academic underachievers) really brought home the point that our education system underserves anyone who understands a concept more slowly, or more quickly, than the rest of the class.

3. The Orphan Master’s Son (2012)

I’m endlessly fascinated by North Korea, and this novel stoked my fascination. That alone would probably have been enough, but it’s also super well written. I found it beautiful and sad and gripping the whole way through.

4. The Immortal Life of Henrietta Lacks (2010)

The author of this book tracked down the family of the long-deceased woman whose incredibly robust tumor cells became the most widely studied strain of human tissue in the world. Her cells have been used by scientists to make countless discoveries in genetics and immunology.

Through interviews with Henrietta Lacks’ descendants, all of whom still live in abject poverty, Rebecca Skloot raises important and nuanced questions about the interplay between science and culture and race. It’d be hard to read this book and still think of science as “pure” or “objective.” Scientists aspire to objectivity, but they’re just as boxed in by their cultural preconceptions as anyone else.

5. Red Rising (2014)

This is a super fun young-adult novel about badass vicious teens trying to kill each other. The premise is pretty similar to that of The Hunger Games, but I found the character development and storytelling way better. And the second book in the trilogy, Golden Son, is awesome too! The third one is coming out very soon, and I can’t wait.

Don’t expect any deep truths or transcendent prose. This book is just really fun to read.

6. The Road (2006)

Hey, speaking of books that are really fun to read, this book is not one. It’s a painfully stark novel about a father and son trying to survive in post-apocalyptic North America. I definitely wouldn’t call it a “feel-good” book.

But despite the author’s unremittingly bleak vision, the relationship between the man and the boy (those are the only names given for the characters) is very touching. A friend of mine claims to use this book as a parenting handbook of sorts. I don’t know if I’d go that far, but I do see what he means.

7. History in Three Keys: The Boxers as Event, Experience, and Myth (1997)

My favorite non-fiction book of the year. On one level, this is a history book about the Boxer Rebellion: a grimly compelling episode in its own right. But, moreover, this book is about history itself. Paul A. Cohen describes the Boxer Rebellion through 3 different lenses – event, experience, and myth – each of which represents part of the way we engage with the past.

I came away from this book with a newfound appreciation for the role of the historian in creating history, and a newfound skepticism about the idea that history is composed of objective facts and dates.

[I did a lightning talk (transcript in the first slide’s speaker notes) relating what I learned from this book to post-mortem analysis in DevOps.]

8. The Martian Chronicles (1950)

I tried to read The Martian Chronicles in high school, and I was like “Pfft! This isn’t science fiction. The science doesn’t make any sense.” I’ve read a lot more sci-fi now, so I decided to give this classic another shot.

What I’ve learned since high school is that the sci-fi I like most isn’t about cool technology or mind-bending thought experiments (although those can add some nice seasoning to a story). It’s about humanity: what continues to define us as human, even when the things we think of basic to our humanity – Earth, language, gender, our bodies – are stripped away?

Bradbury knew exactly what he was on about. After reading these stories with the human question in mind, I finally understand why he’s been so influential in science fiction.

9. The New Jim Crow: Mass Incarceration in the Age of Colorblindness (2009)

I already had strong objections to the War on Drugs on account of the senseless imprisonment of people who’ve done nothing harmful. But this book shows how drug arrest quotas, mandatory minimum sentences, and felon profiling work together as an insidious system to maintain white supremacy in America.

I’d recommend this book to any American. We all need to understand that racial oppression didn’t go away on July 2, 1964; it just adopted more clever camouflage.

10. Anathem (2008)

This is now my favorite Neal Stephenson book. It brings his trademark mathiness and detail to a truly engaging story in a richly-imagined universe.

Like most Neal Stephenson books, it’s not for everyone. A lot of time is spent on epistemological ruminations and physics. I love that shit, so I loved this book all the way from page 1 to page 937.

 

Welp, those are some books I loved this year.

But I want to see what you’re reading, so I can find more books to love. Friend-poke me on GreatBooks!

Minding Your Pees And Queues

A couple weeks ago, my wife & I went to Basilica Block Party, a local music festival. It was a good time, and OH MANS you have to see Fitz & The Tantrums live. Their sax player is a hero unit.

Anyhoo, we walked over to the porta-potties between sets. The lines were about 8–10 people long. And my spouse suggested an intriguing strategy for minimizing her wait time. I will call this strategy The Strategy:

The Strategy: All else being equal, get in the line with the most men.

The reasoning behind The Strategy is obvious: women take longer to pee than men, so if 2 queues are the same length, then the faster-moving queue should be the one with fewer women. It’s intuitive, but due to my current obsession with queueing theory, I became intensely interested in the strategy’s implications. In particular, I started to wonder things like:

  • How much can you expect to shave off your wait time by following The Strategy?
  • How does the effectiveness of The Strategy vary with its popularity? Little’s Law tells us that the overall average wait time won’t be affected, but is The Strategy still effective if 10% of the crowd is using it? 25%? 90%?
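
(For reference, Little's Law is the relation L = λW: the long-run average number of people in a queueing system equals the average arrival rate times the average time each person spends in the system. Picking queues differently changes neither the arrival rate nor the amount of peeing that has to happen, which is the sense in which The Strategy can shuffle waiting around between people but can't shrink the overall average.)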

And then I thought to myself “I could answer these questions through the magic of computation!”

qsim

Lately I’ve been seeing queueing systems everywhere I go, so I figured it’d be worthwhile to write a generic queueing system simulator to satisfy my curiosity. That’s what I did. It’s called qsim.

To quote the README,

A queueing system in qsim processes arbitrary jobs and is composed of 5 pieces:

  • The arrival process controls how often jobs enter the system.
  • The arrival behavior defines what happens when a new job arrives. When the arrival process generates a new job, the arrival behavior either sends it straight to a processor or appends it to a queue.
  • Queues are simply holding pens for jobs. A system may have many queues associated with different processors.
  • A queueing discipline defines the relationship between queues and processors. It’s responsible for choosing the next job to process and assigning that job to a processor.
  • Processors are the entities that remove jobs from the system. A processor may take differing amounts of time to process different jobs. Once a job has been processed, it leaves the queueing system.

qsim provides a framework for implementing these building blocks and putting them together, and it also provides hooks that can be used to gather data about a simulation as it runs. I’m really looking forward to using qsim to gain insight into all sorts of different systems.

For now: porta-potties.

The porta-potty simulation

You’ll recall The Strategy:

The Strategy: All else being equal, get in the line with the most men.

To determine the effectiveness of The Strategy, I implemented PortaPottySystem using qsim (a much-simplified standalone sketch follows the list below). Here are some of the assumptions I made:

  • People arrive very frequently, but if all the queues are too long (8 people) they leave.
  • There are 15 porta-potties, each with its own queue. Once a person enters a queue, they stay in that queue until the corresponding porta-potty is vacant.
  • Shockingly, I couldn’t find any reliable data on the empirical distribution of pee times by sex, so I chose a normal distribution with a mean of 40 seconds for men and 60 seconds for women.
  • Most people just pick a random queue to join (as long as it’s no longer than the shortest queue), but some people use The Strategy of getting into the queue with the highest man:woman ratio (again, as long as it’s no longer than the shortest queue).
  • Nobody’s going number 2 because that’s gross.
  • Everyone is either a man or a woman. I know all about gender being a spectrum, and if you want to submit a pull request that smashes the gender binary, please do.
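
qsim is where the real implementation lives, but here's a much-simplified, time-stepped sketch of the same idea in Python. This is not qsim: the numbers mirror the assumptions above, and everything else (the arrival rate, the standard deviation of pee times) is made up for illustration.

# Simplified porta-potty simulation sketch (not qsim).
import random

N_POTTIES = 15
MAX_QUEUE = 8
PEE_MEAN = {"M": 40.0, "F": 60.0}  # mean pee time in seconds (per the assumptions above)
ARRIVAL_PROB = 0.25                # chance somebody shows up in a given second (made up)

def simulate(strategy_prob, seconds=24 * 3600):  # one simulated day; the real runs were two weeks
    queues = [[] for _ in range(N_POTTIES)]  # each entry: (arrival_time, sex, strategic)
    remaining = [0.0] * N_POTTIES            # seconds left for the current occupant
    waits = {True: [], False: []}            # observed wait times, keyed by "used The Strategy?"

    for t in range(seconds):
        # Occupants finish up; the next person in that queue steps in.
        for i in range(N_POTTIES):
            if remaining[i] > 0:
                remaining[i] -= 1
            if remaining[i] <= 0 and queues[i]:
                arrived, sex, strategic = queues[i].pop(0)
                waits[strategic].append(t - arrived)
                remaining[i] = max(5.0, random.gauss(PEE_MEAN[sex], 10.0))

        # Maybe somebody new shows up and picks a queue.
        if random.random() < ARRIVAL_PROB:
            sex = random.choice("MF")
            strategic = random.random() < strategy_prob
            shortest = min(len(q) for q in queues)
            if shortest >= MAX_QUEUE:
                continue  # every line is too long; they give up
            candidates = [i for i, q in enumerate(queues) if len(q) == shortest]
            if strategic:
                # The Strategy: among the shortest queues, pick the one with the most men.
                pick = max(candidates, key=lambda i: sum(1 for _, s, _ in queues[i] if s == "M"))
            else:
                pick = random.choice(candidates)
            queues[pick].append((t, sex, strategic))

    return {used_strategy: round(sum(w) / len(w), 1) for used_strategy, w in waits.items() if w}

print(simulate(strategy_prob=0.01))  # average wait in seconds, keyed by "used The Strategy?"

The real qsim runs collect full wait-time distributions and vary The Strategy's popularity; this sketch just shows the mechanics.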

The first question I wanted to answer was: how does using The Strategy affect your wait time?

To answer this question, I ran a simulation where the probability of a given person deciding to use The Strategy is 1%. The other 99% of people simply join one of the shortest queues without regard to its man:woman ratio. I ran 20 simulations, each for the equivalent of 2 weeks, and came up with these wait time distributions:

[Chart: wait-time distributions for Strategy users vs. non-users]

More of a box-plot person? I hear ya:

[Chart: the same wait-time distributions as a box plot]

The Strategy is definitely not a huge win here. On average, your wait time will be reduced by about 10–15 seconds (4–6%) if you use The Strategy. Still, it’s not nothing, right?

Now how does the benefit of using The Strategy vary with its popularity? This is actually really interesting. I never would have guessed it. But the data shows that you should always use The Strategy, even if everybody else is using it too.

Here I’ve charted average wait times against the proportion of people using the strategy. Colors are more prominent where the data set in question is large (and therefore heavily influences the overall average):

[Chart: average wait time vs. the proportion of people using The Strategy]

You’ll notice that the overall average (dark green) does not vary with Strategy popularity. This is good news, because otherwise we’d be violating Little’s Law, which would probably just mean our simulation was broken.

The interesting thing here is that the benefit of using The Strategy decreases pretty much linearly as its popularity increases, but at the same time there accrues a disadvantage to not using it. If everybody else in the system is using The Strategy, and you come along and decide not to, you can still expect to wait 10 seconds longer than everybody else. Therefore using The Strategy is unequivocally better than not using The Strategy.

Unless of course you don’t really care about those 10 seconds, in which case you should do whatever you want.

Wherein I Rant About Fourier Analysis

Monitorama PDX was a fantastic conference. Lots of engineers, lots of diversity, not a lot of bullshit. Jason Dixon and his minions are some top-notch conference-runners.

As someone who loves to see math everywhere, I was absolutely psyched at the mathiness of the talks at Monitorama 2014. I mean, damn. Here, watch my favorite talk: Noah Kantrowitz’s.

I studied physics in college, and I worked in computational research, so the Fourier Transform was a huge deal for me. In his talk, Noah gives some really interesting takes on the application of digital signal processing techniques to ops. I came home inspired by this talk and immediately started trying my hand at this stuff.

What Fourier analysis is

“FOUR-ee-ay” or if you want to be French about it “FOO-ree-ay” with a hint of phlegm on the “r”.

Fourier analysis is used in all sorts of fields that study waves. I learned about it when I was studying physics in college, but it’s most notably popular in digital signal processing.

It’s a thing you can do to waves. Let’s take sound waves, for instance. When you digitize a sound wave, you sample it at a certain frequency: some number of times every second, you write down the strength of the wave. For sound, this sampling frequency is usually 44100 Hz (44,100 times a second).

Why 44100 Hz? Well, the highest pitch that can be heard by the human ear is around 20000 Hz, and you can only reconstitute frequencies from a digital signal at up to half the sampling rate. We don’t need to capture frequencies we can’t hear, so we don’t need to sample any faster than twice our top hearing frequency.

Now what Fourier analysis lets you do is look at a given wave and determine the frequencies that were superimposed to create it. Take the example of a pure middle C (this is R code):

library(ggplot2)
signal = data.frame(t=seq(0, .1, 1/44100))
signal$a = cos(signal$t * 261.625 * 2 * pi)
qplot(data=signal, t, a, geom='line', color=I('blue'), size=I(2)) +
    ylab('Amplitude') + xlab('Time (seconds)') + ggtitle('Middle C')

[Plot: amplitude of middle C over time]

Pass this through a Fourier transform and you get:

fourier = data.frame(v=fft(signal$a))
# The first value of fourier$v will be a (zero) DC
# component we don't care about:
fourier = tail(fourier, nrow(fourier) - 1)
# Also, a Fourier transform contains imaginary values, which
# I'm going to ignore for the sake of this example:
fourier$v = Re(fourier$v)
# These are the frequencies represented by each value in the
# Fourier transform:
fourier$f = seq(10, 44100, 10)
# And anything over half our sampling frequency is gonna be
# garbage:
fourier = subset(fourier, f <= 22050)

qplot(data=fourier, f, v, geom='line', color=I('red'), size=I(2)) +
    ylab('Fourier Transform') + xlab('Frequency (Hz)') +
    ggtitle('Middle C Fourier Transform') + coord_cartesian(xlim=c(0,400))

[Plot: Fourier transform of middle C, with a spike at 261.625 Hz]

As you can see, there’s a spike at 261.625 Hz, which is the frequency of middle C. Why does it gradually go up, and then go negative and asymptotically come back to 0? That has to do with windowing, but let’s not worry about it. It’s an artifact of this being a numerical approximation of a Fourier Transform, rather than an analytical solution.

You can do Fourier analysis with equal success on a composition of frequencies, like a chord. Here’s a C7 chord, which consists of four notes:

signal$a = cos(signal$t * 261.625 * 2 * pi) + # C
    cos(signal$t * 329.63 * 2 * pi) + # E
    cos(signal$t * 392.00 * 2 * pi) + # G
    cos(signal$t * 466.16 * 2 * pi) # B flat
qplot(data=signal, t, a, geom='line', color=I('blue'), size=I(2)) +
    ylab('Amplitude') + xlab('Time (seconds)') + ggtitle('C7 chord')

[Plot: amplitude of the C7 chord over time]

Looking at that mess, you probably wouldn’t guess that it was a C7 chord. You probably wouldn’t even guess that it’s composed of exactly four pure tones. But Fourier analysis makes this very clear:

fourier = data.frame(v=fft(signal$a))
fourier = tail(fourier, nrow(fourier) - 1)
fourier$v = Re(fourier$v)
fourier$f = seq(10, 44100, 10)
fourier = subset(fourier, f <= 22050)

qplot(data=fourier, f, v, geom='line', color=I('red'), size=I(2)) +
    ylab('Fourier Transform') +
    xlab('Frequency (Hz)') +
    ggtitle('C7 Chord Fourier Transform') +
    coord_cartesian(xlim=c(0,400))

[Plot: Fourier transform of the C7 chord, with four peaks]

And there are our four peaks, right at the frequencies of the four notes in a C7 chord!

Straight-up Fourier analysis on server metrics

Naturally, when I heard all these Monitorama speakers mention Fourier transforms, I got super psyched. It’s an extremely versatile technique, and I was sure that I was about to get some amazing results.

It’s been kinda disappointing.

By default, a Graphite server samples your metrics (in a manner of speaking) once every 10 seconds. That’s a sampling frequency of 0.1 Hz. So we have a Nyquist Frequency (the maximum frequency at which we can resolve signals with a Fourier transform) of half that: 0.05 Hz.

So, if our goal is to look at a Fourier transform and see important information jump out at us, we have to be looking for oscillations that occur three times a minute or less. I don’t know about you, but I find that outages and performance anomalies rarely show up as oscillations like that. And when they do, you’re going to notice them before you do a Fourier transform.

Usually we get spikes or step functions instead, which bleed into wide ranges of nearby frequencies and end up being much clearer in amplitude-space than in Fourier space. Take this example of some shit hitting some fans:

[Graph: a server metric as some shit hits some fans]

If we were trying to get information from this metric with Fourier transforms, we’d be interested in the Fourier transform before and after the fan got shitty. But those transforms are much less useful than the amplitude-space data:

[Graph: Fourier transforms of the metric before and after the fan got shitty, alongside the amplitude-space data]

I haven’t been able to find the value in automatically applying Fourier transforms to server metrics. It’s a good technique for finding oscillating components of a messy signal, but unless you know that that’s what you’re looking for, I don’t think you’ll get much else out of them.

What about low-pass filters?

A low-pass filter uses a Fourier transform to remove high frequency components from a signal. One of my favorite takeaways from that Noah Kantrowitz talk was this: Nagios’s flapping detection mechanism is a low-pass filter.

If you want to alert when a threshold is exceeded — but not every time your metric goes above and below that threshold in a short period of time — you can run your metric through a low-pass filter. The high-frequency, less-valuable data will go away, and you’ll be left with a more stable signal to check against your threshold.
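
Mechanically, the idea looks something like this (a sketch in Python and numpy rather than R; the cutoff frequency, the synthetic flapping metric, and the threshold are all arbitrary):

# Sketch: check a threshold against a metric after a crude FFT low-pass filter.
import numpy as np

def low_pass(signal, sample_period=10.0, cutoff_hz=0.002):
    # Zero out every Fourier component above cutoff_hz, then transform back.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=sample_period)
    spectrum[freqs > cutoff_hz] = 0
    return np.fft.irfft(spectrum, n=len(signal))

# A flapping metric: one hour of 10-second samples oscillating around 100,
# checked against a threshold of 102 that it keeps crossing and re-crossing.
rng = np.random.default_rng(0)
t = np.arange(0, 3600, 10)
metric = 100 + 3 * np.sin(2 * np.pi * t / 120) + rng.normal(0, 2, t.size)
smoothed = low_pass(metric)

threshold = 102
print("raw threshold crossings:     ", np.count_nonzero(np.diff((metric > threshold).astype(int))))
print("smoothed threshold crossings:", np.count_nonzero(np.diff((smoothed > threshold).astype(int))))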

I haven’t tried this method of flap detection, but I suspect that the low-sampling-frequency problem would make it significantly less useful than one might hope. If you’ve seen Fourier analysis applied as a flap detection algorithm, I’d love to see it. I would eat my words, and they’d be delicious.

I hope I’m wrong

If somebody can show me a useful application of Fourier analysis to server monitoring, I will freak out with happiness. I love the concept. But until I see a concrete example of Fourier analysis doing something that couldn’t be done effectively with a much simpler algorithm, I’m skeptical.

Addendum

As Abe Stanway points out, Fourier analysis is a great tool to have in your modeling toolbox. It excels at finding seasonal (meaning periodic) oscillations in your data. Also, Abe and the Skyline team are working on adding seasonality detection to Skyline, which might use Fourier analysis to determine whether seasonal components should be used.

Theo Schlossnagle coyly suggests that Circonus uses Fourier analysis in a similar manner.

Devops needs feminism

I just returned to Minneapolis from Velocity NY bursting with ideas as always. The program was saturated with fantastic speakers, like my new ops crush Ilya Grigorik of Google. And my favorite part, as always, was the hallway track. I met dozens of brilliant, inspiring engineers. Allspaw, Souders, and Nash really know how to throw a conference.

One exhilarating thing about Velocity is the focus on culture as a driving force for business. Everybody’s in introspection mode, ready to break down their organizational culture and work new ideas into it. It reminds me of artificial competence in genetic engineering. It’s a joy to experience.

But despite all this wonderful cultural introspection, y’know what word you don’t hear? Y’know what drags up awkward silences and sometimes downright reactionary vitriol?

Feminism.

As long as we’re putting our tech culture under the microscope, why don’t we swap in a feminist lens? If you question any random geek at Velocity about gender, you can bet they’ll say “Women are just as good as men at ops,” or “I work with a female engineer, and she’s really smart!” But as soon as you say “feminism,” the barriers go up. It’s like packet loss: the crowd only hears part of what you’re saying, and they assume that there’s nothing else to hear.

We need to build feminism into our organizations, and Velocity would be a great venue for that. I’m just one engineer, and I’m not by any means a feminism expert, but I do think I can shed some light on the most common wrongnesses uttered by engineers when feminism is placed on the table.

Feminism != “Girls are better than boys”

Mention feminism to a random engineer, and you’re likely to hear some variation on:

I’m against all bias! We don’t need feminism, we just need to treat each other equally.

Feminism is often portrayed as the belief that women are superior, or that men should be punished for the inequality they’ve created. Feminism is often portrayed as man-hating.

Feminism is not that. Everyone defines it differently, but I like the definition at the Geek Feminism Wiki:

Feminism is a movement which seeks respect and equality for women both under law and culturally.

Equality. Everyone who’s not an asshole wants it, but we don’t have it yet. That’s why we need a framework in which to analyze our shortcomings, conscious and unconscious. Feminism can be that framework.

Imagine hearing an engineer say this:

Our product should perform optimally! We don’t need metrics, we just need to build a system that performs well.

Would this not be face-palmingly absurd? Of course it would. Metrics let you define your goals, demonstrate the value of your goals, and check how well you’re doing. Metrics show you where you’re losing milliseconds. Metrics are the compass and map with which you navigate the dungeon of performance.

Feminism is to equality as metrics are to performance. Without a framework for self-examination, all the best intentions in the world won’t get you any closer to an equality culture.

Wanting equality isn’t enough

When feminism comes up, you might hear yourself say something like this:

I already treat female engineers equally. Good engineers are good engineers, no matter their gender.

Hey great! The intention to treat others equally is a necessary condition for a culture of equality. But it’s not a sufficient condition.

This is akin to saying:

I’m really into performance, so our site is as fast as it can be.

You might be a performance juggernaut, but you’re just one engineer. You’re responsible for one cross-section of the product. First of all, one person doesn’t constitute a self-improving or even a self-sustaining performance culture. And even more crucially, there are performance mistakes you don’t even know you’re making!

Promoting equality in your organization requires a cultural shift, just like promoting performance. Cultural shifts happen through discourse and introspection and goal-setting — not wishing. That’s why we need to look to feminism.

If you start actively working to attack inequality in your organization, I guarantee you’ll realize you were already a feminist.

Feminism doesn’t require you to be ashamed of yourself

When your heart’s in the right place and you’re constantly examining your own actions and your organization’s, you start to notice bias and prejudice in more and more places. Most disturbingly, you notice it in yourself.

Biases are baked right into ourselves and our culture. They’re so deeply ingrained that we often don’t see or hear them anymore. Think anti-patterns and the broken windows theory. When we do notice our biases, it’s horrifying. We feel ashamed and we want to sweep them under the rug.

Seth Walker of Etsy gave an excellent talk at Velocity NY entitled “A Public Commitment to Performance.” It’s about how, rather than keeping their performance shortcomings private until everything’s fixed, Etsy makes public blog posts detailing their current performance challenges and recent performance improvements. This way, everyone at the company knows that there will be public eyes on any performance enhancement they make. It promotes a culture of excitement about improvements, rather than one of shame about failures.

When you notice biases in your organization — and moreover when others notice them — don’t hide them. Talk about them, analyze them, and figure out how to fix them. That’s the productive thing to do with software bugs and performance bottlenecks, so why not inequality?

Where to go from here

I’m kind of a feminism noob, but that won’t stop me from exploring it and talking about it. It shouldn’t stop you either. Geek Feminism is a good jumping-off point if you want to learn about feminism, and they also have a blog. @OnlyGirlInTech is a good Twitter account. I know there’s other stuff out there, so if you’ve got something relevant, jam it in the comment section!

EDIT on 2013-10-21: Here are some links provided in the comments by Alexis Finch (thanks, Alexis Finch!)

Ada Initiative – focused on OpenSource, working to create allies as well as support women directly
http://adainitiative.org/what-we-do/workshops-and-training/

Girls Who Code – working with high school girls to teach them the skills and provide inspiration to join the tech fields
http://www.girlswhocode.com/

LadyBits – adding women’s voices to the media, covering tech and science [w/ a few men writing as well]
https://medium.com/ladybits-on-medium

Reductress – satire addressing the absurdity of women’s portrayal in the media [The Onion, feminized]
http://www.reductress.com/top-five-lip-glosses-paid-tell/

WomenWhoCode & LadiesWhoCode & PyLadies – if you want to find an expert engineer who happens to also be of the female persuasion [to speak at a conference, or to join your team] these are places to find seasoned tech folks, as well as for those new to tech to get started learning, with chapters worldwide.
http://www.meetup.com/Women-Who-Code-SF/ & https://twitter.com/WomenWhoCode
http://www.ladieswhocode.com/ & https://twitter.com/ladieswhocode
http://www.pyladies.com/ https://twitter.com/pyladies

Making a quick data visualization web-app with Shiny

Lately we’ve been getting concerned about our PHP error log. You know the story: errors start popping up, but they’re not causing problems in production, so you move on with your busy life. But you know in your heart of hearts that you should really be fixing the error.

The time has come for us to prune those errors, and I thought the first step should be, as always, to look at the data. Since it’s really the PHP developers who will know what to do with it, I thought it might be useful to make my analysis interactive. Enter Shiny: a web app framework that lets users interact directly with your data.

The first step was to massage my log data into a CSV that looks like this:

"date","error.id","error.count","access.count"
"2013-06-04","inc/foo/mario/journey.php:700",5,308733
"2013-06-04","inc/foo/mario/xenu.php:498",1,308733
"2013-06-04","inc/bar/mario/larp.php:363",14,308733
"2013-06-04","inc/nico.php:1859",3,308733
"2013-06-04","inc/spoot/heehaw.php:728",5,308733
"2013-06-04","inc/spoot/heehaw.php:735",5,308733
"2013-06-04","inc/spoot/heehaw.php:736",5,308733
"2013-06-04","inc/spoot/heehaw.php:737",5,308733
"2013-06-04","inc/spoot/heehaw.php:739",5,308733

For each date, error.id indicates the file and line on which the error occurred, error.count is how many times that error occurred on that date, and access.count is the total number of hits our app received on that date. With me so far?

Now I install Shiny (sure, this is artifice — I already had Shiny installed — but let’s pretend) at the R console like so:

install.packages('devtools')
library(devtools)
install_github('shiny', 'rstudio')
library(shiny)

And from the shell, I start a project:

mkdir portalserr
cd portalserr
cp /tmp/portalserr.csv .

Defining the UI

Now I write my app. I know what I want it to look like, so I’ll start with ui.R. Going through that bit by bit:

shinyUI(pageWithSidebar(
  headerPanel("PHP errors by time"),

I’m telling Shiny how to lay out my UI. I want a sidebar with form controls, and a header that describes the app.

  sidebarPanel(
    checkboxGroupInput("errors_shown", "Most common errors:", c(
      "davidbowie.php:50"="lib/exosite/robot/davidbowie.php:50",
      "heehaw.php:728"="inc/spoot/heehaw.php:728",
      …
      "llamas-10.php:84"="inc/widgets/llamas-10.php:84"
    )
  )),

Now we put a bunch of checkboxes on my sidebar. The first argument to checkboxGroupInput() gives the checkbox group a name. This is how server.R will refer to the checkbox contents. You’ll see.

The second argument is a label for the form control, and the last argument is a list (in non-R parlance an associative array or a hash) defining the checkboxes themselves. The keys (like davidbowie.php:50) will be the labels visible in the browser, and the values are the strings that server.R will receive when the corresponding box is checked.

  mainPanel(
    plotOutput("freqPlot")
  )

We’re finished describing the sidebar, so now we describe the main section of the page. It will contain only one thing: a plot called “freqPlot”.

And that’s it for the UI! But it needs something to talk to.

Defining the server

The server goes in — surprise — server.R. Let’s walk through that.

library(ggplot2)

logfreq <- read.csv('portalserr.csv')
logfreq$date <- as.POSIXct(logfreq$date)
logfreq$perthou <- logfreq$error.count / logfreq$access.count * 10^3

We load the CSV into a data frame called logfreq and translate all the strings in the date column into POSIXct objects so that they’ll plot right.

Then we generate the perthou column, which contains the number of occurrences of a given error on a given day, per thousand requests that occurred that day.

shinyServer(function(input, output) {
  output$freqPlot <- renderPlot({

Okay now we start to see the magic that makes Shiny so easy to use: reactivity. We start declaring the server application with shinyServer(), which we pass a callback. That callback will be passed the input and output parameters.

input is a list-like object containing the values of all the inputs we defined in ui.R. Whenever the user messes with those checkboxes, the reactive blocks (what does that mean? I’ll tell you in a bit) of our callback will be re-run, and the names of any checked boxes will be in input$errors_shown.

Similarly, output is where you put the stuff you want to send back to the UI, like freqPlot.

But the coolest part of this excerpt is the last bit: renderPlot({. That curly-bracket there means that what follows is an expression: a literal block of R code that can be evaluated later. Shiny uses expressions in a very clever way: it determines which expressions depend on which input elements, and when the user messes with inputs Shiny reevaluates only the expressions that depend on the inputs that were touched! That way, if you have a complicated analysis that can be broken down into independent subroutines, you don’t have to re-run the whole thing every time a single parameter changes.

     lf.filtered <- subset(logfreq, error.id %in% input$errors_shown)

      p <- ggplot(lf.filtered) +
        geom_point(aes(date, perthou, color=error.id), size=3) +
        geom_line(aes(date, perthou, color=error.id, group=error.id), size=2) +
        expand_limits(ymin=0) +
        theme(legend.position='left') +
        ggtitle('Errors per thousand requests') +
        ylab('Errors per thousand requests') +
        xlab('Date')
      print(p)
  })  # close renderPlot()
})    # close shinyServer()

This logic will be reevaluated every time our checkboxes are touched. It filters the logfreq data frame down to just the errors whose boxes are checked, then makes a plot with ggplot2 and sends it to the UI.

And we’re done.

Running it

From the R console, we do this:

> runApp('/path/to/portalserr')

Listening on port 3087

This automatically opens up http://localhost:3087 in a browser and presents us with our shiny new… uh… Shiny app:

Why don’t we do it in production?

Running Shiny apps straight from the R console is fine for sharing them around the office, but if you need a more robust production environment for Shiny apps (e.g. if you want to share them with the whole company or with the public), you’ll probably want to use shiny-server. If you’re putting your app behind an SSL-enabled proxy server, use the latest HEAD from Github since it contains this change.

Go forth and visualize!

Quirks are bugs

“Stop Expecting That.”

When you use a program a lot, you start to notice its quirks. If you’re a programmer yourself, you start to develop theories about why the quirks exist, and how you’d fix them if you had the time or the source. If you’re not a programmer, you just shrug and work around the quirks.

I review about 400 virtual flash cards a day in studying for Jeopardy, so I’ve really started to pick up on the quirks of the flash card software I use. One quirk in particular really bothered me: the documentation, along with the first-tier support team, claims that when cards come up for review they will be presented in a random order. But I’ve noticed that, far from being truly random, the program presents cards in bunches of 50: old cards in the first bunch, then newer and newer bunches of cards. By the time I get to my last 50 cards of the day, they’re all less than 2 weeks old.

So I submitted a bug report, complete with scatterplot demonstrating this clear pattern. I explained “I would expect the cards to be shuffled evenly, but that doesn’t appear to be the case.” And do you know what the lead developer of the project told me?

“Stop expecting that.”

Not in so many words, of course, but there you have it. The problem was not in the software; it was in my expectations.

It’s a common reaction among software developers. We think “Look, that’s just the way it works. I understand why it works that way and I can explain it to you. So, you see, it’s not really a bug.” And as frustrating as this attitude is, I can’t say I’m immune to it myself. I’m in ops, so the users of my software are usually highly technical. I can easily make them understand why a weird thing keeps happening, and they can figure out how to work around the quirk. But the “stop expecting that” attitude is wrong, and it hurts everyone’s productivity, and it makes software worse. We have to consciously reject it.

Quirks are bugs.

A bug is when the program doesn’t work the way the programmer expects.

A quirk is when the program doesn’t work the way the user expects.

What’s the difference, really? Especially in the open-source world, where every user is a potential developer, and all your developers are users?

Quirks and bugs can both be worked around, but a workaround requires the user to learn arbitrary procedures which aren’t directly useful, and which aren’t connected in any meaningful way to his mental model of the software.

Quirks and bugs both make software less useful. They make users less productive. Neglected, they necessitate a sort of oral tradition — not dissimilar from superstition — in which users pass the proper set of incantations from generation to generation. Software shouldn’t be like that.

Quirks and bugs both drive users away.

Why should we treat them differently?

Stop “Stop Expecting That”ing

I’ve made some resolutions that I hope will gradually erase the distinction in my mind between quirks and bugs.

When I hear that a user encountered an unexpected condition in my software, I will ask myself how they developed their incorrect expectation. As they’ve used the program, has it guided them toward a flawed understanding? Or have I just exposed some internal detail that should be covered up?

If I find myself explaining to a user how my software works under the hood, I will redirect my spiel toward a search for ways to abstract away those implementation details instead of requiring the user to understand them.

If users are frequently confused about a particular feature, I’ll take a step back and examine the differences between my mental model of the software and the users’ mental model of it. I’ll then adjust one or both in order to bring them into congruence.

Anything that makes me a stronger force multiplier is worth doing.