3 Things That Make Encryption Easier

Almost everyone (especially in ops) knows they should be better about encrypting secret data. And yet most organizations have at least a few passwords and secret keys checked into Git somewhere.

The ideal solution would be for everyone at your company to use PGP all the time, but that is a huge pain. Encryption tools are annoying to use, and a significant time investment is required to learn to use them correctly. And if security is hard, people will always find a way to avoid it.

In the last few months, I’ve adopted 3 new technologies that make secure storage and exchange of secret information at least bearable.

1: Blackbox

StackExchange’s blackbox tool makes it easy to store encrypted data in a Git repository. First you need to import into your personal keyring all the PGP keys you want to grant access to. Then you initialize the blackbox directory structure:
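With blackbox installed (it's distributed as a set of shell scripts), initialization is a single command, run from inside your repo:

```shell
blackbox_initialize
```

This creates a `keyrings/` directory where blackbox keeps its GPG keyring and its lists of admins and registered files.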

Once you’ve initialized blackbox, you can start adding administrators, which are keys that will be granted access to the secret data in the repository:
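Each administrator is identified by the email address on their GPG key (the address below is just an example):

```shell
blackbox_addadmin alice@example.com
```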

Now you can start adding secrets securely:
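For instance, to encrypt a file of secrets (the filename is just an example):

```shell
blackbox_register_new_file secrets.txt
```

This encrypts `secrets.txt` to every admin's public key and checks `secrets.txt.gpg` into the repository in place of the plaintext.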

I really like how this tool gives my team a distributed, version-controlled repository of secret information. We can even give other teams access to the repository without worrying about exposing secrets!

My team uses this tool for shared passwords and SSL private keys, and it works great. Check it out.

2: Salt

At my company, we use Salt for config management. Like most config management systems, Salt lets you decouple the values in a config file from the file itself. You make a template of the config file that will appear on the node, and you put the values in a pillar (equivalent to a Chef databag, or a Puppet… whatever it’s called in Puppet).

So instead of storing a config file like this:
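Something like this, say (the filename and values are invented for illustration):

```ini
# /etc/myapp/app.conf
[database]
host = db1.example.com
password = sup3rs3cret
```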

You store a template like this:
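Continuing the invented example, a Jinja template that pulls its values from pillar:

```jinja
# salt://myapp/files/app.conf.jinja
[database]
host = {{ pillar['myapp']['db_host'] }}
password = {{ pillar['myapp']['db_password'] }}
```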

and a pillar (which is just a YAML file) like this:
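Again with invented names:

```yaml
# pillar/myapp.sls
myapp:
  db_host: db1.example.com
  db_password: sup3rs3cret
```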

Now suppose you don’t want to commit that super-secure password directly to your Salt repository. Instead, you can create a PGP keypair, and give the private key to your Salt server. Then you can encrypt the password with that key. Your pillar will now look like this:
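You encrypt the value with something like `gpg --armor --encrypt -r <the Salt server's key>`, then paste the ASCII-armored result into the pillar under the `#!yaml|gpg` shebang that tells Salt to run the GPG renderer (ciphertext elided here):

```yaml
#!yaml|gpg
myapp:
  db_host: db1.example.com
  db_password: |
    -----BEGIN PGP MESSAGE-----

    hQEMA...ciphertext elided...
    -----END PGP MESSAGE-----
```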

When processing your template on the target node, Salt will seamlessly decrypt the password for you.

I love that I can give non-admins access to our Salt repo, and let them submit pull requests, without worrying about leaking passwords. To learn more about this Salt functionality, you can read the documentation for salt.renderers.gpg.

3: SecretShare

Salt’s GPG renderer and blackbox are great ways to store shared secret data, but what about transmitting secrets to particular people? In most organizations, when passwords and such need to be transmitted from employee to employee, insecure methods are used. Email, chat, and Google docs are very common media for transmitting secrets. They’re all saved indefinitely, meaning that an attacker who gains access to your account can gain access to all the secret info you’ve ever sent or received.

To make transmitting secrets as easy and secure as possible, my teammate Alex created secretshare. It lets you transmit arbitrary secret data to others in your organization, and it has immense advantages over other systems:

  • Secrets are never transmitted or stored in the clear, so a snooper can’t even read them if they manage to compromise the Amazon S3 bucket in which they’re stored.
  • Secrets are deleted from S3 after 24-48 hours, so a snooper can’t go back through the recipient’s or sender’s communication history later and retrieve them.
  • Secrets are encrypted with a one-time-use key, so a snooper can’t use the key from one secret to steal another.
  • Users don’t need Amazon AWS credentials, so a snooper can’t steal those credentials from a user.

Right now, secretshare only exists as a command-line utility, but we’re very close to having a web UI as well, which will make it even easier for non-technical people to use.


Security’s worst enemy is bad UX. It’s critical to make the most secure path also the easiest path. That’s what these three solutions aim to do, and they’ve made me feel much more comfortable with the security of secret data at my company. I hope they can do the same for you.

Start a Paper Club!

A few months ago, a work friend and I were commiserating about how we never make time to read any research. There’s all this fascinating, challenging stuff being written, we agreed, and we’re missing it all.

When a few more coworkers chimed in, saying they’d also like to push themselves to read more academic literature, we realized we were onto something. The next day, the Exosite Paper Club was born.

It’s been really fun to organize and participate in Paper Club these last few months. I’ve learned a lot, not just about the fields on which our reading material focused, but also about my coworkers and my company. Now I want to share some of my excitement about this project and encourage you to start a Paper Club at your own company.

What Paper Club is and does

Paper Club is a group that meets every other week over lunch. We all prepare by reading a particular academic paper, and we discuss it in a free-form group session. Sometimes we teach each other about concepts from the paper, sometimes we brainstorm ways to apply ideas to our work at Exosite, and sometimes we just chat.

The paper for each session is selected by a participant from the previous session, and can be about any topic from project management to statistics to psychology to medicine. Readers of any skill level in the paper’s subject matter are warmly welcomed at meetings, but you can always skip a session if the topic doesn’t interest you. All we ask is that, when the material is hard for you, you push yourself and try to grasp it anyway.

We want a wide variety of participants, from DevOps to UI to sales & marketing, but we also want to read papers that are detailed enough to be challenging. To these ends, we have three guidelines for submitting a paper:

  1. Papers should be accessible. Try to pick papers at a technical level such that at least some of your peers will be able to understand most of the content. You want the session to be a discussion, not a lecture.
  2. Papers should be challenging. While accessibility is important, you don’t want your peers to have too easy a time. The best conversations happen when people are forced to push themselves a bit. It’s okay to suggest papers that are only accessible to those with a particular academic background, as long as you think the paper will create a good discussion with some subset of your coworkers.
  3. Papers should be deep. The best discussions tend to come from papers that dive pretty deep into a topic. Reviews and textbook chapters can be interesting, but we tend to prefer papers that go into detail on a specific topic.

What we’ve read so far

We’ve been doing Paper Club at Exosite since mid-November, and we’ve discussed 8 papers to date. I thought we’d be reading mostly computer science papers, but I couldn’t be happier with the variety we’ve gotten!

Here are a few of the papers that produced the most interesting discussions:

  • Confidence in Judgment: Persistence of the Illusion of Validity. This classic behavioral study from 1978 starts from the well-established observation that people (especially experts) are usually more confident in their judgments than they should be. The authors build a simple but powerful model of the mental processes responsible for this overconfidence, and present some suggestions for systematically curtailing it.
  • On Bullshit. If you’re like most engineers, you’re utterly allergic to bullshit. But have you ever thought about what makes bullshit bullshit? How it’s different from outright lying, and how it’s different from normal speech? This famous philosophical essay tries to answer these questions, and it provides some valuable insights for someone trying to excise bullshit from their life.
  • Why Johnny Can’t Encrypt. This 1999 analysis of PGP 5.0 usability raises some points that are crucial for anyone trying to design intuitive user experiences. The paper led us into a super productive critique of the metaphorical structure used in our own company’s documentation.

How it’s going

I have been really happy with the intellectual diversity of our Paper Club participants. Even when the paper under discussion is an especially wonky one, we get project managers, devs from all different software teams, salespeople, marketers, managers, and occasionally even company executives. Seeing all these people engage in thoughtful dialogue on such a wide variety of topics is inspiring. It takes me out of my DevOps bubble and reminds me that I work with some very interesting and smart folks.

I’d like to see how far we can stretch ourselves. Selfishly, I’d like us to read a math or linguistics paper some time, and help each other through it. I think that would be very rewarding.

Overall, I’m very glad we have a Paper Club here. I’d recommend it to anyone who likes people and/or learning. If you start one at your company, let me know how it goes!

The 10 Best Books I Read In 2015

I read 56 books in 2015, which is more than I’d read in the previous 5 years combined. Turns out books are pretty cool. Who knew?

Here are the 10 books I liked the most this year, in no particular order.

1. Metaphors We Live By (1980)

I got real into linguistics this year, and this book offers an interesting perspective on semantics. We usually think of metaphor as a poetic device. But this book argues that the whole human conceptual system is based on metaphor! According to George Lakoff, every concept we understand (short of concepts corresponding to our direct experience) is understood by analogy with a more concrete concept.

There are some very intriguing ideas in here. I didn’t necessarily buy (or even understand) them all, but I’m really glad I read this book.

2. The One World Schoolhouse: Education Reimagined (2012)

This thought-provoking book on education was written by the Khan Academy guy. He presents a lot of research pointing toward the hypothesis that self-paced “mastery learning” is much more broadly effective than the contemporary American model of arbitrarily delineated, one-speed-fits-all classes.

Discussing this book with my friends (who tend to be smart academic underachievers) really brought home the point that our education system underserves anyone who understands a concept more slowly, or more quickly, than the rest of the class.

3. The Orphan Master’s Son (2012)

I’m endlessly fascinated by North Korea, and this novel stoked my fascination. That alone would probably have been enough, but it’s also super well written. I found it beautiful and sad and gripping the whole way through.




4. The Immortal Life of Henrietta Lacks (2010)

The author of this book tracked down the family of the long-deceased woman whose incredibly robust tumor cells became the most widely studied strain of human tissue in the world. Her cells have been used by scientists to make countless discoveries in genetics and immunology.

Through interviews with Henrietta Lacks’ descendants, all of whom still live in abject poverty, Rebecca Skloot raises important and nuanced questions about the interplay between science and culture and race. It’d be hard to read this book and still think of science as “pure” or “objective.” Scientists aspire to objectivity, but they’re just as boxed in by their cultural preconceptions as anyone else.

5. Red Rising (2014)

This is a super fun young-adult novel about badass vicious teens trying to kill each other. The premise is pretty similar to that of The Hunger Games, but I found the character development and storytelling way better. And the second book in the trilogy, Golden Son, is awesome too! The third one is coming out very soon, and I can’t wait.

Don’t expect any deep truths or transcendent prose. This book is just really fun to read.

6. The Road (2006)

Hey, speaking of books that are really fun to read, this book is not one. It’s a painfully stark novel about a father and son trying to survive in post-apocalyptic North America. I definitely wouldn’t call it a “feel-good” book.

But despite the author’s unremittingly bleak vision, the relationship between the man and the boy (those are the only names given for the characters) is very touching. A friend of mine claims to use this book as a parenting handbook of sorts. I don’t know if I’d go that far, but I do see what he means.

7. History in Three Keys: The Boxers as Event, Experience, and Myth (1997)

My favorite non-fiction book of the year. On one level, this is a history book about the Boxer Rebellion: a grimly compelling episode in its own right. But, moreover, this book is about history itself. Paul A. Cohen describes the Boxer Rebellion through 3 different lenses – event, experience, and myth – each of which represents part of the way we engage with the past.

I came away from this book with a newfound appreciation for the role of the historian in creating history, and a newfound skepticism about the idea that history is composed of objective facts and dates.

[I did a lightning talk (transcript in the first slide’s speaker notes) relating what I learned from this book to post-mortem analysis in DevOps.]

8. The Martian Chronicles (1950)

I tried to read The Martian Chronicles in high school, and I was like “Pfft! This isn’t science fiction. The science doesn’t make any sense.” I’ve read a lot more sci-fi now, so I decided to give this classic another shot.

What I’ve learned since high school is that the sci-fi I like most isn’t about cool technology or mind-bending thought experiments (although those can add some nice seasoning to a story). It’s about humanity: what continues to define us as human, even when the things we think of basic to our humanity – Earth, language, gender, our bodies – are stripped away?

Bradbury knew exactly what he was on about. After reading these stories with the human question in mind, I finally understand why he’s been so influential in science fiction.

9. The New Jim Crow: Mass Incarceration in the Age of Colorblindness (2009)

I already had strong objections to the War on Drugs on account of the senseless imprisonment of people who’ve done nothing harmful. But this book shows how drug arrest quotas, mandatory minimum sentences, and felon profiling work together as an insidious system to maintain white supremacy in America.

I’d recommend this book to any American. We all need to understand that racial oppression didn’t go away on July 2, 1964; it just adopted more clever camouflage.

10. Anathem (2008)

This is now my favorite Neal Stephenson book. It brings his trademark mathiness and detail to a truly engaging story in a richly imagined universe.

Like most Neal Stephenson books, it’s not for everyone. A lot of time is spent on epistemological ruminations and physics. I love that shit, so I loved this book all the way from page 1 to page 937.


Welp, those are some books I loved this year.

But I want to see what you’re reading, so I can find more books to love. Friend-poke me on Goodreads!

Minding Your Pees And Queues

A couple weeks ago, my wife & I went to Basilica Block Party, a local music festival. It was a good time, and OH MANS you have to see Fitz & The Tantrums live. Their sax player is a hero unit.

Anyhoo, we walked over to the porta-potties between sets. The lines were about 8–10 people long. And my spouse suggested an intriguing strategy for minimizing her wait time. I will call this strategy The Strategy:

The Strategy: All else being equal, get in the line with the most men.

The reasoning behind The Strategy is obvious: women take longer to pee than men, so if 2 queues are the same length, then the faster-moving queue should be the one with fewer women. It’s intuitive, but due to my current obsession with queueing theory, I became intensely interested in the strategy’s implications. In particular, I started to wonder things like:

  • How much can you expect to shave off your wait time by following The Strategy?
  • How does the effectiveness of The Strategy vary with its popularity? Little’s Law tells us that the overall average wait time won’t be affected, but is The Strategy still effective if 10% of the crowd is using it? 25%? 90%?

And then I thought to myself “I could answer these questions through the magic of computation!”


Lately I’ve been seeing queueing systems everywhere I go, so I figured it’d be worthwhile to write a generic queueing system simulator to satisfy my curiosity. That’s what I did. It’s called qsim.

To quote the README,

A queueing system in qsim processes arbitrary jobs and is composed of 5 pieces:

  • The arrival process controls how often jobs enter the system.
  • The arrival behavior defines what happens when a new job arrives. When the arrival process generates a new job, the arrival behavior either sends it straight to a processor or appends it to a queue.
  • Queues are simply holding pens for jobs. A system may have many queues associated with different processors.
  • A queueing discipline defines the relationship between queues and processors. It’s responsible for choosing the next job to process and assigning that job to a processor.
  • Processors are the entities that remove jobs from the system. A processor may take differing amounts of time to process different jobs. Once a job has been processed, it leaves the queueing system.

qsim provides a framework for implementing these building blocks and putting them together, and it also provides hooks that can be used to gather data about a simulation as it runs. I’m really looking forward to using qsim to gain insight into all sorts of different systems.
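The five pieces above are easy to sketch. Here's a toy single-queue, single-processor version in Python (the names and structure are mine, not qsim's API):

```python
import random

def simulate(duration, arrival_rate, service_time, seed=0):
    """Toy queueing system built from the five pieces described above."""
    rng = random.Random(seed)
    t = 0.0
    queue = []        # the queue: arrival times of jobs still waiting
    busy_until = 0.0  # the processor: busy until this time
    waits = []
    while t < duration:
        t += rng.expovariate(arrival_rate)  # arrival process: Poisson arrivals
        # queueing discipline: FIFO -- assign queued jobs to the processor
        # whenever it freed up before this arrival showed up
        while queue and busy_until <= t:
            started = busy_until
            waits.append(started - queue.pop(0))
            busy_until = started + service_time(rng)
        # arrival behavior: go straight to the processor if it's free,
        # otherwise join the queue
        if busy_until <= t:
            waits.append(0.0)
            busy_until = t + service_time(rng)
        else:
            queue.append(t)
    return sum(waits) / len(waits)  # average time spent waiting in line
```

As a sanity check: for an M/M/1 queue with arrival rate 1 and service rate 2, queueing theory predicts an average wait of 0.5 seconds, and this sketch lands close to that.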

For now: porta-potties.

The porta-potty simulation

You’ll recall The Strategy:

The Strategy: All else being equal, get in the line with the most men.

To determine the effectiveness of The Strategy, I implemented PortaPottySystem using qsim. Here are some of the assumptions I made:

  • People arrive very frequently, but if all the queues are too long (8 people) they leave.
  • There are 15 porta-potties, each with its own queue. Once a person enters a queue, they stay in that queue until the corresponding porta-potty is vacant.
  • Shockingly, I couldn’t find any reliable data on the empirical distribution of pee times by sex, so I chose a normal distribution with a mean of 40 seconds for men and 60 seconds for women.
  • Most people just pick a random queue to join (as long as it’s no longer than the shortest queue), but some people use The Strategy of getting into the queue with the highest man:woman ratio (again, as long as it’s no longer than the shortest queue).
  • Nobody’s going number 2 because that’s gross.
  • Everyone is either a man or a woman. I know all about gender being a spectrum, and if you want to submit a pull request that smashes the gender binary, please do.
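Those assumptions are enough to sketch the whole thing. This isn't the actual PortaPottySystem code; it's a stripped-down Python re-creation, with the arrival rate and the 10-second standard deviation invented for illustration:

```python
import random

def simulate(strategy_prob, n_queues=15, arrivals=20000, seed=1):
    rng = random.Random(seed)
    last_finish = [0.0] * n_queues           # when each porta-potty next frees up
    pending = [[] for _ in range(n_queues)]  # (service_start, sex) of people in line
    t = 0.0
    waits = {'strategy': [], 'random': []}
    for _ in range(arrivals):
        t += rng.expovariate(0.3)            # ~1 arrival per 3.3 seconds (invented)
        sex = rng.choice('MW')
        for q in pending:                    # drop people who've entered a potty by now
            q[:] = [(s, x) for (s, x) in q if s > t]
        lengths = [len(q) for q in pending]
        if min(lengths) >= 8:
            continue                         # every line is too long; give up and leave
        shortest = [i for i, n in enumerate(lengths) if n == min(lengths)]
        if rng.random() < strategy_prob:
            kind = 'strategy'
            # The Strategy: among the shortest lines, join the one with the most men
            choice = max(shortest, key=lambda i: sum(x == 'M' for _, x in pending[i]))
        else:
            kind = 'random'
            choice = rng.choice(shortest)
        start = max(t, last_finish[choice])
        pee = max(5.0, rng.gauss(40 if sex == 'M' else 60, 10))
        last_finish[choice] = start + pee
        pending[choice].append((start, sex))
        waits[kind].append(start - t)
    return {k: sum(v) / len(v) for k, v in waits.items() if v}
```

Calling `simulate(0.01)` gives the average wait for the 1% of strategists versus everyone else.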

The first question I wanted to answer was: how does using The Strategy affect your wait time?

To answer this question, I ran a simulation where the probability of a given person deciding to use The Strategy is 1%. The other 99% of people simply join one of the shortest queues without regard to its man:woman ratio. I ran 20 simulations, each for the equivalent of 2 weeks, and came up with these wait time distributions:


More of a box-plot person? I hear ya:


The Strategy is definitely not a huge win here. On average, your wait time will be reduced by about 10–15 seconds (4–6%) if you use The Strategy. Still, it’s not nothing, right?

Now how does the benefit of using The Strategy vary with its popularity? This is actually really interesting. I never would have guessed it. But the data shows that you should always use The Strategy, even if everybody else is using it too.

Here I’ve charted average wait times against the proportion of people using the strategy. Colors are more prominent where the data set in question is large (and therefore heavily influences the overall average):


You’ll notice that the overall average (dark green) does not vary with Strategy popularity. This is good news, because otherwise we’d be violating Little’s Law, which would probably just mean our simulation was broken.

The interesting thing here is that the benefit of using The Strategy decreases pretty much linearly as its popularity increases, but at the same time there accrues a disadvantage to not using it. If everybody else in the system is using The Strategy, and you come along and decide not to, you can still expect to wait 10 seconds longer than everybody else. Therefore using The Strategy is unequivocally better than not using The Strategy.

Unless of course you don’t really care about those 10 seconds, in which case you should do whatever you want.

Wherein I Rant About Fourier Analysis

Monitorama PDX was a fantastic conference. Lots of engineers, lots of diversity, not a lot of bullshit. Jason Dixon and his minions are some top-notch conference-runners.

As someone who loves to see math everywhere, I was absolutely psyched at the mathiness of the talks at Monitorama 2014. I mean, damn. Here, watch my favorite talk: Noah Kantrowitz’s.

I studied physics in college, and I worked in computational research, so the Fourier Transform was a huge deal for me. In his talk, Noah gives some really interesting takes on the application of digital signal processing techniques to ops. I came home inspired by this talk and immediately started trying my hand at this stuff.

What Fourier analysis is

“FOUR-ee-ay” or if you want to be French about it “FOO-ree-ay” with a hint of phlegm on the “r”.

Fourier analysis is used in all sorts of fields that study waves. I learned about it when I was studying physics in college, but it’s most notably popular in digital signal processing.

It’s a thing you can do to waves. Let’s take sound waves, for instance. When you digitize a sound wave, you _sample_ it at a certain frequency: some number of times every second, you write down the strength of the wave. For sound, this sampling frequency is usually 44100 Hz (44,100 times a second).

Why 44100 Hz? Well, the highest pitch that can be heard by the human ear is around 20000 Hz, and you can only reconstitute frequencies from a digital signal at up to half the sampling rate. We don’t need to capture frequencies we can’t hear, so we don’t need to sample any faster than twice our top hearing frequency.
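You can see that half-the-sampling-rate (Nyquist) limit in action with a toy example: a tone above the Nyquist frequency doesn't disappear, it shows up at the wrong, "aliased" frequency. Here's a sketch in Python (a pure-Python DFT, so I'm keeping the sampling rate small):

```python
import cmath
import math

RATE = 500           # samples per second
N = RATE             # one second of signal, so DFT bin k corresponds to k Hz
NYQUIST = RATE // 2  # 250 Hz

def dft_peak(freq):
    """Sample a pure cosine at `freq` Hz for one second, then return the
    frequency below Nyquist with the largest DFT magnitude."""
    samples = [math.cos(2 * math.pi * freq * n / RATE) for n in range(N)]
    mags = {}
    for k in range(1, NYQUIST):
        coeff = sum(s * cmath.exp(-2j * math.pi * k * n / N)
                    for n, s in enumerate(samples))
        mags[k] = abs(coeff)
    return max(mags, key=mags.get)

print(dft_peak(130))  # 130: below Nyquist, so it's reported faithfully
print(dft_peak(360))  # 140: 360 Hz is above Nyquist and aliases to 500 - 360
```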

Now what Fourier analysis lets you do is look at a given wave and determine the frequencies that were superimposed to create it. Take the example of a pure middle C (this is R code):

library(ggplot2)

signal = data.frame(t=seq(0, .1, 1/44100))
signal$a = cos(signal$t * 261.625 * 2 * pi)
qplot(data=signal, t, a, geom='line', color=I('blue'), size=I(2)) +
    ylab('Amplitude') + xlab('Time (seconds)') + ggtitle('Middle C')


Pass this through a Fourier transform and you get:

fourier = data.frame(v=fft(signal$a))
# The first value of fourier$v will be a (zero) DC
# component we don't care about:
fourier = tail(fourier, nrow(fourier) - 1)
# Also, a Fourier transform contains imaginary values, which
# I'm going to ignore for the sake of this example:
fourier$v = Re(fourier$v)
# These are the frequencies represented by each value in the
# Fourier transform:
fourier$f = seq(10, 44100, 10)
# And anything over half our sampling frequency is gonna be
# garbage:
fourier = subset(fourier, f <= 22050)

qplot(data=fourier, f, v, geom='line', color=I('red'), size=I(2)) +
    ylab('Fourier Transform') + xlab('Frequency (Hz)') +
    ggtitle('Middle C Fourier Transform') + coord_cartesian(xlim=c(0,400))


As you can see, there’s a spike at 261.625 Hz, which is the frequency of middle C. Why does it gradually go up, and then go negative and asymptotically come back to 0? That has to do with windowing, but let’s not worry about it. It’s an artifact of this being a numerical approximation of a Fourier Transform, rather than an analytical solution.

You can do Fourier analysis with equal success on a composition of frequencies, like a chord. Here’s a C7 chord, which consists of four notes:

signal$a = cos(signal$t * 261.625 * 2 * pi) + # C
    cos(signal$t * 329.63 * 2 * pi) + # E
    cos(signal$t * 392.00 * 2 * pi) + # G
    cos(signal$t * 466.16 * 2 * pi) # B flat
qplot(data=signal, t, a, geom='line', color=I('blue'), size=I(2)) +
    ylab('Amplitude') + xlab('Time (seconds)') + ggtitle('C7 chord')


Looking at that mess, you probably wouldn’t guess that it was a C7 chord. You probably wouldn’t even guess that it’s composed of exactly four pure tones. But Fourier analysis makes this very clear:

fourier = data.frame(v=fft(signal$a))
fourier = tail(fourier, nrow(fourier) - 1)
fourier$v = Re(fourier$v)
fourier$f = seq(10, 44100, 10)
fourier = subset(fourier, f <= 22050)

qplot(data=fourier, f, v, geom='line', color=I('red'), size=I(2)) +
    ylab('Fourier Transform') +
    xlab('Frequency (Hz)') +
    ggtitle('C7 Chord Fourier Transform')


And there are our four peaks, right at the frequencies of the four notes in a C7 chord!

Straight-up Fourier analysis on server metrics

Naturally, when I heard all these Monitorama speakers mention Fourier transforms, I got super psyched. It’s an extremely versatile technique, and I was sure that I was about to get some amazing results.

It’s been kinda disappointing.

By default, a Graphite server samples your metrics (in a manner of speaking) once every 10 seconds. That’s a sampling frequency of 0.1 Hz. So we have a Nyquist Frequency (the maximum frequency at which we can resolve signals with a Fourier transform) of half that: 0.05 Hz.

So, if our goal is to look at a Fourier transform and see important information jump out at us, we have to be looking for oscillations that occur three times a minute or less. I don’t know about you, but I find that outages and performance anomalies rarely show up as oscillations like that. And when they do, you’re going to notice them before you do a Fourier transform.

Usually we get spikes or step functions instead, which bleed into wide ranges of nearby frequencies and end up being much clearer in amplitude-space than in Fourier space. Take this example of some shit hitting some fans:


If we were trying to get information from this metric with Fourier transforms, we’d be interested in the Fourier transform before and after the fan got shitty. But those transforms are much less useful than the amplitude-space data:


I haven’t been able to find the value in automatically applying Fourier transforms to server metrics. It’s a good technique for finding oscillating components of a messy signal, but unless you know that that’s what you’re looking for, I don’t think you’ll get much else out of them.

What about low-pass filters?

A low-pass filter removes high-frequency components from a signal (one way to implement it is with a Fourier transform: transform, discard the high-frequency bins, and transform back). One of my favorite takeaways from that Noah Kantrowitz talk was this: Nagios’s flapping detection mechanism is a low-pass filter.

If you want to alert when a threshold is exceeded — but not every time your metric goes above and below that threshold in a short period of time — you can run your metric through a low-pass filter. The high-frequency, less-valuable data will go away, and you’ll be left with a more stable signal to check against your threshold.
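As a sketch of why this helps (a simple moving average standing in for a proper low-pass filter; the threshold, window, and noise are all made up):

```python
import random

def crossings(signal, threshold):
    """Count how many times the signal crosses the threshold."""
    above = [s > threshold for s in signal]
    return sum(a != b for a, b in zip(above, above[1:]))

def low_pass(signal, window=5):
    """Crude low-pass filter: a trailing moving average."""
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A "flapping" metric: noise straddling an alert threshold of 100
rng = random.Random(42)
raw = [100 + rng.uniform(-5, 5) for _ in range(200)]

print(crossings(raw, 100))            # the raw signal crosses the threshold constantly
print(crossings(low_pass(raw), 100))  # the filtered signal crosses far less often
```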

I haven’t tried this method of flap detection, but I suspect that the low-sampling-frequency problem would make it significantly less useful than one might hope. If you’ve seen Fourier analysis applied as a flap detection algorithm, I’d love to see it. I would eat my words, and they’d be delicious.

I hope I’m wrong

If somebody can show me a useful application of Fourier analysis to server monitoring, I will freak out with happiness. I love the concept. But until I see a concrete example of Fourier analysis doing something that couldn’t be done effectively with a much simpler algorithm, I’m skeptical.


As Abe Stanway points out, Fourier analysis is a great tool to have in your modeling toolbox. It excels at finding seasonal (meaning periodic) oscillations in your data. Also, Abe and the Skyline team are working on adding seasonality detection to Skyline, which might use Fourier analysis to determine whether seasonal components should be used.

Theo Schlossnagle coyly suggests that Circonus uses Fourier analysis in a similar manner.

Devops needs feminism

I just returned to Minneapolis from Velocity NY bursting with ideas as always. The program was saturated with fantastic speakers, like my new ops crush Ilya Grigorik of Google. And my favorite part, as always, was the hallway track. I met dozens of brilliant, inspiring engineers. Allspaw, Souders, and Nash really know how to throw a conference.

One exhilarating thing about Velocity is the focus on culture as a driving force for business. Everybody’s in introspection mode, ready to break down their organizational culture and work new ideas into it. It reminds me of artificial competence in genetic engineering. It’s a joy to experience.

But despite all this wonderful cultural introspection, y’know what word you don’t hear? Y’know what drags up awkward silences and sometimes downright reactionary vitriol?

Feminism.
As long as we’re putting our tech culture under the microscope, why don’t we swap in a feminist lens? If you question any random geek at Velocity about gender, you can bet they’ll say “Women are just as good as men at ops,” or “I work with a female engineer, and she’s really smart!” But as soon as you say “feminism,” the barriers go up. It’s like packet loss: the crowd only hears part of what you’re saying, and they assume that there’s nothing else to hear.

We need to build feminism into our organizations, and Velocity would be a great venue for that. I’m just one engineer, and I’m not by any means a feminism expert, but I do think I can shed some light on the most common wrongnesses uttered by engineers when feminism is placed on the table.

Feminism != “Girls are better than boys”

Mention feminism to a random engineer, and you’re likely to hear some variation on:

I’m against all bias! We don’t need feminism, we just need to treat each other equally.

Feminism is often portrayed as the belief that women are superior, or that men should be punished for the inequality they’ve created. Feminism is often portrayed as man-hating.

Feminism is not that. Everyone defines it differently, but I like the definition at the Geek Feminism Wiki:

Feminism is a movement which seeks respect and equality for women both under law and culturally.

Equality. Everyone who’s not an asshole wants it, but we don’t have it yet. That’s why we need a framework in which to analyze our shortcomings, conscious and unconscious. Feminism can be that framework.

Imagine hearing an engineer say this:

Our product should perform optimally! We don’t need metrics, we just need to build a system that performs well.

Would this not be face-palmingly absurd? Of course it would. Metrics let you define your goals, demonstrate the value of your goals, and check how well you’re doing. Metrics show you where you’re losing milliseconds. Metrics are the compass and map with which you navigate the dungeon of performance.

Feminism is to equality as metrics are to performance. Without a framework for self-examination, all the best intentions in the world won’t get you any closer to an equality culture.

Wanting equality isn’t enough

When feminism comes up, you might hear yourself say something like this:

I already treat female engineers equally. Good engineers are good engineers, no matter their gender.

Hey great! The intention to treat others equally is a necessary condition for a culture of equality. But it’s not a sufficient condition.

This is akin to saying:

I’m really into performance, so our site is as fast as it can be.

You might be a performance juggernaut, but you’re just one engineer. You’re responsible for one cross-section of the product. First of all, one person doesn’t constitute a self-improving or even a self-sustaining performance culture. And even more crucially, there are performance mistakes you don’t even know you’re making!

Promoting equality in your organization requires a cultural shift, just like promoting performance. Cultural shifts happen through discourse and introspection and goal-setting — not wishing. That’s why we need to look to feminism.

If you start actively working to attack inequality in your organization, I guarantee you’ll realize you were already a feminist.

Feminism doesn’t require you to be ashamed of yourself

When your heart’s in the right place and you’re constantly examining your own actions and your organization’s, you start to notice bias and prejudice in more and more places. Most disturbingly, you notice it in yourself.

Biases are baked right into ourselves and our culture. They’re so deeply ingrained that we often don’t see or hear them anymore. Think anti-patterns and the broken windows theory. When we do notice our biases, it’s horrifying. We feel ashamed and we want to sweep them under the rug.

Seth Walker of Etsy gave an excellent talk at Velocity NY entitled “A Public Commitment to Performance.” It’s about how, rather than keeping their performance shortcomings private until everything’s fixed, Etsy makes public blog posts detailing their current performance challenges and recent performance improvements. This way, everyone at the company knows that there will be public eyes on any performance enhancement they make. It promotes a culture of excitement about improvements, rather than one of shame about failures.

When you notice biases in your organization — and moreover when others notice them — don’t hide them. Talk about them, analyze them, and figure out how to fix them. That’s the productive thing to do with software bugs and performance bottlenecks, so why not inequality?

Where to go from here

I’m kind of a feminism noob, but that won’t stop me from exploring it and talking about it. It shouldn’t stop you either. Geek Feminism is a good jumping-off point if you want to learn about feminism, and they also have a blog. @OnlyGirlInTech is a good Twitter account. I know there’s other stuff out there, so if you’ve got something relevant, jam it in the comment section!

EDIT on 2013-10-21: Here are some links provided in the comments by Alexis Finch (thanks, Alexis Finch!)

Ada Initiative – focused on OpenSource, working to create allies as well as support women directly

Girls Who Code – working with high school girls to teach them the skills and provide inspiration to join the tech fields

LadyBits – adding women’s voices to the media, covering tech and science [w/ a few men writing as well]

Reductress – satire addressing the absurdity of women’s portrayal in the media [The Onion, feminized]

WomenWhoCode & LadiesWhoCode & PyLadies – if you want to find an expert engineer who happens to also be of the female persuasion [to speak at a conference, or to join your team], these are places to find seasoned tech folks. They’re also places for those new to tech to get started learning, with chapters worldwide.
http://www.meetup.com/Women-Who-Code-SF/ & https://twitter.com/WomenWhoCode
http://www.ladieswhocode.com/ & https://twitter.com/ladieswhocode
http://www.pyladies.com/ & https://twitter.com/pyladies

Making a quick data visualization web-app with Shiny

Lately we’ve been getting concerned about our PHP error log. You know the story: errors start popping up, but they’re not causing problems in production, so you move on with your busy life. But you know in your heart of hearts that you should really be fixing the error.

The time has come for us to prune those errors, and I thought the first step should be, as always, to look at the data. Since it’s really the PHP developers who will know what to do with it, I thought it might be useful to make my analysis interactive. Enter Shiny: a web app framework that lets users interact directly with your data.

The first step was to massage my log data into a CSV that looks like this:
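The original sample isn’t shown here, so these rows are invented for illustration (only the davidbowie.php:50 error ID comes from later in the post), but the four columns match the description below:

```csv
date,error.id,error.count,access.count
2013-10-01,davidbowie.php:50,4,16103
2013-10-01,eno.php:13,12,16103
2013-10-02,davidbowie.php:50,7,15440
2013-10-02,eno.php:13,9,15440
```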


For each date, error.id indicates the file and line on which the error occurred, error.count is how many times that error occurred on that date, and access.count is the total number of hits our app received on that date. With me so far?

Now I install Shiny (sure, this is artifice — I already had Shiny installed — but let’s pretend) at the R console like so:

library(devtools)
install_github('shiny', 'rstudio')

And from the shell, I start a project:

mkdir portalserr
cd portalserr
cp /tmp/portalserr.csv .

Defining the UI

Now I write my app. I know what I want it to look like, so I’ll start with ui.R. Going through that bit by bit:

  headerPanel("PHP errors by time"),

I’m telling Shiny how to lay out my UI. I want a sidebar with form controls, and a header that describes the app.

    checkboxGroupInput("errors_shown", "Most common errors:", c(

Now we put a bunch of checkboxes on my sidebar. The first argument to checkboxGroupInput() gives the checkbox group a name. This is how server.R will refer to the checkbox contents. You’ll see.

The second argument is a label for the form control, and the last argument is a list (in non-R parlance an associative array or a hash) defining the checkboxes themselves. The keys (like davidbowie.php:50) will be the labels visible in the browser, and the values are the strings that server.R will receive when the corresponding box is checked.


We’re finished describing the sidebar, so now we describe the main section of the page. It will contain only one thing: a plot called “freqPlot”.

And that’s it for the UI! But it needs something to talk to.
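Assembled from the fragments above, a minimal ui.R might look like this. It’s a sketch, not the exact original file: the second checkbox entry is hypothetical, and it uses the pageWithSidebar() layout from the Shiny of that era.

```r
library(shiny)

shinyUI(pageWithSidebar(

  # Header describing the app
  headerPanel("PHP errors by time"),

  # Sidebar with the form controls
  sidebarPanel(
    checkboxGroupInput("errors_shown", "Most common errors:", c(
      # names are the labels shown in the browser;
      # values are what server.R receives when a box is checked
      "davidbowie.php:50" = "davidbowie.php:50",
      "eno.php:13"        = "eno.php:13"
    ))
  ),

  # Main section: just the one plot
  mainPanel(
    plotOutput("freqPlot")
  )
))
```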

Defining the server

The server goes in — surprise — server.R. Let’s walk through that.

logfreq <- read.csv('portalserr.csv')
logfreq$date <- as.POSIXct(logfreq$date)
logfreq$perthou <- logfreq$error.count / logfreq$access.count * 10^3

We load the CSV into a data frame called logfreq and translate all the strings in the date column into POSIXct objects so that they’ll plot right.

Then we generate the perthou column, which contains the number of occurrences of a given error on a given day, per thousand requests that occurred that day.

shinyServer(function(input, output) {

Okay, now we start to see the magic that makes Shiny so easy to use: reactivity. We declare the server application with shinyServer(), to which we pass a callback. That callback receives the input and output parameters.

input is a list-like object containing the values of all the inputs we defined in ui.R. Whenever the user messes with those checkboxes, the reactive blocks (what does that mean? I’ll tell you in a bit) of our callback will be re-run, and the names of any checked boxes will be in input$errors_shown.

Similarly, output is where you put the stuff you want to send back to the UI, like freqPlot.

But the coolest part of this excerpt is the last bit: renderPlot({. That curly-bracket there means that what follows is an expression: a literal block of R code that can be evaluated later. Shiny uses expressions in a very clever way: it determines which expressions depend on which input elements, and when the user messes with inputs Shiny reevaluates only the expressions that depend on the inputs that were touched! That way, if you have a complicated analysis that can be broken down into independent subroutines, you don’t have to re-run the whole thing every time a single parameter changes.

      lf.filtered <- subset(logfreq, error.id %in% input$errors_shown)

      p <- ggplot(lf.filtered) +
        geom_point(aes(date, perthou, color=error.id), size=3) +
        geom_line(aes(date, perthou, color=error.id, group=error.id), size=2) +
        expand_limits(ymin=0) +
        theme(legend.position='left') +
        ggtitle('Errors per thousand requests') +
        ylab('Errors per thousand requests')

      print(p)

This logic will be reevaluated every time our checkboxes are touched. It filters the logfreq data frame down to just the errors whose boxes are checked, then makes a plot with ggplot2 and sends it to the UI.

And we’re done.
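Pieced together, a complete server.R under the assumptions above would look something like this (a sketch of the structure described, not necessarily the original file line for line):

```r
library(shiny)
library(ggplot2)

# Load and prepare the data once, at startup
logfreq <- read.csv('portalserr.csv')
logfreq$date <- as.POSIXct(logfreq$date)
logfreq$perthou <- logfreq$error.count / logfreq$access.count * 10^3

shinyServer(function(input, output) {

  # Reactive block: re-evaluated whenever input$errors_shown changes
  output$freqPlot <- renderPlot({
    lf.filtered <- subset(logfreq, error.id %in% input$errors_shown)

    p <- ggplot(lf.filtered) +
      geom_point(aes(date, perthou, color=error.id), size=3) +
      geom_line(aes(date, perthou, color=error.id, group=error.id), size=2) +
      expand_limits(ymin=0) +
      theme(legend.position='left') +
      ggtitle('Errors per thousand requests') +
      ylab('Errors per thousand requests')

    print(p)
  })
})
```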

Running it

From the R console, we do this:

> runApp('/path/to/portalserr')

Listening on port 3087

This automatically opens up http://localhost:3087 in a browser and presents us with our shiny new… uh… Shiny app:

Why don’t we do it in production?

Running Shiny apps straight from the R console is fine for sharing them around the office, but if you need a more robust production environment for Shiny apps (e.g. if you want to share them with the whole company or with the public), you’ll probably want to use shiny-server. If you’re putting your app behind an SSL-enabled proxy server, use the latest HEAD from GitHub since it contains this change.

Go forth and visualize!