The Problem with Averages

If you’re interested in inclusive design, I’d recommend listening to “On Average” from the podcast 99% Invisible. From the episode:

So in 1926, when the army was designing its first-ever fighter plane cockpit, engineers measured the physical dimensions of hundreds of male pilots and used this data to standardize cockpit dimensions. Of course, the possibility of female pilots was never considered. Of course.

The size and shape of the seat, the distance to the pedals and the stick, the height of the windshield, even the shape of the flight helmets were all made to conform to the average 1920s male pilot. Which changed the way the pilots were selected.

You basically then select people that fit into that and then exclude people that don’t.

Designing for the average and excluding anyone who doesn’t fit that average isn’t, well, inclusive. The episode goes on to discuss how design (including in the military) has become more inclusive — but it’s still something we struggle with.

From what I’ve seen in software design and development, one of the challenges is deciding which user personas and scenarios will be considered in your design, and which issues are only edge cases. Where and how do you draw that line? It’s also a matter of remembering to think outside your own perspective and consider cases you haven’t already thought of. (As designer and fellow Automattician Mel Choyce pointed out, it’s about challenging your own biases by seeking out and really listening to users with different perspectives.)

As a linguaphile, I tend to notice when software design struggles or forgets to include non-English languages. For example, designs that aren’t responsive to languages that take up more space (ahem, German, ahem) or that don’t consider right-to-left languages end up excluding entire populations of potential users in other parts of the world. It can be hard for monolingual designers and developers to know how their products work in other languages, and it’s always satisfying when I have a chance to test a product and suggest language-based enhancements, so people can use our products in any language. It’s one small way I can help democratize publishing for users around the world.
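
One lightweight way to surface text-expansion problems before any real translations exist is pseudolocalization: mechanically stretching and decorating your English strings so overflow and truncation show up in every layout. Here’s a minimal TypeScript sketch of the idea — the function name, expansion factor, and accent map are illustrative, not from any particular library:

```typescript
// Pseudolocalize a UI string: swap in accented look-alikes and pad the
// length by ~40% to approximate German-style text expansion. Strings
// that overflow or truncate under this transform will likely break in
// real translations too.
const ACCENTED: Record<string, string> = {
  a: "à", e: "é", i: "î", o: "ö", u: "ü",
  A: "Å", E: "È", I: "Ì", O: "Ø", U: "Ù",
};

function pseudolocalize(text: string, expansion = 0.4): string {
  const accented = [...text].map((ch) => ACCENTED[ch] ?? ch).join("");
  const padLength = Math.ceil(text.length * expansion);
  // Bracket markers make hard-coded (unlocalized) strings easy to spot
  // in the running UI.
  return `[${accented}${"~".repeat(padLength)}]`;
}

console.log(pseudolocalize("Save changes"));
// => "[Sàvé chàngés~~~~~]"
```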

Multilingual Testing

As a polyglot and a former translator, I am a huge advocate for software localization, which also means testing software in multiple languages. Code that works flawlessly in English can totally break down in another language — whether it’s due to missing translations, translations that don’t fit into the space provided by the UI, or bugs that only pop up in other languages. (I found examples of all three while testing the WordPress apps today.)
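
The first of those failure modes — missing translations — is also the easiest to catch automatically. A minimal sketch, assuming translations live in flat JSON catalogs keyed the same way per locale (the file paths and key names here are hypothetical):

```typescript
import { readFileSync } from "fs";

// Report keys present in the base locale but missing from a translation.
// Assumes flat JSON catalogs like { "save_button": "Save changes", ... }.
function missingKeys(basePath: string, localePath: string): string[] {
  const base = JSON.parse(readFileSync(basePath, "utf8"));
  const locale = JSON.parse(readFileSync(localePath, "utf8"));
  return Object.keys(base).filter((key) => !(key in locale));
}

// Hypothetical catalog paths, for illustration only.
const missing = missingKeys("locales/en.json", "locales/de.json");
if (missing.length > 0) {
  console.warn(`de.json is missing ${missing.length} keys:`, missing);
}
```

A check like this only catches absent strings, of course; the other two failure modes (overflowing translations and locale-specific bugs) still need a human looking at the running app.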

But that’s not the only reason I like testing in other languages. As soon as I switch to one of my non-native languages, I’m forced to slow down and take a fresh look at the interface. Is everything where I expect it to be? Am I seeing what I’m supposed to see on this screen? Do all the buttons work the way they should? Working in another language can help you look at the software with a fresh set of eyes and find bugs that occur across languages — even in English.

Give it a try! Pick another language you speak — or one you’re trying to learn — and use it while you test. I’m trying to spend at least one day a month using WordPress.com and the WordPress apps in another language. It’ll help my testing, and I’m sure it’ll also help my language skills. 🙂

Testing as Exploration

In the beginning, there was testing.

Thus begins James Bach and Michael Bolton’s essay on Exploratory Testing 3.0. The point they make is that, at the start, no clear distinction existed between exploratory and automated testing. The term “exploratory testing” only came about after the rise of automated, scripted testing, to describe human, interactive, ad hoc testing.

Bach and Bolton describe the evolution of exploratory testing over time. They note how the concept of agency came to characterize exploratory testing as opposed to scripted testing, and how they eventually decided to do away with the distinction altogether. That is, their new definition of testing is not exploration versus scripting — it characterizes scripting as just one technique through which we can explore and test our software:

Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes: questioning, study, modeling, observation and inference, output checking, etc.

As someone who loves tinkering with, exploring, and trying to break new things, I wholeheartedly support that perspective.