Wrangling Excellence

Today marked a big change for me at work.

For the past 4+ years, I worked as a Happiness Engineer supporting WordPress.com and the WordPress apps. I spent roughly the first two years working in the WordPress.com Support Forums, and I found that I loved providing public support and troubleshooting the incredible range of issues that arose there. I spent the past two years supporting the WordPress apps, and over time I got more and more involved in testing them as well.

As I spent time developing my own manual testing approach, working with beta testing communities, exploring the support/development feedback loop, and helping my coworkers build their troubleshooting skills, I also kept an eye on a team being formed at Automattic around automated testing and bug prioritization. I worked with and learned from them as more discussions arose around testing and quality within our fast-paced, distributed environment. And although I enjoyed helping people use WordPress, I discovered that my favorite work was helping development teams understand our customers’ needs and identify which issues most needed their attention.

Earlier this year, I finally decided to build on my existing coding skills to try my hand at automated testing. With some guidance, I developed the first suite of UI tests for a new editor (codenamed “Aztec”) for the WordPress for iOS app. Later I added a suite of UI tests for the same editor for WordPress for Android. I also worked with a coworker to automate screenshots of the WordPress.com signup flow in multiple languages, to help our internationalization team review those localized flows. Some of this work was part of a trial, as I applied internally to change roles.

That work and study paid off, and today was my first day as an Excellence Wrangler. I’ll be automating tests, doing manual testing, triaging bug reports, and generally helping our support and development teams communicate and prioritize to create the best experience possible for our customers.

And if that excitement wasn’t enough, I also had a delivery that I’d been waiting on since I hit my four-year anniversary at Automattic — a new laptop with the WordPress logo:

[Photo: the new laptop with the WordPress logo]


Sharing User Feedback from App Reviews

Over the past year, I’ve been working fairly closely with the mobile app team at Automattic. As I got more involved, I tried to help close the feedback loop with the team by taking advantage of the feedback our users were already giving us — so of course I took a look at our app reviews.

It’s hard to look through app reviews. I mean, on one hand, it’s just emotionally draining to be hit with that barrage of unmediated criticism (although the unmediated praise is wonderful!). But it’s also hard to grok all that feedback when it’s just a stream of comments. So I decided to collect that feedback and present it to the team in an easier-to-digest format. I’ve now gone through that process several times and want to share it in case it helps you process reviews or other feedback from your customers.

Collect and Organize the Feedback

The first step is to gather up all the feedback. I used App Annie, since our mobile app team was already using it. I decided to identify all the reviews from the latest version of our app (in my case, it was the WordPress app on two platforms, iOS and Android) and export them. This conveniently dumped all of the ratings, reviews, and user details into CSV files (one per platform).

Then, I set up a spreadsheet for each platform (there’s a small scripting sketch after this list) and focused on a few key details:

  • The user’s rating (from 1 to 5)
  • The review’s title and content (adding a translation where the review was in another language)
  • The main issue in the review
  • Any secondary issues or notes about the review
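If your export lands in CSV files like mine did, a short script can set up those worksheets for you. Here’s a minimal Python sketch; the export column names (Rating, Title, Review) and the file names are assumptions about what the App Annie export looks like, so adjust them to match your own files.

    import csv

    # Assumed column names in the exported CSV -- check your own export's
    # headers and adjust these to match.
    RATING_COL = "Rating"
    TITLE_COL = "Title"
    REVIEW_COL = "Review"

    def prepare_worksheet(export_path, worksheet_path):
        """Copy the key review details into a new CSV, with empty columns
        ready for the main issue and secondary notes."""
        with open(export_path, newline="", encoding="utf-8") as src, \
             open(worksheet_path, "w", newline="", encoding="utf-8") as dst:
            reader = csv.DictReader(src)
            writer = csv.writer(dst)
            writer.writerow(["Rating", "Title", "Review", "Translation",
                             "Main issue", "Secondary issues / notes"])
            for row in reader:
                writer.writerow([
                    row.get(RATING_COL, ""),
                    row.get(TITLE_COL, ""),
                    row.get(REVIEW_COL, ""),
                    "",  # translation, added by hand where needed
                    "",  # main issue keyword, assigned during analysis
                    "",  # secondary issues or notes
                ])

    # One worksheet per platform, e.g.:
    # prepare_worksheet("ios_reviews.csv", "ios_worksheet.csv")
    # prepare_worksheet("android_reviews.csv", "android_worksheet.csv")

Run it once per platform, then fill in the translations and issue keywords by hand as you read.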

How did I identify the main and secondary issues? A little analysis.

Analyze the Feedback

To find the main and secondary issues, I read every single review from that version of the app. I picked keywords to describe the main issues users raised and assigned one of those keywords (categories) to each review.

If you’ve ever coded survey responses, this is a similar process. If this is the first time you’ve done this, here are some tips:

  • Read through all or a representative sample of the reviews. (For your first time, and especially for an unfamiliar product, you might need to read all of them.)
  • As you read, make notes about the topics or keywords that come up (more is better at this stage).
  • Pare down your list to a subset of more general keywords. For example, for the WordPress app I used keywords like “Editor,” “Login,” and “Media upload.”
  • Go through the reviews one by one and assign a keyword for the main issue the user described.
  • If the user mentioned more than one issue, or if there’s additional detail you think will be helpful later on, add it to the field for secondary issues or notes. For example, I found a number of reviews with the “Editor” keyword that specifically mentioned “limited features” in the editor, so that detail went into the second field, letting me keep track of that sub-issue.

Pro tip: To keep my sanity, I worked from 1-star reviews to 5-star reviews, so the toughest criticism came when I had the most energy and the work got easier and more cheerful as I went.

Once I was done assigning keywords to each review, I organized the spreadsheet by those keywords so I could see which issues were most commonly reported. I made adjustments to the keywords, looked for subsets of related issues, and checked everything for consistency. Finally, I got ready to share my findings.
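If your worksheet is a CSV rather than a spreadsheet app, that grouping step can be scripted too. This is a rough sketch that assumes the worksheet layout from the earlier example (Rating and Main issue columns): it tallies how many reviews carry each keyword and the average star rating for those reviews, most common keyword first.

    import csv
    from collections import defaultdict

    def summarize_keywords(worksheet_path):
        """Count reviews per main-issue keyword and average their ratings."""
        ratings_by_keyword = defaultdict(list)
        with open(worksheet_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                keyword = row["Main issue"].strip() or "(uncategorized)"
                try:
                    rating = int(row["Rating"])
                except ValueError:
                    continue  # skip rows without a numeric rating
                ratings_by_keyword[keyword].append(rating)

        # Most frequently mentioned issues first.
        for keyword, ratings in sorted(ratings_by_keyword.items(),
                                       key=lambda item: len(item[1]),
                                       reverse=True):
            avg = sum(ratings) / len(ratings)
            print(f"{keyword}: {len(ratings)} reviews, average rating {avg:.1f}")

    # summarize_keywords("ios_worksheet.csv")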

Share the Feedback

I had a few self-imposed guidelines for what I wanted the team to get from this user feedback:

  • Praise for the things we are doing well
  • A clear picture of the top pain points our users experience
  • Suggestions for what action could have the biggest impact

Here’s a template showing how I organized my report (a small sketch for pulling together the overview numbers follows it):

Overview:
- Number of reviews
- Average rating
- How ratings are distributed (evenly spread? split between 1 and 5 stars?)

Highlights:
- Features or experiences that our users enjoy and appreciate
- 2-3 quotes from positive reviews

What did users mention in their reviews?
- The top three issues mentioned in reviews
- For each issue, an explanation of its impact (how many or what percentage of reviews mentioned it? what were the star ratings for those reviews?) and a little context about what exactly users discussed, plus your interpretation of the source of the problem
- Links to any open bug or enhancement issues the team is already tracking, or any ongoing work related to the issue

Suggestions for follow-up:
- One or two projects, or open issues in the bug tracker, that the team could make a top priority to help address this feedback
- Any other user feedback (for example, from customer support interactions) that could shed additional light on the feedback in the reviews
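The same worksheet can feed the overview and per-issue numbers. Here’s a minimal sketch under the same assumptions as the earlier ones (a CSV with Rating and Main issue columns); it prints the review count, the average rating, the star distribution, and the most-mentioned issues with the share of reviews that raised each one.

    import csv
    from collections import Counter

    def report_overview(worksheet_path, top_n=3):
        """Print the overview numbers and the most-mentioned issues."""
        ratings = []
        issues = Counter()
        with open(worksheet_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                try:
                    ratings.append(int(row["Rating"]))
                except ValueError:
                    continue  # skip rows without a numeric rating
                keyword = row["Main issue"].strip()
                if keyword:
                    issues[keyword] += 1

        if not ratings:
            print("No reviews found.")
            return

        print(f"Number of reviews: {len(ratings)}")
        print(f"Average rating: {sum(ratings) / len(ratings):.2f}")
        distribution = Counter(ratings)
        for stars in range(1, 6):
            print(f"  {stars}-star reviews: {distribution[stars]}")

        print(f"Top {top_n} issues:")
        for keyword, count in issues.most_common(top_n):
            print(f"  {keyword}: {count} reviews ({count / len(ratings):.0%})")

    # report_overview("ios_worksheet.csv")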

I shared this with our entire mobile app team (along with the spreadsheets with the raw data), inviting questions and discussion. Although we haven’t taken action on every single issue, it has led to some quick wins, reprioritizing, and planning ahead with our users in mind.

I hope this is useful to you and your team! If you try it out, let me know how it goes. And if you have ideas for how to improve this process, I’d love to learn from you.

The Problem with Averages

If you’re interested in inclusive design, I’d recommend listening to “On Average” from the podcast 99% Invisible. From the episode:

So in 1926, when the army was designing its first-ever fighter plane cockpit, engineers measured the physical dimensions of hundreds of male pilots and used this data to standardize cockpit dimensions. Of course, the possibility of female pilots was never considered. Of course.

The size and shape of the seat, the distance to the pedals and the stick, the height of the windshield, even the shape of the flight helmets were all made to conform to the average 1920’s male pilot. Which changed the way the pilots were selected.

You basically then select people that fit into that and then exclude people that don’t.

Designing for the average and excluding anyone who doesn’t fit that average isn’t, well, inclusive. The episode goes on to discuss how design (including in the military) has become more inclusive — but it’s still something we struggle with.

From what I’ve seen in software design and development, one of the challenges is deciding which user personas and scenarios will be considered in your design, and which issues are only edge cases. Where and how do you draw that line? It’s also a matter of just remembering to think outside your own perspective, to consider cases that you haven’t thought of already. (As designer and fellow Automattician Mel Choyce pointed out, it’s about challenging your own biases by seeking out and really listening to users with different perspectives.)

As a linguaphile, I tend to notice when software design struggles or forgets to include non-English languages. For example, designs that aren’t responsive to languages that take up more space (ahem, German, ahem) or that don’t consider right-to-left languages end up excluding entire populations of potential users in other parts of the world. It can be hard for monolingual designers and developers to know how their products work in other languages, and it’s always satisfying when I have a chance to test a product and suggest language-based enhancements, so people can use our products in any language. It’s one small way I can help democratize publishing for users around the world.

Using Mental Models for Troubleshooting

When a user reports an issue they’re having with the product you support, how do you know what to do next? How do you identify the source of the issue? How do you know where to start investigating?

I’ve mulled over these questions countless times as I tried to explain to coworkers how I troubleshoot. When someone describes an issue, it always seems like the possible causes just pop into my head, unbidden. But of course that isn’t it at all — I have gotten better at troubleshooting our products over time, and that didn’t happen by chance.

When I read Jim Grey’s post How to Hire an Entry-Level Tester, his key traits for testers meshed with how I understand troubleshooting, and this trait in particular stood out:

Create mental models:  Building a mental model of a system, even if it’s incomplete or partially inaccurate, helps a tester orient themselves to a problem and generate ideas on how to work through it.

When I look at a problem, I fit it into my mental model of the product and use that model to start investigating. On my team, I’ve started to explicitly discuss mental models, how to build and expand them, and how to use them to get better at supporting and troubleshooting our products. Rather than trying to summarize the things we’ve talked about on my team, I’ll share the introduction to mental models I wrote recently, as part of a troubleshooting training I’m developing for WordPress.com support.


Mental Models

A mental model is “an explanation of someone’s thought process about how something works in the real world” (Wikipedia). In other words, a mental model is how you understand or represent a real thing in a more simplified or abstract way.

Mental Model of a Bicycle

To get a better sense of mental models, let’s look at an example: Bicycles. Not everyone understands every part of a bicycle, but even if you just ride a bicycle now and then you probably have a concept of what it is and — at least to some extent — how it works. That is your mental model.

A user’s mental model

As a bicycle user, your mental model might be very simple — all you need to know are the parts you interact with or a general sense of what makes a bicycle different from, say, a tricycle or a car. In this simple mental model, you’ll see that the bicycle includes a frame, handlebars, a seat, and two wheels:

[Diagram: a simple bicycle showing a frame, handlebars, a seat, and two wheels]