My coworker Kelly shared a fantastic lesson recently:
Tell me the problem, not how you think I should fix it.
This is much harder than it seems at first glance. While talking with users, I often end up with all sorts of suggestions about how we could improve our products. Add a feature, change the layout, remove a roadblock, etc. Even if a user didn’t make the suggestion explicitly, I sometimes come away from an interaction thinking, If only we did this instead of that …
It’s really easy to think you know the solution to your problem. If I had a penny for every time I told a developer or designer, “Our users want X,” or “We could resolve the issue by doing Y,” well, you know. But the fix you come up with might not be the best fix for everyone using the product, or even the best way to reach your own goal. And pushing for small, specific fixes can mean missing improvements to the bigger picture.
Slowly, I am learning to recognize that instinct to come up with a fix and refocus on identifying the problem. Here are some things I ask myself to help with that:
- What assumptions did we make when designing the product about how it was going to be used, and what assumptions is this user making about how the product should work? Where do those assumptions clash?
- What was the user’s goal, and where did the product fail to help them meet that goal?
- What patterns or trends have I seen recently in the problems our users are telling us about (or the fixes they are asking for) that might indicate a bigger breakdown?
With those questions in mind, I can engage the people who use our products in conversations about the problems they are encountering, and communicate those problems to the people who create our products. And I’m happy to brainstorm and offer suggestions when it’s helpful, but unless I start by communicating the problem, we are all missing an important step along the way to the solution.
I came across an interesting logic puzzle in the New York Times today. You get three numbers and have to figure out the pattern:
If you like puzzles, take a moment and check out A Quick Puzzle to Test Your Problem Solving. Then continue on to see what this has to do with testing software …
I recently read an article on The Verge about how you can run Android apps on a Mac (or PC) using Chrome. That was all the invitation I needed to try it out. So off I went to find the three things I needed:
1. An APK
2. A PC, Mac, Linux machine, or Chromebook running Chrome version 41 or later
3. The ARC Welder app
I have a Mac, and you can download the ARC Welder app from the Chrome Web Store, so all that was left was the APK. I’ve never owned an Android device, so I wasn’t sure what an APK was, but I assumed it was some kind of file type for Android apps. (It turns out APK stands for Android application package.) The article I read said you can get APKs from the Google Play Store, but I didn’t have any luck finding them there. Luckily, I had another idea.
I was most interested in testing (maybe you guessed already) the WordPress app. Since it’s an open-source app, I headed to the WordPress Android app repo on GitHub. Its releases page lets you download the APK for every previous release. Bingo!
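If you’d rather grab an APK from the command line, here’s a minimal sketch using curl against GitHub’s standard release-download URL pattern. The release tag and asset filename below are hypothetical placeholders, not real release names; check the repo’s releases page for the actual ones before running the download:

```shell
# Build the download URL for a GitHub release asset.
# GitHub serves release assets at:
#   https://github.com/OWNER/REPO/releases/download/TAG/ASSET
repo="wordpress-mobile/WordPress-Android"
tag="v1.0"            # hypothetical release tag -- replace with a real one
asset="wordpress.apk" # hypothetical asset name -- replace with a real one
url="https://github.com/$repo/releases/download/$tag/$asset"

echo "$url"
# curl -L -o "$asset" "$url"   # uncomment to actually download (-L follows redirects)
```

The download line is left commented out so you can sanity-check the URL first.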
After installing ARC Welder and adding the WordPress APK, the app fired up and I was ready to go. Easy peasy. The biggest challenge now is figuring out how to interact with a touch app on my laptop. For example, I have to tap twice to paste, rather than using a keyboard shortcut. But it’s really fun to explore the app this way, especially as it’s my first time interacting with the WordPress Android app (which is a bit different from the iOS app that I use on a daily basis).
And here I am, composing my first blog post on the app. Pretty neat. 🙂
In the beginning, there was testing.
Thus begins James Bach and Michael Bolton’s essay on Exploratory Testing 3.0. The point they make is that, at the start, there wasn’t a clear distinction made between exploratory testing and automated testing. It was only after the rise of automated, scripted testing that the term “exploratory testing” came about to define human, interactive, ad hoc testing.
Bach and Bolton describe the evolution of exploratory testing over time. They note how the concept of agency came to characterize exploratory testing as opposed to scripted testing, and how they eventually decided to do away with the distinction altogether. That is, their new definition of testing is not exploration versus scripting — it characterizes scripting as just one technique through which we can explore and test our software:
Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes: questioning, study, modeling, observation and inference, output checking, etc.
As someone who loves tinkering with, exploring, and trying to break new things, I wholeheartedly support that perspective.