I spoke to a client a little while ago about facilitating a user jury. At first, I wasn’t sure what they were asking. Were the users of their product being put on trial? In fact, they were asking to have the product put on trial, in front of a jury of users. Not exactly a courtroom trial; they were asking to gather feedback from users by having them try to perform tasks with the product. This, I understand. It is one of the critical methods we routinely use at MAYA, and one of the most important things that can be done to improve the usability of a product or service. (Assuming you’re going to listen to the results and make changes based on what you learn.)
1000 names for the same thing
My first instinct, when asked about the user jury, was to correct the terminology. Surely, this exercise should be called a “usability test.” But an argument about semantics wasn’t going to improve the usability of the product. I needed to get over my desire to have everyone describe things in terms I personally approved and get on with the work, which is the thing that would actually make a difference. In fact, I warmed up to the term over time—it is a bit like a trial for the product; it may be deemed innocent or guilty (of poor usability) at the end; and 12 users is a decent number of evaluators. I still think “usability test” is a better-known and more appropriate name, but I helped with the “user jury,” and we collected valuable results that were fed into the design process to improve the product.
What’s important, in product design, is to use an iterative process and create a feedback loop that collects usability issues and uses them to drive improvements. (There should also be Quality Assurance—a way to collect bugs and performance issues to drive stability and speed improvements.) Although there are other methods (with varying costs and different characteristics), such as heuristics-based audits, cognitive walkthroughs, user modeling, and simply employing gifted designers in the first place, usability testing remains a core technique.
Just do user research, but call it whatever you want
Mark Hurst, CEO of Creative Good, prefers to call usability tests Listening Labs, to emphasize that you’re listening to users, and uses ad-hoc tasks rather than defining tasks beforehand. Our client chose to ask for a user jury. I’ve heard them called “user tests,” although we’ve tried to stay away from this term because it sounds like it’s the user being tested, not the product. People familiar with focus groups might say it’s a “one-on-one,” meaning that there’s a one-to-one ratio between the participant and the moderator (presumably the product is present as well). We’ve run exercises dubbed “the out of box experience” because we gave participants a boxed product—as if they had recently purchased it—and asked them to open it and begin using it, to test that experience. Still, it was a usability test at its heart; the parameters were just a bit different from those of other tests we’ve run. There are remote usability tests and remote unmoderated usability tests; there are micro tests (small numbers of participants and/or single tasks), and there are online A/B tests that may have thousands of participants.
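To give a flavor of how those large-scale A/B tests are typically evaluated, here is a minimal sketch of a two-proportion z-test in Python. The function name and the example numbers are hypothetical; real A/B testing platforms handle this (and subtler issues like peeking and multiple comparisons) for you.

```python
import math

def ab_test_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?

    conv_a/conv_b: number of conversions; n_a/n_b: participants shown
    each variant. Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: 2,400 participants per variant.
z, p = ab_test_significance(120, 2400, 165, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With thousands of participants, even small differences in task success or conversion rate can reach statistical significance—which is exactly why A/B tests trade the rich qualitative feedback of a moderated session for statistical power.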
I have to admit that imagining a user jury entertains me—I picture the (allegedly) unusable product on the witness stand, while a jury of users decides its fate. The mental image is amusing, even if it’s not really how a usability test works.
Fundamentally, it doesn’t matter what nomenclature (terminology, name, description…) you use; the best practice to ensure product usability is to collect feedback from users attempting tasks, and to make changes to the design based on what you learn.