Benchmarking: how to provide valuable comparisons around UX research
Take snapshots of problematic outcomes to help show the impact of UX
"[Data] gains it's meaning from the comparison." That quote helped me understand what was missing from my re-design recommendations.
When re-designing a product or application, you might have a completely different understanding of what that means compared to other team members. You might imagine it as a complete overhaul, while others think it means "keep mostly everything but touch up a few places."
Change is difficult, especially when it involves significant effort. That's doubly true if some team members worked on the project's previous iteration: they may resist scrapping everything they've done.
But this is where that quote led to a breakthrough. You see, I was making the wrong argument. In these cases, you don't need to convince your team that your design solution is "Great."
You must convince them it will be "Better than before." To do that, we can use the power of benchmarking to make our point.
How benchmarking helps with the problem of qualitative comparison
With quantitative research, for example, we can quickly look at a metric at two points and make a statement like "We are performing 8% better now compared to 6 months ago."
On the other hand, if "4/5 users found this interface frustrating" 6 months ago, and it's down to "3/5 users" now, that means next to nothing. It could be that you improved the interface, or you just happened to recruit more resilient users.
This is where benchmarking can help you. UX benchmarking evaluates a product's user experience using metrics to gauge performance against a meaningful standard.
A single benchmark can offer some insights, while repeated benchmarks can reveal trends, patterns, and much richer insights.
The classic benchmarking example is the System Usability Scale (SUS), a 10-question user survey whose usability-related responses combine into a single score out of 100.
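The scoring arithmetic is standard: odd-numbered questions contribute (response - 1) points, even-numbered questions contribute (5 - response) points, and the sum is multiplied by 2.5. Here's a minimal Python sketch of that calculation; the participant responses are invented for illustration:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    1-5 Likert responses, in standard questionnaire order."""
    if len(responses) != 10:
        raise ValueError("SUS needs exactly 10 responses")
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered questions (index 0, 2, ...) are positively
        # worded and contribute (response - 1); even-numbered
        # questions are negatively worded and contribute (5 - response).
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# Invented responses from three hypothetical participants
participants = [
    [4, 2, 4, 3, 3, 2, 4, 3, 3, 2],
    [3, 3, 3, 4, 2, 3, 3, 4, 2, 3],
    [4, 2, 3, 2, 4, 2, 3, 2, 4, 2],
]
scores = [sus_score(p) for p in participants]
print(f"Scores: {scores}, average: {sum(scores) / len(scores):.1f}")
```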
Let's say you calculate the SUS scores for your participants, and they average out to 56. By itself, that number means little. However, you can extract insights by comparing it with other benchmarks.
For example, your usability is lower than the average SUS score of 68, which is often treated as the "passing" grade.
Or, compare it with a letter grade, using the curved grading scales published by quantitative UX research sites like MeasuringU.
Being able to make a statement like "Our SUS score rates as an F" or "Our SUS score is below the industry average" highlights a project's problems and helps change people's minds about why a re-design is needed.
In addition, once you start collecting more measurements after the re-design, you can begin to highlight trends over time.
For example, "6 months ago, our SUS score was 56. Today, it's 64." Just having those two numbers provides a compelling story about progress and improvements.
So, how exactly do you create benchmarks? It starts with translating your user findings into a metric businesses care about.
3/5 frustrated users means what, exactly?
Much of what we capture around task completions and pain points is often tied to emotions. If we sense that a particular user finding is causing issues, we often want to say something like, "3/5 users found this interface frustrating."
On the surface, it makes sense. Users were frustrated by various aspects of the interface, so we should work to make it easier to use.
Except that's not how businesses measure products. They use terms like Key Performance Indicators (KPIs) and metrics to measure financial or product success. A frustrated user is bad, of course, but businesses are left wondering what impact that frustration has on metrics or the bottom line.
This is why the first part of benchmarking involves translating your user findings into metrics the organization may be more familiar with. To help with this, it's best to remember two phrases:
To understand UX Metrics, you need the HEART of a Pirate (AARRR!)
To understand measurable outcomes, look where users complain
How to translate the (predicted) impact of your findings
The first phrase is a mnemonic for the metrics that UX (and usability) problems will likely impact. It stands for two frameworks:
Google HEART framework (Happiness, Engagement, Adoption, Retention, Task Success)
Pirate Metrics (Acquisition, Activation, Retention, Revenue, Referral)
I talked about this in greater detail in another article, but these 10 UX metrics greatly help translate the impact specific user findings may have on the business.
For example, these frustrated users may impact our adoption metrics because they're getting frustrated with the onboarding process and just abandoning the website.
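As a sketch of what that translation can look like in practice, here's a hypothetical mapping from user findings to the HEART/AARRR metrics they most plausibly affect; the findings and pairings are illustrative, not a standard taxonomy:

```python
# Hypothetical pairings of qualitative findings with the
# HEART/AARRR metrics they could plausibly move.
finding_to_metrics = {
    "frustrated during onboarding": ["Adoption (HEART)", "Activation (AARRR)"],
    "can't find the export button": ["Task Success (HEART)"],
    "abandons checkout at payment": ["Revenue (AARRR)"],
    "stops returning after week 1": ["Retention (HEART/AARRR)"],
}

for finding, metrics in finding_to_metrics.items():
    print(f'"{finding}" likely impacts: {", ".join(metrics)}')
```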
However, sometimes you lack these metrics (such as when designing an internal application). In these cases, the other phrase, finding where people complain, will help you figure out the impact.
After all, if this is an emotional topic for users, they will complain about it somewhere. Whether those complaints surface as customer support tickets, 1-star reviews, or YouTube videos about the customer's biggest gripes, that trail gives you a measurable outcome you can use to drive change.
Collect your baseline data and provide a comparison/prediction
Whichever approach you choose, the essential next step is to accurately measure what is currently happening.
For example, it might be as simple as visualizing the high volume of customer support tickets we currently receive about a particular issue. That data point by itself isn't convincing, but we can still provide a comparison with one of three values (see the sketch after this list):
A competitor: How does your current user experience around a particular feature compare with your competitors?
Industry standard: If our SUS score is 56, and the industry average SUS score is 68, point that out
Stakeholder-determined goal: If a project goal is to reduce customer support tickets by 15%, you can show where we stand at this point
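Here's a minimal sketch of that comparison step, continuing the SUS example; the competitor and stakeholder-goal numbers are hypothetical:

```python
baseline_sus = 56  # our measured baseline

# Three kinds of reference points; competitor and goal values are invented
references = {
    "competitor": 71,        # from a hypothetical competitive benchmark
    "industry average": 68,  # the widely cited mean SUS score
    "stakeholder goal": 75,  # a target agreed on at project kickoff
}

for name, value in references.items():
    print(f"vs. {name} ({value}): {baseline_sus - value:+} points")
```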
Showing this benchmark alongside our design recommendations allows us to make better comparisons through predictions.
For example, we can say, "We think that this user issue is directly related to customer support tickets, so if we fix this issue, we expect customer support tickets to go down."
We won't be able to estimate by how much, but by tying the most significant issues to our benchmark, we can emphasize which fixes matter most.
Collect another benchmark in the future to check your predictions
After re-designing an application, it's an excellent idea to re-collect your benchmark during your subsequent user testing. That way, you can see what's changed and interpret your findings with a comparison point.
For example, imagine saying, "We got 60 customer support tickets about this issue in May and then 32 customer support tickets for the past 3 months after the re-design went live."
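One caveat worth handling explicitly: those two counts cover different time windows (one month versus three), so a fair comparison normalizes them to a per-month rate first. A quick sketch using the numbers above:

```python
# Normalize both counts to tickets per month before comparing,
# since they cover different-length windows.
before_tickets, before_months = 60, 1  # May, before the re-design
after_tickets, after_months = 32, 3    # three months after launch

before_rate = before_tickets / before_months
after_rate = after_tickets / after_months
reduction = (before_rate - after_rate) / before_rate * 100

print(f"Before: {before_rate:.1f} tickets/month")
print(f"After: {after_rate:.1f} tickets/month")
print(f"Reduction: {reduction:.0f}%")
```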
Pointing to this not only shows why user research matters, but it's also a way of pointing out the positive impact you've had with your re-design.
We often don't do this enough: at most, we mention the positive things users said about our product during the current round of testing. However, opening with the improvements users have experienced since the last test, before jumping into the current usability problems, allows your team to see how much progress you're making.
The ROI of UX is tricky to explain, but benchmarking can help
Justifying the value of user research is a complex topic that people far more intelligent than me have tried to tackle. Quantifying the ROI of UX in terms of business metrics takes real time and effort.
However, benchmarking can help with this. By pairing our specific user research processes with these metrics snapshots, we can see what changes as we make design decisions and if we have a positive impact.
This doesn't give our research 100% certainty or statistical significance. However, showing our impact on everything from UX metrics to customer support tickets and complaints demonstrates how valuable our user research process can be to the larger organization.
So, if you are struggling with stubborn stakeholders on a re-design, try creating some benchmarks around the problems you're facing. Those benchmarks can help persuade stakeholders that things need to change.
I've revamped my Maven course to teach Data-Informed UX Design. If you want to learn this valuable skill, consider joining the waitlist.
Kai Wong is a Senior Product Designer and Data and Design newsletter writer. His book, Data-Informed UX Design, provides 21 small changes you can make to your design process to leverage the power of data and design.