How to avoid being intimidated by quantitative UX metrics
How to filter out quantitative metrics that don’t matter
I’ve been intimidated by data several times in my UX career, but I never ran into analysis paralysis until I started looking at Google Analytics data.
The sheer amount of information, combined with my unfamiliarity with making sense of the data, left me frustrated about where to start.
And I’m not alone in this. According to John Ciancutti, Chief Product Officer at Coursera,
“The tension [between design and data] is natural because it’s like ‘I don’t understand, it’s foreign, I’m not good at it’… As a designer, you are probably more capable than you recognize to raise great points around data, but you don’t know how to think about it yet because it’s not familiar.”
But there’s one thing that can help you immensely as long as you keep it in mind:
The goal of quantitative data is to answer specific questions about what’s happening in the User Experience.
If you know what you’re looking for, you can probably filter out most of the data that presents itself.
To understand these specific questions, let’s talk about a framework that uses both quantitative and qualitative data: UX Optimization.
Understanding UX Optimization
UX Optimization, by Craig Tomlin, is a process that combines the power of behavioral quantitative data with qualitative data coming from user testing to improve websites.
This process consists of 4 steps:
Build appropriate user personas
Check UX behavioral metrics (from tools like Google Analytics)
Do user testing
Compile analysis and design recommendations
To understand why we need to consult Behavioral metrics, let’s consider a scenario.
Imagine you’re choosing between two design alternatives, one catering to newer users and the other towards experienced users. Which one do you choose?
In most cases, you choose the one that serves your primary users. But to figure that out, you need to know how many users fall into each category. That’s why you would look at UX behavioral metrics.
Behavioral UX Data is a signpost for WHAT is happening on a website, from types of interactions to the number of users. But when we establish a user persona and a research question, we also establish the specific type of quantitative data we want to look at.
That lets us do something essential: identify which of the 4 types of behavioral metrics we’re looking for.
The 4 types of UX Behavioral data
When it comes to UX behavioral data, most of the data can be grouped into 4 different categories.
Acquisition (PPC Keyword Data)
Conversion (Clicks, sign-ups, downloads)
Engagement (Bounce Rate, Time on Page)
Technical (Visits by browser, screen resolution, etc.)
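The grouping above can be sketched as a simple lookup that filters a pile of metrics down to the one category your research question cares about. This is a minimal sketch in Python; the metric names are illustrative, not actual Google Analytics fields:

```python
# Illustrative grouping of common analytics metrics into the 4 behavioral
# categories. Metric names are examples, not real Google Analytics fields.
BEHAVIORAL_CATEGORIES = {
    "acquisition": ["ppc_keywords", "referral_source", "search_terms"],
    "conversion": ["clicks", "sign_ups", "downloads"],
    "engagement": ["bounce_rate", "time_on_page", "pages_per_session"],
    "technical": ["visits_by_browser", "screen_resolution", "page_load_time"],
}

def metrics_for_question(category: str) -> list[str]:
    """Return only the metrics relevant to one research-question category."""
    return BEHAVIORAL_CATEGORIES.get(category, [])

# e.g. a question about navigation confusion points to engagement metrics
print(metrics_for_question("engagement"))
```

Framing it this way makes the filtering step explicit: you name the category first, and everything outside it drops away.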
Depending on your research question, you’ll likely focus on only one category, but let’s go through a quick overview of each.
Acquisition-based data is used to understand where your users are coming from and how they decided to visit your website or download your app. For example, it can tell you which keywords or ads lead users to your site, or what they searched for to find it.
This tends to be the realm of marketing and SEO. Still, there are some niche uses: if a keyword highlights a specific page (e.g., “yoursitename login help”), that might suggest users can’t find things through your navigation.
Conversion metrics are popular with the business side because they tie directly to profit.
Converting readers to users, or users to customers, is one way businesses track performance and evaluate how they’re doing.
In addition, if you’ve ever had to redesign a website, there’s a good chance a failing conversion metric was at the core of that decision.
While we may hear from users that they find things confusing, outdated, or frustrating, a business is more likely to invest time, money, and effort when a specific conversion metric is failing.
However, this tends to be less useful for user research because these metrics are usually known (and a driving force).
Instead, this tends to be supplemental evidence: if a user mentioned something in a user interview or survey, we could examine the metric as supporting evidence.
UX practitioners probably pay the most attention to engagement, because these metrics reflect how users and customers actually interact with the product.
Most UX KPIs that you’ve read about are going to be based on engagement metrics, which is why this category can be incredibly useful to look at.
User research questions can range from how many users follow a specific navigation pattern (behavioral flow maps), to what the typical user does to find something, to which pages or navigation are misleading (high bounce rate), to how the fold affects your information design (via scroll maps or heatmaps).
You have to remember, though, that you shouldn’t use this information by itself to draw conclusions.
Seeing that users spent 8 minutes on a page isn’t revealing on its own: maybe the page requires a lot of decisions, or maybe users are struggling with a UX issue.
You can’t say for sure until you user test.
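To make that concrete, here’s a minimal sketch, with made-up session data, of how two common engagement metrics are computed. Notice that the numbers alone can’t tell you whether long page times mean engaged reading or confused users:

```python
# Hypothetical session records: (pages_viewed, seconds_on_landing_page).
# The data is invented purely for illustration.
sessions = [(1, 480), (1, 500), (4, 120), (3, 95), (1, 510), (5, 60)]

# Bounce rate: share of sessions that viewed only one page.
bounces = sum(1 for pages, _ in sessions if pages == 1)
bounce_rate = bounces / len(sessions)

# Average time spent on the landing page across all sessions.
avg_time = sum(seconds for _, seconds in sessions) / len(sessions)

print(f"bounce rate: {bounce_rate:.0%}, avg time on page: {avg_time:.0f}s")
# → bounce rate: 50%, avg time on page: 294s
```

A 50% bounce rate with nearly 5-minute page times is ambiguous on its own: it could mean users found everything they needed on one page, or that they gave up. That’s the gap user testing fills.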
Lastly, there are technical metrics. This is mainly useful for research questions in a couple of different areas:
Mobile vs. Desktop
Average page loading time
Site speed overview
These are often not the focus of a research question but rather a way of clarifying issues or other areas you should be paying attention to. For example, you may want to think about the resolution of your design, or whether you should focus on the mobile version first.
But these things are often metrics that you should bring up with your team.
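As a quick illustration with made-up numbers, computing device share from exported visit counts is a small calculation, and it’s often enough to settle a mobile-first discussion:

```python
# Hypothetical visit counts by device, as you might export from an
# analytics tool. Names and numbers are invented for illustration.
visits = {"mobile": 6200, "desktop": 3100, "tablet": 700}

total = sum(visits.values())
shares = {device: count / total for device, count in visits.items()}

# Print device shares from largest to smallest.
for device, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{device}: {share:.0%}")
# A clear mobile majority would argue for designing mobile-first.
```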
That means that, of the 4 types of behavioral UX metrics, you’re probably only paying close attention to one. Knowing that goes a long way toward managing your expectations.
Managing your expectations
Large amounts of data can be intimidating, especially when you have almost no indicators on where to start.
But keep in mind that unless you’re running something like an A/B test (a highly controlled quantitative experiment), you’re not trying to gather all your relevant data at this stage: you will still need to do user testing and conduct interviews.
The reason is that even though quantitative data can tell you What is happening, it can’t tell you Why.
To answer that, you still need user testing and interviews, as you always have. But quantitative data can inform them: you might ask users whether there are problems in a workflow, run a think-aloud test on problematic tasks, or design tasks around confusing navigation.
So before you balk at the thought of digging around with analytics data, take a moment and think about what you’re trying to accomplish with your user research. This will help you filter what data you’re searching for and help guide what you should ask during your user interviews.
And having that additional dimension of data can improve both your user research and designs.