Are you stopping one step short of driving change through user insights?
Why creating user insights, not presenting information, is critical to persuading teams.
Working in startups taught me the power of user insights, and why I had never been able to get approval for meaningful changes before.
You may have encountered this before: you run a user test, gather data, and present several key findings about what you've discovered.
While your team is on board with minor changes (like changing button text), they never seem to approve any design recommendations that could make a real difference.
What I discovered, after digging into Data Storytelling, was that what I found wasn't the issue: it was how I presented it to the team. Like many of you, I was presenting information when I needed to communicate insights.
To understand the difference, let's first talk about the Data pyramid and why you're likely stopping one step short of your team's needs.
DIKW pyramid, and why presenting information isn't enough
To explain why you're stopping one step short of providing actionable insights, we need to take a step back and talk about the DIKW pyramid, otherwise known as the Data pyramid.
This is a model of the steps people go through in knowledge management and learning, and it provides one of the critical insights into why presenting information isn't enough.
To illustrate, let's walk through a user research presentation about user frustration.
The first level is called "Data," though "Raw Data" is a better description. This is what you'd see immediately after collecting user testing responses: your spreadsheet of responses with essentially no analysis done to it.
For example, it might consist of:
Participant 1 said X and liked Y
Participant 2 said Z and liked X
Participant 3 said B and liked C
etc.
We wouldn't present this to stakeholders because they couldn't make heads or tails of it.
Instead, we would do additional work to get to the next level: Information. At this level, we've structured the raw data and done some basic analysis to draw initial conclusions.
For example, we might count how many participants said something and then present a finding like this:
"3 out of 5 users found using our product frustrating."
It sounds good and quickly summarizes our user research. There's just one problem: it doesn’t give any context or actionable next steps.
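To make the jump from Data to Information concrete, here's a minimal sketch in Python (the session notes and field names are hypothetical, standing in for your own spreadsheet) that tallies raw user-test responses into exactly this kind of summary finding:

```python
# A minimal sketch of the Data-to-Information step: aggregating raw
# user-test responses (the Data level) into a one-line summary finding
# (the Information level). All session data below is hypothetical.

raw_sessions = [
    {"participant": 1, "frustrated": True,  "note": "Got stuck on onboarding"},
    {"participant": 2, "frustrated": False, "note": "Liked the search feature"},
    {"participant": 3, "frustrated": True,  "note": "Confused by the second page"},
    {"participant": 4, "frustrated": True,  "note": "Too many form fields"},
    {"participant": 5, "frustrated": False, "note": "No major complaints"},
]

frustrated = sum(1 for session in raw_sessions if session["frustrated"])
total = len(raw_sessions)

# Prints: "3 out of 5 users found using our product frustrating."
print(f"{frustrated} out of {total} users found using our product frustrating.")
```

Notice that even a perfectly accurate aggregation like this still answers none of the "so what" questions; that gap is what the rest of this article addresses.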
Moreover, this sort of information is not persuasive, which is problematic because different teams are likely to have pieces of information at odds with each other. For example, Engineering might respond with another piece of information:
"To build a feature like that would take 3 sprints."
The product manager would then take those two pieces of information and make a decision based on them. Three sprints of effort might translate to $60,000 of resources, while the impact of "3 out of 5 users" may remain unclear to them.
This is why we must step beyond the Information level: we're not just informing our team of random facts. We're trying to persuade them to take action and make meaningful changes.
To do that, we need to turn our information into insights.
Turn Information into Knowledge by pairing it with context
To get to the next level on the Data pyramid, we need to do two things:
Provide relevant context for the information
Create actionable next steps with the information
To do this, we can use a combination of frameworks established by Brent Dykes, author of Effective Data Storytelling, and Cole Nussbaumer Knaflic, author of Storytelling with Data.
While each framework focuses on a different aspect of Data Storytelling, both touch on three general questions that need to be addressed:
Why does this matter: Who is your audience, and why should they care?
What should they do with this information?
What's at stake: what is the business impact or the consequences of inaction?
Answering these questions is critical to understanding the context needed around the information and creating those actionable next steps.
However, figuring out on your own which information can become insightful can be tricky. Luckily, Brent Dykes provides six criteria (which pair with these three questions) to evaluate your information and see whether it can become an insight.
Valuable: Is the perceived upside worth the downside of changing an existing design/product?
Relevant: Is the insight meant for the intended audience, or is it more general?
Practical: Is your recommendation feasible or realistic to implement?
Specific: Do you have sufficient detail to drive immediate action?
Concrete: Is there a concrete outcome or way of measuring success?
Contextual: Is enough context provided for audiences to understand it?
Let's take the following finding and analyze it against these criteria to show how we can improve this piece of information.
Example: "3 out of 5 users found using our product frustrating."
Why should your audience care? (Valuable/Relevant)
One of the problems with the finding above is that we still need to do the work to translate it into terms our audience understands. Our Product Manager, for example, doesn't analyze findings and make decisions based on how users feel.
Their primary responsibilities often revolve around budget, timeline, and project progress/metrics. As a result, it's unclear how making the product less frustrating will be helpful.
After all, it's great if users are less frustrated with the product, but will this investment significantly increase engagement metrics? Or will spending this effort result in only a 0.25% increase in the metrics we care about?
However, that's not the worst problem with the finding.
What should they do with this information? (Practical/Specific)
The worst issue with the finding is that there's no apparent action to take around it. We haven't identified what is frustrating about using our product, so it's hard to point to a solution or design recommendation that would fix it.
Without this context, it's impossible to gauge how much effort your design solution may take (and how practical it is). After all, if it involves changing the entire user workflow or making a significant revision across multiple pages, your team may be less likely to approve it than a fix for one specific bug.
Identifying the scope of the effort and the action needed is necessary, as only with it can your team make informed decisions about your suggestion.
However, there's one more aspect missing from this: the consequences of inaction.
What is the business impact or the consequences of inaction? (Concrete/Contextual)
You won't always know the business impact of these findings, which is fine. However, what's missing from this finding, and what's much more critical, is the stakes.
If your team ignores making this change, will it result in a few users grumbling about the problem? Or is it going to result in many people abandoning the product?
Identifying the consequences of inaction is necessary for your team's decision on which findings to move forward with.
With these criteria in mind, let's look at how we might improve this piece of information, with context, to become an insight.
Improving our findings
Let's see how we might improve our initial example to be more insightful.
"3/5 users expressed frustration with the 2nd page of the onboarding process. We suspect this may explain our high abandonment rate after creating an account, and we need to simplify the steps taken on this page to address this."
We've made this information more specific and actionable by mentioning where the frustration occurs. In addition, we've also made it much more concrete and relevant by tying our user findings to an existing problem around metrics that our team knows about.
Is this a perfect insight that can cross the Information-Knowledge gap? Maybe. You'll have to consider your audience and iterate on your findings with others to figure that out for yourself. But taking this approach helps you in one specific way: it helps you create insights based on action.
Prioritize around actionable insights instead of impact
Using priority to evaluate and rank user research findings hasn't always sat well with me.
That's because I know that if I mark a finding as "Low priority" or even "Medium priority," it will likely be ignored or go straight into the trash.
Understandably, we can't always act on every user finding we've observed. However, by ranking findings this way, we often lose the context we'd need if we ever revisit them.
For example, if you ran into a "medium priority" issue in your first round of user testing that is still around in the second and third rounds, you may want to bring it up with your team.
This is why figuring out which information can be turned into insights, and what actions need to be taken, can be a much better approach than ranking user research by priority.
I learned this from startups because, 80% of the time, we were “putting out fires.” If a finding wasn’t a critical priority, it was likely never going to be addressed. At the same time, though, these findings often indicated much deeper problems that needed to be resolved.
This is what digging deeper and creating user insights can offer you. By taking that extra step, you can understand which information can and should be turned into user insights, giving you a higher likelihood of driving action, informing meaningful decisions, and getting stakeholder approval.
So, if you’re finding yourself in this holding pattern of only putting out fires and having most of your user research go in the trash, try digging a little bit deeper and uncovering user insights.
Doing so may allow you to have a greater impact as a designer and design meaningful changes.
I’m creating a Maven course on Data Storytelling for Designers! If you’re interested in learning (or would like to provide feedback), consider filling out this survey.
Kai Wong is a Senior Product Designer and Data and Design newsletter writer. His free book, The Resilient UX Professional, provides real-world advice to get your first UX job and advance your UX career.