RITE testing: how to test AI-driven products in a meaningful way
One of the best ways to go back to scrappy user testing in the AI era.

“User testing generates reports, not solutions.” Somewhere along the way, that statement became the way many businesses perceive UX research.
UX researchers have been among the casualties of the AI-integrated future. From flawed "competition" in the form of AI-based synthetic research to widespread budget cuts, many UX researchers struggle to justify their value.
But the core issue runs deeper than external threats. It stems from how user research itself has evolved.
User research has become too formalized.
The Problem With Formalized UX Research
UX research has, in many organizations, transformed into a siloed, overly formal process.
Whether it’s a siloed department that keeps insights to itself or third-party solutions like UserTesting.com readily available, the perception of UX research has shifted dramatically.
We’ve gone from UX research being “scrappy interviews with real users” to “formal, expensive studies required for legitimacy.” This formalization creates a cascade of problems.
First, the value is unclear. From an executive perspective, the return on investment becomes murky. Traditional usability tests can cost upwards of $20,000 per round, plus facility fees, participant recruitment, and prototype creation costs.
What do organizations get for that investment? Too often, they get a report. In many organizations, even huge, well-resourced ones, user research doesn’t generate solutions.
It generates presentations. You hire an outside consultant, test with 5–10 users, conduct extensive analysis, and at the end, you don’t have a changed product. You have a PowerPoint highlighting weaknesses, but it often lacks an implementation plan or clear next steps.
For many executives, that’s not worth the investment, especially compared to AI alternatives like “synthetic research.”
While synthetic research is deeply flawed, the difference in perception is clear: "Really cheap and fast AI-powered insights" versus "A formalized, expensive report with no action taken."
Then there’s the timing problem. And this is where things get truly backwards.
The Timing Trap, and Where UX Research Suffers
Because user testing is expensive, businesses often think: “Let’s only do it late in the process, when we have something polished to show.”
It makes some sense (why spend money testing a half-baked idea?), but it's precisely the wrong approach to research.
Instead of treating it as a discovery and iteration tool, it’s treated like an elaborate QA process. By the time testing happens, major decisions have already been locked in. The prototype is “done.” Changes would be costly and politically challenging.
So the research findings sit in a report, and the product ships essentially unchanged.
Enter RITE Testing
This is why there's a growing push towards scrappier, more iterative UX research. And at the forefront of this movement, particularly for AI-driven products, is RITE testing.
RITE (Rapid Iterative Testing and Evaluation) is a methodology proposed by Greg Nudelman, author of UX for AI and The $1 Prototype. It fundamentally reimagines user testing through three key modifications:
Smaller participant groups. Instead of targeting 5–10 participants per round, RITE involves just 3–4 participants per round, for 3–4 rounds.
Real-time iteration. RITE testing embraces changing the prototype on the fly. Between rounds (sometimes even between individual sessions), the team updates the prototype and fixes issues discovered during testing.
Building as you go. Rather than testing a finished prototype, the team builds it out iteratively, evaluating what users need, what they want, and, most importantly, whether they would actually buy it.
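The cadence described above (a few participants per round, with fixes applied between rounds) can be sketched as a simple loop. To be clear, this is illustrative scaffolding, not part of Nudelman's methodology; the session and fix functions are placeholders for real moderated sessions and real prototype edits:

```python
# A minimal sketch of the RITE cadence: small rounds, fixes in between.

def run_session(prototype, participant):
    """Placeholder: run one moderated session, return observed issues."""
    return [f"issue seen by {participant} on v{prototype['version']}"]

def fix_issues(prototype, issues):
    """Placeholder: update the prototype to address what was observed."""
    prototype["version"] += 1
    prototype["fixed"].extend(issues)
    return prototype

def rite(prototype, rounds=4, participants_per_round=3):
    for round_no in range(rounds):
        issues = []
        for p in range(participants_per_round):
            participant = f"P{round_no * participants_per_round + p + 1}"
            issues += run_session(prototype, participant)
        # Iterate between rounds (or even between individual sessions)
        # rather than saving everything for a final report.
        prototype = fix_issues(prototype, issues)
    return prototype

result = rite({"version": 1, "fixed": []})
print(result["version"])  # 5: the prototype changed after every round
```

The point of the loop structure is that the prototype is mutable state, updated after every round, rather than a fixed artifact that only gets judged at the end.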
Breaking With Tradition (For Good Reason)
This is a much leaner version of user testing that seems to break what we think of as the fundamentals of UX research:
Showcase a fully imagined high-fidelity prototype
Stick to scientific rigor through unchanging test conditions
But here’s the uncomfortable truth: in this era of design efficiency, scientific rigor sometimes matters less than showing results quickly. Unless you work in highly regulated industries like healthcare or finance, perfect methodology is becoming less valued than actionable insights.
Instead, RITE testing focuses on user research’s core strengths: identifying use cases and determining whether users will actually buy the product.
Why Use Cases Matter More Than Ever
In this AI-integrated future, one thing has become critical: use cases matter more than ever.
It may sound appealing (and cost-effective) to replace highly technical, highly paid experts with AI, but that’s often a path to failure because the fundamental use case isn’t there.
What you’re paying for with a highly paid expert isn’t just their ability to process information — it’s their ability to make high-value decisions and judgments on the fly. Something computers still struggle with.
This is where RITE testing excels:
Co-creating the use case with users
Understanding the desirability of a product
Understanding if users know how to use it
Assessing whether something is not just worth building, but worth buying
Here’s how to get started.
How to Get Started With RITE Testing
The premise behind RITE testing is simple: you often don’t have a completed prototype. You have a couple of screens and alternatives based around a single use case.
For example: “Imagine you’re trying to eat healthier, and you want to find an app that allows you to track calories quickly. You open up this app and see this screen. Tell me your first impression.”
But the real magic of RITE testing isn’t just gauging reactions to what you’ve built. It’s asking the follow-up question: “Great, where would you go next?”
Often, users describe a step in their workflow that you don’t yet have a screen for. And that’s perfectly fine, because it’s actually valuable information.
For example, they might say they’d check what they had for breakfast and lunch to see how many calories they have left for dinner. That screen might not exist. So you quickly sketch it out together, on a post-it or in a notebook, and talk through what they’d expect to see.
Through this collaborative process, you build out and understand a person's actual workflow. While "live-sketching" might seem intimidating, you'll mainly need it with the first few users. As you get a better sense of the workflow (and how they'd actually use your product), you will likely have the desired screens on hand.
This approach doesn’t just build a workflow that matches user needs. It creates a flexible product that users might actually want to buy.
RITE Testing as a Gateway
For teams not fully on board with user testing, RITE offers a crucial advantage: it’s a small win.
It’s quicker, less expensive, and more immediately actionable than traditional user testing. If it works well, you can use it to make the case for more comprehensive research later.
RITE testing uses the simplest possible prototype for the job, usually sticky notes or a basic Figma click-through. For more AI-integrated teams, this might also involve simply running a model of a prototype in a Python notebook.
This keeps costs down and enables rapid iteration, which makes the methodology work.
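For an AI product, that notebook prototype can be as thin as a Wizard-of-Oz wrapper: a function that fakes the model's responses so you can walk a participant through the workflow before any real model exists. The snippet below is an illustrative sketch along those lines (the canned responses and function names are invented for this example, not a prescribed setup):

```python
# Wizard-of-Oz sketch: fake the "AI" with canned responses so a
# participant can try the workflow in a notebook, no model required.

CANNED_RESPONSES = {
    "log breakfast": "Logged: oatmeal, ~350 kcal. 1,650 kcal remaining today.",
    "calories left": "You have ~1,650 kcal left for today.",
}

def fake_assistant(user_input: str) -> str:
    """Return a canned answer; fall back to a neutral prompt otherwise."""
    for trigger, response in CANNED_RESPONSES.items():
        if trigger in user_input.lower():
            return response
    return "Sorry, can you rephrase that?"

print(fake_assistant("How many calories left for dinner?"))
# -> You have ~1,650 kcal left for today.
```

Because the "model" is just a dictionary, you can add or reword responses between sessions in seconds, which is exactly the kind of on-the-fly iteration RITE calls for.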
The Future of UX Research Lies With Use Cases
User testing has become overly formalized, so it gets pushed late into the process, once everything looks polished enough to justify the cost. That needs to change.
RITE testing offers a way to bring user research back to where it belongs: early, often, and focused on generating solutions, not just reports.
In an era where AI threatens to replace traditional research with synthetic shortcuts, the answer isn’t to compete on speed or cost. It’s to double down on what makes user research valuable in the first place.
Real insights from real users. Problems solved in real-time. Products built collaboratively, not just validated retroactively.
That’s what RITE testing makes possible. And that might just be what gets user research to evolve.
Kai Wong is a Senior Product Designer and Data and Design newsletter author. He teaches a course, Data Informed Design: How to Show The Strategic Impact of Design Work, which helps designers communicate their value and get buy-in for ideas.


