Should you fix a design error in the middle of user testing?
What to consider if you want to change your design prototype mid-test

It is always painful when you realize a design error during user testing. When building a large-scale prototype, we had a small oversight: two buttons were labeled the same but did different things. The previous design had been a modal window, so you'd click the "Add Hospital" button, which would bring up the modal to fill in some basic information. You'd then click the "Add Hospital" button in the modal window to add it to the record.
Our team couldn't agree on better wording, so we figured we'd see if users ran into issues with it. But when the design shifted from a modal to a second page, we hadn't considered the impact.
Something we thought a few users might comment on was instead causing users to fail the more significant task. Users had already clicked "Add Hospital" once, so they thought clicking "Add Hospital" on the second page would add yet another hospital. It was such a big deal that we had to abandon the task with several participants. So we decided to change the button's label after three participants.
It was a minor change, and we made it for the remaining test participants after only a few users ran into the issue. But it opened up a larger question: should we change our test prototype if there are flaws that every user encounters?
The answer is, it depends.
Arguments for and against changing prototypes
There are good arguments for and against changing prototypes. One of the primary arguments for changing a prototype is that we're not running a strict quantitative test. As a result, not all test conditions have to be the same.
If you're gathering qualitative insights, there's a chance you don't want to hear the same comments about something minor. Instead, you'd rather hear users' different viewpoints on the problem in detail. And sometimes the problem is so severe that you don't need to see many people fail the task. After all, it only takes one person burning their hand before you reconsider putting the handle of a deep fryer next to the heating element.
Besides, it may be more beneficial to get some feedback about design alternatives you're considering. Changing things mid-test gives you a glimpse of how those alternatives might perform, which helps you iterate faster. But there are also two good arguments for why you shouldn't. The first is if any UX metrics are involved. If a change alters your metrics substantially (like time on task shrinking by a minute), it will throw out any semblance of valid metrics.
This can also be an issue even if you're not super focused on metrics. For example, if the first couple of users expressed frustration with the product overall, and the following users loved it because of your changes, how do you talk about general user impressions? It's not a mixed bag; it's a bag that was largely negative until you decided to make some (often hasty) changes to the prototype.
This also becomes important when you consider how you're going to talk with stakeholders. For example, it can be hard to argue that a problem is severe when only 2 of 8 users ran into it (because you changed the design mid-test). As a side note, if you're doing rapid prototyping, you might never change anything mid-test because your next round of testing is next week. But if your next round of usability testing is a month or two away, you might want to consider it.
However, the main argument against changing things is that sometimes the cure can be worse than the disease. Note that you've often spent weeks thinking about and re-designing something before testing it. Making a sudden change after a day of testing can sometimes screw up your results more than the existing prototype would have. Why? Because you might succeed.
You might be forced to include a half-baked design idea in future iterations: you didn't capture the original problem in detail, and the change you made seems to work, so you can't do much else. So there are valid arguments both for and against changing things. But that's not the question we should be answering. Instead, the question is this: is this change worth spending my users' mental energy on?
Thinking about changes
When you spend some time thinking about it, a user's ignorance is precious. We're asking users who may have some knowledge or experience of a subject to go into our prototype blind and play around with it for the first time. We'll never quite recapture that first time with the website, whether it's the things they don't know or the mistakes they make. Once they're familiar with it, later testing will be faster and usually less error-prone. We use two different terms, learnability and memorability, to talk about these experiences.
https://www.nngroup.com/articles/measure-learnability/
So ask yourself: is the detail that they're getting tripped up on worth their mental energy? If the answer to that question is not immediately obvious, consider asking yourself these two questions:
Is the solution obvious?
Will it negatively impact your analysis/metrics?
Is the solution obvious?
The previous example had a pretty obvious solution: change the button's name to something more understandable. Participants also gave suggestions for what we could use instead, making this even easier to address. If the solution is this obvious, you might as well go ahead and change it. However, if the solution would need a team to help figure it out (or at least some deep thought), perhaps you want to hold off on changing things mid-test.
Will it negatively impact your analysis/metrics?
Will the change affect the larger picture or your effort to do an analysis? Thinking about your findings and metrics across all participants may help you decide whether to change anything. For example, if your early users felt negative about the product but later users loved it after you made changes, it can be hard to summarize the overall findings for your stakeholders. Or your task completion or time on task may improve, but only because the new design makes it easier to skip optional (but recommended) steps. In those cases, you may need to hold off on making any changes. But these questions may not give you the complete answer. So here are some general guidelines on what you should (and shouldn't) change mid-test.
What to change mid-test
I will describe issues in the broadest terms because these are general rules of thumb. Depending on your situation, you may or may not want to make some of these changes, and it's up to you to interpret them for your project. However, most issues tend to fall into three main categories: OK to change, design alternatives, and don't change.
OK to change: Prototype issues, technical issues, button/container word issues, order of pages.
The first category covers the minor fixes that users should probably never devote mental energy to. We should remember that we're testing the design of our product with a prototype, not the technical capabilities of the prototype itself. If something missing from the prototype is negatively impacting your testing, it's OK to fix it. For example, if people expect to navigate back to the home page by clicking the logo, and fixing that won't distract them from the task, it's OK to fix it.
Other common prototype issues include:
Pop-up windows not being centered,
Problems with the prototype when magnified (look at your prototype zoomed in to 125-150%),
Buttons not working.
Tech issues are also an easy fix. For example, if you're testing remotely and there's a bit of lag, fixing an issue like a double click on a button creating duplicate form fields is a no-brainer.
However, the other two categories require a bit more thought. Poorly worded buttons, headers, and containers are easy for your users to spot, and sometimes you'll get a lot of feedback or suggestions for what to change them to.
Several users suggested the same thing for the button's wording ("Save Hospital Information").
The solution is obvious, but before making the change, think about any meetings (or discussions) around button labels. You may be able to change it with no problem, but sometimes site-wide consistency takes precedence over the name of an individual button.
This also applies to changing the order of pages for user testing. You can often get very different user feedback by switching pages (or excluding them) to match users' mental models and avoid priming them with specific questions. For example, suppose people expect to enter their own information on the first page of an application, but you start with the emergency contact page. In that case, they might make many errors by accidentally entering their own info there. Or you might start with a page of instructions that primes the user to stop moving forward (because it seems like they need to find documents before they can continue) when they don't need to. Switching or excluding pages can be an easy fix, provided there haven't been any previous meetings about the page order.
The main thing to watch for is information dependencies: if any information carries over to the next page (or to different parts of the application), things will break when you switch the order around.
This brings us to the next category of changes, often considered the middle ground: design alternatives.
Design alternatives: Design elements, button placement, informational/error notices.
Sometimes you get a lot of feedback about design aspects where the fix isn't as crystal clear. Perhaps users complain that you used radio buttons on a list because they want the ability to select multiple items. Or they expected the order of buttons, such as "Checkout" and "Continue Shopping," to be swapped.
Or the solution to the wording issue isn't crystal clear, such as with informational notices ("Before you begin, you need to…").
In that case, it's still good to collect feedback about the current design: swapping things out at random may harm your testing more than help it. But what you may want to do is create a design alternative that you can ask your users about during the test.
The best way to think about this is as "informal" A/B testing: you go into your prototyping software, duplicate the page, and change one specific thing on that page.
After gathering the user's impressions on a particular page (or after you're done with all of your tasks), you can show them the alternative design and gather their impressions.
Remember, in A/B testing you only change one particular feature: it's not helpful to cram every design alternative you want to try onto a single page. But generating these alternative designs is not only pretty straightforward: it also lets you gather feedback on alternatives you might have tested in the next round of testing anyway. If the overall sentiment around radio buttons was negative, trying a drop-down list as an alternative allows you to iterate on an existing problem and gather feedback on a possible solution. But you shouldn't create alternatives for everything: sometimes it's best not to touch the design.
Don't change: Navigation issues, complex wording, contested features, large-scale issues.
Lastly, there are categories of issues you should probably never touch, even if changing them might yield good research data. That tends to be for one of three reasons:
You had a lot of meetings discussing different possible solutions to this before settling on something
This is a significant change that would probably result in multiple things changing (and making your analysis harder)
This touches on things outside your project, so you need to discuss it with others first.
Many of these issues touch on multiple reasons. For example, navigation issues are not only large-scale changes but also touch on things that may be outside of your project, such as site-wide consistency or another department's territory.
If people are getting tripped up by menus or can't find something, you might consider design alternatives for button labels in a menu, but you still need to gather data about the current design. Or you might have to either move on to the next task or nudge them to look at something and get their impressions.
Complex wording, such as paragraphs explaining instructions or definitions of terminology, has also likely been the focus of many meetings and may involve things outside the project (like legal or organizational wording).
So please don't change it: not only are you not necessarily helping users (since your re-design might not even be feasible), you're also stepping on a lot of toes by doing so.
Lastly, there may be other large-scale issues you might not want to touch, even if users consistently suggest them. For example, if they recommend a searchable table for ease of use, you might not want to consider that if you have six tables throughout the process.
The reason is: which tables do we apply it to? Do we make all tables searchable (even the ones with only a few results)? Do we only apply it to some? Again, that's something that needs discussion, rather than quickly slapping together a re-design. Just as a reminder, these three categories of issues exist only as guidelines: sometimes you might find it beneficial to depart from them. But all of them are based on a single idea: users' ignorance is a precious resource.
User ignorance is a precious resource.
I've been fortunate enough to have been taught the value of users' ignorance early on. Since my background was in healthcare UX, I often tested with the same pool of subject matter experts across many user tests. It was a luxury to sit down with a new medical professional and pick their brain, and sometimes they gave valuable insights we had never considered.
That's translated into the way I work now. With iterative design testing, for example, it's often more important to capture as much data as possible, even if that means changing things mid-test. Users will see your prototype for the first time only once, so devoting their mental energy to gathering those insights, problems, and alternatives is an excellent use of their time. So don't be afraid to change something mid-test, especially if it allows you to get to what you want to learn.
Kai Wong is a UX Specialist, Author, and Data Visualization advocate. His latest book, Data Persuasion, talks about learning Data Visualization from a Designer's perspective and how UX can benefit Data Visualization.