This is it. You’ve done the research, you’ve completed your customer journey map and created a digital prototype, and you’re finally done. Time to launch that digital experience and you’re off to the races.
Not quite. Because there’s still one vital step to take, and it’s one that too many organizations fail to factor into their plans — usability testing.
There are a couple of reasons why some healthcare organizations shy away from testing their designs with users. Some are concerned about additional costs after taking a project so far. Others are concerned about slowing down momentum. Without real-world user testing, though, you put yourself at risk of uncovering problems after your design is already out in the wild. And that’s a costly mistake.
Here’s what goes into validating your designs — new or current — through user testing.
Choose the Type of Usability Testing That Best Fits Your Needs
There are a lot of factors to consider when deciding how best to test the design of your healthcare digital experience. One question to ask is why you’re doing this testing in the first place. What is it you hope to learn? You need to go into this testing phase with an understanding of your end goals.
Another factor to consider is simply what type of testing will work best for your needs. By the time you reach usability testing, when you already have a completed design ready to test, you’re most likely looking for qualitative data. Not just what somebody is doing — analytics and simple surveys can provide that type of quantitative data — but why they’re doing it.
There are two different approaches we suggest you evaluate before diving into your own usability testing to answer those questions.
Traditional User Testing
With the traditional approach to testing, you have a pool of users all going through the same test. There’s a lot of debate about how many users should be tested, with some arguing that more testers means more accurate insights. There may be some truth to that, but it’s important to consider your goals and your budget. We’ll get into that more below.
Once initial testing is complete, the feedback from all of the participants is collected and analyzed. Any insights gained are then incorporated into the next iteration of the design.
This type of testing allows organizations to collect a lot of data. However, initial insights are sometimes very obvious. This slows the pace of implementing improvements and retesting to validate any changes made, because you’re waiting to get the obvious results out of the way.
R.I.T.E. Testing
R.I.T.E., or Rapid Iterative Testing and Evaluation, is a user-testing method in which key insights from each participant are incorporated into the design between test sessions, allowing you to retest the design with those changes as you go along. The major benefit of R.I.T.E. testing is that it saves time in the long run.
However, with this approach you also run the risk of incorporating feedback that could potentially be an edge case. If that change is made, and it goes against the grain of what other testers’ feedback would have been, you might wind up working against your own design.
The key to successful R.I.T.E. testing is using expert analysis, provided by your own team members or an outside consultant, to measure results as you go. When there is any serious doubt about a piece of feedback from one user, you can make the call to move ahead without incorporating that change. See how it goes with the next one, and proceed from there.
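The decision logic described above can be sketched in code. This is a minimal illustration, not a real testing tool: the `Finding` class, the severity scale, and the "confirm on second sighting" rule are all assumptions chosen to show how serious issues get fixed between sessions while possible edge cases are held back until another participant confirms them.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A usability issue surfaced in one test session (hypothetical model)."""
    description: str
    severity: int  # assumed scale: 1 (cosmetic) .. 4 (blocker)

def rite_cycle(session_findings, min_severity=3):
    """Sketch of a R.I.T.E. loop: fold clear, serious findings into the
    design between sessions; defer low-severity ones until a second
    participant confirms them."""
    applied = []   # fixes incorporated into the design so far
    deferred = []  # possible edge cases awaiting confirmation
    for findings in session_findings:  # one list of findings per participant
        for f in findings:
            if f.description in (d.description for d in deferred):
                # A second participant hit the same issue: treat it as real.
                deferred = [d for d in deferred if d.description != f.description]
                applied.append(f.description)
            elif f.severity >= min_severity:
                applied.append(f.description)  # clear, serious issue: fix now
            else:
                deferred.append(f)             # might be one tester's quirk: wait
    return applied, deferred
```

In practice the "expert analysis" step is a human judgment call between sessions, not a severity threshold; the threshold here simply stands in for that call.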
Setting Your Healthcare Digital Experience Usability Test Up for Success
Certain questions always come up with regard to how usability testing works. Let’s take a look at two of the most common questions, and how you can orient yourself and your project before usability testing begins.
Who Do I Test?
There is no hard and fast rule in terms of how many participants to test. For more qualitative testing, where users are testing the experience in a moderated environment and can answer questions about what they’re experiencing, and not just what they’re doing, 5 or 6 is fine. If you’re looking for a large amount of quantitative data, that number could be 20, 30, or much more.
It also depends on what exactly is being tested. If it’s a common experience or design where there is a lot of existing research readily available, you can probably do with fewer test subjects. If you work in a highly regulated industry demanding a certain amount of research data, or if the decision makers in your organization typically want a large pool of data before they’ll sign their names to a change, then a larger pool may be appropriate.
Regardless of the number of participants, it’s important that your test subjects align with your customer personas. So, if you have defined 6 different personas you need to serve, you’ll need 5 or 6 testers for each of them. It’s not about the number of users tested. It’s about testing the right users.
Go to your journey maps, figure out which personas are most relevant to this test, and go from there. If you haven’t created a journey map but need to complete a test, recruit the specific user type most likely to find themselves performing the tasks in the design.
Test for Usability Outside of Your Organization
“Well, we had our internal teams try everything out and they all thought it looked great!” That may be true. And those internal teams had nothing but the best of intentions. It’s still possible that they’re too close to what you do and how your design is supposed to function to really be objective about it, though.
The whole point of usability testing is to test without bias. That’s difficult to do when you have an internal team completing the testing. It’s not just a matter of removing the actual designers from the internal testing team, either. If you put a marketing team member in a usability testing group, for example, they’re going to be looking at what’s most relevant to them as a marketer — messaging and other such elements.
Real users aren’t going to read everything. They might not even read anything. There’s a chance they’ll just try to navigate through the test while quickly scanning the screen. That’s the type of unbiased usage that needs to be taken into consideration.
If you’re doing a moderated usability test, you’ll also want to decide whether you’re testing in person or remotely. How are you going to communicate with the participants? Are you just going to let them go through the test and review screen recordings? Or are you going to be asking questions along the way? If you do plan to ask questions, what are they? Scripting them out ahead of time will increase participant comfort levels and provide valuable background.
Remember — This Is an Ongoing Process
You’re never going to complete a single user test and land on the perfect design. When you test properly and plan accordingly, though, you’ll be able to make the necessary changes over time to optimize your healthcare digital experience for usability. Get in touch to learn more about this critical final step in the design thinking process.