A typical usability test may return over 100 usability issues. How can you prioritize these issues so that the development team knows which are the most serious ones?
Below I describe three different ways to ensure that you find and fix those issues with the biggest impact on the experience of your users.
To start with…
Before you can even start to think about prioritizing usability issues, there is something you need: A list of usability issues. (Surprise, surprise!)
I use a Google Docs spreadsheet to note down things I see in usability videos – this later allows me to share the insights with my team.
While talking to our Userbrain clients, we learned about many different things they capture, but these are the most common things they note down:
- ParticipantID (e.g. Tester 1)
- Timecode of the issue (if you have a video recording of your session, e.g. 24:12)
- A brief description of the issue
- The area of the site where the issue occurs (e.g. product detail page)
- Possible solutions
- Severity rating
I’d recommend writing down at least a short description of each issue and the timecode in the video recording so that you can find the issue again later.
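If you prefer working with code over spreadsheets, one row of such an issue list could be modeled like this. This is just a sketch – the field names mirror the columns listed above, and the example values are made up:

```python
from dataclasses import dataclass, field

@dataclass
class UsabilityIssue:
    participant_id: str                 # e.g. "Tester 1"
    timecode: str                       # position in the session recording, e.g. "24:12"
    description: str                    # a brief description of the issue
    area: str = ""                      # e.g. "product detail page"
    possible_solutions: list = field(default_factory=list)
    severity: str = ""                  # low / medium / serious / critical

# Hypothetical example row:
issue = UsabilityIssue(
    participant_id="Tester 1",
    timecode="24:12",
    description="Could not find the size chart",
    area="product detail page",
)
print(issue.description)
```

Keeping at least `timecode` and `description` filled in is what lets you jump back to the exact moment in the video later.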
The 3 questions
As David Travis points out, you can classify the severity of any usability problem as low, medium, serious, or critical by asking just 3 questions with YES/NO answers:
1. Does the problem occur on a red route? YES/NO
2. Is the problem difficult for users to overcome? YES/NO
3. Is the problem persistent? YES/NO
Not sure where the “Red Routes” on your site are? Check out this great article by David to find the answer.
Critical usability problems therefore occur on a red route, are very difficult to overcome, and are very persistent for the user. You should start by fixing these problems immediately.
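One simple way to turn the three YES/NO answers into a severity label is to count the YES answers: zero means low, three means critical. This is a sketch of that idea, not a verbatim reproduction of David Travis’ scoring:

```python
def classify_severity(on_red_route: bool, hard_to_overcome: bool, persistent: bool) -> str:
    """Map the three YES/NO answers to a severity label by counting the YESes.

    0 YES -> low, 1 -> medium, 2 -> serious, 3 -> critical.
    """
    score = sum([on_red_route, hard_to_overcome, persistent])
    return ["low", "medium", "serious", "critical"][score]

# A problem on a red route that is hard to overcome and persistent:
print(classify_severity(True, True, True))   # critical
print(classify_severity(False, False, False))  # low
```

The fix-first rule from above falls out naturally: sort your issue list by this label and start at “critical”.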
The task completion spreadsheet
I often use this method when we present the results of usability studies to our clients.
It requires that you capture task completion in your tests, like this:
On the horizontal axis, you lay out all the different tasks. In the example below, we’ve tested 10 tasks. The vertical axis shows the test participants:
You then show the outcome for every task and every tester by using one of four colors:
- Green means that the tester had no problems in performing the task assigned.
- Yellow means small issues which could be resolved by the tester without any external help.
- Red means that the tester couldn’t accomplish the task.
- Gray means that the tester couldn’t start the task (e.g. the Internet connection wasn’t working).
This is a great way to spot tasks that were problematic for most participants.
You should then try to discover why these tasks were problematic and focus on resolving these issues first.
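In code, spotting those problematic tasks boils down to counting red cells per column. The grid below is hypothetical data, and the 50% failure threshold is an assumption you’d tune to your own study:

```python
# Rows are participants, columns are tasks.
# "green" = no problems, "yellow" = minor issues resolved without help,
# "red" = task failed, "gray" = couldn't start (excluded from the count).
results = {
    "Tester 1": ["green", "red", "yellow"],
    "Tester 2": ["green", "red", "green"],
    "Tester 3": ["gray",  "red", "yellow"],
}

def problematic_tasks(results, threshold=0.5):
    """Return the indices of tasks that at least `threshold` of participants failed."""
    n_tasks = len(next(iter(results.values())))
    flagged = []
    for t in range(n_tasks):
        cells = [row[t] for row in results.values() if row[t] != "gray"]
        fails = sum(c == "red" for c in cells)
        if cells and fails / len(cells) >= threshold:
            flagged.append(t)
    return flagged

print(problematic_tasks(results))  # [1] – every tester failed the second task
```

Tasks flagged this way are the ones to investigate and fix first.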
The rainbow spreadsheet
If you haven’t read It’s Our Research by Tomer Sharon, I really encourage you to do so.
In his book, Tomer presents another great way of capturing and prioritizing usability issues:
You can download his spreadsheet here.
This visualization is quite similar to the task completion spreadsheet – you just display issues instead of tasks. Repeated observations are highlighted in different colors – the more colorful an issue, the more important to fix it.
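The “more colorful = more important” idea amounts to counting how many different participants hit the same issue. Here is a minimal sketch with made-up observations – it ranks issues by how many testers reported them, which is the essence of the rainbow view without the actual colors:

```python
from collections import Counter

# Hypothetical (participant, issue) observations from a study:
observations = [
    ("Tester 1", "search returns no results"),
    ("Tester 2", "search returns no results"),
    ("Tester 3", "search returns no results"),
    ("Tester 1", "coupon field overlooked"),
]

# Count how many observations each issue collected; the "most colorful"
# issue (seen by the most participants) comes first.
hits = Counter(issue for _, issue in observations)
ranked = [issue for issue, _ in hits.most_common()]
print(ranked[0])  # search returns no results
```

In the actual rainbow spreadsheet, each participant gets a color instead of a tally mark, but the prioritization logic is the same.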
Finding the low-hanging fruit in your list of usability issues isn’t that hard. While some issues are obvious to spot during testing, others may only become visible after you have taken a closer look at your data.
The above methods help you demonstrate when a problem actually is a problem, which is especially useful if you are responsible for communicating these insights to the rest of your team.
How do you decide which usability issues to tackle first? I’ll be happy to hear about your workflows.