The Irrational Fear of “Professional Testers”

Published July 5, 2019 by Stefan Rössler in User Testing
⚡ Updated on October 2, 2023

Many product teams are concerned about testing with so-called “professional testers”: people who have already participated in several user tests and are therefore commonly assumed to no longer provide any useful feedback. In this article I will argue why I believe this assumption is false.

There are several arguments people keep using to claim that user testing with professional testers is a bad thing. I thought it would be most effective to go through these arguments one by one and point out the flaws in their underlying reasoning, to show that there is really nothing to worry about when testing with professional testers. Finally, I will tell you why I think it’s actually better to test with professional testers than with newbies.

False argument #1: Professional testers are bad for user testing

Occasionally, people ask us what measures we take to prevent professional testers from signing up to Userbrain and participating in our customers’ user tests. The premise is obviously that professional testers are bad for user testing and should be avoided. And while we understand why some people might think this is true, more than 10 years of experience have shown us that testing with professional testers usually provides better results than testing with newbies (more on this later).

Besides, just saying that something is bad and should therefore be avoided isn’t really an argument. It’s a dogmatic assertion, and it isn’t useful because it doesn’t explain why something should or shouldn’t be done. So, whenever somebody asks us what we do to avoid professional testers (which implies they assume professional testers are bad for user testing), we ask the most powerful question there is for getting to the bottom of something: Why? What is your reason for thinking that professional testers are bad for user testing? And this is what they might say…

False argument #2: Professional testers are expert users

Fair enough: whenever you use a remote user testing service like Userbrain, the testers on these platforms are obviously experienced enough to sign up for an online service and install testing software on their computer, tablet, or mobile phone. That means they are definitely not part of the 26% of the adult population which, according to this OECD skills study from 2016, is unable to use a computer.

For this study, more than 200,000 people from 33 countries were surveyed over a period of 4 years. At the end of the study, the participants were divided into 4 different skill levels based on the number of computer-based tasks they were able to complete. The results show that in addition to the 26% of people who are unable to use a computer (and therefore didn’t participate in this part of the study), around 14% are only capable of performing very simple tasks that are “based on well-defined problems involving the use of only one function within a generic interface to meet one explicit criterion”, e.g. using a search function or opening a file. In other words, no remote user testing service in the world will provide you with testers from this 40% of people who either can’t use a computer at all or whose skills are too limited to sign up for such a service. But wait: do you think many of your customers or users will actually be part of this population?

Whatever your answer to this rather suggestive question might be, you’re probably interested in the other 3 skill levels mentioned above. So let’s have a quick look at these levels and the remaining 60% of people as described by the OECD researchers:

Level 1 (29% of adults): At this level, tasks typically require the use of widely available and familiar technology applications, such as e-mail software or a web browser. There is little or no navigation required to access information or commands required to solve the problem. The problem may be solved regardless of the respondent’s awareness and use of specific tools and functions (e.g. a sort function).

Level 2 (26% of adults): At this level, tasks typically require the use of both generic and more specific technology applications. For instance, the respondent may have to make use of a novel online form. Some navigation across pages and applications is required to solve the problem. The use of tools (e.g. a sort function) can facilitate the resolution of the problem.

Level 3 (5% of adults): At this level, tasks typically require the use of both generic and more specific technology applications. Some navigation across pages and applications is required to solve the problem. The use of tools (e.g. a sort function) is required to make progress towards the solution.

User advocate Jakob Nielsen has written an article about the distribution of users’ computer skills based on this OECD study and uses the following task examples to better illustrate the differences between these 3 skill levels:

Level 1: “Find all emails from John Smith.”
Level 2: “You want to find a sustainability-related document that was sent to you by John Smith in October last year.”
Level 3: “You want to know what percentage of the emails sent by John Smith last month were about sustainability.”

The main reason why I’m including all of this here is to show you that it’s simply not true to claim that professional testers are expert users. To seriously argue that point, I first needed a proper definition of what an expert user actually is, and level 3 as defined by the OECD now provides one.

After watching well over a thousand user testing sessions myself, I can tell you that almost none of our testers are skilled enough to fit level 3 (most of them are just trying to make a little extra money on the side; they are certainly not IT experts). However, I have to admit that people signing up for remote user testing services are most probably not distributed across levels 1 and 2 the way the general population is (according to the numbers above), and I believe that most of our testers will actually fit level 2. But again, if you think about it, won’t most of your users or customers also fit level 2? And even if your target audience is level 1 or maybe even below level 1, there’s one thing you can be absolutely sure about: if testers on a remote user testing platform are experiencing issues with your product, you have some real problems and you’d better roll up your sleeves and start fixing them.

Furthermore, never forget that nobody can be an expert in an interface they haven’t used yet. Thus, even if some people are highly experienced using computers, as long as they haven’t used your product yet it’s very likely that you’re going to learn a lot from watching them use it for the very first time.

False argument #3: Professional testers will tell you what you want to hear

This is an argument you hear a lot from people who are skeptical about paid user testing participants in general. The premise of their argument is that testers are paid and therefore will provide overly positive feedback and just tell you how great your product is. And I agree, you don’t want testers telling you how great your solution is because that’s not really useful, but I also want to remind you that user testing is not about people’s opinions anyway.

User testing is about watching people trying to achieve a specific goal with your product while thinking aloud. Thinking aloud is critical because it’s the only way of knowing what is going on in somebody’s mind and understanding WHY they are experiencing an issue with your product. And understanding why an issue occurs obviously puts you in a good position to actually solve it and prevent it from occurring again in the future. That’s why user testing is so powerful: it’s the only method I know that allows you not only to observe what people do but also to understand why they are doing it. You listen to people during user tests because you want to understand their behaviour, not because you’re interested in their opinions.

But okay, let’s assume you want to ask people about their opinions nevertheless (which can be useful if done correctly). If you really believe that testers will tell you what you want to hear, wouldn’t it be a good idea to just tell them to be critical? That alone should solve the whole issue and prevent them from being overly positive. Shouldn’t it?

False argument #4: Professional testers will lie to get into a study

Oh yes, they will! As senior UX researcher Jim Ross points out in his article about recruiting better research participants:

“People who frequently supplement their income by participating in user research will say and do whatever it takes to get into a study. It’s often all too easy to figure out the correct responses and avoid being eliminated.”

And this comment from a tester on reddit talking about how to increase the number of tests he or she receives is a stark confirmation of this argument:

“Basically, I just lie now. I lie my ass off, pretending to be anything or anyone. I’ve taken tests where I was a devout Catholic, a recovering alcoholic, a proud owner of a concealed weapons permit (not all in one though).”

So there’s no doubt about it: professional testers will do whatever it takes to get into a study; they will lie and pretend to be anything or anyone you want them to be. But that doesn’t mean you can’t rely on test participant databases; it just means you have to be smart about how to use them. For example, with Userbrain you can target your test participants based on the following demographics:

  • Gender (female, male)
  • Region (United States of America, Canada, United Kingdom, Germany)
  • Age (18–34, 35–50, 51 or older)
  • Language (English or German)
  • Device type (computer, tablet, smartphone)

If you look at this list, you will notice that it’s really hard for testers to lie about any of these criteria. If, for example, they claim to be from the US and we notice them accessing any of our tools from a different location, they will not qualify to take the test (in fact, they will be banned from our database if we suspect they are trying to deceive us, which happens quite often). And concerning the rest of the criteria, why would anyone claim to be of the opposite sex or older or younger than they really are? And even if they did, wouldn’t you be able to hear the difference between someone in their 20s and someone in their 50s?
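To make that location check a little more concrete, here is a minimal sketch of how such a plausibility check could work. This is not Userbrain’s actual implementation; the function name, the region-to-country mapping, and the idea of feeding it country codes from an IP geolocation lookup are all illustrative assumptions.

```python
# Hypothetical sketch: compare the region a tester claimed at signup with the
# countries observed when they access the testing tools. All names, mappings,
# and thresholds here are assumptions for illustration only.

# Countries accepted for each selectable region (illustrative, not an official list).
REGION_COUNTRIES = {
    "United States of America": {"US"},
    "Canada": {"CA"},
    "United Kingdom": {"GB"},
    "Germany": {"DE"},
}

def region_claim_is_plausible(claimed_region: str, observed_countries: list[str]) -> bool:
    """Return True only if every observed access country fits the claimed region.

    observed_countries would come from an IP geolocation lookup on each access
    event; that lookup is outside the scope of this sketch.
    """
    allowed = REGION_COUNTRIES.get(claimed_region, set())
    return bool(observed_countries) and all(c in allowed for c in observed_countries)

# Example: a tester claims to be in the US but one access came from elsewhere.
print(region_claim_is_plausible("United States of America", ["US", "BR"]))  # False
print(region_claim_is_plausible("Germany", ["DE", "DE"]))                    # True
```

In practice a mismatch like this would probably trigger a review rather than an automatic verdict, since VPNs and travel can produce false positives.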

The whole point I’m trying to make is that you should never rely on test participant databases when it comes to criteria you can’t verify. Yes, people will lie to get into a study, but that’s not an argument against doing these studies (in this case, remote user tests); it’s an argument against targeting based on criteria you can’t easily verify, such as people’s interests, hobbies, job titles, income, or computer skills. Why? Because (a) they will lie to get more testing opportunities, and (b) it’s very easy for them to do so: they will simply tell you they are whatever you want them to be, and they will most likely claim to have a high income, assuming that this is in higher demand than a low one.

But before you get frustrated about all of this, just remember what usability expert Steve Krug has to say about usability/user testing in his best-selling book Don’t Make Me Think:

“The best-kept secret of usability testing is the extent to which it doesn’t much matter who you test. For most sites, all you really need are people who have used the Web enough to know the basics.”

So, you really don’t have to worry about testing with people who don’t fit your exact target audience. The only real issue with people lying to get into a study is that you may end up paying more for testers who claim to fit your exact target audience when they really don’t.

False argument #5: Professional testers will not provide useful feedback

This is basically a smarter-sounding version of argument #1, which means it’s still a dogmatic assertion and not a real argument. I’m repeating it here simply because even experts keep warning about “professional testers” without explaining why these people are supposed to be so bad for user testing.

Jim Ross, for example, writes in his aforementioned article that, “while many of these people are great participants who are very good at providing insightful feedback, others volunteer a little too often”, suggesting that you should “eliminate those who have recently participated in a study”. But what I’m really missing here is the reason WHY you should eliminate these so-called “professional testers”. The only attempt to answer this question I could find was by UX researcher Marieke McCloskey in an article about remote user testing tools, where she writes that “many do these studies so frequently that they’ve learned to focus on certain aspects of the design and look for things to critique.” Well, that makes sense to me. But to be honest, isn’t that a benefit rather than a problem?

It’s certainly possible that I’m biased: I’m the co-founder of a remote user testing service myself and therefore rely heavily on professional testers. Maybe my perception is so skewed that I simply can’t see what everyone else seems to see, and to see so clearly that they don’t even need to argue their points anymore but just present them as self-evident truths. That said, I’m not claiming to have thought all of this through to the extent that I know every possible argument, so I would really appreciate anyone taking the time to tell me what I might be missing or misinterpreting here. For the time being, I will conclude this article with a short explanation of why I actually prefer testing with professional testers rather than with newbies.

Why professional testers are better than newbies

Before telling you why I prefer professional testers, you might want to know how to determine whether someone actually is a professional tester. The way I do it is by watching our Userbrain testers’ screen recordings and paying attention to the browser extensions and applications they have installed. Whenever I see that someone has other user testing tools installed alongside Userbrain, I assume this person is experienced with user testing. They will most probably fit the category of people who “volunteer a little too often”, because they have signed up to multiple user testing platforms, presumably to receive testing opportunities on a regular basis.

The most important reason why I prefer these “professional testers” is their experience with thinking aloud. Most of them are really, really good at saying whatever comes to their mind as they complete the tests, which, as mentioned earlier, is absolutely fundamental for understanding their behaviour and, in turn, for understanding and fixing usability issues. And if someone has other user testing tools installed (which implies they are a “professional tester”), you can almost bet on their tests being insightful – at least that is my experience.

Another really positive aspect is that “professional testers” usually use some sort of microphone to improve their audio quality. They do this because they hope to get higher customer satisfaction ratings and, in turn, receive more testing opportunities.

A third reason why experienced participants are great to test with is that they know how important it is to follow the testing instructions. While newbies often struggle to perform everything that’s asked of them, more experienced testers rarely overlook any of the tasks you ask them to do.

As you can see, these reasons apply almost exclusively to remote user testing, and especially to unmoderated tests. So, if you have the resources to moderate your own user tests (which is something you should do whenever you can because it’s just so powerful!), you can use your moderation skills to get really great results, even with people who haven’t performed a single user test before. But if you’re doing unmoderated remote user testing, there is hardly anything that does more to ensure useful test results than testing with so-called “professional testers”. And that’s why I believe they not only don’t harm your user testing, but can actually improve it quite substantially.

Okay, I really hope you have enjoyed this article and I would love to hear what you think about it. Thank you for reading!
