What I learned about user research methods as a test participant
As designers, we think we know all about user research methods. What to do, how to do it, who to recruit, when... but we don't get many opportunities to sit on the other side of the table.
What is it like being a usability test participant? How difficult is it? Why would anyone bother to give up their time to help us make our products and services better?
Researchers routinely screen out other design professionals from their user groups, and we don’t often get the chance to walk in our users' shoes. So when I had a couple of opportunities to take part in user testing recently, I grabbed them. And here’s what I learned.
Recruit your testers with care
Both of the companies that recruited me were looking for testers among their existing user base. But they used completely different tactics.
Company A used an agency to recruit and screen their participants. The agency sent out the request for volunteers and screened us using a set of questions in the email. The agency also handled appointment booking.
The woman I spoke to from the agency was charming, but the experience could have been better:
The emails from the agency looked a bit scruffy, not what I was used to from this organisation.
My subscriber email address had been shared with an outside organisation. I don’t recall consenting to that.
The screener survey was fiddly to use, compared to something like Typeform. And I should have been eliminated after the first question, but I had to fill in the whole thing and send it back.
And even though I SHOULD have been ruled out, I wasn’t. The researcher wasn’t best pleased when a fellow UX professional turned up as a tester.
The appointment booking was all emails, not calendar invites. It’s only a small thing, but if there’s a chance your testers use an electronic calendar, send them a proper invite!
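If you're wondering how much work a "proper invite" is: not much. A calendar event is just a small text file in iCalendar (.ics) format, which most mail clients render as a real event when attached with MIME type text/calendar. Here's a minimal sketch in Python — all the names, addresses, and times are made up for illustration:

```python
from datetime import datetime, timedelta, timezone
import uuid

def make_invite(summary, start, duration_minutes, organiser_email, attendee_email):
    """Build a minimal iCalendar (RFC 5545) invite as a string."""
    fmt = "%Y%m%dT%H%M%SZ"  # UTC timestamps, per the spec
    end = start + timedelta(minutes=duration_minutes)
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//Example UX Research//EN",  # hypothetical product identifier
        "METHOD:REQUEST",                      # REQUEST = this is an invitation
        "BEGIN:VEVENT",
        f"UID:{uuid.uuid4()}@example.com",     # unique ID so updates replace, not duplicate
        f"DTSTAMP:{datetime.now(timezone.utc).strftime(fmt)}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"SUMMARY:{summary}",
        f"ORGANIZER:mailto:{organiser_email}",
        f"ATTENDEE;RSVP=TRUE:mailto:{attendee_email}",
        "END:VEVENT",
        "END:VCALENDAR",
    ]
    return "\r\n".join(lines)  # iCalendar requires CRLF line endings

# Example: a one-hour usability session (details entirely hypothetical)
ics = make_invite(
    "Usability test session",
    datetime(2024, 6, 3, 10, 0, tzinfo=timezone.utc),
    60,
    "researcher@example.com",
    "participant@example.com",
)
```

In practice you'd let your scheduling tool do this for you, but the point stands: a file this small is the difference between a tester turning up and a date mix-up.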
Company B used their usual email marketing to ask for volunteers. They handled screening and appointment booking via Ethn.io. I only ever spoke to the researcher.
This worked better, in several ways:
All the communications were branded - making them much more trustworthy from my perspective.
My email never left the company, and the researcher only saw it once I’d consented to take part.
I got a proper calendar invite, so there was no chance of date mix-ups, and the survey was much easier to complete.
The researcher knew precisely who they were getting as a tester.
Because the researcher had a direct relationship with me, they felt okay to ask me to change the appointment time when something came up.
Some user research methods are unexpectedly tough
The toughest part wasn't the lab tests or the card sorting or any of the more specialised user research tactics... It was just using the app, while also trying to give good feedback.
As researchers, we say ‘just use it normally’, but actually that’s nonsense. Talking to your phone, about your phone, while using your phone - it’s weird. I felt very self-conscious at first. And most of my videos ended with me swearing as I struggled to find the off button.
When I couldn’t record - on the train for instance - I was asked to make notes and submit them via Google Docs. Taking notes during my commute was challenging. I started to feel stressed that I wasn’t providing the right information to the research team.
If I ever run this kind of user testing myself, I will explain to users that they don't need to be perfectionists about it. Probably at the point where we all go 'Remember, we're testing the app, not you!'
User testing incentives aren’t just about recruitment
Company A offered £50 for testing an app in 'lab conditions'; the test lasted one hour. They also gave me £150 for a week-long diary study. Company B's incentive was a £250 Amazon voucher for a half-day research visit to my home.
In both cases, the cash wasn't my primary motivation for volunteering. I use both of these (paid) services on a daily basis, and I care about them enough that I'm willing to invest time in making them better. I do also have a professional interest in user research methods 'in the wild', so maybe cash would be more of a motivation for other testers.
Where the incentives really made a difference for me was during the diary study. Taking notes and recording how I used the app actually got in the way of doing what I wanted to do. But when I started to flag midweek, there was a feeling that I owed the researcher their test results. They were paying me all that money, so I felt I couldn't let them down!
I was intrigued that both companies offered cash, rather than discounts or branded swag. In both cases, I could have had a year’s free subscription for less than the cash incentive. Is it easier to sign off an envelope of cash than a discount?
Long tests give different answers to short ones
In the lab at Company A, I saw several new features that I was excited about. I couldn’t wait to get my hands on a working prototype for a whole week and try them out.
Halfway through the week-long test, I’d completely changed my mind. I still liked some of the new features but found they caused me a load of new issues. Other features had lost their appeal entirely. They just weren’t as useful in real life as I’d thought they’d be.
So - if Company A had done the usual thing, testing the app only in the lab, they’d have ended up with somewhat different feedback. They might have gone on to invest in features that users didn’t want or need.
If your service is something people use occasionally, then short tests are fine. But if it’s something people use day-in, day-out, then you need to choose user research methods that let people test it day-in, day-out.
New to user research?
Here are some places to get you started.
I have not received any compensation for writing this post. I have no material connection to the brands, products, or services that I have mentioned.