User research is integral to the product design process, allowing us to create intuitive experiences, address customer needs, and meet user goals. Over the years, the practice of user research has developed to not only include in-person solutions but remote ones too.
This natural development is a manifestation of at least two factors. First, remote research has become more common because there are now more tools than ever (e.g., dscout, Lookback, Zoom, and ours, Maze) that make it easier to conduct.
Second, the increase in remote work means that for remote-only companies like GitLab, with users spread across the globe, remote UX research is the way to go.
And as the research practice grows, so does the variety of research methods. In “Remote Research”, authors Nate Bolt and Tony Tulathimutte write:
“In-person lab research used to be the only game in town, and as with most industry practices, its procedures were developed, refined, and standardized, and then became entrenched in the corporate R&D product development cycle.” But those procedures no longer serve modern design teams.
As the founder and CEO of Maze, a user testing platform, I’ve noticed this increase in remote testing firsthand. As part of our research (researching research), my team and I get to meet hundreds of designers who understand the potential of remote research and embrace it during their design process.
Let’s go over the six steps to remote research success.
6 tips for the best user testing results
Over the past year, my team and I have learned a lot about user testing and what makes it work best by interviewing our customers and speaking with them regularly.
In building our tool to serve designers and researchers, we sought to understand the current needs of UX practitioners when it comes to research. Among the biggest bottlenecks are a lack of time and the difficulty of proving research's value to stakeholders.
We think that remote user testing answers many of the drawbacks of traditional, in-lab research: it’s fast, tangible, and can give you the insights you need right when you need them.
Here are six lessons we've learned that will help you become a remote user testing rock star.
1. Plan, plan, and plan again
Planning for your remote testing is key
One thing we see time and again is poor planning. Just because remote testing is easier to run doesn't mean the planning stage should be overlooked!
Remote testing shares many of the requirements of in-person testing, and adds a few of its own.
Start by setting up your research goals: why are you testing this design with customers? What are you looking to find out? Erika Hall, co-founder of Mule Design and author of Just Enough Research, writes:
“Only after you have a goal will you know what you need to know. And you have to know your question before you can choose how to answer it.”
With quantitative usability testing, you can set expected metric goals for your tests. For example, you might require at least a 70% success rate on task completion and an average misclick rate of 2-5% on every screen in your test. Or you can measure the metrics of your current design and benchmark the proposed redesign against that baseline.
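If you export raw session results from your testing tool, checking them against goals like these is simple arithmetic. Here's a minimal sketch; the data structure and thresholds are illustrative, not any specific tool's export format:

```python
# Hypothetical sketch: checking quantitative usability goals against raw results.
# The 70% success and 5% misclick thresholds mirror the example above.

def meets_goals(sessions, min_success=0.70, max_misclick=0.05):
    """sessions: list of dicts like {"completed": bool, "clicks": int, "misclicks": int}"""
    completed = sum(1 for s in sessions if s["completed"])
    success_rate = completed / len(sessions)

    total_clicks = sum(s["clicks"] for s in sessions)
    misclick_rate = sum(s["misclicks"] for s in sessions) / total_clicks

    return success_rate >= min_success and misclick_rate <= max_misclick

sessions = [
    {"completed": True, "clicks": 12, "misclicks": 0},
    {"completed": True, "clicks": 15, "misclicks": 1},
    {"completed": False, "clicks": 20, "misclicks": 3},
    {"completed": True, "clicks": 10, "misclicks": 0},
]
# success rate = 3/4 = 75%, but misclick rate = 4/57 ≈ 7%, so the goals aren't met
```

The same comparison works for benchmarking: run the function once on sessions from the current design and once on the redesign, and compare the rates directly.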
Defining testing goals with your team should take place before you test so that you go into the process knowing what to look for.
Then you have to think about logistics: the who, when, and how of remote testing.
- Who: the people you’ll test with, either your customers or external participants
- When: the day and time of testing
- How: the type of testing you’ll do. Generally, you can choose between moderated and unmoderated testing.
Next, you have to work on the content for the tests. For unmoderated testing, this means writing task prompts that your users will go through. For moderated testing, it might mean the same tasks plus survey questions that you want your users to answer before or after.
From logistics to the actual content of the tests, preparing for your user tests is key.
2. Gather participants from the get-go
The goal of evaluative research is to bring usability and interaction issues in a design to the surface. By testing with your users at the design stage you make sure your product addresses real user needs before any solutions are developed.
Canopy, a cloud-based tax and accounting platform, tested four variations of their Help Center design with internal accounting specialists by sharing the test link in their Slack channel. The Canopy team found that one of the designs worked better, so they iterated on it, and after a repeat session achieved a 98.9% task success rate.
Having access to internal accounting specialists meant they were able to uncover improvement areas specific to their customers, and implement a design that addressed their needs.
From our own experience with building our product, we’ve found that including our users during the design process helps us to understand their requirements and incorporate their feedback early on.
We stay in touch with our customers through our live chat on our website, on social media and community page, and via email.
We’ve found that customers who request a feature or functionality are more eager to participate in testing sessions and answer a few questions for us. So, when we do have something in the works, those users are kind enough to let us pick their brains and we get a lot of feedback this way. We set up short calls or send them a link to a Maze test (for testing Maze!) and usually collect results by the next day.
This process keeps us from wasting time and money building features that don't work well and that we'd eventually have to rebuild.
Don't wait until you need to test to find participants. Instead, set up an online group, platform, or Slack channel where your users can contact you and vice versa before you need them.
When the time comes, you can just ask your customers for feedback—you’ll be surprised how many are willing to help.
If direct access to your customers isn’t an option, you might want to look for an external solution. Many user testing tools offer pools of participants to recruit from, and they usually provide demographic filters that help you select the audience you need.
If you’re on a low budget, you can find people on social media and online community forums like Reddit who can offer their feedback.
In the end, there's no excuse not to test. Even if it's only with friends or colleagues, you're better off testing with them than not at all.
3. Be clear and concise with your writing
One of the steps towards creating a user test is writing tasks and survey questions. These will guide your participants during the test and are key factors in getting the best results. Here are two examples of task prompts:
Sign up with your Google account
You want to sign up on the website. Complete the signup process using your Google account.
Update your credit card details
Your payment information needs to be updated. Update your account with new credit card details.
Phrase your tasks as clearly as possible to avoid vague interpretation. Every word you use matters—something that is especially true for remote unmoderated user testing. You have to make your ideas easily understood because you won’t be there to explain what you mean.
Have someone from outside your team review your task prompts and questions. When working closely with a product, it's easy to assume your users are more familiar with it than they really are, and that assumption sacrifices clarity.
Writing tasks that convey what users have to do and asking effective survey questions are critical steps to getting the insights you need from remote testing. What you give is what you get.
3a. Avoid leading questions
Avoid leading words in your tasks or questions
One thing we have to emphasize here: avoid using leading words in your tests!
Your goal is to be very clear about what users have to do, without prescribing exactly how to do it.
Take a look at this example:
Look up your friend by typing their name in the search bar and send them an invite by clicking on the +Add friend button in the top bar.
The problem here should be obvious; it's deliberately an extreme example. While you'll probably get more users to complete such a task because the instructions are so detailed, the results will be completely misleading.
If you're researching the best way for users to complete a goal, give them an open-ended task that leaves room for their own approach.
For instance, you could test various ways to get your users to purchase a subscription, e.g., from your landing page, in Account settings, or via your blog.
You can gain such insights through qualitative testing that helps you understand how your users navigate your product.
Formulate your question along these lines: How would you purchase a subscription for product X?
And see how users go about completing this task. The purpose is to observe them in action and understand what they do. Language plays an integral role in getting these insights.
Remember: no one will be there holding your users’ hands when they use your product in real life, so don’t do so when testing.
4. Test one flow per task
A key lesson we've learned from our customers is that participants often forget what they have to do if a given task is too complex. After completing the first few steps, they ask for the instructions again, something that's true of both unmoderated and moderated testing.
“To help your users focus on one task at a time, we recommend breaking up the bigger tasks into two or three different ones.”
You may have noticed one theme running through all these tips: keeping things simple. To help your users focus on one task at a time, we recommend breaking up the bigger tasks into two or three different ones. Not only does this remove unnecessary complexity, but it also yields more results as users are likely to stay on and finish your test.
One way to avoid instruction overload is to test one user flow per task, so your testers know exactly what to focus on. For example, instead of:

• Create a new project in your workspace and invite your team to join by email

split it into:

• Create a new project in your workspace
• Invite your team to join by sending them email invites
By assigning each task one specific goal, you're able to learn whether users can successfully achieve that goal with the existing design.
5. Keep tests short
Tests with 8+ tasks have a higher drop-off rate, based on internal Maze data
With unmoderated testing, it’s tempting to try and test every little corner of an interface within one session, but your best bet is keeping tests short.
We've noticed a strong correlation between a test's length and its completion rate: the longer the test, the more likely users are to drop off. And in remote unmoderated testing, the last thing you want is users abandoning the test for the “wrong” reasons, like its sheer length.
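You can check this pattern in your own results by grouping completion rates by task count. A minimal sketch, assuming a simple list of (task count, finished) records rather than any particular tool's export:

```python
from collections import defaultdict

# Hypothetical sketch: grouping test results by task count to see whether
# longer tests correlate with lower completion rates. Each record is a
# (number_of_tasks, finished) pair; the format is illustrative.

def completion_by_length(records):
    totals = defaultdict(lambda: [0, 0])  # task_count -> [finished, started]
    for n_tasks, finished in records:
        totals[n_tasks][1] += 1
        if finished:
            totals[n_tasks][0] += 1
    return {n: done / started for n, (done, started) in sorted(totals.items())}

records = [(5, True), (5, True), (5, False), (10, True), (10, False), (10, False)]
rates = completion_by_length(records)
# here the 5-task tests complete at 2/3 and the 10-task tests at only 1/3
```

If the rates fall off sharply past a certain length, that's a strong signal to split your test into shorter sessions.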
Research shows that attention spans are shrinking, and even though people have agreed to be part of your testing session, they're going to feel discouraged if it goes on for too long.
Be mindful of your users’ time and keep your tests short. If you do have many elements that you want to test, divide them by categories and split them up into different usability sessions.
The scope of the design should give you a hint on how to organize your user testing sessions. For instance, if you’re testing an interface for an existing web page, you might test your new design in one go. On the other hand, if you’re designing a new product or feature, with a complex user journey, then it’s best to separate your sessions.
One way you can divide sessions is by creating user tests for each step of the journey the user goes through.
Here is a simplified user journey with touchpoints for an e-commerce website:
- Users search for the desired product on the site by category
- Users view the desired product page and details
- Users order and pay for the product
- Users receive the order confirmation
Every touchpoint has dedicated designs that can be grouped into:
- Website Category and Search (1)
- Product Page (2)
- Order and Checkout (3)
- Confirmation (4)
These categories can form the basis of your user testing sessions as you go through the design process for a new product or feature.
Or, if your team works in sprints or product cycles, you can plan a testing session at the end of every sprint, as the now-famous design sprint methodology prescribes.
Grouping usability sessions by product, feature, or sprint will allow you to keep your tests organized and users focused.
6. Pilot test your test
Be sure your test works as intended before sharing it with participants. Issues can arise with the prototype or your tool of choice, and you'll want to fix them right away.
Run a pilot test with your teammates so they can point out any ambiguities or areas for improvement.
“Remote testing is all about using technology to get the insights you need—and occasionally technology fails us.”
We also recommend that our customers send out their tests in batches and monitor performance with at least the first few users. If any unforeseen issues do come up (e.g., unclear phrasing, technical problems), you can sort them out before testing further.
Remote testing is all about using technology to get the insights you need—and occasionally technology fails us. Things like unstable internet connections or low-battery devices can significantly derail you from getting your results. Take these factors into consideration when planning a testing session, and test your device and connection beforehand.
Design and testing are both iterative processes
At the beginning of this article, we mentioned that user research is an integral part of the product design process. Remote research offers an effective way for all design teams to discover and collect user insights, no matter where users are located.
As the New Design Frontier Report reveals, companies big and small have an opportunity to advance design to a strategic position in their organization. This advancement will only take place when we approach design operations such as user testing iteratively, because both design and testing take time to get right.
Whether that refers to scaling processes, tooling, or increasing employee know-how, a compound effect across all these design practices will set your company up for success.