As I was reading John Maeda’s 2018 Design In Tech report (a must read), I was blown away by a quote about halfway through: “Surprisingly very few companies conduct qualitative user research. Early stage startups surveyed by Albert Lee/Dayna Grayson: 12%, Mid-stage: 32%, Late-stage: 46%.”
Twelve percent! And even at the late stage, less than half!
It’s an unfortunate fact, since qualitative research—just plain talking to customers and potential customers in a structured way—may be the most important thing an early- or mid-stage company can do to influence their success.
Sadly, I’m sure most of us can relate to the sentiment behind those numbers. In the formative stages of a new company, it’s easy to get carried away in the grunt work: getting software shipped, building a team, honing your growth and marketing strategy, and so much more—all while under the pressure of tight deadlines driven by a short runway.
Those early-stage habits turn into mid-stage values, and before you can say “qualitative,” you’re balancing the needs of a growing product, customer base, and team. Research often takes a back seat—even though we know how deeply valuable it can be.
It wasn’t until I joined Appcues a year and a half ago that I saw a different way of attacking this challenging problem. It’s one of the main reasons I joined the company (with immense thanks to our founder Jonathan Kim for setting it up from the start), and having lived it for a while now, I want to share it with the world because of the positive impact I’ve seen it have on our product and team.
This one weird trick you can do to stay close to customers
I can honestly say that Appcues has a customer-driven culture. It’s not dark magic or Kool-Aid, it’s just something we do. The core habit that makes it work is User Testing Day, a monthly event that brings everyone on the product side of the company together around research and testing.
It happens on the last Thursday of every single month, and when I say everyone, I mean everyone—including all engineers. Because it’s a whole-team event, it ends up being pretty fun.
Here’s how the day plays out:
- We often start with donuts (though to be honest, I’ve been thinking of switching to healthier foods, since we need a lot of energy)
- We sync in the morning and review scripts and teams, and then split into two groups to take screen-share calls in parallel for the whole day
- We have a team lunch in the middle, and we can’t help but head straight to the whiteboard at lunch and between every test—the discussion starts flowing
- We wrap it up at the end of the day and make sure we’ve recorded the most important conclusions, feelings, and quotes we don’t want to forget
The key to getting this done on a schedule reliably is automation.
In our product, we have a small link that says “Want to test new features?” It links to a Calendly calendar scheduling form that automatically schedules the person for a slot on the next User Testing Day, offering up a $25 gift card for participation. Each time we have someone sign up, Zapier sends us a message in Slack, and there’s a flurry of emoji celebration (we really do love our customers).
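We wire this up with off-the-shelf tools rather than custom code, but a minimal sketch of what that Zapier step does under the hood might look like the following. Everything here is hypothetical: the `SLACK_WEBHOOK_URL` is a placeholder, and the Calendly event fields (`invitee_name`, `start_time`) are illustrative, not a real Calendly payload.

```python
import json
import urllib.request

# Placeholder URL -- a real Slack incoming-webhook URL comes from your
# Slack workspace's app configuration.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"


def build_slack_message(event: dict) -> dict:
    """Turn a Calendly-style signup event into a Slack webhook payload.

    The event fields here are illustrative assumptions, not the actual
    Calendly webhook schema.
    """
    name = event.get("invitee_name", "Someone")
    slot = event.get("start_time", "an upcoming slot")
    return {
        "text": f":tada: {name} just booked a User Testing Day slot "
                f"for {slot}!"
    }


def notify_slack(event: dict) -> None:
    """POST the celebration message to Slack's incoming-webhook endpoint."""
    payload = json.dumps(build_slack_message(event)).encode("utf-8")
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # triggers the emoji flurry in-channel
```

In practice a no-code tool like Zapier handles all of this, which is exactly the point: once it's set up, nobody has to remember to do anything.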
The day fills up every single month. It still surprises me, but our customers love it too.
Since we’re a B2B product, we get a variety of users from all over the world and from different industries. Brand new customers often sign up right away. It’s a positive from their perspective as well. They get an immediate insider view of our product process and the chance to influence the direction, and we get a brand new customer with fresh eyes.
It isn’t perfect, of course. Customers self-select into the process, and we only source current customers who are logging in. But the consistency and value of the sustained practice make these biases worth dealing with.
We still do various research outside this schedule, but User Testing Day holds us accountable to real users no matter what. It really works.
Rigid process, flexible learning
For us, User Testing Day is not optional. It’s on the calendar no matter what, and it forces us to think about what we’re going to build and what users are going to see on that day. More importantly, it makes us think about what we need to learn most.
Because of that, even though the process is fixed, the content is not. We do a range of research depending on what part of the product design and development process we’re in.
In the early stages, we might do an interview. If we’re well into the design phase, we’ll learn from a clickable InVision prototype. If we’ve recently built something, we’ll run usability tests on the real thing.
This flexibility helps us deal with the messy reality of building products. We naturally move quickly and have multiple projects in various stages at once. If we only tested shipped code or only did user interviews, we’d still get value out of it—but it might not be the most critical feedback we need at that moment.
It’s simple. Following our needs helps us meet our needs.
Fortunately, we’ve never had a problem figuring out what those needs are. On the first product planning get-together of the month, we plan based on what we’ve learned and what we still need to learn.
It’s always clear what the goals will be at the end. Having customers on the calendar forces us to clarify what we’re doing now, what we’re tackling next, and what the heck we don’t know yet but probably should. That’s a great discussion to have repeatedly.
Everyone owns the experience
One of the best things I’ve learned: non-designers can be skilled at user research—and it can have a positive impact on them.
I’m not going to say “everyone is a designer” or even “everyone is a researcher,” but more people than you think can play a role—and you might be surprised at just how well.
Once, when a user asked, “What’s this button for?” an engineer responded with a perfect non-leading follow-up: “I’m not sure. What would you think it’s for?”
I nearly fell out of my chair.
After a few months listening to customers and being close to what they experience, smart people can pick up some solid research skills.
It’s less important that the tests are done by the book (though if you want a good book, check out Erika Hall’s Just Enough Research) than the impact they have on the team and the product as a whole. I’ve noticed the background level of empathy, care, and motivation that everyone has is way up compared to my past experiences. It’s a striking difference.
After each test is done, passionate discussion naturally sparks as we try to figure out the truth behind some piece of feedback we just heard, solve UX problems on the fly, and adjust our script or prototype so we can better test our assumptions.
These discussions are inclusive and energetic—everyone feels connected and invested. The discussion continues into the following week as we plan for iterations based on what we learned. Everyone already knows why we’re making each change, and often some are already done by the time planning comes around.
For us, User Testing Day has helped make many of the ideals of a human-centered design process real—spreading knowledge of problems around the team, helping people understand the “why,” distributing the ability to make better choices autonomously, and generally making everyone feel more connected to their work.
I know we have a better product, a happier and more motivated team, and a clearer trajectory because of it. It’s not everything, but it helps.
I know this is a lot, and it’s not for every team. But if there’s one simple idea to take home with you, it’s that good habits can drive good outcomes.
The only difference between our process and ones I’ve experienced in the past is the spark that kicks it off: Talk to a bunch of users once a month at a minimum, and automate the process to make it impossible to ignore.
If you have users lined up at your door to talk to you regularly, good things start happening.