One of the things I love most about the user research process is that, with a little bit of planning, it scales to fit whatever time and resource budget you have. Got lots of money and long development cycles? By all means hire a recruiting firm and fly out to visit your users to get the ethnographic insights you need. You will certainly get amazing results from that. But for most of us, that’s not the situation we’re in. Most of us work in smaller teams who want to iterate quickly and release improvements often, while also increasing quality. Luckily there are ways to do valuable user research in those scenarios too, and that’s what I’d like to discuss today.
First, a little background on the Postmark team and how we work. We have one product manager (👋 ), one or two designers, and one or two developers, depending on the needs of a particular project. We are a remote team, so we rely heavily on the usual suspects (Slack, InVision, Basecamp, JIRA) to get our work done. We also don’t work more than 40 hours a week, which means we have to be particularly smart and responsible with how we spend our time. Those constraints define the requirements we have for doing user research, which can be summarized in this seemingly impossible way:
Improve our products iteratively through research without slowing down our development process or increasing our stress levels at work.
So how do we go about this? Glad you asked…
The right method
The first thing I realized while designing a loose process to meet these requirements is that there can’t be any big analysis phase, PowerPoint deck, and team presentation. That certainly has its place, and I’ve done plenty of it in past roles, but it’s simply not necessary or beneficial for our team. Our goal is to improve our product together as a small team, not convince 15 people to start thinking about some changes 3 releases from now. That meant there was only one usability research method that would work for us: RITE testing.
Rapid Iterative Testing and Evaluation follows the same principles as regular usability testing, with one key difference: you make changes as you go. UX Magazine has a good summary of this:
This method is similar to typical usability testing in that participants are asked to complete tasks using think-aloud protocol. The major difference is that, instead of waiting until the end of the study to gather the findings and suggest improvements, the team iterates on the design as soon as issues are discovered by one or two participants. In this way, designers can quickly test and get feedback on new solutions and ideas.
So we follow all the guidelines of task-based usability testing, but we spread the sessions out over two or more days so that we can make changes iteratively as we go. This gets rid of the need for giant reports, and gets us what we really need out of research: better products.
The right remote usability testing tools
The second thing I realized is that, since we’re remote, we’re going to have to do remote testing as well. But I didn’t want to lose the one-on-one aspect of usability testing that is so critical, so unmoderated tools like UserTesting were immediately off the list. There are many ways to handle remote testing of this sort, but I found the easiest and most reliable to be Zoom.
Zoom is simply the best video conferencing software I’ve used. It is solid even with less-than-ideal internet connections, screen sharing works flawlessly, and most importantly: recording is built in and doesn’t impact call quality at all. I record all our sessions and post them to Basecamp for anyone to watch (you know, just in case there’s nothing else on TV).
The right fidelity
Our designers are also front-end developers, so our research process fits in seamlessly with their process without creating a bunch of extra work. We iterate early on in the design process using sketches and wireframes in InVision, but we do usability testing on fully interactive, high-fidelity prototypes.
This is where the trade-offs start to come in. Sure, lower fidelity prototypes could save some time if there are big changes to be made, but it’s also mostly throw-away work. Testing on high-fidelity prototypes means we are working on software that will eventually go into users’ hands. Once we’re finished with testing and have made the final changes, these prototypes get incorporated into our app and pushed to production. So there’s not this feeling that we’re doing double work. The research is the work, and simply means we are able to make changes based on user feedback before it goes live. That is the ultimate value and cost-saving promise that upfront user research delivers on so well.
The right people to talk to
Another trade-off comes up in the recruiting process: how to find users for testing. Yes, we could hire a recruiting firm, pay them a bunch of money, and then work for weeks on recruiting surveys and scheduling. What we would get from that is (🤞) an unbiased sample of target customers. The problem is that the added cost is just not worth the increase in value from the results we’d get.
Instead, we do this in a very simple way. We have a mailing list (sign up here!) that we populate with people who want to be part of research. Whenever we do a study I email that list, and slots typically fill up within a couple of hours. I’m sure it has something to do with the Amazon gift cards we send them, but I choose to believe it’s mostly because they love us so much!
The right way to analyze and communicate results
The designers and I catch up on a video call after every single session. We also have longer calls after each day to discuss the feedback and changes we want to make. So by the time the research is done we’re on the same page and in agreement on what final changes need to be made. But of course, we don’t work in a vacuum, so we want to communicate all this to the team.
I’ve found what works best is a short, bullet-pointed Basecamp post explaining what we’re doing and why, and linking to the full videos if anyone would like to find out more. It helps to be part of a team that trusts each other to do their jobs well, so this is mostly an information post to share insights we gathered, as opposed to a “report” that has to be approved by “stakeholders.”
The right way to put it all together
As with any advice you read on the internet, it’s important to point out that YMMV. But I do want to make a couple of comments on this process based on what I’ve learned so far.
I cut my research teeth in eBay’s User Experience Research group, under the watchful eye of one of the leaders in the industry, Christian Rohrer. We had a large research budget and “release trains” that afforded us plenty of time to do the research we needed to do. So we went all out. We had on-site recruiters, on-site usability labs, and one or more of us were always on the road to go see customers in person. We followed a rigorous process and got good results from it. But (and here’s the important part) only because that was the context eBay required. Research wasn’t used unless a PowerPoint presentation was able to cut through the noise and reach a VP of Product and their Product Managers. So we had to approach every proposed change with extensive evidence to back up our recommendations.
Startups and smaller companies are different. I often worry that we skip research at smaller companies because we think we don’t have the money or time to “do it right.” But remember that “doing it right” is dependent on context. If you have a culture of hierarchy and cutthroat “resource bidding” like eBay, then sure, scaled-down research isn’t going to work for you. But if you work in a small team with very little hierarchy, there’s no reason to spend a ton of money and time on travel and recruiting to get user feedback.
I’ve done this both ways, and as a researcher I can honestly say that I have not seen a reduction in the quality of the data while using a more scaled-down approach. On the contrary, I tend to think it’s even more valuable doing it this way, because the line from customer to improvement is much shorter and faster. Just make sure you follow an established methodology — and remember what you learned about moderating usability studies. If you do that, this light and cheap way of doing remote RITE sessions can have immense value in your organization at a fraction of the time and cost it would take to do it on a larger scale.
So, my recommendation? Try this at home. Then adjust it for your context. It’s not going to cost you a lot, but it’s going to save you so much time and effort down the line if you catch usability issues early on.