

5 Practices to Improve B2B Usability Testing Outcomes


If you’ve been involved in even a few usability testing projects, you’ve likely seen participants get tripped up in unexpected ways. Over the years, I’ve noticed that tests with B2B audiences seem more likely to go off the rails. Even though we follow common research methods — such as creating natural scenarios and tasks, using high-fidelity mockups, asking people to think aloud — there’s something about these users that makes them different.

Recently, I went back through a variety of tests to look for root causes of problems, and how to mitigate them for better outcomes. In this post I share several situations I’ve encountered in B2B usability testing (focused mostly on product catalog experiences). And I provide suggestions on what you can do to build better tests. I should note that these arose during moderated tests, since as you’ll read, most of these situations would likely derail an unmoderated test.

And while the focus is on B2B audiences — such as customers, buyers, partners, suppliers, retailers — you might find the topics of this post apply to non-B2B situations as well, with employees or “prosumers.” In many ways it’s about the user and their knowledge of your business and industry. Taken together, these examples are somewhat interrelated, but I’ve tried to make them stand alone for different situations.

Reasons why B2B Research is More Complex

While the primary topic of this post is about planning for better B2B usability testing, there are a few things worth pointing out that relate to the generally complex nature of B2B digital experiences:

  • Professional audiences, more granular personas: B2B tools are used by people who have specialized knowledge, and a variety of specific job responsibilities that involve interacting with your company frequently.
  • Complex decisions: The digital research phase for B2B products and services often reflects complex purchases and multi-step buying processes involving many individuals, with more rational, fact-finding activity than in consumer contexts.
  • Difficulty in finding participants: Beyond the nuances of planning and conducting the research itself, finding B2B participants can be time consuming and costly, given the complex roles and organizational contexts involved: time spent on the test during work hours, compensation, and discussion of confidential or sensitive activities.
  • Designers/researchers need specialized knowledge: Designers of the tests/prototypes and researchers need to be close to the subject at hand (deep domain expertise) to fully understand the products and services and roles and responsibilities, and to develop meaningful scenarios. This is often much more complex than in B2C contexts.

With that in mind, here are five situations you might encounter in B2B usability testing.

How to Improve your B2B Usability Testing

1. Be Aware of the Whole Scenario

The situation: Quite often when you’re optimizing a large platform you want to test just a portion of a site. For example, you are updating product listing and detail page functionality for eCommerce, and you’re testing if users are more successful in finding a product using new filtering and page design.

How B2B audiences might go wrong: Crafting a natural experience might involve users first navigating an existing home page or landing page where they encounter poor content/UX, or they have to interact with a product catalog taxonomy that is overly complex (or overly simple).

Also, if your expected B2B participants have a strong knowledge of your existing web site (which you know is poor), they might have already developed non-optimal workarounds or ways to get to where you task them to go. They might just want to put a part number in search (or a product description you mentioned). Be prepared to redirect and learn from these odd behaviors.

Being aware of the whole scenario here means that you need to carefully assess all elements of the experience that your participant might encounter and think of how some shortcomings you haven’t yet addressed can affect your ability to test your new features in a natural way. Make sure what you really want to test doesn’t get hung up by what comes before.

A solution is to clean up and prepare more pages of the flow than you initially planned. You may have to take screenshots of existing pages and remove pieces of content, creating a simpler prototype that directs rather than derails. You might also have to create a simpler product catalog list so participants can find the category you want them to find.

2. Mockup Data Can Muck Up Testing

The situation: When testing B2B prototypes, “bad data” — either made up, or from another product — can be worse than “lorem ipsum” as stand-in copy.

How B2B audiences might go wrong: As with testing around suboptimal existing content, the use of bad data in B2B mockups can stop the user in their tracks. It’s always important to make sure your prototypes match the “industry knowledge” of the user, and it can be surprising how much your customers know. I’ve had situations where a person instantly noticed that several product images on a results page mockup were not correct. I’ve also had people scan model numbers in a repair parts table and say they didn’t go with the product. These are not things we’re trying to test!

A solution is to build mockups with real data. But because UX/UI design doesn’t strictly need good data to produce effective solutions, sourcing real data adds time to the process. It’s not overly critical, though, and you can probably get by if you set the stage with the user: reiterate that the experience is a “fake picture” of the website, that “some things are not wired up,” and that “some data isn’t correct.”

3. When Savviness Drives Speediness (and Missing Things)

The situation: You might encounter a person who spends all day on sites like yours, has specific preexisting experience on your site, and who has an overly confident sense of “how these things work.” It might even be that the person has some experience in testing websites.

How B2B audiences might go wrong: Once I gave a participant the typical introduction about “asking for help in evaluating our revised site functionality…” and “we’re going to give you a few tasks related to finding products.” We began with a task where navigating to product details involved interacting with many UI elements that we wanted to observe (drilling down, filtering, reviewing results). On the product listing page I asked, “Can you describe what you see here?” The person then zoomed all around the page, talking about what they thought every element did and what might happen if they clicked on it.

I don’t think this person did well in the tasks that followed, not because of the UX/UI, but because they never got “into the zone” of being a typical user performing specific tasks. It was tough to get them to move through the screens; at each step they wanted to deconstruct all the features.

Mitigating this situation: First, be aware that some B2B participants might not want to test the way you would like; they might just want to talk about your business. Second, there’s a challenge in evaluating interfaces: it’s best to spend most of your time observing what people do, and it’s not always useful to ask them about functionality. In this case, even though the person claimed they used web sites like this “all the time,” I didn’t get much out of their session.

4. Be Aware of “Task Switching”

Here’s an odd instance where, again, a person with a great deal of knowledge about your products can go beyond the task at hand and use the site quite differently than expected.

What happened: This was a typical series of tasks with minimal prompting along the way; I was ready to coach if they got stuck. At one point I had the participant look at a product detail page designed with several tabs of different information. I asked them, “Now, if you wanted to see what is included in the package, where would you look?” The goal was simply to get to the “Specifications” tab and find the relevant bullet.

But I could tell as they talked out loud that they changed my task to something like “find out if this specific item is the one that comes with the extra bracket” which led them to start looking for a comparison grid feature (which wasn’t on any other tab or anywhere on the site) to compare all the models in the family. They also pointed out that the product variant selector (a menu to change models) didn’t actually include enough data to “get to the product that had the bracket!”

I think the issue here is that in creating the new UI for the detail pages we didn’t select an average product to comp — we selected a top-selling, super popular product — which makes sense as something to show off during design, plus it had a lot of variants and other nuances. The problem was the knock-on effect during testing: a person who purchases these products all the time still relies on printed literature, which does feature a comparison grid.

Perhaps if we had also mocked up a less common product, we could have focused better on the pieces we were most interested in for that iteration. And while they didn’t actually find “what is in the package,” they did provide insight into some very different features that other B2B customers would likely find useful, and gave us a new story to understand the research-to-purchase journey.

5. When Usability Testing Turns into Contextual Inquiry

These examples lead me to a summary topic about B2B testing. Personally, I’m always ready to have these moderated usability tests turn into contextual inquiry. While we typically conduct contextual inquiry during the early discovery stages of a product, and do a lot of planning with internal subject matter experts, I think it’s best to be ready to go with the flow.

Said another way, to the extent you have more professional, skilled audiences, with a variety of responsibilities, who have knowledge of your business and site, who are going through complex buying decisions, the more likely they might deliberately or accidentally give you something different than you expect.

And if you have difficulty in finding participants, you likely are doing a lot less testing, which is unfortunate but also means you must make the most of your time with them.

So have some deeper questions prepared that you can ask if something goes wrong, or if you get more time with an individual.

Some things you could learn from your participants:

  • How they define their job responsibilities (how many hats do they wear)
  • To what extent their company is more progressive than peers
  • What other kinds of systems/platforms they use to do their daily work
  • What is in front of them when they use your website
  • How they view your company vs. competitors

Tip: These insights help create more granular personas.

Summary

In this post I’ve identified several ways in which your best-laid plans for B2B usability testing might go astray, along with the following suggestions:

  • Assess the whole experience, making sure existing poor experiences don’t get in the way of the flow
  • Use real data as much as possible, or clearly indicate “this is not real data”
  • It’s OK to ask people what they think certain functionality does, but do this after you observe them working on the main tasks
  • Your time with B2B users is precious — be ready to switch to a broader discussion if that’s how they would like to interact with you
  • Be prepared to run a few pilot tests and adjust depending on what happens, before you put the test in front of more people

Bonus Point: When it comes to analysis, be thoughtful about interpreting the results. Did the customer have trouble because of the new experience or design, or because they got tripped up by something out of place in the mockup, or maybe both, or something else entirely? It’s not as simple as pass or fail with B2B audiences.

Can we Help?

If you’re looking for help with B2B usability testing, contact our experience design research experts.


Mike Osswald

Mike is a lead experience consultant, who has been focused on digital experience strategy and innovation for over 25 years. Mike helps our clients deliver optimal customer journeys. He’s always on the lookout for significant trends that build strong consumer, channel, and employee relationships.
