A few weeks ago at SauceCon, we talked to Matt Wyman, chief customer officer at SauceLabs. This year’s theme, Reimagine Test, emphasized the need to prioritize the user experience in testing. As Wyman put it, “We really focused on that customer experience, using the right type of test tool at the right moment.” After the conference, we caught up with him to talk more about a critical point: many teams don’t know how to test for user experience because that hasn’t been in their wheelhouse.
Wyman provided advice on three typical testing mistakes and how developers can improve the accuracy and breadth of their testing environment.
1. Bloated Testing Suites
Here’s a common scenario: Flaky tests get slipped into large test suites and developers ignore them instead of correcting or removing them. The suite gets increasingly bloated as people add more and more tests, which can obscure their results. “From the outside, it’s hard to tell which tests are the most critical when you’re making a change to pass,” Wyman said.
He explained that if two people are working on the same part of the codebase and person A sets up a focused series of tests, person B can make a change and be sure it will work when those tests pass. But that’s no longer the case with a testing suite stuffed with irrelevant tests.
“If instead, I create a two-hour suite with thousands of tests in it, you won’t know which one is most applicable to the change you’re making,” Wyman pointed out. “This is one of the more significant complications when people are less knowledgeable about building a good test suite or test practice.”
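Wyman’s point can be sketched in code. Below is a hypothetical, minimal example of a focused suite: a handful of tests tied to one function, so that anyone changing that function knows exactly which tests cover it. The function and test names are illustrative assumptions, not anything from the interview.

```python
# Hypothetical focused suite for one piece of the codebase (a discount
# calculator), rather than a two-hour suite with thousands of tests.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, clamped to the 0-100 range."""
    percent = max(0.0, min(100.0, percent))
    return round(price * (1 - percent / 100), 2)

def test_normal_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_discount_clamped_high():
    assert apply_discount(100.0, 150) == 0.0

def test_discount_clamped_negative():
    assert apply_discount(100.0, -5) == 100.0

if __name__ == "__main__":
    # Person B can run just these three tests after changing
    # apply_discount and know immediately whether the change is safe.
    for test in (test_normal_discount, test_discount_clamped_high,
                 test_discount_clamped_negative):
        test()
    print("focused suite passed")
```

With a runner like pytest, the same idea scales: tagging or naming tests by the area they cover lets a developer select only the relevant subset instead of rerunning everything.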
2. Using a Single Tool Like a Hammer
Wyman emphasized the need to use the right testing tools at the right time for the right purpose—something many developers don’t do. In this common mistake scenario, he said, the developer takes a functional user interface testing framework and uses it to validate their integrations, their APIs, their visual results and everything else.
“It’s a classic case where you have a hammer and everything looks like a nail,” he said. “It makes your overall testing practice very heavy and very long. Typically, if you’re not using the right tool for the right purpose, it takes a lot of effort to maintain it.”
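As a concrete (and deliberately simplified) illustration of picking the lighter tool: rather than driving a browser through a UI framework just to check form validation, the validation logic can be exercised directly at the API level. The function and its rules below are assumptions for the sketch, not SauceLabs code.

```python
# Hypothetical sketch: test validation logic directly instead of
# through a heavy UI-driven test. No browser, no framework, runs in
# milliseconds.

import json

def validate_signup(payload: str) -> list[str]:
    """Validate a signup request body; return a list of error messages."""
    try:
        data = json.loads(payload)
    except json.JSONDecodeError:
        return ["body is not valid JSON"]
    errors = []
    if "@" not in data.get("email", ""):
        errors.append("email is invalid")
    if len(data.get("password", "")) < 8:
        errors.append("password too short")
    return errors

# Fast API-level checks -- the UI framework is saved for the cases
# that genuinely need a rendered interface.
assert validate_signup('{"email": "a@b.com", "password": "secret123"}') == []
assert validate_signup("not json") == ["body is not valid JSON"]
```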
3. User Blind Spots
You don’t know what you don’t know—or, as Wyman put it, “the experience error is hard to find and even harder to discover.”
Wyman discussed a hypothetical scenario in which engineers build and test a system that accepts only numeric inputs. Testing runs perfectly—because the engineers test only the entry of numbers. The fix here is fuzz testing: Thinking beyond simple use cases and introducing a “what if” quotient by testing random and unforeseen inputs, such as users who enter letters. Building this foresight into the development process itself can involve developers in the quality assurance process much earlier and avoid mistakes down the line.
He also stressed the need for observability in testing, with instrumentation that can help teams predict where users might run into limitations. “Developers have to look at the user analytics and usage analytics, not just system analytics,” he said. “Quite often the user analytics and observability the product manager is looking at, they see from the outside—and that’s useful data for the developer, so they can see those failure cases early.”
Transcending Testing Limitations
Is there such a thing as a flawless test environment? Probably not. But by sidestepping the three mistakes above, developers can transcend ingrained testing paradigms and elevate both quality and experience.