“A user interface is like a joke: if you need to explain it, it’s not that good.”
-Zoltan Kollin, UX Designer
Test automation is critical for continuous delivery and provides fast, repeatable, affordable testing; there’s no doubt it’s a must-have when deploying at speed. Customers often ask us about testing for brand new features—when is the right time to introduce automated tests?—so we’ll cover that here.
When testing for functionality at the browser level, we should differentiate between two kinds of testing: new feature testing and regression testing. The former focuses on making sure brand new features are functional and easy to use; the latter focuses on making sure nothing in the current application has broken as teams deploy new builds.
In brief, we recommend manually testing brand new features, and then adding automated regression coverage for those features once they stabilize. Below we expand on why we believe this.
Regression testing covers the current application functionality when new changes or features are introduced. It is critical because during the deployment of a new feature, all eyes are on that feature; current functionality will have less human attention. Because existing feature sets are fairly stable, there is a clear payoff to investing in automating these tests: they are repeatable and won’t need to change frequently.
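Because regression tests run unchanged on every deploy, even a small suite pays for itself quickly. As a minimal sketch of what one such repeatable check looks like, here is a hypothetical login workflow expressed as a Python `unittest` case; in a real browser-level suite, the `login` function would instead drive a browser through a tool such as Selenium or Playwright, but the structure of the test is the same:

```python
import unittest

# Hypothetical application logic standing in for a browser workflow.
# In a real suite, this would be replaced by driver calls that load
# the login page, fill the form, and read the resulting page.
def login(username: str, password: str) -> str:
    """Return the page the user lands on after a login attempt."""
    if username == "demo" and password == "correct-horse":
        return "dashboard"
    return "login-error"

class LoginRegressionTest(unittest.TestCase):
    """Repeatable checks that existing behavior still works after a deploy."""

    def test_valid_credentials_reach_dashboard(self):
        self.assertEqual(login("demo", "correct-horse"), "dashboard")

    def test_invalid_credentials_show_error(self):
        self.assertEqual(login("demo", "wrong"), "login-error")
```

A suite like this runs in seconds on every build, which is exactly the property that makes automating stable workflows worthwhile.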
But what about for brand new features?
What is a New Feature?
Testing brand new features is a more interesting puzzle. What should be tested? When should that testing be automated?
Before going further, we should make some distinctions in terminology. “New features” come in three flavors:
- Changes that do not affect the user interface (e.g.: a backend change)
- Changes that affect the user interface for existing workflows (e.g.: a button moves)
- Changes that introduce a brand new workflow (e.g.: adding a new product)
For regular maintenance on an application or alterations to functionality that don’t change the workflow for a user, there’s no need to build brand new tests at the browser level: your current browser testing suite already has you covered—that’s what it’s there for.
For changes that impact a current workflow, you will need to update your existing automated tests to reflect these changes. This can be done during feature development or after the feature hits the testing environment and breaks the testing suite.
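One way to keep these updates cheap is to centralize shared UI selectors in one place (a lightweight version of the page-object pattern), so that a moved button means changing one constant rather than editing every test that clicks it. A minimal sketch, using a hypothetical fake page in place of a real browser driver:

```python
# Hypothetical selectors kept in one shared mapping. When the submit
# button moves in a redesign, only this mapping changes; the tests
# that use it stay untouched.
CHECKOUT_SELECTORS = {
    "submit_button": "#checkout-footer .submit",  # was "#checkout-header .submit"
    "total_label": "#order-total",
}

class FakePage:
    """Stand-in for a browser driver; a real suite would use Selenium or Playwright."""
    def __init__(self, elements):
        self.elements = elements  # selector -> element text

    def click(self, selector):
        if selector not in self.elements:
            raise LookupError(f"No element matches {selector!r}")
        return f"clicked {selector}"

def submit_order(page):
    # Tests reference the named selector, not the raw CSS string.
    return page.click(CHECKOUT_SELECTORS["submit_button"])

page = FakePage({"#checkout-footer .submit": "Place order", "#order-total": "$42"})
print(submit_order(page))
```

With this structure, updating the suite when a workflow changes is usually a one-line edit, whether you make it during feature development or after the change breaks the suite in the testing environment.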
For changes that introduce brand new products or workflows, no browser-level automation yet exists to test them. These changes are what we call “brand new features.” Automation for them will need to be built, but it should be introduced only after the new feature reaches production.
UX Testing and Functionality Testing of New Features
For brand new features or major changes to features, a team will need to develop tests that cover multiple angles. Functionality is key—don’t introduce new bugs—so you’ll need to do functionality testing. But in addition, teams need to test the user interface (UI) for ease of use and customer value before deployment—this is user experience (UX) testing.
This kind of testing can really only be done by humans, and shouldn’t be done exclusively by developers or product teams familiar with the product. Familiarity with a product undermines one’s ability to judge its usability. Users unfamiliar with the new feature need to test it to determine whether it’s intuitive and delightful, and strong, quantitative metrics need to be used to understand the big picture and avoid interpretation bias by the product team. Services such as Usertesting.com or Vempathy can provide measurable, quantitative user experience feedback across dozens of dimensions.
The fact that humans are already repeatedly and manually testing a brand new feature for UX means they are, by nature, also testing it for functionality: if something breaks, they’ll find it. Building automated tests for brand new features is therefore not yet necessary, and there is also a good reason to deliberately wait.
New Feature Functionality Testing: Timing
For any brand new feature, a team should anticipate making major tweaks after releasing to production. A disciplined team will not tolerate releasing major bugs with a new product, but should be ready to improve the product as user feedback arrives. You should expect new features released into production to change a few times before they stabilize. For this reason, heavy investment in automated functionality tests for those features should come late in the game, once the feature has stabilized; otherwise, you’ll waste the investment and simply have to rebuild those tests several times before they become repeatable.
Automated testing pays off when it’s run many times: it’s expensive and difficult to build, so it doesn't make sense to build automated tests for workflows that will be tested once or twice before the test needs to be rebuilt. Once the new feature is stabilized, then build your automated tests, fold them into your regression testing suite, and move manual testing efforts towards the next set of new features.