When creating a testing suite, start with your team’s values


Building software at scale is often about people—not technology.

Most problems have multiple solutions, especially in the engineering world. We excel at learning about and understanding the problem, coming up with solutions, picking the best one, and implementing it. This is a mostly linear process, with a series of selections and decisions along the way, leading to something built that will be seen, appreciated, and respected.

Testing, on the other hand, involves thinking through lots of scenarios and placing yourself in an almost infinite number of other people's shoes, such as your application's users in the case of UI. The number of permutations looks and, more importantly, feels endless, and worse, pointless. Debugging is in the future and might never happen; testing is an annoyance in the here and now. Some approaches are more quantifiable than others, and engineers tend to lean toward the quantifiable ones with numbers attached.

When new projects and teams form, as leaders we should strive to create a culture that champions valuable, consistent, debuggable, and maintainable automated tests for our frontend applications, integrated with our continuous integration and continuous deployment (CI/CD) systems.

Key testing principles

  1. Valuable - Prevents software regressions and confirms health from the user's or consumer's perspective

  2. Consistent - Same code and same environment mean the same tests pass. Reduce flakiness and increase deterministic outcomes.

  3. Debuggable & Maintainable - In theory, the more tests we have, the healthier our codebase should be. Make writing tests easy and fun! We want engineers to enjoy writing tests, or at least not hate writing them; if engineers hate writing tests, the tests may not get written. Developer efficiency and debugging capability (why did it break?) are key drivers of quality metrics.

  4. Encapsulated - When code is changed within a codebase, all tests can be updated to pass within the same codebase.

  5. Complete - All lines of code, user interactions, and component parameters are reproducible and testable across the entire test suite

  6. Time Efficient - Don't make us wait. Run test suites concurrently when possible, use caching, and limit tests to the code that changed when possible. Machines are cheaper than people.

As you can see, these are ranked. Always rank your tasks and your values, so that you can return to them for deciding the priority between “two good things.” The perfect can often be the enemy of the good.
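As a small illustration of the "Consistent" principle, a test that reads the real clock can pass in the morning and fail in the afternoon. Injecting the clock makes the same code and environment produce the same result every run. This is a hedged sketch; the `Clock` interface and `greeting` function are invented for illustration, not from any particular library:

```typescript
// Hypothetical example: a function whose output depends on time.
// Injecting a Clock instead of calling Date.now() directly makes
// the behavior deterministic, and therefore reliably testable.
interface Clock {
  now(): Date;
}

function greeting(clock: Clock): string {
  const hour = clock.now().getHours();
  return hour < 12 ? "Good morning" : "Good afternoon";
}

// In tests, supply a fixed clock so "same code, same environment"
// always yields the same outcome, no matter when the suite runs.
const fixedClock: Clock = { now: () => new Date(2024, 0, 1, 9, 0, 0) };
console.log(greeting(fixedClock)); // "Good morning", every run
```

The same injection pattern applies to any nondeterministic input: random number generators, network responses, locale, and time zone.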

Other questions to ponder

  • Do you like being able to step through (debug) issues in the browser interactively? Do you like debugging in the command line?

  • When tests fail, is interactively updating expected outputs from the CLI (for instance, in snapshot testing) convenient, or does it make expectations too easy to update?

  • Are videos of in-browser tests that failed helpful?

  • Do you like the guideline that when code is changed within a codebase, all tests can be updated to pass within the same codebase?

  • For UI testing, is it helpful to document where each API property is tested, that is, in logic tests, in-browser tests, functional tests, or elsewhere? For example: "this property is tested in the logic tests" or "this property is tested in the in-browser tests."

  • Do you prefer using API-driven data IDs/attributes in tests instead of classnames/markup to run DOM queries?

  • Do you like test suites running in parallel on different virtual machines if it speeds up the whole run?

  • Should we limit desktop browser testing to rendering engines with more than 5% market share?

  • How many minutes is too long to wait for a test suite to pass before merging into master?

  • Are there any other testing features/paradigms that have delighted your team in past projects?

  • Are there other values/principles that are not listed in the introduction paragraph that you would like included?
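On the question of data IDs versus classnames: classnames change whenever styling changes, while a dedicated test ID only changes when the tested contract does. Below is a minimal, hedged sketch of the idea using a toy element tree rather than a real DOM; `getByTestId` here is an invented helper, not a library API (in practice you would use your framework's query utilities):

```typescript
// Toy element tree standing in for the DOM.
interface El {
  tag: string;
  attrs: Record<string, string>;
  children: El[];
}

// Invented helper: find an element by its data-testid attribute.
// Styling refactors can rename classes freely without breaking this query.
function getByTestId(root: El, id: string): El | undefined {
  if (root.attrs["data-testid"] === id) return root;
  for (const child of root.children) {
    const found = getByTestId(child, id);
    if (found) return found;
  }
  return undefined;
}

const form: El = {
  tag: "form",
  attrs: { class: "login-form--v2" }, // classname may change with any redesign
  children: [
    {
      tag: "button",
      attrs: { class: "btn btn-primary", "data-testid": "submit-login" },
      children: [],
    },
  ],
};

console.log(getByTestId(form, "submit-login")?.tag); // "button"
```

A query keyed on `.btn-primary` would break the moment the design system renames that class; the `data-testid` query survives because it encodes test intent, not presentation.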

What are your automated testing values?

Stephen James

Cross-functional alignment creator collaborating across engineering, design, compliance, and program management leadership on research-led and customer-focused projects. I have the privilege of leading accessibility and design system initiatives that enable organizations to craft a consistent experience that delivers compliance, customer value, and market impact.
