Trust your Tests

When I first started driving, I was scared to change lanes. The other cars drove so close! It felt like I could high-five the driver next to me just by sticking my hand out the window.

At 80 km/h, there's less than a second of reaction time if another car swerves into my lane. How could I really know there wasn't a car in the next lane, just a couple of inches outside my field of view? I looked around again and again, contorting my neck into awkward positions.

Eventually, I learned to trust my mirrors. If I don't see another car in my mirrors, I can safely assume there isn't a car beside or behind me, even if I'm not looking directly there.1

I didn’t feel comfortable with this right away—changing lanes without even looking in the direction that my car is going!? But eventually, I got used to it.

I feel the same way about software tests.2 At first, I felt the need to verify every change visually by loading it up in the UI. How can I tell if my change worked without seeing it with my own eyes?

But I think that automated tests are superior to manually confirming changes in many ways, and I should feel comfortable relying on them more often, even in the front-end!

Of course, I still manually load the UI to confirm my changes when necessary, especially if it's a completely new feature. Often, visual inspection will reveal subtle bugs you hadn't considered. I'm not saying that automated tests replace manual testing; I just think that automated tests are generally superior to manual tests.

Why automated tests are superior to manual tests

  1. Tests force you to be explicit about states and conditions
  2. Tests work even when you’re not looking
  3. Tests help others who don’t know what to click on
  4. Tests help you when you forget 6 months later

1. Tests force you to be explicit about states and conditions

Your users are diverse: they reside in different locales, have different permissions, and are eligible for different sets of features.

It’s time-consuming to spin up dummy users to cover every possible use case, especially for large enterprise SaaS apps with many feature-flagged conditions.

If I wanted to confirm every UI change visually, I'd have to recreate a multitude of dummy users covering all those edge cases, even for minor front-end changes.

Instead, I can invest this time into covering each permutation of a user state with a test. This forces me to comprehensively map out the possible user states, so I know I've actually covered every edge case.

I only need to write the test once; afterwards, it runs every time I update the code path, and every edge case is tested without me having to click through the whole user journey again.

This saves time, freeing me from confirming changes manually and letting me move on to shipping more features!

2. Tests work even when you’re not looking

If I manually “confirm” a UI change visually, this may satisfy my emotional need to “confirm” that my code performs as expected, but I think this is a false confidence.

What I really did was "confirm" that the code works for one user, in one state, under one set of conditions.

I did not actually click through the feature for all users in all states in all cases. Let’s say there are 10 possible user states—if I “confirm” my feature with 1 test user, then I only have 10% test coverage. In contrast, by writing an automated test for every user state, I have 100% test coverage.

This is not to say that automated tests replace the need to manually test, but that there is a cost/benefit trade-off to make.

For small changes, does the extra "confidence" matter? A large enterprise SaaS app can easily have over 100 possible user states, and spending an hour setting up the perfect test user just to cover 1% of the possible user journeys is rarely worth it. I'd rather spend that hour writing more automated tests.

But for larger changes and new feature work, manual testing is still valuable because it reveals subtle ways the UI can interact with other components outside your purview. This helps you improve your automated tests later.

Development speed matters, and I’d rather work in a team with a cultural expectation that every pull request has full automated test coverage, rather than an expectation that every pull request has to be manually tested.

My purpose in writing this blog post is to give you spiritual permission to feel emotionally comfortable relying on your tests for confidence that you implemented the feature correctly, even when you didn’t click through the UI, especially for smaller changes.

3. Tests help others who don’t know what to click on

In a company, you're not the only one writing code. Having good test coverage helps your coworkers collaborate with you, and gives them confidence that anything they build on top of your work can be shipped quickly, even if they aren't familiar with your work.

Teams with good test coverage are a joy to work with because I don't need to worry about testing UIs they own; I can focus on testing the incremental new features I'm building, which saves time for everyone.

I don’t need to coach every collaborator on how every feature works before they can be productive, because the automated tests are working in the background to ensure nothing they implement will break existing features.

4. Tests help you when you forget 6 months later

“Your coworker” is synonymous with “You, 6 months from now”, because you’ll forget what you wrote over time and end up in the same state of ignorance.

Automated tests are a form of documentation for what the feature is supposed to be doing. Good tests that cover the majority of edge cases give you a comprehensive view of every user who is using your product. You might only remember the most common user states, but have forgotten all the rare edge cases.

A less-common user is still a valuable user—they still derive benefits from your products and log in to your app every day, even if you might not think about them much.

The tests are always there, waiting in the wings, steadfastly reminding you of users you may have forgotten.

Conclusion

I'm not saying don't do manual testing, but manual tests have limitations. This blog post is here to give you permission not to feel bad about skipping a click-through of the UI when you're confident you have good automated test coverage.

I wouldn’t give drivers the advice “don’t turn your head when changing lanes”—my advice instead is to use your mirrors and trust that they work.

Manually testing the UI is helpful to give you emotional confidence that what you’re doing is working. But automated tests will actually test that it’s working. Use both!


Footnotes

  1. I set up my mirrors the non-traditional way for full blind spot coverage by using my rear view mirror, not my side mirrors, to check for cars laterally. 

  2. By "tests" I mean any type of automated test: unit tests, front-end tests, integration tests, synthetic tests, or end-to-end tests. Each of these has different strengths and weaknesses, but the distinctions don't matter so much for this blog post, because I'm comparing automated tests in general to manual testing.