I’ve written a lot about software development, but I’ve been silent when it comes to testing. Quality assurance is not my strongest suit, and some of the engineers I’ve worked with in the past can confirm this.
I have outright neglected tests, especially back when I worked for early-stage startups and did consulting work. Being able to rely on QA engineers to catch elusive problems and writing a few unit tests for more complex cases gave me a sense of calm.
Then I started working for enterprises that embraced a different philosophy. QA was a normal part of the development process and we didn’t have dedicated people to protect us from shipping bugs to production. We had to write our own tests and validate each other’s work.
It turned out that autonomy comes at the price of increased responsibility. Who could’ve known?
In a large company, the software you write could be running in production for years. A good test suite will help you have a good night’s sleep.
There are two general schools of thought in testing - one that favors unit tests and another that favors integration tests. While both philosophies have their strong points, a healthy test suite will have both.
It’s worth saying that if I am working on a prototype or something that won’t go into production, I would probably skip writing tests altogether. Requirements will change too often for them to bring any real value.
But for all other cases, I would suggest having e2e tests for your application’s critical public functionality and unit tests that cover your domain-specific logic.
The e2e tests will ensure your most important scenarios, like authentication and payments, work as expected, while the unit tests will cover the granular details. For unit tests, I would focus predominantly on business logic, since everything else is covered by your libraries.
But as long as you can validate that your application loads and its business logic produces the proper results, you reduce the severity of the bugs that can slip into production.
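To make the split concrete, here is a minimal sketch of the kind of unit test I mean. The function and its discount rule are hypothetical, not from any real codebase — the point is that the test pins down a granular business rule that no library covers for you.

```python
def apply_discount(total: float, loyalty_years: int) -> float:
    """Hypothetical business rule: 5% off per loyalty year, capped at 20%."""
    discount = min(loyalty_years * 0.05, 0.20)
    return round(total * (1 - discount), 2)

def test_discount_is_capped():
    # The cap is exactly the kind of granular detail a unit test should lock in.
    assert apply_discount(100.0, 10) == 80.0

def test_no_loyalty_means_no_discount():
    assert apply_discount(100.0, 0) == 100.0
```

An e2e test would never exercise the cap at ten loyalty years; a unit test makes that edge case explicit and cheap to run.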
Thinking about testing strategies is easy when we have a clean slate that allows us to set up any tools and flows we need. But when we inherit an untested legacy codebase, we’re faced with a dilemma - where to start?
Writing unit tests function by function would be a very unproductive way to spend your time. Especially in a large codebase.
Adding integration tests is easier in general. You need familiarity with the product, not with the implementation. So even if no one can tell you the details about the algorithms being used, you will probably have a sales or marketing person at hand who knows how the software should behave.
I would start by writing integration tests for the application’s functionality in order of importance and frequency of change. Modules and features that are worked on more actively should get priority. After all, tests are meant to ensure we don’t introduce unwanted modifications.
While integration tests should be added proactively, you should approach unit tests the opposite way. Only add a test whenever you have to change existing code or add new code. This is because legacy code is often hard to test, and testing it will involve a certain amount of refactoring.
By only testing what you touch you reduce the scope of each change.
Exporting a private function for the sake of testing is not a good decision. That’s not what we mean when we say that code should be testable.
I don’t mean that private functions shouldn’t be tested. They shouldn’t be explicitly tested. Unexported functionality is an implementation detail that gets validated as part of the functions that use it.
So whenever you write a test for a module’s public methods you implicitly test all the private functions it relies on. And if your unexposed functionality has edge cases that you want to cover, then test them through your public functions as well.
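In Python terms, a sketch of the idea with hypothetical names: the underscore-prefixed helper never gets a test of its own, and its edge cases are exercised through the public function that calls it.

```python
def _normalize_email(raw: str) -> str:
    # Private helper: an implementation detail, never tested directly.
    return raw.strip().lower()

def register_user(email: str) -> dict:
    # Public entry point; its tests implicitly cover _normalize_email.
    normalized = _normalize_email(email)
    if "@" not in normalized:
        raise ValueError("invalid email")
    return {"email": normalized}

def test_email_is_normalized():
    # Whitespace and casing are edge cases of the private helper,
    # but they are driven entirely through the public function.
    assert register_user("  Ada@Example.COM ") == {"email": "ada@example.com"}
```

If `register_user` is later refactored to normalize emails differently, the test keeps passing as long as the observable behavior holds — which is the whole point of not testing the helper directly.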
If you can’t validate the entire behavior of your private methods through the public interface, then you need to work on your code’s testability.
It may mean that you’re reading environment variables too deep in the code, or importing specific modules that you could instead pass as arguments to a function.
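A small Python sketch of that refactor, with hypothetical names — the first version reaches out for its configuration, the second has it handed in, which is what makes it trivially testable:

```python
import os

def send_report_implicit() -> str:
    # Hard to test: the function fetches its own configuration,
    # so every test must set up the environment first.
    endpoint = os.environ["REPORT_ENDPOINT"]
    return f"sending report to {endpoint}"

def send_report(endpoint: str) -> str:
    # Easier to test: the caller supplies the dependency.
    return f"sending report to {endpoint}"

def test_send_report():
    # No environment setup needed; the test just passes a value in.
    assert send_report("https://example.test/reports") == (
        "sending report to https://example.test/reports"
    )
```

The same reasoning applies to module imports: a function that receives its collaborator as a parameter can be tested with a stand-in, while one that imports a concrete module cannot.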
In all honesty, I’ve never done proper testing in production and this is still something I’m reluctant to do. I just haven’t worked at a company that would’ve benefited enough from this approach.
To do proper testing in production you need more than a quick hand in approving pull requests. You need a well-established feature flag flow, together with the ability to roll changes back quickly when necessary. You need the observability tools to identify problems early.
Testing in production highly depends on your users’ tolerance for bugs. An organization developing a product used for entertainment can be more lenient towards regressions. But another that has to retain a high level of trust and authority can’t allow itself that same luxury.
I wouldn’t be happy if my bank started testing new features in production - and I say this as a software engineer. Even a simple UI bug can make my heart and stomach switch places.
Yet, there is great value in the idea that testing should not be limited to development and staging.
Production shouldn’t be a pristine environment where proper behavior is taken for granted. Real user behavior and data are only available there after all.
A test, regardless of its scope, is nothing more than a traffic barrier aiming to save you from sliding off the road. Quality comes from a good understanding of the domain, well-defined requirements, and a productive environment to work with.
An engineer who is not well-versed in the field they’re creating software for will have a harder time identifying edge cases. No matter how detailed a ticket is, there’s always the possibility that its creator has missed a detail or two.
Knowing how the system or the product should behave will help you polish your software, because a test that validates the wrong behavior is even worse than not having one. Most production bugs are a result of incomplete understanding rather than a lack of testing.
While focusing on tests, we tend to neglect the importance of a good local development setup. A fast-running suite shortens the feedback loop from implementation to result. This is easier said than done, though. I’d suggest using a test tool that allows you to run a single test file at a time.
To add to this, my experience so far has shown that a problematic local setup can hinder your ability to produce quality software. The more friction a developer experiences when making changes to a codebase, the less likely they would be to go the extra mile for their feature.