Opinion: Testing doesn't give you correct code, but confidence.

Software Testing

To clarify what I am talking about, let me define the notion of software testing. When I talk about testing, I mainly mean unit testing (testing a single unit of code) and integration testing (testing multiple units of code in unison). Either way, that means testing that involves writing code to test some other piece of code. But my opinion on testing also applies to good ol' manual testing as well as e2e testing suites, which test whole systems with defined tests that are also formulated in code.
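To make the terminology concrete, here is a minimal sketch of a unit test using Python's built-in unittest module (the slugify function is a hypothetical example unit, not from any real project):

    import unittest

    def slugify(title: str) -> str:
        """Hypothetical unit under test: turn a title into a URL slug."""
        return "-".join(title.lower().split())

    class TestSlugify(unittest.TestCase):
        # A unit test exercises one unit of code in isolation.
        def test_spaces_become_dashes(self):
            self.assertEqual(slugify("Software Testing"), "software-testing")

        def test_single_word_is_lowercased(self):
            self.assertEqual(slugify("Opinion"), "opinion")

    if __name__ == "__main__":
        unittest.main()

An integration test would look the same on the surface; its body would just exercise several units wired together instead of one.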

Opinion

Testing is fun, and testing is also tiresome. But either way, it can't give you the certainty that your code, or any code for that matter, is 100% correct and bug-free.

The reason is that software is not only the code you write or the artifact you compile in the end. It is a whole ecosystem. It is the system you run your piece of code on. It is the hardware that runs that system. To verify that your code is correct and completely bug-free, you would need to verify that every part of that whole ecosystem always runs verifiably correct and that everything always runs with the same external parameters, and even then: chance. You can't create a "lab" situation. You can only establish boundaries, like always running the same version of x and y and so on.

Or, as a simple negative argument: if testing could verify that code is bug-free and correct, there would never be any bugs. I think there are many tickets in many ticketing systems around the world that show otherwise.

So what do you actually do when you write tests? You build confidence in your code. When you can outline, for example, five executions of your code and all of them work as expected, you have tested maybe five permutations of your code, and that gives you confidence in it. Since those examples run not only on your machine but also on your colleagues' machines, they can verify the examples too, which gives them confidence in your work as well. When a whole team is confident in a piece of code, a base trust is established that it can be run in production.
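For instance, five such executions could look like the following sketch, again with Python's unittest (the parse_price function is a hypothetical stand-in for the unit under test):

    import unittest

    def parse_price(text: str) -> float:
        """Hypothetical unit under test: parse a price string like '3,50 EUR'."""
        return float(text.split()[0].replace(",", "."))

    class TestParsePrice(unittest.TestCase):
        def test_examples(self):
            # Five permutations I expect to work; each passing case adds
            # confidence, but none of them proves correctness.
            cases = [
                ("3,50 EUR", 3.5),
                ("0,99 EUR", 0.99),
                ("10 EUR", 10.0),
                ("1234,00 EUR", 1234.0),
                ("0 EUR", 0.0),
            ]
            for text, expected in cases:
                with self.subTest(text=text):
                    self.assertEqual(parse_price(text), expected)

    if __name__ == "__main__":
        unittest.main()

Anyone on the team can run the same five examples and watch them pass, which is exactly the shared, repeatable confidence I mean.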

Does that make the code free of bugs? Certainly not, but it at least gives you the confidence to publish the software and, in the best case, generate business value.

I didn't invent this point of view. Edsger Dijkstra already wrote in 'The Humble Programmer': "program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence".

Showing the presence of bugs is actually a good thing, though, not the opposite. Knowing something is broken means, in most cases, you know what to fix and where, which actually makes your software more resilient.

The frequent response I encounter when I "rant" about testing is: "Well, if it doesn't hurt and makes our code more trustworthy, let's keep doing it anyway.", which completely misses the point. I am not against testing, I am for it. But I despise programming towards testing, since testing should be a verification, not a framework to build against.

Writing tests creates more code to maintain

I've worked with an engineer whose goal was a test-to-code ratio of >= 2, which meant for him: every line of code should be covered by at least two lines of test. Even though this was more a vague average to aim for, it shows the dissonance between usefulness and process. He built a process to generate an expected code coverage, which was supposed to give him a "standardized" confidence. In practice it just tripled the code to maintain and made management happy. But you don't want to make management happy; you want to build confidence in your code for yourself and your team. And you also want to build simple, maintainable software; more code is always a liability and tends to be less useful.

Should you build for testability?

Short answer: no.

Building software geared towards a testing system raises two big problems. First, building towards a system means coupling your piece of code to a dependency, not only in using the API of that dependency (which is a necessary evil) but also in how you structure your code, which usually results in software that can't be followed when read. Secondly, if you have to split up functions only to test them, two questions should come up: a) does that function do too much, or more specifically, does it encapsulate too many steps, or am I doing something in a too complicated manner? and b) does splitting the code make it more readable? If the answer to either of those questions is no, don't split it; see the sketch below.
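As a sketch of that second problem (the report function and its helpers are hypothetical), here is a split done purely for testability that fails both questions:

    # Split purely so each step can be tested on its own; question a) is
    # "no" (the original function did not do too much) and question b) is
    # "no" (three one-line helpers read worse than one function).
    def _load(lines):
        return [line.strip() for line in lines]

    def _drop_empty(lines):
        return [line for line in lines if line]

    def _join(lines):
        return "\n".join(lines)

    def report_split(lines):
        return _join(_drop_empty(_load(lines)))

    # The same behavior kept whole, tested through its observable result.
    def report(lines):
        """Strip the lines, drop the empty ones, join the rest."""
        return "\n".join(line.strip() for line in lines if line.strip())

    assert report(["  a ", "", "b"]) == "a\nb"
    assert report_split(["  a ", "", "b"]) == "a\nb"

The behavior, and therefore the test, is the same either way; the split version just leaves more surface to read and maintain.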

Conclusion

Testing is a good thing: it builds confidence in software. Repeatable tests are even better; a quick reload and a check that it works may be good enough for one person, but not for many, so make your results verifiable (in this case, repeatable). But testing doesn't show that you wrote correct and bug-free code. It shows that x runs as expected in multiple configurations of your software at the language level. Testing should never interfere with how you build your software; in most cases it doesn't enhance the structure of your software, it just makes it more cluttered.

I personally prefer a more passive testing approach. I try to test what I actually expect the software to do, with an example. I usually disregard the coverage metric (though sometimes it is nice for the ego). And I also think programming for maintainability, i.e. a reduced median time to fix, is worth more than high test coverage or any tests for that matter.