How do you write good tests?

One aspect of Clean Code is that code is testable and tested.

Sonar enables users to import test coverage reports, which show how much of a codebase is exercised by test code. But good code coverage doesn’t necessarily mean you have good tests.
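For example, a Python project that produces a coverage.py XML report could point the scanner at it via sonar-project.properties along these lines (project key and paths here are illustrative):

```properties
# sonar-project.properties - illustrative values only
sonar.projectKey=my-project
sonar.sources=src
sonar.tests=tests
# Coverage report produced by e.g. `pytest --cov --cov-report=xml` (pytest-cov) or `coverage xml`
sonar.python.coverage.reportPaths=coverage.xml
```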

Sonar offers some rules for writing good tests for Java, JavaScript, TypeScript, Python, PHP, and C#!

We want to ask you – how do you ensure that your tests don’t just touch your product code, but are testing things in the right way? What practices do you (and your team) encourage when writing tests? What gets in the way sometimes?

1 Like

I use test-driven development (TDD):

  1. I write the test code before finishing the main code logic.
  2. I ensure the test is able to fail.
  3. I finish the main code logic.
  4. Now, the test passes, but if one day the main code logic changes, I’m confident the test will be able to fail.
  5. I run all the tests related to my new main code logic to check its coverage.
  6. I add the missing tests to complete the coverage, but I start with some wrong expected results in the tests to make them fail.
  7. Then, I provide the right expected results to make the tests pass.
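A minimal pytest sketch of that red/green cycle might look like this (the apply_discount function and its discount rule are invented purely for illustration):

```python
# test_pricing.py - a self-contained sketch of the red/green cycle.
# Steps 1-2: the tests are written first; while apply_discount is missing
# (or deliberately wrong), `pytest` fails - the "red" step.
# Steps 3-4: once the logic below is filled in, the same tests pass - "green".

def apply_discount(total: float) -> float:
    """Hypothetical rule for the sketch: orders of 100 or more get 10% off."""
    return total * 0.9 if total >= 100 else total

def test_discount_applied_at_or_above_threshold():
    assert apply_discount(200.0) == 180.0

def test_no_discount_below_threshold():
    assert apply_discount(50.0) == 50.0
```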
6 Likes

@Puzzling I don’t think TDD per se ensures you are “testing things in the right way”, as @Colin has asked.
Yes it is great, yes it does ensure your code is testable, but that doesn’t mean it is a good test!

To give you a few examples:

  • If you test the implementation of your code rather than the external behaviour (white box testing rather than black box testing), then your test is very brittle. You may decide to change the implementation to make it more efficient, for example, but if the behaviour has not changed then your tests should not fail.
  • You write a sorting algorithm. Your test uses exactly the same algorithm to sort the expected values. There is a bug in the algorithm, but you won’t find it because both the test and the code under test share the same bug - here you should use a different means of determining your expected results than is used in the code under test, to try to expose flaws in the code.
  • Your test checks the “wrong” thing. For example, your code under test produces a list of items and your test checks that the 0th item matches your expectations. Even if the list currently only has one element, unless order is important you should not check the 0th item - check that the expected item is somewhere in the list (see the sketch after this list).
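To make the last two points concrete, here is a hypothetical pytest sketch; sort_scores and find_tags are stand-ins invented for the illustration:

```python
# Stand-in implementations, invented for this sketch so it runs as-is.
def sort_scores(scores):
    return sorted(scores)

def find_tags(text):
    return [word for word in text.split() if word.isalpha()]

def test_sort_scores_with_hand_written_expected_values():
    # The expected list is written out by hand, not produced by re-running
    # the same sorting logic, so a bug shared by test and code can't cancel out.
    assert sort_scores([3, 1, 2]) == [1, 2, 3]

def test_find_tags_checks_membership_not_position():
    tags = find_tags("release notes for v2")
    # Order is not part of the contract, so rather than asserting on tags[0],
    # check that the expected item is present somewhere in the result.
    assert "release" in tags
```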

You can TDD your tests and even reach 100% coverage and the above weaknesses still apply.
Writing good tests is more of an art than a science, and I think it is only by working with your team, reviewing the tests, and discussing their intent and implementation that you can get people thinking carefully about what they are testing, why, and how.

That aside, Sonar can help you with writing better tests, but @Colin we have test detection turned off in all of our projects. The reason is that test code is only covered by about half a dozen rules - making sure that a test case has tests, that a test has assertions, etc. - while all the other rules about writing good quality code are turned off.
This makes no sense to me, because test code needs to be maintained, understood, and refactored just as much as application code - so why should you be allowed to write it any old way while your application code is subject to hundreds of rules?
In some ways the tests are more important, as they serve as “living documentation” and should describe the expected behaviour of the application code - which is a great aid to understanding it.

There are a few rules we would relax for tests, but in general we want them subject to the same quality rules as the code they are testing. It is very unfortunate that we can’t have both these rules and the test-specific rules enabled at the same time.

1 Like

1000%. I tried long and hard to get this in place. I thought a couple of years ago it was finally going to happen, but then, collectively, we said Squirrel! :frowning:

1 Like

In my experience, the issues raised on test code are often dismissed as false positives or irrelevant. This probably happens because the project team has adopted different coding conventions or different trade-offs (performance, brevity, etc.), or because some checks are actually raising false positives.

After making a change to analyze test code in the Sonar SpotBugs plugin, there was some pushback, so I added a flag to make that optional.
For some reason(s), some people are less interested in issues in test code and want to opt-out.

@ganncamp I think the missing piece might be that SonarQube only has project-level quality profiles, but we’d need finer-grained control.
It seems reasonable to have a different quality gate for test code, so maybe SonarQube could add support for a split between main and test quality profiles.

1 Like

I agree @gtoison. I believe you should be able to selectively turn on “normal” rules for tests, while not impacting whether they run for source code. Unfortunately, that’s not a small change, architecturally, and there are always bigger fish to fry. :slightly_frowning_face:

It would be great to have different profiles for test vs app code - or at least the ability to have the test-only rules AND the normal ones applied to test projects.

In practice we generally find that most rules are still “valid” in test scenarios, so we employ different means to turn off undesirable rules, depending on the type of project being analysed.

What we do in Python projects, where all test files start with test_, is to turn off specific normal rules using “ignore issues on multiple criteria”, whereas in C# projects we use .editorconfig settings to turn off specific rules for test folders or files with a Test.cs suffix, plus a couple of other variations.
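For the Python case, the “ignore issues on multiple criteria” setup can be expressed in sonar-project.properties roughly like this (the rule key below is just a placeholder for whichever rules you want to relax on tests):

```properties
# sonar-project.properties - illustrative; substitute the rule keys you actually want to relax
sonar.issue.ignore.multicriteria=e1
# e1: don't raise this rule on files matching the test naming pattern
sonar.issue.ignore.multicriteria.e1.ruleKey=python:S1192
sonar.issue.ignore.multicriteria.e1.resourceKey=**/test_*.py
```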

1 Like