Feature: Python assert should be considered harmful

In Python, assert statements are ignored when running in optimized mode (python -O). This can lead to code misbehaving in production because a check that developers rely on is silently removed.
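To make the behavior concrete, here is a minimal illustration (the function and its check are hypothetical):

```python
def set_discount(price: float, rate: float) -> float:
    # This guard is silently removed when Python runs with -O,
    # so an invalid rate slips through in optimized/production runs.
    assert 0.0 <= rate <= 1.0, "rate must be between 0 and 1"
    return price * (1 - rate)

# python script.py      -> AssertionError for set_discount(100, 5)
# python -O script.py   -> silently returns -400.0
```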

An old thread on the topic but still relevant:

We would love to have a Sonar rule that would flag assert statements so that we can prevent new code from using them.

Of course this is not a universal truth in Python, since assert is the primary validation method with pytest, so the rule might be controversial. It could be disabled by default, or have an option to disable the check in unit test files.
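For context, pytest rewrites bare assert statements in test modules to produce detailed failure messages, which is why plain assert is the idiomatic check in test code; a typical (hypothetical) test looks like this:

```python
# test_example.py
def test_upper():
    # pytest's assertion rewriting turns this into a rich failure
    # report if it ever fails, so plain assert is the norm here.
    assert "sonar".upper() == "SONAR"
```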

Hi @sodul,

Thanks for suggesting this rule. I agree with you that using assert in production can create many problems. However, I did a quick search and saw that this rule would generate quite a lot of noise, even if we exclude test code.

I see two reasons for that:

  • Even projects maintained by some of the most experienced Python developers use assert in production code. Example: mypy.

I think we should wait a bit before implementing this rule. We first need to improve the way we classify test code and refine the code patterns on which we want to raise issues.

I just want to add here a suggestion made by @JolyLoic: we could create a rule detecting when the code in an assert statement has side effects. This would indicate a bug, as running optimized code would always change program behavior.

Example: assert commitTransaction()

We do not yet detect functions with side effects, so we can't do it right now. However, it seems to me that everybody would agree on this rule.
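A minimal sketch of the problem, using a snake_case stand-in for the commitTransaction example above (the function body is purely illustrative):

```python
def commit_transaction() -> bool:
    # Hypothetical side-effecting operation: pretend this writes to a
    # database and returns True on success.
    print("committing...")
    return True

# The side effect only happens while assertions are enabled:
# python app.py     -> "committing..." is printed, transaction happens
# python -O app.py  -> the whole assert, including the call, is skipped
assert commit_transaction()
```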


Hi Nicolas,

I agree that assert is used a lot, and I'm very much aware of pytest behavior, as our test automation team uses it heavily. And that's part of the issue here: our team that develops with pytest is bringing these patterns into other libraries that are shared with production code. This is one of my dislikes of pytest, since it trains developers to use the wrong patterns.

I agree that the rule would be disruptive, but I still think it can be added, considering the following:

  • Sonar has a lot of rules that are enabled but not always relevant, such as flagging that logging or sys.argv should be reviewed for safe use. A finding does not mean there is an actual problem, only that the pattern should be reviewed. There are similar rules for other languages where the goal is simply to put a spotlight on the code.
  • There is always the option to mark a finding as a false positive, just like with any other rule, so in the few cases where an assert actually makes sense the developer can mark it as such.
  • The rule does not need to be active by default, and each project can decide its own level of strictness.

When I tell my colleagues that asserts are ignored in optimized mode they are surprised; many of them do not know this, and having tooling that automatically informs them would be very valuable.
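For reference, the kind of pattern the rule should nudge people toward is an explicit exception, which survives -O (the function and its check are illustrative):

```python
def set_discount(price: float, rate: float) -> float:
    # An explicit raise is never stripped by the interpreter,
    # no matter how Python is invoked, unlike an assert statement.
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)
```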

Neither pylint nor Sonar has a rule for this, and I'm considering writing a quick scraper to block adding assert statements to non-pytest Python code.
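A minimal sketch of what such a check could look like, assuming a simple AST walk and the usual pytest file-naming convention for exclusions (all names and conventions here are my own, not an existing tool):

```python
import ast
import sys
from pathlib import Path

def find_asserts(path: Path) -> list[int]:
    """Return the line numbers of assert statements in a Python file."""
    tree = ast.parse(path.read_text(), filename=str(path))
    return [node.lineno for node in ast.walk(tree) if isinstance(node, ast.Assert)]

def main(paths: list[str]) -> int:
    failed = False
    for name in paths:
        path = Path(name)
        # Assumption: test files follow the usual pytest naming convention.
        if path.name.startswith("test_") or path.name.endswith("_test.py"):
            continue
        for lineno in find_asserts(path):
            print(f"{path}:{lineno}: assert statement in non-test code")
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

Something like this could be wired into a pre-commit hook or a CI step so the check runs before code reaches review.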

I guess this would be a sort of feature request for Sonar, but I do believe that unit test code should have somewhat more relaxed rules. For example, we use private properties for dependency injection; they are intended to be used in unit tests, but of course accessing them is flagged in the tests. Requiring docstrings on each and every unit test is not very valuable either. In short, some rules are actually noise for test code.

Hi Stephane,

Yes, SonarQube/SonarCloud/SonarLint provide such rules. The main problem is that every rule takes time to maintain: bug fixes, migration when we change parsers, etc. There is also a cost in terms of user experience: as we add more and more rules, SonarQube/SonarCloud/SonarLint become more difficult to configure.

This is why we prefer to postpone rules that we can't activate by default and focus instead on consensual, high-value rules (for example, detecting None dereferences). It is only a matter of priority.

If we find a way to make this rule consensual by detecting specific code patterns, we will of course increase its priority.

We try to avoid this as much as possible. In our experience, developers stop using static analysis tools when they see too many false positives. You can learn more about our philosophy towards false positives in this blog post.

SonarQube/SonarCloud/SonarLint were not initially designed to analyze test code. Rules are tuned for main code and will raise many false positives on test code. Usually we recommend configuring where main and test code are located; that way we analyze only the main code.
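For example, a minimal sonar-project.properties that separates main and test sources might look like this (the paths are illustrative):

```
sonar.projectKey=my-project
sonar.sources=src
sonar.tests=tests
```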

That being said, last year we started adding rules that run only on test code. For now they exist only in Java. This is a first attempt to gather feedback on how test code should be analyzed.

If you want test code to be analyzed, you can vote and comment on this thread.

I hope this answers your questions.
