This rule is a security hotspot. This means that it flags code that may or may not be a security issue depending on the context. It’s perfectly okay to flag these issues as “safe” if you are not at risk. It’s also possible to simply disable the rule, if it’s too noisy in your particular context.
If you’d like to provide some sample code where you think the rule is being too noisy and where we should prevent it from raising issues (for example, by inferring the context), I’d be happy to have a more detailed look.
This is not a security hotspot: using the secrets module, or even random.SystemRandom, is currently Python's recommended way to generate cryptographically secure random numbers, unlike random.rand*. If those modules are no longer state of the art, please update the rule's documentation with proper recommendations and link to a CVE detailing why they are no longer correct.
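For reference, a minimal sketch of what that migration looks like (illustrative values only; both secrets and random.SystemRandom draw from the operating system's CSPRNG, so neither should be flagged):

```python
import random
import secrets

# Cryptographically secure alternatives to the random.rand* family:
token = secrets.token_hex(16)            # 32-char hex string, e.g. for session tokens
pick = secrets.choice(["a", "b", "c"])   # secure choice from a sequence

# random.SystemRandom also reads from os.urandom and exposes the
# familiar random.Random API, so it is a drop-in replacement:
sysrand = random.SystemRandom()
value = sysrand.randint(5, 25)           # inclusive bounds, like random.randint
assert 5 <= value <= 25
```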
We agreed with the rule and, months ago, migrated our codebase from the random.rand* methods to secrets or SystemRandom.
I think that around the time the original post was written, a SonarScanner update caused the scan to start reporting false positives.
The example above, random.SystemRandom().randint(5, 25), is the simplest case where the rule gets it wrong: no dereference, no proxy variable…
Even if it’s just in the Security Hotspots section, we have a quality gate set to prevent the introduction of new hotspots, so having to review around 100 false positives to unblock deployments is really a pain.
If you need additional, more complex code samples to add tests, I’ll be glad to dig back into our codebase.
import random

randy = random.SystemRandom()  # <- using a proxy variable because I need to instantiate only once
for i in range(10000):
    session.add(Object(value=randy.randint(5, 25)))  # <-- here I want to make sure it's understood that it's safe