- **ALM used:** GitHub
- **CI system used:** GitHub Actions
- **Scanner command used:** (private details masked)
- **Languages of the repository:** Python, TypeScript
- **SonarCloud project URL:** private repository

**Error observed:**

```python
random.SystemRandom().randint(5, 25)
```

now raises:

> Make sure that using this pseudorandom number generator is safe here.

(The same goes for usages of the `secrets` module in other places of the codebase.)
**Steps to reproduce:**

Scan code containing a similar line.
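For instance, a minimal, self-contained file that triggers the hotspot (the filename is hypothetical):

```python
# repro.py: hypothetical minimal reproduction
import random

# This single call is enough to raise the hotspot, even though
# SystemRandom draws its values from os.urandom
print(random.SystemRandom().randint(5, 25))
```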
**Potential workaround:**

Reviewing the many false-positive hotspots that have appeared.
This rule is a security hotspot. This means that it flags code that may or may not be a security issue depending on the context. It’s perfectly okay to flag these issues as “safe” if you are not at risk. It’s also possible to simply disable the rule, if it’s too noisy in your particular context.
If you’d like to provide some sample code where you think the rule is being too noisy and where we should prevent it from raising issues (for example, by inferring the context), I’d be happy to have a more detailed look.
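For disabling it repo-wide, a minimal sketch of an analysis-parameter exclusion, assuming the standard `sonar.issue.ignore.multicriteria` mechanism also applies to this hotspot rule (`python:S2245` is the PRNG rule key):

```properties
# sonar-project.properties: sketch; assumes multicriteria exclusions
# cover security hotspot rules, which may depend on your setup
sonar.issue.ignore.multicriteria=prng
sonar.issue.ignore.multicriteria.prng.ruleKey=python:S2245
sonar.issue.ignore.multicriteria.prng.resourceKey=**/*.py
```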
This is not a security hotspot: using the `secrets` module or even `random.SystemRandom` is currently Python's recommended way to generate cryptographically secure random numbers, unlike the `random.rand*` methods. If those modules are no longer state of the art, please update the rule's documentation with proper recommendations, and link to a CVE detailing why they are no longer correct.
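To illustrate the distinction the rule is supposed to draw (a minimal sketch; the ranges are arbitrary):

```python
import random
import secrets

# Predictable Mersenne Twister output; flagging this makes sense
weak = random.randint(0, 2**32)

# Both of the following draw from the OS CSPRNG (os.urandom) and are
# the documented choices for security-sensitive values; these are
# the kinds of calls being flagged as false positives
strong_a = secrets.randbelow(2**32)
strong_b = random.SystemRandom().randint(0, 2**32)
```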
We agreed with the rule and migrated our codebase from the `random.rand*` methods to `secrets` or `SystemRandom` months ago.

I think an update to the Sonar scanner around the time of the original post is what made the scan start reporting these false positives.

The example above, `random.SystemRandom().randint(5, 25)`, is the simplest case of the rule getting it wrong: no dereference, no proxy variable…

Even though this only appears in the Security Hotspots section, our quality gate is configured to prevent the introduction of new hotspots, so having to review around 100 false positives to unblock deployments is a real pain.

If you need additional, more complex code samples to add some tests, I'll be glad to dig back into our codebase.
For instance:
```python
import random

def generate_fake_data(session):
    randy = random.SystemRandom()  # <- using a proxy variable, so I only instantiate it once
    for i in range(10000):
        session.add(Object(value=randy.randint(5, 25)))  # <-- here I want to make sure it's understood that it's safe
```
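As mentioned above, equivalent usages of the `secrets` module are flagged the same way; a hypothetical one-liner of that kind:

```python
import secrets

# Hypothetical example: per the report above, calls like this are
# also raised as hotspots, despite secrets being the stdlib's
# dedicated CSPRNG module
api_token = secrets.token_hex(16)
```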