OWASP Benchmark

I was looking at the OWASP Benchmark results for SonarQube, and the most recent version of this appears to be from 2016:


Is there a more recent version of this benchmark?

Thank you


We used to run that internally on a regular basis but stopped fairly recently. The project maintainer is aware of the fact that our results are a bit out of date. I guess he hasn’t gotten around to updating yet.

I’m sure it would help for him to hear from more than just a self-interested party.



Would you provide an XML-format report for the OWASP Benchmark to demonstrate SonarQube’s accuracy rate? It would be helpful for evaluation.

The author would also be pleased.


I was in touch with Dave Wichers in Jan 2019 about updating the part of the OWASP Benchmark related to SonarQube. I provided a lot of information and I was expecting Dave to do the updates … and then no news.
So I guess that SonarSource will have to contribute the changes at some point to show an accurate rate. This was not in our plans for the coming weeks/months, so I have no idea when we will have time to work on this task.

I personally don’t like the way the OWASP Benchmark is organized. That’s why I forked it here, so it’s easier to see whether an expected issue is indeed raised or not. All test cases are sorted into a dedicated expected/notexpected directory, plus a directory corresponding to the type of vulnerability.
While waiting for an accurate rate to be available, I suggest that you analyze https://github.com/agigleux/Benchmark on SonarCloud.io, where the latest versions of our analyzers are installed. This way you will see what SonarQube is able to do on the OWASP Benchmark.

From my own computation, considering only the additional rules provided by SonarQube Developer Edition relying on taint analysis (SQL Injection, Path Traversal Injection, LDAP Injection, Command Injection, XPath Injection and XSS), we are reaching a TP Rate of 74% and a Score of 71.
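For context, the OWASP Benchmark scores a tool with Youden's index: the true-positive rate minus the false-positive rate, both in percentage points. A minimal sketch of that arithmetic, using the figures reported above (74% TP rate, score 71), shows the implied FP rate:

```python
def benchmark_score(tpr_pct: float, fpr_pct: float) -> float:
    """OWASP Benchmark score: true-positive rate minus false-positive rate,
    both expressed in percentage points (Youden's index)."""
    return tpr_pct - fpr_pct

# Figures reported above for SonarQube Developer Edition's taint-analysis rules.
tpr = 74    # true-positive rate (%)
score = 71  # reported Benchmark score

# Rearranging score = tpr - fpr gives the implied false-positive rate.
fpr = tpr - score
print(fpr)  # 3 (percentage points)
```

This is why a score of 71 with a 74% TP rate implies an FP rate of about 3%.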



Thank you for the reply. Does that mean that, based on your computation, SonarQube’s FP rate is less than 4%?

Would you be able to share more detailed results?

Yes. I’m working on a blog/community post with more details.

FYI, more details have been given in this post: Takeaways from building a SAST product, and why OWASP benchmark is not enough.