Our configuration is SonarQube 6.7.1 and Postgres 9.5 (on a virtual server). So far everything has worked quite well, but lately we have observed the following phenomenon; here is an example:
Somehow, in one analysis, some of the code/issues pass, but in the next analysis the previously “passed” issues reappear as new issues. This of course causes the quality gate to fail.
This is a bit annoying, since there have been no code changes in those passed-then-failed code blocks, and it causes false quality gate failures. So far we have worked around it by resetting the baseline, but we would like to fix the root cause.
Are there any knowledge base articles or hints on where to look for troubleshooting, or is this a bug? Do you know of any other cases like this? If you have found a solution, I am very keen to hear about it; thanks in advance.
I can only guess that you made some configuration change on your side (exclusions, the value of sonar.sources, what got checked out, …) that was then reverted.
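One way to check for that, a sketch assuming you run the standalone sonar-scanner CLI and keep a log from each run (the log and output file names here are placeholders), is to diff the effective file-selection settings between two analyses:

```bash
# Re-run the analysis in debug mode and capture the log.
sonar-scanner -X > scan-run2.log 2>&1

# The debug log should echo the resolved file-selection properties;
# pull them out and diff against a log kept from the previous run.
grep -E "sonar\.(sources|exclusions|inclusions)" scan-run2.log > settings-run2.txt
diff settings-run1.txt settings-run2.txt
```

Any difference in that diff would mean the two analyses did not look at the same set of files, which is exactly the situation that closes issues in one run and recreates them in the next.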
Starting in 7.4 (E.T.A. late this month) we’ll begin reopening those closed issues, rather than creating new ones.
Out of curiosity, what are your quality gate conditions? From the screenshot, it looks like they are on the overall numbers of Bugs, Code Smells, and Vulnerabilities?
As far as I know (and I should be the only administrator), we have not made any changes to the SonarQube settings; the only changes are on the analyzed code side. But on these occasions there have been no changes to the code blocks where the SQ analysis first forgets and then reinvents the issues.
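In case it helps the troubleshooting, the history of one affected issue can be pulled from the web API. This is only a sketch; the server URL, token variable, issue key, project key, and date are placeholders:

```bash
# History of one affected issue (copy the real key from the issue's
# permalink in the UI):
curl -u "$SONAR_TOKEN:" \
  "https://sonarqube.example.com/api/issues/changelog?issue=AWexampleIssueKey"

# Issues the last analysis reported as new, filtered by creation date:
curl -u "$SONAR_TOKEN:" \
  "https://sonarqube.example.com/api/issues/search?componentKeys=my-project&createdAfter=2018-10-01"
```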
The quality gate conditions are simple: no new issues in any category after the baseline (i.e., the previous version).
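For reference, the gate conditions and the last gate result can also be dumped from the web API; again a sketch, with the gate name, server URL, and project key as placeholders:

```bash
# List the conditions of the quality gate:
curl -u "$SONAR_TOKEN:" \
  "https://sonarqube.example.com/api/qualitygates/show?name=SonarQube%20way"

# See exactly which condition failed for the project's last analysis:
curl -u "$SONAR_TOKEN:" \
  "https://sonarqube.example.com/api/qualitygates/project_status?projectKey=my-project"
```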
Since the server is running on VMware, I wonder whether occasions of high load might somehow influence the interaction between SonarQube and Postgres. The SQ server is on our Jenkins server. A mysterious thing, in any case.
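One thing still worth checking might be the server logs around the analysis times; a minimal sketch, assuming a default zip installation (adjust SONARQUBE_HOME to your setup):

```bash
# Failures in background (Compute Engine) tasks, which perform the issue
# tracking against the previous analysis, would surface in ce.log:
grep -iE "error|exception" "$SONARQUBE_HOME/logs/ce.log"
grep -iE "error|exception" "$SONARQUBE_HOME/logs/web.log"
```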