False-"new" - sonarqube 10.0 local code-tree

I’m running SonarQube on a local directory that from time to time gets completely overwritten with the latest sources (and .class files). A tar file gets unpacked, so technically all files receive a new “ctime” stamp, but almost all of them still have the same “mtime” and content. (This is necessary because the actual build system cannot run the SonarQube scanner.)

With this setup I’m seeing the same symptoms others have already described, except that in my case it has nothing to do with GitHub branches: whenever I update the sources (Java, C++, SQL) and let the scanner run on the updated tree, I get a couple of “new” bugs, smells, debt, etc., but when I examine them, many turn out to be in code entirely unrelated to the modules that were actually updated.

The “New Code” definition is set to “Previous version”…

Am I too far off the beaten path, or is this a problem worth looking further into?

I’d rather not upgrade, unless the problem is explicitly known and fixed in a newer version.

Hi,

What I think you’re saying is not that SonarQube itself gets overwritten, but that you’re performing analysis in a directory where the project gets overwritten because it’s obtained by unpacking an archive.

If that’s correct, then I’m not surprised you’re getting specious “new” issues: analysis relies on SCM data to understand what code is new. You should be running analysis in the checkout directory, after compile but before the tar file is created.

The actual build system can’t run Java? Really?

As a side note, you should consider upgrading regardless of this question. Non-LTS versions are EOL as soon as each subsequent version is released. There will be no patches or fixes for 10.0. You should upgrade to 10.2 at your earliest convenience and plan to keep up with the ~2mo release cycle.

 
HTH,
Ann

Right, the copy of the project’s source tree (on that extra Linux machine capable of running the SonarQube scanner for all our sources) gets updated to the new sources by unpacking a tar file.

The original sources are in CVS, and the CVS server is not even reachable (firewall) from that one extra machine running the scanner.

If the “new” detection is strictly tied to SCM access, then all I can do is tell my folks that they have no choice but to ignore the “new issues” feature completely - well, that’s what I already told them for the time being.

Otherwise, if it can in principle detect which sources changed and which are the same as before, then there might be a bug in that detection, which I’d have some hope of getting fixed eventually.

The actual build system can run Java, but only up to Java 8. (Java 11 would be available, but is not used on that machine; higher versions, such as the Java 17 required by recent versions of SonarQube, can only be dreamt of - there is no Java newer than 11 available on Oracle SPARC Solaris.) Our source also contains C++, which cannot be scanned at all on Solaris - shrug. Therefore we transfer the sources to a Linux machine and have them scanned there.

Hi,

In the absence of SCM integration, analysis does a “best effort” attempt to determine what’s new based, I believe, on file dates. So even that is stymied by your “unpack the archive” methodology.

I think it’s worth noting - in case you decide to move where analysis runs - that while we don’t natively support CVS, there’s an unmaintained (:frowning:) community plugin to provide that integration. No idea whether it still works.

C++ generally needs to be analyzed where it’s built because the data the build-wrapper collects will have the build paths in it. If the paths analysis sees don’t match up to what the build-wrapper collected… :boom:

 
Ann

On unpacking a tar file, the tar utility sets each file’s modification timestamp to the original time stored along with the file in the archive.
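
For what it’s worth, the mtime/ctime difference is easy to verify on the Linux box with a few lines of Java. This is just a sketch; the path is illustrative, and the “unix” attribute view used for ctime is a non-standard OpenJDK/Linux extension:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.FileTime;

    public class StampCheck {
        public static void main(String[] args) throws Exception {
            // Point this at any file from the freshly unpacked tree (illustrative default).
            Path p = Paths.get(args.length > 0 ? args[0] : "src/Main.java");

            // mtime: GNU tar restores it from the archive entry by default.
            FileTime mtime = Files.getLastModifiedTime(p);

            // ctime: updated by the kernel when the file is (re)created on unpack.
            // "unix:ctime" is only available on Unix-like platforms.
            FileTime ctime = (FileTime) Files.getAttribute(p, "unix:ctime");

            System.out.println("mtime = " + mtime + ", ctime = " + ctime);
        }
    }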

But even if a particular file were “changed” (i.e. had a new timestamp), it probably shouldn’t present a random selection of findings in it as “new”. Ideally it would recognize which of the findings of the latest scan were not already found previously (or, alternatively, just call all findings of that file “new”).

Regarding C++… it wasn’t trivial, but I got it working by creating a “compiler database” JSON file that contains all the compiler invocations as they happened on the real production machine. Also, sonar-scanner “sees” a script named “gcc” that returns the relevant settings just like its Solaris original would. Sonar-scanner generally does a good job on the C++ files, except for the few strange things that I’ve been reporting here.
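
For illustration, one entry of that compiler-database file looks roughly like this (paths purely illustrative); as far as I know this is the Clang-style compilation-database format, which the scanner can be pointed at via sonar.cfamily.compile-commands:

    [
      {
        "directory": "/build/prod/module1",
        "command": "gcc -O2 -I/build/prod/include -c src/foo.cpp -o obj/foo.o",
        "file": "src/foo.cpp"
      }
    ]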

I hope this thread is not yet archived…

I noticed something about why some of these issues frequently showed up as “new” despite not being new.

It is, unfortunately, even creepier: there is a particular type of finding - possible NullPointer dereferences - in code that was auto-generated long ago (by some Oracle tool that wraps stored procedures in Java classes).

Each possible NullPointer dereference is itself a true positive, but seemingly by random chance, a new scan will sometimes not find some of them, and some later scan will re-find them - and then present them as new.

So all the analysis about versions, VCS and where the scanner is running seems to be moot - the real question is: why are scans not “stable”? Why can they suddenly lose some of the possible null dereferences, and later suddenly re-find them, without the source changing at all?

Hi,

Nope. Unless you mark it solved, it will stay open indefinitely (or until someone else comes along to resurrect it way past its sell-by date and I close it to cut that off. :sweat_smile:)

Okay, that’s been a bit of a snipe hunt for us. We get occasional reports of this (both internally and externally) but we’ve never been able to track it down because it’s so hard to replicate.

I don’t suppose you could provide a reproducer?

 
Ann

Ok, I’ll keep an eye on it…

I should be able to create a reproducer for the particular type of issue that randomly appears and closes itself, but I can’t promise a reproducer for the disappearance.

Also, I’ll keep an eye on the logs and look for suspicious messages. One (once auto-generated) source contains 18 of those possible null-derefs. Maybe the symptom depends on the number of occurrences of the issue within the source. (Wild guess.)

I recently had a situation where I ran the scan, then changed one single line of one Java source file and re-ran the scan. The result was that, apart from the one issue that I fixed, one of the umpteen null-derefs turned to “CLOSED” in that other (entirely unrelated) source file.

I compared the logs, and I see that the source file in which the null-deref was “fixed” shows up with a message like this: ... DEBUG: Analysis time of ... (1118ms) - could it be that SonarQube skips certain rules once the analysis of a file takes longer than a second?

But then again, the disappearing one was somewhere in the middle of all the findings in that source.

Btw, this is one example of such a null-deref finding, right at the start of a method or of a braced block of code:

  sqlj.runtime.ref.DefaultContext __sJT_cc = getConnectionContext(); if (__sJT_cc==null) sqlj.runtime.error.RuntimeRefErrors.raise_NULL_CONN_CTX();
  sqlj.runtime.ExecutionContext.OracleContext __sJT_ec = ((__sJT_cc.getExecutionContext()==null) ? sqlj.runtime.ExecutionContext.raiseNullExecCtx() : __sJT_cc.getExecutionContext().getOracleContext());

SonarQube cannot know that sqlj.runtime.error.RuntimeRefErrors.raise_NULL_CONN_CTX() will always throw an exception, so it has to assume that control flow continues to the second line, where that “…_cc” variable is then used. The issue itself is not my current concern; my concern is why it disappears and sometimes reappears in a later scan.

The shown block sometimes appears in a nested braced block right at the start of a method, and sometimes in the exception handler within the same method. The one that disappeared in the case at hand was the one in the exception handler.
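
As a starting point for a reproducer, here is a minimal, standalone sketch of that pattern with all names made up and no SQLJ dependencies. In the real generated code the raise method lives in the SQLJ runtime jar, so the analyzer cannot see that it always throws; whether this self-contained version triggers the same “possible NPE” finding may therefore differ:

    public class NullDerefRepro {

        static class Ctx {
            void use() { /* stand-in for getExecutionContext() etc. */ }
        }

        // Stand-in for sqlj.runtime.error.RuntimeRefErrors.raise_NULL_CONN_CTX():
        // it always throws, but nothing in its signature says so.
        static void raiseNullConnCtx() {
            throw new IllegalStateException("null connection context");
        }

        // Stand-in for getConnectionContext(); may return null.
        static Ctx getConnectionContext() {
            return System.getenv("CTX") == null ? null : new Ctx();
        }

        public static void main(String[] args) {
            Ctx cc = getConnectionContext();
            if (cc == null) raiseNullConnCtx();
            // The analyzer may assume execution can reach this line with cc == null,
            // which yields the "possible null pointer dereference" finding.
            cc.use();
        }
    }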

Hi,

No. It’s a good guess, but we only recently (after talking about it for years) excluded files over a certain size limit (30Mb?) from analysis. Generally, if you have the patience to wait for it, we’re not going to be the ones to get in the way of that.

Yes. I understand and agree.

I’m eager to hear if you get any closer to figuring out when issues disappear.

 
Ann