Interpreting my first run: Is it *really* working?

Must-share information (formatted with Markdown):

  • Which versions are you using (SonarQube, Scanner, Plugin, and any relevant extension)? SonarQube 10.6
  • How is SonarQube deployed (zip, Docker, Helm)? ZIP
  • What are you trying to achieve? A usable environment to scan some Java code
  • What have you tried so far to achieve this? Lots of stuff, and I think it's working

Do not share screenshots of logs – share the text itself (bonus points for being well-formatted)!

Hello again,

After some very helpful advice from @ganncamp (and delays on my end acting on it), I was able to get my hands on what I think I need to successfully use SonarQube to analyze my project code.

For background, I’m not a developer (I’m a sysadmin), but I’ve been put in charge of scanning the team’s code, specifically with SonarQube - something I’ve never used before, so I have a lot to learn.

Today I got a copy of a compiled, Ant-based Java project that seems to be scanning properly, but I want to check with more experienced members here to make sure I’m not getting my hopes up.

I took the files in question, put them in a folder on the desktop of my scanning server, and ran this command in the directory the files live in:

sonar-scanner.bat -D"sonar.projectKey=CompiledTest01" -D"sonar.sources=." -D"sonar.host.url=http://localhost:9000" -D"sonar.token=<key the web frontend generated>" -D"sonar.java.binaries=."

It then cranked on for about 15 minutes before terminating with these words:

12:17:17.577 INFO  CPD Executor CPD calculation finished (done) | time=1206ms
12:17:18.530 INFO  Analysis report generated in 828ms, dir size=26.7 MB
12:17:20.267 INFO  Analysis report compressed in 1752ms, zip size=5.1 MB
12:17:20.455 INFO  Analysis report uploaded in 188ms
12:17:20.455 INFO  ANALYSIS SUCCESSFUL, you can find the results at: http://localhost:9000/dashboard?id=CompiledTest01
12:17:20.455 INFO  Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report
12:17:20.455 INFO  More about the report processing at http://localhost:9000/api/ce/task?id=3dcdfab7-77a3-421e-992f-c35a43ce246a
12:17:20.923 INFO  Analysis total time: 14:38.444 s
12:17:20.923 INFO  SonarScanner Engine completed successfully
12:17:21.472 INFO  EXECUTION SUCCESS
12:17:21.487 INFO  Total time: 14:41.692s

That definitely says EXECUTION SUCCESS, but did it really work? It seemed too simple. Earlier today I had run it once without the sonar.java.binaries parameter just to see it fail (which it did, as expected), but I was surprised that setting it to just . actually worked - an idea I got from the sonar.sources line.

The web console does pull in data, giving me a long list of issues explicitly linked to specific Java files. I guess that’s what I should be seeing?

Questions:

So…did it work?

I did notice that “Security” under “Software Quality” was 0 and greyed out - is scanning for security-type issues a paywalled Enterprise feature, or does the free engine actually do it and my project’s code somehow has 0 security findings? “Responsibility” under “Clean Code” shows the same.

Is exporting all this data into, say, some kind of report we can pass around at standups also an Enterprise-licensed feature?

Considering the cmd output was an absolute deluge, is there a keyword to look for to tell whether some files didn’t actually get scanned? Basically, how can I be sure everything I fed in was fully processed?

Also I noticed these in the cmd output:

12:17:17.296 WARN  Too many duplication references on file <a project .java file> for block at line 1263. Keep only the first 100 references.
12:17:17.296 WARN  Too many duplication references on file <a project .java file> for block at line 1317. Keep only the first 100 references.
12:17:17.561 WARN  Too many duplication groups on file <a project .java file> Keep only the first 100 groups.
12:17:17.561 WARN  Too many duplication references on file <a project .java file> for block at line 22111. Keep only the first 100 references.
12:17:17.561 WARN  Too many duplication references on file <a project .java file> for block at line 21661. Keep only the first 100 references.
12:17:17.561 WARN  Too many duplication references on file <a project .java file> for block at line 21664. Keep only the first 100 references.

Is this something wrong with my inputs, or just something the scanner detected in the code itself that’s probably somewhere in the report I see on the web front end?

Thanks to the community here for helping me get my bearings and, evidently now, my feet wet!

Hi @ME2,

To get an idea of what has been scanned, you can go to the Measures tab, Size section. You’ll see the number of files, functions, and lines of code scanned. Close to that you’ll find duplication information.

You can find which features are in each SonarQube edition on this page: Download | SonarQube

To do reporting with SonarQube Community Edition, you have to build it yourself using the Web API.
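To make the Web API suggestion concrete, here is a minimal sketch of pulling a project's issues from a SonarQube server. The endpoint (api/issues/search) and its componentKeys/ps/p parameters are part of the documented Web API, but the host, project key, and token are placeholders for this thread's setup - treat this as a starting point, not a finished reporting tool.

```python
# Sketch: pull one page of issues for a project from the SonarQube Web API
# (api/issues/search) so they can be dumped into a report.
# Host, project key, and token below are assumptions for this thread's setup.
import base64
import json
import urllib.parse
import urllib.request


def build_issues_url(host, project_key, page=1, page_size=100):
    """Build the api/issues/search URL for one page of a project's issues."""
    query = urllib.parse.urlencode({
        "componentKeys": project_key,
        "ps": page_size,  # page size (the API caps this at 500)
        "p": page,        # page number, 1-based
    })
    return f"{host}/api/issues/search?{query}"


def fetch_issues(host, project_key, token):
    """Fetch the first page of issues. SonarQube token auth is HTTP Basic
    with the token as the username and an empty password."""
    req = urllib.request.Request(build_issues_url(host, project_key))
    auth = base64.b64encode(f"{token}:".encode()).decode()
    req.add_header("Authorization", f"Basic {auth}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["issues"]


# Usage against a live server (not run here):
#   for issue in fetch_issues("http://localhost:9000", "CompiledTest01", "<token>"):
#       print(issue["severity"], issue["component"], issue["message"])
```

From there it is a short step to writing the rows out as CSV for a standup handout.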


Hi,

Then we’ve done our job well. :joy:

Give analysis the source files and the .class files, and that’s really all it wants.

For sonar.sources and sonar.java.binaries, analysis does a recursive search from the directory you give it. So while it’s a little more expensive, in terms of search time, giving it . (i.e. telling it to look at everything in the directory) is effective.

That said, the typical project layout is something along the lines of

project_dir
  src
    [source files in appropriate sub directories]
  output
    [class files in appropriate sub directories]
    [coverage reports &etc]
  [project config files]

So to run analysis, you cd into project_dir, and then the analysis configurations include

sonar.sources=src
sonar.java.binaries=output

It doesn’t really matter, but going forward it might be convenient for you to segregate source files, class files and other project-related files.
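Once a layout like that is settled, the per-run -D flags can live in a sonar-project.properties file in project_dir, so future runs are just sonar-scanner with no arguments. A sketch using the keys from this thread (the property names are standard scanner parameters; the paths assume the layout above):

```properties
# sonar-project.properties -- picked up automatically from the directory
# where sonar-scanner runs
sonar.projectKey=CompiledTest01
sonar.host.url=http://localhost:9000
sonar.sources=src
sonar.java.binaries=output

# Keep the token out of this file: pass it on the command line
# (-Dsonar.token=...) or, with recent scanner versions, via the
# SONAR_TOKEN environment variable.
```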

Yes :smiley:

Community Edition includes some bug and vulnerability rules, but the “good” ones start in Developer Edition($).

And… it’s possible that even though analysis ran with your source and class files, it still didn’t have quite enough data to do a thorough analysis. I don’t suppose you fed the libraries into analysis, did you? That would account for only finding the simplest issues.

Yes. And, SonarQube was really built to be used by the development team as part of their daily routine, not passed around as a report. I urge you to get them in front of the UI.

Ehm… Well, you could turn on debug logging in analysis and get an even bigger deluge. :joy: But actually, I don’t think that logs file-by-file in most cases. Your easiest tack here is to use the UI to make sure everything you expect to find is actually there.
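Alongside eyeballing the UI, one rough cross-check is to compare the file count SonarQube computed (the "files" measure from the documented api/measures/component endpoint) against a local count of .java files. A hedged sketch - the host, project key, and the assumption that every local .java file should appear in the analysis are mine, not from this thread:

```python
# Sketch: a rough "did everything get scanned?" check -- compare the
# 'files' measure SonarQube computed against a local count of .java files.
import urllib.parse
from pathlib import Path


def build_measures_url(host, project_key, metrics="files,ncloc"):
    """Build the api/measures/component URL for the given metric keys."""
    query = urllib.parse.urlencode({
        "component": project_key,
        "metricKeys": metrics,
    })
    return f"{host}/api/measures/component?{query}"


def count_local_java(root):
    """Count .java files under root -- the local side of the comparison."""
    return sum(1 for _ in Path(root).rglob("*.java"))


# Usage: GET build_measures_url(...) with your token (HTTP Basic, token as
# username, empty password), read the 'files' value out of the JSON, and
# compare it against count_local_java(".") from the scanned directory.
```

A mismatch is not automatically a problem (exclusions, unsupported files), but a big gap is worth a look.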

Part of analysis is duplication detection, but there are built-in stops so that analysis doesn’t take hours if there are lots of duplicates. That’s what’s going on here.

 
HTH,
Ann

Thanks, all! My team is pleased that I finally got some traction on this, and we may be moving towards a commercial version.

Going back at this, I’m checking with the developer to see whether all the libraries are included. If so, it sounds like I should just tag on “sonar.java.libraries=.” and let it run again - as @ganncamp mentioned, it takes longer, but I don’t think our project is large enough for that to matter, and it simplifies future runs in case a directory name changes or something.

As for the editions, I do believe they’re open to buying a proper license - but does my experience gained here still carry over? Can I still plug in the CLI commands and go, or is there some other interface I should get ahead on learning?

There does seem to be some degree of change tracking built in - I’m guessing that if I run the same command against the same directory, it’ll latch on to it being the same “project” in SQ and report some kind of delta, but I’m not sure that’s something they care much about (yet).

Thanks again!

Hi,

Ehm… that would make life too easy… :sweat_smile:

From the docs

sonar.java.libraries - Comma-separated paths to files with third-party libraries (JAR or Zip files) used by your project. Wildcards can be used: sonar.java.libraries=path/to/Library.jar,directory/**/*.jar

This parameter is not looking for directories it will search. It’s looking for a list of library files. Years ago, at my previous job I wrote a little Ant routine that would iterate the library directory to get a list of file paths and pass that into analysis, but it’s been literally decades. So I can tell you it’s doable, but I have no memory how. (But this is the part that tying into Maven/Gradle makes so much easier.)
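In the same spirit as that old Ant routine, a small script can expand a library directory into the explicit comma-separated file list the parameter wants (the quoted docs say wildcards also work, so this is just the scripted alternative). A minimal sketch, assuming a Libs directory like the one discussed later in this thread:

```python
# Sketch: expand a directory of libraries into the comma-separated file
# list that sonar.java.libraries expects.
from pathlib import Path


def jar_list(lib_dir):
    """Return a comma-separated, sorted list of every .jar file found
    recursively under lib_dir."""
    return ",".join(sorted(str(p) for p in Path(lib_dir).rglob("*.jar")))


if __name__ == "__main__":
    # Pass the result to the scanner, e.g.:
    #   sonar-scanner.bat ... -D"sonar.java.libraries=<output of jar_list>"
    print(jar_list("Libs"))
```

An explicit list like this also makes it obvious in the scanner log exactly which jars were handed over.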

Yes! Absolutely! You won’t have to change anything about what you’ve already configured, and you can even retain your existing data. Bumping up to a commercial license will simply expand the capabilities.

Some reading:

 
HTH,
Ann

Ann,

Welcome back! Today I had some time to put into this and was able to get the library files from the developer. These came in two groups, since the Java project is deployed to two interoperating servers, each with its own .war file.

What I did was copy the test directory from my first successful run the other week, add the two groups of library files into .\TestTarget2\Libs\srv1 and .\TestTarget2\Libs\srv2, and then run the same command (albeit with a new key, because I created a new project) against that directory without specifying sonar.java.libraries at all, just to see what would shake out. Interestingly, all the .jar files in the two subfolders under \Libs\ were completely ignored, and the results were evidently the same as my first success.

I then created a new project and ran this command instead:

sonar-scanner.bat -D"sonar.projectKey=CompiledTest03" -D"sonar.sources=." -D"sonar.host.url=http://localhost:9000" -D"sonar.token=<key the web frontend generated>" -D"sonar.java.binaries=." -D"sonar.java.libraries=C:\Users\<testingAccount>\Desktop\TestTarget\Libs\**\*.jar"

It ran way faster this time - down from just under 12 minutes on the earlier test (itself down from about 15 on the first successful run) to 1m 58s. I would have assumed that meant some kind of failure, like maybe I had formatted the libraries path incorrectly. However, it does seem to have considered itself a success:

[screenshot of the successful analysis output]

And there’s absolutely a delta here - a massive one.

What’s really interesting to me is that if I go into the “Code” tab within each of those projects, I see the same 3 directories represented between the two.

So did I do the libraries thing right, then? Instead of =…\Libs\**\*.jar, should I have put in =…\Libs\srv1\*.jar,…\Libs\srv2\*.jar?

The third test did at least finish, though suspiciously fast - is some kind of caching going on, given it’s the same fileset, albeit a different “project” in SQ?

Seems like I may be getting the idea right but my lack of expectations leaves me wondering.

Thanks!


Hi,

In talking about how sonar.java.libraries works, I was relying on my dusty memories and my overslept-on-Monday-after-vacay skim of the docs. And now, several hours later, on a fuller reading, I see that what you did was correct and expected to work. :sweat_smile:

Doh!

So yeah, you’re good.

As to why this ran faster, I think you can chalk it up to analysis having the data (from the libs) at its figurative fingertips, rather than having to cast about to try to deeply understand the code.

There is caching, both in the Java analyzer and for PRs, but it wouldn’t kick in cross-project.

There’s a significant jump here in Reliability issues (bugs). IMO it will be interesting for you to check the Rules facet of the Issues page in CompiledTest03 to see which kinds of rules you’re getting hits from now that analysis has a complete picture of the project.

Also…

As I mentioned earlier, the ‘good’ rules start in Developer Edition($). That’s both taint analysis and what we call “advanced bug detection”. You can get a 2w trial for free, so once you’ve got a handle on the findings that are added with libraries, you’ll want to at least take a look at this too, I think.

 
HTH,
Ann

Ann,

Thanks for getting back to me - I’m glad I can actually read and interpret things well, I wasn’t sure just how “deep” I could wildcard myself!

I will probably jump on the free trial in the near future - this is currently in an on-again, off-again phase, and my higher-ups are figuring out who would pay for it (assuming they don’t already have a license floating around in inventory, which is possible). We’re bringing on a new developer to supplement/mostly replace the one I was working with on this, so I’ll need to get him up to speed on where things stand before I can step off to the next phase.

As for the Rules facet, do you mean this?

There’s a delta there, too - 91 rules on Compiled Tests 1 and 2 (no libraries; libraries present but not referenced) and 100 on Compiled Test 3 (libraries passed via the method above). Assuming I’m on the right track, it sounds like this categorizes specific violation types and lists the number of instances of each (our worst offender by far being “Unused assignments should be removed”).

Thanks!

Hi,

Yes, that’s the facet I was talking about.

Unused assignments should be cleaned up, but they’re not what I would put at the top of the urgency list. You can make it more interesting by filtering by type and severity. Those facets should be toward the top of the list.

 
Ann

Ann,

Awesome; sounds like I now have a general bearing and can mess around with things from here readily enough - thanks again for laying it out plainly!
