Not-yet-user question: feature evaluation for C language analysis

I’m in the process of evaluating static code analysis tools. I initially asked the sales representatives directly and expected answers from them, but they told me to ask my questions here.
So here I am with my first batch of questions. Obviously I can’t indicate a software revision yet, and the questions concern both SonarQube and SonarLint.
The use case is to analyse C code.

- SonarLint doesn’t seem to support C language analysis in VS Code; is that correct?
If so, is there a plan to support it in the future? Can it be run from the command line so it can be invoked from an editor, and does it have a (GCC-compatible or similar) output format so that a typical editor can jump to the code by double-clicking in the output window?
- Is there anything that helps with ISO 26262 or IEC 61508 (like tool qualification packages)?
- Is SonarQube/SonarLint able to show software metrics like cyclomatic complexity, LOC, etc.?
- Is it possible to define proprietary stylistic rules (indentation level, naming conventions, bracket placement, etc.) in order to build a rule set, for example like BARR-C:2018 (see https://barrgroup.com/embedded-systems/books/embedded-c-coding-standard)?
- Does the analysis also include data and control flow analysis? Out-of-bounds checks, etc.?

Thanks in advance for your answers

Hi @daniel_hoenig

I can answer the SonarLint questions.

Correct. Our C/C++ analyzer needs deep integration with the compiler (to collect compiler flags, includes, …), and this is not done yet for VS Code.

We are not sure it is doable on our own, without a collaboration with the maintainers of the VS Code C language server, but we would like to give it a try this year.

There is no command line interface; we have treated this use case as very low priority. We think it is simpler for a developer to open a PR, let the analysis run in a CI pipeline, and browse the results in SonarQube.

Regards,


Hello @daniel_hoenig,

No. We have some rules aligned with IEC 61508, such as “Functions should have a single entry and a single exit point only”, as well as complexity-checking rules.

Yes, there are rules that are triggered when you exceed the complexity limits (which can be set by the user).

Yes, we have rules for naming conventions.
There are some stylistic rules like line length, trailing comments, braces after if…, but we don’t go into the details of indentation and bracket placement.

Yes, there are rules based on control flow. For example, one rule checks for out-of-bounds access.

For more details about the rules we cover, I recommend going over the list at https://rules.sonarsource.com.

Thanks,

@Julien_HENRY @Abbas
Thank you very much for your answers; they are very helpful.

I have two more questions:

Does this work across function/file boundaries as well?

That’s an interesting statement, as the workflow I have used throughout my professional life suggests otherwise:
A local analysis capability with an integrated analysis target allows a workflow consisting of “modify code -> build” cycles with a final commit/push. That is a very small footprint.

When there is only a server-based analysis capability, you have to go through a cycle consisting of “modify code -> commit -> push -> trigger server analysis run”.
When the code is warning-free, everything’s good; however, I have experienced cases (more the rule than the exception) where the issue was complicated enough that it could not be fixed in one round. Doing this cycle more than once is cumbersome (even with a local build; server-based, it becomes annoying).

So that is probably why you created SonarLint. However, I wonder (out of curiosity and the urge to learn) why the workflow you described (without local analysis) seems to work for many of you, but not for the teams I’ve worked in.

Best

Daniel

@daniel_hoenig

Currently, its limit is the translation unit. It works across functions and files as long as they end up in the same translation unit.

Thanks so far for the answers. I’d like to know about the license model.

So the manual says that the LOC of the largest branch of each project is summed up to calculate the effective LOC count.

Consider the following use case:
There are several projects with <10k LOC of application code each that share library code of ~25k LOC.
The git submodule feature is used to check out the library as source code beneath the actual project, so it lives in the project context and may appear to the Sonar license LOC counter as an integral part of it.

Does SonarQube (or SonarScanner) recognize that the library sources are basically dupes across the projects and skip them in the calculation?

In general, I would like to have libraries statically analysed in the project context too, but if this counts towards the license-relevant LOC, I would obviously like to exclude them.
Exclusion of files/directories is possible as per the manual; however, is this just a visibility filter, or does this setting also hide the code from the license-relevant LOC counting mechanism?
If the former, how can the library code be excluded in a license-relevant way?

Last question: in case library LOC is always fully counted AND there is an exclude feature: what happens if, by mistake, SonarScanner is run without the libraries excluded? Is the LOC count increased irreversibly?

No

Excluded files are not analyzed at all; they do not count toward the license threshold.
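For reference, exclusions can be set per project via the `sonar.exclusions` analysis property, e.g. in `sonar-project.properties` (the project key and the `lib/` path are illustrative values for the submodule layout described above):

```
# sonar-project.properties
sonar.projectKey=my-embedded-app
sonar.sources=.
# Skip the shared library submodule checked out under lib/
sonar.exclusions=lib/**
```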

AFAIR, if analyzing a project would exceed the license threshold, the analysis report is simply ignored on the SonarQube side. And the license check is based on the current LOC of the last analysis, so if you analyze too much code by mistake, you can simply fix that by adding exclusions and running a new analysis.