About `SonarSource/sonarcloud-github-c-cpp` action

This action seems quite inefficient and hard to work around, especially since much of the testing and code coverage can already be processed in other workflows.

Instead it would be good to document more modern approaches using CMake:

  • CMake guarantees a clean configure/build/test every time, so the build-wrapper is unnecessary; using `compile_commands.json` is sufficient
  • `compile_commands.json` can be enabled in a CMake preset via `CMAKE_EXPORT_COMPILE_COMMANDS`
  • Use upload-artifact/download-artifact to retrieve and combine the previous test and coverage runs
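As a minimal sketch of the second point, a preset can set the cache variable so every configure step emits the compilation database (the preset name `ci` is illustrative):

```json
{
  "version": 6,
  "configurePresets": [
    {
      "name": "ci",
      "generator": "Ninja",
      "binaryDir": "${sourceDir}/build",
      "cacheVariables": {
        "CMAKE_EXPORT_COMPILE_COMMANDS": "ON"
      }
    }
  ]
}
```

Running `cmake --preset ci` then produces `build/compile_commands.json`, which can be handed to the scanner through `sonar.cfamily.compile-commands`.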

Additional unclear settings:

  • It is often useful to combine analyses of the project across the different OSes used in CI. It is unclear how one would set that up
  • For coverage, it is also desirable to split coverage between unit/functional/integration tests, flagged accordingly

Currently the main issue with sonarsource/sonarcloud-github-action is that it runs in a Docker environment, but if the volume is mounted properly, it can execute just as well.

Hey @LecrisUT

Keep in mind that using compilation commands is not a replacement for the build-wrapper in our eyes. Many users want (and need) to continue using the build wrapper.

Where do you feel the documentation is lacking? We do have a lot of examples for the GitHub action using the compilation database.

Is this an issue with the action or how it’s displayed in SonarCloud?

The issue is that as we navigate from the main action, we are directed to use the C/C++ action which is where the documentation is lacking.

Firstly, the naming is misleading: the convention for such actions is `setup-sonarcloud`, similar to `setup-python`, `setup-github-cli`, etc.

The next issue is that the wrapper obscures a black-box command. According to what is documented, its only purpose is to provide a clean build environment, but in the context of CMake + GitHub workflows that is meaningless, since the environment is always clean. If it performs any additional steps, those should be clarified and documented.

Thirdly, the wrapper restricts the use of actions like lukka/run-cmake, which abstracts the calling of cmake. I am pushing to have that action also set up parallelization, which is otherwise nontrivial cross-platform.
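For context, a typical preset-driven invocation of that action looks roughly like the following (input names follow run-cmake v10 and may differ across versions; the preset names are illustrative):

```yaml
# Sketch: configure/build via lukka/run-cmake with CMake presets.
# Wrapping this inside build-wrapper is what the action currently prevents.
- uses: lukka/get-cmake@latest
- uses: lukka/run-cmake@v10
  with:
    configurePreset: ci
    buildPreset: ci
```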

Is this an issue with the action or how it’s displayed in SonarCloud?

The issue is with SonarCloud itself and/or its documentation. Codecov allows defining flags to mark coverage runs, so you can distinguish where the coverage comes from; the equivalent functionality in SonarCloud is unknown. With regard to the action, it is unclear how to split the static/dynamic analysis from the coverage so that only the coverage upload runs with different flags, e.g. in this workflow

Hello @LecrisUT

Aligning the names also allows for better visibility. It is a trade-off. I will report the suggestion to the teams.

This action aims to avoid boilerplate when using the Build Wrapper to configure C and C++ analysis. It can also be used with a compilation database, and it is still our recommended configuration. As you know, C and C++ tooling is very fragmented; multiplying documentation and actions for every combination is not something we try to do, even though we provide many examples with such combinations. Still, I acknowledge that CMake is the most widely used build system. However, C and C++ CIs also carry a wide variety, and even if you use CMake for your build, there are many ways to do it. We have tried to come up with an action that analyzes the code while accounting for this variety, and the current action is the best that could cope with it. About the documentation, can you please tell me what you feel is missing after reading the pointed-to SonarCloud C++ analysis documentation?

This is a brilliant example of the variety of use cases I mentioned earlier.

Please consider this a separate topic unrelated to the SonarCloud C/C++ GitHub action. We are aware of this limitation and of the constraint it puts on CI pipelines. The topic is under active consideration right now. Thanks for your patience. Feel free to follow the thread I mentioned.

About the documentation, can you please tell me what you feel is missing after reading the pointed-to SonarCloud C++ analysis documentation?

I did mention some points in another thread but:

  • Usage of `sonar.projectVersion` is confusing. From another thread, it seems you should not change it for PRs, tag pushes, etc.
  • The build wrapper is still a black box, and it is unclear how it differs from the compilation database. The `Reasons to use ...` section should be a table showing what is processed by each method, e.g. environment variables, defines, CMake configurations, etc.
  • It is unclear whether the wrapper works at the level of the compiler (gcc), the generator (ninja), and/or the build system (cmake)
  • If metadata is missing, how can we add it, e.g. if we want the static analyzer to report on various OSes, toolchains, build variants, etc.?
  • How to decouple testing/code coverage so that in the GitHub workflow we can better schedule and re-use artifacts
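To illustrate the last two points, here is roughly the configuration one can express today (the project key is a placeholder; `sonar.cfamily.compile-commands` and generic-coverage import via `sonar.coverageReportPaths` are documented properties, but there appears to be no per-flag split analogous to Codecov flags, which is exactly the open question):

```properties
# Illustrative sonar-project.properties
sonar.projectKey=my_org_my_project
sonar.cfamily.compile-commands=build/compile_commands.json
# Coverage converted to the generic XML format; reports can be listed
# together, but cannot be flagged as unit/functional/integration runs.
sonar.coverageReportPaths=coverage-unit.xml,coverage-integration.xml
```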

Thanks for sharing that.

I will see with our documentation team if this can be documented.

We try to keep our documentation light on technicalities; its purpose is, and should remain, functional. For more advanced and rare questions, we are thinking about how to shape things; such material is currently spread across the community forum. We will announce if something more usable comes out of it.

It is probably already on the community forum, but I can restate it here. The build-wrapper intercepts the entire process tree spawned by the passed build command. It works by shared-library injection; as a result, it does not work with statically linked tools (notably some compilers). The build-wrapper recognizes supported compiler calls among all the intercepted command lines and outputs this information. The analyzer then uses it to parse your C and C++ code exactly as your compiler does.
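To make the recognition step concrete, here is a toy sketch (not the build-wrapper's actual code; the compiler list and matching are illustrative) of filtering supported compiler invocations out of an intercepted command stream:

```python
# Toy sketch: keep only command lines whose executable is a known compiler.
RECOGNIZED = {"gcc", "g++", "clang", "clang++", "cl"}

def compiler_calls(commands: list[list[str]]) -> list[list[str]]:
    """Return only the intercepted command lines that invoke a compiler."""
    picked = []
    for argv in commands:
        exe = argv[0].rsplit("/", 1)[-1]  # strip any directory prefix
        if exe in RECOGNIZED:
            picked.append(argv)
    return picked

intercepted = [
    ["/usr/bin/make", "-j8"],
    ["/usr/bin/g++", "-c", "foo.cpp", "-o", "foo.o"],
    ["/usr/bin/ar", "rcs", "libfoo.a", "foo.o"],
]
print(compiler_calls(intercepted))  # only the g++ call survives
```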

I am not sure I understand your question very precisely. The only way I can think of to attach metadata to an analysis is indeed through code variant. If you have something more specific in mind with use cases, you are very welcome to open a new thread about it. We will be glad to discuss it.

Thanks for reiterating this on the list for completeness. I hope my previous answer was clear.