How can I rename a directory without invalidating the CFamily C++ cache?

I am running SonarQube Developer Edition Version 8.1 (build 31237) with CFamily 6.8.0 (build 16475), analyzing a large C++ project on Jenkins. To speed up scanning, I’m setting sonar.cfamily.cache.enabled=true. This works fine as long as the analysis always runs in the same job.

In my particular case, branches are built by a different Jenkins job (as the pipeline is significantly different). To benefit from the nightly builds and scans of master, I am copying the master branch’s cache into the branch job. However, it never gets any hits, causing significant overhead in development due to the 30+ minute scan/analysis time (with a warm cache it’d be only 1-2 minutes).
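For illustration, the copy step is roughly the sketch below (paths are placeholders, and it assumes the cache directory is the one configured via sonar.cfamily.cache.path):

```python
import shutil
from pathlib import Path

# Placeholder locations; adjust to your Jenkins layout.
MASTER_CACHE = Path("/var/cache/sonar/master-cfamily-cache")   # written by the nightly master job
BRANCH_CACHE = Path("/var/cache/sonar/branch-cfamily-cache")   # passed to the branch job via sonar.cfamily.cache.path

def seed_branch_cache() -> None:
    """Copy the master job's CFamily cache into the branch job before scanning."""
    if BRANCH_CACHE.exists():
        shutil.rmtree(BRANCH_CACHE)                 # start from a clean copy
    shutil.copytree(MASTER_CACHE, BRANCH_CACHE)

if __name__ == "__main__":
    seed_branch_cache()
```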

This is because the cache keys entries on absolute file names, instead of either an md5sum of the file contents (the most sensible solution imho) or at least a path relative to sonar.projectBaseDir.
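To make the difference concrete, here is a small sketch of the two keying schemes (not the analyzer’s actual implementation, just an illustration of why a path-based key breaks the moment the checkout directory changes, while a content-based key survives it):

```python
import hashlib
from pathlib import Path

def path_based_key(source_file: Path) -> str:
    """Key the cache entry on the absolute file name (breaks when the checkout moves)."""
    return hashlib.sha1(str(source_file.resolve()).encode()).hexdigest()

def content_based_key(source_file: Path) -> str:
    """Key the cache entry on the file contents (stable across checkout locations)."""
    return hashlib.sha1(source_file.read_bytes()).hexdigest()

# The same file checked out into two different workspaces:
#   a = Path("/jenkins/workspace/master/src/foo.cpp")
#   b = Path("/jenkins/workspace/pr-123/src/foo.cpp")
# path_based_key(a)    != path_based_key(b)     -> zero cache hits after a move
# content_based_key(a) == content_based_key(b)  -> hits regardless of checkout path
```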

To reproduce:

  1. clone a C++ project
  2. compile with build-wrapper
  3. sonar-scanner -Dsonar.cfamily.threads=10 -Dsonar.cfamily.cache.enabled=true -Dsonar.pullrequest.key="TEST" -Dsonar.pullrequest.branch="TEST" -Dsonar.pullrequest.base="master"

The first execution builds up the cache. Doing exactly the same again (checking out to the exact same location) gives 100% cache hits and a massive speed improvement. Repeating it a third time, but this time checking the exact same code out to a different folder, gives zero cache hits :-1:

Ideally the CFamily plugin would improve its cache handling significantly, perhaps along the lines of what ccache does. That would allow concurrent access to the cache, remove the overhead of copying caches from master to branches, and support keeping different versions of a file in the cache.

Short of that, is there any parameter I may have missed that would allow me to use the cache when the source code is in a different location? Or any recommendations on how to “fake” that? TIA!!

P.S.: Symlinking the workspace does not work, as the file names are resolved to their actual location. Using bindfs works locally, but is “tricky” in Docker as it requires elevated privileges (https://github.com/docker/for-linux/issues/321).
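For anyone wanting to try the bindfs route anyway, this is roughly what it looks like (paths are placeholders; bindfs needs FUSE, which in Docker typically means extra privileges such as --device /dev/fuse and --cap-add SYS_ADMIN):

```python
import subprocess
from pathlib import Path

# Placeholder paths: expose the real checkout under the "canonical" path the
# cache was built with. Unlike a symlink, a bindfs mount is not resolved away.
REAL_CHECKOUT = Path("/jenkins/workspace/pr-job/src")
CANONICAL_PATH = Path("/jenkins/workspace/master-job/src")

def mount_canonical_view() -> None:
    CANONICAL_PATH.mkdir(parents=True, exist_ok=True)
    subprocess.run(["bindfs", str(REAL_CHECKOUT), str(CANONICAL_PATH)], check=True)

def unmount_canonical_view() -> None:
    subprocess.run(["fusermount", "-u", str(CANONICAL_PATH)], check=True)
```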

Hi @linisgre,

The entire analyzer, starting from the build-wrapper-dump.json file, relies on absolute paths. It is currently not possible to relativize paths, and there are no hidden parameters to do that.

I guess from your statement that your Jenkins workspace paths differ between jobs. To solve your issue, you should try to build & analyze on a stable path local to the Jenkins slave (e.g. a /tmp/path folder is safe if /tmp/path is local to the slave and only one job runs on the slave at a time).
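For illustration, a rough sketch of that setup (the stable path, build command, and scanner properties are placeholders; the point is only that every build and analysis sees the same absolute paths):

```python
import subprocess
from pathlib import Path

# Placeholder layout: WORKSPACE is the per-build Jenkins workspace, STABLE_DIR is
# a fixed path on the agent so that every analysis sees identical absolute paths.
WORKSPACE = Path("/home/jenkins/workspace/my-branch-job")   # varies per job/build
STABLE_DIR = Path("/tmp/myproject-analysis")                # must stay the same across runs

def analyze_on_stable_path() -> None:
    # Mirror the checkout onto the stable path (rsync keeps the copy incremental).
    STABLE_DIR.mkdir(parents=True, exist_ok=True)
    subprocess.run(["rsync", "-a", "--delete", f"{WORKSPACE}/", f"{STABLE_DIR}/"], check=True)
    # Build under the build wrapper and scan from the stable path, so that
    # build-wrapper-dump.json and the cache only ever contain stable absolute paths.
    subprocess.run(["build-wrapper-linux-x86-64", "--out-dir", "bw-output", "make", "-j"],
                   cwd=STABLE_DIR, check=True)
    subprocess.run(["sonar-scanner", "-Dsonar.cfamily.cache.enabled=true"],
                   cwd=STABLE_DIR, check=True)
```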

it is currently not possible to relativize paths

Alright, I was afraid of that. I was hoping for some trick to fake it. Would you consider this a feature request then? Take a look at, for example, how ccache implements caching, which would be a good fit for CFamily as well and would make so many people’s lives so much easier. It would allow sharing caches between Jenkins agents, as well as between branches. I get dizzy thinking of the time and energy saved globally by not having to re-scan files needlessly or copy huge cache directories around.

guess from your statement that your jenkins workspace paths are different between jobs

Of course, that’s the way Jenkins works :slight_smile: And with good reason. There are benefits to keeping your workspace between builds (for example, not having to re-checkout massive repositories), and of course to being able to run multiple jobs on the same slave agent.

to build&analyze on a stable path local to the jenkins slave

That’s indeed what I am doing now. It works, albeit with a lot of concessions in terms of pipeline complexity and the performance of other stages (e.g. to reduce my scan time from 30 to 2 minutes, I have to live with an increase in my setup/checkout time from 10 seconds to 5 minutes - overall still better, but it could be even faster …! :wink: )

