All About Cache

Must-share information (formatted with Markdown):

  • which versions are you using (SonarQube, Scanner, Plugin, and any relevant extension)
    • SonarQube 9.9.1
  • how is SonarQube deployed: zip, Docker, Helm
    • Docker
  • what are you trying to achieve
    • improve analysis speed and make sure the space allocated to the cache does not fill up

I want to understand everything there is to know about SonarQube and its cache. My two main questions are: how can I improve analysis speed using the cache, and how can I make sure the space allocated to the cache does not fill up?

Firstly, when I run a scan, the output at the end looks something like this:
1st Jenkins job, which runs the C# plugins:

2023-08-16 10:55:33,791	[INFO]	INFO: EXECUTION SUCCESS
2023-08-16 10:55:33,791	[INFO]	INFO: ------------------------------------------------------------------------
2023-08-16 10:55:33,791	[INFO]	INFO: Total time: 21.166s
2023-08-16 10:55:33,922	[INFO]	INFO: Final Memory: 40M/144M

2nd Jenkins job, which runs the Python plugins:

2023-08-17 00:52:17,558	[STDOUT]	(3535566) 00:52:17.558 INFO: EXECUTION SUCCESS
2023-08-17 00:52:17,558	[STDOUT]	(3535566) 00:52:17.558 INFO: ------------------------------------------------------------------------
2023-08-17 00:52:17,558	[STDOUT]	(3535566) 00:52:17.558 INFO: Total time: 21.650s
2023-08-17 00:52:17,661	[STDOUT]	(3535566) 00:52:17.661 INFO: Final Memory: 62M/214M

These two jobs run on two different build servers. What confuses me is why the Final Memory values differ between the two scans. Is this set by the SonarQube server, by the build server, or by a third party?

Also, whenever I run the scans, the Final Memory is always 40M and 62M respectively, so my next question is: what determines this value, and how do you increase or decrease it?

Lastly, I want to know why the memory gets capped at 144M and 214M respectively. Can this value be increased or decreased?

Thanks

These numbers just indicate how much of the JVM heap was used (the first figure) and how much heap the underlying Java runtime had allocated (the second figure). How much memory is used depends entirely on what is being analyzed.

There’s nothing you can do to meaningfully affect these values, except for setting the environment variable SONAR_SCANNER_OPTS (for example, SONAR_SCANNER_OPTS=-Xms2G raises the starting size of the heap to 2GB):

11:04:55.248 INFO: Final Memory: 4M/2048M
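
As a minimal sketch of how that might look in a Jenkins shell step (the project key below is a placeholder, not from this thread):

```bash
# Raise the scanner JVM's initial and maximum heap before invoking the scanner.
# SONAR_SCANNER_OPTS is the standard way to pass JVM options to the
# SonarScanner CLI; -Xms sets the initial heap size, -Xmx the maximum.
export SONAR_SCANNER_OPTS="-Xms2G -Xmx2G"

# "my-project" is a hypothetical project key used only for illustration.
sonar-scanner -Dsonar.projectKey=my-project
```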

What has you concerned about these values?

I see, thank you for the insight. The reason I asked these questions is that I want to know whether there is a way to increase the speed of analysis using the cache, and if so, how someone would go about implementing that. I also wanted to know whether it was possible to run out of memory, but you have reassured me that it is not, as long as you raise the size of the heap.

So my final question is: how do I increase the speed of analysis using the cache?

I have the impression that you’ve chosen a solution in search of a problem. Are you facing any concrete issues with analysis performance / memory usage? 20 seconds is pretty good for any project.

For what it’s worth, in the context of Pull Request Analysis (Developer Edition +), SonarQube already uses a cache to enhance performance, but it has nothing to do with memory usage.
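
If you ever need to toggle that cache explicitly, a sketch of the relevant analysis parameter, assuming the sonar.analysisCache.enabled property documented for recent SonarQube versions (it defaults to on where the edition supports it):

```bash
# Explicitly toggling the server-side analysis cache for one scan.
# sonar.analysisCache.enabled is assumed here from the SonarQube analysis
# parameter documentation; it defaults to true on supported editions.
sonar-scanner -Dsonar.analysisCache.enabled=false   # disable for this run
```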

No, I am not facing any issues; I just wanted to investigate whether there is a possibility of using the cache to increase the speed of analysis.

Does this mean there’s nothing I can do from my end to increase performance even more, say to get it under 20s? If not, that’s completely fine; as I said, I just wanted to investigate the matter further. However, if there are things I can do from my end to increase analysis speed, could you kindly share them? Thanks
