What's the SonarQube DC sizing model by LoC?


(Fabio) #1


At this link there’s a SonarQube DC sizing proposal based on 200M issues in an AWS environment.

  • Search Node made of [Amazon EC2 m4.2xlarge]: 8 vCPUs, 32GB RAM - 16GB allocated to Elasticsearch
  • App Node made of [Amazon EC2 m4.xlarge]: 4 vCPUs, 16GB RAM

Since the DC edition is priced on Lines of Code (LoC), how can I correlate this sizing with LoC?

What is the assumed model linking lines of code and issues, so that a correct sizing can be planned by LoC?

At the same link you say the proposed sizing should be taken as the minimum recommended size for a DC installation.
How does this fit with a DC license for 100M LoC?



(Nicolas Bontoux) #2

Hey Fabio,

There isn’t any such model. And I would tend to agree that in that respect the documentation piece you pointed out is more of an indicator that can be compared with an existing SonarQube setup, rather than an absolute figure that you can relate to the codebase itself.

Essentially that’s because, independently of any benchmark/setup consideration, the number of issues is by no means determined only by the amount of LoC. Obviously the more LoC the more potential issues, but then one has to take into account many other factors, ranging from the Quality Profile configuration to the actual quality of the code analyzed.

All things considered, I wouldn’t try to extrapolate this data further. It can serve as a data point once you have a running instance, but it is by no means a substitute for setting up a clean monitoring infrastructure, which will give you the only true indicators of whether your setup is sufficiently sized in terms of hardware resources.
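
As an illustration of the kind of monitoring check meant here: SonarQube exposes cluster health via its Web API (GET api/system/health, admin credentials required). A minimal sketch of interpreting such a response could look like the following; note that the sample payload below is made up for illustration, only its general shape follows the documented response format:

```python
import json

# Sample payload in the general shape returned by SonarQube's
# GET api/system/health endpoint (values are made up for illustration).
sample = json.loads("""
{
  "health": "YELLOW",
  "causes": [{"message": "Elasticsearch status is YELLOW"}],
  "nodes": [
    {"name": "app-1",    "type": "APPLICATION", "health": "GREEN",  "causes": []},
    {"name": "search-1", "type": "SEARCH",      "health": "YELLOW",
     "causes": [{"message": "Low free disk space"}]}
  ]
}
""")

def unhealthy_nodes(payload):
    """Return (name, health, messages) for every node not reporting GREEN."""
    return [
        (n["name"], n["health"], [c["message"] for c in n.get("causes", [])])
        for n in payload.get("nodes", [])
        if n["health"] != "GREEN"
    ]

print("Cluster health:", sample["health"])
for name, health, messages in unhealthy_nodes(sample):
    print(f"  {name}: {health} - {'; '.join(messages)}")
```

A check like this (fed by the real endpoint rather than a canned payload) is what actually tells you whether the app and search nodes are keeping up, regardless of how the initial sizing was estimated.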