We have four projects, with a total reported LOC of 322k across them.
As of this morning, SonarQube is refusing to complete the analysis of the main branch of one of our projects (the largest one).
The error message is:
`This analysis will make your organization to reach the maximum allowed lines limit (having 680954 lines)…`
We haven't suddenly added 300k+ LOC to any of our projects, so I'm not sure where this limit is coming from. We also haven't changed the configuration, to the best of my knowledge.
There's not much out-of-the-box logging available in SonarQube Cloud, from what I can see.
I have diffed the context for our last successful run and our first failing run.
Unfortunately, there's no way to get per-file LOC logging out of analysis. What you can do is enable verbose analysis logging (`-Dsonar.verbose=true`) to get a list of the files being indexed by the analysis, and check that to see if it's suddenly including a library or two you didn't expect.
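For a CI-based scan, the same flag can also be set in `sonar-project.properties` instead of on the scanner command line, e.g. (a minimal sketch; this only applies when you run the scanner yourself, not under Automatic Analysis):

```properties
# sonar-project.properties — enable verbose/debug scanner output for a run
# (equivalent to passing -Dsonar.verbose=true to sonar-scanner)
sonar.verbose=true
```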
@dougiewright I seem to be running into a similar issue with our .NET and Azure Dev Ops Pipelines. Is your underlying codebase .NET or something else. I’m trying to understand if this is a .NET thing or something more fundamental. Thanks.
We are currently using Automatic Analysis in SonarQube Cloud; we're not running via our CI. SonarQube is tagging our PRs with its comments etc., but it runs the analysis itself, not in a GHA runner.
I can't figure out a way to set this parameter in that configuration?
With Automatic Analysis, this gets a bit more complicated. The best approach at this point is a binary search with exclusions: use the UI to set up exclusions for what should be approximately half the project and see if analysis completes. If it does, does it show what you expect to see? And so on.
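The exclusion patterns for that binary-search pass (entered in the UI for Automatic Analysis, or via `sonar.exclusions` for a CI scan) might look like the following; the directory names here are purely illustrative:

```properties
# First pass: exclude roughly half the codebase (paths are hypothetical)
sonar.exclusions=src/backend/**,src/shared/**

# If analysis now completes and the LOC looks right, swap in the other
# half and repeat, narrowing the excluded set until the files inflating
# the line count are isolated.
```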
That seems like a lot of work to fix something that wasn't broken. The diff between the last passing commit and the first failing commit is on the order of a couple hundred lines.
We haven’t changed configuration.
Is there anything in the newer build of sonar that could explain it thinking that 300k lines have appeared out of nowhere?
It turns out we've been slowly rolling out a new analyzer that scans JSON and YAML for secrets. It's quite possible it was turned on for your orgs recently. Would you mind posting your Org IDs so we can dig into the data on our side?
@groogiam, you would see the footprint of this scanner in your analysis logs. (@dougiewright, with Automatic Analysis you obviously don't have access to your analysis logs.)
You can disable this in the project Administration → General Settings → Languages → JSON → Activate JSON file analysis. And the same for YAML.
Sorry for the confusion. It’s on my list to get a reliable way to look up recent changes to SonarQube Cloud.
I turned off these settings and reran the build on my commit from Dec 4. This seems to have mostly fixed the line count issue. There is still a discrepancy of ~1300 lines of code between the two runs despite having the same number of files analyzed between the old and new run. That’s less than 1% so we can live with that. Thank you for your help getting this figured out.
If I still want JSON file analysis, can I just turn the JSON scanning functionality back on for the project and use `sonar.exclusions` to exclude certain directories containing JSON I don't want counted toward the line total?
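If the directories are known up front, a pattern along these lines should work, since excluded files are skipped by analysis entirely and so don't count toward LOC (the paths below are hypothetical):

```properties
# Keep JSON analysis enabled, but skip bulky vendored/generated JSON
# so it doesn't count toward the organization's line limit
sonar.exclusions=**/node_modules/**,**/generated/**/*.json,fixtures/**/*.json
```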