Second sonar quality gate check is over 1000% slower than first for no apparent reason

The second and later Sonar quality checks for a branch or PR are, for us, more than 1000% slower than the first build, even if the code/branch stayed the same. The first check currently usually takes less than 1 minute, the later ones more than 10-20 minutes. To get the original speed back we can manually delete the branch/PR, and then the next build is fast again. Forcefully changing the branch on each build also speeds it up, but then there is no real advantage in buying a Developer Edition when you can’t properly use the branch/PR feature.

We use SonarQube Developer Edition and use a reference branch for new code detection. The affected project is a C# project, and the problem is getting worse and worse over time.

The extra time is being spent in the “Detect file moves” step. According to the log it also detects 4 file additions there, in a case where there was just 1 commit since the reference branch, with a single file changed but no file added.

Since it only occurs from the second build onwards and not on the first, it is obviously a bug. One that makes us dislike Sonar more and more, since it has existed for a very long time, has gotten worse over time, and currently slows down our reviewing and merging process.

I have log files with exact times and the relevant details, but I don’t want to post those in a public place like this.

Please contact me privately if you need those.


Welcome to the community!

Can you characterize your project in terms of size?

  • How many LoC?
  • How many files?
  • How many LOC in the largest file?
  • in the average file?


Hello Ann,

thanks for your reply. In regard to your questions:

35k LoC
1k files
6k LOC in largest file
35 LOC in average file

As mentioned, the interesting part is that the performance of the Sonar quality check is good enough on the first quality gate check of a branch/PR, so the project size cannot be the issue. The issue is that Sonar seems to do some unnecessary extra work on the second and later quality checks of the same branch/PR that takes a lot of time.


Thanks for these details. Can you try excluding that 6k LOC file? (Not forever, but as a test?) Because actually, I think this is about size. You’ve already narrowed this down to the file move detection step. I think that algorithm is choking on the large files.

And you’re not seeing the slowdown in the first analysis because there’s nothing to compare moves against.


To clarify: the slowdown is not visible in the first analysis; the second and further analyses have the slowdown.

I still find it strange, because it is not supposed to compare with the last analysis but with the reference branch, which hasn’t changed (based on our setting for new code detection).

So it might even be a very simple fix for your developers, as they just need to remove that unnecessary call when new code detection is based on a reference branch and not on the previous analysis.

The file cannot easily be excluded, as it is an IDE-generated file with translation text properties that are referenced from almost everywhere. I also think it is very common for C# projects to have those.


Let’s say you rename A.cs to B.cs. Should all its old issues show up as new? Probably not. That’s what file move detection is about. It’s not something we can just turn off.

Obviously, you can’t not build it. But we always advise excluding libraries and generated files from analysis. After all, if an issue is raised in this file, what are you going to do about it? Rewrite the generator? Probably not. I suggest you set a file exclusion and retry.


Yes, I understand that you need to keep track of which issues got marked as resolved. However, this is also true for issues marked as resolved in the reference branch. Since this is fast for the first branch/PR build, it should also be fast for the second build. It should not make a difference whether this information is taken from a reference branch (first build) or the previous build (second build).
=> So I still think there is some kind of bug causing this

Thanks for the proposal and exclusion link.

I tried to define more exactly what needs to be analyzed with a properties file that was mentioned in the file exclusion article you linked. It was rejected by the build with the error “files are not understood by the SonarScanner for MSBuild. Remove those files from the following folders:”
So I searched and found out that apparently I am supposed to use a SonarQube.Analysis.xml instead. I then tried such a file with content like this (but with about 5-20 more directory entries):

<SonarQubeAnalysisProperties xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://www.sonarsource.com/msbuild/integration/2015/1">
  <Property Name="sonar.sources">path/to/directory1/,path/to/directory2/</Property>
  <Property Name="sonar.tests">path/to/directory4/,path/to/directory5/,path/to/directory6/</Property>
</SonarQubeAnalysisProperties>

In that way I tried to avoid the directory where the large file is located and also some build directories. It may have sped up the build by about 10%, from about 15 times slower to 13-14 times slower. Too early to tell yet, though.

So thanks for the idea, but it is not the solution yet.


This is not at all about that. File move detection ensures that when you rename a file or move it to a different directory we don’t close all the issues on the old location and open brand new ones on the new location.

Using SonarScanner for .NET, you shouldn’t manually set sonar.sources or sonar.tests. As I stated above, you should set an exclusion that’s narrowly targeted to exclude your generated file(s).

IMO setting an exclusion via the UI is the best way, but it can also be done via analysis properties.
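If clicking through the UI for every test iteration gets tedious, the same project-level setting can also be scripted via SonarQube’s web API. A rough sketch, assuming a user token with project admin rights; the server URL, token variable, project key, and file pattern below are placeholders, not values from this thread:

```shell
# Hypothetical: set the project-level "Source File Exclusions"
# (sonar.exclusions, a multi-value property) via api/settings/set.
# SONAR_URL, SONAR_TOKEN and my_project_key are placeholders.
curl -u "$SONAR_TOKEN:" -X POST "$SONAR_URL/api/settings/set" \
  -d "key=sonar.exclusions" \
  -d "component=my_project_key" \
  -d "values=**/*.Designer.cs"
```

This makes it easier to flip the exclusion on and off between test runs without touching the UI each time.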


The earlier hope that it might have increased performance by 10% was also wrong; it is not really much better, if at all.

I also tried excludes via “Source File Exclusions” in the project settings. It does not seem to have helped. I tried it with patterns analogous to:


Ann: Do you agree that it indeed looks like a bug that a build of a feature branch takes over 10 times longer on the second run?

Since we have the “use reference branch for new code detection” feature active, there is always a reference to compare against, even when a branch/PR gets built for the first time. => So there should be no time difference between the first and second build of a branch/PR when reference branches are used for new code detection.

Even the things you mentioned last would also be necessary for the first build. Otherwise you would see a lot of issues closed/opened when a new branch/PR does a lot of file movements compared to the reference branch.


Not necessarily. That’s why I keep asking about file sizes.

Have you successfully excluded that massive, generated file? And if so, was there any impact?

If not, do you need help crafting the exclusion pattern?


So far the patterns I tried seem to have no effect at all. Even excluding everything seemed to have no effect. I currently have the suspicion that ** matches only directories and not the content within those directories. If that is true, the patterns I posted before could never have worked.


Yep. You would need to exclude **/enormous-generated-file.ext. Could you try that & get back to me with the results?
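As an aside on why the file-name part matters: if I read the pattern reference correctly, ** stands for “zero or more directories”, so a pattern needs a trailing file part to actually match files. A small analogy using find’s any-depth name matching (the demo tree and file names here are invented for illustration, not taken from the project):

```shell
# Build a tiny demo tree; the names are made up for illustration.
mkdir -p demo/src/sub
touch demo/src/sub/Huge.Designer.cs demo/src/Main.cs

# Any-depth match on the file name, analogous to "**/Huge.Designer.cs":
find demo -name 'Huge.Designer.cs'
# prints demo/src/sub/Huge.Designer.cs
```

find’s -name only compares the last path component, which is why the file is found at any depth; that is the behavior the **/ prefix in a Sonar exclusion pattern is meant to give you.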


OK, I tried to exclude everything now with:

<SonarQubeAnalysisProperties xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://www.sonarsource.com/msbuild/integration/2015/1">
  <Property Name="sonar.exclusions">**/*</Property>
</SonarQubeAnalysisProperties>

and the project setting “Source File Exclusions” set to **/*

The projects now have 0 lines of code, but the Sonar quality gate check still takes extremely long.

One thing that is still a bit strange: the duplication statistic refers to 400k lines. Clicking on it shows that there are several really huge files in TestResults folders. For example, in each project there is a huge coverage.opencover.xml file with over 100k lines each.

Probably from some earlier tests, the code duplication exclusion was already set to **/*.* in the project settings. Strangely, it had no effect.


Can you run it one more time, please? The first time you set **/*, SonarQube still had the big files from the previous analysis to compare against.


The settings XML file I posted before does not seem to be enough on its own. So I used the project settings again for further tests, even though I don’t like changing the settings project-wide. If there is a way to do it via configuration, that would be great.

So I tried now this:

  1. I deleted two branches (called a and b from now on) in SonarQube that I had used for testing before
  2. Triggered a build of each branch => The Sonar gate check was fast in each case (branch a 24s, branch b 23s), even though the new code page shows 0% duplication on 400k lines compared to the reference branch (develop in our case)
  3. configured branch b as reference branch of a in the settings
  4. Triggered a build of branch a => This time the Sonar gate check took 13min 15s.
  5. The new code tab of Sonar now shows the hint “Compared to branch b” in the title and also 0% duplication for 400k lines. Otherwise, branches a and b both show only 0s in the new code tab and the overall code tab.

=> Since the 400k lines are also displayed for the fast builds, they might not be the cause
=> Strangely, the 400k lines did not vanish


Based on your OP, this is not about duplications, but about detection of moved / renamed files.

Since you appear to be analyzing C# (correct?), you’re using the Scanner for .NET (correct?), which means that properties file isn’t used.

Ideally you should be excluding generated files for all branches of the project. Thus setting an exclusion for your large, generated file in the UI would be normal and reasonable. But since you don’t want to do that, can you please try setting /d:sonar.exclusions=**/[name of large generated file] in the begin step command line?

Once analysis is complete, you can check the Scanner Context ([project homepage] → Project Settings → Background Tasks → [analysis row cog menu] → Show Scanner Context) to verify the exclusions the analysis actually ran with.
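For reference, the begin-step flow described above could look roughly like this. This is a sketch assuming the dotnet-sonarscanner global tool; the project key and the generated file name are placeholders for your own values:

```shell
# Sketch: the exclusion is passed on the begin-step command line,
# so it takes precedence over server-side settings.
# my_project_key and Huge.Designer.cs are placeholders.
dotnet sonarscanner begin /k:"my_project_key" \
  /d:sonar.exclusions="**/Huge.Designer.cs"
dotnet build
dotnet sonarscanner end
```

The end step is what submits the analysis report, so the Scanner Context shown in Background Tasks afterwards should reflect the exclusion set here.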


As I wrote before, that did not work, so I tried a SonarQube.Analysis.xml file, which however also seems to have no impact. We use Sonar for .NET 6 and C# 10.0.

There are multiple such generated files like the large one, and they all end with .Designer.cs (which is very common for C# projects). So I added this exclude rule:

The run was not faster; it was probably one of the slowest runs ever. The Scanner Context showed:

Project server settings:
  - sonar.cpd.exclusions=**/*.*
  - sonar.exclusions=**/SomeFolderWeExcludedBefore/*.*
Project scanner properties:
  - sonar.exclusions=**/*.Designer.cs

So I guess the project server settings were overridden by the new project scanner properties.

I did some further tests with combined exclusions like **/*.Designer.cs,**/SomeFolderWeExcludedBefore/*.*, but they were still slow.


Yes, there’s a hierarchy of parameters, as described here. That’s why I asked you to set the exclusion on the analysis command line.

I’m eager to hear how it goes once you’ve successfully excluded your **/*.Designer.cs files.


Well, as I just wrote, I already tried excluding the **/*.Designer.cs files via the command line, and it was rather slower than faster.

Just to be sure the setting has an effect, I also tried excluding everything via the command line. It had an effect on the numbers (all 0) but still took long.


Thanks. That wasn’t clear to me.

Going back to the beginning, you observed that the detection of file moves was the time-hog during analysis report processing. Since the first run without *.Designer.cs files would still be comparing against the *.Designer.cs files from the previous run, could we have two runs in a row where *.Designer.cs files are excluded?

And then I’d like to see the ce.log lines for the processing of that second analysis report. And why not the analysis log too.