SonarQube: v10.5.1 Enterprise Edition
SonarQube is deployed through Helm.
After upgrading to v10.5.1, our Cognitive Complexity metric dropped dramatically. Is this expected, and if so, how is Cognitive Complexity determined in v10.5.1 compared to previous versions? I'm not sure whether this is expected behavior or an actual issue.
Only our TypeScript repositories appear to be affected; our Python and Ruby on Rails projects are unchanged in this area. I looked at the other metrics and don't see any noticeable changes that could have affected this. For example, the lines of code for the same project show no noticeable change after the update.
I think I uncovered the issue while looking at a different project. The screenshot shows a message indicating that we deactivated a rule for cognitive complexity, which I believe is the culprit: instead of deactivating the rule, it should have been modified to a different limit.
Just curious: if we deactivate that rule completely, will it cause the cognitive complexity metric to display different results? I'm going to test this, but so far I'm not seeing any effect after rerunning the scan.
Complexity is still calculated regardless of whether the rule is activated. The rule only controls whether issues are raised on specific functions that exceed the threshold.
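One way to confirm this is to query the metric directly through the SonarQube Web API; it should return a value even with S3776 deactivated. A quick sketch (the server URL and project key below are placeholders for your own):

```shell
# Fetch the cognitive_complexity measure for a project via the
# SonarQube Web API. Authenticate with a user token (note the
# trailing colon: the token is passed as the username).
curl -s -u "$SONAR_TOKEN:" \
  "https://sonarqube.example.com/api/measures/component?component=my-project&metricKeys=cognitive_complexity"
```

Comparing this value before and after toggling the rule would show whether the metric itself changed or only the raised issues did.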
I’m interested to know why you folks were thinking about disabling the rule or modifying the limit. Could you share a bit about the problems you faced?
That was actually just a mistake on my part. I've re-enabled the rule; I thought we had disabled it, but I was supposed to just update the severity level to Major.
Got it. Hmm… so I looked at the project and it now shows a cognitive complexity of only 13, compared to 207 before the update. It is still reporting cognitive complexity issues, but I'm hesitant to believe that S3776 alone would cause a reduction of 194.
That does sound like a big change. Keep in mind that this might depend on the project’s coding patterns; for example, some frameworks rely more on inline functions.
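For context on why inline functions matter: Sonar's Cognitive Complexity model (described in the S3776 rule and SonarSource's white paper) adds +1 for each control-flow break plus +1 for every level of nesting it sits under, and inline functions add a nesting level without scoring by themselves, which is why callback-heavy TypeScript can score high. A rough sketch with hypothetical functions, annotated with the increments as I understand the model:

```typescript
// Hypothetical functions annotated with Cognitive Complexity increments,
// following Sonar's model: +1 per control-flow break, +1 extra per
// nesting level. Arrow functions add a nesting level but no increment.

// Total complexity: 1 + 2 = 3
function firstPositive(rows: number[][]): number | undefined {
  for (const row of rows) {   // +1 (for)
    if (row.length > 0) {     // +2 (if, nested one level)
      return row[0];
    }
  }
  return undefined;
}

// Total complexity: 2 -- the arrow function itself adds no increment,
// but the `if` inside it is nested one level deeper (+1 if, +1 nesting).
function doublePositives(items: number[]): number[] {
  return items.map((x) => {
    if (x > 0) {              // +2
      return x * 2;
    }
    return 0;
  });
}

console.log(doublePositives([-1, 2, 3])); // → [ 0, 4, 6 ]
```

So a codebase full of deeply nested callbacks accumulates complexity quickly, and flattening those patterns (or an analyzer change in how they're counted) can move the metric a lot.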
Could you share more about the project? What frameworks are you using? Where did you previously see the most cognitive complexity, compared to now? Were developers complaining about too many cognitive complexity warnings? Did they mark many as False Positive or Accepted? Any detail you can share can help!
I’m also happy to have a quick chat over Zoom if that makes it easier for you.
The cognitive complexity issue affects our TypeScript repositories, which use the NestJS framework. The developers notified me as soon as they noticed the shift after reviewing the cognitive complexity graph; they weren't complaining so much as wanting to know why there was such a big change after the update. Looking at the projects, I don't see any changes that could have affected this metric so dramatically.
I would love to hop on a quick Zoom chat if you have the time so we can get to the bottom of this. Let's set up a call to discuss further; you can reach me at elijah.taylor-kuni@hingehealth.com.