I’m about to create a quality report for a project and am skimming through the metrics captured/computed by SonarQube. One of the newer metrics is Cognitive Complexity.
While I understand on a high level how it is computed and how it differs from Cyclomatic Complexity on a computational level, I do not (yet) understand how I could use it in a decision-making process.
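To make the difference concrete, here is my understanding of the counting rules (based on the published Cognitive Complexity model; the exact scores are my own hand count, so treat them as illustrative):

```python
def flat(a, b, c):
    # Three sequential ifs: cyclomatic complexity 4 (3 branches + 1),
    # cognitive complexity 3 (+1 per `if`, no nesting penalty).
    hits = 0
    if a:
        hits += 1
    if b:
        hits += 1
    if c:
        hits += 1
    return hits

def nested(a, b, c):
    # Same cyclomatic complexity (4), but cognitive complexity 6
    # (+1 for the outer `if`, +2 and +3 for the nested ones),
    # reflecting that nesting is harder to read.
    if a:
        if b:
            if c:
                return 3
    return 0
```

So the two metrics agree on “how many test paths” but disagree on “how hard to read”, which is exactly why I don’t know what to do with the second number.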
- For issues it’s easy:
  - issue -> fix it,
  - on project level: “reduce the number of issues” (bugs, code smells, whatever).
- For cyclomatic complexity:
  - it could be read as “theoretical number of shallow single-path unit test cases”, and in combination with line/condition coverage it is useful for finding hotspots/refactoring candidates,
  - on project level: “converge the number of unit tests and the cyclomatic complexity” (either write more tests or reduce complexity),
  - I could probably also guesstimate how many more tests / how much test development effort is needed to get a test suite that matches the project complexity.
But how do I put the Cognitive Complexity into context? How do I recognize a “call to action” or a quality hotspot?
Does anyone have experience with how Cognitive Complexity relates to:
- maintenance effort (i.e. time to read/understand),
- team size / development effort,
- what a “good” value is in relation to project size (i.e. kLoC)?
I know there are thresholds on method/function level (15 for Cognitive Complexity, 10 for Cyclomatic Complexity), but what about the project level? Is a project with 150 kLoC and a total Cognitive Complexity of 6000 good or bad?
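For lack of a published project-level threshold, the only normalization I can think of is density per kLoC (just rearranging the numbers from my own question — I have no idea whether 40 is a meaningful value):

```python
# Hypothetical normalization of my example project's numbers.
total_cognitive = 6000   # project-wide Cognitive Complexity
kloc = 150               # project size in kLoC

density = total_cognitive / kloc  # -> 40.0 cognitive complexity per kLoC
```

Is anyone tracking a number like this across projects, and does it correlate with anything useful?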
Any shared experience or recommendation is much appreciated!