How to use Cognitive Complexity?

Hi all,

I’m about to create a quality report for a project and am skimming through the metrics captured/computed by SonarQube. One of the newer metrics is Cognitive Complexity.

While I understand on a high level how it is computed and how it differs from Cyclomatic Complexity on a computational level (see the small example below), I do not (yet) understand how I could use it in a decision-making process.
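
To illustrate for other readers what I mean by “differs on a computational level”, here is a small Java sketch; the scores in the comments follow my reading of the Cognitive Complexity white paper, so treat the exact numbers as an assumption rather than guaranteed SonarQube output:

```java
// Two methods with the same Cyclomatic Complexity (4) but very
// different Cognitive Complexity (1 vs. 7).
class ComplexityDemo {

  // Cyclomatic: 1 (method) + 3 (cases) = 4
  // Cognitive:  1 (the switch counts once; the individual cases are "free")
  static String getWords(int number) {
    switch (number) {
      case 1:  return "one";
      case 2:  return "a couple";
      default: return "lots";
    }
  }

  // Cyclomatic: 1 (method) + for + for + if = 4
  // Cognitive:  outer for +1, nested for +2, nested if +3, continue OUT +1 = 7
  static int sumOfPrimes(int max) {
    int total = 0;
    OUT:
    for (int i = 2; i <= max; ++i) {
      for (int j = 2; j < i; ++j) {
        if (i % j == 0) {
          continue OUT; // not a prime, skip to the next candidate
        }
      }
      total += i;
    }
    return total;
  }
}
```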

  • For issues it’s easy:
    • issue -> fix it,
    • on project level: “reduce the number of issues” (bugs, code smells, whatever).
  • For Cyclomatic Complexity:
    • it could be read as the “theoretical number of shallow single-path unit test cases”, and in combination with line/condition coverage it is useful for finding hotspots/refactoring candidates (see the sketch after this list),
    • on project level: “converge the number of unit tests and the cyclomatic complexity” (either write more tests or reduce complexity),
    • I could probably also guesstimate how many more tests / how much test development effort is needed to get a test suite that matches the project complexity.
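
As a toy illustration of that “number of shallow single-path test cases” reading (my own example, not SonarQube output):

```java
// shippingCosts(...) has a cyclomatic complexity of 3 (method + two ifs),
// so full branch coverage needs roughly three shallow tests, one per path.
class ShippingDemo {

  static int shippingCosts(int orderValue) {
    if (orderValue >= 100) {  // path 1: free shipping
      return 0;
    }
    if (orderValue >= 50) {   // path 2: reduced rate
      return 3;
    }
    return 5;                 // path 3: standard rate
  }

  // The three "theoretical" test cases matching that complexity
  // (plain asserts instead of a test framework, to keep it self-contained):
  public static void main(String[] args) {
    assert shippingCosts(120) == 0;
    assert shippingCosts(60) == 3;
    assert shippingCosts(10) == 5;
  }
}
```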

But how do I put the Cognitive Complexity into context? How do I recognize a “call to action” or a quality hotspot?
Does anyone have experience with how Cognitive Complexity relates to:

  • maintenance effort (i.e. time to read/understand)
  • team size / development effort
  • what is a “good” value in relation to project size (i.e. kLoC)

I know there are thresholds at method/function level (15 for Cognitive Complexity, 10 for Cyclomatic Complexity), but what about the project level? Is a project with 150 kLoC and a Cognitive Complexity of 6000 good or bad?

Any shared experience or recommendation is much appreciated! :slight_smile:

Gerald


Hi Gerald,

You might find this SO answer helpful: https://stackoverflow.com/a/45084107/2662707 :wink:

Ann

Hi Ann,

Yes, that was helpful! Btw, is the “15” a magic number to start with, or is it based on some field-related data?

And apart from getting an indicator for methods: do you have any additional experience regarding a correlation between team size, efficiency, and cognitive complexity?

What I’m wondering is: if one brain gets overloaded at a cognitive complexity of 15 per method, how much cognitive complexity can a person or a team handle per “codebase” without efficiency suffering?

Something like the recommendation to have 10 kLoC per team for microservices (regardless of whether this recommendation makes sense or not).

So just a made-up example:

Given a team of 10 people and a codebase with a cognitive complexity of 10000, assume the essential complexity is 8000 and the accidental complexity is 2000 (so there is room for improvement through refactoring).

Let’s say there is a statistical average that one team member can handle a cognitive complexity of 1000 without losing efficiency.
Although the team has 2000 accidental complexity, it is still capable of handling it all, without the need for refactoring and without losing efficiency.
Now, if the team loses a member or the complexity increases, the accidental complexity has to be addressed (refactoring) in order to stay efficient.
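
To make the arithmetic of this made-up example explicit, here is a tiny toy model in code; every constant in it is invented, nothing is based on real data or studies:

```java
// Purely hypothetical toy model for the made-up numbers above.
class TeamCapacityToyModel {

  static final int COMPLEXITY_PER_MEMBER = 1000; // invented "handling capacity"

  static boolean canHandle(int teamSize, int essential, int accidental) {
    int capacity = teamSize * COMPLEXITY_PER_MEMBER;
    return capacity >= essential + accidental;
  }

  public static void main(String[] args) {
    // 10 people * 1000 = 10000 capacity >= 8000 + 2000: still fine
    System.out.println(canHandle(10, 8000, 2000)); // true

    // one member leaves: 9000 < 10000, so at least 1000 of the
    // accidental complexity would have to be refactored away
    System.out.println(canHandle(9, 8000, 2000)); // false
    System.out.println(canHandle(9, 8000, 1000)); // true again after refactoring
  }
}
```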

Do you have any numbers/studies/statistics that put cognitive complexity in relation to team size and productivity/efficiency?

Gerald


Hi Gerald,

We arrived at the 15 empirically, based on internal feedback: starting from Cyclomatic Complexity’s 10 and bumping it up until it felt right. For some languages the default is higher; C sets it at 25. That was based not so much on our guts as on our observations of existing projects; there seems to be a higher tolerance for this type of complexity in C.

Sorry, I don’t have anything for you in this area. However, if you’re interested in pursuing it, it would probably make a good academic paper! :smile:

Ann


I’m not sure this data would be very relevant. The cognitive complexity of a method is important because, when working on the method, we basically have to hold the whole method in our head at the same time. Going to a larger scale, up to the module (whatever that means in your programming language), I think it can still make some sense, because a module is still a cohesive entity that needs to be understood as a whole when making changes to it.

But when looking at the project scale, I don’t think it makes a lot of sense anymore (at least as a metric related to the size of the team required to maintain the project). For instance, parts of a project are often just very stable legacy code that works and does not get updated at all. These parts can be very complex, but since nobody is working on them, it does not really matter.

Of course, which parts are active may change over time, but then we can forget one part when we move to another, so the cognitive complexity of those parts is not really an additive property. So even on a project where every part of the code is “active”, the relationship between complexity and team size will probably depend a lot on team organization, rotation of subjects within the team…

I think there are, as a first, very naive approximation, two kinds of code in a project: the code we work on, and the code we work with. For the code we work on, the notion of complexity matters, because we have to understand it to make changes to it. But for the code we work with, what matters most is well-designed interfaces, a sound architecture, and well-decoupled components. Cognitive complexity is related to the implementation of this code, not to its interface.


Hi Ann, does it make sense to publish such ‘empirical data’ as orientation for public users? SQ delivers so many values, but so often the question remains “what does it mean?”, and I don’t see the documentation/manual providing any clues. Kind regards and good day.

Hi,

Well… TBH our tuning was based on our observations of how many issues were raised at various threshold levels in the projects we test such things against. So, observations + gut feel. And we didn’t take the time to make any records, so there’s nothing of this to publish.

 
:woman_shrugging:
Ann