Hello,
If an organization wants to specify standards or guidelines for using SonarQube as a static code analyzer, what could they be?
Hi,
Welcome to the community!
This is an extremely broad, open question, but as I think about it I realize that the basics are surprisingly simple:
Beyond that there can be realms of subtleties and team discussions, but ideally:
HTH. It should at least get you started.
Ann
@ganncamp thank you. Just a general question: different types of code can adhere to different quality gates. In this case, how can I ensure a common quality gate is specified across the entire organization and turned into a standard to be followed?
Hi,
Ah ha! I'm glad you asked!
If you tried to set a Quality Gate / Release criterion of 0 Bugs, then that would probably be doable with some effort for projects started in the last year or so. At the same time, anything over five years old would never be allowed to release again! The teams working on those old projects would be penalized because of the project history.
But if you say "let's ignore the past" and focus on New Code, i.e. 0 New Bugs, then that's a standard everyone can meet. You're saying to all the developers: We won't hold what's already in production against you. Just make sure that the code you write today is clean.
And thatâs a fair standard that everyone can meet.
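If you'd rather script this than click through the UI, the Web API can set it up. Here's a minimal sketch in Python with the requests library; the server URL, token, and gate name are made up, and the gateName parameter spelling is an assumption to check against your version's api/webservices/list:

```python
import requests

SONAR_URL = "https://sonarqube.example.com"  # assumption: your server's URL
AUTH = ("my-admin-token", "")                # user token passed as basic-auth username

# Create a Quality Gate focused on New Code.
requests.post(f"{SONAR_URL}/api/qualitygates/create",
              params={"name": "New Code Only"}, auth=AUTH).raise_for_status()

# Fail the gate whenever any New Bugs appear (value greater than 0).
requests.post(f"{SONAR_URL}/api/qualitygates/create_condition",
              params={"gateName": "New Code Only",  # older servers use gateId instead
                      "metric": "new_bugs", "op": "GT", "error": "0"},
              auth=AUTH).raise_for_status()
```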
Now… you'll find that Coverage on New Code is in the default Quality Gate. Being realistic, I don't think you have unit test tooling available in every language. It would be unfair to tell the ABAP folks (to pick a language at random) that they had to meet a unit test coverage standard when they might have to build the coverage tooling themselves from scratch. (And then add tests on the tooling…?) So in that particular case, it would be reasonable to have a second Quality Gate for a small subset of projects that used every criterion from the main Quality Gate except Coverage on New Code.
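Scripting that exceptional gate follows the same pattern: copy the main gate, then drop just the Coverage on New Code condition. A sketch under the same assumptions as above (the sourceName parameter is on newer servers; older ones take the gate id instead):

```python
import requests

SONAR_URL = "https://sonarqube.example.com"  # same assumptions as the sketch above
AUTH = ("my-admin-token", "")

# Copy the organization-wide gate under a new name.
requests.post(f"{SONAR_URL}/api/qualitygates/copy",
              params={"sourceName": "New Code Only",
                      "name": "New Code Only - No Coverage"},
              auth=AUTH).raise_for_status()

# Find and remove only the Coverage on New Code condition from the copy.
gate = requests.get(f"{SONAR_URL}/api/qualitygates/show",
                    params={"name": "New Code Only - No Coverage"},
                    auth=AUTH).json()
for cond in gate.get("conditions", []):
    if cond["metric"] == "new_coverage":
        requests.post(f"{SONAR_URL}/api/qualitygates/delete_condition",
                      params={"id": cond["id"]}, auth=AUTH).raise_for_status()
```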
But in general, by enforcing standards on only the changes, it is quite practical to come up with a standard that everyone can meet.
Hopefully this helps. If there are specific cases where you're still not convinced, let's hash through the details.
Ann
I agree ignoring the past is a nice practical measure to make sure everyone can meet the rule, but it's somewhat unfair for projects with a "good past". I mean, if my project has coverage above expectations, I don't want to be forced to achieve a certain percentage at every single change. Some changes may affect irrelevant parts, and I'd prefer to focus my testing efforts on other areas as long as the total coverage does not fall below expectations.
So, in a typical company where legacy and new projects coexist, I think it may be appropriate to have different Quality Gates depending on the current status of a project, i.e. "good" projects having the rule on overall coverage rather than on new code.
Does it make sense to anyone?
Hi,
It does make sense.
IMO there has to be a specific quality gate for legacy projects, meaning only "new" conditions are used,
e.g. no new Blocker/Critical/Major issues.
New projects should start clean, meaning no Blocker/Critical/Major issues plus 60-80% coverage.
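To make the split concrete, the two condition sets could look something like this, expressed as data you could feed to the create_condition calls sketched earlier in the thread (the metric keys are standard SonarQube keys; the thresholds are just the ones proposed above):

```python
# Hypothetical condition sets for the two gates; each tuple is
# (metric key, operator, error threshold) as used by create_condition.
LEGACY_GATE = [
    ("new_blocker_violations",  "GT", "0"),  # no new Blocker issues
    ("new_critical_violations", "GT", "0"),  # no new Critical issues
    ("new_major_violations",    "GT", "0"),  # no new Major issues
]

NEW_PROJECT_GATE = [
    ("blocker_violations",  "GT", "0"),      # overall code must be clean
    ("critical_violations", "GT", "0"),
    ("major_violations",    "GT", "0"),
    ("coverage",            "LT", "70"),     # e.g. somewhere in the 60-80% band
]
```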
WRT code coverage, we don't use this condition for legacy projects.
Most of these projects never really used unit tests, so it doesn't make sense to enforce
a coverage of 80% for new code now.
In any case, you are not obliged to use coverage as a criterion for your quality gate; it is
evaluated nevertheless, so you can track it.
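For tracking, the measure is easy to pull without any gate condition, e.g. for a dashboard. A minimal sketch, assuming a hypothetical project key my-project and the same made-up server as above:

```python
import requests

SONAR_URL = "https://sonarqube.example.com"  # assumption: your server's URL
AUTH = ("my-read-token", "")

# Fetch the current overall coverage measure for one project.
resp = requests.get(f"{SONAR_URL}/api/measures/component",
                    params={"component": "my-project", "metricKeys": "coverage"},
                    auth=AUTH)
resp.raise_for_status()
for measure in resp.json()["component"]["measures"]:
    print(measure["metric"], measure["value"])  # e.g. "coverage 73.4"
```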
Finally, you may use inclusions and exclusions for specific parts of the code; search for "coverage" here:
https://docs.sonarqube.org/latest/project-administration/narrowing-the-focus/
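For example, in sonar-project.properties (the patterns below are placeholders; excluded files are skipped only for the coverage calculation, not for issue analysis):

```
# Skip generated and boilerplate code when computing coverage;
# these files are still analyzed for bugs and code smells.
sonar.coverage.exclusions=**/generated/**,**/*Dto.java
```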
I tend to disagree with this: when you don't enforce Coverage on New Code… you risk no longer maintaining the Coverage metrics you're so proud of (as they slowly slide down beneath the threshold). If you are changing this code, what makes it irrelevant (and not worthy of being tested)?
Mmh, I admit this "degradation effect" sounds familiar to me. This might suggest enforcing a certain degree of coverage on new code to prevent developers from simply ignoring testing until they lose all the margin.
I also agree "irrelevant" is not the most appropriate word. "Error-prone" better reflects the idea (i.e. the developer knows when a given commit encompasses high risk and should be able to balance testing efforts based on this knowledge).
All in all, despite these nuances I still claim it makes sense to have distinct quality gates for new and legacy projects (at least in my current environment, and I suspect it's quite a common scenario).
Hi guys,
I'm back from a long Thanksgiving break and full of turkey and beans!
Because everyone can always be trusted to do what they should and never bow to deadline pressure, and no one ever cuts corners "just this one time"…
Uhm… no time like the present to get started?
Okay, beyond the obligatory push back, I will say that internally we do have multiple Quality Gates. There is the base/default QG with conditions on New Code plus a minimal requirement on Security and Reliability on overall code. And then we have additional QGs that increase expectations [1], [2], [3]. BTW, those additional QGs were requested by the developers. What I've seen not just here at SonarSource but also in my previous job is that sometimes machismo kicks in and you get a healthy competition going to meet better and better standards. That's where these extra QGs came from.
For full disclosure, we do have a QG without a Coverage requirement which is used only for projects where a large portion of the code requires interaction with external systems. Projects are allowed to use this exceptional QG only after showing that getting the required coverage with unit tests simply isn't feasible. Perhaps we should be feeding in the IT reports, but… that's an issue for another day.
So even though we preach the gospel of holding everyone to the same standards, we do make an exception for what I'll call hardship. Aaand in general, we do walk the walk and hold nearly everyone to the same minimum standards. And if they want to hold themselves to higher standards, well… we let 'em.
Ann
Hi Ann, thanks for the follow-up. Indeed these practical details are very helpful to understand the big picture. Thanks for sharing them (btw very fun names! I hope we can play in the champions league some day).
Just one more question that is not directly related but also has to do with practical guidelines to improve quality. It's about assertions in tests. I mean, sometimes code with good coverage fails to detect simple bugs, and the reason is that there are trivial assertions or no assertions at all (i.e. the test executes many lines of code but does not verify whether they behave as expected). I guess TDD could be a long-term answer for that, but we're still far from that level. Is there any simple recipe to check that tests are really testing something? (Sorry if it's a well-known subject, I'm new to this forum.)
Hi @marcelubach,
I'm glad you asked!
In fact we do have some rules specifically for tests. Unfortunately, I'm not sure many have been implemented for languages other than Java, but at one point an objective to give tests more love was on the table for next year. No idea whether it will make the cut, but if you need these types of rules for other languages, feel free to agitate for them here in new threads.
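To illustrate what such a rule catches (for Java there's S2699, "Tests should include assertions"), here's a small hypothetical pytest example: both tests execute exactly the same lines and produce identical coverage, but only the second can actually fail when the behavior regresses:

```python
def add(a: int, b: int) -> int:
    return a + b              # imagine a regression slips in, e.g. "return a - b"

def test_add_without_assertion():
    add(2, 3)                 # executes the line: full coverage, catches nothing

def test_add_with_assertion():
    assert add(2, 3) == 5     # same coverage, but fails if the behavior regresses
```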
HTH,
Ann