I am looking for interesting input concerning the design of Quality Profiles. (And sorry for the coming wall of text.) My concern is the following:
What is a better way for me to separate the display of “a project’s quality” (in sq-server) from the display of “things to work on” (in project-synced SonarLint)? (Think: assessing different “dimensions” of quality.)
I am currently assessing the quality of one codebase in three profiles in SQ. I named these profiles “buildbreaker”, “fixit” and “kitchensink”.
The “Buildbreaker”
is used to check the minimal baseline (a.k.a. the Quality Gate for build pipelines), a.k.a. “must have”.
The “Fixit”
is used to advise developers via SonarLint sync which findings (in addition to buildbreaker) are worth fixing, a.k.a. “nice to have and encouraged to work on”.
The “Kitchensink”
is used as a boy-scout radar to see what findings are possibly available to inspect.
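For context, a sketch of how such a setup is typically wired: one codebase scanned under three SonarQube project keys, each project bound to its own Quality Profile. The project keys below are hypothetical; the profile assignment itself happens on the server (in each project’s UI, or via the `api/qualityprofiles/add_project` Web API endpoint).

```properties
# Scanner run 1: the pipeline-facing project (bound to the "buildbreaker" profile)
sonar.projectKey=myapp-buildbreaker
sonar.sources=src

# Scanner run 2 would use sonar.projectKey=myapp-fixit (devs bind SonarLint here)
# Scanner run 3 would use sonar.projectKey=myapp-kitchensink (widest rule set)
```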
I think
that if I also want to keep a picture of changes over time, I cannot configure the same project with different profiles to see the different results;
checking the same SQ project with different profiles would probably make the SonarLint sync rather hectic (which would make devs rather mad).
After all, configuring these three different projects to get different results is a bit more hassle, but not a real bummer. The real culprit is that this way of doing things triples the source-code line count in the Administration/System view.
I would like to find a way to give the mentioned profiles different information but count source lines only once, not three times, as the source-line count defines the amount of $$$ to hand over to SonarSource for more features in DE or EE (Developer Edition / Enterprise Edition).
I currently work with the CE (Community Edition), so that is no concern yet … but what if I would like to switch?
Maybe someone has had this problem already? Or maybe someone has consulted people on this and can chime in with some drops of knowledge? Or the solution is an easy one I just cannot envision?
Thank you for reading this far, and TIA for any input at all!
Analysing the same project with different Quality Profiles (or various “levels” of Quality Profiles) is not something that is really promoted or supported.
With a Quality Profile you’re setting clear quality expectations for the code being written, and with a Quality Gate you’re making sure those quality expectations are being met.
I’d encourage you to design them so that every result in SonarQube/SonarLint is impactful and relevant to your developers right now.
Since your developers are already using SonarLint, it should be easy for them to follow the Clean as You Code methodology (focus on New Code being written, not Old Code). There is no need to distinguish between “must-have” and “nice-to-have” rules … just the requirements your organisation has for code quality.
Still, there’s some room for additional configuration (more than just “which rules”) that can be specific to your needs. You can also adjust the severity of rules when you activate them in a Quality Profile (so the “nice-to-have” rules could be bumped down to Info or Minor, while the “must-have” rules keep a higher severity).
Quality Gates can also be adjusted based on your needs. For example, if you don’t want new Minor bugs to have an impact on the Quality Gate but still want them raised for developers’ attention, you can adjust the Quality Gate so it doesn’t fail; those issues will still show up in SonarQube/SonarLint.
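The severity adjustment described above can also be scripted. A minimal sketch against the SonarQube Web API: the server URL, profile key and token are placeholders for your own instance, and the curl command is echoed rather than executed so the sketch can be reviewed without a live server.

```shell
#!/bin/sh
# Placeholders -- substitute values from your own SonarQube instance.
SONAR_URL="https://sonarqube.example.com"
# Hypothetical Quality Profile key (find yours via api/qualityprofiles/search).
PROFILE_KEY="my-profile-key"
# Example rule: "Track uses of TODO tags" in the Java analyzer.
RULE_KEY="java:S1135"

# (Re-)activate the rule in the profile with a lowered severity, so it still
# shows up but stays out of developers' way. Echoed for inspection; drop the
# leading "echo" to actually run it with an admin token.
echo curl -s -u "ADMIN_TOKEN:" -X POST \
  "$SONAR_URL/api/qualityprofiles/activate_rule" \
  -d "key=$PROFILE_KEY" \
  -d "rule=$RULE_KEY" \
  -d "severity=INFO"
```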
Your information is indeed useful. It is also much appreciated that you took the time to write it all out in detail!
I made a (tiny but impactful) mistake while describing our setup … I set up three projects … and each checks the same codebase with a different Quality Profile.
My concern is that each mentioned profile/project type serves a (different) purpose, and I am trying to reason about the validity of my design.
Do I need to design the projects the way I did?
Is there a way to structure the whole process differently?
In my view it is suboptimal, as you mentioned too, to check a single project’s codebase with several different rulesets one after another. You also suggested some valid avenues for changing our ways. Let me describe what surfaced in my mind after reading your suggestions:
So when I have
people that want a go/no-go decision (Quality Gate in the build pipeline),
people that want to know “what should I treat as important enough to work on” (devs binding their SonarLint to it), and
people that want an overview of “what could possibly be a finding?” (a more managerial view),
how do I cater to their needs?
I could configure a project whose Quality Gate can be used for go/no-go so Jenkins can use that … but how do I solve the conflicting needs of
“we do NOT want to see this rule inside our IDE because it clutters and I lose guidance on what to work on”
vs.
“we DO want to see this rule inside the SQ dashboard (server) because $managerialReason”?
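On the go/no-go part specifically: newer scanner versions support the `sonar.qualitygate.wait` analysis parameter, which makes the analysis step itself fail when the gate fails, so Jenkins needs no extra plumbing beyond checking the exit code. A sketch, with a hypothetical project key:

```properties
# sonar-project.properties for the pipeline-facing project.
sonar.projectKey=myapp-buildbreaker
sonar.sources=src
# Poll the server after analysis and exit non-zero if the Quality Gate
# fails, which fails the Jenkins stage with it.
sonar.qualitygate.wait=true
# Seconds to wait for the gate status before giving up.
sonar.qualitygate.timeout=300
```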
Also, as a side note, I stumbled upon this post … I have not yet dug too deep into the configurability of SonarLint.
But, for example, changing the severity of all rules I do not want to see in the IDE would rely on me digressing from “the Sonar way”, which would hurt in the long run (profile inheritance, maintenance, …).
I hope that you, too, find this information and these questions interesting, as they show some of the thoughts that consumers of your fine product might juggle around with.