What was the discussion around Sonar when it was introduced to your team? And who introduced it? Was it bottom-up, discovered and advocated by developers? Top-down, mandated by management? Or something else?
We’d like to know what your initial impressions and experiences were; what went well and what could have been improved.
When I introduced static code analysis in my teams, I organized sessions to manage expectations; I had noticed in the past that some developers start fixing issues blindly, others simply ignore them, and a small minority complains that they don’t agree with the rule.
The next step was to roll out SonarCloud to our projects. That is still an ongoing process, but most teams have been positive about the feedback so far.
For us, it started bottom-up among a few developers and grew organically through the team from there. It took about 2 years for the benefits to be recognized by management, and from there we could push this out as a requirement for all new projects and all new code in old projects.
Fast forward 2 more years, and it’s now part of our contracts with external development teams, and we’ve even convinced a few clients to provide funding for legacy code cleanup.
An ongoing issue is still resistance by some of our contracted developers (“the code works, so why can’t you just deploy it?”) or clients (“why should I pay for development and testing time without any tangible benefit?”). Slowly but surely those discussions are becoming easier, but it’s still a long road ahead.
Something that would help is if SonarQube didn’t immediately forget about an issue once it’s fixed. I can see in the activity graph that we fixed 1000+ issues in a given project over the past year, but I can’t go back and pick out a few to show some tangible benefits (specific bugs or vulnerabilities fixed). Having a few concrete things to point to would make the next funding discussion a lot easier!