[Webinar] Sonar Success: Fireside chat with DATEV

Hi all!

We are hosting a webinar on Wednesday, August 30th, where we will present a fireside chat featuring Sonar customer Andreas Fischer from DATEV eG.
Whether you are a SonarQube user, admin, or manager, or are just starting to explore, hear directly from an avid SonarQube user on how Sonar helps their organization write Clean Code, reduce technical debt, and improve time to market.

What: Sonar Success: Fireside Chat with DATEV

When: August 30th, 10 am CDT / 5 pm CEST

Who should attend: DevOps Leads, Managers, and Executives interested in learning how Sonar has helped a long-time customer achieve a clean codebase

Register now!

Can't make it to the live session but still interested in learning more? You can register here to receive the recording!


Hi all,

Thank you to all who attended our webinar this week! Below you'll find answers to the questions we received during the presentation:

Q: What do you do when you upgrade the set of rules (new rules, Security Hotspots) and a lot of new issues appear?
A: I am going to state the obvious here, but when you modify the set of rules and new issues appear, you would need to fix those issues unless there is a strong reason not to.

Q: Do you recommend starting with SonarLint before moving on to SonarQube, or combining the two from the outset?
A: Combining both from the outset! And utilizing Connected Mode to keep them synchronized.

Q: Is there a future where the R programming language will be supported?
A: We do not have an ETA yet. But if you are interested, please leave a note here: https://portal.productboard.com/sonarsource/3-sonarqube/c/306-r-support?utm_medium=social&utm_source=portal_share

Q: What is the optimal quality gate percentage for any software project? Is 100 too extreme for agile projects? (1/2)
A: The key here is to act on New Code findings, of which there will not be many if you scan the code as it changes. Fixing 100% of the issues raised is then not overwhelming.

Our default Quality Gate settings are aligned with New Code, not forcing you to fix 100% of your legacy issues.

Q: That works fine for new code or microservices, but for a monolith with a lot of baggage it's a bit overwhelming, especially while developers are also changing the application's architecture and processes such as DevOps in the cloud environment. How would that scenario be tackled? (2/2)
A: In this situation, commercial versions of SonarQube or SonarCloud will allow you to get findings scoped to individual pull requests, so developers will only see the findings related to their direct changes within the monolith.

Q: How are LLMs/AI being used to fix bad code?
A: Our view is that LLMs and AI are great tools to augment developer learning, but directly leveraging them to fix bad code doesn't help a developer learn anything.

Q: Is it possible to create a single project in SonarQube with code from 50% of repo1 and 50% of repo2?
A: Yes, you can do so by creating an Application in SonarQube, available starting with SonarQube Developer Edition: Applications
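
If you prefer to script this, here is a minimal sketch using the Web API endpoints api/applications/create and api/applications/add_project; the server URL, token, and project keys below are placeholders, and the parameter names should be checked against your server's /web_api documentation:

```python
# Minimal sketch: create an Application and attach two projects.
# Assumes SonarQube Developer Edition or higher and a token with
# administer permission; URL, token, and keys are hypothetical.
import requests

SONARQUBE_URL = "https://sonarqube.example.com"  # placeholder server URL
TOKEN = "squ_..."  # placeholder user token (sent as basic-auth username)

session = requests.Session()
session.auth = (TOKEN, "")

# Create the Application that will aggregate both repositories.
session.post(
    f"{SONARQUBE_URL}/api/applications/create",
    data={"name": "My Combined App", "key": "my-combined-app"},
).raise_for_status()

# Attach one project per repository (each repo is analyzed as its own project).
for project_key in ("repo1-project", "repo2-project"):
    session.post(
        f"{SONARQUBE_URL}/api/applications/add_project",
        data={"application": "my-combined-app", "project": project_key},
    ).raise_for_status()
```

Note that an Application aggregates whole projects, so to include only half of each repository you would scope each project's analysis to the relevant directories, for example with sonar.sources or sonar.exclusions.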

Q: Can we collect software metrics at the function level and export them to files like .csv, .xls, etc.?
A: We would encourage you to act on the results as close to 'live' as possible: within the IDE using SonarLint, or in response to issues raised in the SonarQube or SonarCloud UI. If you truly need an export, regulatory reports include findings that might answer your particular need: PDF reports. It is also possible to use the Web API to fetch such reports: SonarQube
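
As a sketch of that Web API route, assuming a token with Browse permission and the hypothetical project key my-project: the api/measures/component_tree endpoint pages through file-level measures, which you can then write to a CSV. Keep in mind that measures are exposed down to file level, not individual functions:

```python
# Sketch: export file-level measures to CSV via the Web API.
# URL, token, and project key are hypothetical placeholders.
import csv
import requests

SONARQUBE_URL = "https://sonarqube.example.com"
TOKEN = "squ_..."
PROJECT_KEY = "my-project"
METRICS = ["ncloc", "complexity", "cognitive_complexity"]

rows, page = [], 1
while True:
    resp = requests.get(
        f"{SONARQUBE_URL}/api/measures/component_tree",
        params={
            "component": PROJECT_KEY,
            "metricKeys": ",".join(METRICS),
            "qualifiers": "FIL",  # file-level components
            "ps": 500,            # page size
            "p": page,
        },
        auth=(TOKEN, ""),
    )
    resp.raise_for_status()
    data = resp.json()
    for comp in data["components"]:
        values = {m["metric"]: m.get("value") for m in comp["measures"]}
        rows.append([comp["key"]] + [values.get(m, "") for m in METRICS])
    if page * 500 >= data["paging"]["total"]:
        break
    page += 1

with open("measures.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["component"] + METRICS)
    writer.writerows(rows)
```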

Q: How is the Impact of Bad Code analyzed? How is the Cost Impact determined?
A: You may want to have a look here, where we explain how we define our metrics, including technical debt: metric definition.
The impact of bad code depends on the context, the severity of the issue, etc.

Q: Can we get a report of unit tests for a project? Also, is there any way to get a unit test report for all projects in a portfolio? How can we achieve this in SonarQube?
A: Coverage reports related to unit tests are typically computed outside of SonarQube by language-specific tools and imported into SonarQube as part of an analysis. More information can be found here. One can review the coverage of all projects that are part of a portfolio; see an example here
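
As a sketch, assuming a portfolio (an Enterprise Edition feature) with the hypothetical key my-portfolio, the same api/measures/component_tree endpoint can list test and coverage measures for every project in the portfolio:

```python
# Sketch: list test/coverage measures for each project in a portfolio.
# URL, token, and portfolio key are hypothetical placeholders.
import requests

SONARQUBE_URL = "https://sonarqube.example.com"
TOKEN = "squ_..."

resp = requests.get(
    f"{SONARQUBE_URL}/api/measures/component_tree",
    params={
        "component": "my-portfolio",
        "metricKeys": "coverage,tests,test_success_density",
        "qualifiers": "TRK",  # restrict results to projects
    },
    auth=(TOKEN, ""),
)
resp.raise_for_status()
for comp in resp.json()["components"]:
    values = {m["metric"]: m.get("value") for m in comp["measures"]}
    print(comp["key"], values)
```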

Q: Could you give an example of cost of bad code that is high?
A: These are examples of engineering mistakes, including software errors, and the related disasters associated with them: Disaster Lessons by Anoop Dixith.

Q: I've been struggling to understand how to publish code coverage results (xunit tests in C#) from my DevOps pipeline to SonarCloud. Are you able to offer any guidance, please?
A: Please review these docs, they may help you: Test Coverage Parameters | SonarCloud Docs

Q: Can someone share how the Cost Impact is really determined? How did you come up with the numbers in the presentation slides?
A: These are some of the sources used: How much could software errors be costing your company? · Raygun Blog; Cost of a data breach 2023 | IBM

Q: In a medium-sized company, with dozens of microservices, and a few hundred devs, do you recommend that all teams converge to a single Quality Profile per Language, or should microservice teams define their own?
A: DATEV uses a baseline Quality Profile and empowers teams to build on it. The teams were left to decide for themselves and chose a stricter approach, so it is mainly a bottom-up approach.

Q: In Visual Studio 2022, with the SonarLint extension, I want to review the code before pushing it to the DevOps Git repository, but the migration fails with: "Migrate Connected Mode to v7". Could you help?
A: Hi Dario, I'd encourage you to post this question to the community.sonarsource.com forum so that we can help you troubleshoot the specific issue. On the surface, it sounds like your IDE or the SonarLint extension itself may be outdated.
