Of course I did. If you're running a mixed-code project (TS, JS, and C#, say), you have exponentially more failure points.
There's no mention of the plethora of issues that can occur to stop these stats from being reported, including but not limited to file paths being logged incorrectly in each of the output files. I've had to write many scripts that take the outputs of various reports and re-mangle them for SQ ingestion.
Not even my fully modern Electron/TS/Angular stack worked out of the box. Some plugins work for a few releases, then fail after an SQ or npm update. Or a test file that's valid for the test runner, but not fully syntactically correct, will silently fail SQ ingestion and just report "no tests".
I'm the DevOps lead at my company, and I feel like my entire life is spent decoding SQ logs and answering questions about why this or that statistic suddenly stopped working. My longest Sonar project takes 2 hours to analyze; the test suite takes 24 hours to run. Small issues like this, plus the fact that SQ gives you no context about what is broken, turn my job into a cluster.
E.g., if SQ said to me "yes, we have a coverage report, we have a unit test report, but we are not showing you any results because the coverage report paths are relative and the unit test report paths are absolute", it would make my life so much easier. But it doesn't. It just fails and errors out with the same codes.
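For anyone hitting the same wall: the "re-mangling" scripts I mentioned are mostly just path normalization. Here's a minimal sketch of the idea, assuming an LCOV-style coverage file; the function name and the project-root parameter are mine, not anything SQ provides:

```python
import os

def normalize_lcov_paths(src, dst, project_root):
    """Rewrite absolute SF: paths in an LCOV report to be relative to
    project_root, so the scanner can match them against indexed files."""
    with open(src) as fin, open(dst, "w") as fout:
        for line in fin:
            if line.startswith("SF:"):
                path = line[len("SF:"):].strip()
                if os.path.isabs(path):
                    # e.g. /home/ci/build/src/app.ts -> src/app.ts
                    path = os.path.relpath(path, project_root)
                fout.write(f"SF:{path}\n")
            else:
                fout.write(line)
```

Trivial, but you end up with a pile of these, one per report format, and you only find out you need each one after a scan silently drops to zero coverage.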