I have searched through many of the posts here but have not found any solutions or explanations, so I am asking for help.
First, I use SonarQube 8.0, running on a server, with the analysis triggered from the SonarQube Maven plugin. It works excellently for checking the overall code.
However, the problem lies with the new code period. I want to get analysis of only the newly added code when I open a pull request on GitHub. To do this, I set the "new code period" to "previous version".
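For reference, I made the setting through the UI, but as I understand it the same thing can be done through the web API. This is only a sketch from my reading of the 8.x docs; the server URL, project key, and token are placeholders for my setup:

```shell
# Placeholder host, token, and project key; adjust to your setup.
# Sets the project-level new code period to "previous version"
# via the SonarQube web API (endpoint name as I understand the 8.x docs).
curl -u "$SONAR_TOKEN:" \
  -X POST "https://sonarqube.example.com/api/new_code_periods/set" \
  -d "project=my-project-key" \
  -d "type=PREVIOUS_VERSION"
```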
This is what I did.
I build the target branch code and trigger SonarQube.
I build the source branch code and trigger SonarQube again.
This is simple and easy to follow, I believe. When I do this, I expect SonarQube to show me the analysis of only the new code; however, the new code metrics are either 0 or show wrong values.
Can someone explain why this is not working as expected, or suggest how to solve this problem?
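Concretely, the two runs are triggered roughly like this (the branch names, project key, and server URL below are placeholders for my actual setup):

```shell
# 1) Analyze the target (base) branch to establish the baseline.
git checkout target-branch            # placeholder branch name
mvn clean verify sonar:sonar \
  -Dsonar.host.url=https://sonarqube.example.com \
  -Dsonar.projectKey=my-project-key

# 2) Analyze the source (PR) branch. With "previous version" as the
#    new code period, I expect only the diff to show up as new code.
git checkout source-branch            # placeholder branch name
mvn clean verify sonar:sonar \
  -Dsonar.host.url=https://sonarqube.example.com \
  -Dsonar.projectKey=my-project-key
```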
Your complaint is about lines, but your activity graph is for issues. Normally we’d see “on X lines to cover” in the Coverage portion of the New Code bar on your project homepage but… you didn’t pass in a coverage report?
Anyway, your 3rd screenshot hints at what might be happening. How many of the lines changed in those commits were lines of code? Not comments, not whitespace, not test code, but compilable production code? This is often the crux in these situations.
(BTW, 8.4 helps with this; there's no more need to start with an analysis of the base branch; the branch point itself becomes the baseline. MMF-1994)
The new Java application code in this PR amounted to about 26 lines of Java code, according to my JaCoCo report as well. It is new code compared to the base branch of the PR (as confirmed by GitHub).
To simplify the problem: most of the added lines were in one POJO (Java model) file, which contained 20 lines by itself, so let's consider just that file. To narrow the problem down further, I ran one analysis on the base branch and checked whether this file exists, then ran another on the PR branch and checked whether the file is treated as a new file. (The "new code period" is set to "previous version", so I expect this whole file to count as new.)
Run analysis of the base branch ([Daily-drive-worker-#297]) -> The file cannot be found (the link existed from the previous analysis, but the file no longer appears in this analysis, so it could be considered deleted).
Then run another analysis on the PR branch ([PR-drive-worker-#111]) -> The file can be found. However, the whole file is not considered new code, even though this analysis is compared to the previous one.
I uploaded the wrong graph in this post earlier, so here is the correct coverage activity graph. I am not sure whether this bug has been fixed in newer versions, but in this graph the new code period does not currently match "previous version", even though the "overview" says "New code: since [Daily-drive-worker-#297]".
As you said, I am happy this feature is supported in 8.4. However, even for now, I expect a correct comparison if I run the analysis twice, on the target and the source branch, one after the other. Is this a bug, or is there a step I have missed in order to get a correct analysis?
What’s the date of the file in question? SCM blame dates are used to determine what’s “new”. Your screenshot of the file shows a few dashes where I would normally expect to see the first few characters of blame data, e.g.:
Which makes me wonder if the data needed to properly date the code is available.
Regarding your graph, that screenshot shows that you’re changing the “version” with every analysis, so setting your New Code period to “Since Previous Version” would be meaningless. I suspect your new code baseline has been manually set to that ‘#295’ analysis. At least, that’s what your analysis list and graph indicate.
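To spell that out: "Since Previous Version" only works if the version passed to the analysis stays constant between actual releases. A minimal sketch of the difference, assuming the standard `sonar.projectVersion` analysis parameter (the version values here are purely illustrative):

```shell
# Problematic: bumping the version on every analysis makes each analysis
# a "new version", so the previous-version baseline moves every time
# and "new code" never accumulates.
mvn sonar:sonar -Dsonar.projectVersion=build-296   # analysis 1
mvn sonar:sonar -Dsonar.projectVersion=build-297   # analysis 2: baseline resets

# Intended: keep the version fixed until a real release, so all analyses
# in between are measured against the same baseline.
mvn sonar:sonar -Dsonar.projectVersion=1.0   # several analyses...
mvn sonar:sonar -Dsonar.projectVersion=1.0   # ...share one baseline
mvn sonar:sonar -Dsonar.projectVersion=1.1   # release: baseline moves here
```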
I now finally understand how it works. It seems very complicated.
If I have only a few analyses that are just a few days old, and I set the baseline to "Number of days" with a value much larger than the history I have, it will only count code added after the first analysis as "new code", contrary to what I expected.
Anyway, I misunderstood how it worked and used it wrongly. Thanks for your clarification.