I have been doing a bunch of work on SonarQube Enterprise Edition
Version 9.9. My current project contains around 55k lines of code to be analyzed. I recently deleted the original “main” branch of the project in the SonarQube UI and have since designated a branch named “dev-branch” as the main branch.
This project is fairly complex. It consists of a number of Java microservices that are all built independently of one another, plus a number of Python scripts that have their own independent tests.
Through GitHub Actions, I have set up a workflow that runs every time a PR is opened, modified, or merged. It uses SonarSource/sonarqube-scan-action@v2 as the primary method of scanning. I pass all of the binaries, test files, and code coverage files directly to this action to scan all of the independent Java microservices and Python files in one single scan.
The issue is that in the SonarQube UI, virtually none of the files have coverage data. This is probably because my project is set up in a very particular way and SonarQube could not automatically recognize these files when tracking was originally set up. My question is: how do I manually trigger a re-scan of the whole repository branch so that all of the files and their coverage are pulled in? Can I do this manually in GitHub Actions? I already have a full workflow that builds and scans modified microservices / files on PRs for Java and Python, so I could configure a new one-time-run pipeline that simply builds and scans everything. Would this work? Could I continue to use SonarSource/sonarqube-scan-action@v2?
I then want to post all of these results to the SonarQube UI: all of the coverage information and all identified code smells / security vulnerabilities / duplications on every line and file, shown in the “Overall Code” section. I want to use this full scan as the benchmark for new code and have all future PRs add to or subtract from the metrics calculated in the one-time scan.
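For reference, the one-off pipeline I have in mind would look roughly like this. This is only a sketch: the workflow name, secret names, and build scripts below are placeholders for my actual setup, not verified values.

```yaml
# Hypothetical one-time full scan, triggered manually via workflow_dispatch.
name: sonarqube-full-scan
on:
  workflow_dispatch:

jobs:
  full-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history, so SonarQube's SCM data is complete

      # Build every Java microservice and run all tests so the compiled
      # classes and JaCoCo XML reports exist before the scan (placeholder script).
      - name: Build all Java services with coverage
        run: ./ci/build-all-with-coverage.sh

      # Run all Python test suites with coverage (placeholder script).
      - name: Run Python tests with coverage
        run: ./ci/run-python-coverage.sh

      - uses: SonarSource/sonarqube-scan-action@v2
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
```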
Any help regarding this would be greatly appreciated. Thank you in advance for your time.
You say you passed in your coverage reports to analysis. Can you share your analysis log?
The analysis / scanner log is what’s output from the analysis command. Hopefully, the log you provide - redacted as necessary - will include that command as well.
I appreciate the response. I may have misstated something. I am currently passing my coverage reports in to the analysis. However, these coverage reports were never passed in for previous scans of the microservices. With my new setup, every microservice that is modified (there are 20+ of them) receives a coverage report on modified lines on each PR through GitHub Actions. The coverage calculation shows up correctly there, so I know the reports are being imported.
My question is: if I manually build every microservice and pass the binaries, tests, and coverage XML files to SonarSource/sonarqube-scan-action@v2, will that populate the coverage data for all of the microservices in the repository? I want to add all of this coverage information because it was never included in the past.
I am curious as to what you think about my proposal for a one-off pipeline to perform this scan. Is this a practical solution, and could this be used for my purposes to populate all of the coverage reports?
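For context, the analysis properties I plan to pass look roughly like this; the paths below are illustrative placeholders for my repository layout, while the property keys are SonarQube's standard coverage-import settings.

```properties
# sonar-project.properties (paths are placeholders)
# JaCoCo XML reports for all Java microservices; comma-separated lists
# and wildcards are accepted.
sonar.coverage.jacocoXmlReportPaths=**/target/site/jacoco/jacoco.xml

# Compiled classes, required for Java analysis (one entry per microservice).
sonar.java.binaries=service-a/target/classes,service-b/target/classes

# Python coverage.xml files produced by coverage.py / pytest-cov.
sonar.python.coverage.reportPaths=**/coverage.xml
```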
I am not building or scanning Java code on every run. Sometimes a PR contains only Python changes, only Java changes, or only Terraform changes, and sometimes both Python and Java changes.
When there is no Java code, the relevant lines of the logs read:
INFO: No report imported, no coverage information will be imported by JaCoCo XML Report Importer
INFO: Sensor JaCoCo XML Report Importer [jacoco] (done) | time=5ms
INFO: Sensor JavaScript inside YAML analysis [javascript]
However, when the PR contains Java code and I compile it and pass in the binaries and XML reports, it reads:
INFO: Sensor JaCoCo XML Report Importer [jacoco]
INFO: Importing 1 report(s). Turn your logs in debug mode in order to see the exhaustive list.
INFO: Sensor JaCoCo XML Report Importer [jacoco] (done) | time=60ms
I can still provide additional log information if needed. Thank you again for your help.
SonarQube reflects what was found by - and passed in to - the last analysis. So analyze your repository with all the coverage reports present, and the coverage for everything should show up.
This might need some untangling. Normally it’s one project in SonarQube per buildable software project, so I would expect each microservice to have its own SonarQube project.
In that context, I’m not sure I understand your scenario for populating coverage for every microservice. Just make sure that running the tests before analysis, and passing the reports in, is part of the pipeline for each microservice.
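In pipeline terms, each microservice's workflow just needs the ordering below; the step names and build command here are only illustrative.

```yaml
# Per-microservice analysis: run tests (and generate coverage reports) first,
# then scan, so the reports exist when the scanner looks for them.
steps:
  - name: Run tests and generate coverage
    run: mvn clean verify   # assumes the JaCoCo report goal runs during 'verify'

  - name: SonarQube scan
    uses: SonarSource/sonarqube-scan-action@v2
    env:
      SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
      SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
```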