SonarQube LTS (8.9.2)
When scanning code, the project overview shows that bugs were found, but clicking through to the details shows nothing. The same is true for code smells and security hotspots.
I’m working with Chris E. on this. This particular issue is blocking us from rolling out an upgrade to 8.9.2 as it makes it seem like we won’t be able to see details of scan results after we roll this out. Appreciate any assistance that can be provided in determining the root cause. This is being observed on a test system with only 16 projects on it.
I can say that I’ve checked over the server-side logs:
- Main logs: clear of errors. “SonarQube is up” is in there.
- Compute engine logs are also clean. Debug level tracing is enabled here as well btw.
- ES logs look clean. Last cluster health status printed is green. (Debug tracing enabled).
- All background tasks report successful. There are no pending background tasks.
Yet most of the (mere 16) projects report bugs and issues, but when you click on them there is nothing there. The only way you can see them is to navigate to the Measures tab, where the bug counts are clearly listed in the margins. We are trying to figure out why there is this discrepancy.
Is it related to this topic: SonarQube fails to open issues page?
That question is a good reminder to note that we’re using the SonarQube-provided Docker image: sonarqube:8.9.2-enterprise. So the instructions in that linked article don’t seem directly applicable to deploying in a container, or so it seems to me.
I’m running out of ideas here. We’ve found that we can get the SonarQube container into this state by restarting through the SonarQube UI. It doesn’t happen every time; more like every other time. Hoping someone from SonarQube can assist here.
Since you are running on Docker, you can delete and recreate the volume created for /opt/sonarqube/data to rebuild the Elasticsearch index. You should do this while SonarQube is shut down.
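Concretely, and assuming a container named `sonarqube` backed by a named volume `sonarqube_data` (adjust both to your actual setup), the steps would look roughly like this:

```shell
# SonarQube must be fully stopped before touching the data directory,
# and the container must be removed before Docker will release the volume.
docker stop sonarqube
docker rm sonarqube

# Drop and recreate the volume backing /opt/sonarqube/data;
# SonarQube rebuilds the Elasticsearch index from the database on next start.
docker volume rm sonarqube_data
docker volume create sonarqube_data

docker run -d --name sonarqube \
  -v sonarqube_data:/opt/sonarqube/data \
  sonarqube:8.9.2-enterprise
```

Note that `docker volume rm` will refuse to remove a volume while any container, even a stopped one, still references it, which is why the container is removed first.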
If this does not resolve the issue, please zip and attach the web.log file from your instance so I can investigate further.
Thanks Brian. Yesterday we found this thread:
which mentioned two instances pointing to the same db.
Our acceptance-testing pipeline wasn’t swapping out the DB URL, so it wasn’t unique, and two instances ended up pointing to the same database. This has been a bug in our test environment for a while, so I’m hesitant to say it was the problem all along. However, I haven’t been able to reproduce the issue since then.
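For anyone hitting the same thing, the fix amounts to giving each pipeline run its own database. A minimal sketch, using the official image’s `SONAR_JDBC_*` environment variables; the `BUILD_ID` variable, the database host, and the naming scheme here are illustrative, not from our actual pipeline:

```shell
# Derive a unique database name per CI run so two SonarQube instances
# can never end up sharing one schema (the suspected root cause above).
# BUILD_ID is assumed to be provided by the CI system.
BUILD_ID="${BUILD_ID:-local-$(date +%s)}"
SONAR_DB="sonar_${BUILD_ID//-/_}"

docker run -d --name "sonarqube-${BUILD_ID}" \
  -e SONAR_JDBC_URL="jdbc:postgresql://db-host:5432/${SONAR_DB}" \
  -e SONAR_JDBC_USERNAME=sonar \
  -e SONAR_JDBC_PASSWORD=sonar \
  sonarqube:8.9.2-enterprise
```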
On a related note, though: you mentioned deleting the data directory while the container is shut down. This may also be a problem for us, as /opt/sonarqube/data isn’t an externally mounted volume in our container, which your response seems to imply it should be.
Did I miss something on the requirements / recommendations around doing this?
Glad to hear you resolved the issue.
Regarding the data directory, our Docker installation instructions recommend that you persist it in an external volume so you do not need to rebuild the index on each restart.
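For reference, the volume layout from the Docker install docs looks like this; the volume names are the conventional ones, so adjust as needed:

```shell
# Named volumes persist data (including the Elasticsearch index),
# installed plugins, and logs across container restarts.
docker volume create sonarqube_data
docker volume create sonarqube_extensions
docker volume create sonarqube_logs

docker run -d --name sonarqube \
  -v sonarqube_data:/opt/sonarqube/data \
  -v sonarqube_extensions:/opt/sonarqube/extensions \
  -v sonarqube_logs:/opt/sonarqube/logs \
  sonarqube:8.9.2-enterprise
```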
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.