Security Hotspots counter is wrong

  • which versions are you using (SonarQube, Scanner, Plugin, and any relevant extension)
    9.8 enterprise

Since my upgrade to 9.8, I see a wrong counter for Security Hotspots.

But when I click on the counter, SonarQube shows me nothing.

And nothing under any status. Any idea?


Welcome to the community!

Are there any errors in your browser console?



Same issue after upgrading to 9.9! Nothing in the browser console.

I guess it’s because I use a Docker image on AWS Fargate, and I reset /opt/sonarqube/data every time.

So maybe the database only has the counters, and all the data are lost after recreating the container.

After trying to understand, I can see that it’s not only hotspots that are empty; bugs & code smells are too.

It’s like every project is empty after the upgrade, but the first page still shows the counters.

Any suggestion on how to investigate?


The “counters” are calculated and stored metric values (i.e. “measures”). They come straight from the DB. I believe your problem is the Elasticsearch indices, which you lose when you reset the data directory. Maybe let them regenerate and then hold on to them?


Thx Ann for your quick answer.

Indeed, if we re-run an analysis, all the data come back in the project.

But in my understanding of SonarQube, all data are in the DB and Elasticsearch reloads its data from the DB. Another strange thing: not all projects are impacted, only some of them.

I don’t know why.


It takes a while to rebuild those Elasticsearch indices, but SonarQube has progressive availability. That means that once a project is fully reindexed, its data will show up but other projects may still be queued.

Your best bet is to stop throwing away your data.


Hello Ann,

I confirm, after a long wait, that SonarQube doesn’t get the data into ES. ES is fully reloaded after 5 minutes, and some projects are not fully loaded. Only the counters (like in my first post) are loaded.

So I ran queries in the DB to see if I had failed or waiting jobs ==> nothing.

No error or warning in the logs. I guess this is a bug in the Docker image if we don’t have a volume for /data.



Ehm… I guess it’s a bug in your implementation if you don’t have a volume for data.

What do you expect to happen if you don’t provide all the pieces required for successfully running SonarQube?

BTW, since we started this thread, the new LTS, SonarQube 9.9, has been released. When you modify your deployment to provide a data volume, you’ll also want to take the upgrade.
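For illustration, here is a minimal sketch of what persisting the data directory could look like with the official image in a docker-compose file; the image tag, volume name, and JDBC settings are placeholders to adapt to your own deployment:

```yaml
# Hypothetical sketch -- image tag, volume name, and JDBC settings are
# placeholders; adapt to your own external PostgreSQL and infrastructure.
version: "3.8"
services:
  sonarqube:
    image: sonarqube:9.9-enterprise
    environment:
      SONAR_JDBC_URL: jdbc:postgresql://db.example.com:5432/sonarqube
      SONAR_JDBC_USERNAME: sonar
      SONAR_JDBC_PASSWORD: sonar
    volumes:
      - sonarqube_data:/opt/sonarqube/data   # keeps data/es between restarts
volumes:
  sonarqube_data:
```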


Hello Ann,

Thanks for your feedback.

What I meant was that normally no important information is stored in the data directory. This directory should only contain the Elasticsearch files, other temporary files, or an H2 database (but I use an external PostgreSQL database). When I recreate or upgrade my container, Elasticsearch rebuilds itself, and I can see the data being reloaded in every project.

Some projects see their issues come back, others don’t, even after waiting several hours.

I don’t see how adding a volume for data will solve my problem.

In the official documentation, using a volume is just recommended, not mandatory. I accept a slower SonarQube at startup, but not losing data.

For information, as mentioned, I am using version 9.9 LTS Enterprise and I was able to reproduce the problem.

I also looked in the database to see if I had any failed or queued jobs, but found nothing.
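For reference, the background task queue can also be checked through the Web API instead of raw DB queries; a sketch using the standard `api/ce/activity` endpoint (the server URL below is a placeholder):

```shell
# Hypothetical server URL -- replace with your own instance.
SONAR_URL="https://sonarqube.example.com"

# Endpoints listing failed and pending background tasks (admin permission needed).
FAILED_URL="$SONAR_URL/api/ce/activity?status=FAILED"
PENDING_URL="$SONAR_URL/api/ce/activity?status=PENDING"
echo "$FAILED_URL"

# With an admin token in $SONAR_TOKEN, the actual calls would be:
#   curl -s -u "$SONAR_TOKEN:" "$FAILED_URL"
#   curl -s -u "$SONAR_TOKEN:" "$PENDING_URL"
```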

Thanks again for your time.


Thanks for confirming you see this in 9.9.

Are you using our official Docker image verbatim, or have you customized it? If so, can you share it here, redacted as necessary?


Hello Ann,

I use the official Docker image without modifications.


Thanks. I’ve flagged this for more expert attention.



I confirm that this behaviour is most likely caused by the Elasticsearch data/es folder being out of sync. Reindexation (or data reload) runs every time SonarQube detects at startup that the indexes are missing from the data/es folder, and it pushes the data into ES from what we have in the database.

To confirm that it’s an indexation issue, can you please try to reindex the issues for one of the problematic projects? You can use this API:

POST api/issues/reindex?project=my_project_key

This will create a new background task to reindex this specific project. Once the task is processed, you should see the data updated.
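For example, this could be driven with curl; the server URL below is a placeholder, and the project key is the one from the endpoint above:

```shell
# Hypothetical values -- replace SONAR_URL with your own instance.
SONAR_URL="https://sonarqube.example.com"
PROJECT_KEY="my_project_key"

# Compose the reindex endpoint for one project.
ENDPOINT="$SONAR_URL/api/issues/reindex?project=$PROJECT_KEY"
echo "$ENDPOINT"

# With an admin token in $SONAR_TOKEN, the actual call would be:
#   curl -s -u "$SONAR_TOKEN:" -X POST "$ENDPOINT"
```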

Hello @pierreguillot,

thanks for your answer.

I confirm, with the POST request to reindex, all issues come back.

So after every upgrade, do I need to execute this command on each project?

This endpoint’s goal is to allow SQ admins to reindex a project individually. The main use case we had in mind: after a full reindexation, if you have some projects in error, you can reindex them without triggering a full reindexation. It is not designed to reindex all issues. You can do it, sure, it will work, but you should not need it.

Let’s dig into your Docker configuration and usage, as it is most likely the root cause of the problem. Could you please describe your Docker setup and how it’s configured?

I’m going to tag an Ops person from the team to investigate your Docker configuration :eyes:

Hello Pierre,

Of course. I deploy my platform with this code: GitHub - cn-terraform/terraform-aws-sonarqube: SonarQube Terraform Module for AWS

Hello @ktibi, thanks a lot for taking the time to participate in the community.

That terraform module is not managed by SonarSource and is therefore not officially supported.

Nonetheless, I want to help you on that topic. Could you provide some more information?

What is the lifecycle of that Docker container? In your Fargate setup, can you ensure the container is properly killed before starting a new one?

Also when you say

and i reset every time the /opt/sonarqube/data

Do you actively delete it, or do you mean that after each creation the folder is empty?

Just to clarify: persistence here is not required. Nonetheless, you must ensure that only one instance of SQ is connected to the database at any time, and that when the container starts, the folder is empty.

A post was split to a new topic: All project in empty after upgrade but first page