Elasticsearch invalid_index_name_exception after upgrade

Hi,

I just upgraded from Community Edition 10.1 to Developer Edition 10.2, following the upgrade procedure.
This is a zip install on 64-bit Linux.
The database migration seems to have succeeded (I checked via the /setup URL), but the web server isn't available.

Elasticsearch continuously restarts, making the app unavailable.

I tried removing the es8 data folder, with no success. Exploring the REST API, it seems there is a problem with an index name, but I don't know how to fix it. The last log lines might contain the important information.
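For reference, here is what I did, assuming the default zip layout (my install is under /opt/sonarqube, adjust to yours):

sudo systemctl stop sonarqube.service
sudo rm -rf /opt/sonarqube/data/es8    # this folder is rebuilt from the database on the next start
sudo systemctl start sonarqube.service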

es.log sample

2023.09.12 15:58:28 INFO  es[][o.e.h.n.s.HealthNodeTaskExecutor] Node [{sonarqube}{9OETZqmZRG6bsXYpl7PyQg}] is selected as the current health node.
2023.09.12 15:58:28 INFO  es[][o.e.c.r.a.AllocationService] current.health="GREEN" message="Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[metadatas][0]]])." previous.health="RED" reason="shards started [[metadatas][0]]"
2023.09.12 15:58:34 INFO  es[][o.e.n.Node] stopping ...
2023.09.12 15:58:34 INFO  es[][o.e.r.s.FileSettingsService] shutting down watcher thread
2023.09.12 15:58:34 INFO  es[][o.e.r.s.FileSettingsService] watcher service stopped
2023.09.12 15:58:34 INFO  es[][o.e.n.Node] stopped
2023.09.12 15:58:34 INFO  es[][o.e.n.Node] closing ...
2023.09.12 15:58:34 INFO  es[][o.e.n.Node] closed

This sequence loops every 10-15 seconds.

API health response

{
  "name" : "sonarqube",
  "cluster_name" : "sonarqube",
  "cluster_uuid" : "8jCGvifOQHuV4zeCDGvI_A",
  "version" : {
    "number" : "8.7.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "09520b59b6bc1057340b55750186466ea715e30e",
    "build_date" : "2023-03-27T16:31:09.816451435Z",
    "build_snapshot" : false,
    "lucene_version" : "9.5.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

API _info response

{
  "error": {
    "root_cause": [
      {
        "type": "invalid_index_name_exception",
        "reason": "Invalid index name [_info], must not start with '_'.",
        "index_uuid": "_na_",
        "index": "_info"
      }
    ],
    "type": "invalid_index_name_exception",
    "reason": "Invalid index name [_info], must not start with '_'.",
    "index_uuid": "_na_",
    "index": "_info"
  },
  "status": 400
}
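For context, these responses come from querying the embedded Elasticsearch directly; assuming the default sonar.search.port (9001), roughly:

curl -s 'http://localhost:9001/'                        # the "API health" response above
curl -s 'http://localhost:9001/_info'                   # the error above
curl -s 'http://localhost:9001/_cluster/health?pretty'  # this one answers normally

If I understand correctly, the _info error only means this ES version doesn't recognize /_info as an API route and parses it as an index name, so it may be a red herring rather than the actual problem.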

Thanks for your help!

Hi,

What percentage of free disk space do you have on the SonarQube host? If you're not seeing errors anywhere else in the logs, then it's likely that you need to free up some space. I believe Elasticsearch wants at least 5% free, regardless of how large your disk is (there's a quick way to check after the list below).

The cycle you’re seeing in the logs is:

  1. detect too-little free space
  2. lock the indices and shut down
  3. presumably manual restart
  4. unlock the indices on restart (if they were locked)
  5. GOTO 1
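A quick way to verify both, assuming a default zip install under /opt/sonarqube and the embedded Elasticsearch on its default port 9001:

df -h /opt/sonarqube                                # free space on the partition holding the install
curl -s 'http://localhost:9001/_cat/allocation?v'   # disk usage as Elasticsearch sees it

I believe the relevant setting is the flood-stage watermark, which defaults to 95% used (i.e. 5% free); past that, Elasticsearch marks indices read-only.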

 
HTH,
Ann

Hi,

Thanks for your response. I wasn't aware of that. There is 68% (2.6 GB) of free space on this partition.
(The other partitions on this server also have at least 45% free space.)

Granted, that partition is not very big in absolute terms, but could that be the cause you describe?
I'll ask for a partition extension, but I'm not sure this is the problem.

About the 3rd point of the loop: the restart is 'automatic'. The cycle has been repeating every ~30 seconds for 2 days now (to be fair, almost nobody knows about this instance yet).

Hi,

You don’t need to ask for a partition expansion. Elasticsearch only looks at the percentage of free disk, not the amount.

You said this was a zip install. I think we've seen restart loops in Docker/Helm contexts, but SonarQube will not restart itself. You have some other mechanism at play here. Perhaps you've got it set up to run as a service, and it's the OS that keeps restarting it?

Anyway, if you've got plenty of free disk (percentage-wise), then let's back up and take a look at your server logs. Anything in the other logs?
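To see what's restarting it, and to pull the other logs, something along these lines (paths assume a default zip install under /opt/sonarqube):

systemctl status sonarqube.service                  # is it running as a systemd service?
systemctl cat sonarqube.service | grep -i restart   # Restart=always or on-failure would explain the loop

tail -n 100 /opt/sonarqube/logs/sonar.log   # launcher
tail -n 100 /opt/sonarqube/logs/web.log     # web server
tail -n 100 /opt/sonarqube/logs/ce.log      # compute engine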

 
Ann


Thanks for your responsiveness!

Your question helped me solve the problem (and I'm a little bit ashamed…).

Looking at web.log, I found a java.io.IOException while SonarQube was trying to create a directory for an extension.
That's when I realized the whole installation folder was owned by root:root. Changing it to sonarqube:sonarqube fixed it.
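In case it helps someone else, the fix was simply (with /opt/sonarqube as my install path):

sudo chown -R sonarqube:sonarqube /opt/sonarqube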

I just had to restart the service and everything now works perfectly:

sudo systemctl restart sonarqube.service

I suppose the auto-restart comes from the service.

Thanks so much and sorry for the inconvenience!

