The rule java:S3027 claims that String.indexOf(char) performs better than String.indexOf(String) and should be preferred when searching for a single character.
So

```java
"sometext".indexOf('x');
```

performs better than

```java
"sometext".indexOf("x");
```
I wanted to see the significance of this and wrote a small test program.
The result was quite surprising, because it showed the opposite:
String.indexOf(String) performed orders of magnitude better than indexOf(char).
I tested with

- Oracle JDK 1.8
- OpenJDK 11
- OpenJDK 13

on my Windows notebook. The result was always similar: indexOf(String) was several thousand times faster than indexOf(char).
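For reference, a naive timing loop along these lines could produce such numbers. This is a hypothetical reconstruction (the original test program is not shown, and the class and constant names are mine); note that the return value of indexOf is discarded, which allows the JIT to eliminate the calls and skews the timings:

```java
// Hypothetical reconstruction of a naive microbenchmark.
// Flaw: the return value of indexOf is never used, so the JIT
// may treat the calls as dead code and remove them entirely.
public class NaiveIndexOfBenchmark {
    static final String HAYSTACK = "sometext";
    static final int ITERATIONS = 10_000_000;

    public static void main(String[] args) {
        long start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            HAYSTACK.indexOf('x');          // result ignored
        }
        long charNanos = System.nanoTime() - start;

        start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            HAYSTACK.indexOf("x");          // result ignored
        }
        long stringNanos = System.nanoTime() - start;

        System.out.println("indexOf(char):   " + charNanos + " ns");
        System.out.println("indexOf(String): " + stringNanos + " ns");
    }
}
```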
I do not claim this to be anything close to a scientific approach, but the huge difference in the results worries me anyway.
The rule does seem to be valid for String.lastIndexOf(String), though the difference is not as significant as I expected.
Question to the SonarQube team:
What is the rationale behind “blaming” the indexOf(String) function?
Was it true for older JDKs?
First, I'm afraid your test code is incorrect; you can have a look at this SO post, which describes a similar mistake. To sum up, you have to use the return value of indexOf for the measurement to be valid; otherwise the JIT can eliminate the calls as dead code.
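A minimal plain-Java sketch of a corrected measurement (a real benchmark would use JMH and its Blackhole; the class and method names here are hypothetical). Accumulating the return values into a sink that is later observed keeps the JIT from eliminating the calls:

```java
// Sketch of a corrected measurement: the indexOf results are
// accumulated and made observable, so the calls cannot be
// removed as dead code. Names are illustrative, not from the thread.
public class FixedIndexOfBenchmark {
    static final String HAYSTACK = "sometext";

    static long run(int iterations, boolean useChar) {
        long sink = 0;                       // consume every result
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += useChar ? HAYSTACK.indexOf('x') : HAYSTACK.indexOf("x");
        }
        long elapsed = System.nanoTime() - start;
        if (sink < 0) {                      // keep sink observable to the JIT
            System.out.println(sink);
        }
        return elapsed;
    }

    public static void main(String[] args) {
        System.out.println("indexOf(char):   " + run(10_000_000, true) + " ns");
        System.out.println("indexOf(String): " + run(10_000_000, false) + " ns");
    }
}
```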
That being said, the difference is not orders of magnitude, but it is still questionable.
By playing a little bit with your (fixed) test, JMH, and different Java versions, it seems that the result depends on the Java version (and potentially other factors). The char version performs slightly better on 8 and slightly worse on 11. Of course, this is just an observation, nothing formal.
I also observed that other linters provide a similar rule, and people seem to accept it without questioning.
All in all, I believe we should do something about it; I created a ticket (SONARJAVA-3339) to keep track of it.