Custom Plugin - How to analyze only the new lines of code within the plugin?

What are you trying to accomplish?

I’m developing a custom SonarQube plugin. The main idea is to analyze the naming of new tests added in a pull request and check if they are valid. Additionally, I want to add a quality gate that evaluates the correctness of test names for new code, based on the percentage of correctly named tests. It should work similarly to how new code coverage is evaluated (i.e., it shouldn’t be less than a specified threshold, like x%).

What’s your specific coding challenge in developing your plugin?

The main challenge is that it’s unclear how to analyze only the new code. I know I can iterate over every file in the project using a ProjectSensor and the FileSystem API, but that covers all files and all lines, not just the new ones:

  @Override
  public void execute(SensorContext context) {
      FileSystem fs = context.fileSystem();
      for (InputFile inputFile : fs.inputFiles(fs.predicates().all())) {
          // Read the file with its declared charset and walk it line by line
          try (InputStreamReader isr = new InputStreamReader(inputFile.inputStream(), inputFile.charset());
               BufferedReader reader = new BufferedReader(isr)) {
              reader.lines().forEachOrdered(lineStr -> {
                  // ... check each line against the naming rules
              });
          } catch (IOException e) {
              // handle unreadable file
          }
      }
  }
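As a side note, the FileSystem predicates can at least narrow that iteration to files SonarQube has indexed as test sources (still every test file, not only the new ones). A minimal sketch, assuming Java test sources:

  // Restrict the scan to indexed test files of a given language.
  FileSystem fs = context.fileSystem();
  Iterable<InputFile> testFiles =
      fs.inputFiles(fs.predicates().and(
          fs.predicates().hasType(InputFile.Type.TEST),
          fs.predicates().hasLanguage("java")));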

PS I know how to:

  • Setup the quality gate with appropriate measures.
  • Parse lines and process them in the format I need.
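For illustration, the naming check itself could be a simple regex match. This is only a sketch; the `shouldX_whenY` convention and the `TestNameCheck` class name are assumptions, not part of any actual implementation:

```java
import java.util.regex.Pattern;

public class TestNameCheck {
    // Assumed convention, purely for illustration: shouldDoX_whenY
    private static final Pattern CONVENTION =
        Pattern.compile("should\\w+_when\\w+");

    public static boolean isValidTestName(String methodName) {
        return CONVENTION.matcher(methodName).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidTestName("shouldReturnTrue_whenInputValid")); // true
        System.out.println(isValidTestName("test1")); // false
    }
}
```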

Would appreciate any suggestions!

Hey there.

SonarQube itself determines what is considered “new code” using its own mechanisms (like SCM data and issue tracking), and handles new issues accordingly. It is, as far as I’m aware, not possible to develop custom metrics or measures that specifically target new code via the SonarQube Plugin API. Calculating measures (like new_coverage) requires additional processing server-side by the Compute Engine.

Instead, the recommended approach is to raise Issues from your plugin for any invalid test names found—SonarQube will automatically classify which issues are “new” based on its built-in logic. If your quality gate is set to fail on new issues, it will react accordingly for pull requests and new code on branches.
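Raising such an issue from a sensor looks roughly like the sketch below. The repository key `my-test-rules` and rule key `invalid-test-name` are placeholders you would replace with the keys registered by your own rules definition:

  // Sketch: report an invalid test name at a specific line of a file.
  void reportInvalidTestName(SensorContext context, InputFile file, int line) {
      NewIssue issue = context.newIssue()
          .forRule(RuleKey.of("my-test-rules", "invalid-test-name"));
      issue.at(issue.newLocation()
              .on(file)
              .at(file.selectLine(line))
              .message("Test name does not follow the naming convention"))
          .save();
  }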

I’d be interested in understanding your motivation for having this as a metric!


Hey Colin, thank you so much for your answer — this information is very helpful to us!

It seems that issues could also serve as an alternative solution in this case. However, for our team, it feels more intuitive to see the percentage of correct elements in new code (e.g., in PRs) — such as proper naming conventions, the proportion of unit vs. integration tests, correctness of method bodies, etc. From our experience, it’s more helpful for developers to clearly see how much work remains and to accurately configure the requirements for new code.

Overall, your answer is clear — thank you once again! We’d be happy to see the possibility in the future to analyze source code directly within the plugin and calculate metrics based on that!