Is the scanner just adding the times of the testcases?

The time metric doesn't really add up. It seems the scanner calculates the time spent on testing by simply summing the times of the test cases, without considering that they run in parallel. In this example the pipeline step takes 4 min and the root <testsuite> element's time also looks correct, but Sonar reports 15 min. The numbers only make sense when you add up the times of the individual cases, which run in parallel.

<testsuite name="TestSuite" time="155.149" tests="47" errors="0" skipped="0" failures="11">
  <testcase name=" id:pathToStringTxtPayloadContainer " classname="io.testifi.util.file.FileUtilTestAppRegression" time="15.334"/>
  <testcase name=" id:searchFileNotFound " classname="io.testifi.util.file.FileUtilTestAppRegression" time="59.508"/>
  <testcase name=" id:createNewFile fileName::&quot;[fileTest3.txt]&quot; expectedException2::&quot;[AssertionError]&quot; mock::&quot;[File.setWritable]&quot; " classname="io.testifi.util.file.FileUtilTestAppRegression" time="12.391"/>
  <testcase name=" id:pathToStringTxtSpecial fileName::&quot;[fileNameWithPlus+.txt]&quot; fileContentToCheck::&quot;[Test me! with +]&quot; " classname="io.testifi.util.file.FileUtilTestAppRegression" time="7.204"/>
  <testcase name=" id:pathToStringTxtSpecial fileName::&quot;[fileNameWithDot..txt]&quot; fileContentToCheck::&quot;[Test me! with .]&quot; " classname="io.testifi.util.file.FileUtilTestAppRegression" time="12.181"/>
  <testcase name=" id:findFilesInSubDirectoryTooWithMultiplePattern " classname="io.testifi.util.file.FileUtilTestAppRegression" time="20.767"/>
  <testcase name=" id:findFilesNoPattern " classname="io.testifi.util.file.FileUtilTestAppRegression" time="20.739"/>
  <testcase name=" id:findFilesInSubDirectoryTooNoPattern " classname="io.testifi.util.file.FileUtilTestAppRegression" time="11.603"/>
  <testcase name=" id:createNewFile fileName::&quot;[fileTest.txt]&quot; " classname="io.testifi.util.file.FileUtilTestAppRegression" time="12.623"/>
  <testcase name=" id:streamToString " classname="io.testifi.util.file.FileUtilTestAppRegression" time="10.799"/>
  <testcase name=" id:createNewFile mockException::&quot;[]&quot; fileName::&quot;[fileTest3.txt]&quot; expectedException2::&quot;[AssertionError]&quot; mock::&quot;[File.setWritable]&quot; " classname="io.testifi.util.file.FileUtilTestAppRegression" time="12.632"/>
  <testcase name=" id:findFilesNoFiles " classname="io.testifi.util.file.FileUtilTestAppRegression" time="20.477"/>
  <testcase name=" id:pathToStringTxtSpecial fileName::&quot;[fileNameWithPercentage%.txt]&quot; fileContentToCheck::&quot;[Test me! with %]&quot; " classname="io.testifi.util.file.FileUtilTestAppRegression" time="20.423"/>
  <testcase name=" id:pathToStringTxtSpecial fileName::&quot;[fileNameWithSemicolon;.txt]&quot; fileContentToCheck::&quot;[Test me! with ;]&quot; " classname="io.testifi.util.file.FileUtilTestAppRegression" time="20.446"/>
  <testcase name=" id:provideFileJar " classname="io.testifi.util.file.FileUtilTestAppRegression" time="21.092"/>
  <testcase name=" id:provideFileJare " classname="io.testifi.util.file.FileUtilTestAppRegression" time="20.733"/>
  <testcase name=" id:pathToStringTxtSpecial fileName::&quot;[fileNameWithAnd&amp;.txt]&quot; fileContentToCheck::&quot;[Test me! with &amp;]&quot; " classname="io.testifi.util.file.FileUtilTestAppRegression" time="12.484"/>
  <testcase name=" id:streamsToZipTwoStreams " classname="io.testifi.util.file.FileUtilTestAppRegression" time="8.589"/>
  <testcase name=" id:fileToString " classname="io.testifi.util.file.FileUtilTestAppRegression" time="11.306"/>
  <testcase name=" id:pathToStringTxtMultipleLines " classname="io.testifi.util.file.FileUtilTestAppRegression" time="13.926"/>
  <testcase name=" id:createNewFile fileName::&quot;[fileTest4.txt]&quot; expectedException2::&quot;[FileNotFoundException]&quot; mock::&quot;[File.mkdirs]&quot; expectedException::&quot;[IOError]&quot; " classname="io.testifi.util.file.FileUtilTestAppRegression" time="11.431"/>
  <testcase name=" id:findFilesInvalidFile " classname="io.testifi.util.file.FileUtilTestAppRegression" time="20.012"/>
  <testcase name=" id:pathToStringTxtSpecial fileName::&quot;[fileNameWithComma,.txt]&quot; fileContentToCheck::&quot;[Test me! with ,]&quot; " classname="io.testifi.util.file.FileUtilTestAppRegression" time="12.081"/>
  <testcase name=" id:createNewFile fileName::&quot;[fileTest5.txt]&quot; expectedException2::&quot;[FileNotFoundException]&quot; mock::&quot;[File.createNewFile]&quot; expectedException::&quot;[IOError]&quot; " classname="io.testifi.util.file.FileUtilTestAppRegression" time="13.034"/>
  <testcase name=" id:findFilesWithPattern " classname="io.testifi.util.file.FileUtilTestAppRegression" time="20.726"/>
  <testcase name=" id:pathToStringTxtSpecial fileName::&quot;[fileNameWithSharp#.txt]&quot; fileContentToCheck::&quot;[Test me! with #]&quot; " classname="io.testifi.util.file.FileUtilTestAppRegression" time="20.742"/>
  <testcase name=" id:streamsToZip " classname="io.testifi.util.file.FileUtilTestAppRegression" time="10.502"/>
  <testcase name=" id:findFilesInSubDirectoryTooWithPattern " classname="io.testifi.util.file.FileUtilTestAppRegression" time="20.529"/>
  <testcase name=" id:createNewFileAndChangeContent " classname="io.testifi.util.file.FileUtilTestAppRegression" time="11.369"/>
  <testcase name=" id:pathToStringTxtSpecial fileName::&quot;[fileNameWithDollar$.txt]&quot; fileContentToCheck::&quot;[Test me! with $]&quot; " classname="io.testifi.util.file.FileUtilTestAppRegression" time="13.017"/>
  <testcase name=" id:pathToStringTxt " classname="io.testifi.util.file.FileUtilTestAppRegression" time="14.105"/>
  <testcase name=" id:pathToStringZip " classname="io.testifi.util.file.FileUtilTestAppRegression" time="20.516"/>
  <testcase name=" id:searchFileLicence " classname="io.testifi.util.file.FileUtilTestAppRegression" time="10.955"/>
</testsuite>
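
To make the discrepancy concrete, here is a minimal sketch (with invented numbers, not the report above) of the difference between the suite-level wall-clock time and a naive sum of the per-case times, which double-counts parallel overlap:

```python
import xml.etree.ElementTree as ET

# Hypothetical report: two test cases that ran in parallel. The
# suite-level "time" is the wall clock; summing the per-case times
# counts the overlapping execution twice.
report = """\
<testsuite name="Suite" time="60.0" tests="2">
  <testcase name="a" time="59.5"/>
  <testcase name="b" time="60.0"/>
</testsuite>"""

suite = ET.fromstring(report)
wall_clock = float(suite.get("time"))
summed = sum(float(tc.get("time")) for tc in suite.iter("testcase"))

print(f"wall clock: {wall_clock}s, summed cases: {summed}s")
```

With two fully parallel ~60 s cases, the wall clock stays at 60 s while the sum is almost twice that, which matches the 4 min vs. 15 min gap described above.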

Hey there.

Yes – that’s how it’s working.

To be honest – we have thought for a while that these metrics don't make sense in SonarQube/SonarCloud. Do you find them useful for something (or would you miss them if they were gone)?

Hey. I asked the question to my team. I think the only purpose would be to see increasing or decreasing times, which might point to performance issues in the SUT or the infrastructure. I will come back when my team has discussed it.


Thank you! This kind of feedback is great for us, and we really appreciate it.


Hey again. So the team also has the feeling that the test runtime metric is not really useful, and no one can say whether a given value is good or bad. The only thing that might help is a metric showing runtime changes for a single test. It might be that the test itself got more complicated, but assuming the test didn't change, a runtime increase might be a hint of a performance issue.
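
For what it's worth, that kind of per-test comparison can also be scripted outside Sonar by diffing two JUnit reports. A minimal sketch, using hypothetical report snippets and a made-up 20% slowdown threshold:

```python
import xml.etree.ElementTree as ET

def case_times(xml_text):
    """Map each testcase name to its reported time in a JUnit report."""
    suite = ET.fromstring(xml_text)
    return {tc.get("name"): float(tc.get("time")) for tc in suite.iter("testcase")}

# Hypothetical before/after reports for the same test (invented numbers).
old = case_times('<testsuite><testcase name="searchFileNotFound" time="59.508"/></testsuite>')
new = case_times('<testsuite><testcase name="searchFileNotFound" time="75.120"/></testsuite>')

# Flag tests that got more than 20% slower between the two runs.
slow = {name: (old[name], t) for name, t in new.items()
        if name in old and t / old[name] > 1.2}
print(slow)
```

Run against two consecutive builds, this would surface exactly the "test didn't change but got slower" case without relying on Sonar's aggregated metric.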