Must-share information (formatted with Markdown):
- SonarQube 9.4.0-DE, dockerized
- what are you trying to achieve
Understanding metric results, some numbers seem unreasonably high
- what have you tried so far to achieve this
Following a query to the
`.../api/monitoring/metrics` endpoint, we observed results in the
trillions of seconds:
Full report attached.
compute_engine_statistics.log (11.7 KB)
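For anyone wanting to sanity-check the raw numbers before visualizing them, here is a minimal sketch that parses Prometheus/OpenMetrics exposition lines like those returned by `/api/monitoring/metrics`. The metric name used in the sample is a hypothetical placeholder, not an actual SonarQube metric name, and the sketch ignores optional trailing timestamps:

```python
# Minimal parser for OpenMetrics/Prometheus exposition text, such as the
# output of SonarQube's /api/monitoring/metrics endpoint.
# Comment lines (# HELP, # TYPE) are skipped; each sample line has the
# shape "metric_name{labels} value". Trailing timestamps are not handled.

def parse_metrics(text):
    """Return a dict mapping 'name{labels}' to a float sample value."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        name, _, value = line.rpartition(" ")
        samples[name] = float(value)  # values like 1.65164712918E9 parse fine
    return samples

# Hypothetical metric name for illustration; real names come from the endpoint.
sample = """\
# HELP sonarqube_compute_engine_tasks_created Example help text
# TYPE sonarqube_compute_engine_tasks counter
sonarqube_compute_engine_tasks_created 1.65164712918E9
"""
print(parse_metrics(sample))
```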
How best would these statistics be visualized?
You’ve redacted all the project keys and… I guess sorted the log by duration?
Could you at least tell us whether the long times all come from the same set of projects?
They’re all from different projects. Not sorted, just collected as-is from the API.
So, just all of a sudden everything started taking a very long time? Have you looked at your DB? The network? Because what’s happening is that SonarQube is calculating values and storing them in the DB.
Not at all; this is the first time we are observing these metrics, and we are not sure how to interpret them.
To be honest, I’m not sure either. I’ve flagged this for more expert attention.
All metrics suffixed with
`_created` correspond to the time at which the metric was first created; this is defined in the OpenMetrics standard.
The time is expressed in seconds since the Unix epoch. You can use a website like epochconverter to convert it to a more human-readable time.
For example, `1.65164712918E9` translates to
Wednesday, 4 May 2022 06:52:09.180 (UTC).
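The same conversion can be reproduced in a couple of lines of Python, as a quick alternative to a website:

```python
from datetime import datetime, timezone

# _created values are seconds since the Unix epoch (a float, so the
# fractional part carries sub-second precision).
created = 1.65164712918e9
dt = datetime.fromtimestamp(created, tz=timezone.utc)
print(dt.strftime("%A, %d %B %Y %H:%M:%S"))  # → Wednesday, 04 May 2022 06:52:09
```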
Please give me an idea of how to visualize these metrics (in Grafana):
Is it something like a rate?
`_count` is the number of scans
`_sum` is the total duration
You can have a look at the metric type documentation, which also explains how to get meaningful information from it.
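As a concrete illustration of how Grafana (or any PromQL-style query) turns these counters into something meaningful: the average duration per scan over an interval is the increase of `_sum` divided by the increase of `_count` between two scrapes. A minimal sketch with made-up metric names and sample values:

```python
# Two hypothetical scrapes of a summary metric, taken some interval apart.
# _count = cumulative number of scans, _sum = cumulative total duration (s).
scrape_t0 = {"tasks_count": 120.0, "tasks_sum": 3600.0}
scrape_t1 = {"tasks_count": 126.0, "tasks_sum": 3690.0}

def avg_duration(t0, t1):
    """Average duration per scan over the interval between two scrapes,
    i.e. increase(_sum) / increase(_count) -- the same idea as a Grafana
    panel querying rate(..._sum[5m]) / rate(..._count[5m])."""
    dcount = t1["tasks_count"] - t0["tasks_count"]
    dsum = t1["tasks_sum"] - t0["tasks_sum"]
    return dsum / dcount if dcount else float("nan")

print(avg_duration(scrape_t0, scrape_t1))  # → 15.0 seconds per scan
```

Because both series are cumulative counters, looking at their raw values (which is where the "trillions of seconds" impression can come from when combined with `_created` timestamps) is rarely useful on its own; the rate-of-increase view is what belongs on a dashboard.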