API requests sometimes very slow

Hello,

we use SonarQube 9.7 (Developer Edition). We need to collect metrics for each project, and we noticed that the API call /api/measures/search_history is often very slow.

To give an example, I measured the response time with curl:

$ time curl -k 'https://sonarqube.service.local/api/measures/search_history?additionalFields=periods%2Cmetrics&ps=1&component=my.project%3Atest&from=2022-04-25T10%3A10%3A06%2B0000&metrics=complexity%2Ccognitive_complexity%2Cduplicated_blocks%2Cduplicated_files%2Cduplicated_lines%2Cduplicated_lines_density%2Cnew_violations%2Cnew_blocker_violations%2Cnew_critical_violations%2Cnew_major_violations%2Cnew_minor_violations%2Cnew_info_violations%2Cviolations%2Cfalse_positive_issues%2Copen_issues%2Cconfirmed_issues%2Creopened_issues%2Ccode_smells%2Cnew_code_smells%2Csqale_rating%2Csqale_index%2Cnew_technical_debt%2Csqale_debt_ratio%2Cnew_sqale_debt_ratio%2Cbugs%2Cnew_bugs%2Creliability_remediation_effort%2Cnew_reliability_remediation_effort%2Cvulnerabilities%2Cnew_vulnerabilities%2Csecurity_remediation_effort%2Cnew_security_remediation_effort%2Cclasses%2Ccomment_lines%2Ccomment_lines_density%2Cdirectories%2Cfiles%2Clines%2Cncloc%2Cfunctions%2Cprojects%2Cbranch_coverage%2Cnew_branch_coverage%2Cconditions_by_line%2Ccovered_conditions_by_line%2Ccoverage%2Cnew_coverage%2Cline_coverage%2Cnew_line_coverage%2Ccoverage_line_hits_data%2Clines_to_cover%2Cnew_lines_to_cover%2Cskipped_tests%2Cuncovered_conditions%2Cnew_uncovered_conditions%2Cuncovered_lines%2Cnew_uncovered_lines%2Ctests%2Ctest_execution_time%2Ctest_errors%2Ctest_failures%2Ctest_success_density&ps=500&qualifiers=TRK&to=2022-10-24T23%3A10%3A06%2B0000' -H 'authority: sonarqube.service.local' … --compressed > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 14039    0 14039    0     0   1376      0 --:--:--  0:00:10 --:--:--  3538

real 0m10.207s
user 0m0.009s
sys 0m0.006s

$ time curl -k 'https://sonarqube.service.local/api/measures/search_history?additionalFields=periods%2Cmetrics&ps=1&component=my.project%3Atest&from=2022-04-25T10%3A10%3A06%2B0000&metrics=complexity%2Ccognitive_complexity%2Cduplicated_blocks%2Cduplicated_files%2Cduplicated_lines%2Cduplicated_lines_density%2Cnew_violations%2Cnew_blocker_violations%2Cnew_critical_violations%2Cnew_major_violations%2Cnew_minor_violations%2Cnew_info_violations%2Cviolations%2Cfalse_positive_issues%2Copen_issues%2Cconfirmed_issues%2Creopened_issues%2Ccode_smells%2Cnew_code_smells%2Csqale_rating%2Csqale_index%2Cnew_technical_debt%2Csqale_debt_ratio%2Cnew_sqale_debt_ratio%2Cbugs%2Cnew_bugs%2Creliability_remediation_effort%2Cnew_reliability_remediation_effort%2Cvulnerabilities%2Cnew_vulnerabilities%2Csecurity_remediation_effort%2Cnew_security_remediation_effort%2Cclasses%2Ccomment_lines%2Ccomment_lines_density%2Cdirectories%2Cfiles%2Clines%2Cncloc%2Cfunctions%2Cprojects%2Cbranch_coverage%2Cnew_branch_coverage%2Cconditions_by_line%2Ccovered_conditions_by_line%2Ccoverage%2Cnew_coverage%2Cline_coverage%2Cnew_line_coverage%2Ccoverage_line_hits_data%2Clines_to_cover%2Cnew_lines_to_cover%2Cskipped_tests%2Cuncovered_conditions%2Cnew_uncovered_conditions%2Cuncovered_lines%2Cnew_uncovered_lines%2Ctests%2Ctest_execution_time%2Ctest_errors%2Ctest_failures%2Ctest_success_density&ps=500&qualifiers=TRK&to=2022-10-24T23%3A10%3A06%2B0000' -H 'authority: sonarqube.service.local' … --compressed > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 14039    0 14039    0     0   101k      0 --:--:-- --:--:-- --:--:--  101k

real 0m0.139s
user 0m0.007s
sys 0m0.006s

The same API request sometimes takes 10 seconds. Why does this happen?

Regards,
Patrick

Hey there.

These requests are backed by calls to your database. What database provider are you using, and how’s it looking wrt CPU/Memory/Disk space when these calls are made (and are slow)?
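
If it turns out to be PostgreSQL (just an assumption at this point), one quick check while a call is hanging is pg_stat_activity, for example along these lines:

-- List statements that have been running for more than one second
-- (adjust for your actual database provider).
select pid,
       now() - query_start as runtime,
       state,
       wait_event_type,
       wait_event,
       left(query, 120) as query
from pg_stat_activity
where state <> 'idle'
  and now() - query_start > interval '1 second'
order by runtime desc;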

Hello Colin,

we know that SELECT statements like the following are triggered by this API call:

select
  pm.uuid as uuid,
  pm.metric_uuid as metricUuid,
  pm.component_uuid as componentUuid,
  pm.analysis_uuid as analysisUuid,
  pm.value as value,
  pm.text_value as textValue,
  pm.alert_status as alertStatus,
  pm.alert_text as alertText,
  pm.measure_data as dataValue,
  pm.variation_value_1 as variation
from project_measures pm
inner join snapshots analysis on analysis.uuid = pm.analysis_uuid
where pm.component_uuid = 'AdlJcyRJ8A87dNX852__'
  and analysis.created_at >= '1650931810000'
  and analysis.created_at < '1666703411000'
  and pm.metric_uuid in ('45', '53', '52', '1', '3', '7', '8', '9', '10', '14', '15', '16', '20', '29', '30', '31', '32', '33', '34', '35', '37', '38', '39', '40', '41', '42', '43', '44', '48', '49', '50', '51', '88', '90', '92', '93', '96', '102', '103', '104', '105', '106', '107', '108', '110', '111', '112', '113', '114', '115', '116', '117', '118', '119', '120', '121', '124', '125', '127', '128', '131', '132')
  and analysis.status = 'P';

The statement sometimes takes several seconds to complete when it is triggered by the API call. When I run the SQL query manually with "psql", it always completes in under 1 second, or under 2 seconds if there is load on the database, but never 10 seconds like the API call. This is inconsistent behaviour that we don't understand.
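
A next step on our side could be to capture the actual plan and buffer usage of this statement via psql; a minimal sketch (metric list shortened for readability, otherwise the same predicates as above):

-- Actual execution plan and I/O of the statement triggered by the API call
explain (analyze, buffers)
select pm.uuid,
       pm.metric_uuid,
       pm.value,
       pm.text_value
from project_measures pm
inner join snapshots analysis on analysis.uuid = pm.analysis_uuid
where pm.component_uuid = 'AdlJcyRJ8A87dNX852__'
  and analysis.created_at >= '1650931810000'
  and analysis.created_at < '1666703411000'
  and pm.metric_uuid in ('45', '53', '52', '1', '3')  -- shortened
  and analysis.status = 'P';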

When the API calls run for every project, we see 100% CPU spikes on the database (PostgreSQL 11.16, RDS db.t3.medium). Afterwards it goes back to normal (10%-20%). We analyze about 4 million LOC.
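
To see whether it is this one statement that accumulates the CPU time across all projects, we could also look at pg_stat_statements (assuming the extension is enabled on the RDS instance; the column names below are the PostgreSQL 11 ones):

-- Slowest statements by worst-case runtime, PostgreSQL 11 column names
select calls,
       round(total_time::numeric, 1) as total_ms,
       round(mean_time::numeric, 1)  as mean_ms,
       round(max_time::numeric, 1)   as max_ms,
       left(query, 120)              as query
from pg_stat_statements
order by max_time desc
limit 10;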

Regards,
Patrick