Incorrect LOC shown in License Configuration

We are using SonarQube 10.1 (build 73491) with a 5mm LOC license and recently migrated from one server to another (a new license was generated for the new server). It's unclear whether the issue is related to the migration, but the License Configuration page now shows that we've analyzed 4.9mm+ LOC and refuses to scan additional projects, whereas in reality we have not analyzed more than 2.2mm LOC.

We ran the /api/projects/license_usage report; adding up the projects gives 2.21mm LOC and a licenseUsagePercentage of 44%.

How can we get back the remaining 56% (or 2.8mm LOC) from our license?

Hi,

Welcome to the community!

Would you mind providing some - redacted as necessary - screenshots?

 
Thx,
Ann

I’m not sure how to show you the actual usage, but you can see that our projects go down in size very quickly. We’ve actually had to remove some projects just to unblock the license from saying we’ve reached our limit (even though we are very far from it). I believe we are now below 2mm LOC analyzed, but the server is showing 4.75mm LOC.

[screenshot: License Configuration page]

Hi,

Thanks for the screenshots. I understand it’s hard to know what to show.

You said you deleted some projects. Did a subsequent analysis reset the license numbers, or is the license still showing as almost used up?

 
Ann

No, deleting projects only brought us down from 4.9mm to 4.7mm according to the license page, whereas we actually have 2.2mm (2mm after the deletion). So we are still in a situation where we pay for 5mm but are effectively getting only 2mm LOC.

Any ideas on how to “fix” the license?

Hi,

Since you started by saying you’ve already checked /api/projects/license_usage and that shows what you expect, I’m not sure where else to look. I’ve flagged this for the experts.

 
Ann

Any update on this?

Hi,

We are looking into this behind the scenes. The current line of investigation is why different parts of the UI use different ways of getting the LOC.

Hopefully we’ll have something more concrete for you soon.

 
Ann

Hey @nkojuharov, thanks for reporting this. I think you just found a bug.

To confirm, could you please run this SQL query on your database and tell us if it returns any rows? I don’t need you to post the result here; I only need to know whether the query found any results.

SQL
-- For each project (qualifier 'TRK'), pick its largest branch by ncloc
-- (from live_measures) and compare that value with the project-level ncloc
-- stored in the projects table. Any row returned means the two are out of sync.
select loc_grouped_branches.projectUuid,
       loc_grouped_branches.projectName,
       loc_grouped_branches.projectKey,
       loc_grouped_branches.ncloc as loc,
       p.ncloc
from (
    select pb.project_uuid as projectUuid,
           p.name as projectName,
           p.kee as projectKey,
           pb.kee as branchName,
           pb.branch_type as branchType,
           lm.value as ncloc,
           row_number() over (partition by pb.project_uuid order by lm.value desc, pb.uuid asc) row_number
    from live_measures lm
        inner join project_branches pb on pb.uuid = lm.component_uuid
        inner join projects p on p.uuid = pb.project_uuid
    where lm.metric_uuid = (select uuid from metrics m where name = 'ncloc')
        and p.qualifier = 'TRK'
) loc_grouped_branches
    inner join projects p on p.uuid = loc_grouped_branches.projectUuid
where loc_grouped_branches.row_number = 1
    and loc_grouped_branches.ncloc != p.ncloc
order by loc_grouped_branches.ncloc desc

If this returns any results, it means that some branches of your projects were deleted (manually, or through an automatic purge for inactivity) and the project’s LOC value was not correctly updated after the purge. If my assumption is correct, summing the differences between the loc and ncloc columns will give you the same gap you already noticed between the SQ UI and api/projects/license_usage.
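For reference, the same subquery can be wrapped in an aggregate to get that total in one shot. This is only a sketch that reuses the tables and aliases from the query above, and it assumes the stale project-level ncloc is the larger of the two values (wrap the subtraction in abs() if it is not):

SQL
-- Sketch only: total the gap between the project-level ncloc and the ncloc of
-- each project's largest branch, for the projects where the two disagree.
-- Assumes the stale project-level value is the larger one; use abs() otherwise.
select sum(p.ncloc - loc_grouped_branches.ncloc) as total_gap
from (
    select pb.project_uuid as projectUuid,
           lm.value as ncloc,
           row_number() over (partition by pb.project_uuid order by lm.value desc, pb.uuid asc) row_number
    from live_measures lm
        inner join project_branches pb on pb.uuid = lm.component_uuid
        inner join projects p on p.uuid = pb.project_uuid
    where lm.metric_uuid = (select uuid from metrics m where name = 'ncloc')
        and p.qualifier = 'TRK'
) loc_grouped_branches
    inner join projects p on p.uuid = loc_grouped_branches.projectUuid
where loc_grouped_branches.row_number = 1
    and loc_grouped_branches.ncloc != p.ncloc

If the bug is the cause, the single number returned should be close to the gap you described (roughly 4.7mm on the license page vs. 2mm from the API).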

In this case, as a workaround, you can re-analyze any branch of the projects found by the query: the new analysis will fix the LOC count for that project.

I created SONAR-20733 to track this bug.

I fixed the identified bug; the fix will ship with SQ 10.3.

PS: I’m still curious to know the result of the SQL query on your database, to confirm the problem you face is actually the bug I identified.

Thanks, Pierre. I am still trying to get the query results. It turns out our databases are pretty locked down, so it’s hard to run a query directly.

Thanks for letting us know. Don’t hesitate to keep us posted 👍

What is the planned release date for 10.3? Until then, is there a way for us to apply the fix since we are still blocked by it?

Early November 2023.

Yes, if you confirm with this query that you are indeed impacted by this bug, you can follow the workaround I suggested above.

Thank you, I missed the workaround part. And the November timeline is OK, so we can wait.