Environment:
- SonarQube Server Developer Edition v2025.3.1 (109879)
- SonarScanner CLI 7.1.0.4889
- build-wrapper, version 6.67 (win-x86-64)
- Compiler TI_CGT_C2000_22.6.2.LTS
Hello everyone,
I have successfully installed SonarQube Server and started using it to run analyses of several C projects.
However, I am concerned about the analysis quality score, which is well below 100% for every project.
For example on the smallest one, here is the info from the scanner:
44.83% of functions were parsed successfully (48 out of 87 have parsing errors)
74.55% of statements were parsed successfully (199 out of 782 have parsing errors)
100% of the project includes directives were resolved (0 out of 61 were not resolved)
I have noticed that rule S2260 can be activated in order to list parsing failures, which I did and found 72 issues from that rule.
Question 1: Do these parsing-failure issues account for the missing percentage of successful parsing? In other words, will working through these issues let me find exactly what prevents the analysis quality from reaching 100%?
Question 2: The parsing issue that has the most occurrences is worded as follows:
x86-64 'interrupt' attribute only applies to functions that have only a pointer parameter optionally followed by an integer parameter
This issue is flagged at every point in the code where I declare an interrupt service routine, like this one for example:
__attribute__((interrupt)) void BUFFER_currentControlActive(void);
The thing is, my architecture is not x86-64 but the C28x architecture (Texas Instruments C2000 microcontrollers) so the normal syntax is:
__attribute__((interrupt)) void func(void)
Is there a property or attribute that I missed that would allow me to specify this?
Question 3: The second most frequent parsing issue is related to bitfield sizes.
It seems that the scanner doesn't have the correct underlying integer sizes for this specific architecture.
The flagged code is as follows:
typedef struct {
    uint64_t status : 8;
    uint64_t errors : 40;
} Parameters;
The issue is worded like so:
width of bit-field 'errors' (40 bits) exceeds the width of its type (32 bits)
However, for the C28x compiler the long long type has a size of 64 bits, and is defined as uint64_t in stdint.h:
typedef int int16_t;
typedef unsigned int uint16_t;
typedef long int32_t;
typedef unsigned long uint32_t;
typedef long long int64_t;
typedef unsigned long long uint64_t;
So there shouldn't be a problem with a 40-bit field declared with a 64-bit base type.
I have checked the values of the following compiler-defined macros:
- __SIZEOF_INT__ is 1
- __SIZEOF_LONG__ is 2
- __SIZEOF_LONG_LONG__ is 4
- __SIZEOF_FLOAT__ is 2
- __SIZEOF_DOUBLE__ is 4
- __SIZEOF_LONG_DOUBLE__ is 4
I wonder whether this could explain the problem. The C28x architecture is special in that its byte is 16 bits wide, as stated in the compiler documentation:
TMS320C28x Byte is 16 Bits
By ANSI/ISO C definition, the sizeof operator yields the number of bytes required to store an object. ANSI/ISO further stipulates that when sizeof is applied to char, the result is 1. Since the TMS320C28x char is 16 bits (to make it separately addressable), a byte is also 16 bits. This yields results you may not expect; for example, sizeof(int) == 1 (not 2). TMS320C28x bytes and words are equivalent (16 bits).
Thank you for reading my questions, and please do let me know if I should split my message in order to address each question as a separate topic.
Kind Regards,
Pierre
