C2000 scanner getting incorrect sizes of native types

I’m using SonarQube 9.3 to scan a TI C2000 project, and it’s reporting false-positive numeric-overflow issues, which suggests SonarQube is not handling the C2000 data-type sizes correctly.

I get “implicit conversion from ‘unsigned long’ to ‘const unsigned int’ changes value from 1000 to 232” on the following code snippet:

    /// @brief Periodic time to send ACK
    static const unsigned periodic_activity_ms = 1000U;

I get “implicit conversion from ‘long long’ to ‘const uint32_t’ (aka ‘const unsigned long’) changes value from 1289999999 to 54911” on the following code snippet:

    static const uint32_t encoder_mid_range = 1289999999;

The numbers line up exactly with truncation: 1000 mod 256 = 232 and 1289999999 mod 65536 = 54911. That indicates SonarQube is convinced that an int is 8 bits wide and a long is 16 bits wide.

This may be caused by SonarQube failing to recognize an architectural oddity of the C2000 processors best described in section “6.4 - Data Types” of the C2000 compiler manual https://www.ti.com/lit/ug/spru514r/spru514r.pdf:

NOTE: TMS320C28x Byte is 16 Bits

By ANSI/ISO C definition, the sizeof operator yields the number of bytes required to store an object. ANSI/ISO further stipulates that when sizeof is applied to char, the result is 1. Since the TMS320C28x char is 16 bits (to make it separately addressable), a byte is also 16 bits. This yields results you may not expect; for example, sizeof(int) == 1 (not 2). TMS320C28x bytes and words are equivalent (16 bits).

SonarQube may be assuming 8 bits per byte when interpreting sizeof. In other words, it may take sizeof(char) == 1 to mean that a char is 8 bits wide.

The build-wrapper-dump.json file is capturing the TI C2000 compiler:

    "cmd": [

This isn’t the first time this issue has been reported - see Rule S3949 "Integral operations should not overflow" false positives - #3 by Amelie for someone else reporting similar issues on the C2000 processor.

Hello, @Malcolm_Nixon,

Thank you for your detailed analysis of the false positives. Your guess is right on the nose: indeed, our analyzer (being based on the Clang frontend) assumes char is 8 bits wide. Unfortunately, this assumption is quite widespread in the Clang code base, so it might require considerable engineering effort to remove.

I have recorded your report and the one you mentioned in this ticket. When it gains sufficient traction, we will consider it for implementation.

In the meantime, you can disable the rules affected by the type size in your quality profile.


Thanks, I can appreciate that modifications to the Clang front end are probably not going to happen. I doubt LLVM wants its code polluted with non-standard type handling that is only relevant for wacky embedded targets where the silicon vendors play fast and loose with the C/C++ type system.

We’ll see how far we can get by disabling rules in the Quality Profiles. Unfortunately, this cascades into other false-positive situations such as “result of comparison of unsigned expression >= 0 is always true” in:

    if (address >= flash_start)

Here flash_start is a large non-zero 32-bit value that is erroneously being truncated to a 16-bit value of zero, which makes the comparison look tautological.

We’ll flag these secondary failures as false positives on the C2000 target for now, and rely on the fact that we’re also cross-compiling for Windows, so we get a secondary analysis on a “sane” platform.