On versions 7.0, 7.7 and 8.5, I see rule squid:S2111 / java:S2111 (the key changed at some point), which states that calling the BigDecimal constructor with a double argument is bad for values like 0.1: it produces a BigDecimal monstrosity with many digits of precision, because 0.1 can't be represented exactly in binary floating point.
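The effect the rule warns about is easy to reproduce: the `BigDecimal(double)` constructor exposes every digit of the nearest binary fraction to 0.1, while the String constructor preserves what was written in source.

```java
import java.math.BigDecimal;

public class DoubleCtorDemo {
    public static void main(String[] args) {
        // The double literal 0.1 is really the nearest binary fraction,
        // and BigDecimal(double) faithfully exposes all of its digits:
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625

        // The String constructor gives the value as written:
        System.out.println(new BigDecimal("0.1"));
        // 0.1
    }
}
```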

But I see this rule flagged for cases where this clearly isn't a concern, such as 0.0. Since any integer of up to 15 digits can be represented exactly in a double, it follows that any double literal with no more than 15 digits to the left of the decimal point, and only zeroes to the right, can be represented exactly in a Java double. (I've checked this out; a call like `new BigDecimal(5000.00)` produces a BigDecimal with an unscaled value of 5000 and a scale of 0; `new BigDecimal(0.0)` produces an unscaled value of 0 and a scale of 0.)
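The checks described above can be reproduced directly, confirming that exact doubles pass through the constructor without picking up any noise digits:

```java
import java.math.BigDecimal;

public class ExactDoubleDemo {
    public static void main(String[] args) {
        // 5000.00 is an exact double value, so the constructor yields
        // exactly 5000 with no spurious fractional digits.
        BigDecimal a = new BigDecimal(5000.00);
        System.out.println(a.unscaledValue() + ", scale " + a.scale()); // 5000, scale 0

        // Likewise 0.0 is exact.
        BigDecimal b = new BigDecimal(0.0);
        System.out.println(b.unscaledValue() + ", scale " + b.scale()); // 0, scale 0
    }
}
```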

So it seems that anything matching `^\d{1,15}\.0*$` (or the equivalent in whatever regex flavor SQ uses) should be a smell rather than a bug. (Actually, any literal whose fractional part reduces to a fraction with a power-of-2 denominator is fine, so things like 0.5 or 1.625 would also be OK, but picking those out would be harder. Perhaps the scanner could actually call the constructor and compare the printed output to the double literal, or compare the precision to the length of the literal, or something like that.)
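The "call the constructor and compare" idea can be sketched without any regex at all. This is a hypothetical helper (the name `isExactLiteral` is mine, not anything the scanner provides): a double literal is safe for the `BigDecimal(double)` constructor exactly when the value that constructor produces compares equal to the literal's decimal text parsed via the String constructor.

```java
import java.math.BigDecimal;

public class ExactLiteralCheck {
    // Hypothetical check a scanner could apply: true iff the double
    // nearest to the literal text has exactly the literal's value,
    // i.e. BigDecimal(double) introduces no noise digits.
    static boolean isExactLiteral(String literalText) {
        double d = Double.parseDouble(literalText);
        // compareTo ignores scale, so 5000 vs 5000.00 still compare equal.
        return new BigDecimal(d).compareTo(new BigDecimal(literalText)) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isExactLiteral("0.1"));     // false: denominator 10 is not a power of 2
        System.out.println(isExactLiteral("0.5"));     // true: 1/2
        System.out.println(isExactLiteral("1.625"));   // true: 13/8
        System.out.println(isExactLiteral("5000.00")); // true: integer value
    }
}
```

This catches the power-of-2-denominator cases like 0.5 and 1.625 automatically, with no need to enumerate them by pattern.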

If the “OK” cases were to be considered smells while “bad” cases like 0.1 remain bugs, how would we handle one rule being classified two ways? Perhaps fork off a separate rule for the “OK” cases, and make the two rules' conditions precise enough that any use of the double constructor triggers exactly one of them.