Just a minor inconsistency that is primarily opinion-based:
The rule flags literals like `1d` and recommends using the upper case version `1D`, since that would supposedly be less likely to be confused. But `1D` actually seems more likely to be confused (e.g. with `10`) than `1d` would be.
In fact, the very source that is referenced by the rule states that `0` and the upper case `D` are likely to be confused. The other reference only describes that the lower case `l` can easily be confused with `1` and should not be used in long literals. Extending that reasoning to other literal types seems like a bit of a stretch, made only to achieve a unified style.
Oracle says in this tutorial:

> The floating point types (`float` and `double`) can also be expressed using E or e (for scientific notation), F or f (32-bit float literal) and D or d (64-bit double literal; this is the default and by convention is omitted).
So instead of recommending to upper-case the `d` suffix, perhaps the rule should recommend removing it entirely and indicating the double literal with the presence of a decimal point, like `1.0` instead of `1d`.
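To illustrate (a minimal sketch, assuming the rule targets Java): all of the following literal forms denote the same 64-bit `double` value, so the suffix carries no information once a decimal point or exponent is present.

```java
public class DoubleLiterals {
    public static void main(String[] args) {
        double a = 1d;   // lower-case suffix, flagged by the rule
        double b = 1D;   // upper-case suffix, recommended by the rule
        double c = 1.0;  // decimal point only; double is the default type
        double d = 1e0;  // scientific notation, also a double by default

        // All four literals denote exactly the same value.
        System.out.println(a == b && b == c && c == d); // prints "true"
    }
}
```

The `1.0` form is arguably the hardest to misread, since neither a suffix letter nor a leading digit can be mistaken for another character.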