As someone who has been using C# for longer than many of these “conventions” have been in place, I am not one to complain much about the overall idea of good practices. Over the years my own development has evolved, and when I was first introduced a few years ago to SonarQube (and its associated plugin SonarLint), I found that I had naturally adhered to a vast majority of the rules without any previous awareness of an agreed-upon “convention” or standard. However, standards change. Hungarian notation for identifier names was once the standard, and it fell out of favor. So there is precedent for standards to evolve as well.
SonarSource’s interpretation of C# enums is entirely suspect. First we have to understand the nature of the enum. First and foremost, an enum is a syntactically readable depiction of an ordinal value. By default the C# enum extends the Int32 type, yet that can be overridden to make it extend byte, short, or long, as per the developer’s needs. The idea of the enum is to encapsulate related values - as opposed to the old C/C++ method of macro-defined constants - while still providing an identifier to represent the underlying value. We (developers) care about the underlying value - the 1, 2, 4, 17, 128, and so on - but remembering which value means what can be taxing as well as very error prone, so we use the enum identifier to simplify this process.
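For instance, an enum can be re-based onto a smaller ordinal type; a trivial, hypothetical declaration:

enum SmallStatus : byte
{
    Inactive = 0,
    Active = 1,
    Suspended = 2
}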
Here is where we enter the realm of bitwise operations. Bit flags are a very powerful tool in practically any language, because computer languages are fundamentally built up from the binary core of the processor. Memory is tracked as a series of ones and zeros - on and off flags. Put enough bits together and you can record any kind of data: numerical (integral or real), characters, dates, times, or even a simple boolean true or false. The enum does not exist at the processor level, or even in the compiled executable. It is a syntactical placeholder, so it only means something to humans.
If an enum is intrinsically an ordinal - an integer, as per C# - then the idea of bitwise operations is also intrinsic. Yet the constraints imposed by the Sonar rules work against the very intent that drove the creation of enums. Enums were there to provide names for values in a relatable fashion, for developer sanity, not for the compiler.
This leads me to the three rules in question:
First, 3265 - requiring the FlagsAttribute or forbidding bitwise operations.
Not all enums are precisely flag based. Neither are integers. I can use a bitwise operation on an int any time I want, and there is no check, no warning, no convention whatsoever that limits when, where, or why I perform a bitwise operation on an integer.
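For example, no rule complains about plain-int bit-twiddling like this (made-up values):

int permissions = 5;                      // binary 0101
bool canRead  = (permissions & 1) != 0;   // test bit 0
bool canWrite = (permissions & 2) != 0;   // test bit 1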
The FlagsAttribute was not initially part of the .Net Framework. It was added later, and I have found little in the .Net Framework that cares about the FlagsAttribute. Whether it is present or not, the C# compiler has no concern whatsoever. (The one visible effect is cosmetic: Enum.ToString formats a combined value as a comma-separated list of names when the attribute is present.) To all intents and purposes it appears to have been an abandoned direction for the language, since all evidence surrounding this attribute indicates it is for developer decoration only. Meaning, if I see the FlagsAttribute I get an indication that the original developer intended this enum to be used as bit flags, and thus I should be aware that a value could be a combination of multiple enum identifiers. But no system in .Net or C# ever treats the FlagsAttribute as a requirement or a limiter on enum behavior. It is solely for the developer’s benefit.
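You can verify the decoration-only nature yourself; a minimal sketch (hypothetical enum names) where the only observable difference is in the printed output:

using System;

Console.WriteLine((WithFlags)3);    // prints "A, B"
Console.WriteLine((WithoutFlags)3); // prints "3"
// Bitwise behavior is identical either way; the compiler enforces nothing.

[Flags]
enum WithFlags { A = 1, B = 2 }

enum WithoutFlags { A = 1, B = 2 }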
A good example is the ConnectionState exposed by DbConnection.State. This enum is flags based, but on the whole it is rare to see it used in a bitwise manner. Conversely, other .Net enums, like CodeDom’s MemberAttributes, are bitwise in nature but do not carry the FlagsAttribute, even though they are treated as flags.
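In fact, the framework itself masks MemberAttributes bitwise; a sketch of the common pattern (assuming a CodeMemberMethod):

using System.CodeDom;

var method = new CodeMemberMethod();
method.Attributes = MemberAttributes.Public | MemberAttributes.Static;
// AccessMask isolates the access bits even though MemberAttributes carries no [Flags]
bool isPublic = (method.Attributes & MemberAttributes.AccessMask) == MemberAttributes.Public;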
This means that if some developer did not use the FlagsAttribute, and yet the enum is meant to be used as bit flags, my code is accosted with “Code Smells” because of a standard that did not exist at the time. I agree that it is a good idea to include the FlagsAttribute so future developers are notified that the enum is bit-flag based, but its absence should in no way penalize my code. If the only way to check that a flag is present is to perform a bitwise operation, then it is absolutely not a code smell to do so.
This would be the code smell:
int i = (int)MyEnumValue;
if ((i & (int)MyEnum.Test) == (int)MyEnum.Test) {}
Casting the enum to an int in order to silence rule 3265 is atrocious coding, and rather inefficient. Bitwise operators work on any underlying ordinal type, and their use should not be limited by this arbitrary constraint.
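The natural form - the one 3265 objects to when the attribute is absent - is simply this (myValue and MyEnum.Test are placeholder names):

if ((myValue & MyEnum.Test) == MyEnum.Test) { /* flag is present */ }
// or, since .Net 4:
if (myValue.HasFlag(MyEnum.Test)) { /* flag is present */ }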
Second, 2346 - requiring the 0 value to be named “None” in flags-based enums.
This is meaningless semantics and subjective domination. If the enum identifiers are meant to signify what the value “means,” then many times 0 does not mean “none” - it means something else, such as the default value for that enum. Consider an enum that is flags based and has a hi-order and a lo-order series of bits, much like the Keys enum of Windows Forms, used here to define valid schema types:
[Flags]
enum TypeEnum
{
    // lo-order byte (masked by Types): the type code
    String = 0,
    Int = 1,
    Long = 2,
    Double = 3,
    DateTime = 4,
    // hi-order byte (masked by Modifiers): the sign-placement code
    Unsigned = 256,
    PreSign = 512,
    PostSign = 768,
    // field masks
    Types = 255,
    Modifiers = 65280
}
In this situation I can bitwise-AND the value with TypeEnum.Types to get the actual type, and if the resulting type is Int, Long, or Double, I can bitwise-AND the value with TypeEnum.Modifiers to determine where the sign indicator is placed in the serialized text: as prefix, as suffix, or simply unsigned (as sketched below).
Granted, certain combinations are valid and others are not, but this is neither unheard of nor a code smell in any way, given the purpose and usage of enums.
However, the 0 value - the default that always works regardless of anything else - is String. If the type is missing from the original schema, or if the value provided is incongruous with the enum definition, I can always fall back to String as the interpreted type. When serializing the data from a database, an XML file, an Excel file - you name it - I can always call value.ToString() and that will work (barring value being null, of course). So the 0 value is String. That provides meaning to the developer, where None would confuse the issue. None would seem to indicate there is “No Type” for that element - which should NEVER occur, because in a schema definition there is always a type, even if that type simply defaults to “String.” If I made String = 1 and None = 0, that would require error checking and other code annoyances that are very specialized in nature and would make development against this system non-intuitive. None does not translate to a valid meaning here, where String is perfectly clear.
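Pulling the fields back out is then a pair of mask operations; a minimal sketch against the TypeEnum above (ReadValueFromSchema is a hypothetical source):

TypeEnum value = ReadValueFromSchema();
TypeEnum type = value & TypeEnum.Types;          // lo byte: the type code
if (type == TypeEnum.Int || type == TypeEnum.Long || type == TypeEnum.Double)
{
    TypeEnum sign = value & TypeEnum.Modifiers;  // hi byte: Unsigned, PreSign, or PostSign
}
// If no type bits were ever set, the mask yields 0 - that is, String.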
Furthermore, as a case in point, your own rule states:
“The value 0 should not be used to indicate any other state, since there is no way to check that the bit 0 is set.”
There are two flaws in that sentence.
- Bit 0 = 1; the 0 value says “all bits are 0.” This goes back to numerical bases: 2^0 = 1, 2^1 = 2, 2^2 = 4, and so on. Bit 0 is the ones column, bit 1 is the twos column, and bit 7 is the 128s column. Just like base 10 - where column 0 is the ones, column 1 the tens, and column 2 the hundreds - each column holds a digit from 0 up to the base minus one, multiplied by the base raised to the power of the column index. In base 8, column 0 is 0-7 times 1, column 1 is 0-7 times 8, and so on. So you can absolutely check whether “bit 0” is set: (value & 1) != 0.
- I will admit the first point is a semantic disagreement, because what was presumably intended is that without an enum identifier equated to 0 there is no way to check that all bits are cleared, or off. This is also a falsehood, because the default() operator will always return the enum equivalent of 0, regardless of whether any enum identifier is assigned to 0.
You can test this if you want.
[Flags]
enum MyEnum
{
    value = 1,
    last = 2
}

bool allCleared = default(MyEnum) == (MyEnum)0;   // true - default() yields the 0 value even with no identifier assigned to 0
So regardless of there being a “None” to indicate 0, and regardless of whether any flag in the enum is associated with 0, the default() operator can always provide the 0 value.
Lastly, 4070 - removing the FlagsAttribute from enums that appear non-flags-oriented.
Typecasting enums is not a code smell, a bug, or a flaw of any kind. Often data is migrated from many different sources, and the idea of an “enum” is oftentimes relegated to a string or a number. A database has no understanding of a C# enum, but it does understand int. So when I read or write a set of values that came from a C# enum to a database, what I get is a number.
int myColumn = dbReader.GetInt32(dbReader.GetOrdinal("MyColumn"));
MyEnum myEnum = (MyEnum)myColumn;
or
MyEnum myEnum = (MyEnum)Enum.ToObject(typeof(MyEnum), myColumn);
Either way, we are casting an integer into the enum type. That integer can be -1. Now, in most circumstances -1 is not a typical enum value. If you don’t provide values for the enum names, the first identifier defaults to 0 and each subsequent identifier is assigned the next increment. This makes -1 a viable and almost perfect sentinel for an invalid string parse in an exception-less parser of strings to enum names. For an unaware (generic) parsing system, this means all my enums could declare “Error = -1”; if an attempted parse (Enum.TryParse) fails, I can return ((MyEnum)(-1)) and the resulting enum will have the value “Error”.
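A sketch of that exception-less pattern (hypothetical names):

enum MyEnum
{
    Error = -1,
    Alpha = 0,
    Beta = 1
}

MyEnum Parse(string text)
{
    // TryParse never throws; on failure the -1 cast lands on the sentinel MyEnum.Error
    return Enum.TryParse(text, out MyEnum result) ? result : (MyEnum)(-1);
}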
Yet again because of the FlagsAttribute, the -1 is deemed “non-flag aligned.” This is categorically untrue.
Way back, before .Net and C#, the original C interpretation was that 0 is false and non-zero is true. In practice this was usually a simple 0 or 1, but 2, 65536, or -7 also evaluated as true because they are non-zero values. Visual Basic went a step further and defined the integer representation of True as -1 (a convention VB.Net preserves when converting True to an integer). Why? Two’s complement. The high bit of a 32-bit number is the sign bit; when it is set on a signed integer, the values count back down from -1 (sort of - there is some funky math there). That means if all 32 bits are 1, the value in numerical terms is -1. The decision amounts to saying true = all bits on and false = all bits off, so a variable set to true stays true in every circumstance, even when cast to an ordinal type and used in a bitwise test. Suffice it to say, that style of boolean evaluation behind the scenes can be expressed as:
- if ((x & -1) != 0)
Now personally, I like this convention because it makes sense. It also means that -1 equals “all bits set,” which in no way invalidates the FlagsAttribute. If applying the FlagsAttribute required the enum to be based on a uint, ulong, or other unsigned ordinal, then it would make sense that -1 wouldn’t be permitted. But if you assign -1 to a uint you get, in numerical terms, 4,294,967,295 - the MaxValue of an unsigned integer - because that is the value of all bits turned on in a uint. Since the FlagsAttribute does not affect the underlying type of the enum, -1 is a valid value for an int-based enum meaning “all bits on.” It is the corollary opposite of 0, which is “all bits off.”
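Declaring -1 as the everything-mask is in fact a long-standing idiom; a hypothetical sketch:

[Flags]
enum Access
{
    None = 0,
    Read = 1,
    Write = 2,
    Execute = 4,
    All = -1          // all 32 bits on; the corollary opposite of None
}

// -1 contains every flag, present and future:
bool b = (Access.All & Access.Write) == Access.Write;   // true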
Summary:
All in all, when you take these three rules in conjunction with each other, they create an almost microscopically narrow view of the enum type. If bitwise operations can’t be performed on a non-flags enum, then conversely the == comparison should not be allowed on a flags enum. If the enum is flags, you aren’t looking for a value comparison (x == 1); you are looking for a bitwise test ((x & 1) == 1). But if == is valid on a flags enum, then by the de facto reality that all enums are ordinals, a bitwise operation is valid regardless of the presence of the FlagsAttribute.
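To make the inconsistency concrete (placeholder names):

[Flags]
enum Options { A = 1, B = 2 }

Options x = Options.A | Options.B;               // value 3
bool equality = (x == Options.A);                // false - == tells you nothing useful about combined flags
bool bitwise  = (x & Options.A) == Options.A;    // true - only the bitwise test answers the question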
Universally, it seems that SonarSource has taken it upon themselves to create rules about the FlagsAttribute that were never intended or enforced anywhere else. There is no citation of a standard or spec on the rules.sonarsource.com site for where these rules came from; they are just listed as a “convention.” Basically, it is your idea of how enums should behave instead of the language’s design for how enums can be used.
I would recommend revisiting these rules to gain a better understanding of what enums are and how they can be used - with best practices in mind, certainly, but with the understanding that the FlagsAttribute is decoration, not design.
Thanks,
Jaeden “Sifo Dyas” al’Raec Ruiner