Where have all the bugs gone?

Hi Sonar
In Introducing Clean Code in our products it was announced that you have deprecated all of the well-understood concepts of “bug”, “vulnerability”, and “code smell”. With the possible exception of the last, these need no explanation whatsoever for anyone to understand them.

Now we have some rather woolly, hand-wavy new concepts of consistency, intentionality, adaptability, etc.

So…where have all the bugs gone? Is there no such thing as a bug anymore?
Looking at your clean code definition page (Clean Code definition | SonarQube Docs), I am struggling to see how we can adequately recognise bugs:

  • Formatted - a bug can be well formatted
  • Conventional - a bug can use all conventional features and behave consistently
  • Identifiable - buggy code can be well structured, following all coding guidelines etc
  • Clear - the bug may be self-explanatory, transparent, and straightforward
  • Logical - buggy code may be well formed and free of explicit errors
  • Efficient - it could be a very efficient bug
  • Adaptability - a bit woolly but okay, it could be a modular, extendable bug
  • Focused - a bug can be narrowly scoped
  • Distinct - a bug can be distinct
  • Modular - you have already covered this in “Adaptability” so not sure why we need it again but okay, it is a very modular bug
  • Tested - the bug may well be tested and it behaves as per the testing - still a bug, just a tested one.
  • Responsibility - I’m not going to bother with these as they are not things that can be automatically determined and they have no relevance to bugs per se.

I left two of your attributes out of that list - Intentionality and Complete - as one could construct a somewhat thin argument that a bug fails one or both of these.

Intentionality

The code is precise and purposeful. Every instruction makes sense, is adequately formed, and clearly communicates its behavior.
Intentional code is clear, logical, complete, and efficient.

I guess you could argue that the code is not “adequately formed” or not “complete” if it does not behave as intended, but it seems a bit of a stretch.

Complete

The code constructs are comprehensive and used adequately and thoroughly. The code is functional and achieves its implied goals. There are no obviously incomplete or lacking solutions.

This perhaps is the closest to a “bug” category - you could certainly argue that the code does not achieve its implied goals because there is a problem with it. But really this seems to be trying to fit a very, very well-understood concept in the computer industry (one in use ever since Ada Lovelace “debugged” Babbage’s Analytical Engine back in the 1840s) into some new, rather woolly concepts that do not give the immediate understanding we have had for almost two centuries.

“I made my code complete” does not quite convey the same thing as “I fixed a bug”.

Maybe Sonar your code is so clean that you never have bugs anymore but I think those of us in the real world do indeed have bugs to fix!

Hi Tony,

Thanks for sharing your views on this topic.

You’re not alone in your thoughts. We’ve had the same classification for years because we care about bugs, too.

In SonarQube and SonarCloud, code issues previously called “bugs” are now under the Reliability filter. You’ll also find Security and Maintainability filters, which we care about as well.

But Why?

1/ Naming is hard

We want to highlight important code issues and not stop at “code that works”.

The term “bug” has different meanings for different people [1] [2] and continues to cause confusion. Security terms like weaknesses, vulnerabilities, and exploitability are often mixed up. User feedback frequently dismisses issues as “not a bug” or “not a vulnerability”.

It turns out our previous classification mixed inadequate code with its consequences. For example, a buffer overflow can affect both Reliability and Security.
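A classic illustration of that overlap is sketched below - a hypothetical C function (names are mine, not from this thread) where a single defect, an unchecked copy into a fixed-size buffer, is at once a Reliability problem (crash or memory corruption) and a Security problem (attacker-controlled overwrite):

```c
#include <string.h>

/* Hypothetical sketch: copy `src` into `dst` with NO bounds check.
   If strlen(src) >= dst_size, the write past the end of `dst` is the
   single defect that hurts both Reliability and Security. Returns the
   number of bytes copied (excluding the terminator). */
size_t unsafe_copy(char *dst, size_t dst_size, const char *src) {
    (void)dst_size;        /* BUG: the destination size is ignored */
    strcpy(dst, src);      /* may write past the end of dst */
    return strlen(src);
}
```

For short inputs the function behaves normally, which is exactly why the consequence (reliability vs. security) depends on who supplies the input, not on the code itself.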

We needed more accurate concepts to discuss code issues without resorting to consequences. And we feel the new terms describe what is desirable in code [3].

If you look up the history of clean code, you’ll see that these ideas are not new. They were already in use in the late 1960s by the likes of Dahl, Hoare, Dijkstra, Parnas, Constantine, Kernighan, et al. Great stuff was published during this time that shaped our field in fundamental ways [4] [5] [6].

2/ Off by one, or two

Sonar has deep experience classifying code problems and we want to keep a forward-looking mindset.

People have a growing set of non-functional expectations about software. We increasingly see the need to address aspects beyond Security, Reliability, and Maintainability, like Sustainability or Accessibility.

We are already working on highlighting code issues that can affect other software qualities. However, this all applies to the same code.

I hope that helps.


Hi Gabriel
Thanks for your thoughts.
I’m not sure how it helps to link to a Wikipedia article about the definition of a “bug” in technology generally, when the article is prefaced with

This article is about a term used for a defect in any technology. For a defect in computer software, see software bug. For a defect in computer hardware, see hardware bug.

So that article is specifically not about software bugs, and I therefore struggle to see how it helps your argument that bugs mean different things to different people.
We are talking about bugs in code here so isn’t the other wikipedia article the only one of the two that is relevant?

Granted that other wikipedia article does say that

Sometimes the use of bug to describe the behavior of software is contentious due to perception. Some suggest that the term should be abandoned; replaced with defect or error.

And, whilst I am fine with “bug”, I could be on board with that idea. It still clearly tells me something is wrong with the code - certainly far more than “intentional” or “complete” does.
I don’t think introducing these imprecise terms does anything at all to help clearing up anyone’s confusion when they themselves are confusing.

You go on to explain why clean code is a good thing and link to various articles. I have absolutely no issue with the idea of clean code - but I’m not sure it requires nuking well-understood terms that have been around for a very, very long time. Nor am I convinced that it requires you to impose your idea of clean code on companies who want to use your tool to help them enforce their own ideas of clean code.
You can highlight things about sustainability and assign additional attributes to code without sacrificing those other terms, and without requiring users (in a point release, not even a major version change) to re-evaluate their standards and practices simply so they can continue to use a supported version of a tool they rely on.

You can be forward-looking without throwing out the past, which your customers have used to build their processes on.

I notice that one of your supporting links, The Elements of Programming Style - Wikipedia, bullet point 2: “Say what you mean, simply and directly” is one you do not appear to be subscribing to here.
Which is simpler and more direct - “this is a bug”, or “this may not be fully intentional or complete”, where the reader then has to go look up what those terms mean, squint, and realise they mean “this is a bug”?

Finally it would be great if you could answer the question about where the bugs have gone given that bugs can easily meet most, if not all, of the new criteria.
Let’s take a simple example of an if statement whose condition always evaluates to true, so the else branch is never executed.
This was previously a bug - how would SonarQube now make me aware that this is likely a pretty serious bug?
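For concreteness, the kind of defect described above could look like this hypothetical C sketch (function and names are mine, purely illustrative): the condition is true for every possible input, so the rejection branch is dead code.

```c
/* Hypothetical sketch of an always-true condition. For any int `used`,
   "used >= 0 || used < 100" holds, so the else branch is unreachable.
   Presumably "&&" was intended instead of "||". */
const char *check_quota(int used) {
    if (used >= 0 || used < 100) {   /* BUG: always true */
        return "ok";
    } else {
        return "over quota";         /* dead code: never executed */
    }
}
```

Even an input that should clearly be rejected (say, 500 against a quota of 100) sails through, which is why this class of issue was previously flagged as a “bug”.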

thanks


You might have missed my response:

code issues previously called “bugs” are now under the Reliability filter

Yes, I missed that - thanks.
Your documentation on that: Software qualities | SonarQube Docs says that “Reliability is a measure of how your software is capable of maintaining its level of performance under stated conditions for a stated period of time.”
Given my example bug from before, it can easily meet that description - just because it is a bug does not mean its performance varies under a specific set of conditions.

But let’s assume that the documentation is just poor and we can say that “reliability” is sort-of equivalent to “bug”. That page is very woolly in saying that the various clean code attributes “contribute to” the software qualities of Security, Reliability, and Maintainability. Can you be more precise about that?

The implication of this is that some aspect of what was formerly known as “bugs” will now mean they have certain clean code attributes which will affect one or more software qualities.
So which clean code attributes does my sample bug fail sufficiently to affect the “reliability” software quality and thus be surfaced with an equivalent degree of severity that I would treat it in the same way I would have treated a “bug” in the past?

As I said in my original post, bugs could easily pass most, if not all, of the clean code definitions, so it is completely opaque to me how we get from these (to me at least, vague) terms that bugs can easily meet to issues qualified at the appropriate (for a bug) software quality.

I am asking this not only w.r.t. the built-in Sonar analysis capabilities, but also because we import issues in the Sonar generic format (which is something we are doing today). We need to be able to map what an external tool would consider a “bug” or a “vulnerability” into your new concepts, so we must be sure we assign these issues the right clean code attributes for them to qualify as “Reliability” or “Security” issues.
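As a sketch of what that mapping could look like: the structure below reflects my reading of the SonarQube generic issue import documentation, where recent versions add a `rules` array carrying `cleanCodeAttribute` and `impacts` alongside the `issues` array. Every identifier, path, and value here is hypothetical, and the exact field names and accepted values should be checked against the docs for your SonarQube version before relying on them.

```json
{
  "rules": [
    {
      "id": "ext-bug-1",
      "name": "Always-true condition (imported from external tool)",
      "engineId": "my-external-tool",
      "cleanCodeAttribute": "LOGICAL",
      "impacts": [
        { "softwareQuality": "RELIABILITY", "severity": "HIGH" }
      ]
    }
  ],
  "issues": [
    {
      "engineId": "my-external-tool",
      "ruleId": "ext-bug-1",
      "primaryLocation": {
        "message": "Condition is always true; else branch is unreachable",
        "filePath": "src/example.c",
        "textRange": { "startLine": 42 }
      }
    }
  ]
}
```

If this shape is right, the mapping question becomes: which `cleanCodeAttribute` and which `impacts` entry should an external “bug” be assigned so it surfaces under the Reliability filter with an appropriate severity?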

Thanks