There was a conversation about software quality over at the Software Craftsmanship Slack team. It's an interesting subject, but it often becomes confusing. Some of the questions that come up are: What is software quality, and how can we measure it? I'll try to add my twopence to the topic here, and perhaps make it slightly less confusing.
Let's start with a definition and go from there. The Oxford English Dictionary has this to say on the subject:
The standard of something as measured against other things of a similar kind; the degree of excellence of something
To make sense of that definition we need to understand the context: what "something" is and what "other things of a similar kind" are. To me this is pretty simple. The only people who can define what "something" is are the users of the software. In an enterprise system this is the enterprise user. In an API library it's the programmer using it. In the case of an end user application, such as a word processor or a game, it's the end user. And they will naturally measure this against other similar products, or against their perception of how other similar things should work.
So my definition of software quality is:
The standard of said software as the end user of this software perceives it, measured against other similar software or the end user's perception of how other similar software should work.
This makes my measure of quality rather narrow. I'm actually only interested in how my software is perceived by the users. This is also where I get the measurements needed to define what quality means in my solution. Therefore, tools such as code analysers and test coverage are only important as long as the feedback from them helps me guarantee that the end user receives high quality.
So what we are left with is understanding what the end user values as quality. As we start out, trying a new idea on the market, quality for the user may well be speed. It's certainly what we need in order to learn whether we should continue. To achieve this we create a PoC (Proof of Concept). This is often not a very well designed piece of software. It may well be something we will throw away a week after release. It is as little as possible to get the feedback we need. If the users like to use it, we know it provided them with quality. Now we need to sustain and improve this quality.
As we continue to stabilise and improve our product, we need to ensure that the user does not experience a regression in quality. Otherwise they may well leave us for a competitor. To achieve this we need to ensure we have well tested and well crafted code. Here, all the tools we can use to detect whether our code is up to the high standard we need are of great help: tools such as static code analysers and test coverage, and practices such as code reviews and pairing. The definition of quality is still the same, however. Only the measures differ.
With this kind of definition I think it's easier to justify and measure what quality is. Keeping it this way makes it easier for everyone involved to relate to quality. And quality often comes up in conversations when we talk about the software we work with. Most commonly people refer to code quality, with the perception that well crafted, clean code is what we must always strive for to achieve software quality. Clean code is important in many contexts. But not always. Sometimes speed is what we need: short term speed and throwaway code. It is, as always, a matter of trade-offs.