
There was a conversation around software quality over at the Software Craftsmanship Slack team. It's an interesting subject, but it often becomes confusing. Some of the questions that come up are: What is software quality, and how can we measure it? I'll try to add my twopence to the topic here, and perhaps make it slightly less confusing.

Let's start with a definition and then go from there. The Oxford English Dictionary has this to say on the subject:

The standard of something as measured against other things of a similar kind; the degree of excellence of something

To make sense of that definition we need to understand the context: what "something" is and what "other things of a similar kind" are. To me this is pretty simple. The only people who can define what "something" is are the users of the software. In an enterprise system this is the enterprise user. In an API library it's the programmer using it. In the case of an end user application, such as a word processor or a game, it's the end user. And they will naturally measure this against other similar products, or against their perception of how other similar things should work.

So my definition of software quality is:

The standard of said software as the end user of this software perceives it, measured against other similar software or the end user's perception of how other similar software should work.

This makes my measure of quality rather narrow. I'm actually only interested in how my software is perceived by its users. This is also where I get the measuring points needed to define what quality means in my solution. Therefore, tools such as code analysers and test coverage are only important as long as the feedback from them helps me guarantee that the end user receives high quality.

So now what we are left with is understanding what the end user values as quality. As we start out, trying a new idea on the market, quality for the user may well be speed. It's certainly what we need in order to find out whether we should continue. To achieve this we create a PoC (Proof of Concept). This is often not a very well designed piece of software. It may well be something we throw away a week after release. It is as little as possible, just enough to get the feedback we need. If the users like using it, we know it provided them with quality. Now we need to sustain and improve this quality.

As we continue to stabilise and improve our product, we need to ensure that the user does not experience a regression in quality. Otherwise they may well leave us for a competitor. To achieve this we need well tested and well crafted code. Here, all the tools that can detect whether our code is up to the high standard we need are of great help: static code analysers, test coverage and so forth, as well as practices such as code reviews and pairing. The definition of quality is still the same, however. Only the measures differ.

With this kind of definition I think it's easier to justify and measure what quality is. Keeping it this way makes it easier for everyone involved to relate to quality. And quality often comes up in conversations about the software we work with. Most commonly people refer to code quality, with the perception that well crafted, clean code is what we must always strive for to achieve software quality. Clean code is important in many contexts. But not always. Sometimes speed is what we need: short term speed and throwaway code. It is, as always, a matter of trade-offs.


During quite a few assignments I've been struck by the fact that estimation seems to be done purely out of habit. No one seems to know what value it brings or whether the data collected is ever used. So I've started to question the value of estimation and the way we do it.

This is not to say that estimation is bad and we shouldn't do it. I'm only asking whether the estimation done in many places serves a purpose, and what that value is. I would like to write something about this, based on my observations. However, my observations are, by nature, quite limited. I would therefore love to get your feedback on how you do estimation. I would basically like to see if my theories are supported.

I will publish the results and my thoughts and assumptions during the fall. If there is material for it I will also do a talk on the subject.

Please help me out by taking part in this short survey.

Thank You!


I am currently working with a rather complex e-commerce system. It is a web application archive which uses Spring 2.5 throughout to wire up the application. I need to expose some of its functionality as a REST API to other applications. I am using Spring 3, Jersey and javax.inject to make this possible.

But to understand and verify that my interaction with the underlying e-commerce system works, I have to redeploy the whole application to a server. This is very expensive, even when using such a brilliant tool as JRebel from ZeroTurnaround, which saves me from restarting the server. Many of the integration points are built up of several small steps that need to be done in order to ensure that the function works. To debug errors, which happens often, I have to redeploy, test and redeploy again.

To speed up this process I am using a technique called exploratory testing. Instead of writing production code which runs on a server and then testing this production code, I create unit tests that act against the e-commerce APIs I need to use. This removes the need to redeploy and restart a web server every time I discover an issue with the integration.

This is made possible using two simple constructs. One is the @RunWith annotation provided in JUnit 4. It enables the tests to run with a different test runner. Spring provides a SpringJUnit4ClassRunner that can load a Spring application context into the normal JVM. The other is the Spring annotation @ContextConfiguration, which is used to point the test runner to the correct Spring configuration to use.
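As a minimal sketch of what such a test could look like: the CatalogueService interface and the context file name below are hypothetical stand-ins for the real e-commerce API, not the actual code from the project.

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import static org.junit.Assert.assertFalse;

// Run the test with Spring's runner so the application context is
// loaded into the same JVM, with no deployment to a server.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:ecommerce-test-context.xml") // hypothetical context file
public class ProductCatalogueExplorationTest {

    @Autowired
    private CatalogueService catalogueService; // hypothetical entry point into the e-commerce API

    @Test
    public void retrievesProductsFromTheMainCatalogue() {
        // Exercise the third party API directly and inspect the result
        assertFalse(catalogueService.findProducts("main-catalogue").isEmpty());
    }
}
```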

Exploratory testing has other advantages too.

The tests are not throwaway tests. They live their own life in a Maven project which is part of the larger reactor build, and they are executed automatically on our CI server before all other projects. If the e-commerce system is changed, or we need to upgrade it, we only need to run the exploratory tests to verify that our application will still work.

Another great benefit is that the tests also provide executable documentation. They help clarify assumptions that are used in the production code and tests, which otherwise would have to be written down as text documentation or comments. And we all know what usually happens with comments and text documentation.

The conclusion is that whenever you are faced with understanding a complex third party API, exploratory testing almost always pays off. If not for the time saved on redeploying and restarting an enterprise server, then at the very least for regression testing and documentation.

I have recently been working on an e-commerce project. We needed to create an extension to a third party e-commerce application whose source code is open to customers and partners. It is a pretty comprehensive and complex application.

The e-commerce domain is a pretty complex one. A system designed for a global market, with all the national tax rules, shipping rules and other variations, plus the requirement to adjust prices according to customers and running campaigns, makes for a very complex system. On top of this it needs to integrate with, and in large parts understand, the logistics and stock keeping domain, provide customer service interfaces, and do all the other things that come with running a reputable global web store.

When I started planning for the extension I read the documentation and the JavaDocs and looked at the classes and database schemas. This gave me the initial impression that the code would be domain driven. They have borrowed some of the patterns from domain driven design. It looked promising.

As I dug deeper I started looking at the tests. In my experience a well crafted system can often be understood by reading the tests. Unfortunately this was not the case. There were not many tests and the tests that were there did not provide me with the information I needed.

The next step was then to start on the first of my user stories: to retrieve a product catalogue and return it to my client. Looking at the domain this seemed pretty simple. I could clearly see which objects I needed to work with. But I could not find any repositories, factories, or services that would give me access to the objects.

I talked with my contact at the third party vendor. I specifically asked for repositories, factories and services. I also mentioned that I had got the impression that the application had a domain driven design. He talked with some of the developers and came back to me sounding a bit embarrassed. He had asked about domain driven design. The development team had told him that, well, in part, perhaps, a little. It was kind of a work in progress. No, I would have to use their service tier. And to understand how the service tier worked I would have to look at their controller tier. And to understand the controllers I had to look at the view beans. And then there was all the configuration hidden in the Spring XML. They also said that it might prove quite time consuming and difficult to do what I intended to do, since most of the logic was actually part of the view, which was exactly what I needed to replace. The system wasn't really designed to do what I had set out to do.

Ouch. It was a rather awkward moment for my contact at the vendor. He didn't like giving me this message. He is a pretty technical guy and he knows when code sucks.

I spent the next three weeks just untangling how they put together their request from the view to the search server so that I could retrieve products matching a product catalogue. It took me a disproportionate amount of time just to figure out how they configured the database settings.

So how come I am telling this story?

With the knowledge and understanding I now have of the code base, I can read the history of this application. I can clearly see the different struggles they have had since they set out to knock up a quick web store in Java. How they made the decision to sacrifice code cleanliness and testing to maximise speed. How they had to scramble to meet new customer feature requests, never quite having time to tidy up the mess. At some point some time was invested in cleaning up part of the mess and releasing the source code to customers, to allow customers and partners to make modifications that would not be supported by the vendor.

This is not what I should see when working with a code base. This is not what the code was written for. It was written to solve the problems of online shopping on a global scale. When I read the code, that is what it should tell me: how they solved that problem.

My guess is that no one in the team that started this project knew quite what they were sacrificing. They didn't have the experience of a well working, clean code base. They had not worked with the confidence that a test driven, automated code base gives.

This is unfortunate. What if one of the initial senior architects or developers had pushed through clean, test driven code? I think a lot would have changed. As the system grew more complex and it became apparent that there was too much logic in the view, the team could gradually have implemented domain driven design. With a full regression test suite to support the change, the risk would have been minimal. Further, it could have been done in verticals that would have made sense to the reader. As new architectural challenges were encountered, for example scalability, the system would have been robust and tested enough to meet them in the best possible way, with ease and next to no risk.

For me it would have been easy to understand what I was looking at. My contact at the vendor, and probably the developers replying to my questions, could proudly have said that yes, it's a fully tested, domain driven architecture. If you look here and here you will find the repositories. And why don't you read through test suites x, y and z to see how we use them in our view tier. Perhaps I wouldn't even have had to ask.

So my lesson from this story is that the clean code and the tests were needed from the start. They should have been there in order for the code base to evolve. And it would have saved money and time, for them and for me, from the start.

When writing my article AMQP Hello World as an Exploratory Test I realised that the test is written in a different way than many tests I read. I felt that I should explain why I chose a different design when writing tests, since I think it would benefit many projects.

The code below is taken from the example provided in the AMQP article and can be found in my GitHub repository.

My main driver when writing tests is readability. A test should read pretty much like a well written requirement specification. It will detail the requirements for set up before testing can start. It will then detail the given state of the application, moving on to when something happens, and then verify that expectations were met. Before finishing, the application needs to exit gracefully.

When I work with tests in Java I tend to use JUnit 4. JUnit 4 does not need naming conventions such as setUp, testXyz and tearDown. Everything is driven by annotations, which frees up the namespace and allows the developer to name the methods according to what they do.

Below is the set up method for my test:
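(A rough sketch, assuming the RabbitMQ Java client; the names and broker settings are illustrative, not the exact code from the repository.)

```java
// A sketch, not the exact code from the repository. Assumes the RabbitMQ
// Java client (com.rabbitmq.client.*) and fields declared on the test class:
//   private Connection connection;
//   private Channel channel;
@Before
public void connectToBrokerAndOpenChannel() throws Exception {
    ConnectionFactory connectionFactory = new ConnectionFactory();
    connectionFactory.setHost("localhost"); // assumes a locally running broker
    connection = connectionFactory.newConnection();
    channel = connection.createChannel();
}
```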

The method name clearly states what the method does. The annotation clearly states which role the method plays in the test life cycle. This makes the test code easier to read and debug.

When working with the actual test methods I use names that clearly identify what the test verifies. Below is a test method from the example. The method name points out what the method tests. The content is then written in a given-when-then pattern to detail the steps that are required.
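(Again a sketch of the shape rather than the exact listing; the helper methods are hypothetical, named for the step they perform.)

```java
// Illustrative only: the method name states what is verified, and the
// body reads as given-when-then steps with the details pushed into helpers.
@Test
public void consumerReceivesMessagePublishedToTheQueue() throws Exception {
    givenAConsumerListeningOnTheQueue();
    whenAMessageIsPublishedToTheQueue("Hello World");
    thenTheConsumerReceivesMessage("Hello World");
}
```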

This way of working has a couple of quite useful side effects. When adding more tests, either as new test methods or as new test classes, it is easy to spot similarities which can be shared between tests. This in turn helps enable the creation of a domain language for the tests of the application. It is important, though, not to forget that tests should first and foremost be readable. Creating too much abstraction makes it harder to follow the test code.

The given-when-then methods clearly define the steps taken to perform the tests, but do not slow the reader down with large amounts of implementation detail. This makes the tests a good source of documentation. It furthermore makes it very easy to see if a test failed due to the state being changed before or during the operation that is under test.

The tear down method is, just like the set up method, named according to what it does rather than its function in the test life cycle. Its function in the test cycle is defined by its annotation.
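(In the same sketched style as the set up method above, it might look like this.)

```java
// A sketch mirroring the set up: named for what it does, with @After
// marking its role in the test life cycle.
@After
public void closeChannelAndDisconnectFromBroker() throws Exception {
    channel.close();
    connection.close();
}
```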

I find that this style of test design makes the tests much easier to read, maintain and understand. They also work much better as documentation when trying to understand generalised, well written code.