Monthly Archives: May 2011

I have recently been working on an e-commerce project. We needed to create an extension to a third-party e-commerce application whose source code is open to customers and partners. It is a pretty comprehensive and complex application.

The e-commerce domain is a pretty complex one. A system designed for a global market, with all the national tax rules, shipping rules and other variations, plus the requirement to adjust prices according to customers and running campaigns, makes for a very complex system. On top of this it needs to integrate with, and in large parts understand, the logistics and stock-keeping domain, provide customer service interfaces and everything else that comes with running a reputable global web store.

When I started planning for the extension I read the documentation and the JavaDocs and looked at the classes and database schemas. This gave me the initial impression that the code was domain driven; they have borrowed some of the patterns from domain driven design. It looked promising.

As I dug deeper I started looking at the tests. In my experience a well-crafted system can often be understood by reading the tests. Unfortunately this was not the case here. There were not many tests, and the tests that were there did not give me the information I needed.

The next step was to start on the first of my user stories: retrieve a product catalogue and return it to my client. Looking at the domain this seemed pretty simple. I could clearly see which objects I needed to work with, but I could not find any repositories, factories or services that would give me access to them.

I talked with my contact at the third-party vendor. I specifically asked for repositories, factories and services, and mentioned that I had got the impression that the application had a domain driven design. He talked with some of the developers and came back sounding a bit embarrassed. He had asked about domain driven design, and the development team had told him: well, in part, perhaps, a little. It was kind of a work in progress. No, I would have to use their service tier. And to understand how the service tier worked I would have to look at their controller tier. And to understand the controllers I had to look at the view beans. And then there was all the configuration hidden in the Spring XML. They also said that it might prove quite time consuming and difficult to do what I intended to do, since most of the logic was actually part of the view, which was exactly what I needed to replace. The system wasn't really designed to do what I had set out to do.

Ouch. It was a rather awkward moment for my contact at the vendor; he didn't like giving me that message. He is a pretty technical guy and he knows when code sucks.

I spent the next three weeks just untangling how they put together the request from the view to the search server, so that I could retrieve products matching a product catalogue. It took me a disproportionate amount of time just to figure out how they configured the database settings.

So why am I telling this story?

With the knowledge and understanding I now have of the code base I can read the history of this application. I can clearly see the different struggles they have had since they set out to knock together a quick web store in Java. How they made the decision to sacrifice code cleanliness and testing to maximise speed. How they had to scramble to meet new customer feature requests, never quite having time to tidy up the mess. At some point some time was invested in cleaning up part of the mess and releasing the source code to customers, to allow customers and partners to make modifications that would not be supported by the vendor.

This is not what I should see when working with a code base. This is not what the code was written for. It was written to solve the problems of online shopping on a global scale, and when I read the code that is what it should tell me: how they solved that problem.

My guess is that no one in the team that started this project quite knew what they were sacrificing. They didn't have the experience of a well-functioning, clean code base. They had not worked with the confidence that a test-driven, automated code base gives.

This is unfortunate. What if one of the initial senior architects or developers had pushed through clean, test-driven code? I think a lot would have changed. As the system grew more complex and it became apparent that there was too much logic in the view, the team could gradually have implemented domain driven design. With a full regression test suite to support the change the risk would have been minimal. Further, it could have been done in verticals that would have made sense to the reader. As new architectural challenges were encountered, for example scalability, the system would have been robust and tested enough to meet them in the best possible way, with ease and next to no risk.

For me it would have been easy to understand what I was looking at. My contact at the vendor, and probably the developers replying to my questions, could proudly have said that yes, it's a fully tested, domain driven architecture. If you look here and here you will find the repositories. And why don't you read through test suites x, y and z to see how we use them in our view tier. Perhaps I wouldn't even have had to ask.

So my lesson from this story is that the clean code and the tests were needed from the start. They should have been there in order for the code base to evolve. And it would have saved money and time. For them and for me. From the start.

Have you ever got a ClassNotFoundException when running tests even though the dependency that provides the class is clearly in your pom file? This is most commonly due to collisions in transitive dependencies. If Maven cannot decide which version of a dependency to use it will not import it; it expects you to sort the problem out first.
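To sketch how this typically happens (the coordinates below are made up purely for illustration), two direct dependencies can each drag in a different version of the same library:

    <!-- Hypothetical example: both declared dependencies pull in
         com.example:commons-util transitively, but at different versions. -->
    <dependencies>
      <dependency>
        <groupId>com.example</groupId>
        <artifactId>payment-client</artifactId>
        <version>1.2</version>   <!-- pulls in commons-util 2.0 -->
      </dependency>
      <dependency>
        <groupId>com.example</groupId>
        <artifactId>catalogue-api</artifactId>
        <version>3.1</version>   <!-- pulls in commons-util 1.4 -->
      </dependency>
    </dependencies>

Nothing in this pom looks wrong at first glance, which is exactly why the resulting ClassNotFoundException is so confusing.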

Handling transitive dependencies should always be done proactively, and there is some really good tooling for it. Being an IntelliJ IDEA developer myself, it saddens me to say that in this instance my favourite IDE is outdone by NetBeans. NetBeans uses Maven pom files as its project descriptors, which makes it very easy to open the project.

When the project is opened in NetBeans you will find the pom.xml file located under project files in the project navigator. Once it is opened, you will find an item named "Show Dependency Graph" in the context menu. This will render a graphical representation of all the project's transitive dependencies.

If any of the dependencies has a red top-left corner, there is a version conflict. The quickest way to resolve it is to open the context menu for the dependency and select "Fix Version Conflict". Unless you have a specific preference that differs from the default suggestion in the dialogue, it is usually fine to go with the default solution.
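However the conflict gets resolved, the usual end result is that your own pom states explicitly which version you want. One way to do that, sketched here with the same made-up coordinates, is to pin the version in dependencyManagement:

    <dependencyManagement>
      <dependencies>
        <!-- Pin the transitive library to one version for the whole project
             (hypothetical coordinates; adjust to the real conflicting artifact). -->
        <dependency>
          <groupId>com.example</groupId>
          <artifactId>commons-util</artifactId>
          <version>2.0</version>
        </dependency>
      </dependencies>
    </dependencyManagement>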

If any of the dependencies has a yellow top-left corner, more than one version of that dependency is requested. Maven will always choose the latest version in the list. This can, and should, be overridden using the exclusions tag. I tend to take control over all such dependencies by making sure I clearly choose which one should be included. You have to do this manually, by excluding the dependency from each of the dependencies that pulls in the version you do not want, and then ensuring that the dependency requiring the correct version is the only one still including it. Sometimes it's better to exclude it as a transitive dependency altogether and add it as a direct dependency.
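A sketch of that approach, again with made-up coordinates: the unwanted version is excluded where it is pulled in, and the version you want is declared directly:

    <dependencies>
      <!-- Exclude the transitive version we do not want... -->
      <dependency>
        <groupId>com.example</groupId>
        <artifactId>catalogue-api</artifactId>
        <version>3.1</version>
        <exclusions>
          <exclusion>
            <groupId>com.example</groupId>
            <artifactId>commons-util</artifactId>
          </exclusion>
        </exclusions>
      </dependency>
      <!-- ...and state the version we do want as a direct dependency. -->
      <dependency>
        <groupId>com.example</groupId>
        <artifactId>commons-util</artifactId>
        <version>2.0</version>
      </dependency>
    </dependencies>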

Whenever a new dependency is added to the project I make sure to perform the same procedure to ensure that I have full control of the dependencies in my project.

I would be very pleased to find such a great tool in IDEA as well; then I wouldn't have to run two IDEs at the same time, since they require a lot of resources.