Tag Archives: automation

I have been at many places, in many teams, where no one has ownership of the build scripts. It is an area where a company can bleed a vast amount of time and money without realising it. It may even be what makes senior members leave in the end, out of frustration with too much inaction.

What is unfortunate is that it is relatively simple to fix. Sure, there are risks associated with any legacy clean-up. But getting the build to work is far simpler than many other tasks in a legacy codebase. I find that the main reason it is not done is that the team can't communicate the value to management (including product owners). Therefore it gets low priority and ends up at the bottom of the backlog.

I have always found that one of the best ways to communicate value to management is in money: put forward a business case with a clear ROI (return on investment). When it comes to build scripts this is a pretty easy task. First, ask everyone on the team who uses the build scripts to keep track of how much time they spend on build-related work. How much time do they spend working around a broken or non-functional build? Try to count only time which is spent unnecessarily. Once you have enough data to know what the broken build costs on a weekly/monthly/quarterly basis (whichever suits your needs), it is time to look at how much it will cost to fix the build. I suspect the ratio will be quite scary. To take a purely hypothetical example: if five developers each lose four hours a week to build problems, that is roughly half a full-time developer every week, compared with what is often only a few weeks of focused work to fix the build. The difference is how much you lose each week/month/quarter. Hopefully this gets management's attention and priority.

Now that fixing the build is at the top of the backlog, we need to make sure it doesn't fall into disrepair again. This is where ownership comes into play. In a well-functioning, self-organising team this is never an issue. Everyone owns the build. Everyone makes sure the build is just as snappy as everything else.

But what if you don't work in a senior or self-organising team? Who should be responsible? A good place to start is the team. Perhaps someone feels strongly about it; passion goes a long way. Granted, you can't have just one person who owns the build. That knowledge is far too important not to be shared between at least two people. But with one person in the lead it is usually easier for the others to follow. Make sure to pair often to distribute knowledge.

If it's difficult to find someone who wants to take responsibility, the team should make sure to pass it around. Share the pain to lessen it for everyone. The main responsibility will, however, always fall on the architect. A working build is part of the non-functional requirements that make up an architect's main concerns. How will you, as the architect, be able to guarantee that the solution you are the master builder of will ever reach its intended user or customer if it cannot be built?

Now, go and fix that build and remove the pain. It is so worth it. And remember, with a working build you'll have so much more time to code, and that is what's really fun!

The other day a colleague asked if I had ever tried to set up any of my GitHub projects using one of the free CI services available. I hadn't but thought it an interesting thing to do. It proved to be a ridiculously easy task.

I could find two service providers on the market. One is BuildHive from CloudBees (http://www.cloudbees.com/), the company behind the CI server Jenkins. The other is called Travis CI and is provided by Travis, a CI company that is new to me and that also seems to offer some enterprise services in the cloud.

I have used Jenkins a lot in the past and it's a very good CI server. The company behind it, CloudBees, also offers an enterprise service which I think is a very good fit for any company that wants first-class support with its CI and CD needs. However, I chose to go with Travis since they requested fewer permissions on my GitHub account. I have not yet contacted CloudBees to find out why they want write access to my account settings, followers and private repositories when I just want to build public ones, but I'd like to know.

There are three simple steps to go through to set up Travis CI with your GitHub projects:

  1. Log in to Travis CI using your GitHub OpenID.
  2. Enable the project you want to build in the Travis CI control panel.
  3. Add the .travis.yml build script file matching the needs of your build to your project on GitHub (a minimal example is sketched after this list).
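
For a plain Maven project like mine the whole file can be a handful of lines. The sketch below is illustrative only; the JDK version is an assumption and should be adjusted to whatever your project actually targets.

```yaml
# Minimal .travis.yml sketch for a Maven-based Java project (illustrative).
# Travis CI's Java builder knows how to drive Maven; the explicit script line
# simply mirrors the plain "mvn install" build this post talks about.
language: java
jdk:
  - openjdk7
script: mvn install
```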

It is literally this simple. Granted, the project I selected has no special requirements. It builds nicely (well, actually it doesn't, since I have test failures) with just a simple mvn install command. But I think this is remarkably easy and something that everyone should try, if for no other reason than to experience how simple some things in software can be when done well.

Thanks Travis CI for such a spotless integration!


I am currently working with a rather complex e-commerce system. It is a web application archive which uses Spring 2.5 throughout to load and wire the application. I need to expose some of its functionality as a REST API to other applications. I am using Spring 3, Jersey and javax.inject to make this possible.

But to understand and verify that my interaction with the underlying e-commerce system works, I have to redeploy the whole application to a server. This is very expensive, even when using such a brilliant tool as JRebel from ZeroTurnaround, which saves me from restarting the server. Many of the integration points are built up of several small steps that need to be done in order to ensure that the function works. To debug errors, which happens often, I have to redeploy, test and redeploy again.

To speed up this process I am using a technique called exploratory testing. Instead of writing production code which runs on a server and then testing this production code, I create unit tests that act against the e-commerce APIs I need to use. This removes the need to redeploy and restart a web server every time I discover an issue with the integration.

This is made possible using two simple constructs. One is the @RunWith annotation provided in JUnit 4, which lets the tests run with a different test runner; Spring provides a SpringJUnit4ClassRunner that can load a Spring application context into the normal JVM. The other is the Spring annotation @ContextConfiguration, which points the test runner to the correct Spring configuration to use.
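
Put together, a minimal exploratory test might look something like the sketch below. The context file name, the ProductService bean and its getProductCatalogue method are hypothetical placeholders standing in for whatever the third-party API actually exposes.

```java
import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

// Runs in a plain JVM: the test runner loads the Spring context itself,
// so no server deployment is needed to explore the third-party API.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:exploratory-test-context.xml") // hypothetical config file
public class ProductCatalogueExplorationTest {

    @Autowired
    private ProductService productService; // hypothetical bean exposed by the e-commerce API

    @Test
    public void retrievesAProductCatalogue() {
        // Each test asks one small, focused question about how the API behaves.
        assertNotNull(productService.getProductCatalogue("default-catalogue"));
    }
}
```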

Exploratory testing has other advantages too.

The tests are not throw-away tests. They live their own life in a Maven project which is part of the larger reactor build, and they are executed automatically on our CI server before all other projects. If the e-commerce system is changed, or we need to upgrade it, we only need to run the exploratory tests to verify whether our application will still work.

Another great benefit is that the tests also provide executable documentation. They help clarify assumptions that are used in the production code and tests, and which otherwise would have to be written down as text documentation or comments. And we all know what usually happens with comments and text documentation.

The conclusion is that whenever you are faced with understanding a complex third-party API, exploratory testing almost always pays off. If not for the time saved on redeploying and restarting an enterprise server, then at the very least for regression testing and documentation.

I have recently been working on an e-commerce project. We needed to create an extension to a third-party e-commerce application whose source code is open to customers and partners. It is a pretty comprehensive and complex application.

The e-commerce domain is itself complex. A system designed for a global market has to handle national tax rules, shipping rules and other variations, as well as the requirement to adjust prices for individual customers and running campaigns. On top of this it needs to integrate with, and in large parts understand, the logistics and stock-keeping domain, provide customer service interfaces and everything else that comes with running a reputable global web store.

When I started planning for the extension I read the documentation and the JavaDocs and looked at the classes and database schemas. This gave me the initial impression that the code would be domain driven; they have borrowed some of the patterns from domain-driven design. It looked promising.

As I dug deeper I started looking at the tests. In my experience a well-crafted system can often be understood by reading the tests. Unfortunately this was not the case. There were not many tests, and the tests that were there did not provide me with the information I needed.

The next step was then to start on the first of my user stories: retrieve a product catalogue and return it to my client. Looking at the domain this seemed pretty simple. I could clearly see which objects I needed to work with. But I could not find any repositories, factories or services that would give me access to those objects.

I talked with my contact at the third-party vendor. I specifically asked for repositories, factories and services, and mentioned that I had got the impression that the application had a domain-driven design. He talked with some of the developers and came back to me sounding a bit embarrassed. He had asked about domain-driven design. The development team had told him that, well, in part, perhaps, a little; it was kind of a work in progress. No, I would have to use their service tier. And to understand how the service tier worked I would have to look at their controller tier. And to understand the controllers I had to look at the view beans. And then there was all the configuration hidden in the Spring XML. They also said that it might prove quite time-consuming and difficult to do what I intended to do, because most of the logic was actually part of the view, which was exactly what I needed to replace. The system wasn't really designed to do what I had set out to do.

Ouch. It was a rather awkward moment for my contact at the vendor. He didn't like to give me this message. He is a pretty technical guy and he knows when the code sucks.

I spent the next three weeks just untangling how they put together their request from the view to the search server so that I could retrieve products matching a product catalogue. It took me a disproportionate amount of time just to figure out how they configured the database settings.

So how come I am telling this story?

With the knowledge and understanding I now have of the code base I can read the history of this application. I can clearly see the different struggles they have had since they set out to knock up a quick web store in Java. How they made the decision to sacrifice code cleanliness and testing to maximise speed. How they had to scramble to meet new customer feature requests, never quite having time to tidy up the mess. At some point some time was invested in cleaning up part of the mess and releasing the source code to customers, to allow customers and partners to make modifications that would not be supported by the vendor.

This is not what I should see when working with a code base. This is not what the code was written for. It was written to solve the problems with online shopping on a global scale. When I read the code that is what it should tell me. How they solved that problem.

My guess is that no one in the team that started this project knew quite what they were sacrificing. They didn't have the experience of a well-working, clean code base. They had not worked with the confidence that a test-driven, automated code base gives.

This is unfortunate. What if one of the initial senior architects or developers had pushed through clean, test-driven code? I think a lot would have changed. As the system grew more complex and it became apparent that there was too much logic in the view, the team could gradually have implemented domain-driven design. With a full regression test suite to support the change, the risk would have been minimal. Further, it could have been done in verticals that would have made sense to the reader. As new architectural challenges were encountered, for example scalability, the system would have been robust and tested enough to handle them in the best possible way, with ease and next to no risk.

For me it would have been easy to understand what I was looking at. My contact at the vendor, and probably the developers replying to my questions, could proudly have said that yes, it's a fully tested, domain-driven architecture: if you look here and here you will find the repositories, and why don't you read through test suites x, y and z to see how we use them in our view tier. Perhaps I wouldn't even have had to ask.

So my lesson from this story is that the clean code and the tests were needed from the start. They should have been there in order for the code base to evolve. And they would have saved money and time, for them and for me, from the start.

A little while back I wrote about test-driving Tornado development. This is all well and good, but how about getting instant feedback? If you are developing on a Mac or a Linux machine this is certainly possible. I found this great blog post on how to set up a change monitor on a directory that will alert you to test failures using Growl or Notify. Thanks, Rodrigo Pimentel, for the write-up.