For some reason I could not get Karma and Angular to work with Jasmine on my machine, and there was precious little debug help. So instead I decided to try out QUnit as the test runner inside Karma. This required some puzzling together of different blogs and other instructions, so I thought I'd put it here for future reference.

First of all I created a new project directory. This is where all commands are executed from and all paths start from.

Next we will need the libraries Angular and karma-qunit. Let's start with Angular. Download it from the website. I chose the zip which includes all the Angular files. Expand the zip into lib/angular.

To install Karma I use npm. Run the following command:
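    # assuming the adapter's npm package name; it pulls in Karma itself as well
    npm install karma-qunit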

This will install karma-qunit into the project folder. I prefer this to using a global version, since it happens all too often that tools break when upgraded, and having them locally means that I can control which version is used for each project. The drawback is that you need to issue the command ./node_modules/karma/bin/karma every time you want to run Karma. To make life easier for yourself you can add an alias for Karma like so:
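    # something along these lines:
    alias karma='./node_modules/karma/bin/karma'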

Put the line at the bottom of your .zshrc or .bash_profile to ensure it's always loaded.

Next we need to configure Karma so that it can find the source code and libraries. This will also allow us to configure which browsers should be captured during testing. To do this we will use Karma's built-in init function. Run the command karma init and follow the instructions on the screen. Look at the config file below for guidance when answering the questions.
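Mine ended up looking roughly like this (the exact Angular file names depend on the contents of the zip, so treat them as assumptions):

    // karma.conf.js
    module.exports = function (config) {
      config.set({
        basePath: '',
        frameworks: ['qunit'],
        files: [
          // served but not watched, see below
          {pattern: 'lib/angular/angular.js', watched: false},
          {pattern: 'lib/angular/angular-mocks.js', watched: false},
          'js/*.js',
          'test/*.js'
        ],
        exclude: [
          'lib/angular/angular-scenario.js'
        ],
        reporters: ['progress'],
        port: 9876,
        colors: true,
        logLevel: config.LOG_INFO,
        autoWatch: true,
        browsers: ['Firefox'],
        singleRun: false
      });
    };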

Now that we have a configuration file there are a couple of things that may need changing. Open the file karma.conf.js in a text editor and compare it to the example config above. I made the following changes to mine:

  • In the files section there are two exploded values, i.e. values in {}. When specifying a file pattern to Karma you can either use a plain string or expand it to set some properties. Since we will not change the files we serve from Angular there is no need to set watchers on them, which is why they are exploded. The exploded lines, shown in the excerpt after this list, should be first in the files list (make sure that the same file pattern is not in the list twice).

  • Make sure the files named in the excerpt after this list are in the exclude list.
  • If you want to see more output from the running tests you can set the logLevel value to config.LOG_DEBUG.
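Here are the relevant lines from my config (again, adjust the Angular file names to the contents of your lib/angular):

    // first in the files list:
    {pattern: 'lib/angular/angular.js', watched: false},
    {pattern: 'lib/angular/angular-mocks.js', watched: false},

    // in the exclude list:
    'lib/angular/angular-scenario.js'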

You should now be able to start Karma from the root of your project with karma start. When it starts it will report an error:

Firefox 26.0.0 (Mac OS X 10.9): Executed 0 of 0 ERROR (0.235 secs / 0 secs)
This is as it should be. There are no tests available and this is reported as an error. You can now leave Karma running. It will detect changes to the files in js/ and test/ and rerun any tests it finds.

Now let's add our first test. To make sure the test runner works with Karma we'll start with a "non test". Since I will be testing a file called controller.js when I write the real tests I'll add the test to the file test/controllerTest.js.
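Something along these lines will do, using QUnit's global API:

    // test/controllerTest.js -- a "non test" to prove the runner works
    test('the test runner works', function () {
      ok(true, 'karma picked up and ran this test');
    });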

When I save the file Karma detects the change and runs the test. This should show up in your Karma console:

Firefox 26.0.0 (Mac OS X 10.9): Executed 1 of 1 SUCCESS (0.651 secs / 0.001 secs)

Let's add some real test code instead. I am going to create an application called freecycleApp. It will have a controller called NodeController. To load this into the test some scaffolding is required, so I'll add the following to test/controllerTest.js:
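A sketch of that scaffolding (the QUnit module name is my own choice):

    var injector, ctrl, scope;

    module('NodeController', {
      setup: function () {
        // load the application module together with Angular's core 'ng' module
        injector = angular.injector(['ng', 'freecycleApp']);
        injector.invoke(function ($rootScope, $controller) {
          scope = $rootScope.$new();
          ctrl = $controller('NodeController', {$scope: scope});
        });
      }
    });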

The module call will allow you to set up the scaffolding required to launch your Angular controller. The injector will load the application, the ctrl is your controller and the scope is the data object passed to the controller.

The first test added to test/controllerTest.js looks like this:
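Mine looked something like this (the nodes property is an assumption, matching the controller described below):

    test('NodeController provides a list of nodes', function () {
      ok(scope.nodes, 'the scope should hold the list of nodes');
    });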

The test will fail, complaining that there is no module called freecycleApp, so we had better create it. The production code goes into the file js/controller.js.
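A minimal sketch that makes the test pass:

    // js/controller.js
    var freecycleApp = angular.module('freecycleApp', []);

    freecycleApp.controller('NodeController', function ($scope) {
      // expose a (for now empty) list of nodes
      $scope.nodes = [];
    });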

This will create the angular module freecycleApp and the controller NodeController, which returns a list of nodes.

This should set you up to start test driving your Angular development using QUnit. Have fun!

Legacy code is important. Not just to us as programmers, and the companies we happen to be working for. It has much larger implications than that. A couple of years ago I heard in a podcast that software has a 98% penetration rate in our everyday lives (I can unfortunately not find the reference now). The figure was for the northern hemisphere, but the south is catching up fast. This means that our legacy as programmers, our legacy code, has an impact on pretty much every part of the society we live in. It affects us all every day of our lives. At home, at work, going in between and pretty much everywhere else. I think we often forget this when we hack away at the task at hand. I certainly often do. We should take credit for all the great things we are able to create. But we also have to take responsibility for the bad. It is up to us to ensure that we create great software.

Because of this I think it's important to analyse how we deal with legacy code. Can we make sure that the code we leave behind does not become a burden, a liability, but an asset? If we can, then how do we do it? And if we can't, then how can we ensure that the cost is as low as possible?

Before I dig into this subject I will try to define what I mean by legacy code. It's a term that is used by many and in many different contexts. Looking around online I find Wikipedia's definition:

"Legacy code is source code that relates to a no-longer supported or manufactured operating system or other computer technology."

Another well known definition is Michael Feathers's, from his book Working Effectively with Legacy Code:

"To me, legacy code is simply code without tests."

These two are in a way two ends of a spectrum, where Wikipedia's definition is rather weak compared to how we use the term in our profession, and Feathers's definition is rather harsh, considering that most programmers don't test drive their work and therefore create legacy as soon as they sit down at the keyboard.

There are a couple of other interesting pages on the subject that I have stumbled over in my search for a good definition, one being the C2 wiki page, which I found the most inspirational, the other /*Programmers*/, which discusses the subject in quite some detail.

My definition is:

Any part of a system, or a whole system, that is hard or impossible to change when the requirements for it change in such a way that the code needs changing to continue to serve its users in the best possible way.

This means that code becomes legacy as soon as the context in which the code functions changes and the code cannot keep up with the change in an easy way. By the context I mean anything that affects the code: user requirements, regulations and the technical environment, as well as anything else that affects the code's function. Easy to change means that the difficulty of the change is proportionate to the perceived impact of the change. Some things are harder to change, for example a storage solution. Others are simpler, for example UI widgets or fields.

This makes all legacy code a bad legacy. That is how we use the term in our everyday work. Therefore we also have to try to minimise the legacy code we create.

I have found two possible ways to deal with this. One is test first, or TDD. The other is to ensure that each component, each function, is small enough and decoupled enough to be thrown away in its entirety when it needs changing. I'll look closer at each in turn.

Test first, or TDD, is a way to work with the low level code as well as with the higher level requirements by writing tests first. We have all heard the debate about this. I think this way of working is the only way to test all the important aspects of the code so that no regression slips through when we change them. It is, as far as I know, the only way that we can keep a system agile and easy to change. In other words, the only way that we can keep a system from becoming legacy. Higher level tests are bound to requirements, lower level tests are bound to implementation on a class and method level.

What I have also found is that sometimes, when the tests are too hard to change to fit new requirements, it is possible to throw them and the accompanying code away. The tests provide the documentation needed to rework the subsystem according to the new and old requirements, and the high level tests will catch functional regression. For this to work properly everyone has to realise that all code is design, every single line, and that everyone is responsible for the design.

I know that not everyone thinks they can do test first. And perhaps it is not required, as long as all requirements are covered in functional, BDD style, tests. In my experience as a test first person this is not enough, but perhaps that is because I am used to having the safety net further down, closer to the implementation.

The other way to avoid costly legacy is to make sure that each piece of legacy is so small that it is disposable when requirements change. I realised this quite recently while getting to grips with micro service architectures. The way I understand them, the concept of a "class" becomes pretty much a service. Sure, the classes we normally create are much smaller, but the higher level classes that we aggregate to provide a function of the system, the "public" interface, can be defined as services. They are then bound together using a message or service bus instead of the normal object messaging used in classic class oriented systems. This means that it's an external orchestrator that presents the system of functions to the outside world. Tests are still essential and I believe they should be written first. But tests in this context are functional tests which verify the service's functions as provided to the bus and to other services. Here BDD and similar methods are excellent. And the boundaries are made very clear. It also means that one service can be written in one language and the next in another. When a service needs to change it is quite possible to throw it away and create a new one. Chances are it's even better when reworked a second or third time.
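A toy sketch of the idea (the bus, the topic names and the service are all hypothetical):

    // a minimal in-process message bus standing in for a real service bus
    var bus = {
      handlers: {},
      subscribe: function (topic, handler) {
        (this.handlers[topic] = this.handlers[topic] || []).push(handler);
      },
      publish: function (topic, message) {
        (this.handlers[topic] || []).forEach(function (handler) {
          handler(message);
        });
      }
    };

    // a "class as a service": it only talks to the world through the bus,
    // so it can be rewritten, or thrown away, without touching its neighbours
    bus.subscribe('nodes.list', function (message) {
      bus.publish('nodes.listed', {requestId: message.requestId, nodes: []});
    });

    // the orchestrator, or any other service, speaks to it the same way
    bus.subscribe('nodes.listed', function (message) {
      console.log('nodes for request', message.requestId, message.nodes);
    });
    bus.publish('nodes.list', {requestId: 42});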

The risk with this approach is of course that the orchestration becomes so complex that the legacy is introduced on a higher level instead. It is, however, often easier to manage complexity on higher levels than further down.

There are of course situations when code does not have to be all squeaky clean. PoCs, pilots and other early incarnations of systems are often put together in a hurry and can often be thrown away as a learning experience. The problem with legacy arises when we risk losing important functionality. So it is our responsibility not to give in to pressure from others to let systems with too much, badly controlled technical debt go live. This is not easy, but I think we have to in order to build a reputation as professionals.

I hope to get your comments on the subject, so please share your thoughts below or in your own blog with a pingback.

I got the opportunity to go to Øredev last week. Thanks to Martin Stenlund, the CEO at the consultancy Diversify where I work, I got a full three day pass. I’ll try to summarise here what I found most valuable.

There were a couple of talks about Agile and how to bring Agile forward that I found very refreshing. They go below the surface, the process, the method and so on, and look at the underlying values that can empower us if we only let them. There was the Mob Programming talk by Woody Zuill and also Implementing Programmer Anarchy by Fred George. Both of them focused on what can happen when you let the team get the job done and get out of the way. As Zuill put it, it's not about making great software, it's about creating an environment where great software is inevitable. Much of this is about competence, both on the business side and on the programmers' side. If either is not advanced enough to work in this way it will fail. But we can still strive to educate each other and ourselves to become knowledgeable enough to work like this. If you have the opportunity I warmly recommend the talk by Fred George; it has unfortunately not been published online yet. You can see the Mob Programming talk here.

The most surprising talk I attended was one on performance testing by Scott Barber. The reason I was surprised was that I had planned to see a different talk, about Netflix architecture, but went to the wrong room. That was a stroke of luck: the talk was an eye-opener on how we can do performance testing better, and much more easily. What I brought with me was the thought that performance testing is something that should be done from the very start to the very end, on every level. Scott pushed it as far down as the unit level. Measure how long each operation takes by logging start and end time. You can then use the data to quickly pinpoint where and when a bottleneck was introduced. My first thought when he said this was to write a new JUnit runner which logs start and end time for each test. This can be applied on all levels where JUnit is the test framework, which in Java is almost everywhere. If you have the opportunity to see this guy's talk it's worth the time. He is a good presenter with a very strong and contagious passion for performance.

I of course watched Adam Petersen Tornhill's talk Code as a Crime Scene. He puts forward an interesting argument on how to track bug hotspots, places in dire need of fixing, using forensic psychology. What it really boils down to is to look at what changes the most at the same time, and has the most code in it. This is probably the place which will have the most issues. He also introduces a tool he has written which helps do the investigation by analysing SCM history. It's available on GitHub. There is more to it than this, and the talk is available online, so please do watch it.

The last session I will mention here was J.B. Rainsberger's talk Extreme Personal Finance. In it he goes through how he managed to become financially independent without winning the lottery. What stuck with me was a method to calculate how much of my life I spend to be able to do something. As an example he mentions that his habitual coffee on the way to work cost him 23 minutes of his life each morning. Was it worth it? It is also well worth a watch, and available online.

There were of course several other good sessions, some I missed and some I saw. But the ones above have had the most impact on me and will stick with me. All in all it was a very good conference and I am very grateful that I had the opportunity to go.

The other day a colleague asked if I had ever tried to set up any of my GitHub projects using one of the free CI services available. I hadn't but thought it an interesting thing to do. It proved to be a ridiculously easy task.

I could find two service providers on the market. One is BuildHive from CloudBees (http://www.cloudbees.com/), the company behind the CI server Jenkins. The other is called Travis CI and is provided by Travis, a CI company, new to me, which also seems to offer some enterprise services in the cloud.

I have used Jenkins a lot in the past and it's a very good CI server. The company behind it, CloudBees, also offers an enterprise service which I think is a very good fit for any company that wants first class support with their CI and CD needs. However, I chose to go with Travis since they requested fewer permissions to my GitHub account. I have not contacted CloudBees yet to find out why they want write access to my account settings, followers and private repositories when I just want to build public ones, but I'd like to know.

There are three simple steps to go through to set up Travis CI with your GitHub projects:

  1. Log in to Travis CI using your GitHub OpenId.
  2. Enable the project you want to build in the Travis CI control panel.
  3. Add the .travis.yml build script file matching the needs of your build to your project on GitHub (see the example below).
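For a Maven project like mine the file can be tiny. A minimal sketch, assuming the stock Java defaults (Travis then builds the project with Maven on its own):

    language: java
    jdk:
      - oraclejdk7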

It is literally this simple. Granted, the project I selected has no special requirements. It builds nicely (well, actually it doesn't, since I have test failures) with just a simple mvn install command. But I think this is remarkably easy and something that everyone should try. If for no other reason than just to have the experience of how simple some things in software can be when done well.

Thanks Travis CI for such a spotless integration!

I had the pleasure to organise the first ever Øresund Software Craftsmanship Group Meeting on the 12th of September 2013. We were 14 attendees who met at Avega Group's office in Malmö. We opened with some beer and chit chat.

We then started the meeting with me presenting why I think this community is important and why I organised the meeting. Then we proceeded to the main topic of the night:

What is Software Craftsmanship?

We spent a good hour discussing this against the backdrop of the manifesto. The conversation started with the question of whether practicing TDD is required to be a software craftsman. This was quickly linked to the notion of well-crafted software, and the conversation moved on to trying to define this. Testing had a clear focus here.

The group then proceeded to talk about how to steadily add value for the customer through code: upgrading infrastructure, language and tool versions and so forth, and when this is a good decision rather than a decision made by a developer who thinks new technology is cool.

After the break with pizza and beer we reconvened to sum up and talk about what to do next.

It was decided that the group will move to FooCafe and we will meet the second Thursday of every month.

The following topics came up as possible for future meetings:

  • Tools and practices used to create well crafted code
  • What personal skills are required of a software craftsman
  • War Stories - tales from trying to implement SC in different contexts
  • TDD
  • BDD
  • Clean Code - Book and films
  • Clean Coder book
  • Study group about other books
  • Design Patterns Group
  • Interesting Speakers

For next meeting, the 10th of October, the subject will be:
Clean Code Episode 6 - TDD - Part 1
More info and registration at FooCafe