Tag Archives: tdd

For some reason I could not get Karma and Angular to work with Jasmine on my machine, and there was precious little debug help. So instead I decided to try out QUnit as the test runner inside Karma. This required some puzzling together of different blogs and other instructions, so I thought I'd put it here for future reference.

First of all I created a new project directory. This is where all commands are executed from and all paths start from.

Next we will need the libraries Angular and karma-qunit. Let's start with Angular. Download it from the website; I chose the zip which includes all the Angular files. Expand the zip into lib/angular.

To install Karma I use npm.
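Something along these lines does the trick (karma-qunit is the QUnit adapter; installing karma alongside it keeps both local to the project):

    npm install karma karma-qunit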

This will install karma-qunit into the project folder. I prefer this to using a global version, since it happens all too often that tools break when upgraded, and having them locally means that I can control which version is used for each project. The drawback is that you need to issue the command ./node_modules/karma/bin/karma every time you want to run Karma. To make life easier for yourself you can add an alias for Karma like so:
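A one-liner along these lines (assuming the project-local install above):

    alias karma='./node_modules/karma/bin/karma'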

Put the line at the bottom of your ~/.zshrc or ~/.bash_profile to ensure it's always loaded.

Next we need to configure Karma so that it can find the source code and libraries. This will also allow us to configure which browsers should be captured during testing. To do this we will use Karma's built-in init function. Run the command karma init and follow the instructions on the screen. Look at the config file below for guidance when answering the questions.
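A config along these lines matches the setup in this post (a sketch; the lib/angular paths assume the layout above, and Firefox is simply the browser I happened to capture):

    // karma.conf.js
    module.exports = function(config) {
      config.set({
        basePath: '',
        frameworks: ['qunit'],
        files: [
          // Served to the browser but never watched: the Angular libraries do not change.
          {pattern: 'lib/angular/angular.js', watched: false, included: true, served: true},
          {pattern: 'lib/angular/angular-mocks.js', watched: false, included: true, served: true},
          'js/*.js',
          'test/*.js'
        ],
        exclude: [
          'lib/angular/*.min.js',
          'lib/angular/angular-scenario.js'
        ],
        reporters: ['progress'],
        port: 9876,
        colors: true,
        logLevel: config.LOG_INFO,
        autoWatch: true,
        browsers: ['Firefox'],
        captureTimeout: 60000,
        singleRun: false
      });
    };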

Now that we have a configuration file there are a couple of things that may need changing. Open the file karma.conf.js in a text editor and compare it to the example config above. I made the following changes to mine:

  • In the files section there are two exploded values, i.e. values in {}. When specifying a file pattern to Karma you can either use a string or expand it to set some properties. Since we will not change the files we serve from Angular there is no need to set watchers on them, which is why they are exploded. These lines should be first in the files list, as shown in the snippet after this list (make sure that the same file pattern is not in the list twice).

  • Make sure the unneeded Angular files are in the exclude list; see the snippet after this list.
  • If you want to see more output from the running tests you can set the logLevel value to config.LOG_DEBUG.
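The files and exclude entries might look like this (a sketch; the exact Angular file names depend on the bundle you unpacked into lib/angular):

    files: [
      // Exploded patterns: served to the browser but not watched.
      {pattern: 'lib/angular/angular.js', watched: false, included: true, served: true},
      {pattern: 'lib/angular/angular-mocks.js', watched: false, included: true, served: true},
      'js/*.js',
      'test/*.js'
    ],
    exclude: [
      // Keep the minified builds and the end-to-end scenario runner out of the way.
      'lib/angular/*.min.js',
      'lib/angular/angular-scenario.js'
    ],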

You should now be able to start Karma from the root of your project with karma start. When it starts it will report an error:

Firefox 26.0.0 (Mac OS X 10.9): Executed 0 of 0 ERROR (0.235 secs / 0 secs)
This is as it should be: there are no tests available yet, and this is reported as an error. You can now leave Karma running. It will detect changes to the files in js/ and test/ and rerun any tests it finds.

Now let's add our first test. To make sure the test runner works with Karma we'll start with a "non test". Since I will be testing a file called controllers.js when I write the real tests, I'll add the test to the file test/controllerTest.js.
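A "non test" can be as simple as this (QUnit 1.x global API):

    // test/controllerTest.js
    test('the test runner works', function() {
      ok(true, 'QUnit is wired up');
    });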

When I save the file Karma detects the change and runs the test. This should show up in your Karma console:

Firefox 26.0.0 (Mac OS X 10.9): Executed 1 of 1 SUCCESS (0.651 secs / 0.001 secs)

Let's add some real test code instead. I am going to create an application called freecycleApp. It will have a controller called NodeController. To load this into the test there is some scaffolding required, so I'll add the following to test/controllerTest.js:
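Something along these lines (a sketch; the variable names match the description below):

    var injector, ctrl, scope;

    module('NodeController', {
      setup: function() {
        // Create an injector that loads the application module.
        injector = angular.injector(['ng', 'freecycleApp']);
        scope = {};
        // Instantiate the controller with our scope object.
        injector.invoke(['$controller', function($controller) {
          ctrl = $controller('NodeController', {$scope: scope});
        }]);
      }
    });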

The module call sets up the scaffolding required to launch your Angular controller: the injector loads the application, ctrl is your controller, and scope is the data object passed to the controller.

The first test added to test/controllerTest.js looks like this:
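For example (scope.nodes is an assumption about the controller's contract):

    test('NodeController provides a list of nodes', function() {
      ok(scope.nodes, 'the controller put a node list on the scope');
    });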

The test will fail, complaining that there is no module called freecycleApp, so we had better create it. The production code goes into the file js/controllers.js.
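A minimal version might look like this:

    // js/controllers.js
    var freecycleApp = angular.module('freecycleApp', []);

    freecycleApp.controller('NodeController', ['$scope', function($scope) {
      // Expose the (for now empty) list of nodes on the scope.
      $scope.nodes = [];
    }]);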

This will create the Angular module freecycleApp and the controller NodeController, which provides a list of nodes.

This should set you up to start test driving your Angular development using QUnit. Have fun!

I use nosetests when running my Python tests. It is a really neat little tool that will automatically discover and run all tests in the project with one single command: nosetests.

But it has more neatness than this. With a simple option it will also print test coverage for the project: --with-coverage. And if coverage should be calculated based on specific packages, use --cover-package, which takes a list of packages to calculate coverage on. This is useful when there are library packages for a virtualenv or similar within the same directory as you run nose.
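For example (mypackage is a placeholder for your own package name):

    nosetests --with-coverage --cover-package=mypackage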

I don't like to leave loose ends, so to ensure that there are no lingering compiled *.pyc files sitting around I run the following script before commit:
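A sketch consistent with the description below:

    #!/bin/sh
    # Delete stale compiled files, then run the tests with coverage.
    find <source-dir> -name '*.pyc' -delete
    nosetests --with-coverage --cover-package=<package>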

Here <source-dir> should be replaced with the directories where the source files are (a space-delimited list), and <package> with the packages which should have coverage statistics calculated.

Most of us have at least heard about the DRY principle. But how is it applied in a pragmatic way in software development?

I will take you through my thoughts as I work with an application, to illustrate how, when and why I apply it, as well as where it may be better not to.

The code is in Python. It is not a fully working application, since the application isn't open source.

The example application is a simple stock management application. It has warehouses. Each warehouse has sections, and each section contains SKUs (Stock Keeping Units). It also has a catalogue with categories and products. Each product is bound to one or more SKUs.

I am using SQLAlchemy as the ORM, which also maintains the database schema using declarative_base. In the tests I am using fixture to create data fixtures that can be injected into the database when executing the tests. Since the application is so simple, SQLite is a sufficient database engine.

The first requirement is to create a function that will find a warehouse using a warehouse ID. The first step is to start the database and inject data into it. This is done in the setUp() method of the test, like so:
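A sketch of what that setUp() looks like (assuming Base, Warehouse and Section come from the application's model module, and the DataSet classes are defined alongside the test):

    import unittest
    from sqlalchemy import create_engine
    from fixture import SQLAlchemyFixture

    class WarehouseDaoTest(unittest.TestCase):

        def setUp(self):
            # An in-memory SQLite database is enough for these tests.
            self.engine = create_engine('sqlite://')
            Base.metadata.create_all(self.engine)
            # Map the DataSet classes to the domain classes they populate.
            self.db_fixture = SQLAlchemyFixture(
                env={'WarehouseFixture': Warehouse, 'SectionFixture': Section},
                engine=self.engine)
            self.data = self.db_fixture.data(WarehouseFixture, SectionFixture)
            self.data.setup()

        def tearDown(self):
            self.data.teardown()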

The WarehouseFixture and SectionFixture are fixture.DataSet classes; Warehouse and Section are domain classes.

This makes it possible to query data through the DAO and then assert the results using the fixtures.

After completing the other tiers needed in the application to expose this as a JSON front end, I need to create some functional tests. They will need to inject the data into the database in the same way that the DAO test did. Then the functional tests will validate the JSON response from the server using the fixtures.

I could, and I know some would, copy and paste the function calls above into a new setUp() method in the functional test. But doing so would disconnect the functional tests from the fixtures and the DAO tests. Since the DAO tests and fixtures will evolve with the data model and database, we don't want the functional tests to lose that connection. Instead I pull some functions from the DAO test case up into stand-alone functions, which can then be used from the functional tests as well as the DAO tests.
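The extraction might look like this (a sketch; names are illustrative, and Base and SQLAlchemyFixture are imported as before):

    # Stand-alone helpers, usable from any test module.
    def setup_database(engine, env, fixtures):
        Base.metadata.create_all(engine)
        db_fixture = SQLAlchemyFixture(env=env, engine=engine)
        data = db_fixture.data(*fixtures)
        data.setup()
        return data

    def teardown_database(data):
        data.teardown()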

We now have generic functions that can be called in different, unrelated, tests. And the code is DRY.

Next it is time to add some new DAO tests. This DAO will be in a new module which handles the catalogue. The previous refactoring did not require decoupling from the warehouse (both the DAO and the functional tests test the warehouse), so it took place within the warehouse dataaccess test module. Since this will create a new, decoupled module, the test modules should not be coupled either. To enable this we create a utility module called data_fixtures with a new class called DataFixtureTestCase, like so:
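A sketch of the class (the property checks follow the description below):

    # data_fixtures.py
    from fixture import SQLAlchemyFixture

    class DataFixtureTestCase(object):

        def setupDatabase(self):
            # Fail fast if a subclass forgot to provide the required properties.
            assert getattr(self, 'engine', None) is not None, 'engine must be set'
            assert getattr(self, 'env', None), 'env must be set'
            assert getattr(self, 'fixtures', None), 'fixtures must be set'
            self.db_fixture = SQLAlchemyFixture(env=self.env, engine=self.engine)
            self.data = self.db_fixture.data(*self.fixtures)
            self.data.setup()

        def teardownDatabase(self):
            self.data.teardown()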

Since Python supports multiple inheritance it is possible to subclass this class together with AsyncHTTPTestCase when creating functional tests. In the test case's setUp() function, setupDatabase() is called to set up the database, and teardownDatabase() is called in tearDown() to tear it down. setupDatabase() also ensures that the required properties have been set before setting up the database.

To keep the connection from the functional tests to dataaccess, the functional test module imports the engine, env and fixtures properties from dataaccess.
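Put together, a functional test might look like this (a sketch; the import path and the make_app application factory are assumptions):

    from tornado.testing import AsyncHTTPTestCase
    from data_fixtures import DataFixtureTestCase
    # Reuse the same engine, env and fixtures as the DAO tests.
    from dataaccess import engine, env, fixtures

    class WarehouseFunctionalTest(DataFixtureTestCase, AsyncHTTPTestCase):

        def setUp(self):
            AsyncHTTPTestCase.setUp(self)
            self.engine, self.env, self.fixtures = engine, env, fixtures
            self.setupDatabase()

        def tearDown(self):
            self.teardownDatabase()
            AsyncHTTPTestCase.tearDown(self)

        def get_app(self):
            return make_app()  # hypothetical Tornado application factory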

The above refactorings help keep the code DRY whilst staying easy to understand. They also create a place where common assertions bound to data fixtures can live.

It should be noted that when working with tests, DRY is not always your friend. If the tests lose any of their expressiveness as "executing documentation" (the tests are the first stop for understanding production code), expressiveness wins any day. This is the reverse of production code, which must be kept DRY at all times.

I have recently been working on an e-commerce project. We needed to create an extension to a third-party e-commerce application. The source code is open to customers and partners. It is a pretty comprehensive and complex application.

The e-commerce domain is a pretty complex one. A system designed for a global market, with all the national tax rules, shipping rules and other variations, and with the requirement to adjust prices according to customers and running campaigns, makes for a very complex system. On top of this it needs to integrate with, and in large part understand, the logistics and stock-keeping domain, provide customer service interfaces, and all the other things that come with running a reputable global web store.

When I started planning for the extension I read the documentation and the JavaDocs and looked at the classes and database schemas. This gave me the initial impression that the code would be domain driven. They have borrowed some of the patterns from domain-driven design. It looked promising.

As I dug deeper I started looking at the tests. In my experience a well crafted system can often be understood by reading the tests. Unfortunately this was not the case. There were not many tests and the tests that were there did not provide me with the information I needed.

The next step was then to start on the first of my user stories: to retrieve a product catalogue and return it to my client. Looking at the domain this seemed pretty simple. I could clearly see which objects I needed to work with. But I could not find any repositories, factories or services that would give me access to the objects.

I talked with my contact at the third-party vendor. I specifically asked for repositories, factories and services, and I mentioned that I had got the impression that the application had a domain-driven design. He talked with some of the developers and came back to me sounding a bit embarrassed. He had asked about domain-driven design. The development team had told him that, well, in part, perhaps, a little. It was kind of a work in progress. No, I would have to use their service tier. And to understand how the service tier worked I would have to look at their controller tier. And to understand the controllers I had to look at the view beans. And then there was all the configuration hidden in the Spring XML. They also said that it might prove quite time consuming and difficult to do what I intended, since most of the logic was actually part of the view, which was exactly what I needed to replace. The system wasn't really designed to do what I had set out to do.

Ouch. It was a rather awkward moment for my contact at the vendor. He didn't like having to give me this message. He is a pretty technical guy and he knows when code sucks.

I spent the next three weeks just untangling how they put together their request from the view to the search server so that I could retrieve products matching a product catalogue. It took me a disproportionate amount of time just to figure out how they configured the database settings.

So how come I am telling this story?

With the knowledge and understanding I now have of the code base I can read the history of this application. I can clearly see the different struggles they have had since they set out to knock up a quick web store in Java. How they made the decision to sacrifice code cleanliness and testing to maximise speed. How they had to scramble to meet new customer feature requests, never quite having time to tidy up the mess. At some point some time was invested in cleaning up part of the mess and releasing the source code to customers, to allow customers and partners to make modifications that would not be supported by the vendor.

This is not what I should see when working with a code base. This is not what the code was written for. It was written to solve the problems with online shopping on a global scale. When I read the code that is what it should tell me. How they solved that problem.

My guess is that no one in the team that started this project knew quite what they sacrificed. They didn't have the experience of a well-working clean code base. They had not worked with the confidence that a test-driven, automated code base gives.

This is unfortunate. What if one of the initial senior architects/developers had pushed through clean, test-driven code? I think a lot would have changed. As the system grew more complex and it became apparent that there was too much logic in the view, the team could gradually have implemented domain-driven design. With a full regression test suite to support the change the risk would have been minimal. Further, it could have been done in verticals that would have made sense to the reader. As new architectural challenges were encountered, for example scalability, the system would have been robust and tested enough to meet them in the best possible way, with ease and next to no risk.

For me it would have been easy to understand what I was looking at. My contact at the vendor, and probably the developers replying to my questions, could proudly have said that yes, it's a fully tested, domain-driven architecture. If you look here and here you will find the repositories. And why don't you read through test suites x, y and z to see how we use them in our view tier. Perhaps I wouldn't even have had to ask.

So my lesson from this story is that the clean code and the tests were needed from the start. They should have been there in order for the code base to evolve. And it would have saved money and time. For them and for me. From the start.

A little while back I wrote about test driving Tornado development. This is all well and good, but how about getting instant feedback? If you are developing on a Mac or a Linux machine this is certainly possible. I found this great blog post on how to set up a change monitor on a directory that will alert you to test failures using Growl or Notify. Thanks to Rodrigo Pimentel for the write-up.