Tag Archives: unit testing

I use nosetests to run my Python tests. It is a really neat little tool that automatically discovers and runs all tests in the project with a single command: nosetests.

But it has more neatness than this. With a simple option it will also print test coverage for the project: --with-coverage. And if coverage should be calculated based on specific packages, use --cover-package, which takes a list of packages to calculate coverage on. This is useful when there are library packages for a virtualenv or similar within the same directory as you run nose.
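For example, restricting coverage to a couple of packages looks like this (the package names are just placeholders):

    nosetests --with-coverage --cover-package=myapp,myapp.utils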

I don't like to leave loose ends, so to ensure that there are no lingering compiled *.pyc files sitting around I run the following script before commit:
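A minimal sketch of such a script, assuming a POSIX shell with find available:

    #!/bin/sh
    # Remove stale compiled files so no orphaned *.pyc files linger
    # after source files have been moved or deleted.
    find <source-dir> -name '*.pyc' -delete
    # Run the tests with coverage for the selected packages.
    nosetests --with-coverage --cover-package=<package>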

Where <source-dir> should be replaced with a space-delimited list of the directories where the source files are, and <package> with the packages that should have coverage statistics calculated.

The façade pattern is one of my most used patterns. It works as the glue both within an application and as a boundary to dependencies.

It mainly comes in handy when solving the following problems:

  • Testability
  • Fluid design and readable APIs
  • Protection against external forces and changes

Testability
Every so often test-driven development becomes extremely expensive because the interfaces the code depends on are impossible to test against. This may just as well be internal legacy libraries as external ones. To remedy the problem I use a façade interface that provides the operations I need. I then mock the expected behaviour of the operations when writing the code that requires them. With some careful planning the binding of the library implementation to my façade can then be done without too much hassle.
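As a sketch, assuming mockito for Python and with hypothetical names throughout, a test written against such a façade might look like this:

    from mockito import mock, when, verify

    def bill_customer(gateway, account_id, amount):
        # The code under test depends only on the facade operation it needs.
        return gateway.charge(account_id, amount)

    def test_bill_customer_charges_through_the_facade():
        gateway = mock()
        when(gateway).charge('acct-1', 100).thenReturn('receipt')
        assert bill_customer(gateway, 'acct-1', 100) == 'receipt'
        verify(gateway).charge('acct-1', 100)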

Fluid design and readable APIs
Each component within an application strives to resolve its own domain. This often results in the boundaries becoming a bit awkward. To help create fluidity in design and APIs it is therefore often useful to create façades matching each domain's language, plus a thin layer of bindings that glues the libraries together.

Protection against external forces and changes
When using external libraries, apart from the above issues with testability and fluidity, façades also help reduce the cost of upgrading dependencies. Whenever an external library changes, the only code in the application that needs to change is the binding to the façade.

The overhead for getting the above benefits is actually remarkably low. It is usually enough to create an interface that delegates its operations to the library it is a façade for. The binding can then be done using dependency injection or factories.
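For instance, a delegating façade and its binding can be this small (the payment gateway and the library client here are hypothetical, just to illustrate the shape):

    class PaymentGateway(object):
        """Facade interface the application codes against."""
        def charge(self, account_id, amount):
            raise NotImplementedError

    class StripePaymentGateway(PaymentGateway):
        """Binding that delegates to the external library."""
        def __init__(self, client):
            self._client = client  # injected, e.g. via a factory

        def charge(self, account_id, amount):
            # Translate the application's language into the library's API.
            # create_charge is a hypothetical library call.
            return self._client.create_charge(customer=account_id,
                                              cents=amount)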

Even when it's more difficult it's usually only a matter of composition. I have sometimes had to compose the logic of the integration on a couple of abstraction levels, but it has still been well worth it since it has simplified the usage of the API considerably.

It is a shame that I don't get to see this pattern used more often. It would have reduced cost and complexity on many occasions.


Ever wanted Maven to print the test output to the terminal, to save yourself from digging around in the Surefire report directory to find out what happened with the failing tests? I have many times, but had never invested the time to find out how until now.

It's pretty straightforward. Set the useFile flag to false in the Surefire plug-in. To also skip the XML report output you can set disableXmlReport to true. Then no reports will be created when running the tests. Use the profile below to enable this in your pom.
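A profile along these lines should do it (the plug-in version is omitted; adjust to your build):

    <profile>
      <id>dev</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
              <useFile>false</useFile>
              <disableXmlReport>true</disableXmlReport>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>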


To activate this profile you need to add -Pdev. You could have it active by default, but then your CI server would not get any test reports. And whilst no reports are fine on the developer machine, it's not very practical on the CI server.
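That is, run the tests like this:

    mvn test -Pdev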

A little while back I wrote about test driving Tornado development. This is all well and good, but how about getting instant feedback? If you are developing on a Mac or a Linux machine this is certainly possible. I found this great blog post on how to set up a change monitor on a directory that will alert you to test failures using Growl or Notify. Thanks Rodrigo Pimentel for the write-up.


The past week I have had the pleasure of writing a small web-based system in Python. Being a devout TDD practitioner, the first step in learning Python was of course to understand the unittest module. The next thing was to see what mocking frameworks are available. To my delight I found that mockito is available for Python as well.

The application I'm working on is a web service API for accessing our application using Cloud to Device Messaging (C2DM) from Google. To provide the web API I am using the Tornado framework.

As web frameworks go they are usually tricky to test drive. Not so with Tornado. It ships with its own testing framework that allows the developer to set up a round-trip request/response cycle within a unit test using the tornado.testing module. Given clear module boundaries and mockito to stub out dependencies, it's an easy thing to test drive Tornado web application development.

The tornado.testing module comes with a couple of very useful classes. I used AsyncHTTPTestCase to test the RequestHandler classes. To make sure that only failing tests write to the log I also inherited LogTrapTestCase.

Below is some sample code written to allow device registration with our application server. First comes the test and then the implementation.

The tests are in the module c2dm_registration_service_test.py:
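A minimal sketch of such a test, where the /register URL, the argument names and the create_application helper are my assumptions rather than the original listing:

    # c2dm_registration_service_test.py
    from mockito import mock, verify
    from tornado.testing import AsyncHTTPTestCase, LogTrapTestCase

    import c2dm_registration_service


    class C2dmRegistrationServiceTest(AsyncHTTPTestCase, LogTrapTestCase):

        def get_app(self):
            # Stub out the registrar so the test only drives the handler.
            self.registrar = mock()
            return c2dm_registration_service.create_application(self.registrar)

        def test_registration_is_delegated_to_the_registrar(self):
            response = self.fetch(
                '/register', method='POST',
                headers={'Content-Type': 'application/x-www-form-urlencoded'},
                body='device_id=123&registration_id=abc')
            self.assertEqual(200, response.code)
            verify(self.registrar).register('123', 'abc')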

The production code is in the module c2dm_registration_service.py:
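And a matching sketch of the implementation, under the same assumptions:

    # c2dm_registration_service.py
    import tornado.web


    class RegistrationHandler(tornado.web.RequestHandler):

        def initialize(self, registrar):
            self.registrar = registrar  # injected so tests can stub it

        def post(self):
            device_id = self.get_argument('device_id')
            registration_id = self.get_argument('registration_id')
            self.registrar.register(device_id, registration_id)


    def create_application(registrar):
        return tornado.web.Application([
            (r'/register', RegistrationHandler, dict(registrar=registrar)),
        ])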

Added to this is one more class in the module c2dm.py. So far it is just a stub:
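Something along these lines (the class and method names are again assumptions):

    # c2dm.py
    class Registrar(object):
        """Stub for the C2DM integration; the real sending comes later."""

        def register(self, device_id, registration_id):
            pass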

With this knowledge I can't find any excuse not to test drive web development in Python. It is such an easy task and takes no time at all. It is well worth the effort even when creating pilots, which most likely will end up becoming production code in the not too distant future.