Tag Archives: testing

I use nosetests when running my Python tests. It is a really neat little tool that automatically discovers and runs all the tests in a project with a single command: nosetests.

But it has more neatness than this. With a simple option, --with-coverage, it will also print test coverage for the project, and if coverage should be calculated based on specific packages you can use --cover-package, which takes a list of packages to calculate coverage for. This is useful when library packages for a virtualenv or similar live in the same directory you run nose from.

I don't like to leave loose ends, so to ensure that there are no lingering compiled *.pyc files sitting around I run the following script before commit:
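A minimal sketch of such a script, assuming a POSIX shell and a find that supports -delete, could look like this:

    #!/bin/sh
    # remove lingering compiled files from the source directories
    find <source-dir> -name "*.pyc" -delete
    # run the tests and report coverage for the listed packages
    nosetests --with-coverage --cover-package=<package>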

Here <source-dir> should be replaced with a space-delimited list of the directories where the source files are, and <package> with the packages for which coverage statistics should be calculated.

Most of us have at least heard about the DRY principle. But how is it applied in a pragmatic way in software development?

I will take you through my thoughts as I work with an application to illustrate how, when and why I apply it, as well as where it may be better not to.

The code is in Python. It is not a fully working application, since the real application isn't open source.

The example application is a simple stock management application. It has warehouses. Each warehouse has sections, and each section contains SKUs (Stock Keeping Units). It also has a catalogue with categories and products. Each product is bound to one or more SKUs.

I am using SQLAlchemy as the ORM, which also maintains the database schema using declarative_base. In the tests I am using the fixture library to create data fixtures that can be injected into the database when executing the tests. Since the application is so simple, SQLite is a sufficient database engine.
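To make the examples below concrete, here is a rough sketch of what the domain classes and data fixtures might look like; the column names and fixture values are assumptions for illustration, not the actual application code:

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.ext.declarative import declarative_base
    from fixture import DataSet

    Base = declarative_base()

    # domain classes mapped with declarative_base
    class Warehouse(Base):
        __tablename__ = 'warehouse'
        id = Column(Integer, primary_key=True)
        name = Column(String)

    class Section(Base):
        __tablename__ = 'section'
        id = Column(Integer, primary_key=True)
        warehouse_id = Column(Integer, ForeignKey('warehouse.id'))
        name = Column(String)

    # data fixtures that can be injected into the database by the tests
    class WarehouseFixture(DataSet):
        class main_warehouse:
            id = 1
            name = 'Main warehouse'

    class SectionFixture(DataSet):
        class first_section:
            id = 1
            warehouse_id = 1
            name = 'Section A'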

The first requirement is to create a function that will find a warehouse using a warehouse ID. The first step is to start the database and inject data into it. This is done in the setUp() method in the test like so:
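A sketch of such a setUp(), assuming the classes sketched above and an in-memory SQLite engine:

    from fixture import SQLAlchemyFixture
    from sqlalchemy import create_engine

    def setUp(self):
        # create the schema in an in-memory SQLite database
        self.engine = create_engine('sqlite:///:memory:')
        Base.metadata.create_all(self.engine)
        # map the DataSet classes to the domain classes and load the data
        self.db_fixture = SQLAlchemyFixture(
            engine=self.engine,
            env={'WarehouseFixture': Warehouse, 'SectionFixture': Section})
        self.data = self.db_fixture.data(WarehouseFixture, SectionFixture)
        self.data.setup()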

The WarehouseFixture and SectionFixture are fixture.DataSet classes, while Warehouse and Section are domain classes.

This makes it possible to query data through the DAO and then assert the results using the fixtures.

After completing the other tiers needed in the application to expose this as a JSON front end I need to create some functional tests. They will need to inject the data into the database in the same way the DAO test did. Then the functional tests will validate the JSON response from the server using the fixtures.

I could, and I know some would, copy and paste the function calls above into a new setUp() method in the functional test. But doing so would disconnect the fixtures and database set up used in the functional tests from those used in the DAO tests. Since the DAO tests and fixtures will evolve with the data model and database, we don't want the functional tests to lose that connection. Instead I pull some functions up from the DAO test case into stand-alone functions, which can then be used from the functional tests as well as the DAO tests.
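As a hypothetical sketch, such stand-alone functions could wrap the same calls as the setUp() above; the names setup_database and teardown_database are illustrative only:

    def setup_database(engine):
        # create the schema and load the data fixtures
        Base.metadata.create_all(engine)
        db_fixture = SQLAlchemyFixture(
            engine=engine,
            env={'WarehouseFixture': Warehouse, 'SectionFixture': Section})
        data = db_fixture.data(WarehouseFixture, SectionFixture)
        data.setup()
        return data

    def teardown_database(engine, data):
        # remove the fixture data and drop the schema again
        data.teardown()
        Base.metadata.drop_all(engine)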

We now have generic functions that can be called in different, unrelated, tests. And the code is DRY.

Next it is time to add some new DAO tests. This DAO will be in a new module which handles the catalogue. Since the previous refactoring did not require decoupling from the warehouse (both the DAO and the functional tests test the warehouse), it took place within the warehouse dataaccess test module. Since the catalogue will be a new, decoupled module, its test modules should not be coupled to the warehouse tests. To enable this we create a utility module called data_fixtures with a new class called DataFixtureTestCase, like so:
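A sketch of what data_fixtures.DataFixtureTestCase could look like; the engine, env and fixtures property names follow the text, while the method bodies and the model import are assumptions:

    from fixture import SQLAlchemyFixture

    from model import Base  # assumed module holding the declarative Base


    class DataFixtureTestCase(object):

        # subclasses must set these before setupDatabase() is called
        engine = None    # SQLAlchemy engine to run the tests against
        env = None       # mapping of DataSet class names to domain classes
        fixtures = None  # tuple of fixture.DataSet classes to load

        def setupDatabase(self):
            # ensure the required properties have been set before setting up
            assert self.engine is not None, "engine must be set"
            assert self.env and self.fixtures, "env and fixtures must be set"
            Base.metadata.create_all(self.engine)
            self.db_fixture = SQLAlchemyFixture(engine=self.engine, env=self.env)
            self.data = self.db_fixture.data(*self.fixtures)
            self.data.setup()

        def teardownDatabase(self):
            # remove the fixture data and drop the schema
            self.data.teardown()
            Base.metadata.drop_all(self.engine)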

Since Python supports multiple inheritance it is possible to subclass this class together with AsyncHTTPTestCase when creating functional tests. In the test case's setUp() method, setupDatabase() is called to set up the database, and teardownDatabase() is called in tearDown() to tear it down. setupDatabase() also ensures that the required properties have been set before setting up the database.

To keep the connection between the functional tests and dataaccess, the functional test module imports the engine, env and fixtures properties from dataaccess.

The above refactorings help keep the code DRY while remaining easy to understand. They also create a place where common assertions bound to the data fixtures can live.

It should be noted that when working with tests, DRY is not always your friend. If the tests lose any of their expressiveness as "executable documentation" (tests are the first stop for understanding production code), expressiveness wins any day. This is the reverse of production code, which must be kept DRY at all times.


I am currently working with a rather complex e-commerce system. It is a web application archive which uses Spring 2.5 throughout to wire up the application. I need to expose some of its functionality as a REST API to other applications. I am using Spring 3, Jersey and javax.inject to make this possible.

But to understand and verify that my interaction with the underlying e-commerce system works, I have to redeploy the whole application to a server. This is very expensive, even when using such a brilliant tool as JRebel from Zero Turnaround, which saves me from restarting the server. Many of the integration points are built up of several small steps that need to be done in order to ensure that the function works. To debug errors, which happens often, I have to redeploy, test and redeploy again.

To speed up this process I am using a technique called exploratory testing. Instead of writing production code which runs on a server and then testing this production code, I create unit tests that act against the e-commerce APIs I need to use. This removes the need to redeploy and restart a web server every time I discover an issue with the integration.

This is made possible using two simple constructs. One is the @RunWith annotation provided in JUnit 4. It enables the tests to run with a different test runner. Spring provides a SpringJUnit4ClassRunner that can load a Spring application context into the normal JVM. The other is the Spring annotation @ContextConfiguration which is used to point the test runner to the correct Spring configuration to use.
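A minimal sketch of such a test, with an assumed context location and an assumed PriceService bean standing in for the real e-commerce API:

    import static org.junit.Assert.assertNotNull;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

    // load the e-commerce Spring context straight into the test JVM
    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = "classpath:application-context.xml")
    public class PriceServiceExplorationTest {

        @Autowired
        private PriceService priceService; // hypothetical bean exposed by the wired context

        @Test
        public void calculatesAPriceForAKnownProduct() {
            // call the third party API directly in the JVM, no server deployment needed
            assertNotNull(priceService.priceFor("known-product-id"));
        }
    }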

Exploratory testing has other advantages too.

The tests are not throw-away tests. They live their own life in a Maven project which is part of the larger reactor build and is executed automatically on our CI server prior to all other projects. If the e-commerce system is changed, or we need to upgrade it, we only need to run the exploratory tests to verify that our application will still work.

Another great benefit is that the tests also provide executable documentation. They help clarify assumptions that are used in the production code and tests, and which otherwise would have to be written down as text documentation or comments. And we all know what usually happens with comments and text documentation.

The conclusion is that whenever you are faced with understanding a complex third party API, exploratory testing almost always pays off. If not for the time saved on redeploying and restarting an enterprise server, then at the very least for regression testing and documentation.

When writing my article AMQP Hello World as an Exploratory test I realised that the test is written in a different way than many tests I read. I felt that I should explain why I chose a different design when writing tests, since I think this would benefit many projects.

The code below is taken from the example provided in the AMQP article and can be found at my github repository.

My main driver when writing tests is readability. A test should read pretty much like a well written requirement specification. It will detail the requirements for set up before testing can start. It will then detail the given state of the application, move on to the point when something happens, and then verify that the expectations were met. Before finishing, the application needs to exit gracefully.

When I work with tests in Java I tend to use JUnit 4. JUnit 4 does not rely on naming conventions such as setUp, testXyz and tearDown. This is all driven with annotations, which frees up the namespace and allows the developer to name the methods according to what they do.

Below is the set up method for my test:
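A sketch along those lines, with assumed field, exchange and queue names (the real method lives in the repository linked above):

    // the method name says what it does, the @Before annotation says when it runs
    @Before
    public void connectToBrokerAndBindQueue() throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        connection = factory.newConnection();      // field assumed on the test class
        channel = connection.createChannel();      // field assumed on the test class
        channel.exchangeDeclare("test.exchange", "direct", false);
        queueName = channel.queueDeclare().getQueue(); // server-named, non-durable queue
        channel.queueBind(queueName, "test.exchange", "test.key");
    }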

The method name clearly states what the method does. The annotation clearly states which role the method plays in the test life cycle. This makes the test code easier to read and debug.

When working with the actual test methods I use names that clearly identify what the test verifies. Below is a test method from the example. The method name points out what the method tests. The content is then written in a given-when-then pattern to detail the steps that are required.
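As a sketch, with helper names that are assumptions rather than the ones in the repository, such a method could read:

    @Test
    public void shouldReceiveTheSameMessageThatWasSent() throws Exception {
        // given a message to send
        String message = "Hello World";
        // when it is published and picked up again
        sendMessage(message);
        String received = pickUpMessage();
        // then the received message matches the one sent
        assertEquals(message, received);
    }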

This way of working has a couple of quite useful side effects. When adding more tests, either as new test methods or as new test classes, it is easy to spot similarities which can be shared between tests. This in turn helps create a domain language for the tests of the application. It is important, though, not to forget that tests should first and foremost be readable. Creating too much abstraction makes it harder to follow the test code.

The given-when-then methods clearly define the steps taken to perform the test without slowing the reader down with large amounts of implementation detail. This makes the tests a good source of documentation. Furthermore, it makes it very easy to see whether a test failed due to the state being changed before or during the operation that is under test.

The tear down method is, just like the set up method, named according to what it does rather than its function in the test life cycle. Its function in the test life cycle is defined by its annotation.
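A sketch of such a tear down method, assuming the channel and connection fields created in the set up method:

    @After
    public void disconnectFromBroker() throws Exception {
        channel.close();
        connection.close();
    }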

I find that this style of test design makes the tests much easier to read, maintain and understand. They also work much better as documentation when trying to understand generalised well written code.

We have taken the decision to use Rabbit MQ as our internal message queue. To understand the way that Rabbit MQ talks with Java applications I decided to create a little Hello World application. Using Rabbit MQ's Java API guide I created the exploratory test below.

The complete code, in a Maven project, can be found on my github repository here.

The test starts by setting up the connection to the Rabbit MQ broker, which is running on localhost. Since the client comes with sensible defaults, nothing needs to be configured beyond the name, type and durability of the exchange. A queue is then declared and bound to the exchange on the channel so that the messages we send can be received.

The actual test only details the steps needed to verify that the message sent matches the message received. The implementation of each supporting method is discussed below.
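A sketch of such a test method, with supporting method names that are assumptions rather than the ones in the repository:

    @Test
    public void sentMessageShouldMatchReceivedMessage() throws Exception {
        String message = "Hello MQ";
        sendMessage(message);
        String receivedMessage = pickUpMessage();
        assertMessageWasReceived(message, receivedMessage);
    }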

To send a message to the MQ we simply publish it to the given exchange using the channel.
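A sketch of that call, assuming the exchange and routing key declared in the set up:

    // publish the message body to the exchange, using the routing key the queue is bound with
    channel.basicPublish("test.exchange", "test.key", null, message.getBytes());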

We then pick up the message using a consumer. When implementing this in production code the producer and the consumer should not run in the same thread, since both are blocking. But since we are only testing the availability of the MQ, the blocking actually works to our advantage.
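A sketch of the receive step, using the QueueingConsumer that shipped with the Java client at the time (it has since been deprecated); queueName is assumed to be the queue bound in the set up:

    // a blocking consumer; fine here since we only want to verify the round trip
    QueueingConsumer consumer = new QueueingConsumer(channel);
    channel.basicConsume(queueName, true, consumer);
    QueueingConsumer.Delivery delivery = consumer.nextDelivery();
    String receivedMessage = new String(delivery.getBody());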

The last step of the test is to assert that we received the same message as we sent:
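In a sketch, with assertEquals statically imported from org.junit.Assert, this is a single line:

    assertEquals(message, receivedMessage);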

When the test is done we need to close the channel and the connection:
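With the Java client this boils down to two calls, shown here as a sketch:

    channel.close();
    connection.close();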

The benefit of using this approach to understand new technology is that the tests can be put to use to verify the integrity of the libraries and the MQ when running automated integration tests, rather than just being throw-away code.

If you find the structure of the test code different you can find an explanation in this article which discusses the drivers behind the design as well as the given-when-then pattern used in the test method.