I am currently working with a rather complex e-commerce system. It is a web application archive that uses Spring 2.5 throughout to wire the application together. I need to expose some of its functionality as a REST API to other applications. I am using Spring 3, Jersey and javax.inject to make this possible.
But to understand and verify that my interaction with the underlying e-commerce system works, I have to redeploy the whole application to a server. This is very expensive, even when using such a brilliant tool as JRebel from Zero Turnaround, which saves me from restarting the server. Many of the integration points are built up of several small steps that need to be done in order to ensure that the function works. To debug errors, which happen often, I have to redeploy, test and redeploy again.
To speed up this process I am using a technique called exploratory testing. Instead of writing production code that runs on a server and then testing that production code, I create unit tests that act directly against the e-commerce APIs I need to use. This removes the need to redeploy and restart a web server every time I discover an issue with the integration.
This is made possible by two simple constructs. One is the @RunWith annotation provided in JUnit 4, which enables the tests to run with a different test runner. Spring provides a SpringJUnit4ClassRunner that can load a Spring application context in a plain JVM. The other is the Spring annotation @ContextConfiguration, which points the test runner to the correct Spring configuration to use.
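A minimal sketch of what such an exploratory test can look like. The context file location, the CatalogService bean and its methods are hypothetical placeholders for whatever the e-commerce system actually exposes; spring-test and JUnit 4 are assumed to be on the classpath.

```java
import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

// Runs the test with a Spring application context loaded in a plain JVM,
// no server deployment needed.
@RunWith(SpringJUnit4ClassRunner.class)
// Hypothetical file name; point this at the e-commerce system's real Spring config.
@ContextConfiguration(locations = "classpath:application-context.xml")
public class CatalogExplorationTest {

    @Autowired
    private CatalogService catalogService; // hypothetical bean wired by the context

    @Test
    public void shouldFindAProductByItsId() {
        // Exercises the real e-commerce API without deploying anything.
        assertNotNull(catalogService.findProductById("42"));
    }
}
```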
Exploratory testing has other advantages too.
The tests are not throwaway tests. They live their own life in a Maven project which is part of the larger reactor build. They are executed automatically on our CI server before all other projects. If the e-commerce system is changed, or we need to upgrade it, we only need to run the exploratory tests to verify that our application will still work.
Another great benefit is that the tests also provide executable documentation. They help clarify assumptions that the production code and tests rely on, which otherwise would have to be written down as text documentation or comments. And we all know what usually happens with comments and text documentation.
The conclusion is that whenever you are faced with understanding a complex third-party API, exploratory testing almost always pays off. If not for the time saved on redeploying and restarting an enterprise server, then at the very least for regression testing and documentation.
We have decided to use RabbitMQ as our internal message queue. To understand how RabbitMQ talks with Java applications I decided to create a little Hello World application. Using RabbitMQ's Java API guide I created the exploratory test below.
The complete code, in a Maven project, can be found on my github repository here.
The test starts by setting up the connection to the RabbitMQ broker, which is running on localhost. Since the client comes with sensible defaults, the connection does not need to be configured with anything but the name, type and durability of the exchange. A queue is then bound to the exchange through the channel to enable us to send and receive messages.
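A sketch of that setup using the RabbitMQ Java client; the exchange name and routing key are placeholders, and the connection relies entirely on the client's defaults apart from the host:

```java
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost"); // broker running locally

Connection connection = factory.newConnection();
Channel channel = connection.createChannel();

// Declare the exchange: name, type and durability are all we configure.
channel.exchangeDeclare("test-exchange", "direct", true);

// Bind a server-named queue to the exchange so we can receive what we send.
String queueName = channel.queueDeclare().getQueue();
channel.queueBind(queueName, "test-exchange", "test-key");
```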
The actual test only details the steps needed to verify that the message sent matches the message received. The implementation will be discussed in each of the supporting methods:
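In given-when-then form the test method itself can be sketched like this, where sendMessage and receiveMessage stand for the supporting methods described in the steps that follow:

```java
@Test
public void shouldReceiveTheSameMessageThatWasSent() throws Exception {
    // given
    String message = "Hello World";

    // when
    sendMessage(message);
    String received = receiveMessage();

    // then
    assertEquals(message, received);
}
```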
To send a message to the MQ we simply publish it to the given exchange using the channel:
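A sketch of the publishing step, assuming the channel field plus the exchange and routing key names from the setup:

```java
private void sendMessage(String message) throws IOException {
    // basicPublish takes the exchange, the routing key, optional message
    // properties (null here) and the message body as bytes.
    channel.basicPublish("test-exchange", "test-key", null, message.getBytes("UTF-8"));
}
```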
We then pick up the message using a consumer. When implementing this in production code, the producer and the consumer should not run in the same thread, since both are blocking. But since we are only testing the availability of the MQ, the blocking actually works to our advantage.
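A sketch of the consuming step using the client's blocking QueueingConsumer, whose nextDelivery call blocks the current thread until a message arrives (the channel and queueName fields are assumed from the setup):

```java
private String receiveMessage() throws Exception {
    QueueingConsumer consumer = new QueueingConsumer(channel);
    channel.basicConsume(queueName, true, consumer); // true = auto-acknowledge

    // Blocks until a delivery is available on the queue.
    QueueingConsumer.Delivery delivery = consumer.nextDelivery();
    return new String(delivery.getBody(), "UTF-8");
}
```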
The last step of the test is to assert that we received the same message as we sent:
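Which is a plain JUnit assertion comparing the two payloads:

```java
// The sent and received payloads must be identical.
assertEquals(message, received);
```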
When the test is done we need to close the channel and the connection:
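The teardown, assuming the channel and connection fields from the setup:

```java
// Close in reverse order of creation.
channel.close();
connection.close();
```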
The benefit of using this approach to understand new technology is that the tests can be put to use to verify the integrity of the libraries and the MQ when running automated integration tests, rather than just being throwaway code.
If you find the structure of the test code unfamiliar, you can find an explanation in this article, which discusses the drivers behind the design as well as the given-when-then pattern used in the test method.
I have been using mock objects for a very long time now. I started back in the days of EasyMock 1.x. It was a great help in removing the extra complexity of an enterprise server and its libraries when creating unit tests. And EasyMock has evolved tremendously since then. I followed this development closely, using it for several years.
However, about two years back a colleague of mine suggested I try out something new: a mocking framework that tasted as good as a well-shaken cocktail. The idea spoke to me. So I decided to try out Mockito.
I have never looked back since. Mockito offers a very simple way to create and verify mocks. It is transparent whether I am mocking classes or interfaces. And it provides a readable, easy-to-use API.
I find that I mock away far more boilerplate code in my tests now than I ever did with EasyMock. This is because Mockito provides the flexibility to verify only the method calls that need to be verified, and will not report errors on method calls that were not recorded, unless told otherwise. This makes it easy to mock away even value objects and data transfer objects when that makes the tests easier to read.
To create a mock object without recorded behaviour, three lines of code are required. Annotate the class with @RunWith(MockitoJUnitRunner.class). Then create a field for the mock object, annotated to define that it is a mock:

@Mock
private ClassToMock response;
That's it, it's ready to use. And any call to it will succeed unless specific behaviour is needed.
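Put together, a complete test might read like the sketch below. OrderService, PriceCalculator and their methods are hypothetical names standing in for your own production code; only the Mockito annotations and the when/verify calls are the real API.

```java
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class OrderServiceTest {

    @Mock
    private PriceCalculator priceCalculator; // hypothetical collaborator

    @Test
    public void shouldAskTheCalculatorForThePrice() {
        // Record behaviour only for the call we care about.
        when(priceCalculator.priceOf("book")).thenReturn(42);

        OrderService service = new OrderService(priceCalculator);
        service.order("book");

        // Verify just this interaction; unrecorded calls are not reported.
        verify(priceCalculator).priceOf("book");
    }
}
```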
Mockito also has support for a BDD-like syntax through the BDDMockito class. Since I use given-when-then as a way to create readable tests, this is a great help.
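A sketch of what that looks like: given/willReturn replaces Mockito's usual when/thenReturn, so the stubbing reads naturally inside a given block. The ProductRepository interface and the values are hypothetical.

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.mock;

import org.junit.Test;

public class ProductLookupTest {

    @Test
    public void shouldReturnTheStoredProductName() {
        // given
        ProductRepository repository = mock(ProductRepository.class); // hypothetical interface
        given(repository.nameOf("42")).willReturn("Hello World");

        // when
        String name = repository.nameOf("42");

        // then
        assertEquals("Hello World", name);
    }
}
```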
I am currently working on introducing the SLAMD Distributed Load Generation Engine as a replacement for JMeter for performance testing. JMeter has served us well and given us some very quick results to work with. Unfortunately we are now finding JMeter rather limiting, since we need to test a networked application with several different custom-built servers. We need to be able to monitor system load, such as CPU, memory and network usage, in combination with test data. SLAMD comes with such functionality out of the box.
SLAMD was initially developed at Sun to test LDAP servers (ldapd) but has since been open sourced and further developed to support different networked applications such as HTTP, SMTP, IMAP and POP.
It is an old and, in certain aspects, very dated code base though. As an example, the HTTP servlet presentation tier uses no JSPs; instead it creates all the view code in the servlets or servlet helpers.
Yet it does seem to be the best tool available on the market. It ships with a web administration and reporting interface, test client performance monitoring, application server monitoring and extensible APIs.
Right now I am working on extensions that map well to our domain, and I hope to have a performance test system running soon with custom test jobs that are easy to administer and maintain. It is good to be back in the coding trenches again after too much manual performance testing using JMeter. And being able to automate what was previously done manually is always a good feeling.
If you have any previous experience working with SLAMD that you would like to share, please post a comment/link.
Yesterday I went to a breakfast seminar at Jayway's office here in Malmö. Hugo Josefson and Renas Reda showed us a new way to write tests for Android called Robotium.
The framework aims to remove the pain of writing traditional Android instrumentation tests. When demonstrated it is striking how easy it is to create UI tests using Robotium.
Hugo showed us an example test case written using normal Android instrumentation. He then proceeded to show a similar test written using Robotium. The Robotium test required about a tenth of the code of the traditional Android test. Robotium also removes the need for any awareness of the internals of the application being tested. This makes the framework useful both as a very powerful QA tool and for creating up-front acceptance tests.
Robotium is also fully integrated with Maven through Maven for Android. This makes it easy to get up and running and to automate. It also makes it easy to work in IDEs other than Eclipse.
The one thing I missed was the influence of BDD. I almost exclusively use the given-when-then format when I write tests. I find that it helps increase readability. It also helps me stay focused on the domain I work with, since it is closer to how the customer expresses themselves when discussing features.
I have started to sketch how the main class, Solo, would look given some given-when-then syntactic sugar. The Robotium source code is on GitHub, so I forked it. Feel free to check out my progress. The changes are added in the same manner Mockito has added BDD support: by extending the main framework class. In Robotium's case this means there is now a BDDSolo class that inherits from Solo. I will add methods to it as and when I have time.
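To give a flavour of the direction, here is how a test might read when structured as given-when-then using methods Solo already provides (and which BDDSolo inherits). The note-taking app, the menu items and the texts are hypothetical, and the solo field is assumed to be set up in an ActivityInstrumentationTestCase2 as usual:

```java
public void testShouldSaveANote() {
    // given the add-note screen is open
    solo.clickOnMenuItem("Add note");

    // when a note is entered and saved
    solo.enterText(0, "Buy milk");
    solo.clickOnMenuItem("Save");

    // then the note shows up in the list
    assertTrue(solo.searchText("Buy milk"));
}
```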
Please come with feedback and ideas. What is currently there has been done with a very limited grasp of how Robotium works. I hope I will get a chance to use it soon so I can add some more substance to it. In the meantime I would like to get feedback from others, both on Robotium-specific things and on general phrasing.
Hugo and I talked about how to design some lower-level APIs for Robotium, and there will be a discussion on the Robotium Google group at some point. But in the meantime, I like to discuss APIs by actually designing them, and GitHub makes it easy to maintain such a discussion in code. So feel free to join in.