Automated, test-driven, server-side Javascript with Maven, jsUnit & Sinon.JS

Here’s some background to a problem I faced recently: I have quite a few server-side Javascript scripts which I need to expand and refactor. Having moved from my usual comfort-zone of test-driven Java, I wanted to work in the same style to ensure the quality of the scripts I was writing.

The following describes how I reached a solution, with the many blind alleys and wrong turnings I took omitted for simplicity.

Step 1: Find a Javascript unit-testing library

I needed a library that would work for server-side Javascript. The execution environment of the production code is Rhino, so I needed a compatible unit-testing framework. Unfortunately, what seem like the better Javascript testing frameworks are focused on the problem of client-side multi-browser testing. These frameworks often adopt a server model, allowing tests to be submitted and run on different browsers. Even a headless browser would not match the specific environment (Rhino), so these frameworks were ruled out.

I found a couple of interesting projects that I began to look into: RhinoUnit, and a Maven plugin for running it. In the absence of documentation, I also found a project that used these.

Step 2: Get a simple test to run with Maven

By installing first RhinoUnit, and then the Maven plugin, in my local repository, I was able to incorporate Javascript tests into my build. I did this using the following Maven plugin in my pom.xml:
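The plugin listing has not survived in this copy of the post. As a rough sketch only: the groupId, artifactId and version below are assumptions (check the RhinoUnit Maven plugin's own documentation for the real coordinates); the testSourceDirectory override and the test goal are taken from the surrounding text and the build output shown later.

```xml
<build>
  <plugins>
    <plugin>
      <!-- assumed coordinates: verify against the plugin's own documentation -->
      <groupId>org.rhinounit</groupId>
      <artifactId>rhinounit-maven-plugin</artifactId>
      <version>1.0</version>
      <executions>
        <execution>
          <goals>
            <goal>test</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <!-- override the default src/main/js test location -->
        <testSourceDirectory>src/test/scripts</testSourceDirectory>
      </configuration>
    </plugin>
  </plugins>
</build>
```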


Note: implicit in the Maven plugin are two directories:

  • src/main/scripts, the directory relative to which the includes are applied
  • src/main/js, where the tests reside. This can be overridden, as I have above, using the testSourceDirectory configuration option.

Given the pom.xml configuration above, to run a basic test I need the following…

1: A valid project structure, e.g.

  - pom.xml
  - src/main/scripts/my/tools/datadictionary.lib.js
  - src/test/scripts/DataDictionaryTestSuite.js

2: A valid pom.xml

3: DataDictionaryTestSuite.js contains a test such as:

function DataDictionaryTestSuite() {
}

DataDictionaryTestSuite.prototype.test1 = function test1() {
	assertTrue(true);
};

Then everything should be ready to start automated testing.

Run ‘mvn test’, and you should see output like the following:

[INFO] [rhinounit:test {execution: default}]
> Initializing...
> Done initializing
> Running test "test1"
<testsuite time="0.003" failures="0" errors="0" tests="1" name="DataDictionaryTestSuite">
<testcase time="0" name="test1">
> Done (0.005 seconds)
Tests run: 1, Failures: 0, Errors: 0
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------

Step 3: Write some real tests

The next step was to start writing some useful tests. I wanted to test the ‘delete’ action for a file management API. The API I was writing depended on an underlying API providing the raw functionality; this API was not Javascript, but Java objects injected into Rhino as Javascript placeholders. As such, the underlying API needed to be stubbed. I decided to use a very comprehensive and test-framework-agnostic library for creating spies, mocks and stubs: Sinon.JS. This library made the following tests possible, the first to test successful deletion, the second to test failed deletion (file not found):

DataDictionaryTestSuite.prototype.testDeletion = function testDeletion() {
	root.childByNamePath = sinon.stub().withArgs("/file1").returns(file);
	dataDictionaryDir.removeNode = sinon.spy().withArgs(file);

	var model = new dataDictionaryModel();
	var message = model.deleteFile("file1");

	assertEquals("File 'file1' deleted", message);
	assertTrue("removeNode(file) not called",
		dataDictionaryDir.removeNode.calledWith(file));
};

DataDictionaryTestSuite.prototype.testDeletionOfNonExistantFileReturns404Error = 
		function testDeletionOfNonExistantFileReturns404Error() {
	root.childByNamePath = sinon.stub().withArgs("/file1").returns(null);
	dataDictionaryDir.removeNode = sinon.spy();

	var model = new dataDictionaryModel();

	try {
		model.deleteFile("file1");
		fail("An exception should have been thrown");
	} catch(error) {
		assertEquals(404, error["code"]);
		assertEquals("Unable to delete file 'file1' does not exist",
			error["message"]);
		assertTrue("removeNode(...) should not have been called",
			dataDictionaryDir.removeNode.notCalled);
	}
};

The assert… statements are provided by jsUnit and require no additional configuration. The stubs, spies and verifications are provided by Sinon.JS and are documented on the Sinon.JS website.

I won’t go into the syntax of the tests above in too much detail, but essentially I am stubbing out the underlying API and ensuring the following:

  • A ‘removeNode’ action occurs for a successful deletion
  • A ‘removeNode’ action does not occur if the file could not be found
  • The response messages are relevant to the outcome

To use sinon.js, I found I needed to make the following adjustments:

1: Remove some code from sinon.js that conflicts with Rhino. The following extract is from line 1601 of sinon-1.1.1.js. Remove this code, and several of the associated functions. I am not sure exactly what should be removed, and even less sure of whether I am undermining the sinon library, but removing a few functions worked for me.

sinon.timers = {
    setTimeout: setTimeout,
    clearTimeout: clearTimeout,
    setInterval: setInterval,
    clearInterval: clearInterval,
    Date: Date
};

2: Locate sinon.js in /src/main/scripts and add the following include:


Step 4: Inject global mocks

You’ll notice that all the file API tests above refer to a variable ‘root’. This is a global variable which is injected into the Rhino context on the production system. For the Javascript in the includes directories to run, these global variables need to be present. I found it necessary to create a ‘global-mocks.js’ file in src/main/scripts containing the following:

var root = new Object();

Step 5: Run the tests

A failed test run, in which the following assertion fails:

assertEquals("Unable to delete file 'file1' does not exist", error["message"]);

will give you test output indicating the nature of the failure:

[INFO] [rhinounit:test {execution: default}]
> Initializing...
> Done initializing
> Running test "testDeletionOfNonExistantFileReturns404Error"
testDeletionOfNonExistantFileReturns404Error failed
[object Object]
> Running test "testDeletion"
<testsuite time="0.05" failures="1" errors="0" tests="2" name="DataDictionaryTestSuite">
<testcase time="0.031" name="testDeletionOfNonExistantFileReturns404Error">
<failure type="jsUnitException">
Expected Unable to delete file 'file1' does not exist (string) but was Unable to delete file 'file1' not found (string)
<testcase time="0.015" name="testDeletion">
> Done (0.057 seconds)
Tests run: 2, Failures: 1, Errors: 0

WARNING: There are test failures.

Finally, a successful build…

[INFO] [rhinounit:test {execution: default}]
> Initializing...
> Done initializing
> Running test "testDeletionOfNonExistantFileReturns404Error"
> Running test "testDeletion"
<testsuite time="0.045" failures="0" errors="0" tests="2" name="DataDictionaryTestSuite">
<testcase time="0.027" name="testDeletionOfNonExistantFileReturns404Error">
<testcase time="0.016" name="testDeletion">
> Done (0.055 seconds)
Tests run: 2, Failures: 0, Errors: 0


This was my first attempt at test-driven Javascript, and I’d love to get some feedback if there are cleaner ways to achieve this, or even just some alternatives.

Testing Recipes, Part 2 – Utility methods

From part 1…

Should the testing frameworks or methodologies used influence the design of the implementation code? I’m sure most would respond in the negative, but sometimes this is very hard to avoid. I have been attempting to overcome some of this influence by finding ways to test code that I had previously refactored to make it more ‘testable’. To achieve this, I have been building on the standard Mockito syntax, applying the extensions provided by PowerMock (with Mockito).

Next up, private static utility methods.

When writing a service, as the complexity increases it is useful to extract more services to provide sub-functionality. This approach is commonly the best way to extract inner functionality, but sometimes the functionality is very contained and local and does not warrant a new service. In these scenarios, one will often start to refactor the code to produce utility methods.

Take the following example: I wish to make a service to provide string padding (I have tried to simplify the context of this example, so excuse the unrealistic service). This service is not supposed to be a multi-purpose padding utility, but instead needs to provide a restricted functionality to its users. Here’s the interface:

public interface Padder {

	public String padWithSpaces(String text, int size);

	public String padWithHashes(String text, int size);

	public String padWithDashes(String text, int size);
}

A padder will pad the supplied text to the indicated size using spaces, hashes or dashes. So how do we go about building up the implementation using tests? Notice that the methods provide similar functionality but with varying initial configuration. We could start with the first method and explore all the edge cases with tests:


And then we test the next method, where the first two tests can be ‘copied’.


and again for the final method


Often this creeping addition of virtually identical tests happens as a service is extended over time, and often the duplication is missed.

My suggestion is that when the common functionality is identified through refactoring of the implementation, we address this common functionality at the test level (whilst considering if a separate service might be the cleaner route).

Testing private static utility methods using PowerMock

Let us assume that we intend to refactor the service to provide an internal private static utility method to manage the core functionality, as follows:

public class PadderImpl implements Padder {

	public String padWithSpaces(String text, int size){
		return pad(text, " ", size);
	}

	public String padWithHashes(String text, int size){
		return pad(text, "#", size);
	}

	private static final String pad(String text, String padding, int size) {
		//pad the string with a supplied padding
	}
}
Note that we want to keep the utility method private because we do not intend to expose this service to our users as it would complicate the interface.
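The body of pad() is elided in the listing above. As a standalone sketch, here is an implementation consistent with the edge-case tests in the next listing; the class name PadSketch and the package-private visibility are mine, for illustration only, and this is not necessarily the original implementation:

```java
public class PadSketch {

    //sketch of the elided pad(...) utility, reconstructed from the
    //behaviour the tests assert: pad up to size, truncating any overshoot
    static String pad(String text, String padding, int size) {
        //text already at or beyond the target size is truncated to it
        if (text.length() >= size) {
            return text.substring(0, size);
        }
        StringBuilder padded = new StringBuilder(text);
        //append the padding repeatedly, then trim back to the target size
        while (padded.length() < size) {
            padded.append(padding);
        }
        return padded.substring(0, size);
    }

    public static void main(String[] args) {
        System.out.println(pad("text", "*", 8)); //prints "text****"
        System.out.println(pad("$", "abc", 5));  //prints "$abca"
    }
}
```

Note that truncating after appending is what makes multi-character padding (the "abc" case below) come out right.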

We can now directly test the utility method, which purports to provide a generic padding service. We can use PowerMock’s WhiteboxImpl.invokeMethod() to invoke the private utility method in the tests:

import static org.junit.Assert.assertEquals;
import static org.powermock.reflect.internal.WhiteboxImpl.invokeMethod;

import org.junit.Before;
import org.junit.Test;

public class PadderImplTest {

	private Padder padder;

	@Before
	public void setUp(){
		padder = new PadderImpl();
	}

	@Test
	public void anEmptyStringInputWillReturnAStringOfPaddingWithALengthOfSize() throws Exception {
		String result = invokeMethod(padder, "pad", "", " ", 1);
		assertEquals(" ", result);
	}

	@Test
	public void aSizeLongerThanTheLengthOfTheStringReturnsAPaddedStringUpToTheSize() throws Exception {
		String result = invokeMethod(padder, "pad", "text", "*", 8);
		assertEquals("text****", result);
	}

	@Test
	public void aSizeShorterThanTheLengthOfTheStringReturnsAStringTruncatedToALengthOfSize() throws Exception {
		String result = invokeMethod(padder, "pad", "sandwich", "*", 4);
		assertEquals("sand", result);
	}

	@Test
	public void aSizeEqualToTheLengthOfTheStringReturnsTheSameString() throws Exception {
		String result = invokeMethod(padder, "pad", "sandwich", "*", 8);
		assertEquals("sandwich", result);
	}

	@Test
	public void paddingWithALongStringCausesThePaddingToBeTruncatedIfNecessary() throws Exception {
		String result = invokeMethod(padder, "pad", "$", "abc", 5);
		assertEquals("$abca", result);
	}
}

We now have a padding utility that we can trust as tested within our service. Next we need to verify the behaviour of each of the service methods, which can now be done in terms of interaction with the utility method. Testing our static method requires the test to be altered to run as a PowerMock test, and the class containing the static method to be prepared using PrepareForTest:

import static org.junit.Assert.assertEquals;
import static org.powermock.api.mockito.PowerMockito.mockStatic;
import static org.powermock.api.mockito.PowerMockito.verifyPrivate;
import static org.powermock.reflect.internal.WhiteboxImpl.invokeMethod;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest(PadderImpl.class)
public class PadderImplTest {

We can now add the final tests:

	@Test
	public void padWithSpacesCallsPadWithASpace() throws Exception {
		mockStatic(PadderImpl.class);
		padder.padWithSpaces("text", 10);
		verifyPrivate(PadderImpl.class).invoke("pad", "text", " ", 10);
	}

	@Test
	public void padWithHashesCallsPadWithAHash() throws Exception {
		mockStatic(PadderImpl.class);
		padder.padWithHashes("text", 10);
		verifyPrivate(PadderImpl.class).invoke("pad", "text", "#", 10);
	}

mockStatic() sets up our class for static verification, and using the verifyPrivate method, we can verify that the private static utility method is called correctly.

Adding an additional public method is as simple as describing only its interaction with the utility method, rather than copying the same set of more complex padding tests:

	@Test
	public void padWithDashesCallsPadWithADash() throws Exception {
		mockStatic(PadderImpl.class);
		padder.padWithDashes("text", 10);
		verifyPrivate(PadderImpl.class).invoke("pad", "text", "-", 10);
	}

One of the advantages of this approach is that refactoring to extract a service (where previously we had used a utility method) is much easier, because the tests reflect the service-like behaviour of the utility method rather than the black-box behaviour of the public methods.

There are problems with this approach, however. If one is not careful to treat the utility method as ‘service-like’, then coverage could be compromised and bugs introduced. It is not a technique for testing every internal method, and it could be particularly dangerous if used to test entirely at a method level. Generally speaking, a service should be treated as a black box, but occasionally it pays to compromise and test internal method calls.

Testing Recipes, Part 1 – Self-service

Should the testing frameworks or methodologies used influence the design of the implementation code? I’m sure most would respond in the negative, but sometimes this is very hard to avoid. I have been attempting to overcome some of this influence by finding ways to test code that I had previously refactored to make it more ‘testable’. To achieve this, I have been building on the standard Mockito syntax, applying the extensions provided by PowerMock (with Mockito). This first example uses just Mockito.

The first problem I will address is how to test services that use their own services (‘self-service’). This scenario is best explained by the code below, showing a ‘client’ service that allows the service-user to call methods on the client.

public class Client {

	public void create(){
		//perform action
	}

	public void delete(){
		//perform action
	}

	public void recreate(){
		delete();
		create();
	}
}
Three methods are available: create(), delete() and recreate(). Each method performs an action on the client, but recreate() is different because the action performed makes use of other service actions. More complex scenarios could be presented where a method would call other service methods multiple times, or vary calls according to business logic, but this simple example is suitable to explain a typical use-case.

The testing problem comes when wanting to test the recreate() method. Let’s assume that our client makes calls to a ‘databaseService’ to perform the underlying actions.

Approach 1 – Test the underlying interactions

A simple approach would be as follows:

	@Test
	public void recreateDeletesThenCreates() {
		client.recreate();

		//verify underlying actions
		verify(databaseService).delete();
		verify(databaseService).create();
	}
The problem with this test, I feel, is that we are not testing the functionality of the recreate() method; instead we are treating the Client as a black box and testing how the client interacts with the underlying service. This would be the correct approach for the simpler create() and delete() methods, but we are now open to duplication in our test suite. If the lower-level methods become more complex, then tests for the higher-level methods will increasingly duplicate already-tested functionality. This problem begins immediately, because the line

	verify(databaseService).delete();

will presumably already be present in the test for delete(). If we want to refactor delete() then we have to change two or more tests.

Approach 2 – Extract an ‘extended’ service

Alternatively we can extract an ‘extended’ service which provides only the functionality that is defined in terms of the basic service methods:

public class BasicClient {

	public void create(){
		//perform action
	}

	public void delete(){
		//perform action
	}
}

public class ExtendedClient {

	private BasicClient client;

	public void recreate(){
		client.delete();
		client.create();
	}
}

We can then test recreate() in terms of interactions with the BasicClient (the setInternalState() method is part of Mockito’s Whitebox class):

@RunWith(MockitoJUnitRunner.class)
public class ExtendedClientTest {

	@Mock BasicClient basicClient;

	ExtendedClient extendedClient;

	@Before
	public void setUp(){
		extendedClient = new ExtendedClient();
		setInternalState(extendedClient, "client", basicClient);
	}

	@Test
	public void recreateCallsDeleteAndCreateOnTheBasicClient() {
		extendedClient.recreate();

		verify(basicClient).delete();
		verify(basicClient).create();
	}
}

The problem I see here is that we have let our testing needs influence our implementation. The approach of extracting services to reduce complexity is often the correct approach to writing more testable code. However, if no concept is reflected in the domain and the extended service is extracted purely for the purposes of testing the higher level methods, then extracting the service only adds complexity, and should therefore be avoided.

Approach 3 – Verify interaction between methods using spy

My suggested approach is to take advantage of Mockito’s spy() method. Creating a spy of the Client creates an object that can be treated like a mock (stubbing and method verification are possible), but which retains the original implementation until stubbed.

public class ClientTest {

	Client client;

	@Before
	public void setUp(){
		client = spy(new Client());
	}

	@Test
	public void recreateCallsDeleteThenCreate() {
		client.recreate();
		verify(client).delete();
		verify(client).create();
	}
}
We now have a Client that cleanly represents the service we wish to offer, but at the same time we can test its methods in terms of their interactions. Testing methods in this way is generally best avoided, but our Client presents a counter-example. Using this approach we must be careful to avoid the common pitfall of testing the implementation rather than the behaviour. I feel, however, that this approach allows us to build up interactions between methods inside a class without resulting in an explosion of test complexity.

Even the comments for the spy() method itself warn against its usage:

Real spies should be used carefully and occasionally, for example when dealing with legacy code.

and in general I concur, but the example presented above represents a scenario where attempting to avoid this feature would compromise the design of the system.

These techniques are entirely compatible with a test-driven approach and provide a much cleaner way of adding higher-level methods to services that already provide a set of lower-level methods. I intend to explore testing internal private utility methods later in this series, as it is closely related to this problem.

Thanks to Johan for recommending improvements to the code examples.

Why Android isn’t ready for TDD, and how I tried anyway

Pick a station to find out more...

Over the last few weeks I’ve been attempting to develop a small application for the Android development platform. In particular, I’ve been applying the techniques of test-driven development (TDD) to this project in an attempt to understand whether they are compatible with the platform. But before launching into a more detailed explanation of the problems I’ve faced, I will provide a brief description of the app I have written…

As an avid listener of BBC radio, particularly 6music and Radio 4, I always want to find out about what I’m listening to. I find the text provided by my DAB radio is both limited and slow. Sometimes it says something like “Why not tell us what you’d like to listen to blah blah” (or similar generic message, and not the currently playing track) which takes forever to scroll across my 30 character LCD display. The solution? It’s called ‘BBC Radio What’s On’: on your Android phone you select from a list of available radio stations (see above).

What's on 6music?

Then you are shown a page with information about:

1) What programme is on now
2) What’s on next
3) What were the last 10 tracks played on this station*

*This data is only as accurate as the data provided by the BBC radio teams, which varies from show to show

So, as you can see, a very simple application, but (arguably) very useful. As far as I am aware, no BBC web page provides this data in one place. The nearest to this is the new configurable BBC mobile homepage. This allows the user to select radio stations to appear on the homepage indicating what show is on now, but no information about current tracks.


Most people I have spoken to about Android development have been broadly positive. The speed and ease with which you can create and deploy a basic application on the emulator or a real phone is impressive. The tight integration with Eclipse makes development simple (performance issues aside, see below). Where I really found development problematic was when attempting to continue developing in the way I do when building web applications (i.e. using test-driven development and dependency injection).

Why testing is a problem

A basic unit-testing scenario is that I want to test some code that makes use of a third-party utility library. In my application I wanted to make a service that reads a JSON file from a URL (using my HttpService) and returns an object representing the JSON in a way that reflects the domain. This object is a RadioStation object, which contains meta-data about a particular radio station. The method updateRadioStation(radioStation) takes a radio station and populates its fields based on the JSON. My test, shown below, is trying to capture this functionality:

@Mock HttpService httpService;

public void loadsDataCorrectlyFromJSON() throws IllegalStateException, IOException, JSONException {
    RadioStation radioStation = new RadioStation("x", "service", "radio x");



    assertEquals(radioStation.getCurrentShow().getType(), "On Now:");
    assertEquals(radioStation.getCurrentShow().getTitle(), "Cerys on 6");
    assertEquals(radioStation.getCurrentShow().getSubtitle(), "16/10/2009");
    assertEquals(radioStation.getCurrentShow().getPid(), "b00n9nfv");
    assertEquals(radioStation.getCurrentShow().getSynopsis(),
                 "Music, chat and more archive sessions with Cerys.");
}

Unfortunately, if we run this test as a junit test, we get the following error:

java.lang.RuntimeException: Stub!
    at org.json.JSONObject.<init>(...)
    at daverog.bbcradio.service.RadioStationLoader.updateRadioStation(...:29)
    at daverog.bbcradio.service.RadioStationLoaderTest.loadsDataCorrectlyFromJSON(...:36)

The problem here is that the android.jar supplied with the SDK is stubbed out with no implementation code. The solution that seems to be expected is that you should run your unit tests on the emulator or a real phone. Google have provided all the tools to do this; in fact it is an intrinsic part of the platform. To run a test on the emulator, the Eclipse plugin supplies a new run target ‘Android junit test’ (or simply install the application as normal and all unit tests will be run). This will load up the emulator, load the whole application onto the device, and run the test inside the Dalvik virtual machine*.

Dalvik virtual machine
From Wikipedia, the free encyclopedia
Dalvik is the virtual machine which runs the Java platform on Android mobile devices.
It runs applications which have been converted into a compact Dalvik Executable (.dex) format suitable for systems that are constrained in terms of memory and processor speed.
Dalvik was written by Dan Bornstein, who named it after the fishing village of Dalvík in Eyjafjörður, Iceland, where some of his ancestors lived.

By running on the emulator and against the fully working Dalvik-compiled platform the unit test will succeed. Well, it would succeed, if you’d done the following:

  • Made all tests extend some kind of Android test case class (e.g. AndroidTestCase)
  • Stopped using a mocking framework
  • Downgraded to junit 3
  • Tweaked the imports to match Android’s package changes (e.g. junit.framework instead of org.junit)

and worst of all?

  • Waited at least 10 seconds for even a single simple test to run

The five points above make testing on the emulator a largely impractical process. Testing needs to be simple and responsive, and developers need to be able to leverage powerful matching and mocking frameworks. The rest of this blogpost assumes that the above five points hold true, so I should briefly explain some of the alternatives I tried or thought of that did not work:

1) Import a mocking framework (e.g. mockito) into the project as an additional dependency.

Any imported jars containing class files not compiled to Dalvik bytecode (most of them) will not work. Attempting to compile the source along with your project will not work either, because most libraries make extensive use of parts of the Java platform not available on Dalvik: it uses its own class library built on a subset of the Apache Harmony Java implementation.

2) Obtain a version of the android.jar where the utility libraries are not stubbed.

Tried and failed, but please tell me if you find one.

That the deployment environment (Dalvik) can place restrictions on the way in which code is tested is very frustrating. For the same reason, we also have a restriction on the libraries we can use as part of the application. The Android platform provides a very extensive array of libraries that make it easier to write most applications, particularly those with a web-focus. However, I feel one is missing that is central to creating applications using test-driven development and dependency injection: Spring. The inclusion of a lightweight IOC container would be an excellent addition to the Android platform.

I have one final bugbear that I need to address. It is not important, because it will be possible to fix, but it caused me more frustration than any other part of the development process. I am referring to the performance of Eclipse when using the Android SDK plugin. When Eclipse first loads and you are working with an Android project, the responsiveness is quite acceptable but, for whatever reason, Eclipse increasingly slows. Use of the emulator is particularly detrimental, and after around half an hour of development it can take 10-20 seconds to switch between two file tabs. This is unacceptably slow, and I find myself rebooting Eclipse and the emulator every half hour to alleviate the problem. Moan over.

An approach to TDD on Android

Based on the assumption that running tests on the emulator is too restrictive and slow, I wanted to come up with an approach that at least partially facilitates test-driven development. First, in Eclipse, we create a new Java (not Android) project called [ProjectName]Test, which depends on the main project and holds all the tests and test resources. By having a separate non-Android project we can add resources and code that do not conform to the Android project specification and satisfy the automatic validation. This project is set up to use a standard JRE system library. Now we can add tests to this project that test the classes in the main project.

The first general suggestion I’d make is: keep things simple by trying to make as many services as possible not dependent on the parts of the Android platform that are not compatible with a conventional JVM. This avoids the “Stub!” error I mentioned above. The LastFmRssReader service does not use any Android-specific libraries so it can be tested using whatever testing or mocking frameworks you wish:

public class LastFmRssReader {

    private RssFeedParser parser;

    public LastFmRssReader(RssFeedParser parser) {
        this.parser = parser;
    }

    public List getPlayedTracksForLastFmUser(String lastFmUser) {
        List playedTracks = new ArrayList();
        List messages = parser.parse();

        for(Message message:messages){
            //the message title is assumed to be of the form "artist - title"
            String title = message.getTitle();
            String artist = "";
            if (title.contains(" - ")) {
                artist = title.substring(0, title.indexOf(" - "));
                title = title.substring(title.indexOf(" - ") + 3);
            }
            playedTracks.add(new PlayedTrack(artist, title, message.getDate()));
        }

        return playedTracks;
    }
}

an example test:

@Mock RssFeedParser rssFeedParser;

@Test
public void readerCallsCorrectURLAndExtractsArtistAndTitleFromMessageTitle(){
    LastFmRssReader lastFmRssReader = new LastFmRssReader(rssFeedParser);

    ArrayList messages = new ArrayList();
    Message message = new Message();
    message.setTitle("artist - title");
    messages.add(message);
    when(rssFeedParser.parse()).thenReturn(messages);

    List tracks = lastFmRssReader.getPlayedTracksForLastFmUser("testuser");

    assertEquals(tracks.get(0).getArtist(), "artist");
    assertEquals(tracks.get(0).getTitle(), "title");
}


This approach should work well, particularly for larger applications, where more and more services perform an internal role and external libraries are not required. It’s also the best way to test domain objects, which usually don’t need mocked services. Where this approach fails is where much of the code is either using libraries or parts of the Android API, which is common in small applications (such as ‘BBC Radio What’s On’).

Using the approach above, two problems remain:

  1. Testing parts of the system that interact with the Android API
  2. Testing parts of the system that use common libraries embedded in the Android platform

I am stumped regarding the best way to test code that uses the Android API. I have been reverting to testing on the emulator using Junit 3 and no mocking framework. The android.test.mock package may help with this, but seems to be just a bunch of stubbed out implementations for you to override as you require, so does not offer nearly as much power as a mocking framework.

I have found an unconventional way of testing with the common libraries embedded in the Android platform. This approach is the best example of how awkward testing can be on Android.

If we go back to the test I first used as an example:

@Mock HttpService httpService;

public void loadsDataCorrectlyFromJSON() throws 
        IllegalStateException, IOException, JSONException {
    RadioStation radioStation = new RadioStation("x", "service", "radio x");



    assertEquals(radioStation.getCurrentShow().getType(), "On Now:");
    assertEquals(radioStation.getCurrentShow().getTitle(), "Cerys on 6");
    assertEquals(radioStation.getCurrentShow().getSubtitle(), "16/10/2009");
    assertEquals(radioStation.getCurrentShow().getPid(), "b00n9nfv");
    assertEquals(radioStation.getCurrentShow().getSynopsis(),
        "Music, chat and more archive sessions with Cerys.");
}

This test causes a “Stub!” exception when run as a unit test.

Here’s the implementation I am testing (it’s loading json from a URL and populating a domain object):

import java.io.IOException;

import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

public class RadioStationLoader {

    private HttpService httpService;

    public RadioStationLoader(HttpService httpService) {
        this.httpService = httpService;
    }

    public void updateRadioStation(RadioStation radioStation) throws 
            IllegalStateException, IOException, JSONException{
        String url = HOST + radioStation.getBrandId()
                + "/programmes/schedules" + radioStation.getServiceTypeURLExtension()
                + "/upcoming.json";

        String body = httpService.callUrlAndReturnString(url);

        JSONObject json = new JSONObject(body); //<-- this is where the exception is thrown

        JSONObject schedule = ((JSONObject) json.get("schedule"));

        JSONObject service = ((JSONObject) schedule.get("service"));
        String serviceTitle = service.getString("title");

        JSONObject now = ((JSONObject) schedule.get("now"));
        JSONObject currentBroadcast = ((JSONObject) now.get("broadcast"));
        JSONObject next = ((JSONObject) schedule.get("next"));
        JSONArray broadcasts = ((JSONArray) next.get("broadcasts"));
        JSONObject nextBroadcast = ((JSONObject) broadcasts.getJSONObject(0));

        radioStation.setCurrentShow(extractRadioShowFromBroadcastJson(
            "On Now:", currentBroadcast));
        radioStation.setNextShow(extractRadioShowFromBroadcastJson(
            "On Next:", nextBroadcast));
    }

    private RadioShow extractRadioShowFromBroadcastJson(
            String type, JSONObject broadcastJson) throws JSONException{
        String start = broadcastJson.getString("start");
        String end = broadcastJson.getString("end");
        JSONObject programme = ((JSONObject) broadcastJson.get("programme"));
        JSONObject display_titles = ((JSONObject) programme.get("display_titles"));
        String title = display_titles.getString("title");
        String subtitle = display_titles.getString("subtitle");
        String pid = programme.getString("pid");
        String synopsis = programme.getString("short_synopsis");
        return new RadioShow(type, title, subtitle, start, end, synopsis, pid);
    }
}


If I copy this class into my test project and locate it in the same package, then I can run against this test copy instead. What I want to do now is run the code against an implemented version of the JSON library. If I import the JSON jar (JSON-lib) into my project along with its dependencies, then I can use it instead of the Android stubbed version. Because the package structures of the external library and the Android version are different, I need to switch the import to use the alternative library:

import org.json.JSONObject;
import net.sf.json.JSONObject;

And because Android uses a slightly different version, I need to change the constructor:

JSONObject json = new JSONObject(body);
JSONObject json = JSONObject.fromObject(body);

The class then compiles, and running my test is now successful.

This approach allowed me to quickly find several bugs in the RadioStationLoader before copying the fixes over to the original and deploying to the emulator. It is clearly not a sustainable approach, but it provides a way of developing code in a test-driven, dependency-injected way.

Finally, I needed a way to wire my services together to provide them to the main Activity classes. It’s hard to write applications using inversion of control without somewhere to invert the control to. In Java web applications this would be Spring. In Android, no such framework is available. This seems unfortunate, as Spring (core) is lightweight (in many senses), even more so with a stripped-down alternative like Spring ME. Without such a framework, a developer is limited in leveraging the power of IOC architectures. My crude alternative was to configure my system and inject my dependencies in an ‘application context’ Java class.

client = new DefaultHttpClient();
httpService = new HttpService(client);
radioStationLoader = new RadioStationLoader(httpService);
rssFeedParser = new RssFeedParser();
lastFmRssReader = new LastFmRssReader(rssFeedParser);

public RadioStationLoader getRadioStationLoader(){
    return radioStationLoader;
}

public LastFmRssReader getLastFmRssReader(){
    return lastFmRssReader;
}

The one main recommendation that I would make as a result of this work would be for Google to provide an alternative android.jar that is stubbed as little as possible. This is particularly important where common open-source libraries have been integrated into the platform. This would make unit testing, and therefore test-driven development, much less painful. Oh, and Spring on Android would be nice too.