I’ve recently spent quite a bit of time working with QUnit, a great unit testing framework for JavaScript. While building out a larger test suite, I ran into a few issues, especially when testing code loaded through a dependency manager. In this post I’ll document some of the issues I ran into, some patterns I started using, and a few tricks that helped me get consistent, reliable test results. The main issues were caused by the fact that I was testing modules that were being loaded asynchronously, which meant that in some cases QUnit would miss tests, misplace module labels, and/or report false positives. Note that the async I mention here is not to be confused with QUnit’s native support for asyncTests.

Problem #1: QUnit.done misfiring

// Register a completion callback (old-style assignment API).
QUnit.done = function(qunitReport) {
  console.log('Done!');
};

// Simulate a test that is registered late, e.g. after an
// asynchronous dependency has finished loading.
setTimeout(function() {
  test("helloworld", function() {
    expect(1);
    ok(true);
  });
}, 100);

The timeout here simulates the delay that an asynchronous dependency might create while QUnit is loading up all the tests. In the example above, the QUnit.done callback will fire several times (a similar report was addressed here, but I am still seeing the issue). This is particularly problematic if you are using a test runner to automate your QUnit suite and are waiting on this event to know the suite has completed.
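As a stopgap, you can at least make sure your runner only reacts to the first firing. This is a minimal sketch, assuming the same callback-assignment style as above; the guard flag is my own addition:

var alreadyReported = false;  // guard against repeat firings

QUnit.done = function(qunitReport) {
  if (alreadyReported) { return; }  // ignore everything after the first call
  alreadyReported = true;
  console.log('Done!', qunitReport);
};

This doesn’t fix the underlying problem (late-registered tests still confuse the runner), but it keeps an automated runner from tearing the suite down more than once.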

Problem #2: Missing/Empty test cases

window.setTimeout(function() {
  asyncTest('test1', function() {
    expect(1);
    ok(true);
    start();
  });
  // This test should fail loudly: it expects 100 assertions
  // but makes only one, and that one is a failure.
  test('test2 - Something has gone horribly wrong!!', function() {
    expect(100);
    console.log("Yes, this test ran");
    ok(false);
  });
  test('test3', function() {
    expect(1);
    ok(true);
  });
}, 150);

In this example, the second test (or actually any test between the first and last) will be listed and run, but its results will be ignored. This is particularly dangerous, since such tests are visually reported as passed (yet empty), and the QUnit.done results object will not show any errors. It seems that QUnit collects the list of tests correctly, but something goes wrong during the execution of the individual tests.

Note that the 2nd test has 0 reported asserts, even though the test itself actually ran.
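If you want a safety net for this failure mode, you can flag any test that finishes with zero recorded assertions. A minimal sketch, assuming the callback-assignment style of the QUnit version used here, and that the testDone result object carries the test name and a total assertion count:

QUnit.testDone = function(result) {
  // A test that ran but recorded no assertions is suspect.
  if (result.total === 0) {
    console.warn('Test "' + result.name + '" reported zero assertions!');
  }
};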

Timeouts, and asynchronous behavior in general (independent of actual asyncTests), tend to throw the test runner off. Since I was unit testing async modules, and code that depended on those modules, this was a real problem. What I needed was a way to ensure that the test suite only ran after every test was registered, and I could only register each test once I knew the dependencies it relied on had themselves been loaded.

Make each test file a managed dependency:

// Don't run anything until we say so.
QUnit.config.autostart = false;

// Load the test suite as a loadrunner dependency; only start
// the runner once it (and its dependencies) have loaded.
using('tests/suite1', function() {
  start();
});

For starters, we tell QUnit not to autostart. This allows us to explicitly start the runner when we are certain the test environment is ready. We then rewrite our test suite(s) as loadrunner.js dependencies and load them through using(); once that completes, we are safe to begin. loadrunner.js will deal with all the sub-dependencies that may be required in “tests/suite1”, and will also block until that suite has explicitly reported that it is ready.

provide('tests/suite1', function(exports) {
  asyncTest('test1', function() {
    expect(1);
    ok(true);
    start();
  });
  test('test2 - Something has gone horribly wrong!!', function() {
    expect(100);
    console.log("Yes, this test ran");
    ok(false);
  });
  test('test3', function() {
    expect(1);
    ok(true);
  });
  // Signal to loadrunner that this module has finished registering.
  exports();
});

And when this test suite is run, we see the expected failing test, and a single done event fired. The screenshot below shows the desired output: a big glaring warning telling us that something has gone wrong in our code!

I realize now that my choice of example makes this look bad, but we *want* to see this failing test. In this example I also brought back QUnit.done, which now works correctly, a nice side effect.

I happened to use loadrunner.js for this technique (and solution) because I was developing and testing loadrunner modules. If you are working with require.js, you can find a similar solution here, from the QUnit boards. I’ve put some additional code up on GitHub here, which runs through a few test scenarios and shows how this pattern can be used to create relatively clean, readable test files. I’ve also included a test template for “async-safe” QUnit suites.
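For reference, the require.js flavor of the same idea looks roughly like this. A sketch only; the module paths are placeholders, and I’m assuming a standard AMD setup rather than anything loadrunner-specific:

QUnit.config.autostart = false;

// Each test suite is an AMD module; start the runner only
// after every suite has been fetched and registered.
require(['tests/suite1', 'tests/suite2'], function() {
  QUnit.start();
});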

I personally felt this syntax also reads much nicer once you start including tens, if not hundreds, of test files; a bootstrap with several suites is sketched below. loadrunner will also load regular JavaScript files in the same fashion, so you can mix the two (if you know for certain that there will be no synchronization problems). By treating each test file as a managed dependency, you get better test isolation, less potential for test pollution and side effects, and more control over your test context, with very little footprint. Lastly, it lets us avoid making everything an asyncTest just because a particular test happens to load an async dependency (especially when we are unit testing functionality, not the ability to load things asynchronously).
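To give a feel for the scale, here is roughly what that bootstrap looks like with several suites. A sketch, assuming loadrunner’s using() accepts multiple module names; the suite paths are hypothetical:

QUnit.config.autostart = false;

// Each suite is provide()'d as its own module; the callback
// only fires once all of them have reported ready.
using('tests/suite1', 'tests/suite2', 'tests/suite3', function() {
  start();
});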

Hope this helps out someone down the line! Test safely my friends!
