Integration Testing with TeamCity

Inspired by a recent seminar Ken Schwaber presented at the company I work for, I decided to set up automatic testing of the high-level functionality of application packages built in our continuous integration environment (we use TeamCity).

The Context

Suppose we are working on a stand-alone Java app (let’s call it tc-rec, after the app I have been setting up integration tests for) and have an existing build which generates a zip file containing all the jars and scripts required to run the app. The unit test coverage is decent, but by their nature unit tests do not prove that the built package can actually be executed: a bug in the execution scripts, or in the code performing dependency injection in the live app, would go undetected. To verify that the app is actually runnable as built, we have been downloading the zip built by TeamCity, extracting it to a temporary directory, running through a series of sample input data sets and inspecting the output to see whether it matches what we expected.

Most of this is fairly easy to automate. The approach I chose was to have a script which grabs the built zip, extracts it to a temporary directory, runs a series of test cases, each containing input and expected output, and verifies that the output produced matches the expected output. The integration test scripts are set up as a separate project, so they are not tied to a particular build of the tested app: we can just drop a zip file into the test directory and have it tested. The directory structure is as follows:

/tc-rec
  /src
  /target         <-- the build puts tc-rec-x.y.z.zip in this directory
/tc-rec-integration-tests
  /tests
    /test-suite-1
      /test-case-1
        /input
        /output
      ... (more test suites and cases)
  /target         <-- this directory is created by test scripts
    /test-suite-1
      /test-case-1
      ...
  /work           <-- created by test scripts and deleted at the end
    /app          <-- the app is unzipped to this directory
    /test-suite-1 <-- each test case has its own work area
      /test-case-1
      ...

I decided to script it up in Python because I was already familiar with it and it was already in use in our environment, but anything that can be run from TeamCity (or just a command line, for that matter) would do.
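
For illustration, here is a minimal sketch of what such a runner might look like, assuming the directory layout above. The launch script name (bin/tc-rec.sh) and the byte-for-byte comparison of output files are assumptions of mine; the real scripts also need to take care of reporting and of copying results to the target directory.

import filecmp
import os
import shutil
import subprocess
import zipfile

TESTS_DIR = 'tests'
WORK_DIR = 'work'

def run_all_tests(app_zip):
    # Unzip the built app into its work area.
    app_dir = os.path.join(WORK_DIR, 'app')
    with zipfile.ZipFile(app_zip) as z:
        z.extractall(app_dir)
    failures = 0
    for suite in sorted(os.listdir(TESTS_DIR)):
        for case in sorted(os.listdir(os.path.join(TESTS_DIR, suite))):
            case_dir = os.path.join(TESTS_DIR, suite, case)
            work_dir = os.path.join(WORK_DIR, suite, case)
            # Each test case gets its own work area, seeded with the input data.
            shutil.copytree(os.path.join(case_dir, 'input'), work_dir)
            # 'bin/tc-rec.sh' stands in for the app's real launch script.
            subprocess.call([os.path.join(app_dir, 'bin', 'tc-rec.sh'), work_dir])
            # Compare the produced output with the expected output.
            expected = os.path.join(case_dir, 'output')
            _, mismatch, errors = filecmp.cmpfiles(
                expected, work_dir, os.listdir(expected))
            if mismatch or errors:
                failures += 1
    shutil.rmtree(WORK_DIR)
    return failures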

TeamCity Setup

In TeamCity, tc-rec was set up as a separate build configuration generating a single artifact: tc-rec-x.y.z.zip. I set up a new build configuration, tc-rec-integration-tests, and made it dependent on the zip file generated by tc-rec in such a way that the latest zip is copied to the tc-rec-integration-tests root directory when that build kicks off:

{{:tech:teamcity-inttest-dependencies.png|}}

The //tc-rec-integration-tests// build was also made to run whenever the //tc-rec// build completed, thus verifying that every zip produced by TeamCity passes the integration tests:

{{:tech:teamcity-inttest-triggers.png|}}

Reporting Test Results

Now, after a couple of failed builds and some sorting out of the build agent’s settings and paths, the build should be up and running in TeamCity. The point of the exercise, though, is to have an easy way of checking the health of the latest build. How do we tell TeamCity to fail the tc-rec-integration-tests build if any of the tests fail?

Assuming we are using the command line runner (as I do for running the Python scripts), TeamCity will check the exit code of the executed command and set the build status accordingly. The first step, then, is to make sure that any failure during the execution of the test scripts results in a non-zero exit code. That provides the basic passed/failed indication for the entire integration tests build.
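
In Python, for instance, the main script can simply exit with a non-zero status when any test case failed (run_all_tests refers to the hypothetical runner sketched earlier):

import glob
import sys

# Pick up the zip dropped into the test directory (illustrative pattern).
app_zip = glob.glob('tc-rec-*.zip')[0]
failures = run_all_tests(app_zip)
if failures:
    print('%d test case(s) failed' % failures)
sys.exit(1 if failures else 0)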

Even more helpful would be the ability to tell exactly which test failed and why. Printing diagnostics to stdout/stderr helps: all these messages end up in the build log, displayed in one of TeamCity’s tabs, which can be inspected to determine the reason for a failure. There is a more practical solution, though: the command line runner can be set up to watch for test result files in common formats (Ant’s JUnit, Maven’s Surefire), and nothing prevents us from writing these from the test scripts. I decided on Surefire XML, which can be generated using these templates:

# Fills in, via %-formatting: the suite name, the total number of tests,
# the number of failures, the total time and the concatenated test cases.
REPORT_TEMPLATE = '''
<testsuite name="%s"
           errors="0"
           skipped="0"
           tests="%s"
           failures="%s"
           time="%s">
%s
</testsuite>'''

# Fills in the test case name and its execution time.
SUCCESSFUL_TEST_CASE_TEMPLATE = '''
  <testcase name="%s"
            time="%s"/>
'''

# Fills in the test case name, its execution time and a failure message.
FAILED_TEST_CASE_TEMPLATE = '''
  <testcase name="%s"
            time="%s">
    <failure type="failure">
        %s
    </failure>
  </testcase>
'''
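
For completeness, here is a sketch of how a report file might be produced from these templates; write_report and the shape of its results argument are my own illustration rather than part of the actual scripts:

from xml.sax.saxutils import escape

def write_report(suite_name, results, total_time, report_path):
    # results: a list of (case_name, time_in_seconds, failure_message_or_None)
    cases = []
    failures = 0
    for name, time, failure in results:
        if failure is None:
            cases.append(SUCCESSFUL_TEST_CASE_TEMPLATE % (name, time))
        else:
            failures += 1
            # Escape the message so it cannot break the XML.
            cases.append(FAILED_TEST_CASE_TEMPLATE % (name, time, escape(failure)))
    report = REPORT_TEMPLATE % (suite_name, len(results), failures,
                                total_time, ''.join(cases))
    with open(report_path, 'w') as f:
        f.write(report)

One such file is written per test suite, to a location the command line runner is configured to watch for test results.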

With test report files in place, TeamCity will display a nice summary and track the tests’ performance over subsequent builds, just as it does with standard JUnit tests:

{{:tech:teamcity-inttest-testresults.png|}}

Viewing the Output

One final useful thing is the ability to visually inspect the output generated by the application during test runs. TeamCity makes this easy through its artifact publishing mechanism. After adding the path to the target directory to the artifact paths:

{{:tech:teamcity-inttest-artifactcfg.png|}}

we can browse the output directly from TeamCity’s web page.

{{:tech:teamcity-inttest-artifacts.png|}}

Now we are all set for quick feedback on the build health. All that remains is writing some meaningful integration test cases.