Mission

Testing is important to us as a community. The Melange community strives to produce a framework that can be used to build useful and reliable web applications to host programs like Google Summer of Code™. That is why testing is an important part of contributing to the Melange project. Testing is also an important complement to code reviews.

Human Testing

We love to have testers visit, try things out, and, in particular, report issues on the issue tracker. It’s fine to raise issues that might be most expediently solved by documentation; they are still issues. If you are reporting an issue, do check first that the issue hasn’t already been reported. This search link lists the most recently raised issues first, and it also includes issues which have been fixed but perhaps not pushed to appspot yet.

Load Testing

We have been doing some preliminary work on load testing Melange on appspot, using a list of pages assembled for that purpose.

Earlier testing led to a change in how sidebars are handled, and things look a lot better now. We're currently not too worried about load.

Automated Testing

The Melange project aspires to apply testing best practices to the development of the SoC framework, always working towards goals such as higher release reliability and increased test coverage. Progress towards these goals makes development practices such as continuous integration possible. To that end, contributors need to adhere to certain testing guidelines.

While we feel that software testing is very important, the Melange community does not want to refuse patches and other contributions from casual developers. To that end, we promise to make the tests themselves reliable and easy to run. We want to encourage every contributor to help us enhance the quality and reliability of the SoC framework and the open source and free software contribution web sites that it will power.

The SoC framework will define some test suites, including smoke tests and golden tests. At this point in time, this document is mostly a TODO list, but it will evolve into the project testing guidelines as these wish list items are implemented.

Test suites

Smoke tests

The SoC framework will include a set of “smoke tests” that are used as a “first line of defense” before code contributions can be submitted for review by other contributors.

As long as the “golden tests” do not take too long to run, they will be included in the smoke test suite. Once the smoke test suite becomes too large, the more esoteric tests will be migrated to the golden test suite.

Running the smoke tests

To run the smoke tests, run this command from the root of the SoC mercurial working copy:

nosetests -v \
 trunk/scripts/tests
 # more test directories will be added to this example later

The -v (--verbose) option shows specifically which tests are being run and can be omitted.

These test modules are deliberately not executable scripts, so that they can use explicit relative imports to “reach back” from the tests/ sub-directory to the module being tested (such relative imports do not work when __name__ == '__main__').
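
As an illustration (the file and module names here are hypothetical, not actual Melange modules), a test module placed under a tests/ package can reach back to a sibling module like this:

  # trunk/scripts/tests/test_example.py (hypothetical file, for illustration only)
  #
  # nose imports this file as part of the tests package (the directories
  # involved contain __init__.py files), so __name__ is not '__main__' and
  # the explicit relative import below can reach back to the code under test.

  import unittest

  from .. import example_script  # hypothetical module living in trunk/scripts/


  class ExampleScriptTest(unittest.TestCase):

    def testModuleIsImportable(self):
      # A trivial check; real tests would exercise example_script's behavior.
      self.assertTrue(hasattr(example_script, '__name__'))

  # There is intentionally no "if __name__ == '__main__':" block; running this
  # file directly as a script would break the relative import above.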

Golden Tests

The golden test suite is a proper superset of the quicker-to-build-and-run smoke test suite used as a precondition for patch code reviews (that is, all of the “smoke tests” are included in the “golden tests” by definition).

For some time, the smoke test suite and the golden test suite are likely to be identical. See the note in the smoke tests section.

Running the golden tests

Instructions for building and running the golden test suite will be added here.

Policies

Nontrivial changes need tests

Any non-trivial change requires an accompanying test, in the same patch as the proposed change. These guidelines need to be followed (a minimal example appears after the list):

  • There should always be a test whenever adding new behavior or changing existing behavior in any way (and the test code should be in the same patch as the code change).
  • Refactorings do not require new tests, so long as the code being refactored is already covered by tests (and those tests pass after the refactoring). Refactorings should make no non-trivial changes to tests, since the tests are instrumental in validating that the refactoring did not break anything.
  • Emergency patches (perhaps quick fixes by the release packager) do not require tests immediately, but a test should be included in a new patch accompanying the follow-up code review.
  • You need to add tests if changing a section of code that does not currently have them.
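
For instance (purely illustrative; the helper function and its test below are hypothetical, not actual Melange code), a patch that adds a small helper would bundle a test for it in the same patch:

  # Hypothetical code change: a new helper function added by the patch.
  def truncate(text, max_length):
    """Returns text unchanged if short enough, otherwise cut to max_length."""
    if len(text) <= max_length:
      return text
    return text[:max_length]


  # Hypothetical accompanying test, added under tests/ in the same patch.
  import unittest

  class TruncateTest(unittest.TestCase):

    def testShortTextIsUnchanged(self):
      self.assertEqual(truncate('abc', 5), 'abc')

    def testLongTextIsCut(self):
      self.assertEqual(truncate('abcdef', 3), 'abc')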

All changes should pass smoke tests before review

All patches should pass the smoke test suite before being submitted for review. So as not to waste other contributors' time, please ensure that the smoke tests pass before requesting a code review of a patch, and again after any update or merge from the main repository that might change the code being reviewed in a development branch.

Never reduce existing code coverage

Please make a habit of collecting code coverage information to determine if the new lines of code in a patch have test coverage (though keep in mind that code coverage alone is not sufficient). Code reviewers and “lurkers” on the developer mailing list should actively ask for tests to be included in each patch, in adherence with the guidelines listed above.

No official release if tests are failing

The SoC framework will not be officially released if the continuous build indicates that any of the “golden tests” are failing. (Before a continuous build is up and running, this will be checked manually by having the release packager run the test suites explicitly.)

No non-deterministic tests

Please do not allow any non-deterministic tests to be added to the test suites, and please fix any tests that are found to be non-deterministic, for example by using mocks to replace the source of non-determinism. Non-deterministic tests are considered second in priority only to actual test failures. This is because official releases are gated by passing the golden test suite, so that suite needs to be trustworthy. Eventually, official releases will be based on a green-light build from the continuous build. A test that passes now but fails later represents a scenario that could also be a failure in the release itself.
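
As a sketch of what this means in practice (the helper below and its injectable clock are hypothetical, not part of the SoC framework), a test that depends on the current time can be made deterministic by substituting a fixed clock for the real one:

  import time
  import unittest


  def seconds_until(deadline, clock=time.time):
    """Hypothetical helper: seconds remaining until a deadline timestamp."""
    return deadline - clock()


  class SecondsUntilTest(unittest.TestCase):

    def testUsesInjectedClockForDeterminism(self):
      # A fixed fake clock stands in for time.time, so the result never
      # depends on when the test happens to run.
      fake_clock = lambda: 1000.0
      self.assertEqual(seconds_until(1060.0, clock=fake_clock), 60.0)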

User Acceptance Testing

Instructions for pre-release building and running of the sample Program web applications, and for testing them manually (or with a tool like Selenium) for User Acceptance Testing (UAT), will be added here.

Testing tools

Continuous build

A medium-term goal is the ability to continuously build and test the HEAD revision in /trunk/ automatically. Tools exist for setting up continuous build and test processes and monitoring those processes. At some point, these will be deployed for the Melange project.

Code test coverage

A long-term goal is the ability to acquire automated test coverage. Tools exist for measuring test coverage of Python code. At some point, these will be incorporated into a Melange continuous build.
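
For example (a minimal sketch, assuming coverage.py and nose are installed; the package name soc and the helper script name are assumptions for illustration), coverage for the existing test directory could be collected like this:

  # measure_coverage.py (hypothetical helper script, not part of the SoC tree)
  #
  # Start coverage measurement with coverage.py, run the smoke test directory
  # with nose, then print a per-module report of untested lines.

  import coverage
  import nose

  cov = coverage.Coverage(source=['soc'])  # 'soc' is an assumed package name
  cov.start()

  # Run the same tests as the smoke test command shown earlier.
  nose.run(argv=['nosetests', '-v', 'trunk/scripts/tests'])

  cov.stop()
  cov.save()
  cov.report(show_missing=True)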


Copyright 2008 Google Inc. This work is licensed under a Creative Commons Attribution 2.5 License.