This document contains information about using the Chimera test suite, including the writing of tests, and the test suite's image comparison capabilities. It is divided into several sections:
- `CHIMERA/test/`: root directory for the Chimera testing framework
- `CHIMERA/test/README`: [this file!] contains instructions on the Chimera testing structure, how to run the tests, and how to add new tests
- `CHIMERA/test/tools/`: directory containing several scripts and modules that manage and run tests
- `CHIMERA/test/tools/tester.py`: runs the development version of Chimera (chimera-build/PLATFORM/build/chimera) on specified platforms with specified tests, and posts results on the Chimera testsuite intranet site
- `CHIMERA/test/tools/generateTestFile.py`: generates a file that tells Chimera which tests to run, and prints out the location of that file; this script can also be used to run the development version of Chimera, but produces only a raw output file
- `CHIMERA/test/tools/TestUtils.py`: contains several functions and classes that test writers may find helpful when writing their tests
- `CHIMERA/test/pytests/`: directory containing all the unit tests that can be run by the testing framework; passing an argument of `all` to the tester script will run all of these tests
- `CHIMERA/test/data/`: directory containing data files used by multiple tests
- `CHIMERA/test/images/`: directory containing images generated by the test suite; each platform has its own subdirectory within this directory
There are several 'layers' of testing involved in the Chimera test suite. This provides flexibility for arbitrarily grouping tests to exercise complex usage patterns in Chimera. Here is an overview:
```
TestGroup
    |
    |  contains one or many
    |
    +--> TestCase
            |
            |  contains one or many
            |
            +--> unit test
```
The most basic unit of testing is the unit test itself. A unit test is a self-contained entity: it should test one very specific functional unit of a program, then check that whatever you would expect to happen as a result actually did happen. Unit tests cannot live on their own, however; they must exist as part of a TestCase.
A TestCase class can contain one or many unit tests, in addition to any
housekeeping code that deals with setting up before, and cleaning up after,
the test has run (or before/after each test has run, if there are several).
For example, here's a pseudocode representation of a TestCase:
```
class ExampleTestCase(TestCase):

    def setUp():
        """This method is run before the actual test ('runTest') is run.
        It 'sets the stage' for the test. In this example, that includes
        opening the model 3fx2.pdb.
        """
        open 3fx2.pdb

    def runTest():
        """The actual test. This method comprises the actual content
        of the test.
        """
        doMidasCommand('repr stick')
        for each atom in 3fx2:
            if atom is not stick:
                TEST FAILS!!!!
        TEST SUCCEEDS!

    def tearDown():
        """This method performs any necessary cleanup after the test has
        completed. In this example, it closes the molecule that was
        opened in the 'setUp' method.
        """
        close 3fx2.pdb
```

A TestCase can have `setUp` and `tearDown` methods, which execute before and after (respectively) each test in the TestCase is run. So, if the TestCase has multiple `testWhatever` methods, `setUp` and `tearDown` will run before and after each `testWhatever` method runs.
The above TestCase has only one unit test, which is (and must be) contained in a method named `runTest`. However, it is also possible for a TestCase to have more than one unit test. In this case, the TestCase should not have a `runTest` method, but instead one or more methods whose names start with the string `test`. The above example TestCase, rewritten with several unit tests, would look like:
```
class Example2TestCase(TestCase):

    def setUp():
        """This method is run before each test method is run.
        It 'sets the stage' for the test. In this example, that includes
        opening the model 3fx2.pdb.
        """
        open 3fx2.pdb

    def testStickRepr():
        """An actual test. Tests Chimera's stick atom representation."""
        doMidasCommand('repr stick')
        for each atom in 3fx2:
            if atom is not stick:
                TEST FAILS!!!!
        TEST SUCCEEDS!

    def testAtomColor():
        """Tests the color command."""
        doMidasCommand('color blue')
        for each atom in 3fx2:
            if atom is not blue:
                TEST FAILS!!!!
        TEST SUCCEEDS!

    def testUndisp():
        """Tests the ~disp command."""
        doMidasCommand('~disp')
        for each atom in 3fx2:
            if atom is not hidden:
                TEST FAILS!!!!
        TEST SUCCEEDS!

    def tearDown():
        """This method performs any necessary cleanup after each test has
        completed. In this example, it closes the molecule that was opened
        in the 'setUp' method.
        """
        close 3fx2.pdb
```
Test failure or success can be determined in several ways. If the test is testing for a particular value of something, you can use one of several assertions, which will cause a test to fail if the required conditions are not met (or if they are, depending on the assertion). Complete details on using these assertions can be found in the Python unittest documentation (or see any of the test files in CHIMERA/test/pytests/ for examples of these assertion statements).

A test is considered *failed* if one of the assert statements in the test fails; i.e., if what you expect to happen doesn't happen, the test has failed. If an uncaught exception is raised during the execution of a test, the test is considered to have encountered an *error*; i.e., if something breaks that you didn't expect to break, it is considered an error. An error usually means that there is something wrong with the test, not with Chimera. If a test completes without failing any assert statements and without raising any uncaught exceptions, it is considered *passed*.
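A few of the assertion methods provided by the standard unittest module, for illustration (the `mol` and `atom` objects here are hypothetical, not part of the framework):

```python
# Inside a TestCase method:
self.assertEqual(len(mol.atoms), 1954)        # fails if the two values differ
self.assert_(atom.display)                    # fails if the expression is false
self.assertRaises(ValueError, int, "xyzzy")   # fails unless ValueError is raised
```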
The file TestUtils.py has a class called `Open3fx2TestCase`, which defines `setUp` and `tearDown` methods to manage the opening and closing of the 3fx2 molecule. It can be used as a superclass by implementing a `runTest` method, or several `testSomething` methods.
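As a rough sketch of what that might look like in real code (this assumes TestUtils is importable from the test module, and uses `chimera.runCommand` in place of the pseudocode's `doMidasCommand`; the attribute check is illustrative):

```python
import chimera
from TestUtils import Open3fx2TestCase

class StickReprTestCase(Open3fx2TestCase):
    # Open3fx2TestCase's setUp/tearDown open and close 3fx2 around each test

    def runTest(self):
        chimera.runCommand("repr stick")
        for mol in chimera.openModels.list(modelTypes=[chimera.Molecule]):
            for atom in mol.atoms:
                # in stick representation, atoms are drawn as bond end caps
                self.assertEqual(atom.drawMode, chimera.Atom.EndCap)
```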
TestCases can be grouped into a TestGroup. A TestGroup contains one or more TestCases:
```
class ExampleTestGroup:

    def setUpOnce():
        ## perform some set up before the tests are run

    def tearDownOnce():
        ## clean up after all the tests are run

    class ExampleTC1(TestCase):
        def setUp():
            ## set up specific to this test
        def runTest():
            ## do some stuff here
        def tearDown():
            ## clean up specific to this test

    class ExampleTC2(TestCase):
        def setUp():
            ## set up specific to this test
        def runTest():
            ## do other stuff here
        def tearDown():
            ## clean up specific to this test
```
A TestGroup can have methods named `setUpOnce` and `tearDownOnce`, which are run one time, before and after (respectively) *all* tests are run. So in the above example, the order of execution would be:

```
ExampleTestGroup.setUpOnce,
ExampleTC1.setUp, ExampleTC1.runTest, ExampleTC1.tearDown,
ExampleTC2.setUp, ExampleTC2.runTest, ExampleTC2.tearDown,
ExampleTestGroup.tearDownOnce
```
The file TestUtils.py provides a class called `Open3fx2TestGroup`, which you can use as a superclass when writing your own tests. It provides `setUpOnce` and `tearDownOnce` methods that manage the opening and closing of the 3fx2.pdb molecule. This enables you to simulate performing multiple actions on a model, while testing for a condition at each intermediate step; a sketch follows.
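A hedged sketch of that pattern (again substituting `chimera.runCommand` for the pseudocode's `doMidasCommand`; the nested-TestCase layout follows the TestGroup structure shown above):

```python
import unittest
import chimera
from TestUtils import Open3fx2TestGroup

class ReprThenHideGroup(Open3fx2TestGroup):
    # setUpOnce/tearDownOnce from the superclass open and close 3fx2
    # once around the whole group, so state carries between TestCases.

    class StickTC(unittest.TestCase):
        def runTest(self):
            chimera.runCommand("repr stick")
            for mol in chimera.openModels.list(modelTypes=[chimera.Molecule]):
                for atom in mol.atoms:
                    self.assertEqual(atom.drawMode, chimera.Atom.EndCap)

    class UndispTC(unittest.TestCase):
        def runTest(self):
            chimera.runCommand("~disp")
            for mol in chimera.openModels.list(modelTypes=[chimera.Molecule]):
                for atom in mol.atoms:
                    self.assert_(not atom.display)
```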
In order to add a test to the test suite, place your test module (or package) in the CHIMERA/test/pytests/ directory, with a name starting with the prefix `pyt_`, and give it a module-level attribute named `RUNME` (if you are making a package, this attribute should go in the `__init__.py` file of the package). This should be set to True if the test is to be run, or False if it is to be left out; it provides an easy way of turning certain tests on and off. See CHIMERA/test/pytests/pyt_MidasCommands.py for a documented template, or the minimal skeleton below.
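A minimal module skeleton might look like this (illustrative only; pyt_MidasCommands.py is the real, documented template):

```python
# CHIMERA/test/pytests/pyt_Example.py
import unittest

RUNME = True   # set to False to have the test suite skip this module

class ExampleTestCase(unittest.TestCase):
    def runTest(self):
        self.assertEqual(1 + 1, 2)
```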
There are several notions of test ordering. Tests can be run in the order they are given, or in a random order (at the module level) via the `-rand` option to the generateTestFile.py script. A test module can also set an attribute such as `NTH=1` to influence its position in the run order.
There are three main ways to run the test suite:
The test suite extension can be used to run the test suite from within Chimera. Code for the extension is checked into the Chimera CVS tree at chimera/test/TestSuiteExt. In order for Chimera to pick up this extension, you must put the directory above TestSuiteExt (i.e. chimera/test) on your Chimera extension search path. This will add the item Tools/Utilities/Run Test Suite to the Chimera menu system.
Invoking the test suite extension brings up a window with a list of tests on the left side and an empty output log on the right side. This list is populated by looking for all the files/directories in chimera/test/pytests that have the prefix `pyt_`. Clicking on any of the tests in the list will display a description of that test in the lower left-hand corner of the window. Multiple tests can be selected by Shift- or Ctrl-clicking.
Pressing Enter or clicking the Run button will run the selected tests and display output on the right-hand side of the dialog. Choosing the list item labeled `all` will run all test suite tests that have their RUNME attribute set to True (see the section on adding tests, above, for more information about the RUNME attribute). Clear output can be used to clear the contents of the output log, and Save can be used to save it to a file.
The test suite extension is the easiest way to run just a couple tests, or debug new unit tests as you write them.
The purpose of the generateTestFile.py script is to produce a Python file (a 'test file') that can be passed in to Chimera, which will run a series of tests (defined by the input options to generateTestFile.py) and produce an output file containing results from the tests that were run. Actually, when run in Chimera, the 'test files' generated by this script produce three different output files, all of which are named according to the supplied output file name (for example, according to an argument of test.out to the `-o` option).
The standard usage scenario for this script is that you have some installation of Chimera on your machine, let's say in /tmp/dan/chimera-NEW, and you want to run the test suite against it. This can be done by running Chimera and using the generateTestFile.py script to generate an input file. generateTestFile.py outputs the path to the generated test file on standard out; you can use backquotes (`) to capture the output of the script (i.e. the path to the test file) and pass it in to Chimera:

    /tmp/dan/chimera-NEW/bin/chimera `./generateTestFile.py -t all -o results.out`
This tells generateTestFile.py to make a Python file that has all the code necessary to run the test suite on all the tests and write the output to a file named results.out in the current directory. The path to this test file is printed to standard out; then Chimera is invoked with this file (actually a Python file) as input.
USAGE: generateTestFile.py [-r] [-p platform_directory] [-rand] -o output_file -t test_list

DESCRIPTION:

  -r    Don't just generate a test file; actually run Chimera against the
        specified tests. The location of the Chimera that is actually run
        depends on the -p option, if it is specified. If the -p option is
        not given, the following mapping determines which Chimera is run,
        based on the system the generateTestFile.py script is run on:

            /usr/local/src/staff/chimera-build/Darwin-X11/build -> Mac OS X
            /usr/local/src/staff/chimera-build/Linux-X11/build  -> Linux
            /usr/local/src/staff/chimera-build/IRIX-X11/build   -> SGI IRIX
            /usr/local/src/staff/chimera-build/OSF1-X11/build   -> Alpha Tru64

        If the -r option is not specified, the script will just print out
        the path of the generated test input file. This file can then be
        passed to Chimera (it's just a Python file) to run the specified
        tests with all the desired options.

  -p platform_directory
        Specify which build of Chimera to run. The argument must correspond
        to the name of one of the directories in
        /usr/local/src/staff/chimera-build/. Currently this is one of:
        Darwin-X11, IRIX-X11, Linux-X11, OSF1-X11, RedHat8-X11. If this
        option is not specified, a directory will be chosen based on the
        content of Python's sys.platform. This option is really only useful
        for specifying which Linux build to run (Linux-X11 is the
        distribution built on Red Hat 7.1, RedHat8-X11 the distribution
        built on Red Hat 8.0) if you're on a Linux machine.

  -rand
        Run tests in a random order (module level).

  -o output_file
        Write test results to output_file.

  -t test_list
        A whitespace-separated list of tests to run, or `all` to run all
        tests whose RUNME attribute is set to True. This script determines
        which files contain viable tests by looking in the test directory
        (chimera/test/pytests) for modules or packages whose names start
        with the prefix 'pyt_'. If a list of test names is given, elements
        of the list correspond to names of modules or packages in the
        CHIMERA/test/pytests directory (minus the .py suffix).

Example Usage:

    % ./generateTestFile.py -rand -o test.output -t all

generates a test file that runs all test modules in a random order against the corresponding build of Chimera; output goes to a file 'test.output'.

    % /usr/local/chimera `./generateTestFile.py -o test.output -t pyt_MidasCommands pyt_AtomTrigger`

runs /usr/local/chimera, running only the tests contained in the files pyt_MidasCommands.py and pyt_AtomTrigger.py. Output goes to test.output. Notice the `backticks` around the generateTestFile.py command and its arguments: the script prints the location of the test file to standard out.
Note: make sure that you are in the chimera/test/tools
directory when you run this script.
Otherwise, bad things will happen!
tester.py is a slightly higher-level tool than generateTestFile.py: in addition to generating a test file or running Chimera against the specified tests, the tester.py script post-processes the test output and drops the results into several nicely formatted files on the Chimera testsuite intranet site. This script should be used whenever you want to test the development build of Chimera (chimera-build/PLATFORM/build/chimera). Note that this script must be run from socrates to function correctly. There is an option (see below) to specify on which X server to display the Chimera window (this would most likely be the desktop machine from which you logged into socrates).
USAGE: tester.py -p platform_list -t test_list -d display_host

DESCRIPTION:

  -p platform_list
        A whitespace-separated list that specifies the platforms on which
        to run the test suite against the development version
        (chimera-build/PLATFORM/build/chimera) of Chimera. platform_list
        can be one or more of the following: Darwin-X11, IRIX-X11,
        Linux-X11, OSF1-X11, or RedHat8-X11.

  -t test_list
        A whitespace-separated list of tests to run (actually the name of
        the containing file, minus the .py suffix, e.g. pyt_MidasCommands),
        or 'all' to run all the tests (those which have the RUNME attribute
        set to True).

  -d display_host
        The name of the host running the X server on which Chimera should
        be displayed, e.g. tolkien.cgl.ucsf.edu
Caveats for using tester.py: the script SSHes into each test host, so you may want to set up password-less logins via the HOME/.ssh/authorized_keys mechanism. However, this is not mission-critical; just be prepared to enter your password into the command line twice for each host you're testing. You may also need to run

    % xhost +client_host

on the server host, where client_host is the name of the host you will be testing Chimera on.
Example Usage:

    % ./tester.py -p IRIX-X11 -t pyt_AtomSpec -d tolkien.cgl.ucsf.edu

will run the development build of Chimera on IRIX against the Atom Specification unit test, and display the Chimera window on tolkien's X server. The output of this test will be posted on the Chimera test suite intranet site.
Note: make sure that you are in the chimera/test/tools
directory when you run this script.
Otherwise, bad things will happen!
The mechanism by which tester.py generates test output and images on the host machine, copies everything over to socrates, and finally updates the intranet/chimera-testsuite/index.html page is pretty convoluted; a brief explanation is in order. The following procedure is repeated for each platform passed to the tester.py script (with the -p option). This platform will be referred to in this discussion as the target platform.
| argument to -p option | host name |
|---|---|
| OSF1-X11 | socrates.cgl.ucsf.edu |
| Linux-X11 | tolkien.cgl.ucsf.edu |
| RedHat8-X11 | tolkien.cgl.ucsf.edu |
| IRIX-X11 | spinoza.cgl.ucsf.edu |
| Darwin-X11* | austin.cgl.ucsf.edu |
| Windows-WGL* | buckley.cgl.ucsf.edu |
Note: platforms with an (*) don't really work in this automated framework and must be run manually (but it would be nice if they did).
The tester.py script SSHes into the designated host, with an ssh command like:

    ssh -X (USER)@(HOST) env DISPLAY=(DISPHOST):0

where (USER) is the username of whoever is running this script, (HOST) is the host that corresponds to the given platform argument, and (DISPHOST) is the argument supplied to tester.py for the -d option (which dictates on what host Chimera should display). So you could theoretically be logged into socrates, run Chimera on host A, but direct it to display on host B.
While on the target host, it changes directory to
/usr/local/src/staff/(USER)/chimera/test/tools
and issues the following command:
./generateTestFile.py -r -p (PLATFORM) -o /tmp/out.(PLATFORM) -t (TESTS)
This invocation of the generateTestFile.py script generates a test file containing all the tests passed in to tester.py (this list replaces (TESTS) in the above example), and runs the platform-specific development version of Chimera (note the -r option) with this test file as an argument. The output of the tests is written to /tmp/out.(PLATFORM).

Next, the tester.py script makes a new directory, socrates:/usr/local/src/www/cgl/secure/datadocs/chimera-testsuite/(PLATFORM)-results/(DATESTAMP), where (DATESTAMP) is named according to the time the tests were run. For example, if the tests were run on June 16th, 2005 at 1:27 pm (and 12 seconds), (DATESTAMP) would be 20050616132712.
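Putting those pieces together, the per-platform procedure amounts to roughly the following (a hypothetical sketch: the real logic lives in tester.py, and the function and variable names here are illustrative):

```python
import subprocess

# platform -> host mapping, from the table above
HOSTS = {
    "OSF1-X11":    "socrates.cgl.ucsf.edu",
    "Linux-X11":   "tolkien.cgl.ucsf.edu",
    "RedHat8-X11": "tolkien.cgl.ucsf.edu",
    "IRIX-X11":    "spinoza.cgl.ucsf.edu",
}

def run_platform(platform, tests, disphost, user):
    # ssh to the target host, point DISPLAY at the -d host, then run
    # generateTestFile.py with -r so Chimera is actually executed there
    remote = ("cd /usr/local/src/staff/%s/chimera/test/tools && "
              "./generateTestFile.py -r -p %s -o /tmp/out.%s -t %s"
              % (user, platform, platform, " ".join(tests)))
    cmd = ("ssh -X %s@%s env DISPLAY=%s:0 sh -c '%s'"
           % (user, HOSTS[platform], disphost, remote))
    subprocess.call(cmd, shell=True)
```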
Once the above procedure has been repeated for all the platforms passed in to the tester.py script, the Chimera test suite index.html file is updated to reflect the newly run tests: for each platform that was included in the most recent test run, a link is added to the new results (the chimera-testsuite/(PLATFORM)-results/(DATESTAMP) directory).

The last step is running the image comparison scripts. Some of the tests in the test suite actually save images for later comparison, e.g. to test that the ribbon-rendering code renders ribbons correctly. These images are by default saved to the directory /usr/local/src/staff/(USER)/chimera/test/images/(PLATFORM). The script imgcmp.py is given a list of directories, one for each of the platforms that were included in this test suite run, and figures out which filenames are present in all of these directories. The other input into this script is where it should write the results (output) of the image comparison; the following path is given:

    socrates:/usr/local/src/www/cgl/secure/datadocs/chimera-testsuite/image-comparison/index.html

This page will contain a table that compares each image produced by the test suite on each platform with the same image on all the other platforms included in this run. It is accessible from the main Chimera test suite index page (see the next section for more information on image comparison).
The TestUtils.snapshot function - generating images

TestUtils.py provides a function, snapshot, that can be used to capture an image of Chimera's graphics window at any time during a test:

    TestUtils.snapshot(filename="large_match.png", width=512, height=512, format="PNG", supersample=3)

It takes the following arguments:

    filename    - name of the file to save the image to (required)
    width       - width of the image to save (default = 512 pixels)
    height      - height of the image to save (default = 512 pixels)
    format      - image format; can be any format Chimera knows how to save (default = "PNG")
    supersample - how much to supersample when saving the image (default = 3)
It is also possible to set the environment variable CHIMERA_IMAGE_DIR to specify what directory the tests should save images to. This directory takes precedence over the directory specified in the filename argument, although the actual file *name* (i.e. as returned by os.path.basename()) will be maintained. So if the CHIMERA_IMAGE_DIR environment variable is set to /home/dan/chimera-images, and the filename argument passed to the snapshot function is /tmp/chimera-ts-images/small_mol.png, snapshot will actually save the file to /home/dan/chimera-images/small_mol.png. This facilitates maintaining several sets of images without having to change the path in all the occurrences of the snapshot function.
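That precedence rule amounts to something like the following (a sketch only; the real logic lives inside TestUtils.snapshot, and resolve_image_path is a hypothetical name):

```python
import os

def resolve_image_path(filename):
    """Apply the CHIMERA_IMAGE_DIR precedence rule described above."""
    img_dir = os.environ.get("CHIMERA_IMAGE_DIR")
    if img_dir:
        # the environment directory wins, but the base file name from
        # the original 'filename' argument is kept
        return os.path.join(img_dir, os.path.basename(filename))
    return filename

# e.g. with CHIMERA_IMAGE_DIR=/home/dan/chimera-images:
#   resolve_image_path("/tmp/chimera-ts-images/small_mol.png")
#   -> "/home/dan/chimera-images/small_mol.png"
```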
TestUtils.py also provides a function, my_image_directory, which returns the path to the 'standard' image comparison file-save location:

    CHIMERA/test/images/PLATFORM

It can be used with the snapshot function to save images into the 'standard' image comparison file-save location:

    TestUtils.snapshot( os.path.join(TestUtils.my_image_directory(), 'small_mol.png') )

Note that if this is called from one of the Linux builds, my_image_directory() returns CHIMERA/test/images/linux2 for the Linux-X11 build, and CHIMERA/test/images/redhat8 for the RedHat8-X11 build.
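In other words, my_image_directory() behaves roughly like this (a hypothetical sketch: the real function is in TestUtils.py, the CHIMERA environment variable is assumed to point at the tree root, and the RedHat8-X11 detection is omitted):

```python
import os
import sys

def my_image_directory():
    # sys.platform gives e.g. 'linux2', 'irix6', or 'win32'; the
    # RedHat8-X11 build uses 'redhat8' instead (detection not shown)
    return os.path.join(os.environ["CHIMERA"], "test", "images", sys.platform)
```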
The CHIMERA/test/tools/imgcmp.py script is used to compare images saved during the test suite, and then generate several report files containing different views of the comparison results.

USAGE: imgcmp.py output_file directory1 directory2 ... directoryN

DESCRIPTION:

  output_file
        This file will contain the main 'image comparison by platform'
        HTML page.

  directory1 directory2 ... directoryN
        List of directories from which to compare images. The script will
        only compare those files which occur in *all* the directories
        given.

Example Usage:

    % ./imgcmp.py /usr/tmp/dan/testsuite-comp/comp_results.html /usr/tmp/dan/testsuite-imgs/{windows_imgs,linux_imgs,irix_imgs}

This will find all similarly named files in the three directories and run the comparison on them, writing the results to the file /usr/tmp/dan/testsuite-comp/comp_results.html. For example, if there is a file named molecule.png in all three directories, it will be included in the comparison. However, another file named mol_surf.png that occurs in only two of the named directories would not be included in the comparison results.
The 'main' view (i.e. the file that gets written to the output_file
argument to the script)
shows image comparison results for each file, for each combination of platforms.
Assuming that imgcmp.py was used to compare the three files large_color.png, small_color.png,
and session.png across three platforms linux2, irix6, and win32, the output would look like:
| File | win32 / irix6 | win32 / linux2 | irix6 / linux2 |
|---|---|---|---|
| large_color.png | view images R:0.35±0.7 G:0.35±0.71 B:0.36±0.7 | view images R:0.51±0.78 G:0.48±0.79 B:0.51±0.8 | view images R:0.22±0.47 G:0.19±0.45 B:0.22±0.47 |
| session.png | view images R:7.1±28 G:9.6±34 B:7.2±28 | view images R:7.3±25 G:7.8±26 B:7.5±25 | view images R:10±31 G:12±35 B:10±31 |
| small_color.png | view images R:0.23±0.57 G:0.23±0.58 B:0.23±0.57 | view images R:0.34±0.67 G:0.31±0.66 B:0.34±0.67 | view images R:0.15±0.4 G:0.12±0.38 B:0.14±0.4 |
Each file name on the left is linked to another page that has a cross-platform comparison. The page
for large_color.png has a table like this:
| Platform | win32 | irix6 | linux2 |
|---|---|---|---|
| win32 | R:0±0 G:0±0 B:0±0 | R:0.35±0.7 G:0.35±0.71 B:0.36±0.7 | R:0.51±0.78 G:0.48±0.79 B:0.51±0.8 |
| irix6 | R:0.35±0.7 G:0.35±0.71 B:0.36±0.7 | R:0±0 G:0±0 B:0±0 | R:0.22±0.47 G:0.19±0.45 B:0.22±0.47 |
| linux2 | R:0.51±0.78 G:0.48±0.79 B:0.51±0.8 | R:0.22±0.47 G:0.19±0.45 B:0.22±0.47 | R:0±0 G:0±0 B:0±0 |
Finally, each cell in the table is linked to a page showing a side-by-side comparison of the two images being compared in that cell. The 'view images' link in the large_color.png cell for the win32/irix6 comparison leads to a page like this:
Image 'large_color' on win32, generated 2004-05-14 15:14:53 [image]

Image 'large_color' on irix6, generated 2004-05-14 15:14:53 [image]
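The R/G/B numbers in these tables are per-channel statistics of the pixel-wise differences between the two images, apparently mean ± standard deviation (note the 0±0 diagonal in the per-image table). A hedged sketch of that computation, assuming the Python Imaging Library (PIL) is available:

```python
import math
from PIL import Image

def channel_stats(path1, path2):
    """Mean and standard deviation of per-channel pixel differences."""
    px1 = list(Image.open(path1).convert("RGB").getdata())
    px2 = list(Image.open(path2).convert("RGB").getdata())
    stats = []
    for c in range(3):  # 0 = R, 1 = G, 2 = B
        diffs = [abs(a[c] - b[c]) for a, b in zip(px1, px2)]
        mean = sum(diffs) / float(len(diffs))
        std = math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))
        stats.append((mean, std))
    return stats  # [(Rmean, Rstd), (Gmean, Gstd), (Bmean, Bstd)]
```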
imgcmp.py writes its results to the following locations:

    output_file
        Passed in as an argument to imgcmp.py. This file will contain the
        top-level table showing image comparison results for each file, in
        all combinations of platform pairs.

    OUTPUT_DIR/per-image/
        This directory will contain one HTML file for each image included
        in the comparison. Each of these files contains an HTML table
        showing comparison results for that image.

    OUTPUT_DIR/side-by-side/
        Side-by-side image comparisons. Each file shows the same image from
        two different platforms.

    OUTPUT_DIR/images/
        This directory will contain a subdirectory for each platform in the
        comparison (i.e. each directory passed as an argument to the
        imgcmp.py script). In each of these subdirectories will be a
        symbolic link to the actual location of the image file on the
        filesystem. HTML files in the side-by-side directory use these
        links to display the image files.