Tests

Running tests

We currently have several coexisting test mechanisms:

  • Traditional shell script tests using TEST scripts; these are being phased out
  • Perl scripts; these have been phased out
  • Runtest-based scripts, see http://runtest.readthedocs.org
  • CTest

CTest wraps around runtest: runtest drives the individual tests, while CTest ties them all together into a test runner. The preferred way to run tests is through CTest inside the build directory.

You can run all tests:

$ ctest

or run all tests in parallel:

$ ctest -jN

You can match test names:

$ ctest -R somename

or labels:

$ ctest -L essential

To see all options, type:

$ man ctest

To see all labels, browse cmake/Tests(LS)DALTON.cmake.

You can also run tests individually. For this, execute the individual Python test scripts and point them to the correct build directory (try ./test -h to see all options).

Warning

We should describe here which tests we require to pass before pushing anything to master.

Writing tests

Have a look here: http://runtest.readthedocs.org.

Also have a look at other test scripts for inspiration; you have full Python freedom there. The important part is the return code: zero means success, non-zero means failure.

To make a new test you have to create a folder, named after the test, and put a mol and a dal file in that folder. You also have to include a reference output in a sub-folder “result/”. Then you can copy a “test” script from one of the other test directories and modify it according to your needs.
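As an illustration, a minimal “test” script might look roughly like the sketch below. This assumes the runtest version 2 interface documented at http://runtest.readthedocs.org and a runtest_config module shipped with the sources; the filter string, tolerance, and file names are placeholders, so treat the existing test scripts in the repository as the authoritative reference.

    #!/usr/bin/env python

    import sys

    # runtest version 2 interface (see http://runtest.readthedocs.org)
    from runtest import version_info, get_filter, cli, run
    # configure() describes how to launch the code; assumed to be
    # provided alongside the test scripts in the sources
    from runtest_config import configure

    assert version_info.major == 2

    # Extract selected numbers from the output and compare them
    # against the reference output stored in result/
    filters = {
        'out': [
            get_filter(string='Final DFT energy',   # illustrative filter
                       rel_tolerance=1.0e-9),
        ]
    }

    # Parse command-line options (e.g. where the binaries are)
    options = cli()

    ierr = run(options,
               configure,
               input_files=['my_new_test.dal', 'my_new_test.mol'],
               filters=filters)

    # The return code is what CTest checks: zero means success
    sys.exit(ierr)

The filters pick selected numbers out of the program output and compare them against the reference in result/ within the given tolerance; everything else in the output is ignored.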

You have to add your new test to cmake/Tests(LS)DALTON.cmake. Then do:

$ cmake ..

in your build directory. Finally you can run your new test with the command:

$ ctest -R my_new_test

Your test folder will not be cluttered with output files if you are running the test from the build directory.

You can also adjust the test and execute it directly in the test directory, but then be careful not to commit generated files.

Nightly testing dashboard

We have two testing dashboards: https://testboard.org/cdash/?project=Dalton and https://testboard.org/cdash/?project=LSDalton.

This is the place to inspect tests which are known to fail (ideally none).

By default CTest will report to https://testboard.org/cdash/?project=Dalton. You can change this by setting CTEST_PROJECT_NAME to “LSDALTON” (or some other dashboard):

$ export CTEST_PROJECT_NAME=LSDALTON

By default the build name that appears on the dashboard is set to:

"${CMAKE_SYSTEM_NAME}-${CMAKE_HOST_SYSTEM_PROCESSOR}-${CMAKE_Fortran_COMPILER_ID}-${BLAS_TYPE}-${CMAKE_BUILD_TYPE}"

If you don’t like it you can either change the default, or set the build name explicitly:

$ ./setup -D BUILDNAME='a-better-build-name'

Then run CTest with -D Nightly or Experimental:

$ ctest -D Nightly      [-jN] [-L ...] [-R ...]
$ ctest -D Experimental [-jN] [-L ...] [-R ...]

If you want to test your current code, take Experimental. If you want to set up a cron script to run tests every night, take Nightly.

On a mobile device you may find this useful: https://testboard.org/cdash/iphone/project.php?project=Dalton

Testing with coverage

To compile and test for coverage means to collect statistics on which source lines are executed by the test suite. This is accomplished at the setup stage:

$ ./setup --type=debug --coverage

and by executing the tests with:

$ ctest -D Experimental

This will execute the tests and upload the statistics to CDash, where they are listed under Coverage. For each source file one obtains the percentage of executable lines that were executed by the test suite or the selected tests. To see which lines were executed and which were not, one has to be authorized and logged in to the CDash pages.

Viewing coverage results locally (GNU)

This section applies to the GNU compiler suite. It can be useful if one wants to check things locally without submitting each run to the public site.

  • The setup is the same as above; the --coverage compiler option is a synonym for the compiler flags -fprofile-arcs -ftest-coverage and the link flag -lgcov.

  • During the compile stage, for each source file xxx.F a file xxx.F.gcno is generated

  • When the tests are run, for each source file xxx.F a file xxx.F.gcda is generated

  • The gcov program is used to obtain text files with coverage results. This can be used for individual files: e.g., in the build directory run the command:

    $ gcov CMakeFiles/dalton.dir/DALTON/yyy/xxx.F.gcno
    File '.../dalton/DALTON/yyy/xxx.F'
    Lines executed:86.37% of 653
    Creating 'xxx.F.gcov'
    

This generates a copy of the source file in which each source line is preceded by its execution count and line number. In particular, lines that have not been executed by the tests are labeled ##### (a short sample is shown after this list).

  • The lcov program is a graphical frontend; the following steps can be used to generate HTML locally:

    $ lcov -o xxx.info -c -d CMakeFiles/dalton.dir/DALTON
    $ genhtml -o result xxx.info
    

Open result/index.html in your browser and you have a local graphical view of coverage statistics down to the source-line level.
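To give an idea of the gcov step above, the annotated xxx.F.gcov file contains lines of the form execution-count : line-number : source line; a fragment might look roughly like this (counts, line numbers, and source text are of course illustrative):

            5:   42:      CALL DZERO(WORK,LWORK)
        #####:   43:      CALL QUIT('this branch was never exercised')

Lines marked ##### contain executable code that was never run by the selected tests; non-executable lines are marked with a dash in the count column.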