author    rgommers <ralf.gommers@googlemail.com>  2010-11-11 21:29:56 +0800
committer rgommers <ralf.gommers@googlemail.com>  2010-11-11 21:29:56 +0800
commit    4a7de57448091ef02e50edf9d1e302c20a26ff0c (patch)
tree      357de983a7ba4ff56e7800781f4619d70111a89f /doc/TESTS.rst.txt
parent    a07ac0f4703cbac0e8747ac0e9e08f41cba9b896 (diff)
download  numpy-4a7de57448091ef02e50edf9d1e302c20a26ff0c.tar.gz
DOC: rename ReST files under doc/ from *.txt to *.rst.txt, so they render on github.
Diffstat (limited to 'doc/TESTS.rst.txt')
-rw-r--r--  doc/TESTS.rst.txt  367
1 files changed, 367 insertions, 0 deletions
diff --git a/doc/TESTS.rst.txt b/doc/TESTS.rst.txt
new file mode 100644
index 000000000..936f00a1d
--- /dev/null
+++ b/doc/TESTS.rst.txt
@@ -0,0 +1,367 @@
+.. -*- rest -*-
+
+NumPy/SciPy Testing Guidelines
+==============================
+
+.. contents::
+
+Introduction
+''''''''''''
+
+SciPy uses the `Nose testing system
+<http://www.somethingaboutorange.com/mrl/projects/nose>`__, with some
+minor convenience features added. Nose is an extension of the unit
+testing framework offered by `unittest.py
+<http://docs.python.org/lib/module-unittest.html>`__. Our goal is that
+every module and package in SciPy should have a thorough set of unit
+tests. These tests should exercise the full functionality of a given
+routine as well as its robustness to erroneous or unexpected input
+arguments. Long experience has shown that by far the best time to
+write the tests is before you write or change the code - this is
+`test-driven development
+<http://en.wikipedia.org/wiki/Test-driven_development>`__. The
+arguments for this can sound rather abstract, but we can assure you
+that you will find that writing the tests first leads to more robust
+and better designed code. Well-designed tests with good coverage make
+an enormous difference to the ease of refactoring. Whenever a new bug
+is found in a routine, you should write a new test for that specific
+case and add it to the test suite to prevent that bug from creeping
+back in unnoticed.
+
+To run SciPy's full test suite, use the following::
+
+ >>> import scipy
+ >>> scipy.test()
+
+SciPy uses the testing framework from NumPy (specifically
+``numpy.testing``), so all the SciPy examples shown here are also
+applicable to NumPy. So NumPy's full test suite can be run as
+follows::
+
+ >>> import numpy
+ >>> numpy.test()
+
+Note that for Python >= 2.7 deprecation warnings are silent by default. They
+can be turned on (recommended after installation and before upgrading) by
+starting the interpreter with the ``-Wd`` switch, or by::
+
+ >>> import warnings
+ >>> warnings.simplefilter('always', DeprecationWarning)
+
+The test method may take two or more arguments; the first is a string
+label specifying what should be tested and the second is an integer
+giving the level of output verbosity. See the docstring for numpy.test
+for details. The default value for the label is 'fast' - which will
+run the standard tests. The string 'full' will run the full battery
+of tests, including those identified as being slow to run. If the
+verbosity is 1 or less, the tests will just show information messages
+about the tests that are run; but if it is greater than 1, then the
+tests will also provide warnings on missing tests. So if you want to
+run every test and get messages about which modules don't have tests::
+
+ >>> scipy.test(label='full', verbosity=2) # or scipy.test('full', 2)
+
+Finally, if you are only interested in testing a subset of SciPy, for
+example, the ``integrate`` module, use the following::
+
+ >>> scipy.integrate.test()
+
+The rest of this page will give you a basic idea of how to add unit
+tests to modules in SciPy. It is extremely important for us to have
+extensive unit testing since this code is going to be used by
+scientists and researchers and is being developed by a large number of
+people spread across the world. So, if you are writing a package that
+you'd like to become part of SciPy, please write the tests as you
+develop the package. Also, since much of SciPy is legacy code that was
+originally written without unit tests, there are still several modules
+that don't have tests yet. Please feel free to choose one of these
+modules and develop tests for it, either after or even while you read
+through this introduction.
+
+Writing your own tests
+''''''''''''''''''''''
+
+Every Python module, extension module, or subpackage in the SciPy
+package directory should have a corresponding ``test_<name>.py`` file.
+Nose examines these files for test methods (named test*) and test
+classes (named Test*).
+
+Suppose you have a SciPy module ``scipy/xxx/yyy.py`` containing a
+function ``zzz()``. To test this function you would create a test
+module called ``test_yyy.py``. If you only need to test one aspect of
+``zzz``, you can simply add a test function::
+
+ def test_zzz():
+ assert_(zzz() == 'Hello from zzz')
+
+More often, we need to group a number of tests together, so we create
+a test class::
+
+ from numpy.testing import assert_, assert_raises
+
+ # import xxx symbols
+ from scipy.xxx.yyy import zzz
+
+ class TestZzz:
+ def test_simple(self):
+ assert_(zzz() == 'Hello from zzz')
+
+ def test_invalid_parameter(self):
+ assert_raises(...)
+
+Within these test methods, ``assert_()`` and related functions are used to test
+whether a certain assumption is valid. If the assertion fails, the test fails.
+Note that the Python builtin ``assert`` should not be used, because it is
+stripped during compilation with ``-O``.
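
To see why this matters, here is a minimal sketch of an ``assert_``-style helper (a simplification for illustration, not the real ``numpy.testing`` implementation): because it is an ordinary function call, it keeps working even when the ``-O`` switch strips bare ``assert`` statements.

```python
def assert_(val, msg=''):
    # A function call survives ``python -O``; the bare ``assert``
    # statement is compiled away when optimization is on.
    if not val:
        raise AssertionError(msg)

assert_(2 + 2 == 4)  # passes silently
try:
    assert_(2 + 2 == 5, "arithmetic is broken")
except AssertionError as exc:
    print("caught: %s" % exc)  # prints: caught: arithmetic is broken
```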
+
+Sometimes it is convenient to run ``test_yyy.py`` by itself, so we add
+
+::
+
+ if __name__ == "__main__":
+ run_module_suite()
+
+at the bottom.
+
+Labeling tests with nose
+------------------------
+
+Unlabeled tests like the ones above are run in the default
+``scipy.test()`` run. If you want to label your test as slow - and
+therefore reserved for a full ``scipy.test(label='full')`` run, you
+can label it with a nose decorator::
+
+ # numpy.testing module includes 'import decorators as dec'
+ from numpy.testing import dec, assert_
+
+ @dec.slow
+ def test_big(self):
+ print 'Big, slow test'
+
+Similarly for methods::
+
+ class TestZzz:
+ @dec.slow
+ def test_simple(self):
+ assert_(zzz() == 'Hello from zzz')
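
For the curious, a label decorator of this kind can be very simple; the sketch below (hypothetical, not the actual ``numpy.testing.dec`` code) merely tags the function with an attribute that the test collector can filter on when deciding whether a test belongs to the 'fast' or 'full' set.

```python
def slow(t):
    """Tag a test function as slow so a collector can filter on it."""
    t.slow = True
    return t

@slow
def test_big():
    pass

print(test_big.slow)  # True
```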
+
+Easier setup and teardown functions / methods
+---------------------------------------------
+
+Nose looks for module level setup and teardown functions by name;
+thus::
+
+ def setup():
+ """Module-level setup"""
+ print 'doing setup'
+
+ def teardown():
+ """Module-level teardown"""
+ print 'doing teardown'
+
+
+You can add setup and teardown functions to functions and methods with
+nose decorators::
+
+ import nose
+ # import all functions from numpy.testing that are needed
+ from numpy.testing import assert_, assert_array_almost_equal
+
+ def setup_func():
+ """A trivial setup function."""
+ global helpful_variable
+ helpful_variable = 'pleasant'
+ print "In setup_func"
+
+ def teardown_func():
+ """A trivial teardown function."""
+ global helpful_variable
+ del helpful_variable
+ print "In teardown_func"
+
+ @nose.with_setup(setup_func, teardown_func)
+ def test_with_extras():
+ """This test uses the setup/teardown functions."""
+ global helpful_variable
+ print " In test_with_extras"
+ print " Helpful is %s" % helpful_variable
+
+Parametric tests
+----------------
+
+One very nice feature of nose is allowing easy testing across a range
+of parameters - a nasty problem for standard unit tests. It does this
+with test generators::
+
+ def check_even(n, nn):
+ """A check function to be used in a test generator."""
+ assert_(n % 2 == 0 or nn % 2 == 0)
+
+ def test_evens():
+ for i in range(0,4,2):
+ yield check_even, i, i*3
+
+Note that 'check_even' is not itself a test (no 'test' in the name),
+but 'test_evens' is a generator that returns a series of tests, using
+'check_even', across a range of inputs.
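
The runner expands each yielded tuple into a call, with the first element as the check function and the rest as its arguments; roughly, a nose-style runner consumes such a generator like this (a sketch of the mechanism, not nose's actual code):

```python
def check_even(n, nn):
    """A check function to be used in a test generator."""
    assert n % 2 == 0 or nn % 2 == 0

def test_evens():
    for i in range(0, 4, 2):
        yield check_even, i, i * 3

# Expand the generator the way a nose-style runner would;
# each expanded call is reported as a separate test.
for case in test_evens():
    func, args = case[0], case[1:]
    func(*args)
```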
+
+.. warning::
+
+ Parametric tests cannot be implemented on classes derived from
+ TestCase.
+
+Doctests
+--------
+
+Doctests are a convenient way of documenting the behavior of a function
+and allowing that behavior to be tested at the same time. The output
+of an interactive Python session can be included in the docstring of a
+function, and the test framework can run the example and compare the
+actual output to the expected output.
+
+The doctests can be run by adding the ``doctests`` argument to the
+``test()`` call; for example, to run all tests (including doctests)
+for numpy.lib::
+
+ >>> import numpy as np
+ >>> np.lib.test(doctests=True)
+
+The doctests are run as if they are in a fresh Python instance which
+has executed ``import numpy as np`` (tests that are part of the SciPy
+package also have an implicit ``import scipy as sp``).
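
For example, a docstring example like the one below is picked up and checked against the recorded output; here the standard library's ``doctest`` machinery (which such runners build on) is driven directly, with the globals passed explicitly so the example is self-contained:

```python
import doctest

def cube(x):
    """Return the cube of x.

    >>> cube(3)
    27
    """
    return x * x * x

# Collect and run the doctests in cube's docstring.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
for test in finder.find(cube, 'cube', globs={'cube': cube}):
    runner.run(test)
print(runner.failures)  # 0
```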
+
+
+``tests/``
+----------
+
+Rather than keeping the code and the tests in the same directory, we
+put all the tests for a given subpackage in a ``tests/``
+subdirectory. For our example, if it doesn't already exist you will
+need to create a ``tests/`` directory in ``scipy/xxx/``. So the path
+for ``test_yyy.py`` is ``scipy/xxx/tests/test_yyy.py``.
+
+Once ``scipy/xxx/tests/test_yyy.py`` is written, it's possible to
+run the tests by going to the ``tests/`` directory and typing::
+
+ python test_yyy.py
+
+Or if you add ``scipy/xxx/tests/`` to the Python path, you could run
+the tests interactively in the interpreter like this::
+
+ >>> import test_yyy
+ >>> test_yyy.test()
+
+``__init__.py`` and ``setup.py``
+--------------------------------
+
+Usually, however, adding the ``tests/`` directory to the Python path
+isn't desirable. Instead it is better to invoke the tests straight
+from the module ``xxx``. To this end, simply place the following lines
+at the end of your package's ``__init__.py`` file::
+
+ ...
+ def test(level=1, verbosity=1):
+ from numpy.testing import Tester
+ return Tester().test(level, verbosity)
+
+You will also need to add the tests directory in the configuration
+section of your setup.py::
+
+ ...
+ def configuration(parent_package='', top_path=None):
+ ...
+ config.add_data_dir('tests')
+ return config
+ ...
+
+Now you can do the following to test your module::
+
+ >>> import scipy
+ >>> scipy.xxx.test()
+
+Also, when invoking the entire SciPy test suite, your tests will be
+found and run::
+
+ >>> import scipy
+ >>> scipy.test()
+ # your tests are included and run automatically!
+
+Tips & Tricks
+'''''''''''''
+
+Creating many similar tests
+---------------------------
+
+If you have a collection of tests that must be run multiple times with
+minor variations, it can be helpful to create a base class containing
+all the common tests, and then create a subclass for each variation.
+Several examples of this technique exist in NumPy; below are excerpts
+from one in `numpy/linalg/tests/test_linalg.py
+<http://github.com/numpy/numpy/blob/master/numpy/linalg/tests/test_linalg.py>`__::
+
+ class LinalgTestCase:
+ def test_single(self):
+ a = array([[1.,2.], [3.,4.]], dtype=single)
+ b = array([2., 1.], dtype=single)
+ self.do(a, b)
+
+ def test_double(self):
+ a = array([[1.,2.], [3.,4.]], dtype=double)
+ b = array([2., 1.], dtype=double)
+ self.do(a, b)
+
+ ...
+
+ class TestSolve(LinalgTestCase):
+ def do(self, a, b):
+ x = linalg.solve(a, b)
+ assert_almost_equal(b, dot(a, x))
+ assert_(imply(isinstance(b, matrix), isinstance(x, matrix)))
+
+ class TestInv(LinalgTestCase):
+ def do(self, a, b):
+ a_inv = linalg.inv(a)
+ assert_almost_equal(dot(a, a_inv), identity(asarray(a).shape[0]))
+ assert_(imply(isinstance(a, matrix), isinstance(a_inv, matrix)))
+
+In this case, we wanted to test solving a linear algebra problem using
+matrices of several data types, using ``linalg.solve`` and
+``linalg.inv``. The common test cases (for single-precision,
+double-precision, etc. matrices) are collected in ``LinalgTestCase``.
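
Stripped of the linear algebra, the pattern reduces to the following sketch (hypothetical classes, plain Python only): the base class holds the shared test methods, and each subclass supplies only the ``do`` hook that they all call.

```python
class RoundTripCase:
    # Common test methods; each subclass supplies ``do``.
    def test_small(self):
        self.do([1, 2, 3])

    def test_empty(self):
        self.do([])

class TestReversed(RoundTripCase):
    def do(self, data):
        # Reversing twice must give back the original sequence.
        assert list(reversed(list(reversed(data)))) == data

class TestSorted(RoundTripCase):
    def do(self, data):
        # Sorting an already-sorted list must be a no-op.
        assert sorted(sorted(data)) == sorted(data)

# Run the inherited tests by hand, the way a collector would:
for cls in (TestReversed, TestSorted):
    inst = cls()
    inst.test_small()
    inst.test_empty()
```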
+
+Known failures & skipping tests
+-------------------------------
+
+Sometimes you might want to skip a test or mark it as a known failure,
+such as when the test suite is being written before the code it's
+meant to test, or if a test only fails on a particular architecture.
+The decorators from ``numpy.testing.dec`` can be used to do this.
+
+To skip a test, simply use ``skipif``::
+
+ from numpy.testing import dec
+
+ @dec.skipif(SkipMyTest, "Skipping this test because...")
+ def test_something(foo):
+ ...
+
+The test is marked as skipped if ``SkipMyTest`` evaluates to nonzero,
+and the message in verbose test output is the second argument given to
+``skipif``. Similarly, a test can be marked as a known failure by
+using ``knownfailureif``::
+
+ from numpy.testing import dec
+
+ @dec.knownfailureif(MyTestFails, "This test is known to fail because...")
+ def test_something_else(foo):
+ ...
+
+Of course, a test can be unconditionally skipped or marked as a known
+failure by passing ``True`` as the first argument to ``skipif`` or
+``knownfailureif``, respectively.
+
+The total number of skipped and known failing tests is displayed
+at the end of the test run. Skipped tests are marked as ``'S'`` in
+the test results (or ``'SKIPPED'`` for ``verbosity > 1``), and known
+failing tests are marked as ``'K'`` (or ``'KNOWN'`` if ``verbosity >
+1``).