== Tests

Tests make development easier for veteran project contributors and
newcomers alike. Most projects use the
https://docs.python.org/3.6/library/unittest.html[unittest] framework
for their tests, so you should familiarize yourself with it.

[NOTE]
.Note
====
Writing tests can be a great way to get involved with a project. It's an
opportunity to get familiar with the codebase and with the code
submission and review process. Check the project's code coverage and
write a test for a piece of code that's missing coverage!
====

Patches should be accompanied by one or more tests that demonstrate the
feature or bugfix works. This makes the review process much easier,
since it allows the reviewer to run your code with very little effort,
and it lets developers know when they break your code.

=== Test Organization

Having a standard test layout makes it easy to find tests. When adding
new tests, follow these guidelines:

[arabic]
. Each module in the application should have a corresponding test
module. These modules should be organized in the test package to mirror
the package they test. That is, if the package contains the
`<package>/server/push.py` module, the corresponding test module should
be `<test_root>/server/test_push.py`.
. Within each test module, follow the
https://docs.python.org/3.6/library/unittest.html#organizing-test-code[unittest
code organization guidelines].
. Include a documentation block for each test case that explains the
goal of the test.
. Avoid using mock unless absolutely necessary. It's easy to write tests
with mock that only assert that mock works as expected. When testing
code that makes HTTP requests, consider using
https://pypi.python.org/pypi/vcrpy[vcrpy].

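As a sketch of guidelines 1–3, a test module at
`<test_root>/server/test_push.py` might look like the following. The
`format_topic` function is a hypothetical stand-in for code that would
live in `<package>/server/push.py`; in a real project you would import
it from the application package instead of defining it inline:

```python
import unittest


# Hypothetical stand-in for a function defined in <package>/server/push.py.
def format_topic(package, event):
    """Build the message topic published for a package event."""
    return "org.example.{}.{}".format(package, event)


class TestFormatTopic(unittest.TestCase):
    """Tests for the (hypothetical) server.push.format_topic function."""

    def test_topic_contains_package_and_event(self):
        """Assert the topic joins the prefix, package, and event name."""
        self.assertEqual(format_topic("mypkg", "update"),
                         "org.example.mypkg.update")


if __name__ == "__main__":
    unittest.main(exit=False)
```

Because the class subclasses `unittest.TestCase`, any of the runners
discussed below can discover and run it without extra configuration.
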
[NOTE]
.Note
====
You may find projects that do not follow this test layout. In those
cases, consider re-organizing the tests to follow the layout described
here; until that happens, follow the project's established conventions.
====

=== Test Runners

Projects should include an easy way to run the tests locally, and the
steps to run them should be documented. This should be the same way the
continuous integration tool (Jenkins, Travis CI, etc.) runs the tests.

There are many test runners available that can discover
https://docs.python.org/3.6/library/unittest.html[unittest]-based tests.
These include:

* https://docs.python.org/3.6/library/unittest.html[unittest] itself,
via `python -m unittest discover`
* http://docs.pytest.org/en/latest/contents.html[pytest]
* http://nose2.readthedocs.io/en/latest/[nose2]

Projects should choose whichever runner best suits them.

[NOTE]
.Note
====
You may find projects using the
https://nose.readthedocs.io/en/latest/[nose] test runner. nose is in
maintenance mode and, according to its documentation, will likely cease
to exist without a new maintainer. Its documentation recommends using
https://docs.python.org/3.6/library/unittest.html[unittest],
http://docs.pytest.org/en/latest/contents.html[pytest], or
http://nose2.readthedocs.io/en/latest/[nose2] instead.
====

[[tox-config]]
=== Tox
https://pypi.python.org/pypi/tox[Tox] is an easy way to run your
project's tests (using a Python test runner) under multiple Python
interpreters. It also allows you to define arbitrary test environments,
so it's an excellent place to run the code style checks and to ensure
the project's documentation builds without errors or warnings.

Here's an example `tox.ini` file that runs a project's unit tests on
Python 2.7, Python 3.4, Python 3.5, and Python 3.6. It also runs
https://pypi.python.org/pypi/flake8[flake8] on the entire codebase and
builds the documentation with the Sphinx "treat warnings as errors" flag
enabled. Finally, it enforces 100% coverage on lines edited by new
patches using https://pypi.org/project/diff-cover/[diff-cover]:

....
[tox]
envlist = py27,py34,py35,py36,lint,diff-cover,docs
# If the user is missing an interpreter, don't fail
skip_missing_interpreters = True

[testenv]
deps =
    -rtest-requirements.txt
# Substitute your test runner of choice
commands =
    py.test
# When running in OpenShift you don't have a username, so expanduser
# won't work. If you are running your tests in CentOS CI, this line is
# important so the tests can pass there; otherwise tox will fail to find
# a home directory when looking for configuration files.
passenv = HOME

[testenv:diff-cover]
deps =
    diff-cover
commands =
    diff-cover coverage.xml --compare-branch=origin/master --fail-under=100

[testenv:docs]
changedir = docs
deps =
    sphinx
    sphinxcontrib-httpdomain
    -rrequirements.txt
whitelist_externals =
    mkdir
    sphinx-build
commands =
    mkdir -p _static
    sphinx-build -W -b html -d {envtmpdir}/doctrees . _build/html

[testenv:lint]
deps =
    flake8 > 3.0
commands =
    python -m flake8 {posargs}

[flake8]
show-source = True
max-line-length = 100
exclude = .git,.tox,dist,*egg
....

=== Coverage

https://pypi.python.org/pypi/coverage/[coverage] is a good way to
collect test coverage statistics.
http://docs.pytest.org/en/latest/contents.html[pytest] has a
https://pypi.python.org/pypi/pytest-cov[pytest-cov] plugin that
integrates with https://pypi.python.org/pypi/coverage/[coverage], and
https://pypi.python.org/pypi/nose-cov[nose-cov] provides integration for
the https://nose.readthedocs.io/en/latest/[nose] test runner.
https://pypi.org/project/diff-cover/[diff-cover] can be used to ensure
that all lines edited in a patch have coverage.

It's possible (and recommended) to have the test suite fail if the
coverage percentage goes down. This example `.coveragerc`:

....
[run]
# Track which conditional branches are covered.
branch = True
include =
    my_python_package/*

[report]
# Fail if the coverage is not 100%
fail_under = 100
# Display results with up to 1/100th of a percent accuracy.
precision = 2
exclude_lines =
    pragma: no cover
    # Don't complain if tests don't hit defensive assertion code
    raise AssertionError
    raise NotImplementedError
    if __name__ == .__main__.:
omit =
    my_python_package/tests/*
....

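For illustration, the `exclude_lines` patterns above would skip lines
like the following in application code. This is a hypothetical module,
not part of the example project:

```python
def read_battery_level():  # pragma: no cover
    """Hardware-specific code, explicitly excluded from coverage."""
    raise OSError("no battery on this machine")


class BaseSerializer:
    def serialize(self, obj):
        """Subclasses must override this. The defensive line below is
        skipped by the "raise NotImplementedError" exclusion pattern."""
        raise NotImplementedError


if __name__ == "__main__":  # matched by the if __name__ == .__main__.: pattern
    print("module self-test entry point, excluded from coverage")
```
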
To configure `pytest` to collect coverage data on your project, edit
`setup.cfg` and add this block, substituting `yourpackage` with the name
of the Python package you are measuring coverage on:

....
[tool:pytest]
addopts = --cov-config .coveragerc --cov=yourpackage --cov-report term --cov-report xml --cov-report html
....

The `fail_under` setting causes coverage (and any test-runner plugins
that use coverage) to fail if the coverage level is not 100%. New
projects should enforce 100% test coverage. Existing projects should
ensure test coverage does not drop when accepting a pull request, and
should raise the minimum test coverage until it is 100%.

[NOTE]
.Note
====
https://pypi.python.org/pypi/coverage/[coverage] has great
https://coverage.readthedocs.io/en/coverage-4.3.4/excluding.html[exclusion]
support, so you can exclude individual lines, conditional branches,
functions, classes, and whole source files from your coverage report. If
you have code that doesn't make sense to have tests for, you can exclude
it from your coverage report. Remember to leave a comment explaining why
it's excluded!
====

=== Licenses
The https://pypi.org/project/liccheck/[liccheck] checker can verify that
every dependency in your project has an acceptable license. The
dependencies are checked recursively.

The licenses are validated against a set of acceptable licenses that you
define in a file called `.license_strategy.ini` in your project
directory. Here is an example of such a file, one that accepts Free
licenses:

....
[Licenses]
authorized_licenses:
    bsd
    new bsd
    simplified bsd
    apache
    apache 2.0
    apache software
    gnu lgpl
    gpl v2
    gpl v3
    lgpl with exceptions or zpl
    isc
    isc license (iscl)
    mit
    python software foundation
    zpl 2.1
....

The verification is case-insensitive, and is done on both the `license`
and the `classifiers` metadata fields. See
https://pypi.org/project/liccheck/[liccheck]'s documentation for more
details.

You can automate the license check with the following snippet in your
`tox.ini` file:

....
[testenv:licenses]
deps =
    liccheck
commands =
    liccheck -s .license_strategy.ini
....

Remember to add `licenses` to your Tox `envlist`.

=== Security

The https://pypi.org/project/bandit/[bandit] checker is designed to find
common security issues in Python code.

You can add it to the tests run by Tox by adding the following snippet
to your `tox.ini` file:

....
[testenv:bandit]
deps = bandit
commands =
    bandit -r your_project/ -x your_project/tests/ -ll
....

Remember to add `bandit` to your Tox `envlist`.
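To give a sense of what bandit reports, here is a hypothetical module
containing two patterns it flags; the test IDs in the comments are
bandit's own, though which findings survive the `-ll` severity filter
depends on your bandit version and configuration:

```python
import hashlib
import subprocess


def run_user_command(cmd):
    """Run a command string through the shell.

    shell=True with externally supplied input risks shell injection;
    bandit reports it as B602 (subprocess_popen_with_shell_equals_true).
    """
    return subprocess.run(cmd, shell=True, capture_output=True, text=True)


def fingerprint(data):
    """Hash bytes with MD5.

    MD5 is not collision-resistant; bandit flags weak hashes as B324.
    Prefer hashlib.sha256 for anything security-sensitive.
    """
    return hashlib.md5(data).hexdigest()
```

Running `bandit -r` over a module like this prints each finding with its
severity, confidence, and the offending line, which makes the issues
easy to locate and fix during review.
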