Slow unit tests can lead to tests not being run as often as they should be. Unit tests are often run before builds on continuous integration systems, so lengthy or poorly-patched tests can also slow down deployments.
There are various techniques for speeding up tests, including patching out slow methods (which should be done for isolation anyway), isolating specific tests for execution or exclusion, and running background watch processes that execute the relevant application tests whenever files are saved.
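For example, a slow external call can be patched out so the test itself runs in milliseconds. Here is a minimal, self-contained sketch (the Geocoder class and candidate_location function are made up for illustration and are not part of the example project below), using the mock library, which ships as unittest.mock on Python 3:

from time import sleep
from unittest import TestCase

from mock import patch  # unittest.mock on Python 3


class Geocoder(object):
    """Stands in for any slow external dependency."""

    def lookup(self, address):
        sleep(2)  # pretend this is a slow network call
        return (40.7, -74.0)


def candidate_location(address, geocoder):
    return {'address': address, 'coords': geocoder.lookup(address)}


class LocationTests(TestCase):

    def test_candidate_location(self):
        # Patch the slow lookup so the test no longer waits two seconds.
        with patch.object(Geocoder, 'lookup', return_value=(1.0, 2.0)):
            result = candidate_location('New York, NY', Geocoder())
        self.assertEqual(result['coords'], (1.0, 2.0))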
I've seen a number of code bases where the application code is outnumbered by the testing code 4:1. Even if the tests can finish in two minutes, that's still a two-minute delay for every deployment.
pytest-xdist was recently pointed out to me as a utility to speed up tests. It can break up tests into separate batches and run them concurrently on separate databases.
I decided to create a small project and try pytest-xdist out. You can find my example project on Bitbucket.
A short test
These are the requirements I installed:
$ pip install Django==1.7.1 \
pytest-django==2.7.0 \
pytest-xdist==1.11 \
pytest-cov==1.8.0
pytest is the main tool being used; pytest-xdist and pytest-cov are plugins that speed up testing and run coverage utilities, respectively.
I created a small model and wrote some unit tests for that model that would take at least 800 milliseconds each to run.
example/models.py:
from django.db import models


class Candidate(models.Model):
    first_name = models.CharField(max_length=30)
    last_name = models.CharField(max_length=30)
example/tests.py:
from time import sleep

from django.test import TestCase

from .models import Candidate


class ModelTests(TestCase):

    def test_1(self):
        candidate = Candidate(first_name='Mark', last_name='Lit')
        candidate.save()

        sleep(0.8)  # 800 milliseconds

        candidate = Candidate.objects.get(first_name='Mark')
        self.assertEqual(candidate.first_name, 'Mark')

    def test_2(self):
        self.test_1()

    def test_3(self):
        self.test_1()

    def test_4(self):
        self.test_1()

    def test_5(self):
        self.test_1()

    def test_6(self):
        self.test_1()

    def test_7(self):
        self.test_1()
        self.assertTrue(False, 'False positive')

    def test_8(self):
        self.test_1()

    def test_9(self):
        self.test_1()

    def test_10(self):
        self.test_1()

    def test_11(self):
        self.test_1()

    def test_12(self):
        self.test_1()
A significant speed improvement
I first ran the regular Django test runner to see how long it would take to complete and to confirm that the test failure is reported properly.
$ python manage.py test
Creating test database for alias 'default'...
.........F..
======================================================================
FAIL: test_7 (example.tests.ModelTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/mark/fast_tests/example/tests.py", line 36, in test_7
self.assertTrue(False, 'False positive')
AssertionError: False positive
----------------------------------------------------------------------
Ran 12 tests in 9.662s
FAILED (failures=1)
Destroying test database for alias 'default'...
It showed the failure and finished testing in 9.662 seconds.
I then created a pytest.ini file with the following contents:
[pytest]
python_files=tests.py
DJANGO_SETTINGS_MODULE=fast_tests.settings
And then ran py.test with 3 parallel processes:
$ py.test -n 3
=================================== test session starts ====================================
platform linux2 -- Python 2.7.6 -- py-1.4.26 -- pytest-2.6.4
plugins: xdist, django
gw0 [12] / gw1 [12] / gw2 [12]
scheduling tests via LoadScheduling
...........F
========================================= FAILURES =========================================
____________________________________ ModelTests.test_7 _____________________________________
[gw1] linux2 -- Python 2.7.6 /home/mark/.virtualenvs/fast_tests/bin/python
self = <example.tests.ModelTests testMethod=test_7>
    def test_7(self):
        self.test_1()
>       self.assertTrue(False, 'False positive')
E       AssertionError: False positive
example/tests.py:36: AssertionError
=========================== 1 failed, 11 passed in 2.82 seconds ============================
It ran the same set of tests and reported the failure 3.4 times faster than the regular Django test runner.
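As an optional tweak (not something the example project's pytest.ini does), pytest's addopts setting can bake the flag into the configuration so every run is parallel by default:

[pytest]
python_files=tests.py
DJANGO_SETTINGS_MODULE=fast_tests.settings
addopts=-n 3

With that in place, a plain py.test invocation picks up the three worker processes automatically.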
Coverage
py.test has a pytest-cov plugin which adds support for running tests with the coverage tool. The coverage tool generates a .coverage file which holds statistics on how many lines of your code base (those seen by coverage) were hit at least once when running the test suite.
You can also ask for .py,cover files to be generated alongside your source code files. These are annotated copies showing which lines have and have not been hit by your tests, as well as lines deemed to be non-statement lines and skipped.
$ py.test -n3 --cov . --cov-report annotate
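As a rough illustration (the exact contents depend on your source files), an annotated copy such as example/views.py,cover marks executed lines with a > prefix and missed lines with a ! prefix. Since nothing imports the stub views module during these tests, its single statement would show up as missed, something like:

! from django.shortcuts import render

  # Create your views here.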
coverage supports returning an error code if a certain percentage of lines was not hit. Below I check whether fewer than 95% of the lines were hit:
$ coverage report --fail-under=95
Name                              Stmts   Miss  Cover
-----------------------------------------------------
example/__init__                      0      0   100%
example/admin                         1      0   100%
example/migrations/0001_initial       5      0   100%
example/migrations/__init__           0      0   100%
example/models                        4      0   100%
example/tests                        33      0   100%
example/views                         1      1     0%
fast_tests/__init__                   0      0   100%
fast_tests/settings                  17      0   100%
fast_tests/urls                       3      3     0%
fast_tests/wsgi                       4      4     0%
manage                                6      6     0%
-----------------------------------------------------
TOTAL                                74     14    81%
Only 81% of lines were hit, so the exit code will be 127:
$ echo $?
127
If I lower the threshold to 80%, exit code 0 is returned:
$ coverage report --fail-under=80
...
$ echo $?
0
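On a continuous integration server the two steps can be chained so the build fails either on a failing test or on insufficient coverage, for example:

$ py.test -n 3 --cov . && coverage report --fail-under=95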