Python testing: 5 tips and features you might not know about

26.06.2023 | 5 min read


The contents of this article are neither extremely advanced nor overly basic. If you are a moderately experienced Python developer, you might not have heard about some of these features related to testing, and hopefully I can make your work a bit easier.


When I was getting hired by 10Clouds, testing was a major blind spot in my developer skillset. I made an effort to fix this, and along the way I came across several cool features and techniques that really caught my attention by helping me solve a particular problem, or simply by being very convenient.

I will mostly focus on pytest (and sometimes on its pytest-mock plugin), but some features are available in the standard library's unittest. So let's get started.

Use pytest-mock's spy to inspect objects without mocking their behavior

Read the docs here.


spy is a feature of the pytest-mock plugin for pytest. Use it when you do not need to mock any behaviour, but still want to see whether and how the spied code was called (or use its other call-inspection features). In the next tip, keep an eye out for the following lines:

python
spy_some_function = mocker.spy(src, "some_function")

and then a few lines down

python
spy_some_function.assert_called_once_with(ARG_1, mocker.ANY)

I did not want to interfere with calling some_function, just have a little look at how it was called.

spy also has two extra attributes. To quote the documentation:

  • spy_return: contains the returned value of the spied function.
  • spy_exception: contains the last exception value raised by the spied function/method when it was last called, or `None` if no exception was raised.
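
To get a feel for what these two attributes record, here is a hand-rolled sketch. Nothing here comes from pytest-mock; RecordingWrapper and risky_divide are made-up stand-ins that capture the last return value and the last exception the way spy_return and spy_exception do:

```python
# Illustrative only: a hand-rolled wrapper that records the last return
# value and last exception, mimicking spy_return / spy_exception.
class RecordingWrapper:
    def __init__(self, func):
        self.func = func
        self.spy_return = None
        self.spy_exception = None

    def __call__(self, *args, **kwargs):
        try:
            self.spy_return = self.func(*args, **kwargs)
        except Exception as exc:
            self.spy_exception = exc
            raise
        return self.spy_return


def risky_divide(a, b):
    return a / b


wrapped = RecordingWrapper(risky_divide)

wrapped(10, 2)
assert wrapped.spy_return == 5.0  # last return value is kept

try:
    wrapped(1, 0)
except ZeroDivisionError:
    pass
assert isinstance(wrapped.spy_exception, ZeroDivisionError)  # last exception is kept
```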

Under the hood, if you read spy's source code, it is basically:

python
patch.object(obj, name, side_effect=wrapped, autospec=autospec)

Let us unpack this quickly.

  • patch.object(target, attribute) mocks (or "patches") the indicated attribute of the given target. So in our case, we patch some_function from the src module. You can read more about it in the documentation.
  • side_effect=wrapped allows the spy to return what the wrapped object would normally return.
  • autospec=autospec (usually this resolves to autospec=True) further mimics the behaviour of the mocked/spied object. In fact, autospec is so great that it has its own section in this article, so keep reading for a more in-depth explanation.
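
To see that one-liner in action without pytest-mock, here is a self-contained sketch that reproduces the spy behaviour with the standard library alone. The src "module" is faked with a SimpleNamespace, and inner_function is a made-up stand-in:

```python
from types import SimpleNamespace
from unittest import mock

# Fake "module" with a function we want to spy on (illustrative stand-ins).
src = SimpleNamespace(inner_function=lambda x: x * 2)
wrapped = src.inner_function  # keep a reference to the real callable

# side_effect=wrapped delegates to the original, so behaviour is unchanged,
# while the patch still records every call for inspection.
with mock.patch.object(src, "inner_function", side_effect=wrapped, autospec=True) as spy:
    result = src.inner_function(21)
    spy.assert_called_once_with(21)  # the call was recorded...

assert result == 42  # ...and the real return value came through
```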

mocker.ANY for your non-deterministic outcomes


Note: if you do not use pytest and pytest-mock, ANY is still available with the standard library's unittest.mock.ANY.

Read the docs here.

As the name suggests, ANY works as a wildcard: it compares equal to anything, so it never fails an assertion. It is useful when you are interested in some of the arguments that were passed, but not all of them, such as default arguments.

python
# src.py
def some_function(arg1, silence_errors=False):
	result = None
	try:
    	# concatenation raises TypeError if arg1 is not a string
    	result = "I did some stuff to " + arg1
	except TypeError as exc:
    	if silence_errors:
        	pass
    	else:
        	raise exc

	return result

Let's say we have a function that has a default argument that doesn't really affect the "happy path behavior", so when testing the happy path, we might not care about it. Also note how we make use of the spy mentioned above.

python
# tests.py
import src

def test_some_function(mocker):
	ARG_1 = "foo"
	spy_some_function = mocker.spy(src, "some_function")

	result = src.some_function(ARG_1, False)

	assert result == f"I did some stuff to {ARG_1}"
	spy_some_function.assert_called_once_with(ARG_1, mocker.ANY)

Perhaps your API returns a creation timestamp, or a random id?

python
from my_api.views import create_object

def test_create_object(mocker):
	expected_response_data = {
    	"important_attribute": "important_value",
    	"created": mocker.ANY,
    	"id": mocker.ANY,
	}

	# assuming create_object() is an API function which exposes object data,
	# rather than returning the Python object itself
	response = create_object()

	assert response.status_code == 201
	assert response.data == expected_response_data

Conveniently, the test will still fail if the created or id keys are missing from the returned dictionary. It just doesn't care about the values.
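
Under the hood, ANY is simply an object whose equality check always succeeds, which you can verify directly with the standard library version:

```python
from datetime import datetime
from unittest.mock import ANY

# ANY compares equal to absolutely anything...
assert ANY == datetime.now()
assert ANY == 12345
assert {"id": ANY} == {"id": "c0ffee"}

# ...but a missing key still fails the dictionary comparison:
assert {"id": ANY, "created": ANY} != {"id": "c0ffee"}
```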

autospec can save you from typos and false positive tests

Read the docs here.

The easiest way to explain autospec is that, in my experience, it can save you from yourself when you mock things. Mocks can be tricky: they will happily accept anything you throw at them, and conceal errors with their good intentions. Let us consider a simple database interface: we have a database connector, and a repository layer which calls the connector.

python
# db.py
class DataBaseConnector:

	def __init__(self):
    	self.db = []

	def create(self, row: dict):
    	self.db.append(row)


class DataBaseRepository:

	@staticmethod
	def create(db_connector: DataBaseConnector, row):
    	db_connector.create(row)
    	return row

And a simple test:

python
from unittest import mock, TestCase

import db
from db import DataBaseRepository


class TestDataBaseRepository(TestCase):
	def test_create_returns_inserted_row(self):
    	TEST_ROW = {"foo": "bar"}
    	with mock.patch.object(db, "DataBaseConnector") as mock_connector:
        	connector = mock_connector()
        	inserted_row = DataBaseRepository.create(connector, TEST_ROW)
        	assert inserted_row == TEST_ROW

We are testing the Repository class, so it stands to reason that we mock the database connection. However, this test can hide breaking changes to the Connector. Let's assume we want to refactor the method name, so the Connector's method is now called save rather than create:

python
class DataBaseConnector:

	def __init__(self):
    	self.db = []

	# change method name
	def save(self, row: dict):
    	self.db.append(row)


# repository remains unchanged
class DataBaseRepository:

	@staticmethod
	def create(db_connector: DataBaseConnector, row):
    	db_connector.create(row)
    	return row

The previous test still passes, even though the Repository now calls a non-existing method. What can we do? autospec comes to the rescue. Let's add it to the mock:

python
class TestDataBaseRepository(TestCase):
	def test_create_returns_inserted_row(self):
    	TEST_ROW = {"foo": "bar"}

    	# add autospec=True
    	with mock.patch.object(db, "DataBaseConnector", autospec=True) as mock_connector:
        	connector = mock_connector()
        	inserted_row = DataBaseRepository.create(connector, TEST_ROW)
        	assert inserted_row == TEST_ROW

Now the test fails with a helpful message:

python
AttributeError: Mock object has no attribute 'create'

Now that you have seen how it works, let's look at the docs for a more technical explanation than the one we've started with:

  [Autospeccing] limits the api of mocks to the api of an original object (...) In addition mocked functions / methods have the same call signature as the original.
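
You can observe that signature enforcement directly with create_autospec from the standard library. The greet function below is a made-up example:

```python
from unittest import mock

def greet(name):
    return f"Hello, {name}"

# A plain MagicMock accepts any call at all, even with the wrong arity:
plain = mock.MagicMock()
plain("too", "many", "args")  # no complaints

# An autospec'd mock enforces greet's one-argument signature:
speced = mock.create_autospec(greet)
speced("Ada")  # fine

signature_error = None
try:
    speced("too", "many")
except TypeError as exc:
    signature_error = exc

assert isinstance(signature_error, TypeError)
```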

You can parametrize fixtures with indirect=True

Read the docs here.

Whenever you use the following fixture, your tests will check that your code handles empty strings and None as phone number inputs, alongside a valid number. Pytest will generate three tests, one per parameter, for every test that uses the fixture. This is very convenient when you don't want to rewrite all the tests that need to handle phone numbers.

python
@pytest.fixture(params=["", None, "555-123-4567"])
def user_info(request):
	return {
    	"name": "John Doe",
    	"phone_number": request.param,
	}

But perhaps you would like to parametrize it per test, rather than at the fixture definition? Luckily, pytest.mark.parametrize has an optional argument indirect which can do this for you.

python
@pytest.fixture
def user_info(request):
	return {
    	"name": "John Doe",
    	"phone_number": request.param,
	}

@pytest.mark.parametrize("user_info", ["+1-555-123-4567", "5551234567"], indirect=True)
def test_save_user_data_different_phone_no_formats(user_info):
	saved_user_data = save_user_data(user_info)
	assert saved_user_data["phone_number"] == user_info["phone_number"]

The snippet above will pass the two provided strings to the fixture. Please note they will override any parameters that might have been provided in the fixture itself. So in this case, the test will be called with "+1-555-123-4567" and "5551234567" only, regardless of the fixture's own parameters.
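
One more detail worth knowing: indirect also accepts a list of argument names, so you can route only some parameters through the fixture while passing the rest straight to the test. A sketch, reusing the user_info fixture from above with a made-up expected argument:

```python
import pytest

@pytest.fixture
def user_info(request):
    return {"name": "John Doe", "phone_number": request.param}

# Only "user_info" is routed through the fixture; "expected" is passed
# directly to the test function.
@pytest.mark.parametrize(
    "user_info, expected",
    [("+1-555-123-4567", "+1-555-123-4567"), ("5551234567", "5551234567")],
    indirect=["user_info"],
)
def test_phone_number_is_kept(user_info, expected):
    assert user_info["phone_number"] == expected
```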

Change detector tests - do your tests break when you refactor?

This tip will, in a way, contradict the previous ones: we discussed quite a few ways of mocking things, and now we will consider why mocking can be detrimental. Sometimes you want to refactor the code without the tests breaking, because the refactoring is completely valid. I myself have fallen victim to this, having to change multiple tests after renaming things to be more descriptive. The cause is often mocking too much, or at the wrong level, which can mean you are testing implementation rather than behavior.

python
# src.py
class SourceClass:

	@classmethod
	def get_data_from_somewhere(cls):
    	return "complex JSON"

	@classmethod
	def tested_function(cls):
    	data = cls.get_data_from_somewhere()
    	result = f"Do some logic with {data}"
    	return result

python
# tests.py
from unittest import mock, TestCase

import src


class TestingClass(TestCase):
	def test_function(self):
    	with mock.patch.object(
            	src.SourceClass, "get_data_from_somewhere", return_value="bar"):
        	return_value = src.SourceClass.tested_function()
        	assert return_value == "Do some logic with bar"

This test will fail if we decide get_data_from_somewhere is too vague, and rename it to get_json_from_somewhere. Of course, the mock might be necessary if we are getting the data from a DB or an external API, and then this "change detector test" is a necessary evil. But if you can get rid of the mock, then you go back to testing behavior rather than implementation (note that the test becomes an integration test now, which may or may not be what you want):

python
class TestingClass(TestCase):
	def test_function(self):
    	return_value = src.SourceClass.tested_function()
    	assert return_value == "Do some logic with complex JSON"

Another thing to consider is that if your code becomes difficult to test, then perhaps it signifies that the code itself could be refactored? Let's try passing the data factory as an argument. This is arguably cleaner code.

python
# src.py
class SourceClass:

	@classmethod
	def get_data_from_somewhere(cls):
    	return "complex JSON"

	@classmethod
	def tested_function(cls, data_factory=None):
    	if data_factory is not None:
        	data = data_factory()
    	else:
        	data = cls.get_data_from_somewhere()
    	result = f"Do some logic with {data}"
    	return result

Now we can rewrite the test to be more of a unit test, instead of an integration test.

python
# tests.py
from unittest import TestCase

import src


class TestingClass(TestCase):

	test_data = "test data"

	def provide_test_data(self):
    	return self.test_data

	def test_function(self):
    	return_value = src.SourceClass.tested_function(self.provide_test_data)
    	assert return_value == f"Do some logic with {self.test_data}"

We can now test tested_function selectively, swapping in whatever data provider the test needs.

Summary

In this article we went over some technical tips to add to your testing toolbox:

  • we can spy on our functions to have a good look at them without interfering with their behaviour,
  • we can use ANY for when some arguments or return values are not really relevant to the test,
  • autospec helps us keep our mocks in line with our tested objects,
  • we have two ways of parametrizing our fixtures, either at the fixture level, or per-test with indirect,
  • we finished off with less of a tech tip, and more of a high-level question about how we design our tests.

Further reading

If you would like to explore these topics further, I heartily recommend Brian Okken's Python Testing with pytest, 2nd Edition. The book is a great read, and works both as a learning resource and as a reference.

