How to Write Better Python Tests for Network Programming
In this blog post, I will share a simple technique that has helped me a lot in writing better, more testable code: developing tests in parallel with the code itself.
Why Is It Worth Having Tests?
Have you heard any of these?
- Writing tests slows down development. I will write tests when the code is ready.
- It may still change; if I write tests now I will have to rewrite them, so I will write tests when the code is ready.
I have heard these countless times, and I have said them myself. Today I think leaving tests for later is one of the most common mistakes. It usually means the tests are not as good as they could be, or that there are no tests at all because other priorities took over. Furthermore, if you expect your code to change, that is actually a good argument for having tests: when you expect changes, you know you will eventually have to retest. You may have to amend your tests, but when some of them fail after a change, you get extra verification that the failure is related only to that change.
Lack of decent tests results in technical debt, and like any debt, sooner or later you will have to pay it off. That usually happens when you come back to your code after a while to change or fix something: all the time you could have spent writing tests, you will now spend manually retesting the code after the change or fix. If you still remember how you tested it before, this may be manageable; if not, you will spend even more time on it. You can even skip testing and rely on the grace of the gods that it will work well. But you can avoid all of this by changing just one thing!
How Do You Run Your Code?
python <your_file>.py
Right? OK, time for the pro tip!
What if you avoid running code directly and run it with tests instead?
Development Through Tests
When developing code, we write functions, classes, and methods, and we run them to check whether they give us what we expect. Running your code for the first time is the right time to develop tests! All you need to do is run your code with pytest instead of running it directly, capture the outputs you would normally check with print(), and gradually build your tests as you develop your code.
Let’s get our hands dirty by creating some practical examples. This is our project structure:
├── main.py
└── tests
    ├── __init__.py
    └── test_main.py
Create our first function in main.py, something simple.
# main.py
def simple_math_function(*args):
    """Sum arguments"""
    total = 0
    for arg in args:
        total += arg
    return total
Now we should test our function to check whether we get what we expect. But instead of running python main.py, we create a test in tests/test_main.py and run pytest -s. Remember the -s option, as it shows all print() output on-screen. We use print in the test here, but you can use it anywhere in your code. For now we just want to capture our print output the same way we would by running python main.py and calling our function there.
# tests/test_main.py
import main
def test_simple_math_function():
    o = main.simple_math_function(1, 2, 3, 4, 5)
    print(o)
pytest -s
============================== test session starts ============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 1 item
tests/test_main.py 15
.
============================== 1 passed in 0.01s ===============================
I usually use the -k option to point to a specific test; it selects tests whose names match the given expression. This is convenient when you already have many tests and you want to work on just one. Let's run the tests again, limited to only the test we are working on.
pytest -s -k simple_math_function
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 1 item
tests/test_main.py 15
.
============================== 1 passed in 0.01s ===============================
Our output is 15, and it is indeed the sum of all the arguments we passed to our function. Now we can simply replace print with assert, and we have a test that compares the function call result with the expected result we captured earlier. Our first test is complete; it will stay in place and will be executed automatically whenever we run our tests in the future.
# tests/test_main.py
import main
def test_simple_math_function():
    assert main.simple_math_function(1, 2, 3, 4, 5) == 15
pytest -s -v -k simple_math_function
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 1 item
tests/test_main.py::test_simple_math_function PASSED
============================== 1 passed in 0.02s ===============================
Note the -v option, which gives more verbose output. Let's make one more function and test.
# main.py
def simple_hello(name):
    return f"Hello dear {name}!"
# tests/test_main.py
import main
def test_simple_hello():
    print(main.simple_hello("Guest"))
pytest -sv -k simple_hello
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 2 items / 1 deselected / 1 selected
tests/test_main.py::test_simple_hello Hello dear Guest!
PASSED
========================== 1 passed, 1 deselected in 0.02s =======================
Again we change the print to an assert, add the expected result, and run the test again.
# tests/test_main.py
import main
def test_simple_hello():
    assert main.simple_hello("Guest") == "Hello dear Guest!"
pytest -sv -k simple_hello
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 2 items / 1 deselected / 1 selected
tests/test_main.py::test_simple_hello PASSED
========================= 1 passed, 1 deselected in 0.03s =======================
As you can see, the effort is comparable to typical testing with print, but for a little more work we get unit tests that will remain after we remove the print statements. This is a huge benefit for the future and for anyone else who will work with our code.
Practice Makes Perfect
Let's develop something more practical from the networking world. We will use netmiko to get the software version from a device, and we will develop it through tests.
# main.py
from netmiko import ConnectHandler
def get_running_version(driver, host, username="admin", password="admin"):
    with ConnectHandler(
        device_type=driver,
        host=host,
        username=username,
        password=password
    ) as device:
        version = device.send_command("show version", use_textfsm=True)
    return version
# tests/test_main.py
import main
def test_get_running_version():
    version = main.get_running_version("cisco_ios", "10.1.1.1")
    print(version)
Let’s run to see what we get from the device.
pytest -sv -k get_running_version
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 3 items / 2 deselected / 1 selected
tests/test_main.py::test_get_running_version [{'version': '15.7(3)M5', 'rommon': 'System', 'hostname': 'LANRTR01', 'uptime': '1 year, 42 weeks, 4 days, 1 hour, 18 minutes', 'uptime_years': '1', 'uptime_weeks': '42', 'uptime_days': '4', 'uptime_hours': '1', 'uptime_minutes': '18', 'reload_reason': 'Reload Command', 'running_image': 'c2951-universalk9-mz.SPA.157-3.M5.bin', 'hardware': ['CISCO2951/K9'], 'serial': ['FGL2014508V'], 'config_register': '0x2102', 'mac': [], 'restarted': '10:48:48 GMT Fri Mar 6 2020'}]
PASSED
======================== 1 passed, 2 deselected in 6.01s =========================
We need index 0 and the version key. We modify the return statement in our function in main.py and run the test again.
# main.py
def get_running_version(driver, host, username="admin", password="admin"):
    with ConnectHandler(
        device_type=driver,
        host=host,
        username=username,
        password=password
    ) as device:
        version = device.send_command("show version", use_textfsm=True)
    return version[0]["version"]
pytest -sv -k get_running_version
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 3 items / 2 deselected / 1 selected
tests/test_main.py::test_get_running_version 15.7(3)M5
PASSED
========================= 1 passed, 2 deselected in 9.02s =======================
Now we can modify our test: remove the print, add an assert with the returned value as the expected value, and run the test again.
# tests/test_main.py
import main
def test_get_running_version():
    version = main.get_running_version("cisco_ios", "10.1.1.1")
    assert version == "15.7(3)M5"
pytest -sv -k get_running_version
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 3 items / 2 deselected / 1 selected
tests/test_main.py::test_get_running_version PASSED
======================== 1 passed, 2 deselected in 8.01s =========================
Our test works fine, but it takes 8 seconds to complete because we still connect to the real device. We need to mock the netmiko output. In tests/conftest.py, we create a FakeDevice class in which we override netmiko's send_command method, which we use to get the structured output of show version, and return the same output we previously collected from the device with print. Because we call ConnectHandler as a context manager, we also need to implement the __enter__ and __exit__ methods. Next we create a mock_netmiko fixture, in which we use pytest's monkeypatch to patch ConnectHandler in our main.py module. We use this fixture as an argument in our test function; fixtures defined in conftest.py are discovered by pytest automatically, so the test file does not need to import anything extra. You can read more on how to mock/monkeypatch in the pytest documentation.
# tests/conftest.py
import pytest
import main
class FakeDevice:
    def __init__(self, **kwargs):
        pass

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        pass

    def send_command(self, *args, **kwargs):
        return [
            {
                'version': '15.7(3)M5',
                'rommon': 'System',
                'hostname': 'LANRTR01',
                'uptime': '1 year, 42 weeks, 4 days, 1 hour, 18 minutes',
                'uptime_years': '1',
                'uptime_weeks': '42',
                'uptime_days': '4',
                'uptime_hours': '1',
                'uptime_minutes': '18',
                'reload_reason': 'Reload Command',
                'running_image': 'c2951-universalk9-mz.SPA.157-3.M5.bin',
                'hardware': ['CISCO2951/K9'],
                'serial': ['FGL2014508V'],
                'config_register': '0x2102',
                'mac': [],
                'restarted': '10:48:48 GMT Fri Mar 6 2020'
            }
        ]


@pytest.fixture()
def mock_netmiko(monkeypatch):
    """Mock netmiko."""
    monkeypatch.setattr(main, "ConnectHandler", FakeDevice)
# tests/test_main.py
import main
def test_get_running_version(mock_netmiko):
    version = main.get_running_version("cisco_ios", "10.1.1.1")
    assert version == "15.7(3)M5"
We run the test again.
pytest -sv -k get_running_version
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 3 items / 2 deselected / 1 selected
tests/test_main.py::test_get_running_version PASSED
=========================== 1 passed, 2 deselected in 0.02s =====================
This time it took only 0.02 seconds to execute the test, because we used the mock and no longer connected to the device.
More on Developing Tests
Check out Netmiko Sandbox, where you can get more practice with structured command output from multiple vendor devices—all available as code, so you don’t even have to run any device! You can also easily collect command outputs for your mocks.
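As a rough sketch of that idea, the FakeDevice could read a previously collected, TextFSM-parsed output from disk instead of hard-coding the dictionary. This is only a sketch under the assumption that you saved the parsed output to tests/fixtures/show_version.json; that path is not something defined earlier in this post.
# tests/conftest.py (sketch; assumes the parsed output was saved to tests/fixtures/show_version.json)
import json
from pathlib import Path


class FakeDevice:
    def __init__(self, **kwargs):
        pass

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        pass

    def send_command(self, *args, **kwargs):
        # Load the recorded "show version" structure instead of hard-coding it
        fixture = Path(__file__).parent / "fixtures" / "show_version.json"
        return json.loads(fixture.read_text())
The mock_netmiko fixture stays exactly the same; only the source of the canned data changes.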
Also check out Adam's awesome series of blog posts on pytest in the networking world, where Adam shares practical fundamentals of testing: Part 1, Part 2, Part 3. Pay attention to test parametrization and consider how we could extend our first two tests with more parameters (see the sketch below).
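To give a flavour of that, here is a minimal sketch of parametrizing our first test. It is only an illustration; the extra argument tuples and expected sums below are made-up values, not outputs captured earlier in this post.
# tests/test_main.py (sketch of parametrization)
import pytest

import main


@pytest.mark.parametrize(
    "args, expected",
    [
        ((1, 2, 3, 4, 5), 15),  # the case we captured earlier
        ((), 0),                # no arguments at all
        ((10, -10), 0),         # negative numbers
        ((2.5, 2.5), 5.0),      # floats work too
    ],
)
def test_simple_math_function(args, expected):
    assert main.simple_math_function(*args) == expected
One test function now runs as four test cases, and adding another input set is a single extra line.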
Conclusion
It may seem like Test-Driven Development, but is it really TDD? Well, TDD principles say that a test is written first, before the actual code that makes the test pass. In this approach, code and tests are developed in parallel, so formally it does not strictly follow TDD principles. I would place it somewhere between TDD and the typical flow of writing code first and tests afterwards.
The presented approach requires you to change how you run your code during development, but it has several significant advantages:
- tests are developed in parallel with the code, so "I will do it later" never happens
- your manual checks become input to automated tests; the manual testing work you do once can be executed automatically later
- better code quality: the code you develop is testable, because you cannot write tests for untestable code
- higher test coverage right from the beginning, as opposed to tests written later
- greater confidence after implementing changes or fixes, as all tests can be run instantly and automatically
-Patryk