Software Testing Types

Testing your code is a key component of the Software Development Lifecycle. If you are like me and came from a networking background, you may not be aware of the sheer number of types of tests that exist. In this blog post I will give a high-level overview and some key characteristics of the most common types of tests that we use, are introducing, or plan to use in the Nautobot ecosystem.

Unit Tests

Unit tests are by far the most commonly implemented. A unit test exercises a specific section or part of code; more often than not, that “part” is a function. One example would be creating a test to ensure that a function converting MAC addresses from the format aabb.ccdd.eeff to aa:bb:cc:dd:ee:ff works as intended.

A great example of that exact unit test can be found in my prior blog here.
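
To make that concrete, here is a minimal sketch of what such a unit test might look like with pytest. The function and file names are hypothetical, not taken from the linked post.

# test_mac.py -- a minimal sketch; normalize_mac is a hypothetical function
def normalize_mac(mac):
    """Convert 'aabb.ccdd.eeff' to 'aa:bb:cc:dd:ee:ff'."""
    flat = mac.replace(".", "")
    return ":".join(flat[i:i + 2] for i in range(0, 12, 2))

def test_normalize_mac():
    assert normalize_mac("aabb.ccdd.eeff") == "aa:bb:cc:dd:ee:ff"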

Some characteristics of unit tests include:

  • Quickest of tests to run – Unit tests should be written so that they take very little time to run.
  • Provide specific feedback – Because unit tests test a small section of code, feedback is typically very precise.
  • Easy to write – Out of the many types of tests, unit tests are often the easiest to write because they deal with a small section of code.
  • Do not interact with dependencies – Unit tests should test only the piece of code they are focused on; they should not interact with a web server, database, etc.
  • Should be able to run simultaneously – Because unit tests have no real dependencies, they can, and should, be run in parallel.

Real-world unit tests can be found in the pyntc repository. Note the use of both mock and patch to ensure that these tests do not have any dependencies.

Integration Tests

Integration tests are also very common. As the name suggests, the main purpose of integration tests is to test the integration between separate modules of a given application or program. An example of an integration test can be found in Nautobot here. This function is testing the integration between the web UI and the back end to ensure that when someone logs in, the log-in is successful. Another example more related to the network world would be if the tests found here in pyntc used an actual device rather than a “mock”. You could then call these integration tests since they have a dependency (the switch) and rely on it for their tests.
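
To make the contrast concrete, here is a hedged sketch of what such an integration-style test might look like if it talked to a real switch through netmiko instead of a mock. The device type, address, and credentials are placeholders, and the assertion depends on a live device responding.

# test_integration.py -- illustrative sketch only; device details are placeholders
from netmiko import ConnectHandler

def test_show_version_on_real_device():
    # Connect to an actual switch (the dependency) rather than a mock
    with ConnectHandler(device_type="cisco_ios", host="10.0.0.1",
                        username="admin", password="admin") as device:
        output = device.send_command("show version")
    assert "Cisco IOS" in output  # relies on the real switch answering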

Some characteristics of integration tests include the following:

  • Typically use the real dependency – Integration tests more often than not run against an actual dependency, such as a database, switch, or web server.
  • Difficult to write – Compared to unit tests, integration tests can be much harder to write, as you now have to account for interactions between modules.
  • Can be time-consuming – Because integration tests typically use real dependencies, the tests take longer to run. You may have to wait for an API call to return data or for an HTTP server to start.
  • May not be able to be run in parallel – Because integration tests often depend on other modules or code, they are typically run in succession rather than in parallel.

Regression Tests

Regression testing is more of a methodology than a specific test encompassing a particular part of a program or application. The idea is to test all parts of your code whenever a change is made, regardless of whether the change affected that part of the code. Because regression testing is more a methodology than testing a particular piece of code, both the aforementioned unit and integration tests can be considered regression tests to some extent. Let me give you an example. I recently opened a pull request to add an “Export” button to the Nautobot ChatOps project. When I created that pull request, the CI/CD pipeline ran through all of the existing unit and integration tests to ensure that the functionality of the plugin was not broken by the code I added. I also needed to add tests for the code I added, which could later be considered regression tests for the next person who wants to add a feature to the plugin.

Some characteristics of regression testing include:

  • Time-consuming – Regression testing typically means running the whole test suite even when only a small part of code may have changed.
  • Repetitive task – The same tests need to be run over and over again whenever changes to the code are made.
  • New tests for code changes – As new features or bug fixes are introduced into a project, tests need to be created to account for that.

Load Tests

The purpose of load tests is to ensure that your application can handle the number of users, connections, and interactions it will receive in a production environment. While there are currently no official load tests in the Nautobot repo, we do plan on adding them using the Python library Locust. One example test might have 100 concurrent users hit the Nautobot landing page to see how it copes; with that load test we could look at page load times and how long any interactions with the database took. If we increased those 100 users to 1,000, we could run our test again and see how Nautobot handles the heavier load.
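
As a taste of what that might look like, here is a minimal, hypothetical Locust sketch, not an actual Nautobot test; the user counts and target host are supplied when Locust is started.

# locustfile.py -- a hypothetical sketch of a Locust load test
from locust import HttpUser, between, task

class NautobotUser(HttpUser):
    wait_time = between(1, 3)  # seconds each simulated user pauses between tasks

    @task
    def landing_page(self):
        self.client.get("/")  # request the landing page; Locust records timings

Running locust -f locustfile.py and pointing it at a host would let us dial the simulated users from 100 up to 1,000 and compare the response times.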

Some characteristics of load testing include:

  • Can be, but are not necessarily, stress tests – Stress testing is typically done with the intent of reaching a point of failure. Load tests can result in a failure, but that is not part of the goal.
  • Can be difficult to account for all types of configurations – Customer X may run your application on a 2-core processor and customer Y may run it on an 8-core processor. When load testing, you need to account for the different hardware, software, and security configurations of a given machine.

User Acceptance Tests

User acceptance tests are some of the last tests performed on an application. They differ from the aforementioned tests because, while the previous tests can easily be done programmatically, user acceptance tests take more work to automate. The goal of these tests is to ensure that the created software meets the goals of the customer or end user who will be using the application; many times there can be a disconnect between what the developer creates and what the end user needs. Here at NTC we definitely take advantage of user acceptance tests quite often. If we are working in a professional services agreement, we are always getting feedback from the customer. If we are developing an open-source plugin, many hands here at NTC touch it and give feedback before it is released. While automating user acceptance tests can be more difficult than unit tests, one great library that provides a good framework is Selenium. It provides a programmatic way to interact with web browsers, which allows us to create reproducible and traceable tests to ensure we are meeting the customers’ needs.
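
As an illustration, a bare-bones Selenium sketch of one scripted acceptance check might look like the following; the URL and field name are hypothetical.

# test_login_page.py -- illustrative Selenium sketch; URL and field name are made up
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_page_has_username_field():
    driver = webdriver.Chrome()
    try:
        driver.get("https://nautobot.example.com/login/")
        # A user story might start with: "the user can see the login form"
        assert driver.find_element(By.NAME, "username")
    finally:
        driver.quit()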

When embarking on a user acceptance test journey, you may want to keep these things in mind as guidelines for developing good tests.

  • Define the scope – What exact features are you testing?
  • Constraints and assumptions – Before starting the tests, what are some assumptions and constraints? For example, are we only able to test on Windows 11 and not 10? Or maybe we can test only on a Linux system and not Windows.
  • Risks – This can include things such as incomplete testing environments and components.
  • Roles and responsibilities – Ideally you have multiple people doing user acceptance tests. You need to define what group (or individual) does what tests.
  • Create the script for your tests – Define each step a user will take for a given test and document it properly.

Conclusion

Software testing is a huge subject. I’ve only briefly introduced you to some of the tests that exist out there. Hopefully this has intrigued you enough to take a little bit of time and do some research on the many other types that exist.

-Adam



How to Write Better Python Tests for Network Programming

In this blog post, I will share with you a simple technique that helped me a lot in writing better, testable code: writing tests in parallel with developing the code.

Why Is It Worth Having Tests?

Have you heard any of these?

  • Writing tests slows down development. I will write tests when the code is ready.
  • It may still change; if I write tests now I will have to rewrite them, so I will write tests when the code is ready.

I have heard these countless times, and I have said them myself. Today I think leaving tests for later is one of the most common mistakes. It usually means that tests are not as good as they could be, or that there are no tests at all due to other priorities.

Furthermore, if you expect your code to change, that is actually a good argument to have tests. When you expect changes, you know that you will eventually have to retest. Perhaps you will have to amend your tests, but when some of them fail after a change, you get extra verification that the failure is related only to that change.

Lack of decent tests results in technical debt, and like any debt, sooner or later you will have to pay it off. That usually happens when you come back to your code after a while to change or fix something: all the time you could have spent writing tests, you will probably spend manually retesting after the change. If you still remember how you tested it before, this may be manageable; if not, you will spend even more time on it. You can even skip testing and rely on the grace of the gods that it will work well. But you may avoid all of this if you change just one thing!

How Do You Run Your Code?

python <your_file>.py

Right? OK, time for the pro tip!

What if you avoid running code directly and run it with tests instead?

Development Through Tests

When developing code, we write functions, classes, methods. And we run them to test whether they give us what we expect. Running your code for the first time is the right time to develop tests! All you need to do is just run your code with pytest instead of running it directly; capture outputs which you normally check with print(); and gradually build your tests as you develop your code.

Let’s get our hands dirty by creating some practical examples. This is our project structure:

├── main.py
└── tests
    ├── __init__.py
    └── test_main.py

Let’s create our first function in main.py, something simple.

# main.py
def simple_math_function(*args):
    """Sum arguments"""
    total = 0
    for arg in args:
        total += arg
    return total

Now we should test our function to check whether we get what we expect. But instead of running python main.py, we create a test in tests/test_main.py and we run pytest -s. Remember the -s option, as it gives all print() outputs on-screen. We use print in the test, but you can use it anywhere in your code. Now we just want to capture our print the same way we would by running python main.py and calling our function there.

# tests/test_main.py
import main

def test_simple_math_function():
    o = main.simple_math_function(1, 2, 3, 4, 5)
    print(o)

pytest -s
============================== test session starts ============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 1 item                                                                                                                                                                     

tests/test_main.py 15
.

============================== 1 passed in 0.01s ===============================

I usually use the -k option to point to a specific test. This is convenient when you already have many tests and you want to work on one of them. Let’s run the tests again, limiting them to only the test we are working on.

pytest -s -k simple_math_function
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 1 item                                                                                                                                                                     

tests/test_main.py 15
.

============================== 1 passed in 0.01s ===============================

Our output is 15, and it is indeed the sum of all the arguments we passed to our function. Now we can replace print with assert, giving us a test that compares the function call result with our previously captured expected result. Our first test is complete; it will remain in the suite and be executed automatically whenever we run our tests in the future.

# tests/test_main.py
import main

def test_simple_math_function():
    assert main.simple_math_function(1, 2, 3, 4, 5) == 15

pytest -s -v -k simple_math_function
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 1 item                                                                                                                                                                     

tests/test_main.py::test_simple_math_function PASSED

============================== 1 passed in 0.02s ===============================

Note the -v option, which gives more verbose output. Let’s make one more function and test.

# main.py
def simple_hello(name):
    return f"Hello dear {name}!"

# tests/test_main.py
import main

def test_simple_hello():
    print(main.simple_hello("Guest"))

pytest -sv -k simple_hello
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 2 items / 1 deselected / 1 selected                                                                                                                                        

tests/test_main.py::test_simple_hello Hello dear Guest!
PASSED

========================== 1 passed, 1 deselected in 0.02s =======================

Again we change print to assert, add the expected result, and run the test again.

# tests/test_main.py
import main

def test_simple_hello():
    assert main.simple_hello("Guest") == "Hello dear Guest!"

pytest -sv -k simple_hello
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 2 items / 1 deselected / 1 selected                                                                                                                                        

tests/test_main.py::test_simple_hello PASSED

========================= 1 passed, 1 deselected in 0.03s =======================

As you see, the effort is comparable to typical testing with print, but with a little more effort, we have unit tests that will remain after we remove print statements. This is a huge benefit for the future and for anyone else who will work with our code.

Practice Makes Perfect

Let’s develop something more practical from the networking world. We will use netmiko to get the software version from a device, and we will develop it through tests.

# main.py
from netmiko import ConnectHandler

def get_running_version(driver, host, username="admin", password="admin"):
    with ConnectHandler(
        device_type=driver,
        host=host,
        username=username,
        password=password
    ) as device:
        version = device.send_command("show version", use_textfsm=True)
    return version

# tests/test_main.py
import main

def test_get_running_version():
    version = main.get_running_version("cisco_ios", "10.1.1.1")
    print(version)

Let’s run it to see what we get from the device.

pytest -sv -k get_running_version
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 3 items / 2 deselected / 1 selected                                                                                                                                        

tests/test_main.py::test_get_running_version [{'version': '15.7(3)M5', 'rommon': 'System', 'hostname': 'LANRTR01', 'uptime': '1 year, 42 weeks, 4 days, 1 hour, 18 minutes', 'uptime
_years': '1', 'uptime_weeks': '42', 'uptime_days': '4', 'uptime_hours': '1', 'uptime_minutes': '18', 'reload_reason': 'Reload Command', 'running_image': 'c2951-universalk9-mz.SPA.157
-3.M5.bin', 'hardware': ['CISCO2951/K9'], 'serial': ['FGL2014508V'], 'config_register': '0x2102', 'mac': [], 'restarted': '10:48:48 GMT Fri Mar 6 2020'}]
PASSED

======================== 1 passed, 2 deselected in 6.01s =========================

We need index 0 and the version key, so we modify the return statement of our function in main.py and run the test again.

# main.py
def get_running_version(driver, host, username="admin", password="admin"):
    with ConnectHandler(
        device_type=driver,
        host=host,
        username=username,
        password=password
    ) as device:
        version = device.send_command("show version", use_textfsm=True)
    return version[0]["version"]

pytest -sv -k get_running_version
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 3 items / 2 deselected / 1 selected                                                                                                                                        

tests/test_main.py::test_get_running_version 15.7(3)M5
PASSED

========================= 1 passed, 2 deselected in 9.02s =======================

Now we can modify our test: remove the print, add an assert with the returned value as the expected result, and run the test again.

# tests/test_main.py
import main

def test_get_running_version():
    version = main.get_running_version("cisco_ios", "10.1.1.1")
    assert version == "15.7(3)M5"

pytest -sv -k get_running_version
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 3 items / 2 deselected / 1 selected                                                                                                                                        

tests/test_main.py::test_get_running_version PASSED

======================== 1 passed, 2 deselected in 8.01s =========================

Our test works fine, but it takes 8 seconds to complete because we still connect to the real device. We need to mock the netmiko output. In tests/conftest.py we create a FakeDevice class that overrides the netmiko send_command method (which we use to get the structured output of show version) and returns the same output we previously collected from the device with print. Because we call ConnectHandler with a context manager, we also need to implement the __enter__ and __exit__ methods. Next we create a mock_netmiko fixture that uses the pytest monkeypatch fixture to patch ConnectHandler in our main module, and we use this fixture as an argument in our test function. You can read more on how to mock/monkeypatch in the pytest documentation.

# tests/conftest.py
import pytest
import main


class FakeDevice:
    def __init__(self, **kwargs):
        pass

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        pass

    def send_command(self, *args, **kwargs):
        return [
            {
                'version': '15.7(3)M5',
                'rommon': 'System',
                'hostname': 'LANRTR01',
                'uptime': '1 year, 42 weeks, 4 days, 1 hour, 18 minutes',
                'uptime_years': '1',
                'uptime_weeks': '42',
                'uptime_days': '4',
                'uptime_hours': '1',
                'uptime_minutes': '18',
                'reload_reason': 'Reload Command',
                'running_image': 'c2951-universalk9-mz.SPA.157-3.M5.bin',
                'hardware': ['CISCO2951/K9'],
                'serial': ['FGL2014508V'],
                'config_register': '0x2102',
                'mac': [],
                'restarted': '10:48:48 GMT Fri Mar 6 2020'
            }
        ]


@pytest.fixture()
def mock_netmiko(monkeypatch):
    """Mock netmiko."""
    monkeypatch.setattr(main, "ConnectHandler", FakeDevice)

# tests/test_main.py
import main

def test_get_running_version(mock_netmiko):
    version = main.get_running_version("cisco_ios", "10.1.1.1")
    assert version == "15.7(3)M5"

We run the test again.

pytest -sv -k get_running_version
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 3 items / 2 deselected / 1 selected                                                                                                                                        

tests/test_main.py::test_get_running_version PASSED

=========================== 1 passed, 2 deselected in 0.02s =====================

This time it took only 0.02 seconds to execute the test, because we used the mock and no longer connected to the device.

More on Developing Tests

Check out Netmiko Sandbox, where you can get more practice with structured command output from multiple vendor devices—all available as code, so you don’t even have to run any device! You can also easily collect command outputs for your mocks.

Also check out Adam’s awesome series of blog posts on pytest in the networking world, where Adam shares practical fundamentals of testing: Part 1, Part 2, and Part 3. Pay attention to test parametrization, and consider how we could extend our first two tests with more parameters.
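
For example, a hedged sketch of how our first test could be parametrized might look like this; the extra argument sets are arbitrary additions of mine.

# tests/test_main.py -- a sketch of parametrizing our first test
import pytest
import main

@pytest.mark.parametrize(
    "args, expected",
    [
        ((1, 2, 3, 4, 5), 15),  # the case we captured earlier
        ((10, -10), 0),         # hypothetical extra case
        ((), 0),                # no arguments at all
    ],
)
def test_simple_math_function(args, expected):
    assert main.simple_math_function(*args) == expected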


Conclusion

It may seem like Test Driven Development, but is it really TDD? Well, TDD principles say that a test is developed first, before the actual code that makes the test pass. In this approach, code and tests are developed in parallel, so formally it doesn’t strictly follow TDD principles. I would put it somewhere between TDD and the typical code-first development followed by tests.

The presented approach to testing requires you to change your habits of how you run your code during development, but it has several significant advantages:

  • Tests are developed in parallel with code; “I will do it later” is avoided.
  • Manual tests become input to automated tests; manual testing work done once can be executed automatically later.
  • Better code quality: the developed code is testable, since you cannot write tests for untestable code.
  • Increased test coverage right from the beginning, as opposed to tests developed later.
  • Greater confidence after implementing changes/fixes, as all tests can be run instantly and automatically.

-Patryk




Pytest in the Networking World – Part 3

This blog post will be the final entry in my “Pytest in the Networking World” blog series. Part 1 can be found here and Part 2 can be found here. I recommend reading through both so you have the basic knowledge of how pytest works and some insight into parameterization. In this post, I will be going over various aspects of mocking.

Why Do We Need Mocking?

In the same vein as the last blog, let’s say we wrote a function using netmiko that queries our Cisco 3650 for its MAC address table so that we can pass that information to our normalize_mac_address function. Sometimes this call completes in under a second; other times it may take a few seconds. When we write tests for our code, do we want to wait possibly several seconds for our call to the 3650 to complete? We definitely don’t. Unit tests should be fast so that developers are more likely to utilize them. Is there a way we could fake the API call? There is, and it’s called mocking! When we mock a function, we make a “dummy” copy of that function where we can control its logic and what it returns. Let’s walk through an example of how we’d take advantage of mocking.

Updating Our Files

At the end of the last blog, our Python file contained the following:

import pytest
from mac_address import normalize_mac_address

NON_NORMALIZED_MACS = ["aabb.ccdd.eeff", "0011.2233.4455", "aa11.bb22.cc33"]
NORMALIZED_MACS = ["aa:bb:cc:dd:ee:ff", "00:11:22:33:44:55", "aa:11:bb:22:cc:33"]

@pytest.mark.parametrize("param_non_normalized_mac, param_normalized_mac", list(zip(NON_NORMALIZED_MACS, NORMALIZED_MACS)))
def test_normalize_mac_address_lists(param_non_normalized_mac, param_normalized_mac):
    assert normalize_mac_address(param_non_normalized_mac) == param_normalized_mac

Using parameterization, we were able to test our normalize_mac_address function with multiple MAC addresses while ensuring each set of data was treated as its own test. Now let’s add a function that connects to a Cisco 3650 and gets its MAC address table, in the same file as the normalize_mac_address function. (Later we’ll also add a function that gets the MAC address table and normalizes it in one go.) Our file should now look like this:

from netmiko import ConnectHandler
from datetime import datetime

def normalize_mac_address(mac):
    if mac.count(".") == 2:
        mac = f"{mac[0:2]}:{mac[2:4]}:{mac[5:7]}:{mac[7:9]}:{mac[10:12]}:{mac[12:14]}"
    return mac

def get_mac_address_table():
    start_time = datetime.now()

    mydevice = {
        "device_type": "cisco_ios",
        "host": "YOUR DEVICE IP",
        "username": "YOUR DEVICE USERNAME",
        "password": "YOUR PASSWORD FOR THE CONNECTING USER"
    }
    command = "show mac address-table"

    net_connect = ConnectHandler(**mydevice)
    output = net_connect.send_command(command, use_textfsm=True)
    net_connect.disconnect()

    print(f"\nTime to run: {datetime.now() - start_time}")
    return output

print(get_mac_address_table())

For information on Netmiko and TextFSM, check here.

I included some code so we could time how long it takes to run this function. Let’s run it now.

[{'destination_address': '0100.0ccc.cccc', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0100.0ccc.cccd', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.0000', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.0001', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.0002', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.0003', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.0004', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.0005', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.0006', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.0007', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.0008', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.0009', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.000a', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.000b', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.000c', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.000d', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.000e', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.000f', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.0010', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0180.c200.0021', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': 'ffff.ffff.ffff', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, {'destination_address': '0014.1c57.a488', 'type': 'DYNAMIC', 'vlan': '300', 'destination_port': 'Gi1/0/1'}, {'destination_address': '00c8.8bca.55c7', 'type': 'STATIC', 'vlan': '300', 'destination_port': 'Vl300'}, {'destination_address': '0014.1c57.a488', 'type': 'DYNAMIC', 'vlan': '400', 'destination_port': 'Gi1/0/1'}, {'destination_address': '00c8.8bca.55d2', 'type': 'STATIC', 'vlan': '400', 'destination_port': 'Vl400'}, {'destination_address': '0014.1c57.a488', 'type': 'DYNAMIC', 'vlan': '450', 'destination_port': 'Gi1/0/1'}, {'destination_address': '00c8.8bca.55fd', 'type': 'STATIC', 'vlan': '450', 'destination_port': 'Vl450'}, {'destination_address': '0014.1c57.a488', 'type': 'DYNAMIC', 'vlan': '500', 'destination_port': 'Gi1/0/1'}, {'destination_address': '00c8.8bca.55d0', 'type': 'STATIC', 'vlan': '500', 'destination_port': 'Vl500'}]

Time to run: 0:00:04.369203

This gets us a list of dictionaries, each containing the destination address, type, VLAN, and destination port for an entry in the table. On the last line, you can see it took 4.4 seconds to connect to the device, run the command, parse the output, and present it back to us. Next we’ll need to update our normalize_mac_address function to account for the new data structure. Let’s also add a function that gets the MAC address table and normalizes it all in one call.

from netmiko import ConnectHandler
from datetime import datetime

def normalize_mac_address(macs):
    for entry in macs:
        if entry["destination_address"].count(".") == 2:
            new_mac = f"{entry['destination_address'][0:2]}:{entry['destination_address'][2:4]}:{entry['destination_address'][5:7]}:{entry['destination_address'][7:9]}:{entry['destination_address'][10:12]}:{entry['destination_address'][12:14]}"
            entry['destination_address'] = new_mac
    return macs

def get_mac_address_table():
    start_time = datetime.now()

    mydevice = {
        "device_type": "cisco_ios",
        "host": "YOUR DEVICE IP",
        "username": "YOUR DEVICE USERNAME",
        "password": "YOUR PASSWORD FOR THE CONNECTING USER"
    }
    command = "show mac address-table"

    net_connect = ConnectHandler(**mydevice)
    output = net_connect.send_command(command, use_textfsm=True)
    net_connect.disconnect()

    print(f"\nTime to run: {datetime.now() - start_time}\n")
    return output


def get_mac_table_and_normalize():
    macs = get_mac_address_table()
    normalized_macs = normalize_mac_address(macs)
    return normalized_macs

print(get_mac_table_and_normalize())

If you run this now, you’ll see that the MAC addresses in the dictionary have been normalized.

[{'destination_address': '0100.0ccc.cccc', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, ...,
 {'destination_address': '00c8.8bca.55d0', 'type': 'STATIC', 'vlan': '500', 'destination_port': 'Vl500'}]

Time to run: 0:00:04.849528

[{'destination_address': '01:00:0c:cc:cc:cc', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}, ...,
{'destination_address': '00:c8:8b:ca:55:d0', 'type': 'STATIC', 'vlan': '500', 'destination_port': 'Vl500'}]

Middle entries have been taken out for brevity.

Great! Our new function works as expected. Now we’ll turn toward updating our tests file. Currently it looks like:

import pytest
from mac_address import normalize_mac_address

NON_NORMALIZED_MACS = ["aabb.ccdd.eeff", "0011.2233.4455", "aa11.bb22.cc33"]
NORMALIZED_MACS = ["aa:bb:cc:dd:ee:ff", "00:11:22:33:44:55", "aa:11:bb:22:cc:33"]

@pytest.mark.parametrize("param_non_normalized_mac, param_normalized_mac", list(zip(NON_NORMALIZED_MACS, NORMALIZED_MACS)), ids=[x for x in NON_NORMALIZED_MACS])
def test_normalize_mac_address_lists(param_non_normalized_mac, param_normalized_mac):
    assert normalize_mac_address(param_non_normalized_mac) == param_normalized_mac

This test will currently fail, as we have changed the normalize_mac_address function. For the sake of staying on topic, we’ll comment out that test and focus on testing out get_mac_table_and_normalize. Here is our file with a basic test written for our new function and our old test commented out.

import pytest
from mac_address import normalize_mac_address, get_mac_table_and_normalize

#NON_NORMALIZED_MACS = ["aabb.ccdd.eeff", "0011.2233.4455", "aa11.bb22.cc33"]
#NORMALIZED_MACS = ["aa:bb:cc:dd:ee:ff", "00:11:22:33:44:55", "aa:11:bb:22:cc:33"]
#
#@pytest.mark.parametrize("param_non_normalized_mac, param_normalized_mac", list(zip(NON_NORMALIZED_MACS, NORMALIZED_MACS)), ids=[x for x in NON_NORMALIZED_MACS])
#def test_normalize_mac_address_lists(param_non_normalized_mac, param_normalized_mac):
#    assert normalize_mac_address(param_non_normalized_mac) == param_normalized_mac

def test_get_mac_table_and_normalize():
    macs = get_mac_table_and_normalize()
    assert True

Let’s run our test and look at the output.

============================================================= test session starts ==============================================================================
platform linux -- Python 3.8.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /home/adam/repo/Sandbox/blog_pytest
collected 1 item

test_mac_address.py .                                                                                                                                      [100%]

============================================================== 1 passed in 8.95s ================================================================================

The test passes (because we are currently just asserting True), but you’ll notice that on the last line it states that our test passed in 8.95 seconds. We know that connecting to the device and retrieving the MAC address table takes roughly 4 seconds; note that the device is actually contacted twice here, once by the module-level print(get_mac_table_and_normalize()) call when pytest imports mac_address and once by the test itself, which is why the total is roughly double that. Let’s jump into how we’d mock this function.

Mocking

Let me update our test to leverage mocking, and then I’ll go over it.

import pytest
import re
from mac_address import normalize_mac_address, get_mac_table_and_normalize
from unittest.mock import patch

@patch('mac_address.get_mac_address_table')
def test_get_mac_table_and_normalize(mock_get_mac_address_table):
    mock_get_mac_address_table.return_value = [{'destination_address': '0100.0ccc.cccc', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}]
    macs = get_mac_table_and_normalize()
    normalized_mac = macs[0].pop("destination_address")

    assert re.match(r'([0-9a-fA-F]{2}:){5}([0-9a-fA-F]{2})', normalized_mac)
    mock_get_mac_address_table.assert_called_once()

Commented test left out for brevity’s sake.

The first thing you may notice is that there are two new imports, import re and from unittest.mock import patch. The re library is not necessary for mocking; I imported it for use in an assert statement that I’ll go over shortly. The unittest.mock import is necessary: it gives us access to the @patch() decorator that you see implemented on line 6, which is what actually does the mocking. To use the decorator, you pass in the object that you want to mock using the syntax @patch('module.function_a') or @patch('module.ClassA'). In this case, when testing get_mac_table_and_normalize, I want to mock the get_mac_address_table function that gets called inside it. I implement my decorator using @patch('mac_address.get_mac_address_table'), and in my test function definition I add the argument mock_get_mac_address_table. When you patch an object, the mock gets passed into the decorated function as an argument, so you need to account for that. I typically name the argument with mock_ prepended to the name of the mocked function.

On line 8, I define what I want the return value to be when my mocked function is called. I can make this whatever I want, but you typically would make this similar to what you would expect from the original function. Line 9, I call our get_mac_table_and_normalize function. Line 10, I get the value for the destination_address key from the first dictionary in the returned list from calling get_mac_table_and_normalize and store it in a variable. In lines 12 and 13, I have two assertions taking place. The first ensures that the value I popped from our returned list conforms to our normalized MAC address format by leveraging regex. The second asserts that our mock function was called only one time. Let’s run our newly written test and check out the results.

=========================================================== test session starts =======================================================================================
platform linux -- Python 3.8.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /home/adam/.virtualenvs/blog/bin/python3
cachedir: .pytest_cache
rootdir: /home/adam/repo/Sandbox/blog_pytest
collected 1 item

test_mac_address.py::test_get_mac_table_and_normalize PASSED                                                                                                      [100%]

========================================================== 1 passed in 4.59s ===========================================================================================

You’ll notice our test passes and that it now took 4.59 seconds to complete. That is what we expected: the test’s own call to the network device, which took roughly 4 seconds, is now mocked away, while the module-level call in mac_address still runs at import time. To provide another example, let’s say we wanted to mock the normalize_mac_address function. How would we go about that? We can follow the same process we did for the get_mac_address_table function.

import pytest
import re
from mac_address import normalize_mac_address, get_mac_table_and_normalize
from unittest.mock import patch

@patch('mac_address.get_mac_address_table')
@patch('mac_address.normalize_mac_address')
def test_get_mac_table_and_normalize(mock_normalize_mac_address, mock_get_mac_address_table):
    mock_get_mac_address_table.return_value = [{'destination_address': '0100.0ccc.cccc', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}]
    mock_normalize_mac_address.return_value = [{'destination_address': '01:00:0c:cc:cc:cc', 'type': 'STATIC', 'vlan': 'All', 'destination_port': 'CPU'}]
    macs = get_mac_table_and_normalize()
    normalized_mac = macs[0].pop("destination_address")

    assert re.match(r'([0-9a-fA-F]{2}:){5}([0-9a-fA-F]{2})', normalized_mac)
    mock_get_mac_address_table.assert_called_once()

We are now mocking both functions that are called when we call get_mac_table_and_normalize. We added a second patch decorator with mac_address.normalize_mac_address as its argument, and we had to add another argument to our test function definition. When mocking more than one object, the lowest patch decorator maps to the leftmost argument in the function definition. If we run this, we get the following output:

======================================================== test session starts =============================================================================================
platform linux -- Python 3.8.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /home/adam/repo/Sandbox/blog_pytest
collected 1 item

test_mac_address.py .                                                                                                                                                [100%]

======================================================== 1 passed in 4.45s ================================================================================================

Our test again passes. This time we have only a slight decrease in test time to 4.45 seconds. This was expected as the normalize_mac_address function is not doing anything extensive, so mocking it didn’t yield much decrease in computation time.

To prove the test is using the mocked return values, you can change the line 10 destination address to something that doesn’t conform to our normalized MAC format and the test will fail.

Mocking Tips

  • There are different implementations of mocking in pytest. I chose the above method because I felt it was the easiest to see and wrap your head around. To see the other implementations of mocking, you can check out the documentation here.
  • When referring to the object you want to mock in the decorator argument, you need to think about mocking the object where it is looked up rather than where it may be defined. This documentation goes more in-depth.
  • You can do a wide variety of things with a mocked object. You can manipulate it to return whatever you want (as shown in our example) or have it return dynamic results using side effects (see the short sketch after this list). More information on that and other ways you can manipulate mock functions can be found here.
  • A multitude of assertions can be made on mocked functions. The documentation here goes over them.
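
To illustrate the side-effect tip above, here is a minimal standalone sketch (not part of our test file). Assigning an iterable to side_effect makes the mock return successive values, and an exception in the sequence is raised when its turn comes, which is handy for simulating a flaky device.

# side_effect_demo.py -- a minimal, hypothetical illustration of side_effect
from unittest.mock import MagicMock

fake_get_table = MagicMock()
fake_get_table.side_effect = [
    [],                                           # first call: empty table
    [{"destination_address": "aabb.ccdd.eeff"}],  # second call: one entry
    ConnectionError("device unreachable"),        # third call: raised, not returned
]

print(fake_get_table())  # []
print(fake_get_table())  # [{'destination_address': 'aabb.ccdd.eeff'}]
# a third call would raise ConnectionError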

Conclusion

This blog is just the tip of the iceberg when it comes to mocking. It’s a pretty extensive feature and can get very complicated very quickly. While this application of mocking may not be the most practical, hopefully these examples were easy enough to follow along with and you were able to gather what mocking is and one of its applications. If you are interested in looking into more robust applications, you can check out the tests for our pyntc repository here. If you have any questions or want to discuss any tests we at NTC may have written that you’d like clarification on, definitely come by and reach out to us on our NTC community Slack.

-Adam


