Developing Nautobot Plugins – Part 5


This is part 5 of the tutorial series on writing Nautobot plugins. Nautobot plugins are a way to extend the base functionality of Nautobot. Plugins can extend the database schema, add custom pages, and even update existing pages within Nautobot; the possibilities are nearly endless. In this blog series, we are developing a plugin for modeling and managing DNS zone data within Nautobot. In the previous posts, we covered setting up a development environment (part 1) and creating models (part 2), views (part 3), and filters (part 4).

In this post, we will create REST API endpoints for our plugin models, and we’ll also integrate them with GraphQL.

Source code for this part of the tutorial can be found here.

Adding REST API endpoints

To implement a REST API for the models used by our plugin, we need to extend the file structure of our plugin by adding an api directory under our plugin files.

$ tree api
.
├── __init__.py
├── nested_serializers.py
├── serializers.py
├── urls.py
└── views.py

We start from the bottom by defining serializers and nested serializers in their respective files, and then at the top we will chain them all together in urls and views.

Serializers

Serializers allow complex data such as querysets and model instances to be converted to native Python datatypes that can then be easily rendered into JSON, XML, or other content types.

We need one API endpoint for each of our models. Some of our models are related, and for related models we will use nested serializers to populate related model fields (attributes). Let’s get started with DnsZoneModelSerializer.

# api/serializers.py
from rest_framework import serializers

from nautobot.extras.api.serializers import NautobotModelSerializer

from ..models import DnsZoneModel, ARecordModel, CNameRecordModel


class DnsZoneModelSerializer(NautobotModelSerializer):
    url = serializers.HyperlinkedIdentityField(
        view_name="plugins-api:nautobot_example_dns_manager-api:dnszonemodel-detail"
    )

    class Meta:
        model = DnsZoneModel
        fields = "__all__"

Our serializer class inherits from NautobotModelSerializer. Under the Meta class, we define the model and the fields that we want to include in our API endpoint. You may use the fields attribute to limit the model fields, but in this tutorial we expose them all. We also add a new attribute, url, which is a hyperlink to the record that points to the detailed view of the model.
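
If you ever want to limit the exposed fields instead of using "__all__", a minimal sketch could look like this (not used in this tutorial; the field names simply follow our DnsZoneModel from part 2):

# api/serializers.py (sketch only)
class DnsZoneModelSerializer(NautobotModelSerializer):
    url = serializers.HyperlinkedIdentityField(
        view_name="plugins-api:nautobot_example_dns_manager-api:dnszonemodel-detail"
    )

    class Meta:
        model = DnsZoneModel
        # Expose only a subset of the model fields instead of "__all__"
        fields = ["id", "url", "name", "slug", "ttl"]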

Next, we define a serializer for ARecordModel.

# api/serializers.py
from rest_framework import serializers

from nautobot.extras.api.serializers import NautobotModelSerializer
from nautobot.ipam.api.nested_serializers import NestedIPAddressSerializer

from ..models import DnsZoneModel, ARecordModel, CNameRecordModel
from .nested_serializers import NestedDnsZoneModelSerializer


class ARecordModelSerializer(NautobotModelSerializer):
    url = serializers.HyperlinkedIdentityField(
        view_name="plugins-api:nautobot_example_dns_manager-api:arecordmodel-detail"
    )
    zone = NestedDnsZoneModelSerializer()
    address = NestedIPAddressSerializer()

    class Meta:
        model = ARecordModel
        fields = "__all__"

The Meta class is almost identical to the one in DnsZoneModelSerializer. There is also a url attribute, plus fields for the related models, DnsZoneModel and IPAddress. We want to include the data fields of related records in our API endpoint, so we need nested serializers that will serialize the related models. If we don’t create nested serializers for the zone and address fields, the endpoint will display only the related object’s UUID. NestedIPAddressSerializer is already defined by Nautobot, so we can import it and attach it to the address attribute; NestedDnsZoneModelSerializer we need to implement ourselves.

# api/nested_serializers.py
from rest_framework import serializers

from nautobot.core.api import WritableNestedSerializer

from ..models import DnsZoneModel

class NestedDnsZoneModelSerializer(WritableNestedSerializer):
    url = serializers.HyperlinkedIdentityField(
        view_name="plugins-api:nautobot_example_dns_manager-api:dnszonemodel-detail"
    )

    class Meta:
        """Meta attributes."""

        model = DnsZoneModel
        fields = "__all__"

The implementation is almost exactly the same as for regular serializers, but nested serializers inherit from a different parent class. Now we can attach our nested serializer for DnsZoneModel to the zone attribute in the ARecordModelSerializer model serializer class.

And the last one is CNameRecordModelSerializer:

# api/serializers.py
class CNameRecordModelSerializer(NautobotModelSerializer):
    url = serializers.HyperlinkedIdentityField(
        view_name="plugins-api:nautobot_example_dns_manager-api:cnamerecordmodel-detail"
    )
    zone = NestedDnsZoneModelSerializer()

    class Meta:
        model = CNameRecordModel
        fields = "__all__"

It is also related to DnsZoneModel, so we use the previously defined nested serializer for the zone attribute.

We are done with serializers; let’s move on to views and URLs.

Views and URLs

We already covered views and URLs earlier in this series (part 3). REST API endpoints use the same concept. The only difference is that we import the parent classes from their respective api modules.

# api/views.py
from nautobot.extras.api.views import NautobotModelViewSet
from nautobot_example_dns_manager.models import DnsZoneModel, ARecordModel, CNameRecordModel

from .. import filters
from . import serializers


class DnsZoneModelViewSet(NautobotModelViewSet):
    queryset = DnsZoneModel.objects.all()
    serializer_class = serializers.DnsZoneModelSerializer
    filterset_class = filters.DnsZoneModelFilterSet


class ARecordModelViewSet(NautobotModelViewSet):
    queryset = ARecordModel.objects.prefetch_related("zone", "address")
    serializer_class = serializers.ARecordModelSerializer
    filterset_class = filters.ARecordModelFilterSet


class CNameRecordModelViewSet(NautobotModelViewSet):
    queryset = CNameRecordModel.objects.prefetch_related("zone")
    serializer_class = serializers.CNameRecordModelSerializer
    filterset_class = filters.CNameRecordModelFilterSet

We have three models and defined three serializers, one for each model; we also need three views. Each view has three attributes: queryset, which fetches records from the database; serializer_class, defined in the previous steps; and filterset_class, which was defined and covered in part 4. We can use these filtersets on our API endpoints to allow the same filtering as shown in part 4.

For DnsZoneModelViewSet there is a basic query to fetch all records. But for ARecordModelViewSet and CNameRecordModelViewSet, which have related models, we use .prefetch_related(<related fields>) to include the data attributes of the related models.

The last piece is URLs for REST API endpoints.

# api/urls.py
from nautobot.core.api import OrderedDefaultRouter

from . import views


router = OrderedDefaultRouter()
router.register("dnszonemodel", views.DnsZoneModelViewSet)
router.register("arecordmodel", views.ARecordModelViewSet)
router.register("cnamerecordmodel", views.CNameRecordModelViewSet)

urlpatterns = router.urls

We define three URLs (endpoints) and bind them to the views implemented above. Now we are ready to see how our newly created REST API endpoints work. Let’s have a look at the API documentation interface available under api/docs on your development Nautobot instance.

In the plugins section, we can see our newly added endpoints. They allow programmatic CRUD (Create, Read, Update, and Delete) operations.
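
As a hedged sketch of a create (POST) operation, adding a new zone could look something like the following; the payload fields simply follow our DnsZoneModel, and the API token placeholder is illustrative:

curl -X 'POST' \
  'http://localhost:8080/api/plugins/example-dns-manager/dnszonemodel/' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Token <your-api-token>' \
  -d '{
    "name": "example.org",
    "slug": "example-org",
    "mname": "dns1.example.org",
    "rname": "admin@example.org",
    "refresh": 86400,
    "retry": 7200,
    "expire": 3600000,
    "ttl": 3600
  }'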

Let’s take a look at the arecordmodel endpoint by fetching it with curl (GET).

curl -X 'GET' \
  'http://localhost:8080/api/plugins/example-dns-manager/arecordmodel/' \
  -H 'accept: application/json'

In response, we get:

{
  "count": 1,
  "next": null,
  "previous": null,
  "results": [
    {
      "id": "f59dc014-1d3c-4ecc-b0a4-faa505e3061d",
      "display": "app.example.com",
      "custom_fields": {},
      "notes_url": "http://localhost:8080/api/plugins/example-dns-manager/arecordmodel/f59dc014-1d3c-4ecc-b0a4-faa505e3061d/notes/",
      "url": "http://localhost:8080/api/plugins/example-dns-manager/arecordmodel/f59dc014-1d3c-4ecc-b0a4-faa505e3061d/",
      "zone": {
        "id": "2ceb7857-f5fe-412a-8e6a-5cbcfcfd48af",
        "display": "example.com",
        "url": "http://localhost:8080/api/plugins/example-dns-manager/dnszonemodel/2ceb7857-f5fe-412a-8e6a-5cbcfcfd48af/",
        "created": "2022-10-03",
        "last_updated": "2022-10-03T11:08:24.884393Z",
        "_custom_field_data": {},
        "name": "example.com",
        "slug": "example-com",
        "mname": "dns1.example.com",
        "rname": "admin@example.com",
        "refresh": 86400,
        "retry": 7200,
        "expire": 3600000,
        "ttl": 3600
      },
      "address": {
        "display": "1.1.1.1/32",
        "id": "7fedc46e-f79d-4671-b4e9-fd149f322250",
        "url": "http://localhost:8080/api/ipam/ip-addresses/7fedc46e-f79d-4671-b4e9-fd149f322250/",
        "family": 4,
        "address": "1.1.1.1/32"
      },
      "created": "2022-10-03",
      "last_updated": "2022-10-03T11:10:12.792646Z",
      "_custom_field_data": {},
      "slug": "app-example-com",
      "name": "app.example.com",
      "ttl": 14400
    }
  ]
}

We see all our data fields (attributes) serialized to JSON. We also have related fields nicely serialized and nested in the data structure.
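
Because each view has a filterset_class attached, the filtersets from part 4 also work as query parameters on these endpoints. For example, assuming the name filter from part 4, we could fetch only a specific A record:

curl -X 'GET' \
  'http://localhost:8080/api/plugins/example-dns-manager/arecordmodel/?name=app.example.com' \
  -H 'accept: application/json'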

GraphQL Integration

Plugins can optionally expose their models via the GraphQL API. This allows models and their relationships to be represented as a graph and queried more easily. There are two mutually exclusive ways to expose a model to the GraphQL interface.

  • By using the @extras_features decorator
  • By creating your own GraphQL type definition and registering it within graphql/types.py of your plugin (the decorator should not be used in this case)

The first option is quick and easy, as you just need to decorate your model classes in models.py with @extras_features("graphql"). But if you want to be able to use filtersets in GraphQL queries, you need option 2, where you can attach filtersets to your plugin models.
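
For reference, option 1 would be a one-line change in models.py. A minimal sketch (not used in this tutorial, since we want filterset support; the base class is whatever we defined in part 2):

# models.py (option 1 sketch only)
from nautobot.extras.utils import extras_features


@extras_features("graphql")
class DnsZoneModel(PrimaryModel):  # base class as defined in part 2
    ...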

We extend our plugin files with types.py under the graphql directory.

$ tree graphql
graphql
└── types.py

In types.py, we create a GraphQL type class for each plugin model. Nautobot uses a library called graphene-django-optimizer to decrease the time queries take to process. By inheriting from gql_optimizer.OptimizedDjangoObjectType, our type classes are automatically optimized.

# graphql/types.py
import graphene_django_optimizer as gql_optimizer

from .. import models, filters


class DnsZoneModelType(gql_optimizer.OptimizedDjangoObjectType):
    """GraphQL Type for DnsZoneModel."""

    class Meta:
        model = models.DnsZoneModel
        filterset_class = filters.DnsZoneModelFilterSet


class ARecordModelType(gql_optimizer.OptimizedDjangoObjectType):
    """GraphQL Type for ARecordModel."""

    class Meta:
        model = models.ARecordModel
        filterset_class = filters.ARecordModelFilterSet


class CNameRecordModelType(gql_optimizer.OptimizedDjangoObjectType):
    """GraphQL Type for CNameRecordModel."""

    class Meta:
        model = models.CNameRecordModel
        filterset_class = filters.CNameRecordModelFilterSet


graphql_types = [DnsZoneModelType, ARecordModelType, CNameRecordModelType]

Under each class we define the model and filterset_class attributes, where we attach our plugin models and filtersets. By default, Nautobot looks for custom GraphQL types in an iterable named graphql_types within a graphql/types.py file, so we add our classes to that list and expose them under the graphql_types variable.

Now we can execute a query to retrieve our records. As opposed to the REST API, where we need three separate requests to get all our records, in GraphQL we can get what we want in a single query. Remember, GraphQL is only for read operations, so each interface has its pros and cons. Let’s get all the records’ data.

{
  dns_zone_models {
    name
  }
  a_record_models {
    name
    zone{
      name
    }
    address {
      address
    }
  }
  c_name_record_models {
    name
    zone{
      name
    }
  }
}

Response:

{
  "data": {
    "dns_zone_models": [
      {
        "name": "example.com"
      }
    ],
    "a_record_models": [
      {
        "name": "app.example.com",
        "zone": {
          "name": "example.com"
        },
        "address": {
          "address": "1.1.1.1/32"
        }
      }
    ],
    "c_name_record_models": [
      {
        "name": "www.app.example.com",
        "zone": {
          "name": "example.com"
        }
      }
    ]
  }
}
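
Because we attached our filtersets to the GraphQL types, we can also filter inside a query. A hedged sketch, assuming the name filter from part 4 is exposed as a query argument:

{
  a_record_models(name: "app.example.com") {
    name
    zone {
      name
    }
  }
}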

Conclusion

In this blog post, we learned how to build API endpoints for models defined in our plugin and how to integrate them with GraphQL.

-Patryk




How to Write Better Python Tests for Network Programming


In this blog post, I will share with you a simple technique that helped me a lot in writing better, testable code: writing tests in parallel with developing the code.

Why Is It Worth Having Tests?

Have you heard any of these?

  • Writing tests slows down development. I will write tests when the code is ready.
  • It may still change; if I write tests now I will have to rewrite them, so I will write tests when the code is ready.

I have heard these countless times, and I have said them myself. Today I think leaving tests for later is one of the most common mistakes. It usually means that the tests are not as good as they could be, or that there are no tests at all due to other priorities. Furthermore, if you expect your code to change, that is actually a good argument for having tests: when you expect changes, you know you will eventually have to retest. Perhaps you will have to amend your tests, but when some of them fail after a change, you get extra verification that the failure is related only to that change.

Lack of decent tests results in technical debt, and like any debt, sooner or later you will have to pay it off. It usually happens when you come back to your code after a while to change or fix something; all the time you could have spent writing tests you will probably spend manually retesting your code after the change. If you still remember how you tested it before, this may be manageable; if not, you will spend even more time on it. You can even skip testing and rely on the grace of the gods that it will work well. But you can avoid all of this if you change just one thing!

How Do You Run Your Code?

python <your_file>.py, right? OK, time for the pro tip!

What if you avoid running code directly and run it with tests instead?

Development Through Tests

When developing code, we write functions, classes, and methods, and we run them to test whether they give us what we expect. Running your code for the first time is the right time to develop tests! All you need to do is run your code with pytest instead of running it directly, capture the outputs you would normally check with print(), and gradually build your tests as you develop your code.

Let’s get our hands dirty by creating some practical examples. This is our project structure:

├── main.py
└── tests
    ├── __init__.py
    └── test_main.py

Let’s create our first function in main.py, something simple.

# main.py
def simple_math_function(*args):
    """Sum arguments"""
    total = 0
    for arg in args:
        total += arg
    return total

Now we should test our function to check whether we get what we expect. But instead of running python main.py, we create a test in tests/test_main.py and run pytest -s. Remember the -s option, as it shows all print() output on screen. We use print in the test here, but you can use it anywhere in your code. For now we just want to capture our print output the same way we would by running python main.py and calling our function there.

# tests/test_main.py
import main

def test_simple_math_function():
    o = main.simple_math_function(1, 2, 3, 4, 5)
    print(o)
pytest -s
============================== test session starts ============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 1 item                                                                                                                                                                     

tests/test_main.py 15
.

============================== 1 passed in 0.01s ===============================

I usually use the -k option to point to a specific test. This is convenient when you already have many tests and you want to work on one. Let’s run the tests again, limiting them to only the test we are working on.

pytest -s -k simple_math_function
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 1 item                                                                                                                                                                     

tests/test_main.py 15
.

============================== 1 passed in 0.01s ===============================

Our output is 15, and it is indeed the sum of all the arguments we passed to our function. Now we can just replace print with assert, and we have a test that compares the function’s return value with the previously captured expected result. Our first test is complete; it will remain in place and be executed automatically whenever we run our tests in the future.

# tests/test_main.py
import main

def test_simple_math_function():
    assert main.simple_math_function(1, 2, 3, 4, 5) == 15
pytest -s -v -k simple_math_function
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 1 item                                                                                                                                                                     

tests/test_main.py::test_simple_math_function PASSED

============================== 1 passed in 0.02s ===============================

Note the -v option, which gives more verbose output. Let’s write one more function and test.

# main.py
def simple_hello(name):
    return f"Hello dear {name}!"
# tests/test_main.py
import main

def test_simple_hello():
    print(main.simple_hello("Guest"))
pytest -sv -k simple_hello
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 2 items / 1 deselected / 1 selected                                                                                                                                        

tests/test_main.py::test_simple_hello Hello dear Guest!
PASSED

========================== 1 passed, 1 deselected in 0.02s =======================

Again, we change print to assert, add the expected result, and run the test again.

# tests/test_main.py
import main

def test_simple_hello():
    assert main.simple_hello("Guest") == "Hello dear Guest!"
pytest -sv -k simple_hello
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 2 items / 1 deselected / 1 selected                                                                                                                                        

tests/test_main.py::test_simple_hello PASSED

========================= 1 passed, 1 deselected in 0.03s =======================

As you can see, the effort is comparable to typical testing with print, but for a little extra work we have unit tests that will remain after we remove the print statements. This is a huge benefit for the future and for anyone else who will work with our code.

Practice Makes Perfect

Let’s develop something more practical from the networking world. We will use netmiko to get the software version from a device, and we will develop it through tests.

# main.py
from netmiko import ConnectHandler

def get_running_version(driver, host, username="admin", password="admin"):
    with ConnectHandler(
        device_type=driver,
        host=host,
        username=username,
        password=password
    ) as device:
        version = device.send_command("show version", use_textfsm=True)
    return version
# tests/test_main.py
import main

def test_get_running_version():
    version = main.get_running_version("cisco_ios", "10.1.1.1")
    print(version)

Let’s run to see what we get from the device.

pytest -sv -k get_running_version
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 3 items / 2 deselected / 1 selected                                                                                                                                        

tests/test_main.py::test_get_running_version [{'version': '15.7(3)M5', 'rommon': 'System', 'hostname': 'LANRTR01', 'uptime': '1 year, 42 weeks, 4 days, 1 hour, 18 minutes', 'uptime
_years': '1', 'uptime_weeks': '42', 'uptime_days': '4', 'uptime_hours': '1', 'uptime_minutes': '18', 'reload_reason': 'Reload Command', 'running_image': 'c2951-universalk9-mz.SPA.157
-3.M5.bin', 'hardware': ['CISCO2951/K9'], 'serial': ['FGL2014508V'], 'config_register': '0x2102', 'mac': [], 'restarted': '10:48:48 GMT Fri Mar 6 2020'}]
PASSED

======================== 1 passed, 2 deselected in 6.01s =========================

We need index 0 and the version key, so we modify the return value of our function in main.py and run the test again.

# main.py
def get_running_version(driver, host, username="admin", password="admin"):
    with ConnectHandler(
        device_type=driver,
        host=host,
        username=username,
        password=password
    ) as device:
        version = device.send_command("show version", use_textfsm=True)
    return version[0]["version"]
pytest -sv -k get_running_version
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 3 items / 2 deselected / 1 selected                                                                                                                                        

tests/test_main.py::test_get_running_version 15.7(3)M5
PASSED

========================= 1 passed, 2 deselected in 9.02s =======================

Now we can modify our test: remove print, add assert with the returned value as the expected value, and run the test again.

# tests/test_main.py
import main

def test_get_running_version():
    version = main.get_running_version("cisco_ios", "10.1.1.1")
    assert version == "15.7(3)M5"
pytest -sv -k get_running_version
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 3 items / 2 deselected / 1 selected                                                                                                                                        

tests/test_main.py::test_get_running_version PASSED

======================== 1 passed, 2 deselected in 8.01s =========================

Our test works fine, but it takes 8 seconds to complete because we still connect to the real device. We need to mock the netmiko output. In tests/conftest.py, we create a FakeDevice class where we override netmiko’s send_command method (which we use to get the structured output of show version) and return the same output we collected from the device with print. Because we call ConnectHandler as a context manager, we also need to implement the __enter__ and __exit__ methods. Next, we create a mock_netmiko fixture that uses pytest’s monkeypatch to patch ConnectHandler in our main.py module. We then use this fixture as an argument in our test function. You can read more about mocking/monkeypatching in the pytest documentation.

# tests/conftest.py
import pytest
import main


class FakeDevice:
    def __init__(self, **kwargs):
        pass

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        pass

    def send_command(self, *args, **kwargs):
        return [
            {
                'version': '15.7(3)M5',
                'rommon': 'System',
                'hostname': 'LANRTR01',
                'uptime': '1 year, 42 weeks, 4 days, 1 hour, 18 minutes',
                'uptime_years': '1',
                'uptime_weeks': '42',
                'uptime_days': '4',
                'uptime_hours': '1',
                'uptime_minutes': '18',
                'reload_reason': 'Reload Command',
                'running_image': 'c2951-universalk9-mz.SPA.157-3.M5.bin',
                'hardware': ['CISCO2951/K9'],
                'serial': ['FGL2014508V'],
                'config_register': '0x2102',
                'mac': [],
                'restarted': '10:48:48 GMT Fri Mar 6 2020'
            }
        ]


@pytest.fixture()
def mock_netmiko(monkeypatch):
    """Mock netmiko."""
    monkeypatch.setattr(main, "ConnectHandler", FakeDevice)
# tests/test_main.py
import main

def test_get_running_version(mock_netmiko):
    version = main.get_running_version("cisco_ios", "10.1.1.1")
    assert version == "15.7(3)M5"

We run the test again.

pytest -sv -k get_running_version
============================== test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/patryk/projects/pytest_mock_blog, configfile: pytest.ini
collected 3 items / 2 deselected / 1 selected                                                                                                                                        

tests/test_main.py::test_get_running_version PASSED

=========================== 1 passed, 2 deselected in 0.02s =====================

This time it took only 0.02 seconds to execute the test, because we used the mock and did not connect to the device at all.

More on Developing Tests

Check out Netmiko Sandbox, where you can get more practice with structured command output from multiple vendor devices—all available as code, so you don’t even have to run any device! You can also easily collect command outputs for your mocks.

Also check out Adam’s awesome series of blog posts on pytest in the networking world, where Adam shares practical fundamentals of testing: Part 1, Part 2, Part 3. Pay attention to test parametrization and consider how we could extend our first two tests with more parameters, as sketched below.
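
For example, a minimal sketch of how our first test could be parametrized (names follow the code developed above):

# tests/test_main.py
import pytest
import main


@pytest.mark.parametrize(
    "args, expected",
    [
        ((1, 2, 3, 4, 5), 15),
        ((10, -10), 0),
        ((), 0),
    ],
)
def test_simple_math_function_parametrized(args, expected):
    # Each (args, expected) pair runs as its own test case
    assert main.simple_math_function(*args) == expected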


Conclusion

It may seem like Test-Driven Development, but is it really TDD? Well, TDD principles say that a test is developed first, before the actual code that makes the test pass. In this approach, code and tests are developed in parallel, so formally it doesn’t strictly follow TDD principles. I would place it somewhere between TDD and the typical code-first development followed by tests.

The presented approach to testing requires you to change your habits of how you run your code during development, but it has several significant advantages:

  • tests are developed in parallel with code; “I will do it later” is avoided
  • manual tests become input to automated tests; work done once on manual testing can be executed automatically later
  • better code quality; the code you develop is testable, since you cannot write tests for untestable code
  • increased test coverage right from the beginning, as opposed to tests developed later
  • greater confidence after implementing changes or fixes, as all tests can be executed instantly and automatically

-Patryk


