Network Configuration Templating with Ansible – Part 1

When discussing network automation with our customers, one of the main concerns that come up is the ability to audit their device configurations. This becomes especially important during the last quarter of the year, as many corporations are going through their yearly audit to obtain their required approvals for PCI or other compliance standards. Our solution for that is to use the Golden Configuration application for Nautobot, but it’s also entirely possible to use simple Ansible playbooks to perform the audit. This blog series will go over the essential pieces to understanding network configuration templating using Ansible, but the same process can easily be translated for use with Nautobot.

To start templating your configuration you must identify the feature that you wish to audit. Whether it be your DNS or NTP settings, it’s usually easier to start with small parts of the configuration before moving on to the more complicated parts, such as routing or interfaces. With a feature selected, you can start reviewing the device configurations to create your templates. For this article, we’ll use NTP configuration from an IOS router as the chosen feature:

ntp server 1.1.1.1 prefer
ntp server 1.0.0.1
ntp server 8.8.8.8
clock timezone GMT 0
clock summer-time CET recurring

After you’ve identified the portions of the configuration that you wish to template for the feature, the next step is to review the configuration snippet(s) and identify the variables relevant to the configuration feature. Specifically, you want to extract only the non-platform-specific variables, as the platform-specific syntax should be part of your template with the variables abstracted away for use across platforms. Using the example above, we can extract the following bits of information:

  • three NTP server hosts
    • 1.1.1.1
    • 1.0.0.1
    • 8.8.8.8
  • preferred NTP server
    • 1.1.1.1 is preferred
  • time zone and offset
    • GMT
    • 0
  • daylight saving timezone
    • CET

With these variables identified, the next step is to define a schema for them to be stored in. For Ansible this is typically a YAML file used as host or group vars. As YAML is limited in the types of data it can represent, typically lists and key/value pairs, it’s best to design the structure around that limitation. With the example above, we’d want a list of the NTP servers with a key noting which is preferred, the timezone with its offset, and the daylight saving timezone. One potential schema would look like the below:

---
# file: group_vars/all.yml
ntp:
  servers:
    - ip: "1.1.1.1"
      prefer: true
    - ip: "1.0.0.1"
      prefer: false
    - ip: "8.8.8.8"
      prefer: false
  timezone:
    zone: "GMT"
    offset: 0
    dst: "CET"

Defining this structure is important, as it needs to be flexible enough to cover data for all platforms while also being simple enough that your templates don’t become complicated. You’ll want to ensure that all other variables for this feature follow the same structure to keep them compatible with the Jinja2 templates you’ll be creating in future parts of this series. It’s possible to utilize something like the Schema Enforcer framework to enforce your schemas against newly added data. This gives you a level of trust that the data provided to the templates is of the right format.
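As an illustration of what schema enforcement buys you, here is a minimal stdlib-only Python sketch that checks the shape of the ntp structure above. It is a stand-in for a real tool like Schema Enforcer, which would drive this from a formal JSON Schema document; the function name here is hypothetical:

```python
def validate_ntp(data: dict) -> list:
    """Return a list of schema violations for the ntp variable structure."""
    errors = []
    ntp = data.get("ntp")
    if not isinstance(ntp, dict):
        return ["'ntp' must be a mapping"]
    for idx, server in enumerate(ntp.get("servers", [])):
        if not isinstance(server.get("ip"), str):
            errors.append(f"servers[{idx}].ip must be a string")
        if not isinstance(server.get("prefer"), bool):
            errors.append(f"servers[{idx}].prefer must be a boolean")
    timezone = ntp.get("timezone", {})
    if not isinstance(timezone.get("zone"), str):
        errors.append("timezone.zone must be a string")
    if not isinstance(timezone.get("offset"), int):
        errors.append("timezone.offset must be an integer")
    return errors


# The group_vars/all.yml data from above, loaded as a Python dict
group_vars = {
    "ntp": {
        "servers": [
            {"ip": "1.1.1.1", "prefer": True},
            {"ip": "1.0.0.1", "prefer": False},
            {"ip": "8.8.8.8", "prefer": False},
        ],
        "timezone": {"zone": "GMT", "offset": 0, "dst": "CET"},
    }
}

print(validate_ntp(group_vars))  # []
```

Running a check like this in CI before a templating run catches malformed data early, rather than at configuration generation time.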

The next step, once the variables have been defined and you’ve determined a structure for them, is to understand where they belong within your network configuration hierarchy. This means you need to understand in which circumstances these values are considered valid. Are they globally applicable to all devices or only to a particular region? This will define whether you place the variables in a device-specific variable file or a group-specific one, and, if in a group, which group. This is especially important, as where you place the variables determines which devices inherit them and will use them when it comes time to generate configurations. For this article, we’ll assume that these are global variables and would be placed in the all group vars file. With this in mind, you’ll want to start building your inventory with those variable files. Following the Ansible Best Practices, it’s recommended to have a directory layout like so:

inventory.yml
group_vars/
    all.yml
    routers.yml
    switches.yml
host_vars/
    jcy-rtr-01.infra.ntc.com.yml
    jcy-rtr-02.infra.ntc.com.yml

This should allow for a clear and quick understanding of where the variables sit in relation to your network fleet. This will become increasingly important as you build out more templates and add variables. With your inventory structure built out, you can validate that the variables are assigned to your devices as expected with ansible-inventory -i inventory.yml --list, which will return the variables assigned to each device like so:

{
    "_meta": {
        "hostvars": {
            "jcy-rtr-01.infra.ntc.com": {
                "ntp": {
                    "servers": [
                        {
                            "ip": "1.1.1.1",
                            "prefer": true
                        },
                        {
                            "ip": "1.0.0.1",
                            "prefer": false
                        },
                        {
                            "ip": "8.8.8.8",
                            "prefer": false
                        }
                    ],
                    "timezone": {
                        "dst": "CET",
                        "offset": 0,
                        "zone": "GMT"
                    }
                }
            },
            "jcy-rtr-02.infra.ntc.com": {
                "ntp": {
                    "servers": [
                        {
                            "ip": "1.1.1.1",
                            "prefer": true
                        },
                        {
                            "ip": "1.0.0.1",
                            "prefer": false
                        },
                        {
                            "ip": "8.8.8.8",
                            "prefer": false
                        }
                    ],
                    "timezone": {
                        "dst": "CET",
                        "offset": 0,
                        "zone": "GMT"
                    }
                }
            }
        }
    },
    "all": {
        "children": [
            "routers",
            "ungrouped"
        ]
    },
    "routers": {
        "hosts": [
            "jcy-rtr-01.infra.ntc.com",
            "jcy-rtr-02.infra.ntc.com"
        ]
    }
}

Conclusion

This allows you to validate and ensure that the variables you’ve created are being assigned where you expect. In the next part of this series we’ll dive into how to craft a configuration template using the Jinja2 templating engine.

-Justin




Advanced Options for Building a Nautobot SSoT App

In the first part of this series, we reviewed the building blocks of an SSoT app for Nautobot. We reviewed the design of DiffSyncModel classes, the CRUD methods on those classes, building your System of Record adapters to fill those models, and finally the Nautobot Job that executes the synchronization of data between your Systems of Record. In this second half, we’ll review advanced options available to you when architecting an SSoT app, such as controlling the order of processing for your data and handling special requirements for object deletion.

Please note: it is expected that you’ve read the Nautobot Plugin: Single Source of Truth (SSoT) and Building a Nautobot SSoT App posts and understand the framework terminology, such as Data Source and Data Target.

While designing your SSoT application you might find yourself in a situation where you want to define the processing order of your SoR objects. The standard processing order of a set of objects in the DiffSync process isn’t guaranteed, but it is typically a simple first in, first out queue defined by the order in which the objects were presented by your adapters. To change this behavior you can extend the Diff class itself and define the processing order. One option might be to process each class of your objects alphabetically, as shown in the example below:

from collections import defaultdict
from diffsync.diff import Diff


class CustomOrderingDiff(Diff):
    """Alternate diff class to list children in alphabetical order, except devices to be ordered by CRUD action."""

    @classmethod
    def order_children_default(cls, children):
        """Simple diff to return all children in alphabetical order."""
        for child_name, _ in sorted(children.items()):
            yield children[child_name]

In some cases you may wish to have this done for a single object type, such as Devices. This can be done by adding a method to your custom Diff class named after the object type, following the pattern order_children_<type>. The order_children_default method will still be used for any other object classes that haven’t been explicitly defined. This option also allows you to control the order of CRUD operations that happen on a particular object type, as shown in the example below:

    @classmethod
    def order_children_device(cls, children):
        """Return a list of device sorted by CRUD action, starting with deletion, then create, and update, along with being in alphabetical order."""
        children_by_type = defaultdict(list)

        # Organize the children's name by action create, update, or delete
        for child_name, child in children.items():
            action = child.action or "skip"
            children_by_type[action].append(child_name)

        # Create a global list, organized per action, with deletion first to prevent conflicts
        sorted_children = sorted(children_by_type["delete"])
        sorted_children += sorted(children_by_type["create"])
        sorted_children += sorted(children_by_type["update"])
        sorted_children += sorted(children_by_type["skip"])

        for name in sorted_children:
            yield children[name]
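The ordering logic itself can be exercised outside of DiffSync. The sketch below uses a hypothetical Child stand-in for diff elements but mirrors the sort performed above:

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Optional


@dataclass
class Child:
    """Hypothetical stand-in for a diff element: just a CRUD action."""

    action: Optional[str]


def ordered_names(children: dict) -> list:
    """Sort child names by action (delete, create, update, skip), alphabetically within each."""
    children_by_type = defaultdict(list)
    for child_name, child in children.items():
        children_by_type[child.action or "skip"].append(child_name)
    result = []
    for action in ("delete", "create", "update", "skip"):
        result += sorted(children_by_type[action])
    return result


children = {
    "rtr-b": Child("create"),
    "rtr-a": Child("update"),
    "rtr-c": Child("delete"),
    "rtr-d": Child(None),  # no action: treated as "skip", processed last
}
print(ordered_names(children))  # ['rtr-c', 'rtr-b', 'rtr-a', 'rtr-d']
```

Deletions sort first, which is the property that prevents conflicts when a replacement object is about to be created.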

Once you’ve defined your custom Diff ordering class you simply need to pass it to the appropriate diff_from/diff_to or sync_from/sync_to methods, as shown below:

from diffsync.exceptions import ObjectNotCreated

    def sync_data(self):
        """SSoT synchronization from Device42 to Nautobot."""
        client = Device42API()
        d42_adapter = Device42Adapter(job=self, sync=self.sync, client=client)
        d42_adapter.load()
        nb_adapter = NautobotAdapter(job=self, sync=self.sync)
        nb_adapter.load()
        diff = nb_adapter.diff_from(d42_adapter, diff_class=CustomOrderingDiff)
        if not self.kwargs["dry_run"]:
            try:
                nb_adapter.sync_from(d42_adapter, diff_class=CustomOrderingDiff)
            except ObjectNotCreated as err:
                self.log_debug(message=f"Unable to create object. {err}")
            self.log_success(message="Sync complete.")

Custom Diff classes can come in handy when you need to ensure that an obsolete version of an object has been removed before a newer version is installed, preventing possible conflicts.

In addition to controlling the flow of your object processing, you might have a situation where the synchronization fails, or where you only want to consider objects that exist in one or both of your Systems of Record. In these cases you would want to utilize a DiffSync flag. The core DiffSync engine provides two sets of flags, allowing you to modify DiffSync’s behavior at either the global or the model level. As the names imply, global flags apply to all data, while model flags apply to a specific model. A list of the included global flag options (as of DiffSync 1.3) is provided below:

  • CONTINUE_ON_FAILURE: Continue synchronizing even if failures are encountered when syncing individual models.
  • SKIP_UNMATCHED_SRC: Ignore objects that only exist in the source/“from” DiffSync when determining diffs and syncing. If this flag is set, no new objects will be created in the target/“to” DiffSync.
  • SKIP_UNMATCHED_DST: Ignore objects that only exist in the target/“to” DiffSync when determining diffs and syncing. If this flag is set, no objects will be deleted from the target/“to” DiffSync.
  • SKIP_UNMATCHED_BOTH: Convenience value combining both SKIP_UNMATCHED_SRC and SKIP_UNMATCHED_DST into a single flag.
  • LOG_UNCHANGED_RECORDS: If this flag is set, a log message will be generated during synchronization for each model, even unchanged ones.

Like your custom Diff ordering class, utilizing the global flags simply requires applying them to the appropriate diff and sync methods in your Job, as below:

from diffsync.enum import DiffSyncFlags
flags = DiffSyncFlags.CONTINUE_ON_FAILURE
diff = nb_adapter.diff_from(d42_adapter, diff_class=CustomOrderingDiff, flags=flags)

Model flags are applied to individual DiffSyncModel instances; for example, you could apply them in the adapter’s load method, as shown in the example below:

from diffsync import DiffSync
from diffsync.enum import DiffSyncModelFlags
from models import Device

class NSOAdapter(DiffSync):

    device = Device

    def load(self, data):
        """Load all devices into the adapter and add the flag IGNORE to all non-ACI devices."""
        for device in data.get("devices"):
            obj = self.device(name=device["name"])
            if "ACI" not in device["name"]:
                obj.model_flags = DiffSyncModelFlags.IGNORE
            self.add(obj)

The DiffSync library, as of version 1.3, includes two options for model flags, as shown below:

  • IGNORE: Do not render diffs containing this model; do not make any changes to this model when synchronizing. Can be used to indicate a model instance that exists but should not be changed by DiffSync.
  • SKIP_CHILDREN_ON_DELETE: When deleting this model, do not recursively delete its children. Can be used for the case where deletion of a model results in the automatic deletion of all its children.

Both global flags and model flags are stored as a binary representation. This allows multiple flags to be stored within a single variable and leaves room for additional flags to be added in the future. Because each flag is a different binary value, it is necessary to perform a bitwise OR operation when utilizing multiple flags at once. Imagine the scenario where you want to skip objects that exist in only one of your Systems of Record and log all object records regardless of whether they changed. You would first need to define one flag and then perform the bitwise OR operation, as shown in the example:

<span role="button" tabindex="0" data-code=">>> from diffsync.enum import DiffSyncFlags >>> flags = DiffSyncFlags.SKIP_UNMATCHED_BOTH >>> flags <DiffSyncFlags.SKIP_UNMATCHED_BOTH: 6> >>> bin(flags.value) '0b110' >>> flags |= DiffSyncFlags.LOG_UNCHANGED_RECORDS >>> flags
>>> from diffsync.enum import DiffSyncFlags
>>> flags = DiffSyncFlags.SKIP_UNMATCHED_BOTH
>>> flags
<DiffSyncFlags.SKIP_UNMATCHED_BOTH: 6>
>>> bin(flags.value)
'0b110'
>>> flags |= DiffSyncFlags.LOG_UNCHANGED_RECORDS
>>> flags
<DiffSyncFlags.LOG_UNCHANGED_RECORDS|SKIP_UNMATCHED_BOTH|SKIP_UNMATCHED_DST|SKIP_UNMATCHED_SRC: 14>
>>> bin(flags.value)
'0b1110'
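The same composition can be modeled with Python’s standard library enum.Flag; the values below mirror those shown in the interpreter session (this is an illustrative class, not the DiffSync enum itself):

```python
from enum import Flag


class SyncFlags(Flag):
    """Illustrative flags mirroring the DiffSync 1.3 global flag values."""

    NONE = 0
    CONTINUE_ON_FAILURE = 0b0001
    SKIP_UNMATCHED_SRC = 0b0010
    SKIP_UNMATCHED_DST = 0b0100
    SKIP_UNMATCHED_BOTH = SKIP_UNMATCHED_SRC | SKIP_UNMATCHED_DST
    LOG_UNCHANGED_RECORDS = 0b1000


# Combining flags is a bitwise OR, exactly as in the interpreter session
flags = SyncFlags.SKIP_UNMATCHED_BOTH | SyncFlags.LOG_UNCHANGED_RECORDS
print(bin(flags.value))  # 0b1110
print(SyncFlags.SKIP_UNMATCHED_DST in flags)  # True: membership is a bitwise check
```

Because each flag occupies its own bit, checking whether a flag is set is a constant-time operation regardless of how many flags are combined.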

Now that you’ve defined exactly how you want your SSoT application to handle the data from your Systems of Record, you might have a requirement to perform some action on the data once the sync has completed. Luckily, the SSoT framework makes this easy by looking for a sync_complete method in your DataTarget adapter and running it if found. A case where this could be useful is one where deletion of objects in your Systems of Record needs to be handled in a specific manner due to inter-object dependencies. An example would be a Site in Nautobot that can’t be deleted until all devices, racks, and other objects in that Site have been deleted or moved. To handle this you would define your object’s delete method, as below:

def delete(self):
    """Delete Site object from Nautobot.

    Because Site has a direct relationship with many other objects, it can't be deleted before anything else.
    The self.diffsync.objects_to_delete dictionary stores all objects for deletion and removes them from Nautobot
    in the correct order. This is used in the Nautobot adapter sync_complete function.
    """
    self.diffsync.job.log_warning(message=f"Site {self.name} will be deleted.")
    super().delete()
    site = Site.objects.get(id=self.uuid)
    self.diffsync.objects_to_delete["site"].append(site)
    return self

The delete method marks the object as deleted, but instead of removing it immediately from Nautobot’s database, it adds it to a list of objects to be removed once the synchronization has completed, at which point the deletions can be performed in the appropriate order, as shown in the following example:

from collections import defaultdict

from diffsync import DiffSync
from django.db.models import ProtectedError

class NautobotAdapter(DiffSync):
    """Nautobot adapter for DiffSync."""

    objects_to_delete = defaultdict(list)

    def sync_complete(self, source: DiffSync, *args, **kwargs):
        """Clean up function for DiffSync sync.

        Once the sync is complete, this function runs deleting any objects
        from Nautobot that need to be deleted in a specific order.

        Args:
            source (DiffSync): DiffSync
        """
        for grouping in (
            "ipaddr",
            "subnet",
            "vrf",
            "vlan",
            "cluster",
            "port",
            "device",
            "device_type",
            "manufacturer",
            "rack",
            "site",  # can't delete a site until all of its dependent objects, above, have been deleted
        ):
            for nautobot_object in self.objects_to_delete[grouping]:
                try:
                    nautobot_object.delete()
                except ProtectedError:
                    self.job.log_failure(obj=nautobot_object, message="Deletion failed protected object")
            self.objects_to_delete[grouping] = []

        return super().sync_complete(source, *args, **kwargs)

Just be aware that any changes made to your Systems of Record through the sync_complete method should be ones that won’t impact the data sets being compared between your Systems of Record. This is essential to avoid unnecessary updates on subsequent runs of your sync Job. An example of what to avoid would be performing a DNS query of your devices and creating IP Addresses and interfaces after the sync is complete. Those IP Addresses and interfaces would be absent from the next comparison of your Systems of Record and would thus be deleted, then re-added after the sync finished, causing a repeated cycle of objects being removed and re-created. The best practice is to perform any manipulation of your data sets within your adapters, before the diff and sync are performed.


Conclusion

Now that you know the basics of designing an SSoT app and have been enlightened to the power of global and model DiffSync flags, custom Diff classes, and the sync_complete method, the options for designing your Single Source of Truth application are limited only by your imagination. We at Network to Code look forward to seeing what you and the community create.

-Justin




Building a Nautobot SSoT App

In a previous post we established the importance of having a single source of truth (SSoT), provided an overview of the Nautobot Single Source of Truth (SSoT) framework, and explained how the framework enables synchronization of your Systems of Record (SoR). In addition, we’ve also shown a Nautobot SSoT App for Arista CloudVision, which extends the SSoT base framework. So now you ask: how do I synchronize my data to and from Nautobot using the Single Source of Truth framework? In this first part of a two-part series, I’ll explain the basics of creating your own SSoT app; next month, I’ll follow up with more advanced options available when building one.

Please note: it is expected that you’ve read the Nautobot Plugin: Single Source of Truth (SSoT) post and understand the framework terminology, such as Data Source and Data Target.

The first thing to do when creating an SSoT app is to define the data shared between your SoR that you want to synchronize. For example, you might want to pull your Rooms from Device42 into Nautobot. You would then create a class that inherits from the DiffSyncModel class as shown below:

from diffsync import DiffSyncModel
from typing import List, Optional


class Room(DiffSyncModel):
    """Room model."""

    _modelname = "room"
    _identifiers = ("name", "building")
    _shortname = ("name",)
    _attributes = ("notes",)
    _children = {"rack": "racks"}
    name: str
    building: str
    notes: Optional[str]
    racks: List["Rack"] = list()

You’ll notice that there are both public and private class attributes defined. Each private attribute helps define the model itself within the DiffSync framework. Two attributes are required on every DiffSyncModel: _modelname and _identifiers. The _modelname attribute defines the type of the model and is used to match the shared models between your SoR. The _identifiers attribute specifies the public attributes used to uniquely identify the objects created when loading data from your adapters. It’s essential to confirm that the identifiers make each object globally unique to ensure an accurate sync.

The remaining attributes are optional but can be quite useful. The _shortname attribute allows an object to be distinguished from other objects of the same type using a shorter name. The _attributes attribute specifies all attributes that are of interest for synchronization. You’ll notice that each of the public attributes is defined using pydantic typing syntax. These are essential for ensuring data integrity while performing the synchronization. Please note that you must use the Optional type for any attribute that you wish to allow to be None. The last private attribute is _children, which defines other models related to the model you’re creating. In this example, Racks are children of the Room, as you have many Racks inside a Room. This allows you to define a hierarchy of models for importing. Please note that this is meant for a direct parent-to-child relationship and not multi-branching inheritance. The _children attribute is defined using the pattern of {<model_name>: <field_name>}.

The next step is to define the CRUD (Create, Update, Delete) methods for each model. These methods will handle taking the data, once loaded from your Data Source, and making the relevant changes to the object in your Data Target. Although you may add the CRUD methods for your object to the DiffSyncModel class that you created in the first step, best practice is to create new classes that inherit from that DiffSyncModel class, as shown below:

from django.utils.text import slugify
from nautobot.dcim.models import RackGroup as NautobotRackGroup
from nautobot.dcim.models import Site as NautobotSite


class NautobotRoom(Room):
    """Nautobot Room CRUD methods."""

    @classmethod
    def create(cls, diffsync, ids, attrs):
        """Create RackGroup object in Nautobot."""
        new_rg = NautobotRackGroup(
            name=ids["name"],
            slug=slugify(ids["name"]),
            site=NautobotSite.objects.get(name=ids["building"]),
            description=attrs["notes"] if attrs.get("notes") else "",
        )
        new_rg.validated_save()
        return super().create(ids=ids, diffsync=diffsync, attrs=attrs)

    def update(self, attrs):
        """Update RackGroup object in Nautobot."""
        _rg = NautobotRackGroup.objects.get(name=self.name, site__name=self.building)
        if attrs.get("notes"):
            _rg.description = attrs["notes"]
        _rg.validated_save()
        return super().update(attrs)

    def delete(self):
        """Delete RackGroup object from Nautobot."""
        self.diffsync.job.log_warning(message=f"RackGroup {self.name} will be deleted.")
        super().delete()
        rackgroup = NautobotRackGroup.objects.get(**self.get_identifiers())
        rackgroup.delete()
        return self

Each of the create(), update(), and delete() methods for an object is called once a diff is completed and the synchronization process has started. Which method is called depends upon the required changes to the object in your Data Target. When the create() method is called, the object’s identifiers and other attributes are passed to it as the ids and attrs variables respectively. The diffsync variable handles interactions with the DiffSync Job, such as sending log messages. For the logging of the Job results to be accurate, it is essential that the create() method returns the result of super().create() with those variables passed along. Unlike the create() method, the update() method receives only the attributes that have changed for an object. This means the implementer is required to check whether an attribute was passed before making the corresponding change. The delete() method receives only the class instance itself.
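The partial-attrs contract of update() can be illustrated with a small stdlib sketch (the helper and data here are hypothetical, not part of DiffSync):

```python
def apply_partial_update(current: dict, attrs: dict) -> dict:
    """Overwrite only the keys present in attrs, mirroring update()'s contract.

    An absent key means "unchanged", not "clear this field", which is why
    update() implementations must check attrs.get(...) before acting.
    """
    updated = dict(current)
    for key, value in attrs.items():
        updated[key] = value
    return updated


room = {"name": "Lab-1", "building": "HQ", "notes": "old note"}
# DiffSync passes only the changed attributes to update():
changed = {"notes": "rewired 2023"}
print(apply_partial_update(room, changed))  # name and building are untouched
```

Treating a missing key as “no change” is what allows a sync to update a single field without clobbering the rest of the object.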

When utilizing inheritance between models, ensure the related models have the update_forward_refs() method called. This is essential to establish the relationships between objects.

Once the models and their CRUD methods have been defined, the next step is to write the adapters that load the models you specified in the previous steps. It is in this sense that the adapter class adapts the data from your Data Source. Each adapter is required to reference every model that you wish to have considered at the top of the DiffSync subclass, along with a top_level list of your models in the order that you wish to have them processed, as you can see below:

from diffsync import DiffSync
from .models import Building, Room


class Device42Adapter(DiffSync):
    """DiffSync adapter for Device42 server."""

    building = Building
    room = Room

    top_level = ["building"]

from diffsync import DiffSync
from .models import Building, Room


class NautobotAdapter(DiffSync):
    """Nautobot adapter for DiffSync."""

    building = Building
    room = Room

    top_level = ["building"]

As you can see above, there will always be two Systems of Record in a diff, so you will need an adapter for each. It is best practice to have both adapters define the same models and the same top_level ordering to ensure that items are processed identically. In the examples above, only the Building model is in the top_level list, as the Room model is a child and will be processed after the Building. It is up to the implementer to determine how to load the models they create in the adapters. While loading your models in your adapters’ methods, it is essential that you pass valid DiffSyncModel objects that adhere to what you specified in your models when calling the add() function. Failing to do so will cause validation errors.

It’s advised to use a load() method to call your other model-specific methods to keep things concise.

from diffsync.exceptions import ObjectAlreadyExists

class Device42Adapter(DiffSync):
...

    def load_rooms(self):
        """Load Device42 rooms."""
        for record in self._device42.api_call(path="api/1.0/rooms")["rooms"]:
            if record.get("building"):
                room = self.room(
                    name=record["name"],
                    building=record["building"],
                    notes=record["notes"] if record.get("notes") else "",
                )
                try:
                    self.add(room)
                    _site = self.get(self.building, record.get("building"))
                    _site.add_child(child=room)
                except ObjectAlreadyExists as err:
                    self.job.log_warning(message=f"{record['name']} is already loaded. {err}")
            else:
                self.job.log_warning(message=f"{record['name']} missing Building, skipping.")
                continue

The example above shows how data is pulled from the Device42 API and creates the Room objects that were detailed in the first step. Once the object has been created, it is then added into the DiffSync set with the add() method. As a Room is a child object of a Building, there is an additional step of finding the parent Building object with the get() method, and then using the add_child() method to add the relationship between the objects. If there is an existing object with the same identifiers, the ObjectAlreadyExists exception will be thrown, so it’s advised to wrap the add() method in a try/except block.

With your adapters for each SoR created, the final step is to write your Nautobot Job. This will handle the loading of your models from the adapters, the diff of the objects once loaded, and the synchronization of data by calling the CRUD methods as appropriate. The Job class must be derived from either the DataSource or DataTarget class and is required to include a sync_data method to handle the synchronization process. Optionally, you can also add a config_information or data_mappings method to enrich the data presented to the end user in Nautobot.

from django.templatetags.static import static
from nautobot.extras.jobs import Job
from nautobot_ssot.jobs.base import DataSource

from diffsync.exceptions import ObjectNotCreated
from .device42 import Device42Adapter
from .nautobot import NautobotAdapter


class Device42DataSource(DataSource, Job):
    """Device42 SSoT Data Source."""

    class Meta:
        """Meta data for Device42."""

        name = "Device42"
        data_source = "Device42"
        data_source_icon = static("./d42_logo.png")
        description = "Sync information from Device42 to Nautobot"

    def sync_data(self):
        """Device42 Sync."""
        d42_adapter = Device42Adapter(job=self, sync=self.sync)
        d42_adapter.load()
        nb_adapter = NautobotAdapter(job=self, sync=self.sync)
        nb_adapter.load()
        diff = nb_adapter.diff_from(d42_adapter)
        self.sync.diff = diff.dict()
        self.sync.save()
        self.log_info(message=diff.summary())
        if not self.kwargs["dry_run"]:
            try:
                nb_adapter.sync_from(d42_adapter)
            except ObjectNotCreated as err:
                self.log_debug(message=f"Unable to create object. {err}")
            self.log_success(message="Sync complete.")


jobs = [Device42DataSource]

As shown, the Job creates an instance of each of your adapters and calls their load() methods to create the DiffSyncModel objects. Once that’s done, a diff and sync can be completed utilizing either the diff_from/diff_to or sync_from/sync_to methods on the adapter objects. Which you use depends upon the direction in which you wish to have the synchronization performed. In the example above, once the models have been loaded, a diff from Device42 to Nautobot is done to report the objects that need to be created, updated, or deleted. Finally, if the Job is not a dry run, synchronization is executed. Again, this is done with a sync_from on the Nautobot adapter object from the Device42 adapter object. If an object fails to be created, an ObjectNotCreated exception may be raised, so it’s advised to use a try/except block when calling the sync methods to ensure it’s caught and handled appropriately. Once all of the objects have been processed, a final success log is sent and the GUI is updated to reflect the changes.

In summary, the creation of an SSoT app requires the following steps to be completed:

  1. Create one or more DiffSyncModel classes to define the data you wish to synchronize.
  2. For each DiffSyncModel, define the CRUD methods to handle the requisite changes in your Data Target.
  3. Write a DiffSync adapter class for each System of Record to load data from each into your DiffSyncModel classes.
  4. Finally, write your Nautobot Job to perform the synchronization of data between your Data Source and Data Target.

Conclusion

In the next part of this series, we’ll look into how to customize the processing of objects and the use of global and model flags in the DiffSync process.

-Justin


