The Flexibility and Power of Nautobot Jobs

One of Nautobot’s core tenets is extensibility. This manifests itself not only in the robust capabilities of Nautobot’s apps, but also in a very powerful extensibility feature called Jobs.

Jobs are a way for users to execute custom logic on demand from within the Nautobot UI: jobs can interact directly with Nautobot data to accomplish various data creation, modification, and validation tasks. Jobs can add value stand-alone, as part of an app, or as a complement to an app. Jobs are Python code and exist outside the official Nautobot code base, so they can be updated and changed without interfering with the core Nautobot installation.

A job is essentially a file with Python code.

Read-Only Jobs

By default, jobs can make changes to the Nautobot database.

Developers can optionally flag a job as read-only. This type of job is ideal for report-style jobs, where there would be no change to Nautobot’s data.

One example use case for the read-only job complements Nautobot’s Data Validation Engine app. This app allows users to create custom regular expression (regex) rules. A user can define a regex format for device names, which will ensure any new devices have a name that conforms to the standard. In this example, let’s use the following regex:

^[a-z]{3}[0-9]{2}-[a-z]{4}-[0-9]{2}$
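
To get a feel for what this pattern accepts, here is a quick check in plain Python (the hostnames are made up for illustration):

import re

HOSTNAME_PATTERN = r"^[a-z]{3}[0-9]{2}-[a-z]{4}-[0-9]{2}$"

print(bool(re.match(HOSTNAME_PATTERN, "nyc01-leaf-01")))  # True: conforms
print(bool(re.match(HOSTNAME_PATTERN, "NYC01-leaf-1")))   # False: uppercase and a one-digit ID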

When the user tries to create a device name that does not comply with the regex, the Data Validation Engine rule prevents creation.

Data Validation Engine rules are not retroactive, however, so devices that existed before the rule was defined may not conform. This is a great use case for a read-only job: inspect the names of all devices (or a subset of them), confirm that each existing device name conforms to the standard, and flag those that do not.

Tip: A standard job that can write to the database can be executed in dry-run mode so the user can preview any proposed changes without actually making the changes.
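
As a minimal sketch of that behavior (assuming Nautobot 1.x job conventions; the job itself is hypothetical), a writable job receives the commit flag in run(), and Nautobot rolls back any database changes at the end of the run when the user leaves "Commit changes" unchecked:

from nautobot.extras.jobs import Job
from nautobot.dcim.models import Device


class RenameDevices(Job):
    """Hypothetical job used only to illustrate dry-run behavior."""

    class Meta:
        name = "Rename Devices"
        commit_default = False  # default the "Commit changes" checkbox to unchecked

    def run(self, data, commit):
        for device in Device.objects.filter(name__startswith="old-"):
            new_name = device.name.replace("old-", "new-", 1)
            self.log_info(obj=device, message=f"Renaming to {new_name}")
            device.name = new_name
            device.save()  # rolled back automatically when commit is False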

Running a Job

To run an installed job, navigate to the Extensibility top-level navigation menu, then select Jobs. You will see a list of installed jobs (more info about installing and creating jobs, as well as a couple of examples, is in the following sections).

Select the job you are interested in running. We will continue with the hostname format use case, so the example shows the selection of the Verify Hostnames job.


Once on the page for the specific job, fill in the required info and any other optional info (this will vary from job to job). In this example, we’ll populate the Hostname regex with the exact regex from the Data Validation Engine app:

[Screenshot: the Verify Hostnames job form]

To run the job, click on the Run Job button. The job results appear after it completes. This example shows job results with multiple non-compliant hostnames:

[Screenshot: job results with multiple non-compliant hostnames]

Job History

To view history for a specific job, navigate back to Extensibility -> Jobs, and find the job you are interested in. To view the results for the most recent run, click on the date/time entry to the right of the description (the Last Run column).

[Screenshot: Jobs list showing the Last Run column]

To view the results for other past runs, click on the clock icon to the right of the date/time. This will take you to a list of the job results for that job; there will be a timestamp next to each job execution. Select the timestamp for the job execution you are interested in to view the job run’s result:

[Screenshot: job result history]

Jobs Locations

There are a couple of options for where to store jobs.

Local Storage

Jobs can be stored locally on the Nautobot server. These jobs are manually installed as files in the JOBS_ROOT path (which defaults to $NAUTOBOT_ROOT/jobs/):

$ echo $NAUTOBOT_ROOT
/opt/nautobot
$ pwd
/opt/nautobot/jobs
$ ls -l
total 12
-rw-rw-r-- 1 nautobot nautobot 2137 Sep 20 15:58 create_pop.py
-rw-rw-r-- 1 nautobot nautobot 2249 Sep 20 15:07 device_data_validation.py
-rw-rw-r-- 1 nautobot nautobot    0 Apr 15  2021 __init__.py
drwxr-xr-x 2 nautobot nautobot 4096 Sep 21 08:29 __pycache__
$ 

The Nautobot worker must be restarted whenever a new job Python file is added to the jobs directory:

$ sudo systemctl restart nautobot-worker.service

Tip: In Nautobot 1.3 and later, you will also need to run the nautobot-server post_upgrade command after adding any new Job files to this directory.

Git Repositories

Since jobs are Python code, and since you may have many jobs, it often makes sense to maintain them outside of Nautobot in a git repository.

Nautobot’s documentation covers both configuring a Git repository in general and configuring a Git repository for jobs specifically.

To view currently configured repositories, navigate to Extensibility -> Git Repositories. From there, look for repositories with the green Jobs icon (the icon looks like a scroll). Below is an example of a repository configured to hold jobs code:

[Screenshot: Git repositories list]

From this same screen you can also choose to add/configure a new repository by clicking on the blue Add button in the upper right.

To get more info on a repository, click on it to go to its detail view page. There you can also see what info the repository is configured to provide:

[Screenshot: Git repository detail view]

In a Git repository, job files must be kept in a /jobs/ directory at the root of the repo.

Note: There must be an __init__.py file in the /jobs/ directory for both local and repository instances.
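
For example, a minimal jobs repository might be laid out as follows (the file names are illustrative):

my-nautobot-jobs/
└── jobs/
    ├── __init__.py
    └── verify_hostnames.py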

Plugins

If you are writing a Nautobot plugin, you can include jobs as part of the plugin by defining them in a jobs.py file within the plugin. Writing plugins is out of scope for this blog post, but we mention it for the sake of completeness.
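
As a rough sketch (the class contents are hypothetical, and discovery details vary by Nautobot version), a plugin's jobs.py simply defines Job subclasses alongside the rest of the plugin code:

# my_plugin/jobs.py
from nautobot.extras.jobs import Job


class MyPluginJob(Job):
    class Meta:
        name = "My Plugin Job"
        description = "Example job shipped with a plugin"

    def run(self, data, commit):
        self.log_success(message="Hello from a plugin-provided job!")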

Creating Jobs

Nautobot has extensive documentation on how to write job code, so we won’t rehash that here.

Examples

The Nautobot documentation has a great example of a job that creates a new site with a user-defined number of new devices.

Additionally, here is the code for the read-only example featured in this blog:

"""
   Copyright 2021 Network to Code

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
"""

import re
from nautobot.extras.jobs import Job, StringVar, MultiObjectVar
from nautobot.dcim.models import Device, DeviceRole, DeviceType, Site


def normalize(queryset):
    """Returns a comma-separated string of a queryset's members (small helper for the log messages below)."""
    return ", ".join(str(obj) for obj in queryset)


def filter_devices(data, log):
    """Returns a queryset of devices matching the job's filter parameters.

    Args:
        data: A dictionary of the job's input values
        log: A logging method, such as the job's log_debug
    """

    devices = Device.objects.all()

    site = data["site"]
    if site:
        log(f"Filter sites: {normalize(site)}")
        # *__in enables passing the query set as a parameter
        devices = devices.filter(site__in=site)

    device_role = data["device_role"]
    if device_role:
        log(f"Filter device roles: {normalize(device_role)}")
        devices = devices.filter(device_role__in=device_role)

    device_type = data["device_type"]
    if device_type:
        log(f"Filter device types: {normalize(device_type)}")
        devices = devices.filter(device_type__in=device_type)

    return devices


class FormData:
    site = MultiObjectVar(
        model=Site,
        required=False,
    )
    device_role = MultiObjectVar(
        model=DeviceRole,
        required=False,
    )
    device_type = MultiObjectVar(
        model=DeviceType,
        required=False,
    )


class VerifyHostnames(Job):
    """Demo job that verifies device hostnames match corporate standards."""

    class Meta:
        """Meta class for VerifyHostnames"""

        name = "Verify Hostnames"
        description = "Verify device hostnames match corporate standards"
        read_only = True

    site = FormData.site
    device_role = FormData.device_role
    device_type = FormData.device_type
    hostname_regex = StringVar(
        description="Regular expression to check the hostname against",
        default=".*",
        required=True,
    )

    def run(self, data=None, commit=None):
        """Executes the job"""

        regex = data["hostname_regex"]
        self.log(f"Using the regular expression: {regex}") 
        for device in filter_devices(data, self.log_debug):
            if re.search(regex, device.name):
                self.log_success(obj=device, message="Hostname is compliant.")
            else:
                self.log_failure(obj=device, message="Hostname is not compliant.")

Conclusion

Jobs are an extensibility feature that allows custom code execution. Jobs can provide value stand-alone, as part of a Nautobot app, or as a complement to an app.

Thank you, and have a great day!

-Tim




Parsing Strategies – TTP Parser

The Parsing Strategies blog post series continues with the introduction of TTP Parser. TTP is a relatively new Python library that has gained some adoption within the network automation community. It provides a simple way to parse text into structured data with an approach similar to TextFSM, but in my opinion it has a lot more to offer, such as output modifiers, macros, built-in functions, results formatting, and many other features that will be discussed throughout the blog. Join the TTP (#ttp-template-text-parser) channel in our Network to Code Slack if you are interested in joining the conversation.

This blog post will provide an overview of TTP. If you are looking for some basic instructions on how to get started using parsers with Ansible (NTC-Templates, TTP, etc.), start here: Ansible Parsers

What is TTP?

TTP is a Python library that uses implied RegEx patterns to parse data, and it is incredibly flexible. This parsing strategy provides a way to process data at runtime, as opposed to post-processing, by using built-in functions, custom macros, and output formatters while parsing. We will dive deeper into this in a later section. The use of macros (Python functions) to manipulate data and generate the desired output is one of my favorite features of the library, making it my go-to parsing strategy for any hierarchical configuration output. TTP provides a series of output formatters to transform data into YAML, JSON, table, CSV, and more. TTP can be used as a CLI utility or a Python library, and it is also available through Netmiko and Ansible.
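
As a quick illustration of the Python API, a minimal run looks something like this (the data and template strings are placeholders):

from ttp import ttp

data = """
interface GigabitEthernet1
 description Uplink to core
"""

template = """
interface {{ interface }}
 description {{ description | ORPHRASE }}
"""

parser = ttp(data=data, template=template)
parser.parse()
print(parser.result(format="json")[0])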

Groups

Capture groups are declared by using XML group tags, which allow nesting of other groups to generate a hierarchy. Any match inside a group is appended to a list of results. Groups have several attributes that can be set, but only the ‘name’ attribute is required. An important attribute to highlight is the ‘method’ value, which can be set to ‘group’ or ‘table’. When parsing CLI output, it’s recommended to set the method to table; this tells the parser to consider every line as the start of capturing for the group. Otherwise, setting the ‘start’ indicator per match will be required if you have a variety of regular expressions to capture in the group. Although groups use XML group tags, TTP templates bear a deeper resemblance to Jinja templates and share similar characteristics.

Group Example:

<group name="some_group" method="table">
data to parse
    <group name="nested">
    more data to parse
    </group>
</group>

RegEx Indicator Patterns

TTP offers the ability to specify RegEx patterns to capture within a match variable. If we take a look at the source code, we can review the exact RegEx patterns that are being applied to capture data. It’s important to understand what the regular expression pattern is before applying it, to ensure you properly capture variables.

Patterns

PHRASE = r"(\S+ {1})+?\S+"
ROW = r"(\S+ +)+?\S+"
ORPHRASE = r"\S+|(\S+ {1})+?\S+"
DIGIT = r"\d+"
IP = r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}"
PREFIX = r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}"
IPV6 = r"(?:[a-fA-F0-9]{1,4}:|:){1,7}(?:[a-fA-F0-9]{1,4}|:?)"
PREFIXV6 = r"(?:[a-fA-F0-9]{1,4}:|:){1,7}(?:[a-fA-F0-9]{1,4}|:?)/[0-9]{1,3}"
_line_ = r".+"
WORD = r"\S+"
MAC = r"(?:[0-9a-fA-F]{2}(:|\.|\-)){5}([0-9a-fA-F]{2})|(?:[0-9a-fA-F]{4}(:|\.|\-)){2}([0-9a-fA-F]{4})"
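
These indicators are applied directly to match variables. For instance, a small, hypothetical template using WORD, IP, and DIGIT might look like this:

interface {{ interface | WORD }}
 ip address {{ ip | IP }} {{ mask | IP }}
 mtu {{ mtu | DIGIT }}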

Macros

As the documentation states, “Macros are python code within a macro tag. This code can contain a number of function definitions, these functions can be referenced within TTP templates.” This allows us to process data during parsing by sending the captured output from our template into a function and return a processed result. This helps eliminate the need for post-processing of the values after the data has been parsed. TTP offers the ability to use these macro functions within match variables, groups, output, and input data. Below, we will review a couple examples of using macros on matched variables.

Macro Example:

<macro>
def subscription_level(data):
    data = data.replace('"','').split()
    if len(data) >= 3:
        return {"card-type": data[0], "subscription-level": data[2]}
    return data[0]
</macro>

Structuring Output (Data Modeling)

Another awesome feature of TTP is the ability to control how the parsed data is structured and represented back to us. I won’t go into all the details and capabilities, but you can see from the example below that we get a list of neighbors under our BGP results for the IPv4 address family. Under our peering neighbors, however, we generate a dictionary keyed by each neighbor found in the results. Can you spot the differences in the way the templates were structured to produce the different data structures? There is an ‘Awesome’ hint under the Dynamic Path example!

Simple List:

<group name="neighbor">
neighbor {{ neighbor }} {{ activate | macro("to_bool") }}
neighbor 10.1.0.1 send-community {{ send-community }}
neighbor 10.1.0.1 route-map {{ route-map }} {{ route-map-direction }}
</group>

Result Snippet:

"neighbor": [
{
    "activate": true,
    "neighbor": "10.1.0.1",
    "route-map": "PL-EBGP-PE1-OUT",
    "route-map-direction": "out",
    "send-community": "both"
},
{
    "activate": true,
    "neighbor": "10.1.0.5",
    "route-map": "PL-EBGP-PE2-OUT",
    "route-map-direction": "out",
    "send-community": "both"
}
]

Dynamic Path:

<group name="neighbor.{{ neighbor }}">  <------ Awesome!
neighbor {{ neighbor }} remote-as {{ remote-as }}
neighbor 10.1.0.1 update-source {{ update-source }}
</group>

Result Snippet:

"neighbor": {
    "10.1.0.1": {
        "remote-as": "65000",
        "update-source": "GigabitEthernet2.1001"
    },
    "10.1.0.5": {
        "remote-as": "65000",
        "update-source": "GigabitEthernet3.1001"
    }
}

There are several other techniques to format structure, and I highly encourage you to review the documentation to get the most out of the Forming Results Structure feature.

Parsing Hierarchical Configuration

Let’s get right to it! I know you can’t wait for the good stuff. Here is an example of using groups, macros, specific RegEx indicators (ORPHRASE and DIGIT), path formatters, and match variable indicators (‘start’, ‘end’, ‘ignore’) to parse a Nokia 7750 card configuration.

#--------------------------------------------------
echo "Card Configuration"
#--------------------------------------------------
    card 1
        card-type "iom-1" level cr
        fail-on-error
        mda 1
            mda-type "me6-100gb-qsfp28"
            ingress-xpl
                window 10
            exit
            egress-xpl
                window 10
            exit
            fail-on-error
            no shutdown
        exit
        no shutdown
    exit

TTP Template:

<macro>
def subscription_level(data):
    data = data.replace('"','').split()
    return {"card-type": data[0], "subscription-level": data[2]}
</macro>

#-------------------------------------------------- {{ ignore }}
echo "Card Configuration" {{ _start_ }}
#-------------------------------------------------- {{ ignore }}
    <group name="configure.card">
    card {{ slot-number | DIGIT }}
        card-type {{ card-type | ORPHRASE  | macro('subscription_level') }}
        fail-on-error {{fail-on-error | set(true) }}
        <group name="mda">
        mda {{ mda-slot }}
            shutdown {{ admin-state | set(false) }}
            mda-type {{ mda-type | replace('"', '') }}
            <group name="ingress-xpl">
            ingress-xpl {{ _start_ }}
                window {{ window }}
            exit {{ _end_ }}
            </group>
            <group name="egress-xpl">
            egress-xpl {{ _start_ }}
                window {{ window }}
            exit {{ _end_ }}
            </group>
            fail-on-error {{ fail-on-error | set(true) }}
            no shutdown {{ admin-state | set(true) }}
        </group>
        exit {{ ignore }}
    </group>
#-------------------------------------------------- {{ _end_ }}

Result:

[
   {
      "configure":{
         "card":{
            "card-type":{
               "card-type":"iom-1",
               "subscription-level":"cr"
            },
            "fail-on-error":true,
            "mda":{
               "admin-state":true,
               "egress-xpl":{
                  "window":"10"
               },
               "fail-on-error":true,
               "ingress-xpl":{
                  "window":"10"
               },
               "mda-slot":"1",
               "mda-type":"me6-100gb-qsfp28"
            },
            "slot-number":"1"
         }
      }
   }
]

Awesome! But what exactly is happening with our “subscription_level” macro? Our template includes the following match variable:

"card-type {{ card-type | ORPHRASE  | macro('subscription_level') }}".

This uses “ORPHRASE” to capture a single word or a phrase; the matched text (‘"iom-1" level cr’) is then sent into the “subscription_level” function for processing. The function takes this captured string and splits it to produce the following list: [‘iom-1’, ‘level’, ‘cr’]. Finally, it returns a dictionary with the keys ‘card-type’ and ‘subscription-level’.

{
   "card-type":{
      "card-type":"iom-1",
      "subscription-level":"cr"
   }
}

Let’s take a look at another example template, which parses a Cisco IOS BGP configuration and also takes advantage of TTP’s built-in functions. The goal is to convert values that would be better represented as booleans in our data model, specifically the following lines: “log-neighbor-changes” and “activate”. Although we could use the ‘set’ function, as in our previous example, to accomplish something similar, I really want to drive home the fact that we can use Python functions to achieve the desired state of a matched variable. Also, macros have unique behavior when returning data, which we will review in more detail. We will also use ‘is_ip’ to validate that our neighbor address is in fact an IP address, and DIGIT as a RegEx indicator to match a number.

Here is the raw output of the running configuration for BGP:

router bgp 65001
 bgp router-id 192.168.10.1
 bgp log-neighbor-changes
 neighbor 10.1.0.1 remote-as 65000
 neighbor 10.1.0.1 update-source GigabitEthernet2.1001
 neighbor 10.1.0.5 remote-as 65000
 neighbor 10.1.0.5 update-source GigabitEthernet3.1001
 !
 address-family ipv4
  redistribute connected
  neighbor 10.1.0.1 activate
  neighbor 10.1.0.1 send-community both
  neighbor 10.1.0.1 route-map PL-EBGP-PE1-OUT out
  neighbor 10.1.0.5 activate
  neighbor 10.1.0.5 send-community both
  neighbor 10.1.0.5 route-map PL-EBGP-PE2-OUT out
 exit-address-family

Template:

<macro>
def to_bool(captured_data):
    represent_as_bools = ["activate", "log-neighbor-changes"]
    if captured_data in represent_as_bools:
      return captured_data, {captured_data: True}
</macro>

<group name="bgp">
router bgp {{ asn | DIGIT }}
 bgp router-id {{ router-id }}
 bgp {{ log-neighbor-changes | macro("to_bool") }}
 <group name="neighbor.{{ neighbor }}">
 neighbor {{ neighbor | is_ip }} remote-as {{ remote-as }}
 neighbor 10.1.0.1 update-source {{ update-source }}
 </group>
 ! {{ ignore }}
 <group name="afi.{{ afi }}">
 address-family {{ afi }}
  redistribute {{ redistribute }}
  <group name="neighbor">
  neighbor {{ neighbor | is_ip }} {{ activate | macro("to_bool") }}
  neighbor 10.1.0.1 send-community {{ send-community }}
  neighbor 10.1.0.1 route-map {{ route-map }} {{ route-map-direction }}
  </group>
 exit-address-family {{ ignore }}
 </group>
</group>

The output of our Ansible task:

TASK [DEBUG] *******************************************************************
ok: [AS65001_CE1] => {
    "msg": [
        [
            {
                "bgp": {
                    "afi": {
                        "ipv4": {
                            "neighbor": [
                                {
                                    "activate": true,
                                    "neighbor": "10.1.0.1",
                                    "route-map": "PL-EBGP-PE1-OUT",
                                    "route-map-direction": "out",
                                    "send-community": "both"
                                },
                                {
                                    "activate": true,
                                    "neighbor": "10.1.0.5",
                                    "route-map": "PL-EBGP-PE2-OUT",
                                    "route-map-direction": "out",
                                    "send-community": "both"
                                }
                            ],
                            "redistribute": "connected"
                        }
                    },
                    "asn": "65001",
                    "log-neighbor-changes": true,
                    "neighbor": {
                        "10.1.0.1": {
                            "remote-as": "65000",
                            "update-source": "GigabitEthernet2.1001"
                        },
                        "10.1.0.5": {
                            "remote-as": "65000",
                            "update-source": "GigabitEthernet3.1001"
                        }
                    },
                    "router-id": "192.168.10.1"
                }
            }
        ]
    ]
}

Well, that was easy. All we had to do was replace the values of interest with Jinja-like syntax and define several groups with XML group tags to properly structure our results. The macro function “to_bool” was used to process the captured data and return a boolean. You may have noticed that we returned the captured_data and a dictionary in our macro, as opposed to our earlier example, which returned only a simple dictionary. This is because macros behave differently according to the data being returned. Here is an explanation from the documentation:

  • If macro returns True or False – the original data is unchanged, and the macro is handled as a condition function, invalidating the result on False and keeping it on True
  • If macro returns None – data processing continues, with no additional logic associated
  • If macro returns a single item – that item replaces the original data supplied to the macro and is processed further
  • If macro returns a tuple of two elements – the first element must be a string (the match result); the second, a dictionary of additional fields to add to the results
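
To make those four behaviors concrete, here is a hypothetical macro block illustrating each return type (the function names are ours, not from the TTP docs):

<macro>
def keep_if_up(data):
    # Returns True/False: acts as a condition; False invalidates the match
    return data == "up"

def log_only(data):
    # Returns None: the captured data passes through unchanged
    print(data)

def strip_quotes(data):
    # Returns a single item: it replaces the original captured value
    return data.replace('"', '')

def add_fields(data):
    # Returns a two-element tuple: (match result, extra fields to merge into results)
    return data, {"normalized": data.lower()}
</macro>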

Parsing Show Commands

Let’s continue the pattern of the series and parse the output of a simple “show lldp neighbors” command output for IOS.

[Diagram: lab topology]

Raw Output:

Capability codes:
    (R) Router, (B) Bridge, (T) Telephone, (C) DOCSIS Cable Device
    (W) WLAN Access Point, (P) Repeater, (S) Station, (O) Other

Device ID           Local Intf     Hold-time  Capability      Port ID
R3.admin-save.com   Gi0/1          120        R               Gi0/0
R2.admin-save.com   Gi0/0          120        R               Gi0/0

Total entries displayed: 2

Now, let’s see how simple the TTP Template is to parse operational show output commands:

<group name="LLDP_NEIGHBORS" method="table">
Device ID           Local Intf     Hold-time  Capability      Port ID {{ignore}}
{{DEVICE_ID}} {{LOCAL_INT}} {{HOLD_TIME | DIGIT}} {{CAPABILITY}} {{ PORT_ID }}
</group>
<group name="TOTAL_ENTRIES">
Total entries displayed: {{ COUNT | DIGIT}}
</group>

That’s it! Let’s review some important pieces to make this a successful template.

  • Method
    • “method=’table’” is applied to the group “LLDP_NEIGHBORS”, as we are parsing operational show command output in table format.
  • Ignore
    • “{{ ignore }}” is used to tell the parser to discard the lines that we don’t care about inside of our capture group. Any lines outside of the group are simply ignored and discarded by default.

Example Playbook:

---
- name: "EXAMPLE TTP PLAYBOOK"
  hosts: R1
  connection: network_cli

  tasks:

    - name: "10. PARSE LLDP NEIGHBORS WITH TTP"
      ansible.netcommon.cli_parse:
        command: "show lldp neighbors"
        parser:
          name: ansible.netcommon.ttp
        set_fact: lldp

    - name: DEBUG
      debug:
        msg: "{{ lldp }}"

The above playbook references the template at the following relative location: “templates/ios_show_lldp_neighbors.ttp”. The templates directory contains the template, whose filename starts with the ansible_network_os, followed by the command with spaces replaced by underscores.

Parsed Output:

ok: [R1] => {
    "msg": [
        [
            {
                "LLDP_NEIGHBORS": [
                    {
                        "CAPABILITY": "R",
                        "DEVICE_ID": "R3.admin-save.com",
                        "HOLD_TIME": "120",
                        "LOCAL_INT": "Gi0/1",
                        "PORT_ID": "Gi0/0"
                    },
                    {
                        "CAPABILITY": "R",
                        "DEVICE_ID": "R2.admin-save.com",
                        "HOLD_TIME": "120",
                        "LOCAL_INT": "Gi0/0",
                        "PORT_ID": "Gi0/0"
                    }
                ],
                "TOTAL_ENTRIES": {
                    "COUNT": "2"
                }
            }
        ]
    ]
}
Finally, one thing to keep in mind is the several nested lists that were produced. It’s as simple as ensuring you are accessing the correct list when evaluating the results.

Example Ansible debug task:

- name: DEBUG
  debug:
    msg: "{{ lldp[0][0]['TOTAL_ENTRIES'] }}"

Output:

TASK [DEBUG] *******************************************************************
ok: [R1] => {
    "msg": {
        "COUNT": "2"
    }
}

Conclusion

Although we have barely scratched the surface, you can see that TTP offers many great features. I find it to be very accommodating when parsing full hierarchical running configuration outputs, more so than other available parsers. The library is constantly evolving and implementing new features. Take a second to join the Slack channel to keep up with development and ask any questions you may have!

-Hugo




Introducing the Nautobot Data Validation Engine Plugin

Data, data, and more data, but is the data good data?

Coinciding with the coming release of Nautobot v1.1.0, the team is excited to announce the public release of the Data Validation Engine Plugin! This Nautobot plugin offers data validation and enforcement rule logic that utilizes the custom data validators functionality within the platform. Data validators allow custom business logic to be enforced within Nautobot when changes to data occur. This gives organizations the ability to better integrate Nautobot into their existing ecosystems and to put guardrails around its use, to ensure that network data housed within Nautobot can be trusted. One of Nautobot’s core missions is to serve as a single Source of Truth (SoT) for network data, from which automation solutions can be engineered. For network automation to be successful, the data that drives that automation must be trusted, and for the data to be trusted, there must exist constraints within the data model to enforce its correctness.

The problem lies in the understanding that all organizations operate differently and each will have nuanced ways in which they manage and configure their networks that ultimately dictate constraints in their network SoT data. Something as simple as naming a device can quickly devolve into endless debate and 16 different organizational standards. As such, it is impossible for Nautobot to try to derive and implement any such direct data constraints that would apply to all users. Therefore, the custom data validators feature set exists, to empower end users to codify their business logic and be able to enforce it within Nautobot.

Data Validation Engine Plugin

So where, then, does the plugin fit? The custom data validators API is a raw Python interface that hooks into the data models’ clean() methods, allowing for ValidationErrors to be raised based on defined business logic when model instances are created or updated. If that doesn’t mean anything to you, the Data Validation Engine Plugin is for you!
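
For reference, the raw interface looks roughly like this: a plugin ships a PluginCustomValidator subclass whose clean() raises a validation error (the model and logic below are illustrative, not from the plugin itself):

# custom_validators.py in a Nautobot plugin
from nautobot.extras.plugins import PluginCustomValidator


class DeviceHostnameValidator(PluginCustomValidator):
    """Hypothetical validator enforcing lowercase device names."""

    model = "dcim.device"

    def clean(self):
        device = self.context["object"]
        if device.name and not device.name.islower():
            self.validation_error({"name": "Device names must be lowercase."})


custom_validators = [DeviceHostnameValidator]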

The plugin offers a Web UI (and REST API) for creating and managing no-code business rules for validation. In this initial release, there are two types of supported rules: regular expression rules and min/max numeric rules.

Regular Expression Rules

Regular expressions define search patterns for matching text and are quite pervasive in the industry for a variety of use cases. They are often used to validate that text conforms to a pattern. Here, we use them to define rules that constrain text-based fields in Nautobot to user-defined expressions.

[Screenshot: regular expression rules list]

Each rule defines the Nautobot model and the text-based field on that model to which the validation should apply. A custom error message can be defined; otherwise, a default message will indicate that validation against the regular expression has failed. The rule may also be toggled on and off in real time in the Web UI.

[Screenshot: editing a regular expression rule]

When a rule has been created and enabled, it is enforced whenever an instance of the applicable model is created or updated in the Web UI or REST API. Here we can see what happens when a user attempts to create a device that does not conform to the hostname standard defined above.

[Screenshot: regex rule enforcement error]

Min/Max Numeric Rules

While regular expression rules work on text-based fields, min/max rules work on numeric model fields.

[Screenshot: min/max rules list]

As the name implies, users have the ability to constrain the minimum and/or maximum values of a number-based field, and the rules are defined in the same way as regular expression rules.

[Screenshot: editing a min/max rule]

As you might expect, the enforcement is also the same. In this example, an organization wishes to ensure that no VLANs with an ID greater than or equal to 4000 get deployed in their environment, so they create a min/max rule targeting the vid field on the VLAN model.

[Screenshot: min/max rule enforcement error]

Install

The plugin is available as a Python package on PyPI and can be installed with pip, following the full instructions on GitHub.
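
If memory serves, the package is named nautobot-data-validation-engine on PyPI, and installation follows the usual plugin pattern (check the GitHub README for the authoritative steps):

$ pip install nautobot-data-validation-engine

Then enable it in your nautobot_config.py:

PLUGINS = ["nautobot_data_validation_engine"]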

Final Thoughts

Data is key to network automation, and trusted, correct data is key to successful network automation. Enforcing your organization’s specific business logic constraints is an important step in building a network automation platform, and Nautobot offers the feature set to enable you. The Data Validation Engine Plugin goes one step further by providing a user-friendly, no-code solution to common data validation use cases. Give it a try and let us know what you think!


