Introducing the New Device Onboarding App


As Network Automation becomes more popular and companies decide on Nautobot as the Network Source of Truth (NSoT) component within their reference architecture, the next crucial problem to solve is data population. This unlocks the quick automation wins that upper management wants to see! The starting point of population for most organizations is “Devices.” Up until now, that process was probably a mix of manual population, CSV imports, nautobot-app-device-onboarding, and most likely the Python library “network-importer” to extend that data further. All these methods have their own pros and cons, but one of the most common asks was to make onboarding devices to Nautobot easier and more flexible. Introducing the Device Onboarding app 4.0!

This revamp of the Onboarding app exposes two new SSoT jobs to simplify the device onboarding process. The first job will onboard basic device information from an IP address. The second job extends the data gathered by pulling in Interface data including VLANs, VRFs, IP addresses (creating prefixes if needed), descriptions, and more! Onboarding 4.0 retains the original implementation for users who are making use of that framework, so you can try out the new features while retaining your existing workflow. I will discuss the new release in more detail throughout this blog post.

Why?

Populating a device inventory into Nautobot takes time. The time commitment is multiplied by the need for a number of different methods, applications, and libraries just to get a decent level of metadata assigned to devices. Onboarding 4.0 addresses these and additional concerns as outlined below.

  • The original OnboardingTask job in the plugin was capable of getting only basic device data into Nautobot.
  • Setting up network-importer as an external program felt disjointed and required additional infrastructure resources.
    • The dependency on Batfish was a challenge, as it required both Batfish and its own dependency on Docker to be runnable in the environment.
    • The diffsync dependency didn’t have access to many of the new “contrib” features that nautobot-app-ssot exposes.
  • Adding new support for additional operating systems and data was difficult.
    • Extending an existing platform’s capabilities required additional Python modules to be installed into the environment.
      • The same challenge existed for adding new platform support.
  • The original Onboarding extension framework required a custom app and/or Python library to be available in the environment, which, depending on the deployment method used, can result in delays and complications.

What About the Original Extension Framework?

The original OnboardingTask job and its extension framework will remain available in Onboarding 4.0. We understand that this application has been around since the release of Nautobot, and many users have invested resources into extending the application using the original framework. A deprecation of the OnboardingTask job is planned for the future, but for now, the only change users of the original extension framework need to be aware of is that this job is now hidden by default.

To find the hidden job, simply navigate to Jobs -> Jobs, click the Filter button, and select “hidden=Yes”.

Revealing the hidden job will allow you to run it and edit job attributes as usual.

First enable the job.

Next, you can permanently un-hide the job by overriding its default hidden property.

The New SSoT Jobs Explained

The biggest change implemented in the 4.0 release is the use of the Single Source of Truth (SSoT) framework. The SSoT app (nautobot-app-ssot) uses a combination of diffsync, SSoT contrib, and other tools to diff inputs from disparate data sources and then sync data between those systems. This allows us to not only onboard device data but compare and update as needed. There are two new SSoT jobs to accomplish this.

  • Sync devices from network – Mimics what the original onboarding task did, including creation of the device(s), serial number, management IP, and interface.
  • Sync data from network – Mimics what the old NTC library network-importer did—syncs interfaces, their MTU, description, IP address, type, status, etc. There is a toggle option to sync VRFs and add them to interfaces as well as a toggle for VLANs that can sync VLANs and add tagged/untagged VLANs to ports.
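Underlying both jobs is the same diff-then-sync idea. The following simplified sketch (not the actual diffsync API, just the core concept) shows the kind of comparison the SSoT framework performs between records from the network and records in Nautobot:

```python
# Simplified illustration of diff-then-sync: compare records keyed by a unique
# identifier and compute what needs to be created, updated, or deleted.
def diff_records(source, dest):
    """Compare two {key: attrs} mappings and return the required actions."""
    actions = {"create": [], "update": [], "delete": []}
    for key, attrs in source.items():
        if key not in dest:
            actions["create"].append(key)       # in the network, not in Nautobot
        elif dest[key] != attrs:
            actions["update"].append(key)       # present in both, attrs differ
    for key in dest:
        if key not in source:
            actions["delete"].append(key)       # in Nautobot, gone from the network
    return actions

# Hypothetical device inventories keyed by hostname.
network = {"sw1": {"serial": "ABC123"}, "sw2": {"serial": "DEF456"}}
nautobot = {"sw1": {"serial": "OLD999"}, "sw3": {"serial": "GHI789"}}
print(diff_records(network, nautobot))
# {'create': ['sw2'], 'update': ['sw1'], 'delete': ['sw3']}
```

The real framework adds ordering, relationship handling, and configurable flags (e.g., whether deletes are allowed), but the diff at its heart is this comparison.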

How It Works

This section describes the new SSoT jobs this App exposes and how they work.

Frameworks in Use

  • Nautobot SSoT – Utilizing the existing Nautobot SSoT framework allows a common pattern to be reused and offers a path forward to add additional support and features.
  • Nautobot App Nornir – Utilized for Nornir Inventory plugins for Nautobot (specifically for Sync Network Data Job).
  • Nornir Netmiko – Used to execute commands and return results.
  • jdiff – Used to simplify parsing required data fields out of command outputs returned from command parser libraries like textFSM, specifically via its extract_data_from_json method.
  • Parsers – Initially NTC Templates via textFSM, but support for pyATS, TTP, etc. is planned for the future.

YAML Definition DSL

The key extensibility feature in the new release is the ability to add new platform support by creating a single YAML definition file. The application comes with some logical defaults, but these can be overloaded and new platforms can be added via Git repositories.

File Format

Let’s review a few of the components of the file:

  • ssot job name – Name of the job to define the commands and metadata needed for that job. (choices: sync_devices or sync_network_data)
  • root key data name – The root data key name; this is fully defined in the schema definition.
  • commands – List of commands to execute in order to get the required data.
  • command – The actual show command to execute.
  • parser – Whether to use a parser (textFSM, pyATS, TTP, etc.). Alternatively, none can be used if the platform supports some other method to return structured data, e.g., | display json or an equivalent.
  • jpath – The JMESPath (specifically jdiff’s implementation) used to extract the data from the parsed JSON returned by the parser.
  • post_processor – Jinja2-capable code to further transform the returned data after jpath extraction.
  • iterable_type – An optional value to enforce type casting.

As an example:

---
sync_devices:
  hostname:
    commands:
      - command: "show version"
        parser: "textfsm"
        jpath: "[*].hostname"
        post_processor: ""
..omitted..
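To make the extraction pipeline concrete, here is a rough pure-Python mimic of what happens for the hostname field above. The real job runs jdiff's extract_data_from_json with the jpath from the YAML file; the device data shown here is made up:

```python
# Hypothetical textFSM output for "show version" on one device: textFSM
# returns a list of row dicts. The jpath "[*].hostname" selects the
# hostname value from every row.
parsed_output = [{"hostname": "nyc-rtr-01", "serial": "FDO12345", "version": "17.3.4"}]

# Rough stand-in for: extract_data_from_json(parsed_output, "[*].hostname")
hostnames = [row["hostname"] for row in parsed_output]
print(hostnames)  # ['nyc-rtr-01']
```

An optional post_processor would then render a Jinja2 snippet over the extracted value to finish shaping it before it is handed to the SSoT sync.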

How the SSoT Sync Devices From Network Job Works

  1. The job is executed with inputs selected.
    • List of comma-separated IP/DNS names is provided.
    • Other required fields are selected in the job inputs form.
  2. The SSoT framework loads the Nautobot adapter information.
  3. The SSoT framework’s network adapter load() method calls Nornir functionality.
    • The job input data is passed to the InitNornir initializer. Because only basic information is available at this point, a custom EmptyInventory Nornir inventory plugin is packaged with the App; it is initialized via InitNornir but deliberately yields an empty inventory.
    • Since platform information may need to be auto-detected before a Nornir Host object can be added to the inventory, a create_inventory function uses Netmiko’s SSH autodetect to determine the platform so it can be injected into the Host object.
    • Finally, all the platform-specific commands to run, plus the jpath and post_processor information loaded from the platform-specific YAML files, are injected into the Nornir data object so they are accessible later in the extract/transform functions.
  4. Within the code block of a Nornir with_processor context manager, the netmiko_send_commands Nornir task is called.
    • The loaded platform-specific YAML data is accessed and commands are deduplicated to avoid running the same command multiple times, e.g., when multiple required data attributes come from the same show command.
  5. Utilize native Nornir Processor to overload functionality on task_instance_completed() to run command outputs through extract and transformation functions.
    • This essentially is our “ET” portion of an “ETL” (Extract, Transform, Load) process.
    • Next, the parsed JSON result from the show command (produced by the parser, e.g., textFSM) is run through the jdiff function extract_data_from_json() with the data and the jpath from the YAML file definition.
    • Finally, an optional post_processor Jinja2-capable execution can further transform the data for that command before passing it to finish the SSoT synchronization.
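The command deduplication described in step 4 can be sketched as follows. The field definitions mirror the YAML format shown earlier; the exact structure is illustrative, not the app's internal representation:

```python
def dedupe_commands(field_definitions):
    """Collect the unique show commands across all field definitions so each
    command is sent to the device only once, even when several required data
    attributes are parsed out of the same command output."""
    commands = []
    for field in field_definitions.values():
        for entry in field["commands"]:
            if entry["command"] not in commands:
                commands.append(entry["command"])
    return commands

# Hypothetical sync_devices definition: hostname and serial both come from
# "show version", so that command should only be executed once.
sync_devices = {
    "hostname": {"commands": [{"command": "show version", "jpath": "[*].hostname"}]},
    "serial": {"commands": [{"command": "show version", "jpath": "[*].serial"}]},
    "mgmt_interface": {"commands": [{"command": "show interfaces", "jpath": "[*].interface"}]},
}
print(dedupe_commands(sync_devices))  # ['show version', 'show interfaces']
```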

How the SSoT Sync Network Data From Network Job Works

For those looking to deep dive into the technical details or troubleshooting, here is how it works:

  1. The job is executed with inputs selected.
    • One or more devices are selected.
    • Other required fields are selected in the job inputs form.
    • Toggle certain metadata booleans to True if you want more data synced.
  2. The SSoT framework loads the Nautobot adapter information.
  3. The SSoT framework’s network adapter load() method calls Nornir functionality.
    • The job input data is passed to the InitNornir initializer. Because devices now exist in Nautobot, the NautobotORMInventory Nornir inventory plugin from nautobot-plugin-nornir is used.
    • Finally, all the platform-specific commands to run plus all the jpath post_processor information loaded from the platform-specific YAML files must be injected into the Nornir data object to be accessible later in the extract/transform functions.
  4. Within the code block of a Nornir with_processor context manager, the netmiko_send_commands Nornir task is called.
    • The loaded platform-specific YAML data is accessed and commands are deduplicated to avoid running the same command multiple times, e.g., when multiple required data attributes come from the same show command.
  5. Utilize native Nornir Processor to overload functionality on subtask_instance_completed() to run command outputs through extract and transformation functions.
    • This essentially is our “ET” portion of an “ETL” (Extract, Transform, Load) process.
    • Next, the parsed JSON result from the show command (produced by the parser, e.g., textFSM) is run through the jdiff function extract_data_from_json() with the data and the jpath from the YAML file definition.
    • Finally, an optional post_processor Jinja2-capable execution can further transform the data for that command before passing it to finish the SSoT synchronization.

Extending Platform Support

New platform support can be added with a single file that parses data into the proper schema. There is a new Git datasource exposed that allows the included YAML files to be overwritten or new platform support to be added, for maximum flexibility.

For simplicity, a merge was not implemented for the Git repository functionality. Any file loaded in from a Git repo is preferred. If a file in the repo exists that matches what the app exposes by default, e.g., cisco_ios.yml, the entire file from the repo becomes preferred. So keep in mind if you’re going to overload a platform exposed by the app, you must overload the full file! No merge will happen between two files that are named the same. Additionally, Git can be used to add new support. For example, if you have Aruba devices in your environment, and you want to add that functionality to device onboarding, this can be done with a custom YAML file. Simply create a Git repo and create the YAML file (name it aruba_osswitch.yml), and you’ve just added support for Aruba in your environment.

Files must be named <network_driver_name>.yml; see the configured choices in the Nautobot UI under a platform definition.
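As a sketch, a new platform file might look like the following. The command, jpath, and post_processor shown are illustrative only, not a tested Aruba definition:

```yaml
---
sync_devices:
  hostname:
    commands:
      - command: "show system"        # illustrative command choice
        parser: "textfsm"
        jpath: "[*].hostname"          # illustrative jpath
        post_processor: "{{ obj[0] | trim }}"
..omitted..
```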

Even better if you follow that up with a PR into the main application!


Conclusion

As the device onboarding application continues to mature, we expect to add further platform support to the defaults the app exposes. We hope the new DSL- and YAML-based extension framework makes it quick and easy to add support and load it in via Git.

Happy automating!

-Jeff, David, Susan




Introducing Design Builder: Design Driven Network Automation


Most people involved in network automation are familiar with the concept of a Source of Truth (SoT). The SoT is usually some form of database that maintains the intended state of objects as well as their interdependencies. The SoT provides a way to quickly ascertain what a network’s intended state should be, while often providing a way to see what the network’s state actually is. A new concept is emerging, known as Design Oriented Source of Truth. This idea takes network designs and codifies them, attaching additional meaning to the objects within the SoT. Nautobot is a source of truth that contains all sorts of information about a network’s state. Although many of the pieces of information within Nautobot are related, they are discretely managed. A new Nautobot App aims to simplify the process of codifying network designs and populating Nautobot objects based on these designs.

Introduction

It is very common to have a small set of standardized designs that are used to deploy many sites and services in enterprise networks. For example, branch office sites may have a few different designs depending on their size. There could be a design that uses a single branch office router for small sites. Another design could have two routers and an access switch for sites with a moderate user base. A third design could include a more complex switching infrastructure for sites with many employees. When companies do tech refreshes or new site builds, these standardized designs are used and new data must be created in the source of truth. The newly open-sourced Design Builder application was created to address this problem, and fulfills the idea that a standardized design can be taken from a network engineer and transformed into a format that can be consumed and executed by Nautobot. Design Builder can expand a minimal set of inputs into a full-fledged set of configuration objects within Nautobot. This includes any kind of data object that Nautobot can model: everything from Rack and Device objects to IP addresses and BGP peering information.

Design Builder provides powerful mechanisms that make simple designs possible. The first is the ability to represent interrelated data in a meaningful hierarchy. For example, devices have interfaces and interfaces have IP addresses. Conceptually this seems like a very simple structure. However, if we were to manually use the REST API or ORM to handle creating objects like this, we would first have to create a device object and keep its ID in memory. We would then have to create interfaces with their device foreign-key set to the device ID we just created. Finally, we’d have to save all of the interface IDs and do the same with IP addresses. Design Builder provides a means to represent objects in YAML and produce their representation within the Nautobot database. A typical design workflow is shown in the following diagram:

Following this process, we can produce YAML files that intuitively represent the structure of the data we want to create. An example of a Design Builder YAML design can be seen in the following YAML document:

devices:
  - name: "Router 1"
    status__name: "Active"
    interfaces:
      - name: "GigabitEthernet0"
        type: "1000base-t"
        status__name: "Active"
        ip_addresses:
          - address: "192.168.0.1/24"
            status__name: "Active"

This YAML document would produce a single device, with a single Gigabit Ethernet interface. The interface itself has a single IP address. As demonstrated in the example, Design Builder automatically associates the parent/child relationships correctly, and there is no need to keep copies of primary and foreign keys. We can visually represent this YAML design with the following diagram:

Design Builder also provides a system to query for existing related objects using some attribute of the associated object. In the above example, the status field is actually a related object. Statuses are not just simple strings; they are first-class objects within the Nautobot database. In this case, the Status object with the name Active is predefined in Nautobot and does not need to be created. It does, however, need to be associated with the Device, the Interface, and the IPAddress objects.

This object relationship is actually a foreign-key relationship in the database and ORM. If we were using the Django ORM to associate objects, we would first need to look up the status before creating the associated objects. Design Builder provides a way to perform that lookup as part of the model hierarchy. Note that we’re looking up the status by its name: status__name. Design Builder has adopted similar syntax to Django’s field lookup. The field name and related field are separated by double underscores.
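A simplified mimic of how such a key splits into field and lookup (this is not Design Builder's actual code, just the Django-style convention it follows):

```python
def split_lookup(key):
    """Split a Design Builder/Django-style key into (field, lookup).
    'status__name' -> ('status', 'name'); plain keys return (key, None)."""
    field, _, lookup = key.partition("__")
    return field, lookup or None

print(split_lookup("status__name"))        # ('status', 'name')
print(split_lookup("device_type__model"))  # ('device_type', 'model')
print(split_lookup("name"))                # ('name', None)
```

With the lookup part in hand, the builder can query the related model (e.g., Status by name) instead of expecting a primary key in the design file.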

Use Cases

There are many use cases that are covered by the Design Builder, but we will highlight a very simple one in this post. Our example use case handles the creation of edge site designs within Nautobot. This use case is often seen when doing tech refreshes or new site build-outs.

Engineers commonly need to add a completely new set of data for a site. This could be the result of a project to refresh a site’s network infrastructure or it could be part of deploying a new site entirely. Even with small sites, the number of objects needing to be created or updated in Nautobot could be dozens or even hundreds. However, if a standardized design is developed then Design Builder can be used to auto-populate all of the data for new or refreshed sites.

Consider the following design, which will create a new site with edge routers, a single /24 prefix and two circuits for the site:

---
sites:
  - name: "LWM1"
    status__name: "Staging"
    prefixes:
      - prefix: "10.37.27.0/24"
        status__name: "Reserved"
    devices:
      - name: "LWM1-LR1"
        status__name: "Planned"
        device_type__model: "C8300-1N1S-6T"
        device_role__name: "Edge Router"
        interfaces:
          - name: "GigabitEthernet0/0"
            type: "1000base-t"
            description: "Uplink to backbone"
            status__name: "Planned"
      - name: "LWM1-LR2"
        status__name: "Planned"
        device_type__model: "C8300-1N1S-6T"
        device_role__name: "Edge Router"      
        interfaces:
          - name: "GigabitEthernet0/0"
            type: "1000base-t"
            description: "Uplink to backbone"
            status__name: "Planned"

circuits:
  - cid: "LWM1-CKT-1"
    status__name: "Planned"
    provider__name: "NTC"
    type__name: "Ethernet"
    terminations:
      - term_side: "A"
        site__name: "LWM1"
      - term_side: "Z"
        provider_network__name: "NTC-WAN"

  - cid: "LWM1-CKT-2"
    status__name: "Planned"
    provider__name: "NTC"
    type__name: "Ethernet"
    terminations:
      - term_side: "A"
        site__name: "LWM1"
      - term_side: "Z"
        provider_network__name: "NTC-WAN"

This is still quite a bit of information to write. Luckily, the Design Builder application can consume Jinja templates to produce the design files. Using some Jinja templating, we can reduce the above design a bit:


---
sites:
  - name: "LWM1"
    status__name: "Staging"
    prefixes:
      - prefix: "10.37.27.0/24"
        status__name: "Reserved"
    devices:
    {% for i in range(1, 3) %}
      - name: "LWM1-LR{{ i }}"
        status__name: "Planned"
        device_type__model: "C8300-1N1S-6T"
        device_role__name: "Edge Router"
        interfaces:
          - name: "GigabitEthernet0/0"
            type: "1000base-t"
            description: "Uplink to backbone"
            status__name: "Planned"
    {% endfor %}
circuits:
  {% for i in range(1, 3) %}
  - cid: "LWM1-CKT-{{ i }}"
    status__name: "Planned"
    provider__name: "NTC"
    type__name: "Ethernet"
    terminations:
      - term_side: "A"
        site__name: "LWM1"
      - term_side: "Z"
        provider_network__name: "NTC-WAN"
  {% endfor %}

The above design file gets closer to a re-usable design. It has reduced the amount of information we have to represent by leveraging Jinja2 control structures, but there is still statically defined information. At the moment, the design includes hard-coded site information (the site name, device names, and circuit IDs) as well as a hard-coded IP prefix. Design Builder also provides a way for this information to be gathered dynamically. Fundamentally, all designs are just Nautobot Jobs. Therefore, a design Job can include user-supplied vars that are then copied into the Jinja2 render context. Consider the design job for our edge site design:

# Import paths shown are typical for Nautobot apps and may vary by version.
from nautobot.extras.jobs import IPNetworkVar, StringVar
from nautobot_design_builder.design_job import DesignJob


class EdgeDesign(DesignJob):
    """A basic design for design builder."""
    site_name = StringVar(label="Site Name", regex=r"\w{3}\d+")
    site_prefix = IPNetworkVar(label="Site Prefix")

#...

This design Job collects a site_name variable as well as a site_prefix variable from the user. Users provide values for these variables through the normal Job launch entrypoint:

Once the job has been launched, Design Builder will provide these input variables to the Jinja2 rendering context. The variable names within the Jinja2 template will match the attribute names used in the Design Job class. With the site_name and site_prefix variables now defined dynamically, we can produce a final design document using them:

---

sites:
  - name: "{{ site_name }}"
    status__name: "Staging"
    prefixes:
      - prefix: "{{ site_prefix }}"
        status__name: "Reserved"
    devices:
    {% for i in range(1, 3) %}
      - name: "{{ site_name }}-LR{{ i }}"
        status__name: "Planned"
        device_type__model: "C8300-1N1S-6T"
        device_role__name: "Edge Router"
        interfaces:
          - name: "GigabitEthernet0/0"
            type: "1000base-t"
            description: "Uplink to backbone"
            status__name: "Planned"
    {% endfor %}
circuits:
  {% for i in range(1, 3) %}
  - cid: "{{ site_name }}-CKT-{{ i }}"
    status__name: "Planned"
    provider__name: "NTC"
    type__name: "Ethernet"
    terminations:
      - term_side: "A"
        site__name: "{{ site_name }}"
      - term_side: "Z"
        provider_network__name: "NTC-WAN"
  {% endfor %}

The design render context is actually much more flexible than simple user entry via script vars. Design Builder provides a complete system for managing the render context, including loading variables from YAML files and providing dynamic content via Python code. The official documentation covers all of the capabilities of the design context.

In addition to the YAML rendering capabilities, Design Builder includes a way to perform just-in-time operations while creating and updating Nautobot objects. For instance, in the above example, the site prefix is specified by the user that launches the job. It may be desirable for this prefix to be auto-assigned and provisioned out of a larger parent prefix. Design Builder provides a means to perform these just-in-time lookups and calculations in the form of something called an “action tag”. Action tags are evaluated during the object creation phase of a design’s implementation. That means that database lookups can occur and computations can take place as the design is being implemented. One of the provided action tags is the next_prefix action tag. This tag accepts query parameters to find a parent prefix, and also a parameter that specifies the length of the required new prefix. For example, if we want to provision a /24 prefix from the 10.0.0.0/16 parent, we could use the following:

prefixes:
  - "!next_prefix":
      prefix: "10.0.0.0/16"
      length: 24
    status__name: "Active"

The next_prefix action tag will find the parent prefix 10.0.0.0/16 and look for the first available /24 in that parent. Once found, Design Builder will create that child prefix with the status Active.
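Conceptually, the search that next_prefix performs resembles the following stdlib sketch. The real action tag queries prefixes already recorded in Nautobot's database rather than taking a literal list:

```python
import ipaddress

def next_prefix(parent, length, allocated):
    """Return the first /length subnet of parent that overlaps no allocated prefix."""
    taken = [ipaddress.ip_network(p) for p in allocated]
    for candidate in ipaddress.ip_network(parent).subnets(new_prefix=length):
        if not any(candidate.overlaps(t) for t in taken):
            return candidate
    raise ValueError(f"No free /{length} available in {parent}")

# With 10.0.0.0/24 and 10.0.1.0/24 already in use, the first free /24 is 10.0.2.0/24.
print(next_prefix("10.0.0.0/16", 24, ["10.0.0.0/24", "10.0.1.0/24"]))  # 10.0.2.0/24
```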

Several action tags are provided out of the box, but one of the most powerful features of Design Builder is the ability to include custom action tags in a design. Action tags are implemented in Python as specialized classes, and can perform any operation necessary to produce a just-in-time result.

There is quite a lot to understand with Design Builder, and we have only touched on a few of its capabilities. While there are several moving parts, the following diagram illustrates the high-level process that the Design Builder application uses to go from design files and templates to an implemented design.

Design Builder starts with some optional input variables from the Nautobot job and combines them with optional context variables written either in YAML or Python or both. This render context is used by the Jinja2 renderer to resolve variable names in Jinja2 templates. The Jinja2 templates are rendered into YAML documents that are unmarshaled as Python dictionaries and provided to the Builder. The Builder iterates all of the objects in this dictionary and performs necessary database creations and updates. In the process of creating and updating objects, any action tags that are present are evaluated. The final result is a set of objects in Nautobot that have been created or updated by Design Builder.
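The final build step can be pictured as a recursive walk over that unmarshaled dictionary. This is a drastically simplified sketch; the real Builder maps keys onto Django ORM models and relationship fields rather than plain dicts:

```python
def build(model_name, attrs, parent=None, created=None):
    """Recursively 'create' objects from a nested design dict. List-of-dict
    values are treated as child models; everything else is a plain attribute.
    Children automatically receive a reference to their parent."""
    created = created if created is not None else []
    record = {"model": model_name, "parent": parent}
    children = []
    for key, value in attrs.items():
        if isinstance(value, list) and value and isinstance(value[0], dict):
            children.append((key, value))   # nested child objects
        else:
            record[key] = value             # plain attribute
    created.append(record)
    for child_model, items in children:
        for item in items:
            build(child_model, item, parent=record["name"], created=created)
    return created

# The device/interface/IP hierarchy from the earlier YAML example.
design = {"name": "Router 1", "interfaces": [{"name": "GigabitEthernet0",
          "ip_addresses": [{"address": "192.168.0.1/24"}]}]}
for obj in build("devices", design):
    print(obj["model"], obj.get("name") or obj.get("address"))
# devices Router 1
# interfaces GigabitEthernet0
# ip_addresses 192.168.0.1/24
```

Note how the parent/child links fall out of the traversal order, which is exactly why the YAML author never has to track primary or foreign keys.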

Roadmap

Our plans for Design Builder are far from over. There are many more features we’re currently working on, as well as some that are still in the planning stages. Some of the near-term features include design lifecycle and object protection.

The design lifecycle feature allows the implementations of a design to be tracked. Design instances can be created (such as an instance of the edge site design above) and can be subsequently decommissioned. Objects that belong to a design instance will be reverted to their state prior to the design implementation, or they may be removed entirely (if created specifically for a design). Designs can also track inter-design dependencies so that a design cannot be decommissioned if other design instances depend on it. The design lifecycle feature will also allow designs to be versioned so that an implementation can be updated over time.

The ability to protect objects that belong to a design is also planned. The idea is that if an object is created as part of a design implementation, any attributes that were initially set in this design cannot be updated outside of that design’s lifecycle. This object protection assures that our source of truth has data that complies with a design and prevents manually introduced errors.


Conclusion

Design Builder is a great tool that ensures your network designs are used for every deployment, and simplifies populating data in Nautobot along the way. It provides a streamlined way to represent hierarchical relationships, with a clear syntax and concepts that should be familiar to those who have started to embark on their NetDevOps journey. I encourage you to try it out.

-Andrew, Christian and Paddy




Introducing Cookiecutter Project Templates to Support Nautobot App Development for Network Automation


In June of 2022, Network to Code announced the open sourcing of a Cookiecutter project containing a project template for ChatOps integrations to help in the initial bootstrapping of a Nautobot ChatOps App. The Nautobot ChatOps Cookiecutter announcement post can be found here. This first cookie helped to lower the barrier to entry for developing new ChatOps integrations and made it possible to have baseline standards on things like local development environments, docs structure (with docs already built for installation), administration, and contributing.

Two New Cookies and a New Home for Another

Today we are announcing that we are doubling down on the benefits of Cookiecutter and open sourcing a new Cookiecutter repository that contains three separate cookies. Two of the cookies are newly open sourced, and the original ChatOps cookie is getting a new home within this same repository!

Nautobot App

The Nautobot App cookie is broadly applicable to most Nautobot Apps you may develop and provides the initial scaffolding for building additional models for your Nautobot App. When baking the cookie, you are prompted for a Model Class Name. If provided, a simple model class will be created with name and description fields, along with all the standard components for a fully functioning set of UI and API views.

Nautobot App SSoT

The Nautobot App SSoT cookie is a superset of the Nautobot App cookie: it provides the same ability to automatically generate a model with all the components required for a fully functioning set of UI and API views to support a network source of truth. In addition to the features provided by the Nautobot App cookie, this cookie also builds out the Network to Code recommended folder and file structure for developing an SSoT App, as well as the required Nautobot SSoT Jobs (it creates both Data Source and Data Target Jobs). This includes the initial creation of DiffSync adapters, models, and utils, along with their use in the Nautobot SSoT Jobs.

Nautobot App ChatOps

The Nautobot App ChatOps cookie is also a superset of the Nautobot App cookie; it additionally provides a base command and a hello_world subcommand, along with the required settings in pyproject.toml that inform the Nautobot ChatOps App that an additional base command is registered as part of this app. This cookie was previously open sourced as its own Cookiecutter repository but has now been migrated to the new repository, and the old repository has been archived.

Why the New Repository?

As the number of open source projects Network to Code maintains continues to expand, we are evaluating how to be as effective as possible in the care and feeding required to maintain any open source project. This includes the continual maintenance of standards for docs structure, development environment interfaces, and CI workflows. The new repository allows us to enforce the same standard across all three cookies through the use of symbolic links: a change to a single file immediately applies to every cookie in the repository. This prevents drift between the cookies, as the symbolic links all point back to one overarching standard!

Codified Standards

With the open sourcing of more cookies for Nautobot Apps, Network to Code has also built an internal Drift Management process that is already helping to keep Nautobot Apps maintained by Network to Code all at the same standard, no matter when the cookie was baked.


Conclusion

This improves the developer experience across all of our Official Nautobot Apps and ensures consistent standards for testing, basic docs, and CI!

-Jeremy White


