Introducing the New Device Onboarding App


As network automation becomes more popular and companies adopt Nautobot as the Network Source of Truth (NSoT) component within their reference architecture, the next crucial problem to solve is data population, which unlocks the quick automation wins that upper management wants to see! The starting point for most organizations is devices. Until now, that process was likely a mix of manual population, CSV imports, nautobot-app-device-onboarding, and, to extend that data further, the Python library network-importer. All of these methods have their own pros and cons, but one of the most common asks was to make onboarding devices into Nautobot easier and more flexible. Introducing the Device Onboarding app 4.0!

This revamp of the Onboarding app exposes two new SSoT jobs to simplify the device onboarding process. The first job onboards basic device information from an IP address. The second job extends the gathered data by pulling in interface data, including VLANs, VRFs, IP addresses (creating prefixes if needed), descriptions, and more! Onboarding 4.0 retains the original implementation for users who rely on that framework, so you can try out the new features while keeping your existing workflow. I will discuss the new release in more detail throughout this blog post.

Why?

Populating a device inventory into Nautobot takes time. The time commitment is multiplied by the need for a number of different methods, applications, and libraries just to get a decent level of metadata assigned to devices. Onboarding 4.0 addresses these and additional concerns as outlined below.

  • The original OnboardingTask job in the plugin was capable of getting only basic device data into Nautobot.
  • Setting up network-importer as an external program felt disjointed and required additional infrastructure resources.
    • The dependency on Batfish was a challenge, as it required both Batfish and Docker to be deployable in the environment.
    • The diffsync dependency didn’t have access to many of the new “contrib” features that nautobot-app-ssot exposes.
  • Adding new support for additional operating systems and data was difficult.
    • Extending an existing platform’s capabilities required additional Python modules to be installed into the environment.
      • The same challenge existed for adding new platform support.
  • The original Onboarding extension framework required a custom app and/or Python library to be available in the environment, which, depending on the deployment method used, can result in delays and complications.

What About the Original Extension Framework?

The original OnboardingTask job and its extension framework will remain available in Onboarding 4.0. We understand that this application has been around since the release of Nautobot, and many users have invested resources into extending the application using the original framework. A deprecation of the OnboardingTask job is planned for the future, but for now the only change users of the original extension framework need to be aware of is that this job is now hidden by default.

To find the hidden job, navigate to Jobs -> Jobs, click the Filter button, and select “hidden=Yes”.

Revealing the hidden job will allow you to run it and edit job attributes as usual.

First, enable the job.

Then, if you want the job visible going forward, override its default hidden property to un-hide it.

The New SSoT Jobs Explained

The biggest change implemented in the 4.0 release is the use of the Single Source of Truth (SSoT) framework. The SSoT app (nautobot-app-ssot) uses a combination of diffsync, SSoT contrib, and other tools to diff inputs from disparate data sources and then sync data between those systems. This allows us to not only onboard device data but compare and update as needed. There are two new SSoT jobs to accomplish this.
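
To make this concrete, here is a toy diffsync sketch. It uses the diffsync library’s public API, but the model and adapter classes are invented for illustration; they are not the app’s actual models.

from diffsync import DiffSync, DiffSyncModel

class Device(DiffSyncModel):
    _modelname = "device"
    _identifiers = ("name",)
    _attributes = ("serial",)

    name: str
    serial: str = ""

class NautobotAdapter(DiffSync):
    device = Device
    top_level = ["device"]

class NetworkAdapter(DiffSync):
    device = Device
    top_level = ["device"]

nautobot, network = NautobotAdapter(), NetworkAdapter()
network.add(Device(name="core-sw-01", serial="ABC123"))  # invented device

diff = nautobot.diff_from(network)  # what Nautobot is missing or has wrong
print(diff.summary())  # e.g., {"create": 1, "update": 0, ...}
nautobot.sync_from(network)  # apply the changes on the Nautobot side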

  • Sync devices from network – Mimics what the original onboarding task did, creating the device(s) along with serial number, management IP address, and management interface.
  • Sync data from network – Mimics what the old NTC library network-importer did: syncs interfaces along with their MTU, description, IP address, type, status, etc. Toggle options let you also sync VRFs (assigning them to interfaces) and VLANs (assigning tagged/untagged VLANs to ports).

How It Works

This section will describe the newer SSoT jobs that this App exposes and how they work.

Frameworks in Use

  • Nautobot SSoT – Utilizing the existing Nautobot SSoT framework allows a common pattern to be reused and offers a path forward to add additional support and features.
  • Nautobot App Nornir – Utilized for Nornir Inventory plugins for Nautobot (specifically for Sync Network Data Job).
  • Nornir Netmiko – Used to execute commands and return results.
  • jdiff – Used to simplify extracting the required data fields from command output returned by parser libraries like TextFSM; specifically, the extract_data_from_json method (a short example follows this list).
  • Parsers – Initially NTC Templates via textFSM, but support for pyATS, TTP, etc. is planned for the future.
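
As a quick illustration of the jdiff piece, here is a minimal sketch. The parsed data is made up, but extract_data_from_json is the real jdiff function:

from jdiff import extract_data_from_json

# TextFSM typically returns a list of dicts, one per table row.
parsed = [{"hostname": "core-sw-01", "version": "17.3.5", "serial": "ABC123"}]

print(extract_data_from_json(parsed, "[*].hostname"))
# ["core-sw-01"]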

YAML Definition DSL

The key extensibility feature in the new release is the ability to add new platform support by creating a single YAML definition file. The application comes with some logical defaults, but these can be overloaded and new platforms can be added via Git repositories.

File Format

Let’s review a few of the components of the file:

  • ssot job name – Name of the job the commands and metadata are defined for (choices: sync_devices or sync_network_data).
  • root key data name – Fully defined in the schema definition.
  • commands – List of commands to execute in order to get the required data.
  • command – Actual show command to execute.
  • parser – The parser to use (TextFSM, pyATS, TTP, etc.). Alternatively, none can be used if the platform supports some other method of returning structured data, e.g., | display json or an equivalent.
  • jpath – The JMESPath expression (specifically jdiff’s implementation) used to extract the data from the parsed JSON returned by the parser.
  • post_processor – Jinja2-capable code to further transform the returned data after jpath extraction.
  • iterable_type – An optional value to enforce type casting.

As an example:

---
sync_devices:
  hostname:
    commands:
      - command: "show version"
        parser: "textfsm"
        jpath: "[*].hostname"
        post_processor: ""
..omitted..

How the SSoT Sync Devices From Network Job Works

  1. The job is executed with inputs selected.
    • List of comma-separated IP/DNS names is provided.
    • Other required fields are selected in the job inputs form.
  2. The SSoT framework loads the Nautobot adapter information.
  3. The SSoT framework’s network adapter load() method calls Nornir functionality.
    • The job input data is passed to the InitNornir initializer. Because we have only basic information at this point, a custom EmptyInventory Nornir inventory plugin is packaged with the App; InitNornir initializes it as a true, but empty, inventory.
    • Since platform information may need to be auto-detected before a Nornir Host object can be added to the inventory, a create_inventory function uses Netmiko’s SSH autodetect to determine the platform and inject it into the Host object.
    • Finally, all the platform-specific commands to run, plus all the jpath and post_processor information loaded from the platform-specific YAML files, must be injected into the Nornir data object to be accessible later in the extract/transform functions.
  4. Within the code block of a Nornir with_processor context manager, call the netmiko_send_commands Nornir task.
    • Access the loaded platform-specific YAML data and deduplicate commands to avoid running the same command multiple times, e.g., when multiple required data attributes come from the same show command.
  5. Utilize a native Nornir Processor to overload functionality on task_instance_completed() and run command outputs through the extract and transformation functions (a condensed sketch follows this list).
    • This is essentially the “ET” portion of an “ETL” (Extract, Transform, Load) process.
    • Next, the parsed result from the show command (e.g., from TextFSM) is run through the jdiff function extract_data_from_json() with the data and the jpath from the YAML file definition.
    • Finally, an optional post_processor Jinja2-capable execution can further transform the data for that command before passing it on to finish the SSoT synchronization.
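
Below is a condensed sketch of that Processor pattern. This is not the app’s actual implementation: the class name and the command_info structure are invented, but the hook names follow Nornir’s Processor interface and extract_data_from_json is the real jdiff call.

from jdiff import extract_data_from_json

class ExtractTransformProcessor:
    """Run parsed command output through the extract ("ET") steps."""

    def __init__(self, command_info):
        # field -> {"jpath": ..., "post_processor": ...}, loaded from the YAML file
        self.command_info = command_info

    # The remaining Processor hooks are no-ops in this sketch.
    def task_started(self, task): ...
    def task_completed(self, task, result): ...
    def task_instance_started(self, task, host): ...
    def subtask_instance_started(self, task, host): ...
    def subtask_instance_completed(self, task, host, result): ...

    def task_instance_completed(self, task, host, result):
        parsed = result[0].result  # structured data, e.g., from TextFSM
        for field, definition in self.command_info.items():
            # Extract the field via its jpath and stash it on the host for loading.
            host.data[field] = extract_data_from_json(parsed, definition["jpath"])

With plain Nornir, such a processor would be attached via nr.with_processors([ExtractTransformProcessor(yaml_defs)]) before the task runs.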

How the SSoT Sync Network Data From Network Job Works

For those looking to deep dive into the technical details or troubleshooting, here is how it works:

  1. The job is executed with inputs selected.
    • One or more devices are selected.
    • Other required fields are selected in the job inputs form.
    • Toggle certain metadata booleans to True if you want more data synced.
  2. The SSoT framework loads the Nautobot adapter information.
  3. The SSoT framework’s network adapter load() method calls Nornir functionality.
    • The job input data is passed to the InitNornir initializer. Because the devices now exist in Nautobot, the NautobotORMInventory Nornir inventory plugin from nautobot-plugin-nornir is used.
    • Finally, all the platform-specific commands to run plus all the jpath post_processor information loaded from the platform-specific YAML files must be injected into the Nornir data object to be accessible later in the extract/transform functions.
  4. Within the code block of a Nornir with_processor context manager, call the netmiko_send_commands Nornir task.
    • Access the loaded platform-specific YAML data and deduplicate commands to avoid running the same command multiple times, e.g., when multiple required data attributes come from the same show command.
  5. Utilize a native Nornir Processor to overload functionality on subtask_instance_completed() and run command outputs through the extract and transformation functions.
    • This is essentially the “ET” portion of an “ETL” (Extract, Transform, Load) process.
    • Next, the parsed result from the show command (e.g., from TextFSM) is run through the jdiff function extract_data_from_json() with the data and the jpath from the YAML file definition.
    • Finally, an optional post_processor Jinja2-capable execution can further transform the data for that command before passing it on to finish the SSoT synchronization.

Extending Platform Support

Support can be extended by adding a file that parses data into the proper schema. A new Git datasource is exposed that allows the included YAML files to be overwritten, or new platform support to be added, for maximum flexibility.

For simplicity, merging was not implemented for the Git repository functionality; any file loaded in from a Git repo is preferred. If a file in the repo matches one the app exposes by default, e.g., cisco_ios.yml, the entire file from the repo is preferred. Keep in mind that if you’re going to overload a platform exposed by the app, you must overload the full file; no merge happens between two files with the same name. Additionally, Git can be used to add new support. For example, if you have Aruba devices in your environment and want to onboard them, simply create a Git repo containing a custom YAML file named aruba_osswitch.yml, and you’ve just added Aruba support to your environment.

Files must be named <network_driver_name>.yml. See the configured choices in the Nautobot UI under a platform definition.

Even better, follow that up with a PR into the main application!


Conclusion

As the device onboarding application continues to mature, we expect to add further platform support to the defaults the app exposes. We hope the new DSL- and YAML-based extension framework makes it quick and easy to add support and load it in via Git.

Happy automating!

-Jeff, David, Susan




Introduction to Event-Driven Ansible and Nautobot


At Network to Code, we are continually working on new solutions to extend automation capabilities for our customers. One project that I recently worked on used Event-Driven Ansible, or EDA, to simplify the process of automating other systems based on changes in Nautobot. This blog post will cover the basics of EDA, and how we used it to update ServiceNow CMDB records based on changes in Nautobot.

What Was the Problem We Were Trying to Solve?

The customer is using ServiceNow as their CMDB and Nautobot as their source of truth for network infrastructure. They wanted to be able to update ServiceNow records when changes were made in Nautobot. For example, when a device is added to Nautobot, they wanted to create a corresponding record in ServiceNow. There are other systems that we are integrating with Nautobot using EDA, but for this blog post we will focus on ServiceNow. Any system with an API or Ansible Galaxy role/collection can be integrated with Nautobot using EDA.

What Is Event-Driven Ansible?

Event-Driven Ansible was developed by Red Hat to allow listening to events from various sources and then taking action on those events. Rulebooks define three components: sources, rules, and actions.

  • Sources – where the events are coming from. This can be a webhook, Kafka, Azure Service Bus, or other sources.
  • Rules – define the conditions that must be met for an action to be taken.
  • Actions – an action is commonly running a local playbook, but could also be generating an event, running a job template in AAP, or other actions.

How Did We Use EDA to Update ServiceNow Based on an Event from Nautobot?

We developed a small custom plugin for Nautobot that utilizes Nautobot Job Hooks to publish events to an Azure Service Bus queue. An added benefit of using ASB as our event bus was that Event-Driven Ansible already had a source listener plugin built for ASB, so no additional work was needed! See event source plugins. This allows us to initiate the connection from Nautobot and send events to Azure Service Bus whenever changes are made in Nautobot.

The flow of events is as follows:

  1. Nautobot device create (or update, delete) triggers a Job Hook.
  2. A Nautobot App receives the Job Hook event and publishes the payload to the defined Azure Service Bus queue (a minimal sketch of this publishing step follows this list).
  3. Ansible EDA source plugin connects and subscribes to the Azure Service Bus queue and listens for events.
  4. EDA runs Ansible playbook to update ServiceNow.
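
For step 2, the publishing side can stay quite small. Here is a minimal sketch using the azure-servicebus Python library; this is not the plugin’s actual code, and the connection string and queue name are placeholders:

import json

from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONNECTION_STR = "<asb-connection-string>"  # placeholder
QUEUE_NAME = "nautobot-events"  # placeholder

def publish_event(payload: dict) -> None:
    """Publish a Job Hook payload to the Azure Service Bus queue."""
    with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
        with client.get_queue_sender(QUEUE_NAME) as sender:
            sender.send_messages(ServiceBusMessage(json.dumps(payload)))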

What Do the Rulebooks and Playbooks Look Like?

Below is an example of a basic rulebook we are using. This rulebook will run the playbook add_device_to_servicenow.yml when a device is created in Nautobot.

Rulebook

---
- name: "LISTEN TO ASB QUEUE"
  hosts: localhost
  sources:
    - ansible.eda.azure_service_bus:
        connection_string: ""
        queue_name: ""

  rules:
    - name: "ADD DEVICE TO SERVICENOW"
      condition: "event.body.data.action == 'create'"
      action:
        run_playbook:
          name: "add_device_to_servicenow.yml"
          verbosity: 1

You can add different sources, conditions, and rules as needed. Any information that you can extract from the event can be used in the condition.

Playbook

---
- name: "ADD DEVICE TO SERVICENOW"
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: "ADD DEVICE TO SERVICENOW"
      servicenow.servicenow.snow_record:
        state: present
        table: "cmdb_ci_netgear"
        instance: ""
        username: ""
        password: ""
        data:
          name: ""
          description: ""
          serial_number: ""
          model_id: ""
          manufacturer_id: ""

Playbooks are structured as normal, with the addition of the event variable, which contains the event data sent from Nautobot. In this example, we use event.body.data to extract the device name, description, serial number, model, and manufacturer.

In the above example, we used the ServiceNow Ansible Collection to update ServiceNow. You can use any Ansible module, role, or collection to update the system you are integrating with Nautobot. One of the systems I was updating did not have an Ansible module, so I used the uri module to make API calls to the system.

Conclusion

Event-Driven Ansible is a powerful tool that can be used to integrate Nautobot with other systems. It can solve the very real problem of keeping multiple systems in sync and can be used to automate many different tasks. Feel free to join us at the Network to Code Slack channel to discuss this and other automation topics.

-Susan




Getting Started with Python Network Libraries for Network Engineers – Part 2


In the first part of this series, we looked at Netmiko. In this blog, we’ll look at NAPALM, another library that is available to address these challenges. This post takes a look at the basics of NAPALM and how it can be used to interact with network devices, with a focus on data collection.

The core platforms currently supported by NAPALM are IOS, EOS, NXOS, and IOS-XR. See the support matrix for more detailed information on platform support and back-end libraries.

NAPALM 101 – Demo

I will be using a Cisco DevNet Sandbox to demonstrate the basic setup and use of NAPALM in a network environment. The sandbox is free to use – there are shared and dedicated sandbox environments you can use to try out new technologies.

For this tutorial, I am connected via SSH to a VM that has SSH access to a Cisco IOS XR router. On the VM, install the NAPALM library.

pip install napalm

In the Python shell, you can directly connect to the router using a few lines of code. The driver is selected based on which networking device you are connecting to, in this instance “ios”. See the docs on supported devices.

>>> import napalm
>>>
>>> driver = napalm.get_network_driver("ios")
>>> device = driver(hostname="10.10.20.70", username="admin", password="admin", optional_args={"port": 2221})
>>> device.open()

Getters

The power of NAPALM is built on the getters: Python methods written to return structured data in a normalized format. Using the getters, you can retrieve information from a networking device and interact with it programmatically in Python. Using the json Python library, you can format the returned data to be more readable. Below is an example using the get_interfaces_ip() getter.

>>> import json
>>> output = device.get_interfaces_ip()
>>> print(json.dumps(output, indent=4))
{
    "MgmtEth0/RP0/CPU0/0": {
        "ipv4": {
            "192.168.122.21": {
                "prefix_length": 24
            }
        },
        "ipv6": {}
    },
    "Wed": {
        "ipv6": {}
    },
    "GigabitEthernet0/0/0/4": {
        "ipv6": {}
    },
    "GigabitEthernet0/0/0/2": {
        "ipv6": {}
    },
    "GigabitEthernet0/0/0/3": {
        "ipv6": {}
    },
    "GigabitEthernet0/0/0/0": {
        "ipv6": {}
    },
    "GigabitEthernet0/0/0/1": {
        "ipv6": {}
    }
}

After you are finished making changes or gathering information, don’t forget to close the connection.

>>> device.close()

There are many other useful getters, such as get_bgp_neighbors, get_arp_table, ping, and traceroute. These have been built and improved upon with community support. Information on contributing to the NAPALM library can be found here.
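
If you want to try a few of these, continuing the session from the demo above (return structures vary by getter and platform):

>>> bgp = device.get_bgp_neighbors()
>>> arp = device.get_arp_table()
>>> ping_result = device.ping(destination="192.168.122.1")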

Additional Functionality

In addition to show command functionality, there is also support for configuration changes on network devices. For most supported platforms, there are methods to merge or replace config, and to compare changes before you commit. See the support matrix for platform support information.
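
Here is a minimal sketch of that workflow, continuing the session from above. The candidate.cfg file is a hypothetical local file, and your platform must support these operations:

>>> device.load_merge_candidate(filename="candidate.cfg")
>>> print(device.compare_config())  # review the diff before committing
>>> device.commit_config()          # or device.discard_config() to abandon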

Extending Support

The final item I’d like to touch on is the extensibility of NAPALM. If there is a method that exists but does not return data in the structure you need, you can extend the driver. Extending a NAPALM driver allows you to write custom methods in Python to enhance your structured data response.
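
As a small sketch of the pattern (the getter name and command are invented for illustration), you subclass an existing driver and add your own method:

from napalm.ios.ios import IOSDriver

class CustomIOSDriver(IOSDriver):
    """IOS driver extended with a custom getter."""

    def get_my_banner(self):
        # Send a raw command and shape the response into the structure we need.
        output = self._send_command("show banner motd")
        return {"banner": output.strip()}

An instance of CustomIOSDriver can then be opened and used just like the stock driver.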

Outside of the main NAPALM library, there is community support for additional drivers, such as the NAPALM PANOS driver in the NAPALM community GitHub.


Conclusion

NAPALM is a robust tool for network automation and benefits from active open-source contributions. Take a look at the GitHub repository for additional information on support and contributing, and to see what’s happening in the community.

-Susan

New to Python libraries? NTC’s Training Academy is holding a 3-day course Automating Networks with Python I on September 26-28, 2022 with 50% labs to get you up to speed.
Visit our 2022 public course schedule to see our full list.


