Update Your Ansible Nautobot Environment & Helm Chart


With the release of Nautobot 2.1.9 and 1.6.16 came a new requirement: pynautobot must now include an authentication token on some initial API calls that previously did not require one. So, to make sure that pynautobot (and subsequently Nautobot Ansible) and the Nautobot Helm Chart work with the most recent versions of Nautobot, new versions of each have been released.

pynautobot & Nautobot Ansible

First, to check which version of pynautobot you have, run pip list in that environment. Here is an example of using grep to filter the output for pynautobot.

❯ pip list | grep pynautobot
pynautobot         2.0.2

Nautobot 1.6 Environments

If you are continuing on the 1.6 LTM release train, your pynautobot needs to be upgraded to 1.5.2 in order to continue using the Ansible modules (4.5.0). No update to the Ansible modules themselves is required, only the underlying pynautobot version. Complete this with:

pip install pynautobot==1.5.2

Accidental Upgrade to 2.x of pynautobot?

If you accidentally upgraded to the latest version of pynautobot but intended to stay on 1.x, just issue the same command as above and you will get back to the right version. Nothing further needs to be done, and no harm is done.

pip install pynautobot==1.5.2

Nautobot 2.1 Environments

For those on the latest Nautobot application version of 2.1.9, please upgrade the pynautobot installation in your Ansible environment to the latest release, 2.1.1:

pip install --upgrade pynautobot

Nautobot Helm Chart

First, to check which version of the Nautobot Helm Chart you have configured, run helm show chart nautobot/nautobot to get the full information about the configured chart. You will see multiple version fields in the output; the chart version that matters is the last line of the output, a root-level key in the YAML.

❯ helm show chart nautobot/nautobot
annotations:

... Truncated for brevity ...

sources:
- https://github.com/nautobot/nautobot
- https://github.com/nautobot/helm-charts
version: 2.0.5

Warning – READ BEFORE PROCEEDING

The latest version of the helm chart sets the default Nautobot version to 2.1.9. If you are NOT providing a custom image or statically declaring the version, you WILL be upgraded to 2.1.9. For more information on using a custom image, please see the documentation here; to use the Network to Code maintained images with a specific version, please ensure nautobot.image.tag is set to the tagged version you expect to use. Below are some examples of values.yaml provided to a helm release.

If you are on a 1.X.X version of the helm chart please review the upgrade guide here before proceeding.

Custom Image

nautobot:
  image:
    registry: "ghcr.io"
    repository: "my-namespace/nautobot"
    tag: "1.6.16-py3.11"
    pullPolicy: "Always"
    pullSecrets:
      - ghcr-pull-secret

Network to Code Image

nautobot:
  image:
    tag: "1.6.16-py3.11"

Update Helm Repo

Before you can use the new version of the helm chart, you must update the helm repo.

❯ helm repo update nautobot
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nautobot" chart repository
Update Complete. ⎈Happy Helming!⎈

Update Helm Release

Now you can proceed to update your helm release with the latest helm chart version.

❯ helm upgrade <name of helm release> nautobot/nautobot -f values.yml --version 2.1.0
Release "nautobot" has been upgraded. Happy Helming!
NAME: nautobot
LAST DEPLOYED: Wed Mar 27 20:09:47 2024
NAMESPACE: default
STATUS: deployed
REVISION: 3
NOTES:
*********************************************************************
*** PLEASE BE PATIENT: Nautobot may take a few minutes to install ***
*********************************************************************

... Truncated for brevity ...

Conclusion

When issues arise in playbooks that were previously working fine, it's best to give your dependency packages a quick update. We hope this helps. Happy automating.

-Josh, Jeremy




Introduction to Event-Driven Ansible and Nautobot


At Network to Code, we are continually working on new solutions to extend automation capabilities for our customers. One project that I recently worked on used Event-Driven Ansible, or EDA, to simplify the process of automating other systems based on changes in Nautobot. This blog post will cover the basics of EDA, and how we used it to update ServiceNow CMDB records based on changes in Nautobot.

What Was the Problem We Were Trying to Solve?

The customer is using ServiceNow as their CMDB and Nautobot as their source of truth for network infrastructure. They wanted to be able to update ServiceNow records when changes were made in Nautobot. For example, when a device is added to Nautobot, they wanted to create a corresponding record in ServiceNow. There are other systems that we are integrating with Nautobot using EDA, but for this blog post we will focus on ServiceNow. Any system with an API or Ansible Galaxy role/collection can be integrated with Nautobot using EDA.

What Is Event-Driven Ansible?

Event-Driven Ansible was developed by Red Hat to allow listening to events from various sources and then taking action on those events using Rulebooks to define three components — sources, rules, and actions.

  • Sources — where the events are coming from. This can be a webhook, Kafka, Azure Service Bus, or other sources.
  • Rules — define the conditions that must be met for an action to be taken.
  • Actions — an action is commonly running a local playbook, but could also be generating an event, running a job template in AAP, or other actions. A skeletal example tying the three together follows this list.
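
To make these three components concrete, below is a skeletal rulebook sketch. The ansible.eda.webhook source is one of the built-in event sources; the payload field and playbook name are hypothetical.

- name: "EXAMPLE RULEBOOK"
  hosts: localhost
  sources:
    # source: listen for events posted to a local webhook
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    # rule: the condition an event must satisfy
    - name: "MATCH AN EVENT"
      condition: "event.payload.status == 'down'"
      # action: run a local playbook when the condition matches
      action:
        run_playbook:
          name: "remediate.yml"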

How Did We Use EDA to Update ServiceNow Based on an Event from Nautobot?

We developed a small custom plugin for Nautobot that utilizes Nautobot Job Hooks to publish events to an Azure Service Bus queue. An added benefit of using ASB as our event bus was that Event-Driven Ansible already had a source listener plugin built for ASB, so no additional work was needed! See event source plugins. This allows Nautobot to initiate the connection to Azure Service Bus and publish events to the queue whenever changes are made in Nautobot.

The flow of events is as follows:

  1. Nautobot device create (or update, delete) triggers a Job Hook.
  2. A Nautobot App receives the Job Hook event from Nautobot and publishes the payload to the defined Azure Service Bus queue.
  3. The Ansible EDA source plugin connects and subscribes to the Azure Service Bus queue and listens for events (an example event payload is sketched below).
  4. EDA runs Ansible playbook to update ServiceNow.
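
For illustration, an event arriving from the queue might look something like this (rendered as YAML; everything under data beyond the action key is hypothetical, since the exact shape depends on the Job Hook payload):

body:
  data:
    action: "create"
    model: "dcim.device"
    name: "nyc-rtr-01"
    serial: "ABC1234"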

What Do the Rulebooks and Playbooks Look Like?

Below is an example of a basic rulebook we are using. This rulebook will run the playbook add_device_to_servicenow.yml when a device is created in Nautobot.

Rulebook

---
- name: "LISTEN TO ASB QUEUE"
  hosts: localhost
  sources:
    - ansible.eda.azure_service_bus:
        connection_string: ""
        queue_name: ""

  rules:
    - name: "ADD DEVICE TO SERVICENOW"
      condition: "event.body.data.action == 'create'"
      action:
        run_playbook:
          name: "add_device_to_servicenow.yml"
          verbosity: 1

You can add different sources, conditions, and rules as needed. Any information that you can extract from the event can be used in the condition.
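
For instance, a second rule in the same rulebook could key off delete events (the playbook name here is hypothetical):

rules:
  - name: "REMOVE DEVICE FROM SERVICENOW"
    condition: "event.body.data.action == 'delete'"
    action:
      run_playbook:
        name: "remove_device_from_servicenow.yml"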

Playbook

---
- name: "ADD DEVICE TO SERVICENOW"
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: "ADD DEVICE TO SERVICENOW"
      servicenow.servicenow.snow_record:
        state: present
        table: "cmdb_ci_netgear"
        # Fill in your ServiceNow instance and credentials.
        instance: ""
        username: ""
        password: ""
        # Record fields go in the data dictionary. The field names under
        # event.body.data are illustrative; the exact keys depend on the
        # payload your Job Hook sends.
        data:
          name: "{{ event.body.data.name }}"
          description: "{{ event.body.data.description }}"
          serial_number: "{{ event.body.data.serial }}"
          model_id: "{{ event.body.data.platform }}"
          manufacturer_id: "{{ event.body.data.manufacturer }}"

Playbooks are structured as normal, with the addition of the event variable. This variable contains the event data that was sent from Nautobot. In this example, we use event.body.data to extract the device name, description, serial number, platform, and manufacturer.
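
While developing, it can help to dump the incoming event so you can see exactly which fields are available. A minimal task for that (note that, depending on your EDA version, the event may instead be exposed to playbooks as ansible_eda.event):

- name: "SHOW INCOMING EVENT"
  ansible.builtin.debug:
    var: event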

In the above example, we used the ServiceNow Ansible Collection to update ServiceNow. You can use any Ansible module, role, or collection to update the system you are integrating with Nautobot. One of the systems I was updating did not have an Ansible module, so I used the uri module to make API calls to the system.
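
As a sketch, such a uri-based task might look like the following; the endpoint URL, token variable, and payload fields are all hypothetical:

- name: "ADD DEVICE VIA REST API"
  ansible.builtin.uri:
    url: "https://other-system.example.com/api/devices"
    method: POST
    headers:
      Authorization: "Bearer {{ api_token }}"
    body_format: json
    body:
      name: "{{ event.body.data.name }}"
      serial: "{{ event.body.data.serial }}"
    status_code: [200, 201]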



Conclusion

Event-Driven Ansible is a powerful tool that can be used to integrate Nautobot with other systems. It can solve the very real problem of keeping multiple systems in sync and can be used to automate many different tasks. Feel free to join us at the Network to Code Slack channel to discuss this and other automation topics.

-Susan




Network Configuration Templating with Ansible – Part 4


In Part 3 of this series we looked at macros and filters with Jinja2 templating. Now it's time to ramp it up to the next level and look at something a little more like what you would see in the real world. In this post we will cover how to use complex Ansible inventories to create a hierarchy of configuration variables and how they are inherited down into the resulting configuration. Using variable inheritance and host groups gives you the ability to flexibly assign variables to devices in a dynamic fashion based on group membership. We will see this when we get to the example at the end, where we assign variables at different levels in the hierarchy. This way we can set common variables at a high level, then get more specific or override those variables at a lower point in the tree.

Ansible Inventory

In order to understand variable inheritance we will use a common example of a network with multiple regions, having sites within those regions. Ansible has the concept of host groups, which can consist of hosts or other groups. These groups provide the ability to run jobs or assign specific configuration parameters to specific groups of devices. For more advanced Ansible inventory documentation, check out the Ansible docs on inventory. Most of the advanced topics of Ansible inventory, such as dynamic inventory and using multiple inventory sources, are beyond the scope of this blog post.

We’ll start off with an inventory structure set up like this (for visualization):

└── regions
    ├── central
    │   └── sites
    │       ├── chicago
    │       ├── dallas
    │       └── minneapolis
    ├── eastern
    │   └── sites
    │       └── newyork
    └── western
        └── sites
            └── phoenix

Here we can see that regions is the top-level "grouping". Central, eastern, and western are the regions, with sites underneath each of those. One caveat: group names are global in Ansible, so each region's sites group needs a unique name (central_sites, eastern_sites, western_sites). If we reused the same sites group name under every region, Ansible would treat it as a single group that is a child of all three regions, and every host would inherit variables from every region. With that in mind, we can build an Ansible inventory in YAML that looks like this:

# inventory
all:
  children:
    regions:
      children:
        central:
          children:
            central_sites:
              children:
                chicago:
                  hosts:
                    chi-router1:
                dallas:
                  hosts:
                    dal-router1:
                minneapolis:
                  hosts:
                    min-router1:
        eastern:
          children:
            eastern_sites:
              children:
                newyork:
                  hosts:
                    new-router1:
        western:
          children:
            western_sites:
              children:
                phoenix:
                  hosts:
                    phe-router1:

Now, this looks a little scary, but it is only expanded because of the children keywords that denote child groups of the parent groups. This adds a few extra lines but should still be readable. In a production environment it would be best to set up a file structure similar to the one in the Ansible docs – inventory, where group variables go in their own files named for the group they are assigned to, rather than all in the same file. Another option is to use dynamic inventories, or Golden Config-style jobs within Nautobot, because this yaml file gets unwieldy rather quickly once you add a few sites and variables. In Nautobot (Golden Config plugin), you are able to create small yaml variable definitions; these are pulled into "config contexts" and linked to various objects in Nautobot like regions, tenants, sites, etc., with metadata. See the config context docs and Golden Config docs for more information. For brevity and simplicity, we will stick with a single inventory file in our examples in this post. One note: be careful with spacing/indentation in yaml; it is very important. Now that we have the basic inventory structure, we can move into assigning variables to various groupings.
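
For illustration, splitting the variables we are about to assign into group_vars files would look something like this (file names follow the group names used in the next section):

# group_vars/regions.yml
aaa_server: 4.4.4.4

# group_vars/central.yml
ntp_server: 1.1.1.1
dns_server: 1.1.2.1

# group_vars/chicago.yml
syslog_server: 1.1.1.2
dns_server: 1.1.2.2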

Variable Assignment

In our example we are going to assign (via the vars key) an AAA server for all regions, DNS and NTP servers per region, and a syslog server per site to each of the routers in our inventory. Then, in Chicago, we have a different DNS server that we want those devices to use instead of the regional one. Again, this is a pretty basic example, but you should begin to see the ways this can be used and adapted for different use cases and environments. One thing to note as you get into more advanced variable structures is that Ansible merges variable scopes in a specific order. This is beyond the scope of this post, but you can read more on it here: Ansible Variables – Merging. All we need to know for this example is that the lower down the tree a variable is assigned, the higher preference it is given, and it will override variables assigned at a higher level. In our example, the dns_server assigned at the central region will be overridden for hosts at the chicago site.

# inventory
all:
  children:
    regions:
      vars:
        aaa_server: 4.4.4.4
      children:
        central:
          vars:
            ntp_server: 1.1.1.1
            dns_server: 1.1.2.1
          children:
            central_sites:
              children:
                chicago:
                  vars:
                    syslog_server: 1.1.1.2
                    dns_server: 1.1.2.2
                  hosts:
                    chi-router1:
                dallas:
                  vars:
                    syslog_server: 1.1.1.3
                  hosts:
                    dal-router1:
                minneapolis:
                  vars:
                    syslog_server: 1.1.1.4
                  hosts:
                    min-router1:
        eastern:
          vars:
            ntp_server: 2.2.2.2
            dns_server: 2.2.3.2
          children:
            eastern_sites:
              children:
                newyork:
                  vars:
                    syslog_server: 2.2.2.3
                  hosts:
                    new-router1:
        western:
          vars:
            ntp_server: 3.3.3.3
            dns_server: 3.3.4.3
          children:
            western_sites:
              children:
                phoenix:
                  vars:
                    syslog_server: 3.3.3.4
                  hosts:
                    phe-router1:

We can run ansible-inventory --inventory=inventory.yaml --list to validate our inventory file and also view how the variables get merged and applied to hosts. The hostvars section shows the actual variables and values assigned to each host in the inventory based on the inherited values. We can see that chi-router1 has the same ntp_server as the other central region devices, but its dns_server is different. This is because the dns_server variable assigned to the chicago site overrides the one set by the central region. We can also see the aaa_server value is the same across all devices because it was assigned in the regions host group.

{
    "_meta": {
        "hostvars": {
            "chi-router1": {
                "aaa_server": "4.4.4.4",
                "ntp_server": "1.1.1.1",
                "dns_server": "1.1.2.2",
                "syslog_server": "1.1.1.2"
            },
            "dal-router1": {
                "aaa_server": "4.4.4.4",
                "ntp_server": "1.1.1.1",
                "dns_server": "1.1.2.1",
                "syslog_server": "1.1.1.3"
            },
            "min-router1": {
                "aaa_server": "4.4.4.4",
                "ntp_server": "1.1.1.1",
                "dns_server": "1.1.2.1",
                "syslog_server": "1.1.1.4"
            },
            "new-router1": {
                "aaa_server": "4.4.4.4",
                "ntp_server": "2.2.2.2",
                "dns_server": "2.2.3.2",
                "syslog_server": "2.2.2.3"
            },
            "phe-router1": {
                "aaa_server": "4.4.4.4",
                "ntp_server": "3.3.3.3",
                "dns_server": "3.3.4.3",
                "syslog_server": "3.3.3.4"
            }
        }
    }
    # extra lines omitted for brevity. Here you can see other information on how inventory is grouped.
}

Templating with Inherited Variables

Now that we have our variable and inventory structure created, we can start using that inventory to create configuration sections with Jinja2 templating. We will use the same playbook from Part 3 of this series (with the inventory.yaml file above) to generate a configuration snippet. We will make one minor change to the playbook: we're using hosts: regions, which targets the regions host group. If we had another group at the same level (under the all group) as regions, we could target the hosts in those groups separately. With an inventory file like the one below, the playbook would only run against hosts inside the regions group, not the datacenters group.

# extra_groups_inventory.yaml
all:
  children:
    regions:
      {omitted for brevity...}
    datacenters:
      {omitted for brevity...}

Now, to our example playbook, where we will target the regions group and generate configurations for the devices within that group.

# playbook.yaml
- name: Template Generation Playbook
  hosts: regions  # this is where we can limit the play to specific groups
  gather_facts: false

  tasks:
    - name: Generate template
      ansible.builtin.template:
        src: ./template.j2
        dest: ./configs/{{ inventory_hostname }}.cfg
      delegate_to: localhost

We will use a simple template to generate the configuration lines to set the dns server, ntp server, and logging server for a Cisco device.

# template.j2
hostname {{ inventory_hostname }}
ip name-server {{ dns_server }}
logging host {{ syslog_server }}
ntp server {{ ntp_server }}
tacacs-server host {{ aaa_server }} key mysupersecretkey

We can now run the playbook with the command ansible-playbook -i inventory.yaml playbook.yaml. First, we see below how the file structure looks prior to running the playbook, followed by the output of the run.

# before running the playbook
.
├── configs
├── inventory.yaml
└── playbook.yaml

❯ ansible-playbook -i inventory.yaml playbook.yaml

PLAY [Template Generation Playbook] **************************************************************************************************

TASK [Generate template] **************************************************************************************************
changed: [phe-router1 -> localhost]
changed: [new-router1 -> localhost]
changed: [min-router1 -> localhost]
changed: [chi-router1 -> localhost]
changed: [dal-router1 -> localhost]

PLAY RECAP **************************************************************************************************
chi-router1: ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
dal-router1: ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
min-router1: ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
new-router1: ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
phe-router1: ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

We can see the playbook generates five files in the ./configs folder, one for each router. Here we see the new file structure and the resulting configuration files generated from our playbook.

# after running the playbook
.
├── configs
│   ├── chi-router1.cfg
│   ├── dal-router1.cfg
│   ├── min-router1.cfg
│   ├── new-router1.cfg
│   └── phe-router1.cfg
├── inventory.yaml
└── playbook.yaml
# chi-router1.cfg
hostname chi-router1
ip name-server 1.1.2.2
logging host 1.1.1.2
ntp server 1.1.1.1
tacacs-server host 4.4.4.4 key mysupersecretkey
{ other tacacs config omitted for brevity... }
# min-router1.cfg
hostname min-router1
ip name-server 1.1.2.1
logging host 1.1.1.4
ntp server 1.1.1.1
tacacs-server host 4.4.4.4 key mysupersecretkey
{ other tacacs config omitted for brevity... }
# dal-router1.cfg
hostname dal-router1
ip name-server 1.1.2.1
logging host 1.1.1.3
ntp server 1.1.1.1
tacacs-server host 4.4.4.4 key mysupersecretkey
{ other tacacs config omitted for brevity... }
# phe-router1.cfg
hostname phe-router1
ip name-server 3.3.4.3
logging host 3.3.3.4
ntp server 3.3.3.3
tacacs-server host 4.4.4.4 key mysupersecretkey
{ other tacacs config omitted for brevity... }
# new-router1.cfg
hostname new-router1
ip name-server 2.2.3.2
logging host 2.2.2.3
ntp server 2.2.2.2
tacacs-server host 4.4.4.4 key mysupersecretkey
{ other tacacs config omitted for brevity... }

In the results we can see that each router has inherited its dns_server and ntp_server values from its regional variables and its syslog_server from its site, and that chicago has overridden the regional dns_server with the site-specific value. The aaa_server variable is shared across all regions because it is assigned at the regions group level. We could override this for a specific host or site by assigning the same aaa_server variable lower in the hierarchy; or, if we had the datacenters group we mentioned at the beginning of the post, we could give those devices a different AAA server by setting the variable there, as sketched below. Also, note how we didn't have to modify the variable calls in the Jinja2 template at all, because Ansible handles which values get applied via the variable merge process we mentioned above. This keeps templates simple and clean, so they don't require a lot of if/else logic to say "if it's a device in this region, apply dns server X, but if it's in another region apply dns server Y".
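
For example, a hypothetical datacenters group with its own AAA server could sit alongside regions in the inventory (the group name, host, and address are illustrative):

# inventory snippet (hypothetical)
all:
  children:
    regions:
      vars:
        aaa_server: 4.4.4.4
      # { regions subtree as above... }
    datacenters:
      vars:
        aaa_server: 5.5.5.5
      hosts:
        dc-router1: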


Conclusion

This was a very simple example of variable inheritance, but it should give you a taste of what is possible and how you might apply it to your own network or environment. You can combine variables and inheritance trees within Ansible host groups in countless ways to fit individual or organizational needs. We hope you have enjoyed this series and found it helpful. Until next time.

-Zach


