Introducing Nautobot SSoT for Device42


In the rapidly evolving world of network automation, the need for efficient management and seamless integration between different tools has become paramount. Device42 and Nautobot are two powerful applications that, when combined, offer enhanced capabilities for network engineers. We’re excited to announce the latest addition to our portfolio of Nautobot apps, the Single Source of Truth (SSoT) App for Device42! In this post, we’ll explore the capabilities of Device42 and how Nautobot SSoT for Device42 can augment its functionality, revolutionizing the way network infrastructure is managed and automated.

To learn more about Nautobot SSoT Apps (aka plugins), please see this blog post.

Nautobot SSoT for Device42 Overview

Device42 is a comprehensive infrastructure management platform that provides a holistic view of your network environment. It offers a range of features and capabilities designed to simplify infrastructure management and streamline the automation process. Some key capabilities of Device42 include:

Discovery and Inventory: Device42 automates the discovery and inventory process, allowing you to gain a complete understanding of your network infrastructure. It automatically discovers devices and their configurations and associated dependencies, providing accurate and up-to-date information. It is able to utilize multiple discovery mechanisms from Active Directory synchronization to SNMP polling. It also has the capability to integrate with other orchestration and ITSM products, such as Cisco ACI, Cisco UCS, and ServiceNow.

IP Address Management (IPAM): Device42 offers robust IPAM capabilities, enabling efficient management of IP addresses, subnets, VLANs, and DHCP/DNS services. It helps eliminate IP conflicts, simplifies subnet allocation, and improves overall network efficiency.

Dependency Mapping: Device42 maps dependencies between devices and applications, providing a clear understanding of the relationships and interactions within your infrastructure. This mapping enables better troubleshooting, change management, and capacity planning.

Rack and Cable Management: Device42 provides tools for managing rack layouts, cable connections, and data center infrastructure. This feature ensures accurate documentation of physical connections, simplifies troubleshooting, and aids in effective capacity planning.

With the wealth of information that Device42 contains, it serves as the perfect System of Record for network inventory and address allocations. With Nautobot acting as your Single Source of Truth, pulling network data from Device42 enables a continuously accurate inventory and reflection of the network state.

The Nautobot SSoT for Device42 application focuses on ingesting all network devices and related data from Device42. A full list of the supported objects is found in the following section. When utilized with Nautobot and its library of Apps, it enables multiple unique use cases, from utilizing Nautobot ChatOps framework to get information about your network to doing configuration compliance with the Nautobot Golden Config App.

Nautobot SSoT for Device42 Capabilities

The SSoT App for Device42 currently synchronizes the following objects:

Device42 → Nautobot

  • Building → Site
  • Room → RackGroup
  • Rack → Rack
  • Vendor → Manufacturer
  • Hardware Model → DeviceType
  • Device → Device
  • Cluster → Virtual Chassis
  • Port → Interface
  • VRF Group → VRF
  • Subnet → Prefix
  • IP Address → IP Address
  • VLAN → VLAN
  • Provider → Provider
  • Telco Circuit → Circuit
  • Patch Panel → Device

Custom Fields and Tags are both supported on all applicable models from Device42 into Nautobot.

Currently the synchronization is one-way from Device42 to Nautobot. While it is possible to push data from Nautobot to Device42, the focus for this release was to only pull data from Device42 in order to have an accurate reflection of the network state.

Building

The Device42 SSoT App will pull in all location-related data for Building objects including the following:

  • Building name
  • Building address
  • Facility*
  • Latitude
  • Longitude
  • Contact name and phone

The facility field will be imported into Nautobot only if the device42_facility_prepend application setting is defined and a corresponding Tag is applied to the Building in Device42. The Site status can be updated by specifying the name of the Status in the device42_defaults["site_status"] option. The default setting is Active.
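
As a quick illustration (the prepend string and Tag name here are made up), if device42_facility_prepend were set to "sitecode-", a Building in Device42 tagged sitecode-dfw01 would have its facility imported as dfw01:

"device42_facility_prepend": "sitecode-",        # Tag "sitecode-dfw01" -> facility "dfw01"
"device42_defaults": {"site_status": "Active"},  # default Status applied to imported Sites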

Room

The Device42 SSoT App treats Room as a child of a Building and requires that the Building be defined for the Room and that the Room name be unique. It will import the Room name and any notes applied to the Room.

Rack

The Device42 SSoT App treats Rack as a child of a Room and requires that the Building and Room are defined for the Rack. The name of the Rack must also be unique.

The Rack status can be updated by specifying the name of the Status in the device42_defaults["rack_status"] option.

Vendor

The SSoT App imports all entries for Vendor, and each must have a unique name.

Hardware Model

Hardware Model objects are required to have a Vendor specified for the Device42 SSoT import.

Device

The SSoT App imports all Devices marked as a network device, i.e., with the is_switch field set to True. It imports the following data for network Devices:

  • Device name
  • Building
  • Room
  • Rack
  • Rack position and orientation
  • Hardware model
  • Operating System
  • Operating System version
  • In Service
  • Serial number

The Device role can be updated by specifying the name of the Role in the device42_defaults["device_role"] option. The default is Unknown.

For a network device to be imported by the SSoT App, it must have the following defined:

  • Device name
  • Building (or Customer)*
  • Hardware model

Customer is used if the device42_customer_is_facility setting is True. It also requires that the device42_facility_prepend setting be defined for the imported Building.

In addition to the standard import, there are multiple application settings that can influence how Device objects are synchronized and/or modify the data as it is imported. Those settings are detailed below:

role_prepend: This setting is used to derive the Role for a Device from a Tag. It defines the string that is prepended to the role name in the Tag. For example, if role_prepend is set to nautobot- and a Tag nautobot-core_router is found, then the role core_router would be assigned to all Device objects with that Tag.

ignore_tag: ignore_tag defines a Tag that, if found, will have the Device skipped from the import. This is helpful if there are specific Device objects you wish to exclude from the synchronization process.

hostname_mapping: This option enables a Device to be assigned to a Site based upon its hostname. The value is expected to be a list of dictionaries, where each key is a regex used to match the hostname and each value is the slug of the Site to assign to the Device. This option takes precedence over the device42_customer_is_facility determination of a Device’s Site, with the Building denoted in Device42 used as a last resort.

delete_on_sync: This option controls whether objects are deleted from Nautobot when they are missing from a sync. Leaving it set to False prevents those deletions, which is useful in situations where the data in Device42 is inaccurate and you want to control what is removed automatically.
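
To make these options more concrete, a hypothetical combination of the Device-related settings in nautobot_config.py might look like the following (the tag names, regex, and Site slug are illustrative only):

PLUGINS_CONFIG = {
    "nautobot_ssot": {
        # ... connection settings and other Device42 options ...
        "device42_role_prepend": "nautobot-",                 # Tag "nautobot-core_router" -> role "core_router"
        "device42_ignore_tag": "no-sync",                     # Devices tagged "no-sync" are skipped
        "device42_hostname_mapping": [{"^dfw.+": "dfw01"}],   # hostnames starting with "dfw" assigned to Site "dfw01"
        "device42_delete_on_sync": False,                     # objects missing from a sync are not deleted
    }
}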

Cluster

The Device42 SSoT application treats all clustered networking Devices as stacked devices, importing them into Nautobot as a Virtual Chassis. Due to the way that Nautobot stores this data, an additional “master” Device is created for each cluster to act as the control plane for all data specific to the cluster. The master device is placed in the first position in the Virtual Chassis, with each cluster member added in subsequent order based on the order indicator in its name; e.g., a Device with Switch 1 in its hostname will be in position 2. All Ports discovered as assigned to the Cluster will be created on the master Device, while the member stacks are assigned their Port objects as shown in Device42. The operating system and version for the cluster are duplicated from the first cluster member. Otherwise, Cluster objects are treated the same as Device objects.

Port

The SSoT App imports all information about a Device’s Port objects that are provided including:

  • Port name
  • Enabled
  • MTU
  • Description
  • MAC Address
  • Port Type
  • Port Mode
  • Port Speed
  • Port Status
  • Tagged/Untagged VLANs

For a Port to be imported into Nautobot, it must have an assigned Device and a unique name. The SSoT App attempts to determine the Port’s speed from the interface name or from the speed discovered by Device42.
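
As a rough illustration of the idea (not the App’s actual code), speed inference from an interface name might look something like the following, falling back to the discovered value when the name is not recognized:

# Illustrative sketch only; the Device42 SSoT App's real logic may differ.
NAME_SPEED_MAP = {
    "fastethernet": 100000,            # speeds in kbps
    "gigabitethernet": 1000000,
    "tengigabitethernet": 10000000,
}


def guess_port_speed(intf_name: str, discovered_speed: int = 0) -> int:
    """Guess a Port speed from its name, else fall back to what Device42 discovered."""
    for prefix, speed in NAME_SPEED_MAP.items():
        if intf_name.lower().startswith(prefix):
            return speed
    return discovered_speed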

VRF Group

All VRF Group objects in Device42 will be imported as long as names are unique. It will pull in the description on the VRF Group, if defined.

Subnet

All Subnet objects in Device42 will be synchronized to Nautobot. This includes any defined description and associated VRF Group.

IP Address

All IP Address objects documented in Device42 are synchronized and associated to Device interfaces as discovered. The following information about an IP Address is imported:

  • Address
  • Availability
  • Label
  • VRF Group

If the IP Address is found on a Port with Management in the name, e.g., mgmt or Management, it will be assigned as the primary IP for the associated Device. This behavior can be modified with the device42_use_dns setting. Enabling this setting will have the import process perform a DNS query for all Device hostnames that follow a standard fully qualified domain name pattern, e.g., router.company.tld. If a DNS record is found for the hostname, the answered address will be set as the primary IP address for the Device. If the associated Port cannot be determined, a new Management port will be created and associated to the Device.
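
The DNS-driven behavior is roughly equivalent to the following sketch (an illustration of the approach, not the App’s implementation):

# Illustrative sketch of the device42_use_dns behavior.
import re
import socket
from typing import Optional

FQDN_RE = re.compile(r"^[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$")  # e.g., router.company.tld


def primary_ip_from_dns(hostname: str) -> Optional[str]:
    """Return the resolved address for an FQDN-looking hostname, or None if not found."""
    if not FQDN_RE.match(hostname):
        return None
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None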

VLAN

Due to the way that SNMP auto-discovery can return data about VLANs, the Device42 SSoT application handles VLAN objects in a special manner. It is advised to clean up all VLAN records in Device42 and ensure there is only a single VLAN ID per Building with the correct name. The SSoT App will load the first non-zero VLAN ID and VLAN name that it finds, along with associated Building or Customer. The first VLAN ID shown assigned to a Port will be the one added to the Port.

Provider

Any Provider that is attached to a Circuit will be synchronized to Nautobot. The following data about a Provider will be imported:

  • Provider Name
  • Provider Notes
  • Provider URL
  • Provider Account
  • Provider Contacts

Telco Circuit

The SSoT App will attempt to import all Telco Circuit objects from Device42 into Nautobot. The following information will be imported:

  • Circuit ID
  • Circuit Provider
  • Notes
  • Status of Circuit
  • Install Date
  • Bandwidth

Patch Panel

Patch Panel objects are treated like simple Devices with identical front and rear ports for the passthrough.

Using the App

The instructions for installing and configuring the Device42 SSoT integration are detailed in the SSoT project’s documentation. At this time, the App supports only a single instance of Device42 to import data from. The connection and authentication information is defined in the PLUGINS and PLUGINS_CONFIG section of the nautobot_config.py file under the nautobot_ssot key, as shown below:

PLUGINS = ["nautobot_ssot"]

PLUGINS_CONFIG = {
  "nautobot_ssot": {
    "enable_device42": True,
    "device42_host": os.getenv("NAUTOBOT_SSOT_DEVICE42_HOST", ""),
    "device42_username": os.getenv("NAUTOBOT_SSOT_DEVICE42_USERNAME", ""),
    "device42_password": os.getenv("NAUTOBOT_SSOT_DEVICE42_PASSWORD", ""),
    "device42_verify_ssl": False,
    "device42_defaults": {
        "site_status": "Active",
        "rack_status": "Active",
        "device_role": "Unknown",
    },
    "device42_delete_on_sync": False,
    "device42_use_dns": False,
    "device42_customer_is_facility": False,
    "device42_facility_prepend": "",
    "device42_role_prepend": "",
    "device42_ignore_tag": "",
    "device42_hostname_mapping": []
  }
}

Each setting is explained, with the specific usage, in the README.md and in the object notes above.

Running the App

Once the Device42 SSoT App is configured and initialized, it can be accessed by navigating to the Plugins menu, going to the Single Source of Truth section, and clicking on Dashboard. This should show you all of your Single Source of Truth–specific Apps you have installed.

From here you can get more information about the Device42 SSoT App by clicking on the Device42 Data Source link. This will show you the mapping of Device42 import objects to Nautobot objects along with configuration details and Sync history.

Clicking on the Sync Now button will take you to a Job form to start the synchronization.

If you wish to just test the synchronization but not have any data created in Nautobot, you’ll want to select the Dry run toggle. Clicking the Debug toggle will enable more verbose logging to inform you of what is occurring behind the scenes. Finally, the Bulk import option will enable bulk create and update operations to be used when the synchronization is complete. This can improve performance times for the App by skipping validation of the imported data. Be aware that this could cause bad data to be pushed into Nautobot.

Once the Job has been started, you will be shown the Job Result screen. This screen will show log entries as the Job performs the synchronization, along with SSoT Sync Details, Export, and Delete buttons.

Once the synchronization has completed, the SSoT Sync Details page will display all of the relevant information about the Job, including execution times for the various phases; the number of objects that were created, updated, or deleted or that had failures or errors; and a diff of the data loaded from Device42 and Nautobot indicating what would be synchronized from Device42.

Special Integrations

The Device42 SSoT App has been written to support integration with the Device Lifecycle Management App. If the Device Lifecycle Management App is found to be installed in the same environment as the Device42 SSoT App, a new Software object will be created for each Operating System version that is imported from Device42. That Software will then be assigned to the specific Device to enable easy tracking of the Software versions in use for your fleet.


Conclusion

While the Device42 SSoT App is currently set up only to import data from Device42 into Nautobot, it is perfectly capable of also pushing data from Nautobot to Device42. We’d love to hear your use cases for this data flow and how you utilize the Device42 SSoT App! Please let us know in the comments, or hit us up on Slack.

If you are curious about building your own Single Source of Truth App with the SSoT framework, you can find more details in the Building a Nautobot SSoT App and the Advanced Options for Building a Nautobot SSoT App blog posts. You can also find out more information about Device42 at the Device42 homepage.

-Justin



Managing Your Nautobot Environment with Poetry


As we’ve written previously, Poetry is the preferred method of managing Python projects, storing project metadata and dependencies in the pyproject.toml file (in the same spirit as PEP 621). The intention behind this format is to keep project metadata and related dependency management concise and contained in a single file. As Nautobot and all of its Apps are Django-based applications written in Python, Poetry is the perfect solution for managing your Nautobot environment. This article will explain the various options Poetry provides for making it easier to manage and develop with Nautobot.

Managing Dependencies

The structure of your pyproject.toml file has been described in many other articles, so I won’t get into too much detail here. Below, I’ve provided an example of a new pyproject.toml for a Nautobot home lab that I’d like to manage using Poetry.

[tool.poetry]
name = "nautobot_homelab"
version = "0.1.0"
description = "Nautobot Home Lab Environment"
authors = ["Network to Code, LLC <info@networktocode.com>"]

[tool.poetry.dependencies]
python = "3.10"
nautobot = "1.5.5"
nautobot-capacity-metrics = "^2.0.0"
nautobot-homelab-plugin = {path = "plugins/homelab_plugin", develop = true}
nautobot-ssot = {git = "https://github.com/nautobot/nautobot-plugin-ssot.git", branch = "develop"}

[tool.poetry.dev-dependencies]
bandit = "*"
black = "*"
django-debug-toolbar = "*"
django-extensions = "*"
invoke = "*"
ipython = "*"
pydocstyle = "*"
pylint = "*"
pylint-django = "*"
pytest = "*"
requests_mock = "*"
yamllint = "*"
toml = "*"

As you can see, I’ve defined the versions to be used as Python 3.10 and Nautobot 1.5.5 for the environment. I’ve also included the Capacity Metrics App to enable use with my lab telemetry stack, an App called nautobot-homelab-plugin being locally developed, and finally the Single Source of Truth framework for use with the locally developed App. You’ll notice that those last two are defined using more than just the desired version. They were added into the Poetry environment using the local directory of the plugin being worked on or referencing a Git repository and branch where the code resides. The final group of dependencies are noted as dev-dependencies as they should only be used for development environments. This is where you’d put any packages that you wish to use while developing your Apps, such as code linters.

Local Path

The plugin added to the project via a local path was added by issuing the command poetry add --editable ./plugins/homelab_plugin at the command line. This works as long as Poetry finds a pyproject.toml file for that project in the specified folder. If found, it will include all documented dependencies when generating the project lockfile. This is extremely helpful when you are working in a local development environment and need to view your changes quickly. The --editable flag denotes that the path should be installed in develop mode so changes are loaded dynamically. This means that as you make changes to your App while developing it, you don’t have to rebuild the entire Python package for it to function, which makes it much easier and quicker to iterate on your App, as changes are immediately reflected in your environment.

Git Repository

If the code for your App resides in a Git repository, it’s typically best to reference the repository and branch where it’s found rather than cloning it locally. This is done by issuing the command poetry add git+https://github.com/nautobot/nautobot-plugin-ssot.git#develop at the command line. Using this method allows you to retain the version control inherent to Git while still developing your App and testing it in your environment. This is especially handy when you’re working on a patch for an open-source project like the Infoblox SSoT App. As you don’t have direct access to the code, you would need to fork the repository and point to that fork in your environment, as shown below. This enables you to test your fixes directly against your data and Nautobot before submitting a Pull Request back to the original repository.
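
For example, pointing your environment at a personal fork and working branch would look like this (the fork URL and branch name are placeholders):

poetry add git+https://github.com/<your-username>/nautobot-plugin-ssot.git#my-bugfix-branch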

Local Development

Once you’ve determined all of the appropriate dependencies for your Nautobot environment, you should execute poetry lock to generate the project lockfile. If you wish to use a local development environment, your next step would then be to issue poetry install to install all of those dependencies into the project virtual environment. This includes Nautobot and all of the dependencies you’ve defined in the pyproject.toml file. You will still be required to stand up either a PostgreSQL or MySQL database and a Redis server for full functionality. This can be quickly and easily accomplished using Docker containers. Putting your secrets in a creds.env file and all other environment variables in a development.env file while using the following Docker Compose file will enable local development with your Poetry environment:

---
version: "3.8"
services:
  postgres:
    image: "postgres:13-alpine"
    env_file:
      - "development.env"
      - "creds.env"
    ports:
      - "5432:5432"
    volumes:
      # - "./nautobot.sql:/tmp/nautobot.sql"
      - "postgres_data:/var/lib/postgresql/data"
  redis:
    image: "redis:6-alpine"
    command:
      - "sh"
      - "-c"  # this is to evaluate the $NAUTOBOT_REDIS_PASSWORD from the env
      - "redis-server --appendonly yes --requirepass $$NAUTOBOT_REDIS_PASSWORD"
    env_file:
      - "development.env"
      - "creds.env"
    ports:
      - "6379:6379"
volumes:
  postgres_data: {}
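
For reference, a minimal pair of environment files might look like the following. NAUTOBOT_REDIS_PASSWORD is the variable referenced in the Compose file above, and the POSTGRES_* variables are the standard ones expected by the postgres image; adjust the names and values to match your nautobot_config.py:

# development.env
POSTGRES_DB=nautobot
POSTGRES_USER=nautobot

# creds.env
POSTGRES_PASSWORD=changeme
NAUTOBOT_REDIS_PASSWORD=changeme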

Docker Development

If you are working with Nautobot in a container-based environment, such as part of a Kubernetes or Nomad cluster, it might make sense to have your entire environment inside Docker containers instead of having Nautobot in your Poetry environment. However, you can still utilize the Poetry lockfile generated from the earlier step to create your Nautobot containers. By passing the desired Python and Nautobot versions to the Dockerfile below, you can generate a development container with Nautobot and your App installed.

ARG NAUTOBOT_VERSION
ARG PYTHON_VER
FROM ghcr.io/nautobot/nautobot:${NAUTOBOT_VERSION}-py${PYTHON_VER} as nautobot-base

USER 0

RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get autoremove -y && \
    apt-get clean all && \
    rm -rf /var/lib/apt/lists/* && \
    pip --no-cache-dir install --upgrade pip wheel

FROM ghcr.io/nautobot/nautobot-dev:${NAUTOBOT_VERSION}-py${PYTHON_VER} as nautobot-dev

CMD ["nautobot-server", "runserver", "0.0.0.0:8080", "--insecure"]

RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get autoremove -y && \
    apt-get clean all && \
    rm -rf /var/lib/apt/lists/*

COPY ./pyproject.toml ./poetry.lock /source/
COPY ./plugins /source/plugins

# Install the Nautobot project to include Nautobot
RUN cd /source && \
    poetry install --no-interaction --no-ansi && \
    mkdir /tmp/dist && \
    poetry export --without-hashes -o /tmp/dist/requirements.txt

# -------------------------------------------------------------------------------------
# Install all included plugins
# -------------------------------------------------------------------------------------
RUN for plugin in /source/plugins/*; do \
        cd $plugin && \
        poetry build && \
        cp dist/*.whl /tmp/dist; \
    done

COPY ./jobs /opt/nautobot/jobs
COPY nautobot_config.py /opt/nautobot/nautobot_config.py

WORKDIR /source
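
Assuming this Dockerfile sits at the root of the project alongside pyproject.toml and poetry.lock, building the development image might look like this (the image tag is arbitrary):

docker build \
  --build-arg NAUTOBOT_VERSION=1.5.5 \
  --build-arg PYTHON_VER=3.10 \
  --target nautobot-dev \
  -t nautobot-homelab:dev .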

Having a self-contained environment for developing your App can be extremely helpful in ensuring that all variables are accounted for, can be easily reproduced by others, and will not impact the system you’re developing on. It also makes it easy to create a production environment by copying the appropriate files from your development container into the production container. This can also be utilized in a CI/CD pipeline for automated testing of your application.


Conclusion

Today we’ve gone over how you can use Poetry to manage your Nautobot environment. We’ve shown how you can have the Apps included in your environment by referencing a Git repository or simply referencing a local directory where the App resides. We’ve also seen how the environment that Poetry creates can be used for developing Nautobot Apps.

-Justin




Network Configuration Templating with Ansible – Part 3


In the first and second parts of this series we discussed extracting variables from your device configurations, building your host and group data structures with those variables, and then using that data along with Jinja2 templates to generate configurations programmatically. In this third part of the series, we will dive deeper into more advanced methods of manipulating the data output during generation by using two key features of Jinja2: filters and macros.

Filters

When using Jinja2 to generate configurations, you might at times find yourself wanting to convert a variable value to another format. This is useful in cases where you don’t necessarily want to document every possible variable for your configuration. One example of this would be using CIDR notation for an IP address variable. By using CIDR notation, you’re able to document not only the host address but also derive the network address, broadcast address, and associated netmask. Extracting that information from the CIDR address variable is where Jinja2 filters come into play. By using the ipaddr filter, which is built on top of the netaddr Python library, you’re able to pass the CIDR address through a specific filter to get the desired piece of data.

In order to utilize a filter such as ansible.utils.ipaddr, you pipe (using the | character) a value to your desired filter. You can chain together as many filters as you like as shown below:

# router1.yml
network: "192.168.1.0/24"
# template.j2
ip route 0.0.0.0 0.0.0.0 {{ network | ansible.utils.ipaddr("1") | ansible.utils.ipv4("address") }}

By using the CIDR address notation defined by the network variable and passing it through the ipaddr and ipv4 filters, we are able to obtain the gateway address and render the default route as shown:

ip route 0.0.0.0 0.0.0.0 192.168.1.1

This works because a Jinja2 filter is simply a Python function that accepts arguments and returns some value. The first step, ansible.utils.ipaddr("1"), takes the value of the network variable and finds the first IP address in the network, 192.168.1.1/24. Then the ansible.utils.ipv4("address") filter takes that value and extracts just the address, which strips the /24 and returns 192.168.1.1.

Now, you might be asking yourself what the use for something like this would be. Using templates like the one above allows you to make changes across your fleet while still taking into account variations in configurations. For example, you could write a playbook like the one below to set a new default gateway on devices:

# update_gateways.yaml
- name: Default Route Update Playbook
  hosts: all
  gather_facts: false

  tasks:
    - name: Update default gateway on inventory hosts
      cisco.ios.ios_config:
        backup: "yes"
        src: "./template.j2"
        save_when: "modified"
(base) {} ansible-playbook -i inventory update_gateways.yaml

PLAY [Default Route Update Playbook] ***********************************************************************************************************************

TASK [Update default gateway on inventory hosts] ***********************************************************************************************************************
changed: [router1]
changed: [router2]
changed: [router3]

PLAY RECAP ***********************************************************************************************************************
router1                    : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
router2                    : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
router3                    : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

To obtain the currently available filters included with Ansible, you’ll need to install them from the ansible.utils collection. This can be done by issuing ansible-galaxy collection install ansible.utils at the command line. In addition, as filters are simply Python functions, you are able to write your own for utilizing within your templates. This is extremely helpful when you have some complex piece of data that you wish to manipulate before inserting into a configuration. Using the example from above, we can write a function to perform the same and simplify the template:

#custom_filters.py
import netaddr

class FilterModule(object):
    def filters(self):
        return {
            "get_gateway_address": self.get_gateway_address
        }

    def get_gateway_address(self, network: str):
        network = netaddr.IPNetwork(network)
        return str(network.ip + 1)

As custom filters are just simple Python functions, adding them into the Jinja environment for Ansible requires some specific code. As you can see in the example above, there is a FilterModule class that Ansible looks for when adding custom filters. This class must have a filters method that returns a dictionary where each key is the name you want to use for your filter and each value is the function itself. There isn’t a requirement for the called method to reside in the FilterModule class, but putting it there helps prevent potential namespace conflicts.

In order to test this filter within Ansible, you can place the Python file containing the filter definition inside a folder called filter_plugins alongside your playbooks, utilizing it as shown in the diagram below:

(base) {} tree
.
├── filter_plugins
│   ├── custom_filters.py
├── group_vars
├── inventory
├── update_gateways.yaml
├── template.j2

You would then simply update the template line to use the filter like so:

ip route 0.0.0.0 0.0.0.0 {{ network | get_gateway_address }}
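
Before wiring the filter into a playbook run, you can also sanity-check the function directly in Python. A quick sketch, assuming it is executed from the directory containing filter_plugins:

# run from the playbook directory shown above
from filter_plugins.custom_filters import FilterModule

filters = FilterModule().filters()
assert filters["get_gateway_address"]("192.168.1.0/24") == "192.168.1.1"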

Once you’re confident it’s working as intended, you can bundle it alongside others in a collection for easy installation in other environments. If you’re curious about the included filters in Ansible, the source for them is available in their GitHub repo. The Jinja2 framework also includes a number of filters that can be found in their reference documentation.

Macros

Macros are the equivalent of functions in Jinja2. They can be used to store a single word or phrase, or even to do some processing and manipulation of your data using Jinja2 syntax rather than Python syntax. These are handy when you might not be comfortable with Python but still want to process your data in some manner. Continuing with the example from above, we could write a macro to wrap the combination of filters, like below:

{% macro get_gateway_ip(network) -%}
{{ network | ansible.utils.ipaddr("1") | ansible.utils.ipv4("address") -}}
{%- endmacro -%}

We would then need to update the template to call the macro by passing the variable value to the macro as an argument much like Python takes arguments:

ip route 0.0.0.0 0.0.0.0 {{ get_gateway_ip(network) }}

Notice how we pass in the network variable, which in the template file is equivalent to the CIDR notation 192.168.1.0/24. This value is then passed to the macro and acted upon by the filters, and the macro returns the IP address 192.168.1.1. The rendered result would be:

ip route 0.0.0.0 0.0.0.0 192.168.1.1

Another example would be to define the default interface configuration and expand that macro for your other port roles. Using the configuration information below, we can create a macro that covers the basics of an interface like so:

interfaces:
    - name: "GigabitEthernet0/1"
      duplex: "full"
      speed: 1000
      port_security: false
    - name: "GigabitEthernet0/2"
      duplex: "full"
      speed: 1000
      port_security: true

{% macro base_intf(intf) -%}
interface {{ intf["name"] }}
  duplex {{ intf["duplex"] }}
  speed {{ intf["speed"] }}
{%- endmacro -%}

We can then create another macro that extends the base_intf macro to add in the appropriate port security configuration like so:

{% macro secure_port(intf) -%}
{{ base_intf(intf) }}
  access-session port-control auto
  dot1x pae authenticator
{%- endmacro -%}

Now, when we want to generate the configuration we simply need to call the appropriate macro, like so:

# interfaces.j2
{% for intf in interfaces %}
{% if intf["port_security"] %}
{{ secure_port(intf) }}
{% else %}
{{ base_intf(intf) }}
{% endif %}
{% endfor %}

The above template would then render the following configuration using the interface information above:

interface GigabitEthernet0/1
  duplex full
  speed 1000
interface GigabitEthernet0/2
  duplex full
  speed 1000
  access-session port-control auto
  dot1x pae authenticator

As above, we can then utilize this template in a playbook to update the interfaces on our Devices like so:

# update_interfaces.yaml
- name: Update Interfaces Playbook
  hosts: all
  gather_facts: false

  tasks:
    - name: Update interfaces on inventory hosts
      cisco.ios.ios_config:
        backup: "yes"
        src: "./interfaces.j2"
        save_when: "modified"
(base) {} ansible-playbook -i inventory update_interfaces.yaml

PLAY [Update Interfaces Playbook] ***********************************************************************************************************************

TASK [Update interfaces on inventory hosts] ***********************************************************************************************************************
changed: [router1]
changed: [router3]
changed: [router2]

PLAY RECAP ***********************************************************************************************************************
router1                    : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
router2                    : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
router3                    : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

As you can see, using macros enables you to minimize duplicated code and build compartmentalized logic into your templates. As with Python functions, you can place these macros in a central file or repository and reference them in your templates using Jinja imports; a full treatment is outside the scope of this post, but a brief example is shown below, and more information can be found in the Jinja2 documentation.
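
For example, if the macros above were saved to a file named macros.j2 (a hypothetical name), importing them into another template would look like this:

{% from "macros.j2" import base_intf, secure_port %}
{{ secure_port(interfaces[1]) }}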


Conclusion

In this post we went over the basics of Jinja2 filters and macros and how they can be utilized to aid you in manipulating your data being inserted into your templates. In Part 4 of this series, we’ll go into how Ansible handles variable inheritance and how that can enable more advanced templates.

-Justin


