Getting Started with Python Network Libraries for Network Engineers – Part 2


In the first part of this series, we looked at Netmiko. In this post, we’ll look at NAPALM, another library available to address these challenges. We’ll cover the basics of NAPALM and how it can be used to interact with network devices, with a focus on data collection.

The platforms NAPALM currently supports are IOS, EOS, NXOS, and IOS-XR. See the support matrix for more detailed information on platform support and the back-end libraries.

NAPALM 101 – Demo

I will be using a Cisco DevNet Sandbox to demonstrate the basic setup and use of NAPALM in a network environment. The sandbox is free to use – there are shared and dedicated sandbox environments you can use to try out new technologies.

For this tutorial, I am connected via SSH to a VM that has SSH access to a Cisco IOS XR router. On the VM, install the NAPALM library.

pip install napalm

In the Python shell, you can directly connect to the router using a few lines of code. The driver is selected based on the platform of the device you are connecting to, in this instance “iosxr”. See the docs on supported devices.

>>> import napalm
>>>
>>> driver = napalm.get_network_driver("iosxr")
>>> device = driver(hostname="10.10.20.70", username="admin", password="admin", optional_args={"port": 2221})
>>> device.open()

Getters

The power of NAPALM is built on its getters: Python methods that return structured data in a normalized format. With the getters, you can retrieve information from a networking device and interact with it programmatically. Using Python’s built-in json library, you can make the returned data more readable. Below is an example using the get_interfaces_ip() getter.

>>> import json
>>> output = device.get_interfaces_ip()
>>> print(json.dumps(output, indent=4))
{
    "MgmtEth0/RP0/CPU0/0": {
        "ipv4": {
            "192.168.122.21": {
                "prefix_length": 24
            }
        },
        "ipv6": {}
    },
    "Wed": {
        "ipv6": {}
    },
    "GigabitEthernet0/0/0/4": {
        "ipv6": {}
    },
    "GigabitEthernet0/0/0/2": {
        "ipv6": {}
    },
    "GigabitEthernet0/0/0/3": {
        "ipv6": {}
    },
    "GigabitEthernet0/0/0/0": {
        "ipv6": {}
    },
    "GigabitEthernet0/0/0/1": {
        "ipv6": {}
    }
}
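Because the getters return plain Python dictionaries, the output is easy to post-process with standard Python. As a quick sketch (the helper function here is my own, not part of NAPALM), you could flatten the get_interfaces_ip() structure into a list of (interface, address) pairs:

```python
def flatten_interface_ips(interfaces_ip):
    """Flatten get_interfaces_ip() output into (interface, "address/prefix") tuples."""
    flat = []
    for interface, families in interfaces_ip.items():
        for family in ("ipv4", "ipv6"):
            for address, attrs in families.get(family, {}).items():
                flat.append((interface, f"{address}/{attrs['prefix_length']}"))
    return flat


# Sample shaped like the getter output above
sample = {
    "MgmtEth0/RP0/CPU0/0": {
        "ipv4": {"192.168.122.21": {"prefix_length": 24}},
        "ipv6": {},
    }
}
print(flatten_interface_ips(sample))  # [('MgmtEth0/RP0/CPU0/0', '192.168.122.21/24')]
```

Because the structure is normalized across platforms, the same helper works no matter which driver produced the data.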

After you are finished making changes or gathering information, don’t forget to close the connection.

>>> device.close()

There are many other useful getters, such as get_bgp_neighbors, get_arp_table, ping, and traceroute. These have been built and improved upon with community support. Information on contributing to the NAPALM library can be found here.

Additional Functionality

In addition to show command functionality, there is also support for configuration changes on network devices. For most supported platforms, there are methods to merge or replace config, and to compare changes before you commit. See the support matrix for platform support information.
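As a rough sketch of that workflow (the method names are from the NAPALM configuration API; the candidate file and its contents are hypothetical):

```
>>> device.load_merge_candidate(filename="candidate.cfg")
>>> print(device.compare_config())   # review the diff before applying
>>> device.commit_config()           # or device.discard_config() to abandon
```

For a full replace rather than a merge, load_replace_candidate() follows the same pattern.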

Extending Support

The final item I’d like to touch on is the extensibility of NAPALM. If there is a method that exists but does not return data in the structure you need, you can extend the driver. Extending a NAPALM driver allows you to write custom methods in Python to enhance your structured data response.
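As a minimal sketch of what that can look like (the class name and getter here are hypothetical, and the exact import path may differ by NAPALM version):

```
from napalm.ios.ios import IOSDriver

class CustomIOSDriver(IOSDriver):
    """IOS driver extended with a locally defined getter."""

    def get_my_banner(self):
        # Parse raw CLI output into whatever structure your tooling needs
        raw = self._send_command("show banner motd")
        return {"banner": raw.strip()}
```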

Outside of the main NAPALM library, there is community support for additional drivers, such as the NAPALM PANOS driver in the NAPALM community GitHub.


Conclusion

NAPALM is a robust tool for network automation, and benefits from active open-source contributions. Take a look at the GitHub for additional information on support, contributing, and to see what’s happening in the community.

-Susan

New to Python libraries? NTC’s Training Academy is holding a 3-day course Automating Networks with Python I on September 26-28, 2022 with 50% labs to get you up to speed.
Visit our 2022 public course schedule to see our full list.




Getting Started with Python Network Libraries for Network Engineers – Part 1


This blog post will be the first in a series covering common Python libraries that can be used to interact with network devices. In this post we will cover the Netmiko Python library by Kirk Byers. Netmiko is based on the Paramiko Python library, but whereas Paramiko was designed to interact with standard OpenSSH devices (like Linux), Netmiko was designed to interact with network devices. It has a large number of supported platforms included for connecting via SSH, and it can also accommodate limited Telnet or serial connections as well as Secure Copy (SCP) for file transfers.

Installation

You can install Netmiko via pip install netmiko. Or, if you are using Poetry, you can use poetry add netmiko.

Getting Connected

Note: We will only be covering connecting via SSH in this blog post.

Like all SSH connections, Netmiko requires a hostname (IP or DNS name) and an authentication method (generally username and password) to get connected. In addition to this, you will also need to specify the device type you will be connecting to.

>>> from netmiko import ConnectHandler
>>> 
>>> conn = ConnectHandler(
...     host="192.0.2.3",
...     username="cisco",
...     password="cisco",
...     device_type="cisco_ios"
... )

There are two ways of determining the device type: looking it up in a list or having Netmiko try to detect the device type automatically. You can see the list of current device types by digging into the code on GitHub, specifically the CLASS_MAPPER_BASE dictionary in the ssh_dispatcher.py file. If, however, you aren’t exactly sure which device type you need to choose, you can use the SSHDetect class to have Netmiko help:

>>> from netmiko import ConnectHandler, SSHDetect
>>> 
>>> detect = SSHDetect(
...     host="192.0.2.3",
...     username="cisco",
...     password="cisco",
...     device_type="autodetect"  # Note specifically passing 'autodetect' here is required
... )
>>> detect.autodetect()  # This method returns the most likely device type
'cisco_ios'
>>> detect.potential_matches  # You can also see all the potential device types and their corresponding accuracy rating
{'cisco_ios': 99}
>>> conn = ConnectHandler(
...     host="192.0.2.3",
...     username="cisco",
...     password="cisco",
...     device_type=detect.autodetect()
... )
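If you are scripting against many devices, you may want to fall back gracefully when autodetection is inconclusive. Here is a small helper (my own, not part of Netmiko) that works on the potential_matches dictionary shown above:

```python
def best_device_type(potential_matches, threshold=50):
    """Return the highest-scoring device type, or None if nothing clears the threshold."""
    if not potential_matches:
        return None
    device_type, score = max(potential_matches.items(), key=lambda kv: kv[1])
    return device_type if score >= threshold else None


print(best_device_type({"cisco_ios": 99}))       # cisco_ios
print(best_device_type({"cisco_ios": 30}, 50))   # None, confidence too low
```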

Common Methods

Once you have instantiated your ConnectHandler object, you can send single show commands via the .send_command() method. You will use the same command syntax you would type in if you were directly connected to the device via SSH:

>>> output = conn.send_command("show ip int br")
>>> print(output)
Interface              IP-Address      OK? Method Status                Protocol
FastEthernet0          unassigned      YES NVRAM  down                  down
GigabitEthernet1/0/1   unassigned      YES unset  up                    up
GigabitEthernet1/0/2   unassigned      YES unset  up                    up
GigabitEthernet1/0/3   unassigned      YES unset  up                    up
...

Note: You can send multiple show commands back-to-back with the .send_multiline(["command1", "command2"]) method.

If you want to run a command to edit the configuration, you would use the .send_config_set() method instead. This method takes care of entering and exiting configuration mode for you, and it requires the commands be in a list or set:

>>> output = conn.send_config_set(("interface Gi1/0/3", "no description"))
>>> print(output)
configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
cisco(config)#interface Gi1/0/3
cisco(config-if)#no description
cisco(config-if)#end
cisco#

And since we are good network engineers, we know we should always save our configuration after making changes with the .save_config() method:

>>> output = conn.save_config()
>>> print(output)
write mem
Building configuration...
[OK]
cisco#
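When you are finished with the device, close the SSH session with the .disconnect() method:

```
>>> conn.disconnect()
```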

Command Output Parsing

We’ve covered parsing strategies here on this blog before, including the three parsers Netmiko currently supports: TextFSM, TTP, and Genie. Let’s take a quick look at how to use them with Netmiko.

TextFSM

If you are just starting out, the easiest way to get structured output data would be to use the included TextFSM parser. By default, Netmiko includes the TextFSM library for the parsing as well as NTC Templates to use as the default templates. To get structured output, simply add use_textfsm=True to the parameters of the .send_command() method:

>>> from pprint import pprint
>>> output = conn.send_command("show interfaces", use_textfsm=True)
>>> pprint(output)
[{'abort': '',
  'address': '381c.1ae6.cd81',
  'bandwidth': '100000 Kbit',
  'bia': '381c.1ae6.cd81',
  'crc': '0',
  'delay': '100 usec',
...

You can see the template used for this command here.
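Once the output is structured, filtering it is a one-liner. A sketch with sample records shaped like the parsed output (the key names are assumptions based on the NTC template):

```python
# Sample records shaped like TextFSM-parsed "show interfaces" output
records = [
    {"interface": "GigabitEthernet1/0/1", "link_status": "up"},
    {"interface": "GigabitEthernet1/0/2", "link_status": "down"},
    {"interface": "GigabitEthernet1/0/3", "link_status": "up"},
]

# Keep only the interfaces that are up
up_interfaces = [r["interface"] for r in records if r["link_status"] == "up"]
print(up_interfaces)  # ['GigabitEthernet1/0/1', 'GigabitEthernet1/0/3']
```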

TTP

Netmiko also supports the TTP parsing library, but you will need to install it via pip install ttp or poetry add ttp first. It also does not currently include any templates, so you will need to find or create your own and then provide the path to those templates when you send your command.

Creating TTP templates yourself is definitely more of a manual process, but it gives you the freedom to pare down to only the information that you need. For example, if you just need the interface name, status, and description you can have a template like so:

{{ interface }} is {{ link_status }}, line protocol is {{ protocol_status }} {{ ignore }}
  Description: {{ description }}

And then you would reference the template path using ttp_template:

>>> output = conn.send_command("show interfaces", use_ttp=True, ttp_template="templates/show_interfaces.ttp")
>>> pprint(output)
[[[{'description': 'CAM1',
    'interface': 'GigabitEthernet1/0/1',
    'link_status': 'up',
    'protocol_status': 'up'},
   {'description': 'CAM2',
    'interface': 'GigabitEthernet1/0/2',
    'link_status': 'up',
    'protocol_status': 'up'},
...

Genie

The last parser that Netmiko currently supports is Genie. Netmiko does not install Genie or its required library, pyATS, by default, so you will need to install them separately via pip install 'pyats[library]' or poetry add 'pyats[library]'. Once they are installed, enabling the Genie parser is again very similar:

>>> output = conn.send_command("show interfaces", use_genie=True)
>>> pprint(output)
{'GigabitEthernet1/0/1': {'arp_timeout': '04:00:00',
                          'arp_type': 'arpa',
                          'bandwidth': 100000,
                          'connected': True,
                          'counters': {'in_broadcast_pkts': 41240,
...

Note: Genie does not support custom templates.


Conclusion

As you can see, it doesn’t take much to get started with Netmiko. If you’d like to learn more advanced interactions with Netmiko, such as transferring files via SCP, connecting via SSH keys, or even handling commands that prompt for additional input, the best place to start would be the Netmiko Examples page in the GitHub repository. Another great resource is the #netmiko channel in the Network to Code Slack.

-Joe





How to Monitor Your VPN Infrastructure with Netmiko, NTC-Templates, and a Time Series Database


With many people being asked to work from home, we have heard from several customers looking to enhance the visibility and monitoring of their VPN infrastructure. In this post I will show you how you can quickly collect information from your Cisco ASA firewall by leveraging Netmiko and NTC Templates (TextFSM), combined with Telegraf, Prometheus, and Grafana. The approach would work on other network devices, not just Cisco ASAs, but given the recent demand for ASA information, we will use it as our example.

Here is what the data flow will look like:

data_flow
  • Users will connect to the ASA for remote access VPN services
  • Python: Collects information from the device via CLI and gets structured data by using a new template to parse the CLI output. The result is presented via stdout in the Influx data format
  • Telegraf: Generic collector with multiple plugins to ingest data, which can send data on to many databases
    • INPUT: Execute the Python script every 60s and read the results from stdout
    • OUTPUT: Expose the data over HTTP in a format compatible with Prometheus
  • Prometheus: Time Series Database (TSDB). Collects the data from Telegraf over HTTP, stores it, and exposes an API to query the data
  • Grafana: Dashboarding solution that natively supports Prometheus as a query source

As an alternative to creating this Python script, you could have used the Telegraf SNMP plugin; an SNMP query would be quicker than SSH if you only want basic counts. In this post, though, you will see that you can get custom metrics into a monitoring solution without having to rely solely on SNMP.

Execution of Python

If you execute the Python script on its own, without Telegraf, this is what you would see:

$ python3 asa_anyconnect_to_telegraf.py --host 10.250.0.63
asa connected_users=1i,anyconnect_licenses=2i

This data will then get transformed by Telegraf into an output that is usable by Prometheus. It is possible to remove the requirement for Telegraf and have Python create the Prometheus metrics directly, but we wanted to keep the Python execution as simple as possible. To use the prometheus_client library, check out its GitHub page.

Python Script

In this post we have the following components being used:

  • Python:
    • Netmiko to SSH into an ASA, gather command output, and leverage the corresponding NTC Template
    • NTC Template which is a TextFSM template for parsing raw text output into structured data
  • Telegraf: Takes the output of the Python script as an input and translates it to Prometheus metrics as an output

Python Requirements

The Python script below requires the following to be set up beforehand:

  • ENV Variables for authentication into the ASA
    • ASA_USER: Username to log into the ASA
    • ASA_PASSWORD: Password to log into the ASA
    • ASA_SECRET (Optional): Enable password for the ASA, if left undefined will pick up the ASA_PASSWORD variable
  • Required Python Packages:
    • Netmiko: For SSH and parsing
    • Click: For argument handling – to get the hostname/IP address of the ASA
  • GitHub repository for NTC Templates, set up in one of two ways:
    • Cloned to the user home directory: cd ~ && git clone https://github.com/networktocode/ntc-templates.git
    • NET_TEXTFSM environment variable set: NET_TEXTFSM=/path/to/ntc-templates/templates/
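For example, you might export these in your shell before running the script (the values here are placeholders; substitute your own credentials and path):

```shell
# Placeholder values; replace with your own credentials and template path
export ASA_USER="admin"
export ASA_PASSWORD="s3cret"
export NET_TEXTFSM="$HOME/ntc-templates/templates/"
```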

The specific template used is the newer template for cisco asa show vpn-sessiondb anyconnect, introduced March 18, 2020.

Python Code

There are two functions used in this quick script:

"""
(c) 2020 Network to Code
Licensed under the Apache License, Version 2.0 (the "License").
You may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Python application to gather metrics from Cisco ASA firewall and export them as a metric for Telegraf
"""
from itertools import count
import os
import sys
import re
import click
from netmiko import ConnectHandler

def print_influx_metrics(data):
    """
    The print_influx_metrics function takes the data collected in a dictionary format and prints out
    each of the necessary components on a single line, which matches the Influx data format.

    Args:
        data (dictionary): Dictionary of the results to print out for influx
    """
    data_string = ""
    cnt = count()
    for measure, value in data.items():
        if next(cnt) > 0:
            data_string += ","
        data_string += f"{measure}={value}i"

    print(f"asa {data_string}")

    return True


def get_anyconnect_license_count(version_output):
    """
    Searches through the `show version` output to find all instances of the license and gets the
    output into integers to get a license count.

    Since there could be multiple ASAs in a cluster or HA pair, it is necessary to gather multiple data
    points for the license count that the ASAs are licensed for. This function uses regex to find all of
    the instances and returns the total count based on the `show version` command output.  

    Args:
        version_output (String): Output from Cisco ASA `show version`
    """
    pattern = r"AnyConnect\s+Premium\s+Peers\s+:\s+(\d+)"
    re_list = re.findall(pattern, version_output)

    total_licenses = 0
    for license_count in re_list:
        total_licenses += int(license_count)

    return total_licenses


# Add parsers for output of data types
@click.command()
@click.option("--host", required=True, help="Required - Host to connect to")
def main(host):
    """
    Main code execution
    """
    # Get ASA connection Information
    try:
        username = os.environ["ASA_USER"]
        password = os.environ["ASA_PASSWORD"]
        secret = os.getenv("ASA_SECRET", os.environ["ASA_PASSWORD"])
    except KeyError:
        print("Unable to find Username or Password in environment variables")
        print("Please verify that ASA_USER and ASA_PASSWORD are set")
        sys.exit(1)

    # Setup connection information and connect to host
    cisco_asa_device = {
        "host": host,
        "username": username,
        "password": password,
        "secret": secret,
        "device_type": "cisco_asa",
    }
    net_conn = ConnectHandler(**cisco_asa_device)

    # Get command output for data collection
    command = "show vpn-sessiondb anyconnect"
    command_output = net_conn.send_command(command, use_textfsm=True)

    # Check for no connected users
    if "INFO: There are presently no active sessions" in command_output:
        command_output = []

    # Get output of "show version"
    version_output = net_conn.send_command("show version")

    # Set data variable for output to Influx format
    data = {"connected_users": len(command_output), "anyconnect_licenses": get_anyconnect_license_count(version_output)}

    # Print out the metrics to standard out to be picked up by Telegraf
    print_influx_metrics(data)


if __name__ == "__main__":
    main()

Telegraf

Now that the data is being output via the stdout of the script, you will need to have an application read this data and transform it. This could be done in other ways as well, but Telegraf has this function built in already.

Telegraf will be set up to execute the Python script every minute. The result will then be transformed by the configured output plugin.

Telegraf Configuration

The configuration for this example is as follows:

# Global tags should be set to meaningful values for searching inside of a TSDB
[agent]
hostname = "demo"

[global_tags]
  device = "10.250.0.63"
  region = "midwest"

[[inputs.exec]]
  ## Interval is how often the execution should occur, here every 1 min (60 seconds)
  interval = "60s"
  # Commands to be executed in list format
  # To execute against multiple hosts, add multiple entries within the commands
  commands = [
      "python3 asa_anyconnect_to_telegraf.py --host 10.250.0.63"
  ]

  ## Timeout for each command to complete.
  # Tests in lab environment next to the device with local authentication has been 6 seconds
  timeout = "15s"

  ## Measurement name suffix (for separating different commands)
  name_suffix = "_parsed"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## More about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"

# Output to Prometheus Metrics format
# Define the listen port for which TCP port the web server will be listening on. Metrics will be
# available at "http://localhost:9222/metrics" in this instance.
# There are two versions of metrics and if `metric_version` is omitted then version 1 is used
[[outputs.prometheus_client]]
  listen = ":9222"
  metric_version = 2

Telegraf Output Example

Here is what the metrics will look like when exposed, without the default Telegraf information metrics.

# HELP asa_parsed_anyconnect_licenses Telegraf collected metric
# TYPE asa_parsed_anyconnect_licenses untyped
asa_parsed_anyconnect_licenses{device="10.250.0.63",host="demo",region="midwest"} 2
# HELP asa_parsed_connected_users Telegraf collected metric
# TYPE asa_parsed_connected_users untyped
asa_parsed_connected_users{device="10.250.0.63",host="demo",region="midwest"} 1

There are two metrics, anyconnect_licenses and connected_users, that will get scraped. In this example there are a total of 2 AnyConnect licenses available on the firewall, with a single user connected. This data can now be scraped by Prometheus to give insight into your ASA AnyConnect environment.

Prometheus Installation

There are several options for installing a Prometheus TSDB (Time Series Database), including:

  • Precompiled binaries for Windows, Mac, and Linux
  • Docker images
  • Building from source

To get more details on installation options, take a look at the Prometheus GitHub page.

Once installed, you can navigate to the Prometheus query page at http://<prometheus_host>:9090, where you will be presented with a search bar. This is where you can query for the metric you wish to graph; for example, start typing asa and Prometheus will offer autocomplete options in the search bar. Once you have selected what you wish to query, select Execute to get the current value. To see what the query looks like over time, select Graph (next to Console). Grafana will use the same query language when you add a graph.
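For example, using the metric names from the Telegraf output above (assuming they are exposed as shown), a PromQL query to express AnyConnect license utilization as a ratio could look like this:

```
asa_parsed_connected_users / asa_parsed_anyconnect_licenses
```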

Once up and running, add your Telegraf host to the scrape configuration, and Prometheus will start scraping the exposed metrics page and adding the associated metrics to its TSDB.

A good video tutorial for getting started with Prometheus queries on network equipment can be found on YouTube, from NANOG 77.

Grafana Installation

Grafana is the dashboarding component of choice in the open-source community. It can use several sources to create graphs, including modern TSDBs such as InfluxDB and Prometheus. With the latest release, Grafana can even use Google Sheets as a data source.

As you get going with Grafana there are pre-built dashboards available for download.

You will want to download Grafana to get started. There are several installation methods available on their download page, including options for:

  • Linux
  • Windows
  • Mac
  • Docker
  • ARM (Raspberry Pi)

The Prometheus website has an article that is helpful for getting started with your own Prometheus dashboards.

If the embedded video does not show up above, you can watch it on the Network to Code YouTube channel.

In the next post, you will see how to monitor websites and DNS queries, including how to alert using this technology stack.

-Josh


