Using the Python Requests Module to Work with REST APIs


In this post we’ll review one of the most widely used Python modules for interacting with web-based services such as REST APIs, the Python requests module. If you were ever wondering what magic is going on behind the scenes when running one of the thousands of Ansible networking modules, or many of the Python-based SDKs that are available from vendors, there’s a good chance the underlying operations are being performed by requests. The Python requests module is a utility that emulates the operations of a web browser using code. It enables programs to interact with a web-based service across the network, while abstracting and handling the lower-level details of opening up a TCP connection to the remote system. Like a web browser, the requests module allows you to programmatically:

  • Initiate HTTP requests such as GET, PUT, POST, PATCH, and DELETE
  • Set HTTP headers to be used in the outgoing request
  • Store and access the web server content in various forms (HTML, XML, JSON, etc.)
  • Store and access cookies
  • Utilize either HTTP or HTTPS

Retrieving Data

The most basic example of using requests is simply retrieving the contents of a web page using an HTTP GET:

import requests
response = requests.get('https://google.com')

The resulting response object contains the same HTML a browser would receive, stored in the text attribute and accessed with response.text.

<span role="button" tabindex="0" data-code=">>> response.text '<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head><meta content="Search the world\'s information, including webpages, images, videos and more. Google has many special features to help you find exactly what you\'re looking for." name="description"><meta content="noodp" name="robots">
>>> response.text
'<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head><meta content="Search the world\'s information, including webpages, images, videos and more. Google has many special features to help you find exactly what you\'re looking for." name="description"><meta content="noodp" name="robots"><meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
-- output omitted for brevity --

There are a lot of great utilities for parsing HTML, but in most cases we will not be doing that when working with networking vendor APIs. In the majority of cases, the data will come back structured as XML or JSON.

response = requests.get('https://nautobot.demo.networktocode.com/api')

>>> response.content
b'{"circuits":"https://nautobot.demo.networktocode.com/api/circuits/","dcim":"https://nautobot.demo.networktocode.com/api/dcim/","extras":"https://nautobot.demo.networktocode.com/api/extras/","graphql":"https://nautobot.demo.networktocode.com/api/graphql/","ipam":"https://nautobot.demo.networktocode.com/api/ipam/","plugins":"https://nautobot.demo.networktocode.com/api/plugins/","status":"https://nautobot.demo.networktocode.com/api/status/","tenancy":"https://nautobot.demo.networktocode.com/api/tenancy/","users":"https://nautobot.demo.networktocode.com/api/users/","virtualization":"https://nautobot.demo.networktocode.com/api/virtualization/"}'

Notice that the above output is in bytes format. This is indicated by the lowercase “b” in front of the response text. We could convert this into a string using response.content.decode() and then use the Python json module to load it into a Python dictionary. However, because json is one of the most common data formats, the requests module has a convenience method that will automatically convert the response from bytes to a Python dictionary. Simply call response.json():

<span role="button" tabindex="0" data-code=">>> response.json() {'circuits': 'https://nautobot.demo.networktocode.com/api/circuits/', 'dcim': 'https://nautobot.demo.networktocode.com/api/dcim/', 'extras': 'https://nautobot.demo.networktocode.com/api/extras/', 'graphql': 'https://nautobot.demo.networktocode.com/api/graphql/', 'ipam': 'https://nautobot.demo.networktocode.com/api/ipam/', 'plugins': 'https://nautobot.demo.networktocode.com/api/plugins/', 'status': 'https://nautobot.demo.networktocode.com/api/status/', 'tenancy': 'https://nautobot.demo.networktocode.com/api/tenancy/', 'users': 'https://nautobot.demo.networktocode.com/api/users/', 'virtualization': 'https://nautobot.demo.networktocode.com/api/virtualization/'} >>> type(response.json())
>>> response.json()
{'circuits': 'https://nautobot.demo.networktocode.com/api/circuits/', 'dcim': 'https://nautobot.demo.networktocode.com/api/dcim/', 'extras': 'https://nautobot.demo.networktocode.com/api/extras/', 'graphql': 'https://nautobot.demo.networktocode.com/api/graphql/', 'ipam': 'https://nautobot.demo.networktocode.com/api/ipam/', 'plugins': 'https://nautobot.demo.networktocode.com/api/plugins/', 'status': 'https://nautobot.demo.networktocode.com/api/status/', 'tenancy': 'https://nautobot.demo.networktocode.com/api/tenancy/', 'users': 'https://nautobot.demo.networktocode.com/api/users/', 'virtualization': 'https://nautobot.demo.networktocode.com/api/virtualization/'}

>>> type(response.json())
<class 'dict'>
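For reference, here is a minimal sketch of the manual approach described above, decoding the bytes and parsing them with the standard-library json module; it produces the same dictionary as response.json():

import json
import requests

response = requests.get('https://nautobot.demo.networktocode.com/api')

# Decode the raw bytes into a string, then parse the JSON text ourselves
data = json.loads(response.content.decode())

# Equivalent to calling response.json()
assert data == response.json()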

In some cases, we will have to specify the desired data format by setting the Accept header. For example:

headers = {'Accept': 'application/json'}
response = requests.get('https://nautobot.demo.networktocode.com/api', headers=headers)

In this example, we are informing the API that we would like the data to come back formatted as JSON. If the API provides the content as XML, we would specify the header as {'Accept': 'application/xml'}. The appropriate content type to request should be spelled out in the vendor API documentation. Many APIs use a default, so you may not need to specify the header. Nautobot happens to use a default of application/json, so it isn’t necessary to set the header. If you do not set the Accept header, you can find out the type of returned content by examining the Content-Type header in the response:

>>> response.headers['Content-Type']
'application/json'
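Putting those pieces together, a small sketch might branch on the returned Content-Type before deciding how to parse the body (this is just an illustration, not something Nautobot requires):

headers = {'Accept': 'application/json'}
response = requests.get('https://nautobot.demo.networktocode.com/api', headers=headers)

# Parse the body based on what the server says it returned
if 'application/json' in response.headers.get('Content-Type', ''):
    data = response.json()   # Python dictionary
else:
    data = response.text     # fall back to raw text (HTML, XML, etc.)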

Although we are using Nautobot for many of the examples of requests module usage, there is a very useful SDK called pynautobot that can handle a lot of the heavy lifting for you, so definitely check that out!

Authentication

Most APIs are protected by an authentication mechanism which can vary from product to product. The API documentation is your best resource in determining the method of authentication in use. We’ll review a few of the more common methods with examples below.

API Key

With API key authentication you typically must first access an administrative portal and generate an API key. Think of the API key the same way you would your administrative user ID and password: in some cases it will provide read/write administrative access to the entire system, so you want to protect it as such. That means don't store it in code or in a Git repository where it can be seen in clear text. Commonly, API keys are stored as environment variables and imported at run time, or are pulled from secrets managers such as HashiCorp Vault or Ansible Vault. Once an API key is generated, it will need to be included in some way with all requests. Next we'll describe a few common methods for including the API key in requests and provide example code.
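As a small illustration of the environment-variable approach (the variable name below is just an example), a script can fail fast if the key was never exported instead of sending unauthenticated requests:

import os

# Read the key from the environment; None is returned if the variable isn't set
api_key = os.environ.get('MY_API_KEY')

if api_key is None:
    raise SystemExit('MY_API_KEY is not set; export it before running this script.')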

Token in Authorization Header

One method that is used across a wide variety of APIs is to include the API key as a token in the Authorization header. A few examples of this are in the authentication methods for Nautobot and Cisco Webex. The two examples below are very similar, with the main difference being that Nautobot uses Token {token} in the Authorization header whereas Cisco Webex uses Bearer {token} in the Authorization header. Implementation of this is not standardized, so the API documentation should indicate what the format of the header should be.

Nautobot API

First, it is necessary to generate an API key from the Nautobot GUI. Sign into Nautobot and select your username in the upper right-hand corner, and then view your Profile. From the Profile view, select API Tokens and click the button to add a token. The token will then need to be specified in the Authorization header in all requests as shown below.

import requests
import os

# Get the API token from an environment variable
token = os.environ.get('NAUTOBOT_TOKEN')

# Add the Authorization header
headers = {'Authorization': f'Token {token}'}

# This is the base URL for all Nautobot API calls
base_url = 'https://nautobot.demo.networktocode.com/api'

# Get the list of devices from Nautobot using the requests module and passing in the authorization header defined above
response = requests.get('https://nautobot.demo.networktocode.com/api/dcim/devices/', headers=headers)

>>> response.json()
{'count': 511, 'next': 'https://nautobot.demo.networktocode.com/api/dcim/devices/?limit=50&offset=50', 'previous': None, 'results': [{'id': 'fd94038c-f09f-4389-a51b-ffa03e798676', 'url': 'https://nautobot.demo.networktocode.com/api/dcim/devices/fd94038c-f09f-4389-a51b-ffa03e798676/', 'name': 'ams01-edge-01', 'device_type': {'id': '774f7008-3a75-46a2-bc75-542205574cee', 'url': 'https://nautobot.demo.networktocode.com/api/dcim/device-types/774f7008-3a75-46a2-bc75-542205574cee/', 'manufacturer': {'id': 'e83e2d58-73e2-468b-8a86-0530dbf3dff9', 'url': 'https://nautobot.demo.networktocode.com/api/dcim/manufacturers/e83e2d58-73e2-468b-8a86-0530dbf3dff9/', 'name': 'Arista', 'slug': 'arista', 'display': 'Arista'}, 'model': 'DCS-7280CR2-60', 'slug': 'dcs-7280cr2-60', 'display': 'Arista DCS-7280CR2-60'}, 'device_role': {'id': 'bea7cc02-e254-4b7d-b871-6438d1aacb76', 'url': 'https://nautobot.demo.networktocode.com/api/dcim/device-roles/bea7cc02-e254-4b7d-b871-6438d1aacb76/'
--- OUTPUT TRUNCATED FOR BREVITY ---

Cisco Webex API

When working with the Webex API, a bot must be created to get an API key. First create a bot in the dashboard https://developer.webex.com/docs/. Upon creating the bot you are provided a token which is good for 100 years. The token should then be included in the Authorization header in all requests as shown below.

import requests
import os

# Get the API token from an environment variable
token = os.environ.get('WEBEX_TOKEN')

# Add the Authorization header
headers = {'Authorization': f'Bearer {token}'}

# This is the base URL for all Webex API calls
base_url = 'https://webexapis.com'

# Get list of rooms
response = requests.get(f'{base_url}/v1/rooms', headers=headers)

>>> response.json()
{'items': [{'id': 'Y2lzY29zcGFyazovL3VzL1JPT00vNjZlNmZjYTAtMjIxZS0xMWVjLTg2Y2YtMzk0NmQ2YTMzOWVi', 'title': 'nautobot-chatops', 'type': 'group', 'isLocked': False, 'lastActivity': '2021-10-22T19:37:38.091Z', 'creatorId': 'Y2lzY29zcGFyazovL3VzL1BFT1BMRS9iYmRiZDljNC1hMTRkLTQwMTYtYjVjZi1jOGExNzY0MWI1YWQ', 'created': '2021-09-30T18:44:11.242Z', 'ownerId': 'Y2lzY29zcGFyazovL3VzL09SR0FOSVpBVElPTi8zZjE3OTcwNi1mMTFhLTRhYjctYmEzZS01N2E0YTk2YjA4OWY'}, {'id': 'Y2lzY29zcGFyazovL3VzL1JPT00vNzBjZTgwYTAtMjIxMi0xMWVjLWEwMDAtZjcyZTAyM2Q2MDIx', 'title': 'Webex space for Matt', 'type': 'group', 'isLocked': False, 'lastActivity': '2021-09-30T17:18:33.898Z', 'creatorId': 'Y2lzY29zcGFyazovL3VzL1BFT1BMRS9iYmRiZDljNC1hMTRkLTQwMTYtYjVjZi1jOGExNzY0MWI1YWQ', 'created': '2021-09-30T17:18:33.898Z', 'ownerId': 'Y2lzY29zcGFyazovL3VzL09SR0FOSVpBVElPTi8zZjE3OTcwNi1mMTFhLTRhYjctYmEzZS01N2E0YTk2YjA4OWY'}, {'id': 'Y2lzY29zcGFyazovL3VzL1JPT00vOWIwN2FmMjYtYmQ4Ny0zYmYwLWI2YzQtNTdlNmY1OGQwN2E2', 'title': 'Jason Belk', 'type': 'direct', 'isLocked': False, 'lastActivity': '2021-01-26T19:53:01.306Z', 'creatorId': 'Y2lzY29zcGFyazovL3VzL1BFT1BMRS9jNzg2YjVmOC1hZTdjLTQyMzItYjRiNS1jNzQxYTU3MjU4MzQ', 'created': '2020-12-10T17:53:01.202Z'}, {'id': 'Y2lzY29zcGFyazovL3VzL1JPT00vNTYwNzhhNTAtMTNjMi0xMWViLWJiNjctMTNiODIxYWUyMjE1', 'title': 'NTC NSO Projects', 'type': 'group', 'isLocked': False, 'lastActivity': '2021-05-28T17:46:16.727Z', 'creatorId': 'Y2lzY29zcGFyazovL3VzL1BFT1BMR
--- OUTPUT TRUNCATED FOR BREVITY ---

Custom Token Header

Some APIs require that the API key be provided in a custom header that is included with all requests. The key and the format to use for the value should be spelled out in the API documentation.

Cisco Meraki

Cisco Meraki requires that all requests have an X-Cisco-Meraki-API-Key header with the API key as the value. As with the Token in Authorization Header method discussed previously, you must first generate an API key; this is done in the Meraki Dashboard under your profile settings. The key should then be specified in the X-Cisco-Meraki-API-Key header for all requests.

import requests
import os

# Get the API key from an environment variable
api_key = os.environ.get('MERAKI_API_KEY')

# The base URI for all requests
base_uri = "https://api.meraki.com/api/v0"

# Set the custom header to include the API key
headers = {'X-Cisco-Meraki-API-Key': api_key}

# Get a list of organizations
response = requests.get(f'{base_uri}/organizations', headers=headers)

>>> response.json()
[{'id': '681155', 'name': 'DeLab', 'url': 'https://n392.meraki.com/o/49Gm_c/manage/organization/overview'}, {'id': '575334852396583536', 'name': 'TNF - The Network Factory', 'url': 'https://n22.meraki.com/o/K5Faybw/manage/organization/overview'}, {'id': '573083052582914605', 'name': 'Jacks_test_net', 'url': 'https://n18.meraki.com/o/22Uqhas/manage/organization/overview'}, {'id': '549236', 'name': 'DevNet Sandbox', 'url': 'https://n149.meraki.com/o/-t35Mb/manage/organization/overview'}, {'id': '575334852396583264', 'name': 'My organization', 'url': 'https://n22.meraki.com/o/
--- OUTPUT TRUNCATED FOR BREVITY ---

HTTP Basic Authentication w/ Token

Some APIs require that you first issue an HTTP POST to a login url using HTTP Basic Authentication. A token that must be used on subsequent requests is then issued in the response. This type of authentication does not require going to an administrative portal first to generate the token; the token is automatically generated upon successful login.

HTTP Basic Authentication/Token – Cisco DNA Center

The Cisco DNA Center login process requires that a request first be sent to a login URL with HTTP Basic Authentication, and upon successful authentication issues a token in the response. The token must then be sent in an X-Auth-Token header in subsequent requests.

import requests
from requests.auth import HTTPBasicAuth
import os

username = os.environ.get('DNA_USERNAME')
password = os.environ.get('DNA_PASSWORD')

hostname = 'sandboxdnac2.cisco.com'

# Create an HTTPBasicAuth object that will be passed to requests
auth = HTTPBasicAuth(username, password)

# Define the login URL to get the token
login_url = f"https://{hostname}/dna/system/api/v1/auth/token"

# Issue a login request
response = requests.post(login_url, auth=auth)

# Parse the token from the response if the response was OK 
if response.ok:
    token = response.json()['Token']
else:
    print(f'HTTP Error {response.status_code}:{response.reason} occurred')

# Define the X-Auth-Token header to be used in subsequent requests
headers = {'X-Auth-Token': token}

# Define the url for getting network health information from DNA Center
url = f"https://{hostname}/dna/intent/api/v1/network-health"

# Retrieve network health information from DNA Center
response = requests.get(url, headers=headers, auth=auth)

>>> response.json()
{'version': '1.0', 'response': [{'time': '2021-10-22T19:40:00.000+0000', 'healthScore': 100, 'totalCount': 14, 'goodCount': 14, 'unmonCount': 0, 'fairCount': 0, 'badCount': 0, 'entity': None, 'timeinMillis': 1634931600000}], 'measuredBy': 'global', 'latestMeasuredByEntity': None, 'latestHealthScore': 100, 'monitoredDevices': 14, 'monitoredHealthyDevices': 14, 'monitoredUnHealthyDevices': 0, 'unMonitoredDevices': 0, 'healthDistirubution': [{'category': 'Access', 'totalCount': 2, 'healthScore': 100, 'goodPercentage': 100, 'badPercentage': 0, 'fairPercentage': 0, 'unmonPercentage': 0, 'goodCount': 2, 'badCount': 0, 'fairCount': 0, 'unmonCount': 0}, {'category': 'Distribution', 'totalCount': 1, 'healthScore': 100, 'good
--- OUTPUT TRUNCATED FOR BREVITY ---

POST with JSON Payload

With this method of authentication, the user must first issue a POST to a login URL and include a JSON (most common), XML, or other type of payload that contains the user credentials. A token that must be used with subsequent API requests is then returned. In some cases the token is returned as a cookie in the response. When that is the case, a shortcut is to use a requests.session object. By using a session object, the token in the cookie can easily be reused on subsequent requests by sourcing the requests from the session object. This is the strategy used in the Cisco ACI example below.

POST with JSON Payload – Cisco ACI

Cisco ACI requires a JSON payload to be posted to the /aaaLogin URL endpoint with the username/password included. The response includes a cookie with key APIC-cookie and a token in the value that can be used on subsequent requests.

import requests
import os

username = os.environ.get('USERNAME')
password = os.environ.get('PASSWORD')
hostname = 'sandboxapicdc.cisco.com'

# Build the JSON payload with userid/password
payload = {"aaaUser": {"attributes": {"name": username, "pwd" : password }}}

# Create a Session object
session = requests.session()

# Specify the login URL
login_url = f'https://{hostname}/api/aaaLogin.json'

# Issue the login request. The cookie will be stored in session.cookies. 
response = session.post(login_url, json=payload, verify=False)

# Use the session object to get ACI tenants
if response.ok:
    response = session.get(f'https://{hostname}/api/node/class/fvTenant.json', verify=False)
else:
    print(f"HTTP Error {response.status_code}:{response.reason} occurred.")

>>> response.json()
{'totalCount': '4', 'imdata': [{'fvTenant': {'attributes': {'annotation': '', 'childAction': '', 'descr': '', 'dn': 'uni/tn-common', 'extMngdBy': '', 'lcOwn': 'local', 'modTs': '2021-10-08T15:31:47.480+00:00', 'monPolDn': 'uni/tn-common/monepg-default', 'name': 'common', 'nameAlias': '', 'ownerKey': '', 'ownerTag': '', 'status': '', 'uid': '0', 'userdom': 'all'}}}, {'fvTenant': {'attributes': {'annotation': '', 'childAction': '', 'descr': '', 'dn': 'uni/tn-infra', 'extMngdBy': '', 'lcOwn': 'local', 'modTs': '2021-10-08T15:31:55.077+00:00', 'monPolDn': 'uni/tn-common/monepg-default', 'name': 'infra', 'nameAlias': '', 'ownerKey': '', 'ownerTag': '', 'status': '', 'uid': '0', 'userdom': 'all'}}},
--- OUTPUT TRUNCATED FOR BREVITY ---

Certificate Checking

Note the verify=False in the above example. This can be used to turn off certificate checking when the device or API you are targeting uses a self-signed or otherwise invalid SSL certificate. Doing so will cause a warning similar to the following to be generated:

InsecureRequestWarning: Unverified HTTPS request is being made to host 'sandboxapicdc.cisco.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings

For a production deployment, the proper solution is to install a valid SSL certificate rather than use verify=False. However, if you are dealing with lab devices that may never have a valid certificate, the warning can be disabled using the following snippet:

import urllib3

urllib3.disable_warnings()
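If you can obtain the certificate chain for the device or API, another option requests supports is pointing verify at a CA bundle file instead of disabling checking entirely. Reusing the names from the ACI example above (the bundle path is just a placeholder):

# Validate the server certificate against a local CA bundle instead of disabling verification
response = session.post(login_url, json=payload, verify='/path/to/ca-bundle.pem')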

Handling Errors

It is helpful when working with requests to understand HTTP status codes and some of the common triggers for them when working with APIs. HTTP status codes indicate the success or failure of a request, and when errors occur, can give a hint toward what the problem might be. Here are some common HTTP status codes that you might see when working with APIs and potential causes:

200 OK: The request was successful

201 Created: Indicates a POST or PUT request was successful

204 No Content: Typically indicates a successful DELETE request

400 Bad Request: Usually indicates there was a problem with the payload in the case of a POST, PUT, or PATCH request

401 Unauthorized: Invalid or missing credentials

403 Forbidden: An authenticated user does not have permission to the requested resource

404 Not Found: The URL was not recognized

429 Too Many Requests: The API may have rate limiting in effect. Check the API docs to see if there is a limit on number of requests per second or per minute.

500 Internal Server Error: The server encountered an error processing your request. Like a 400, this can also be caused by a bad payload on a POST, PUT or PATCH.

When the requests module receives one of the above status codes in the response, it returns a response object and populates its status_code and reason fields. If a connectivity error occurs, such as a hostname that is unreachable or unresolvable, requests will throw an exception. However, requests will not throw an exception by default for HTTP-based errors such as the 4XX and 5XX errors above; instead it returns the failure status code and reason in the response. A common strategy in error handling is to use the raise_for_status() method of the response object to throw an exception for HTTP-based errors too. A Python try/except block can then be used to catch any of the errors and provide a more human-friendly error message to the user, if desired.

Note that HTTP status codes in the 2XX range indicate success, and thus raise_for_status() will not raise an exception.

<span role="button" tabindex="0" data-code="# Example of error for which Requests would throw an exception # Define a purposely bad URL url = 'https://badhostname' # Implement a try/except block to handle the error try: response = requests.get(url, json=data) response.raise_for_status() except requests.exceptions.RequestException as e: print(f"Error while connecting to {url}: {e}") Error while connecting to https://badhostname: HTTPSConnectionPool(host='badhostname', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('
# Example of error for which Requests would throw an exception

# Define a purposely bad URL
url = 'https://badhostname'

# Implement a try/except block to handle the error
try:
    response = requests.get(url)
    response.raise_for_status()
except requests.exceptions.RequestException as e:
    print(f"Error while connecting to {url}: {e}")

Error while connecting to https://badhostname: HTTPSConnectionPool(host='badhostname', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x108890d60>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known'))

# Example of HTTP error, no exception thrown but we force one to be triggered with raise_for_status()

# Define a purposely bad URL
url = 'https://nautobot.demo.networktocode.com/api/dcim/regions/bogus'

# Get the API token from an environment variable.
token = os.environ.get('NAUTOBOT_TOKEN')

# Add the Authorization header
headers = {'Authorization': f'Token {token}'}

# Implement a try/except block to handle the error
try:
   response = requests.get(url, headers=headers)
   response.raise_for_status()
except requests.exceptions.RequestException as e:
   print(f"Error while connecting to {url}: {e}")

Error while connecting to https://nautobot.demo.networktocode.com/api/dcim/regions/bogus: 404 Client Error: Not Found for url: https://nautobot.demo.networktocode.com/api/dcim/regions/bogus/

CRUD (Create, Replace, Update, Delete) API Objects

So far we have mostly discussed retrieving data from an API using HTTP GET requests. When creating/updating objects, HTTP POST, PUT, and PATCH are used. A DELETE request would be used to remove objects from the API.

  • POST: Used when creating a new object
  • PATCH: Update an attribute of an object
  • PUT: Replaces an object with a new one
  • DELETE: Delete an object

It should be noted that some APIs support both PUT and PATCH, while some others may support only PUT or only PATCH. The Meraki API that we’ll be using for the following example supports only PUT requests to change objects.

POST

When using a POST request with an API, you typically must send a payload along with the request in the format required by the API (usually JSON, sometimes XML, very rarely something else). The format needed for the payload should be documented in the API specification. When using JSON format, you can specify the json argument when making the call to requests.post. For example, requests.post(url, headers=headers, json=payload). Other types of payloads such as XML would use the data argument. For example, requests.post(url, headers=headers, data=payload).
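For contrast, a hypothetical XML payload would be sent with the data argument and an explicit Content-Type header (the URL and body here are placeholders, not a real API):

xml_payload = '<region><name>Asia Pacific</name></region>'
headers = {'Content-Type': 'application/xml'}

# Send the raw XML string in the request body
response = requests.post('https://example.com/api/regions', headers=headers, data=xml_payload)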

Create a Region in Nautobot

With Nautobot, we can determine the required payload by looking at the Swagger docs that are on the system itself at /api/docs/. Let’s take a look at the Swagger spec to create a Region in Nautobot.

(Screenshot: Swagger documentation for creating a Region in Nautobot.)

The fields marked with a red * above indicate that they are required fields, the other fields are optional. If we click the Try it out button as shown above, it gives us an example payload.

(Screenshot: the example payload shown after clicking Try it out.)

Since the name and slug are the only required fields, we can form a payload from the example omitting the other fields if desired. The below code snippet shows how we can create the Region in Nautobot using requests.post.

<span role="button" tabindex="0" data-code="import requests import os # Get the API token from an environment variable. token = os.environ.get('NAUTOBOT_TOKEN') # Add the Authorization header headers = {'Authorization': f'Token {token}'} # This is the base URL for all Nautobot API calls base_url = 'https://nautobot.demo.networktocode.com/api' # Form the payload for the request, per the API specification payload = { "name": "Asia Pacific", "slug": "asia-pac", } # Create the region in Nautobot response = requests.post('https://nautobot.demo.networktocode.com/api/dcim/regions/', headers=headers, json=payload) >>> response
import requests
import os

# Get the API token from an environment variable.
token = os.environ.get('NAUTOBOT_TOKEN')

# Add the Authorization header
headers = {'Authorization': f'Token {token}'}

# This is the base URL for all Nautobot API calls
base_url = 'https://nautobot.demo.networktocode.com/api'

# Form the payload for the request, per the API specification
payload = {
    "name": "Asia Pacific",
    "slug": "asia-pac",
}

# Create the region in Nautobot
response = requests.post('https://nautobot.demo.networktocode.com/api/dcim/regions/', headers=headers, json=payload)

>>> response
<Response [201]>
>>> response.reason
'Created'

PATCH

A PATCH request can be used to update an attribute of an object. For example, in this next snippet we will change the description of the Region we just created in the POST request. It was omitted in the previous POST request, so it is currently a blank string. Although it is not called out in the Swagger API specification, the PATCH request for Nautobot requires the id field to be defined in the payload. The id can be looked up for our previously created Region by doing a requests.get on /regions?slug=asia-pac. The ?slug=asia-pac at the end of the URL is a query parameter that is used to filter the request for objects having a field matching a specific value. In this case, we filtered the objects for the one with the slug field set to asia-pac to grab the ID. In addition, the payload needs to be in the form of a list of dictionaries rather than a single dictionary as is shown in the Swagger example.

Update a Region Description in Nautobot

<span role="button" tabindex="0" data-code="import requests import os # Get the API token from an environment variable. token = os.environ.get('NAUTOBOT_TOKEN') # Add the Authorization header headers = {'Authorization': f'Token {token}'} # This is the base URL for all Nautobot API calls base_url = 'https://nautobot.demo.networktocode.com/api' # First we get the region_id from our previously created region response = requests.get('https://nautobot.demo.networktocode.com/api/dcim/regions/?slug=asia-pac', headers=headers) >>> response.json() {'count': 1, 'next': None, 'previous': None, 'results': [{'id': 'be2c22a2-56ce-4d84-8ac9-5a68c6a39d62', 'url': 'https://nautobot.demo.networktocode.com/api/dcim/regions/be2c22a2-56ce-4d84-8ac9-5a68c6a39d62/', 'name': 'Asia Pacific', 'slug': 'asia-pac', 'parent': None, 'description': 'Test region created from the API!', 'site_count': 0, '_depth': 0, 'custom_fields': {}, 'created': '2021-10-22', 'last_updated': '2021-10-22T21:20:07.628690', 'display': 'Asia Pacific'}]} # Parse the above response for the region identifier region_id = response.json()['results'][0]['id'] # Form the payload for the request, per the API specification (see preceding paragraph for some nuances!) payload = [{ "name": "Asia Pacific", "slug": "asia-pac", "description": "Test region created from the API!", "id": region_id }] # Update the region in Nautobot response = requests.patch('https://nautobot.demo.networktocode.com/api/dcim/regions/', headers=headers, json=payload) >>> response
import requests
import os

# Get the API token from an environment variable.
token = os.environ.get('NAUTOBOT_TOKEN')

# Add the Authorization header
headers = {'Authorization': f'Token {token}'}

# This is the base URL for all Nautobot API calls
base_url = 'https://nautobot.demo.networktocode.com/api'

# First we get the region_id from our previously created region
response = requests.get('https://nautobot.demo.networktocode.com/api/dcim/regions/?slug=asia-pac', headers=headers)

>>> response.json()
{'count': 1, 'next': None, 'previous': None, 'results': [{'id': 'be2c22a2-56ce-4d84-8ac9-5a68c6a39d62', 'url': 'https://nautobot.demo.networktocode.com/api/dcim/regions/be2c22a2-56ce-4d84-8ac9-5a68c6a39d62/', 'name': 'Asia Pacific', 'slug': 'asia-pac', 'parent': None, 'description': 'Test region created from the API!', 'site_count': 0, '_depth': 0, 'custom_fields': {}, 'created': '2021-10-22', 'last_updated': '2021-10-22T21:20:07.628690', 'display': 'Asia Pacific'}]}

# Parse the above response for the region identifier
region_id = response.json()['results'][0]['id']

# Form the payload for the request, per the API specification (see preceding paragraph for some nuances!)
payload = [{
    "name": "Asia Pacific",
    "slug": "asia-pac",
    "description": "Test region created from the API!",
    "id": region_id
}]

# Update the region in Nautobot
response = requests.patch('https://nautobot.demo.networktocode.com/api/dcim/regions/', headers=headers, json=payload)

>>> response
<Response [200]>

PUT

A PUT request is typically used to replace an entire object, including all of its attributes.

Replace a Region Object in Nautobot

Let’s say we want to replace the entire Region object that we created previously, giving it a completely new name, slug and description. For this we can use a PUT request, specifying the id of the previously created Region and providing new values for the name, slug, and description attributes.

<span role="button" tabindex="0" data-code="import requests import os # Get the API token from an environment variable token = os.environ.get('NAUTOBOT_TOKEN') # Add the Authorization header headers = {'Authorization': f'Token {token}'} # This is the base URL for all Nautobot API calls base_url = 'https://nautobot.demo.networktocode.com/api' # First we get the region_id from our previously created region response = requests.get('https://nautobot.demo.networktocode.com/api/dcim/regions/?slug=asia-pac', headers=headers) >>> response.json() {'count': 1, 'next': None, 'previous': None, 'results': [{'id': 'be2c22a2-56ce-4d84-8ac9-5a68c6a39d62', 'url': 'https://nautobot.demo.networktocode.com/api/dcim/regions/be2c22a2-56ce-4d84-8ac9-5a68c6a39d62/', 'name': 'Asia Pacific', 'slug': 'asia-pac', 'parent': None, 'description': 'Test region created from the API!', 'site_count': 0, '_depth': 0, 'custom_fields': {}, 'created': '2021-10-22', 'last_updated': '2021-10-22T21:20:07.628690', 'display': 'Asia Pacific'}]} # Parse the above response for the region identifier region_id = response.json()['results'][0]['id'] # Form the payload for the request, per the API specification (see preceding paragraph for some nuances!) payload = [{ "name": "Test Region", "slug": "test-region-1", "description": "Asia Pac region updated with a PUT request!", "id": region_id }] # Update the region in Nautobot response = requests.put('https://nautobot.demo.networktocode.com/api/dcim/regions/', headers=headers, json=payload) >>> response
import requests
import os

# Get the API token from an environment variable
token = os.environ.get('NAUTOBOT_TOKEN')

# Add the Authorization header
headers = {'Authorization': f'Token {token}'}

# This is the base URL for all Nautobot API calls
base_url = 'https://nautobot.demo.networktocode.com/api'

# First we get the region_id from our previously created region
response = requests.get('https://nautobot.demo.networktocode.com/api/dcim/regions/?slug=asia-pac', headers=headers)

>>> response.json()
{'count': 1, 'next': None, 'previous': None, 'results': [{'id': 'be2c22a2-56ce-4d84-8ac9-5a68c6a39d62', 'url': 'https://nautobot.demo.networktocode.com/api/dcim/regions/be2c22a2-56ce-4d84-8ac9-5a68c6a39d62/', 'name': 'Asia Pacific', 'slug': 'asia-pac', 'parent': None, 'description': 'Test region created from the API!', 'site_count': 0, '_depth': 0, 'custom_fields': {}, 'created': '2021-10-22', 'last_updated': '2021-10-22T21:20:07.628690', 'display': 'Asia Pacific'}]}

# Parse the above response for the region identifier
region_id = response.json()['results'][0]['id']

# Form the payload for the request, per the API specification (see preceding paragraph for some nuances!)
payload = [{
    "name": "Test Region",
    "slug": "test-region-1",
    "description": "Asia Pac region updated with a PUT request!",
    "id": region_id
}]

# Update the region in Nautobot
response = requests.put('https://nautobot.demo.networktocode.com/api/dcim/regions/', headers=headers, json=payload)

>>> response
<Response [200]>

# Search for the region using the new slug
response = requests.get('https://nautobot.demo.networktocode.com/api/dcim/regions/?slug=test-region-1', headers=headers)

# This returns the replaced object, while retaining the same identifier
>>> response.json()
{'count': 1, 'next': None, 'previous': None, 'results': [{'id': 'be2c22a2-56ce-4d84-8ac9-5a68c6a39d62', 'url': 'https://nautobot.demo.networktocode.com/api/dcim/regions/be2c22a2-56ce-4d84-8ac9-5a68c6a39d62/', 'name': 'Test Region', 'slug': 'test-region-1', 'parent': None, 'description': 'Asia Pac region updated with a PUT request!', 'site_count': 0, '_depth': 0, 'custom_fields': {}, 'created': '2021-10-22', 'last_updated': '2021-10-25T17:31:04.003235', 'display': 'Test Region'}]}

Enable an SSID in Meraki

Let’s look at another example of using a PUT to enable a wireless SSID in the Cisco Meraki dashboard. For this we will use a PUT request including the appropriate JSON payload to enable SSID 14.

import requests
import os

# Get the API key from an environment variable
api_key = os.environ.get('MERAKI_API_KEY')

# The base URI for all requests
base_uri = "https://api.meraki.com/api/v0"

# Set the custom header to include the API key
headers = {'X-Cisco-Meraki-API-Key': api_key}

net_id = 'DNENT2-mxxxxxdgmail.com' 
ssid_number = 14

url = f'{base_uri}/networks/{net_id}/ssids/{ssid_number}'

# Initiate the PUT request to enable an SSID. You must have a reservation in the Always-On DevNet sandbox to gain authorization for this. 
response = requests.put(url, headers=headers, json={"enabled": True})

DELETE

An object can be removed by making a DELETE request to the URI (Uniform Resource Identifier) of an object. The URI is the portion of the URL that refers to the object, for example /dcim/regions/{id} in the case of the Nautobot Region.

Remove a Region from Nautobot

Let's go ahead and remove the Region that we previously added. To do that, we'll send a DELETE request to the URI of the Region. The URI can be seen in the url attribute of the Region object when doing a GET request. We can also see in the API specification for DELETE that the call should be made to /regions/{id}.

<span role="button" tabindex="0" data-code="# Search for the region using the slug response = requests.get('https://nautobot.demo.networktocode.com/api/dcim/regions/?slug=test-region-1', headers=headers) # Parse the URL from the GET request url = response.json()['results'][0]['url'] >>> url 'https://nautobot.demo.networktocode.com/api/dcim/regions/be2c22a2-56ce-4d84-8ac9-5a68c6a39d62/' # Delete the Region object response = requests.delete(url, headers=headers) # A status code of 204 indicates successful deletion >>> response
# Search for the region using the slug
response = requests.get('https://nautobot.demo.networktocode.com/api/dcim/regions/?slug=test-region-1', headers=headers)

# Parse the URL from the GET request
url = response.json()['results'][0]['url']

>>> url
'https://nautobot.demo.networktocode.com/api/dcim/regions/be2c22a2-56ce-4d84-8ac9-5a68c6a39d62/'

# Delete the Region object
response = requests.delete(url, headers=headers)

# A status code of 204 indicates successful deletion
>>> response
<Response [204]>

Rate Limiting

Some APIs implement a throttling mechanism to prevent the system from being overwhelmed with requests. This is usually implemented as a rate limit of X number of requests per minute. When the rate limit is hit, the API returns a status code 429: Too Many Requests. To work around this, your code must implement a backoff timer in order to avoid hitting the threshold. Here’s an example working around the Cisco DNA Center rate limit of 5 requests per minute:

<span role="button" tabindex="0" data-code="import requests from requests.auth import HTTPBasicAuth import time from pprint import pprint import os # Pull in credentials from environment variables username = os.environ.get('USERNAME') password = os.environ.get('PASSWORD') hostname = "sandboxdnac2.cisco.com" headers = {"Content-Type": "application/json"} # Use Basic Authentication auth = HTTPBasicAuth(username, password) # Request URL for the token login_url = f"https://{hostname}/dna/system/api/v1/auth/token" # Retrieve the token resp = requests.post(login_url, headers=headers, auth=auth) token = resp.json()['Token'] # Add the token to subsequent requests headers['X-Auth-Token'] = token url = f"https://{hostname}/dna/intent/api/v1/network-device" resp = requests.get(url, headers=headers, auth=auth) count = 0 # Loop over devices and get device by id # Each time we reach five requests, pause for 60 seconds to avoid the rate limit for i, device in enumerate(resp.json()['response']): count += 1 device_count = len(resp.json()['response']) print (f"REQUEST #{i+1}") url = f"https://{hostname}/dna/intent/api/v1/network-device/{device['id']}" response = requests.get(url, headers=headers, auth=auth) pprint(response.json(), indent=2) if count == 5 and (i+1)
import requests
from requests.auth import HTTPBasicAuth
import time
from pprint import pprint
import os

# Pull in credentials from environment variables  
username = os.environ.get('USERNAME')
password = os.environ.get('PASSWORD')
hostname = "sandboxdnac2.cisco.com"

headers = {"Content-Type": "application/json"}
# Use Basic Authentication
auth = HTTPBasicAuth(username, password)

# Request URL for the token
login_url = f"https://{hostname}/dna/system/api/v1/auth/token"

# Retrieve the token
resp = requests.post(login_url, headers=headers, auth=auth)
token = resp.json()['Token']

# Add the token to subsequent requests
headers['X-Auth-Token'] = token

url = f"https://{hostname}/dna/intent/api/v1/network-device"
resp = requests.get(url, headers=headers, auth=auth)

count = 0
# Loop over devices and get device by id
# Each time we reach five requests, pause for 60 seconds to avoid the rate limit
for i, device in enumerate(resp.json()['response']):
    count += 1
    device_count = len(resp.json()['response'])
    print (f"REQUEST #{i+1}")
    url = f"https://{hostname}/dna/intent/api/v1/network-device/{device['id']}"
    response = requests.get(url, headers=headers, auth=auth)
    pprint(response.json(), indent=2)
    if count == 5 and (i+1) < device_count:
      print("Sleeping for 60 seconds...")
      time.sleep(60)
      count = 0
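Another option is to react to the limit rather than pre-compute it: watch for a 429 response and honor the Retry-After header when the API provides one. Here's a minimal sketch (not specific to DNA Center, and reusing the variables from the example above):

response = requests.get(url, headers=headers, auth=auth)

# If we were throttled, wait as long as the server suggests (falling back to 60 seconds) and retry once
if response.status_code == 429:
    wait = int(response.headers.get('Retry-After', 60))
    print(f"Rate limited, sleeping for {wait} seconds...")
    time.sleep(wait)
    response = requests.get(url, headers=headers, auth=auth)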

Pagination

Some API calls may set a limit on the number of objects that are returned in a single call. In this case, the API should return paging details in the JSON body including the URL to request the next set of data as well as the previous set. If Previous is empty, we are on the first set of data. If Next is empty, we know we have reached the end of the dataset. Some API implementations follow RFC5988, which includes a Link header in the format:

Link: <https://webexapis.com/v1/people?displayName=Harold&max=10&before&after=Y2lzY29zcGFyazovL3VzL1BFT1BMRS83MTZlOWQxYy1jYTQ0LTRmZWQtOGZjYS05ZGY0YjRmNDE3ZjU>; rel="next"

The above example is from the Webex API, which implements RFC5988. This is described in the API documentation here: https://developer.webex.com/docs/api/basics

Keep in mind that not all implementations use the RFC, however. The API documentation should explain how pagination is handled.
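When an API does follow the RFC, requests parses the Link header for you and exposes it as a dictionary on the response, which saves you from parsing the raw header text. A small sketch, reusing the Bearer-token headers from the Webex example earlier:

response = requests.get('https://webexapis.com/v1/people?max=10', headers=headers)

# response.links is parsed from the Link header; it is an empty dict if the header is absent
next_url = response.links.get('next', {}).get('url')
if next_url:
    response = requests.get(next_url, headers=headers)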

Handling Pagination in Nautobot

A good example of pagination can be seen when making a GET request to retrieve all Devices from Nautobot. Nautobot includes a count, next, and previous attribute in responses that are paginated. By default, the API will return a maximum of 50 records. The limit value as well as an offset value are indicated in the next value of the response. For example: 'next': 'https://nautobot.demo.networktocode.com/api/dcim/devices/?limit=50&offset=50'. In the URL, the limit indicates the maximum number of records, and the offset indicates where the next batch of records begins. The previous attribute indicates the URL for the previous set of records. If previous is None, it means we are on the first set of records. And if next is None, it means we are on the last set of records.

In the below snippet, we first retrieve the first set of 50 records and store them in a device_list variable. We then create a while loop that iterates until the next field in the response contains None. The returned results are added to the device_list at each iteration of the loop. At the end we can see that there are 511 devices, which is the same value as the count field in the response.

import requests
import os

# Get the API token from an environment variable
token = os.environ.get('NAUTOBOT_TOKEN')

# Add the Authorization header
headers = {'Authorization': f'Token {token}'}

# This is the base URL for all Nautobot API calls
base_url = 'https://nautobot.demo.networktocode.com/api'

# Create the initial request for the first batch of records
response = requests.get(f'{base_url}/dcim/devices', headers=headers)

# Store the initial device list
device_list = [device for device in response.json()['results']]

# Notice that we now have the first 50 devices
>>> len(device_list)
50

# But there are 511 total!
>>> response.json()['count']
511

# Loop until 'next' is None, adding the retrieved devices to device_list on each iteration
while response.json()['next']:
    print(f"Retrieving {response.json()['next']}")
    response = requests.get(response.json()['next'], headers=headers)
    for device in response.json()['results']:
        device_list.append(device)

Retrieving https://nautobot.demo.networktocode.com/api/dcim/devices/?limit=50&offset=50
Retrieving https://nautobot.demo.networktocode.com/api/dcim/devices/?limit=50&offset=100
Retrieving https://nautobot.demo.networktocode.com/api/dcim/devices/?limit=50&offset=150
Retrieving https://nautobot.demo.networktocode.com/api/dcim/devices/?limit=50&offset=200
Retrieving https://nautobot.demo.networktocode.com/api/dcim/devices/?limit=50&offset=250
Retrieving https://nautobot.demo.networktocode.com/api/dcim/devices/?limit=50&offset=300
Retrieving https://nautobot.demo.networktocode.com/api/dcim/devices/?limit=50&offset=350
Retrieving https://nautobot.demo.networktocode.com/api/dcim/devices/?limit=50&offset=400
Retrieving https://nautobot.demo.networktocode.com/api/dcim/devices/?limit=50&offset=450
Retrieving https://nautobot.demo.networktocode.com/api/dcim/devices/?limit=50&offset=500

>>> len(device_list)
511

Handling Pagination in Cisco Webex

In the code below, we first get the room IDs for the Webex rooms I am a member of. Then we retrieve the members of the Sandbox-Support DevNet room with a function that follows the Link URL and displays the content at each step. The while loop exits when the Link header is no longer present, which we detect because headers.get('Link') returns None.

import requests
import re
import os

api_path = "https://webexapis.com/v1"

# You can retrieve your token here: https://developer.webex.com/docs/api/getting-started
token = os.environ.get('WEBEX_TOKEN')
headers = {"Authorization": f"Bearer {token}"}

# List the rooms, and collect the ID for the Sandbox-Support DevNet room
get_rooms = requests.get(f"{api_path}/rooms", headers=headers)
for room in get_rooms.json()['items']:
    print(room['title'], room['id'])
    if room['title'] == "Sandbox-Support DevNet":
      room_id = room['id']

# This function will follow the Link URLs until there are no more, printing out
# the member display name and next URL at each iteration. Note that I have decreased the maximum number of records to 1 so as to force pagination. This should not be done in a real implementation. 

def get_members(room_id):
    params = {"roomId": room_id, "max": 1}
    # Make the initial request and print the member name
    response = requests.get(f"{api_path}/memberships", headers=headers, params=params)
    print(response.json()['items'][0]['personDisplayName'])
    # Loop until the Link header is empty or not present
    while response.headers.get('Link'):
        # Get the URL from the Link header
        next_url = response.links['next']['url']
        print(f"NEXT: {next_url}")
        # Request the next set of data
        response = requests.get(next_url, headers=headers)
        if response.headers.get('Link'):
            print(response.json()['items'][0]['personDisplayName'])
        else:
            print('No Link header, finished!')

# Execute the function using the Sandbox-Support DevNet RoomID
get_members(room_id)


Conclusion

I hope this has been a useful tutorial on using requests to work with vendor REST APIs! While no two API implementations are the same, many of the most common patterns for working with them are covered here. As always, hit us up in the comments or on our public Slack channel with any questions. Thanks for reading!

-Matt




Intro to Automation Part 2 – New Tools for a New Network


In my previous blog post, I mentioned a few tools you can get started with when beginning your journey into Network Automation. Today, I hope to dive into them further and explore them and a few others in more detail. While there are many tools available for network automation, some are more common than others and will serve as a good foundation when getting started.

If you missed Part 1 of this Intro to Automation series, you can find it here.

Tools Overview

As mentioned in Part 1 of this series, making the transition from Network Engineer to Network Automation Engineer requires a shift in how you think of your network. But with this shift comes a different set of tools you’ll start to use, which will help you as you progress into this field.

There are so many tools out there for Software Developers, but we as Network Engineers tend to think differently and use different tools. It can be overwhelming figuring out which tools to start out with. In this post, I hope to go over some of the most popular ones used by Network Automation Engineers and explain them in a way that you not only understand, but that shows their real-world value to a Network Engineer.

Before I get into tools and software themselves, I need to put in a disclaimer. Just like with all things, there is no “one-size-fits-all” tool or solution that will always work in every situation. Some of these tools will work most of the time, and some will work only in specific instances. My goal is to outline the ones that are the most popular today with a wide range of community or commercial support and explain the pros and cons to each.

Git

I put Git as the first tool in the list on purpose because I believe it is one of the most important tools a Network Engineer can learn to use. Even if you never write a single script or piece of automation, Git can still be very useful for a Network Engineer.

There are many non-automation related use cases, including:

  • Config backups
  • Script backups
  • Version controlling
  • Auditing

Note: Regarding auditing, it can be helpful internally or for external auditors, but may require some advanced setup to meet different auditing and compliance criteria.

Git can be used locally without ever needing to create an account with popular online services like GitHub or Bitbucket. As a new Network Automation Engineer, you can use it to create backups of scripts you write or device configurations. These can be used later to undo a mistake you made, or even answer the timeless question, “when was the last change made and what was it?”
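As a quick sketch of the config-backup idea (the directory layout here is an assumption, and this simply drives the git CLI from Python; it assumes the directory is already a Git repository), each run records a new snapshot of whatever changed:

import subprocess

# Stage every changed or new config file in the backup directory
subprocess.run(['git', 'add', '--all'], cwd='/backups/configs', check=True)

# Record a snapshot ("save point") that you can diff against or roll back to later
subprocess.run(['git', 'commit', '-m', 'Nightly config backup'], cwd='/backups/configs', check=True)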

Git is not GitHub, just like the Linux kernel is not Ubuntu.

Git is the most popular version control system available and is used by the majority of developers around the world. It is open source, easy to use, and, with its wide user base, has a lot of community support available. While it is natively used with CLI commands, there are various GUI applications that can make learning and using Git easier when getting started. A few popular ones are:

Or if you use an IDE for writing scripts, they usually include Git support as well. I discuss IDEs more in a later section below.

PlayStation in a Git Article?

There are many features and benefits to Git, but seeing as Git is a VCS or Version Control System, the main one I use Git for is keeping track of different files and how they change over time. In other words, keeping track of all changes made to a set of files and folders.

Here’s a fun analogy for video gamers when learning about Git. When I was a kid, I was playing Tomb Raider II on the original PlayStation. I got in the habit of frequently saving and did it so often I could save the game without thinking about it. Well at one point, I fell off a cliff at the end of a level and went to reload my last save. But muscle memory kicked in and I accidentally saved the game instead of loading it! I had to restart the whole level over from the beginning! I learned a lesson that day, and it’s stuck with me the rest of my gaming life: always have multiple save files, and backup those files whenever possible.

Git can be thought of similarly but with multiple files or directories. Every time you make a “commit” in Git (save your progress), it’s like saving your game in a brand-new save file every single time. Did you spend hours working on a script, only to have it break and you have no idea why? You can easily revert back to any previous snapshot (commit) and start over or use it as a reference point. Yes, I’ve done exactly this before too many times to count!

Fun fact: Git was created by Linus Torvalds, the same person who created Linux!

Python

If there is one programming language I would recommend to a Network Engineer getting into automation, it is without a doubt Python. According to the TIOBE Index, as of September 2021, Python is the 2nd most popular programming language in the world, and about to take over the #1 spot from C. In fact, I’d be surprised if you haven’t heard of Python’s benefits by now or even started learning it. If you haven’t, the best time to start is today.

One of the driving factors behind its popularity is its ease of use and gentle learning curve, especially for people without programming backgrounds. It is easy to pick up and start writing something useful quickly, and yet it can still be used to write complex automation.

With such popularity comes large community support. While learning Python, when you run into a question, chances are someone else has already posted the answer online, and it’s a quick Google search away.

Some of the cons can be realized after using Python for a few years. For example, there are other languages that are inherently faster with certain tasks, though they’re more complicated to learn. Additionally, there are certain software development “best practices” that are taken care of for you under the hood of Python, but need to be learned when using other languages.

In summary, from a Network Engineer just getting into scripting and automation to seasoned senior-level engineers, Python is perfect.

Fun fact: Did you know that some network hardware vendors have native Python support built right in to their devices? For example, if it’s installed and enabled, you can run the command python from privileged exec mode on a Cisco 9k switch and load a Python prompt!

Hello World

In the world of programming languages, there’s a concept called the “Hello World” program. Essentially, it’s when someone who is learning a new programming language learns just enough to print out the phrase “Hello World!” to the screen. It’s seen as a good starting point while learning the basics and is actually really fun to do!

To demonstrate the simplicity of Python when compared to the other programming languages, here’s a “Hello World” program written in the other 2 languages at the top of the TIOBE Index I mentioned earlier, C and Java:

Python

print("Hello World!")

C

#include <stdio.h>
int main() {
   printf("Hello World!");
}

Java

class HelloWorld {
  public static void main(String[] args) {
    System.out.println("Hello World!");
  }
}

Python is about as simple as you can get and makes it easy to get started! As I mentioned before, you may not need to understand what everything in the C or Java examples is right away since Python handles most of that behind the scenes, but you will eventually want to learn what they are and why they’re important.

Important: When starting with Python, make sure you’re using and learning Python 3, not Python 2.
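A quick way to check which version a script is running under:

import sys

# Prints something like "3.10.4"; anything starting with 2 means you're on the old version
print(sys.version.split()[0])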

Ansible

Ansible is an open-source automation platform used for managing multiple devices easily. It was acquired by Red Hat in 2015, and it remains one of the most popular open-source automation tools for network automation engineers. Ansible can be used for relatively simple playbooks (scripts that you run) for a single switch, all the way up to complex fleet management systems for thousands of devices!

While there are other open-source tools available that are similar, Ansible is the best and most popular choice for managing network devices for a few reasons:

  1. Agentless – Ansible connects directly to a network device, usually over SSH but can use other methods, and does not require an “agent”, or other piece of software, to already be pre-installed on the device. Installing an agent on a network device for management is not feasible, so this is where Ansible works well compared to similar tools like Chef or Puppet.
  2. Inventories – Want to configure 100 switches without manually connecting to them one at a time? This is where Ansible really shines! Just provide it with a specially formatted inventory file, which includes a list of devices and a few other parameters, and Ansible will handle connecting to them all behind the scenes.
  3. Modular – Similar to Python, with Ansible’s popularity comes a wide range of modules (plugins) you can use. Some are submitted by the open-source community, while others are officially supported third-party modules (e.g., Arista EOS).
  4. Customization – If you need Ansible to do something unique to your environment, or there is a feature not yet created by the open-source community, you can write your own using…..Python! That’s right, Ansible runs off of Python and natively supports custom Python scripts to be imported into Ansible playbooks.
  5. Commercial Support – Since Red Hat’s acquisition of Ansible in 2015, companies can now purchase commercial support for Ansible through Red Hat, or even through third-party companies like Network to Code.

IDEs and Text Editors

Earlier I mentioned a couple of popular IDEs that are free for personal use, but I want to explain them in more detail. An IDE or a fancy text editor isn’t something you hear much about as a network engineer, but they’re SO important when working with automation.

As a network engineer, my text editor of choice was a generic notepad-style application, which I mostly used to write out a switch config before configuring the device by copy/pasting the text into the CLI.

When you get into automation, you’ll be spending a lot more time with scripts, configs, settings files, etc. For this reason, I highly recommend you get a good IDE or text editor right away. The more you use it, the more familiar you’ll become with it. Eventually, you’ll never want to work without it!

One feature that is an absolute must-have in whichever program you choose is syntax highlighting. While different programs use different colors for syntax highlighting, they all work essentially the same way. Instead of explaining what it is, look at the images below and ask yourself: which one is easier to read through?

[Images: the same Cisco IOS config shown without and with syntax highlighting]

IDEs

An Integrated Development Environment, or IDE for short, is an application that contains many common tools used for writing software (or even basic scripts) and is frequently used by software developers. There are many benefits to IDEs, and the popular ones have too many features to list.

Some of their downsides go hand in hand with their strengths. They can be complex, with many settings that make no sense when you’re first starting out. In my opinion, though, the benefits strongly outweigh the negatives, and I encourage you to try one out and give it some time before giving up on it. Don’t worry about every button or feature; focus on the basics. As you become more familiar with your IDE, you’ll pick up its other features and benefits more and more.

Two of the most popular ones available today used by Network Automation Engineers are VSCode by Microsoft and PyCharm by JetBrains. The syntax highlighting example above is from this free VSCode extension for Cisco IOS configs, though both support many other color variations and file formats.

Text Editors

There are many, many good text editors out there. Ask anyone who’s been in IT long enough, and they’ll not only have a favorite but a list of reasons why it’s the best. The real answer is that there is no “best” text editor, only the one that works best for you. Most popular GUI-based text editors now offer some level of built-in syntax highlighting, but not all. If you’re not comfortable starting with an IDE right away, or if you just want something better than Notepad for your day-to-day activities, here are three of the more popular free options available right now:

APIs

While not a specific tool, APIs are more of a back-end technology. In fact, I’m sure someone you know has already told you how great APIs are. Before I wrote my very first script, I had IT friends telling me how much they used and loved them, but when they tried to explain what APIs actually are, I couldn’t understand why they were so good. I wrote this section in a way that hopefully explains APIs to network engineers who have a hard time grasping their usefulness, the way I wish they had been explained to me years ago.

An API allows one application or system to be able to interact with a completely different application or system in a structured, predefined manner with expected inputs and outputs on both sides. Simplified, it is a way for two programs to be able to talk to each other.

An analogy would be how people talk to each other using the English language. If two people both speak English, then they both understand what each word means, know how to phrase a request so the other person understands it, know what to expect as a response, and know what that response means. One person’s response to the exact same request may even differ depending on whether it comes from someone they know (authenticated) or from someone they do not (unauthenticated).

Think of an API like this. Amy natively speaks Spanish (application 1) and Bob natively speaks French (application 2). They normally can’t understand each other, but if they both agree to speak in their secondary language, English (the API), they can communicate in a limited but effective manner. In this analogy, the API is not a translator, but a predefined set of rules (English) that lets Amy and Bob talk to each other.

To take it a step further, some types of APIs (like REST APIs) can require the other application to be authenticated before they will process what it has to say. In the previous analogy, it is similar to how Amy, being friends with Bob (authenticated), responds to him in one way. However, if a complete stranger named Charley (unauthenticated) walked up and started saying the same thing Bob was saying, she might respond differently.

Note: There are multiple types of APIs, each with its own sets of rules, data formats, communication methods, etc.

Previously as a network engineer, I never really understood why I would need to use them. As a network automation engineer, I now use APIs for my automation scripts to be able to interact with network devices.

Traditionally, if I want to enable an interface on a Cisco switch, I have to connect to the CLI on the switch over SSH, and run these commands:

switch01# config t
switch01(config)# interface FastEthernet1/1
switch01(config-if)# no shutdown
switch01(config-if)# end
switch01# copy running-config startup-config

Simple, right? Well, at least simple for humans to perform and understand. However, when you start writing scripts to do this, you’ll find that driving the CLI this way is much harder and very unreliable. When I started writing scripts to configure switches, I would have my script connect over SSH and configure the device exactly the way I would via the CLI, something like the sketch below. While this worked, there are faster, more reliable, and easier ways of doing it.
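
The post doesn’t show that script or name a library, but a rough sketch of the “drive the CLI over SSH” approach might look like this, using the Netmiko library with a placeholder hostname and credentials:

from netmiko import ConnectHandler

# Hypothetical device details -- placeholders, not from the original post
switch = {
    "device_type": "cisco_ios",
    "host": "switch01.example.com",
    "username": "admin",
    "password": "password",
}

commands = [
    "interface FastEthernet1/1",
    "no shutdown",
]

# Open the SSH session, push the commands, and save the config
with ConnectHandler(**switch) as conn:
    output = conn.send_config_set(commands)  # enters and exits config mode for us
    conn.save_config()                       # equivalent of copy running-config startup-config
print(output)

It works, but the script is still sending CLI commands and scraping text back, so anything unexpected in the output can break it.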

A common scenario is configuring the same setting across multiple network devices with different OSs, or sometimes even different versions of the same OS. For example, if you look at the AAA configuration for Cisco IOS, compare it to Cisco NX-OS, and then compare it again to Cisco ASA, they are all different. As an engineer, I can manually adjust the commands in the CLI on the fly, but in my scripts I have to account for each variation I might encounter.

Important: I also have to account for variations I am not aware of!

This is where APIs come in. Instead of worrying about variations in each OS or how each command is different, what if I could have my script configure it using the same method and know for certain it will get configured as expected? Or if it fails, can I have it tell me there’s an error without breaking anything? Using an API, you absolutely can!

In this example, you can use a network device or application’s built-in API to send it specific data. The device receives the request and, because it already knows what to expect, it can parse the request, perform the requested action, and return data to your script in a pre-defined, expected format. If you send information in a way it isn’t expecting, or leave out required information, it will tell you that as well!

Examples of data returned can be anything, including:

  • Was the job successful?
  • Command output
  • Configurations
  • Errors encountered
  • etc.
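
To make this concrete, here is a minimal sketch of what an API-based version of the earlier “no shutdown” example might look like, assuming the switch exposes a standards-based RESTCONF API; the hostname, credentials, and interface name are placeholders:

import requests

# Hypothetical switch and credentials -- placeholders, not from the original post
url = "https://switch01.example.com/restconf/data/ietf-interfaces:interfaces/interface=FastEthernet1%2F1"

headers = {
    "Accept": "application/yang-data+json",
    "Content-Type": "application/yang-data+json",
}

# Structured data the device already knows how to interpret
payload = {"ietf-interfaces:interface": {"name": "FastEthernet1/1", "enabled": True}}

# PATCH the interface to enable it; the API returns a clear success or error status
response = requests.patch(url, headers=headers, json=payload, auth=("admin", "password"), verify=False)
print(response.status_code)  # e.g., 204 means the change was accepted

The key difference is that both the request and the response are structured data, so the script can reliably tell whether the change succeeded without parsing CLI output.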

Conclusion

It’s absolutely amazing how many tools and applications you can use when automating your network. It’s even more amazing knowing that most of them are either open-source or offer some sort of free licensing agreement.

I encourage you to start with learning the basics of the tools I’ve mentioned. You don’t have to become an expert in any of them right away. I’ve been using Python for years, and I still learn new things about it every day from my peers here at Network to Code!

I also encourage you to give back to the open-source and network automation community as you progress in your career. Join us in Slack, and feel free to participate in discussions and ask for advice. It’s a Slack community run by Network Automation Engineers for anyone interested in automation, network automation, general networking, or even non-network-related IT systems.

Many resources are available online to learn these tools. While many of them are free and written by the community, Network to Code offers excellent training for those who learn better in a structured class environment. We cover topics such as Python, Ansible, and even general network automation concepts.

 

Thanks for reading, and happy automating!

-Matt

Intro to Automation Series

Part 1 – Rethink How You Think

Part 2 – New Tools for a New Network

Part 3 – Your New Best Friend Git




GraphQL vs. REST API Case Study


If you’ve been watching this space, you’ve seen me talking about Nautobot’s GraphQL capabilities and how GraphQL helps:

  • GraphQL queries are much more efficient than RESTful queries
  • GraphQL makes your life easier by making data more accessible
  • The above results in dramatic improvement in your quality of life

This post is a case study in those aspects. It will empirically demonstrate how GraphQL:

  • Minimizes your number of queries
  • Returns only the data you want
  • Makes it so you don’t have to manually filter data and build the desired data structure in a script
  • Creates faster automation
  • Reduces your workload

I will be running this case study using https://demo.nautobot.com/. Readers are encouraged to follow along, using the scripts below.

The Problem Statement

In this case study, the goal is to gather specific information for certain network elements from Nautobot. Specifically, we want a data structure with the following information:

  • We want information from all devices within the ams site
  • The data structure should organize information so that all the data is grouped on a per-device basis
  • We want this specific data for each device:
    • Device name
    • Device role
    • All the interface names
    • The list of IP address(es) for each interface, even if the interface has no configured IP address(es)

The GraphQL Solution

The GraphQL solution will leverage the pynautobot Python package, which provides a customized and efficient way to programmatically query Nautobot via GraphQL.

Also, from an earlier blog post in this GraphQL series, recall that you can craft GraphQL queries in Nautobot’s GraphiQL interface.

Here is the script we will use to accomplish our task using GraphQL:

import pynautobot

from pprint import pprint
from time import time

start_time = time()

# Nautobot URL and auth info
url = "https://demo.nautobot.com"
token = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"

# GraphQL query
query = """
query {
  devices(site:"ams") {
    name
    device_role {
      name
    }
    interfaces {
      name
      ip_addresses {
        address
      }
    }
  }
}
"""

print("Querying Nautobot via pynautobot.")
print()

print("url is: {}".format(url))
print()

print("query is:")
print(query)
print()

nb = pynautobot.api(url, token)

response = nb.graphql.query(query=query)
response_data = response.json

print("Here is the response data in json:")
pprint(response_data)
print()

end_time = time()

run_time = end_time - start_time

print("Run time = {}".format(run_time))

GraphQL Results

Here are the results when the script runs (snipped in places for brevity):

blogs/graphql_vs_restful % python3 -i graphql_query_ams_device_ints_pynautobot.py
Querying Nautobot via pynautobot.

url is: https://demo.nautobot.com

query is:

query {
  devices(site:"ams") {
    name
    device_role {
      name
    }
    interfaces {
      name
      ip_addresses {
          address
      }
    }
  }
}


Here is the response data in json:
{'data': {'devices': [{'device_role': {'name': 'edge'},
                       'interfaces': [{'ip_addresses': [{'address': '10.11.192.0/32'}],
                                       'name': 'Ethernet1/1'},
                                      {'ip_addresses': [{'address': '10.11.192.2/32'}],
                                       'name': 'Ethernet2/1'},
                                      {'ip_addresses': [{'address': '10.11.192.4/32'}],
                                       'name': 'Ethernet3/1'},
                                      {'ip_addresses': [{'address': '10.11.192.8/32'}],
                                       'name': 'Ethernet4/1'},
                                      < --- snip for brevity --- >
                                      {'ip_addresses': [],
                                       'name': 'Ethernet60/1'},
                                      {'ip_addresses': [{'address': '10.11.128.1/32'}],
                                       'name': 'Loopback0'},
                                      {'ip_addresses': [],
                                       'name': 'Management1'}],
                       'name': 'ams-edge-01'},
                      {'device_role': {'name': 'edge'},
                       'interfaces': [{'ip_addresses': [{'address': '10.11.192.1/32'}],
                                       'name': 'Ethernet1/1'},
                                      < --- snip for brevity --- >
                                      {'ip_addresses': [{'address': '10.11.128.2/32'}],
                                       'name': 'Loopback0'},
                                      {'ip_addresses': [],
                                       'name': 'Management1'}],
                       'name': 'ams-edge-02'},
                      {'device_role': {'name': 'leaf'},
                       'interfaces': [{'ip_addresses': [{'address': '10.11.192.5/32'}],
                                       'name': 'Ethernet1'},
                                      {'ip_addresses': [], 'name': 'Ethernet3'},
                                      < --- snip for brevity --- >
                                      {'ip_addresses': [{'address': '10.11.64.0/32'}],
                                       'name': 'vlan99'},
                                      {'ip_addresses': [{'address': '10.11.0.0/32'}],
                                       'name': 'vlan1000'}],
                       'name': 'ams-leaf-01'},
                      {'device_role': {'name': 'leaf'},
                       'interfaces': [{'ip_addresses': [{'address': '10.11.192.9/32'}],
                                       'name': 'Ethernet1'},
                                      {'ip_addresses': [{'address': '10.11.192.11/32'}],
                                       'name': 'Ethernet2'},
                                      {'ip_addresses': [], 'name': 'Ethernet3'},
                                      < --- snip for brevity --- >                                      
                                      {'ip_addresses': [], 'name': 'Management1'},
                                      {'ip_addresses': [{'address': '10.11.1.0/32'}],
                                       'name': 'vlan1000'}],
                        < --- snip for brevity --- >
                      {'device_role': {'name': 'leaf'},
                       'interfaces': [{'ip_addresses': [{'address': '10.11.192.33/32'}],
                                       'name': 'Ethernet1'},
                                      < --- snip for brevity --- >
                                      {'ip_addresses': [{'address': '10.11.7.0/32'}],
                                       'name': 'vlan1000'}],
                       'name': 'ams-leaf-08'}]}}

Run time = 1.8981318473815918
>>> 

Take specific note of the following GraphQL features demonstrated above:

  • The returned data comes back in a structure that matches that of the query
  • GraphQL returns only the requested data
  • The returned data is ready for programmatic parsing
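
Picking the data apart requires nothing more than standard dictionary and list access. As a small sketch, assuming the response_data variable from the script above:

# Walk the GraphQL response; the structure mirrors the query itself
for device in response_data["data"]["devices"]:
    print(device["name"], "-", device["device_role"]["name"])
    for interface in device["interfaces"]:
        addresses = [ip["address"] for ip in interface["ip_addresses"]]
        print("  {}: {}".format(interface["name"], addresses))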

Running the script six times produced an average run time of 2.23 seconds, returning data for ten devices in the ams site.

RESTful Solution

For the RESTful solution, we’re not concerned about matching the exact data structure returned by GraphQL. We’re only concerned with getting the same data into a structure that can be parsed for programmatic use.

The GraphQL results were grouped by device, and the RESTful solution will do that as well, but will have some small format changes.

Here is the format for the data structure that the RESTful solution will return:

{ 
  <device_1_name>: {
    'role': <device_1_role>,
    'interface_info': {
      <interface_1_name>: [list of ip addresses for interface_1],
      <interface_2_name>: [list of ip addresses for interface_2],
        . . . 
    }
  }
  . . . 
  <device_n_name>: {
    'role': <device_n_role>,
    'interface_info': {
      <interface_1_name>: [list of ip addresses for interface_1],
      <interface_2_name>: [list of ip addresses for interface_2],
        . . . 
    }
  }
}

The format above is slightly different than that of the GraphQL results, but is still programmatically parsable.

The RESTful script that returns the data is below. When examining it, take note of the following:

  • We had to artificially construct the data structure, which required a non-trivial amount of work
  • The RESTful script requires calls to three distinct API endpoints, with some of those calls iterated multiple times
  • Each API call returns WAY more information than we are interested in
  • Because the call to get interface data for the ams site returns so many records, Nautobot applies its default limit of 50 results per call
    • The limit constraint reduces the load on the Nautobot server
    • With the default database in https://demo.nautobot.com, the call to get all the interface data iterates six times, returning up to 50 results per call
  • The call to get the IP address information must iterate once for each of the ten devices in the ams site
  • The RESTful script is over twice as long as the GraphQL script and is much more complex
  • The amount of time required to construct, test, and validate the RESTful script was well over an order of magnitude longer than that required for the GraphQL script (your mileage may vary!)
"""
Use REST API calls to get the following info for each device in 'ams' site:
- device name
- device role
- interface info
  - interface name
  - ip address

"""

import requests

from pprint import pprint
from time import time

start_time = time()

# Looking at the Nautobot API:
#  - /api/dcim/devices gives you name and role
#  - /api/dcim/interfaces gives you all the interface info
#  - /api/ipam/ip-addresses gives you IP address info

# Define general request components
payload = {}
headers = {
    "Content-Type": "application/json",
    "Authorization": "Token aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
}

##########################################

# Define devices url, query for 'ams' site devices
ams_dev_url = "https://demo.nautobot.com/api/dcim/devices?site=ams"

# Query Nautobot for the ams devices
ams_dev_resp = requests.get(ams_dev_url, headers=headers, data=payload)

# Turn the response text string into json
ams_devices_json = ams_dev_resp.json()

# Device info dict
device_info = {}

# Create a dict with device names as keys; the value for each key will be a dict.
for device in ams_devices_json["results"]:
    role = device['device_role']['display']
    dev_name = device["name"]
    device_info[dev_name] = {
        'role': role,
        'interface_info': {},
    }

print("device_info is:")
pprint(device_info)
print()
print()

##########################################

print("The GraphQL query returned all interfaces for a device, regardless of whether ")
print("an ip address was configured; we will match that here.")
print()

print("Gathering interface info for `ams` site.")

# Define url for device interfaces in 'ams' site
ams_interface_url = "https://demo.nautobot.com/api/dcim/interfaces?site=ams"

# Define a list to hold the interface info for `ams` site
ams_interface_info = []

# Account for ams_interface_url results limit; iterate url until 'next' url is None
while ams_interface_url is not None:
    ams_interface_resp = requests.get(ams_interface_url, headers=headers, data=payload)
    ams_interface_json = ams_interface_resp.json()
    ams_interface_url = ams_interface_json["next"]
    print("ams_interface_url is {}".format(ams_interface_url))
    ams_interface_info.extend(ams_interface_json["results"])
print()

print("Adding interface names to device_info for the appropriate device.")
# Filter out the interface names and add them in device_info
for interface_entry in ams_interface_info:
    dev_name = interface_entry["device"]["name"]
    interface_name = interface_entry["name"]
    device_info[dev_name]['interface_info'][interface_name] = []
print()

#####################################

print("Finally, gather the IP address info for each interface.")
print("This RESTful call returns only interfaces that have IP addresses configured.")
print()
ip_info_list = []

for device in device_info.keys():
    ip_url = "https://demo.nautobot.com/api/ipam/ip-addresses?device={}".format(device)

    # Account for ip_url results limit; iterate url until 'next' url is None
    while ip_url is not None:
        print("ip_url = {}".format(ip_url))
        ip_url_response = requests.get(ip_url, headers=headers, data=payload)
        ip_json = ip_url_response.json()
        ip_url = ip_json["next"]
        ip_info_list.extend(ip_json["results"])
print()

print("Add the IP address info to device_info.")
print()
for item in ip_info_list:
    device = item["assigned_object"]["device"]["name"]
    interface = item["assigned_object"]["name"]
    address = item["address"]
    device_info[device]['interface_info'][interface].append(address)

print("Here is the completed data structure:")
pprint(device_info)
print()
end_time = time()

run_time = end_time - start_time

print("Run time = {}".format(run_time))


Here are the results of the RESTful script:

blogs/graphql_vs_restful % python3 -i restful_api_query_ams_device_ints.py
device_info is:
{'ams-edge-01': {'interface_info': {}, 'role': 'edge'},
 'ams-edge-02': {'interface_info': {}, 'role': 'edge'},
 'ams-leaf-01': {'interface_info': {}, 'role': 'leaf'},
 'ams-leaf-02': {'interface_info': {}, 'role': 'leaf'},
 'ams-leaf-03': {'interface_info': {}, 'role': 'leaf'},
 'ams-leaf-04': {'interface_info': {}, 'role': 'leaf'},
 'ams-leaf-05': {'interface_info': {}, 'role': 'leaf'},
 'ams-leaf-06': {'interface_info': {}, 'role': 'leaf'},
 'ams-leaf-07': {'interface_info': {}, 'role': 'leaf'},
 'ams-leaf-08': {'interface_info': {}, 'role': 'leaf'}}


The GraphQL query returned all interfaces for a device, regardless of whether 
an ip address was configured; we will match that here.

Gathering interface info for `ams` site.
ams_interface_url is https://demo.nautobot.com/api/dcim/interfaces/?limit=50&offset=50&site=ams
ams_interface_url is https://demo.nautobot.com/api/dcim/interfaces/?limit=50&offset=100&site=ams
ams_interface_url is https://demo.nautobot.com/api/dcim/interfaces/?limit=50&offset=150&site=ams
ams_interface_url is https://demo.nautobot.com/api/dcim/interfaces/?limit=50&offset=200&site=ams
ams_interface_url is https://demo.nautobot.com/api/dcim/interfaces/?limit=50&offset=250&site=ams
ams_interface_url is https://demo.nautobot.com/api/dcim/interfaces/?limit=50&offset=300&site=ams
ams_interface_url is None

Adding interface names to device_info for the appropriate device.

Finally, gather the IP address info for each interface.
This RESTful call returns only interfaces that have IP addresses configured.

ip_url = https://demo.nautobot.com/api/ipam/ip-addresses?device=ams-edge-01
ip_url = https://demo.nautobot.com/api/ipam/ip-addresses?device=ams-edge-02
ip_url = https://demo.nautobot.com/api/ipam/ip-addresses?device=ams-leaf-01
ip_url = https://demo.nautobot.com/api/ipam/ip-addresses?device=ams-leaf-02
ip_url = https://demo.nautobot.com/api/ipam/ip-addresses?device=ams-leaf-03
ip_url = https://demo.nautobot.com/api/ipam/ip-addresses?device=ams-leaf-04
ip_url = https://demo.nautobot.com/api/ipam/ip-addresses?device=ams-leaf-05
ip_url = https://demo.nautobot.com/api/ipam/ip-addresses?device=ams-leaf-06
ip_url = https://demo.nautobot.com/api/ipam/ip-addresses?device=ams-leaf-07
ip_url = https://demo.nautobot.com/api/ipam/ip-addresses?device=ams-leaf-08

Add the IP address info to device_info.

Here is the completed data structure:
{'ams-edge-01': {'interface_info': {'Ethernet1/1': ['10.11.192.0/32'],
                                    'Ethernet10/1': ['10.11.192.32/32'],
                                    'Ethernet11/1': [],
                                    'Ethernet12/1': [],
                                     < --- snip for brevity --- >
                                    'Ethernet9/1': ['10.11.192.28/32'],
                                    'Loopback0': ['10.11.128.1/32'],
                                    'Management1': []},
                 'role': 'edge'},

 'ams-edge-02': {'interface_info': {'Ethernet1/1': ['10.11.192.1/32'],
                                    'Ethernet10/1': ['10.11.192.34/32'],
                                    < --- snip for brevity --- >
                                    'Loopback0': ['10.11.128.2/32'],
                                    'Management1': []},
                 'role': 'edge'},
 'ams-leaf-01': {'interface_info': {'Ethernet1': ['10.11.192.5/32'],
                                    < --- snip for brevity --- >
                                    'vlan99': ['10.11.64.0/32']},
                 'role': 'leaf'},
 < --- some devices snipped for brevity --- >
 'ams-leaf-07': {'interface_info': {'Ethernet1': ['10.11.192.29/32'],
                                    'Ethernet10': [],
                                    < --- snip for brevity --- >
                                    'vlan99': ['10.11.70.0/32']},
                 'role': 'leaf'},
 'ams-leaf-08': {'interface_info': {'Ethernet1': ['10.11.192.33/32'],
                                    < --- snip for brevity --- >
                                    'vlan99': ['10.11.71.0/32']},
                 'role': 'leaf'}}

Run time = 13.60936713218689
>>> 

Running the script six times produced an average run time of 14.9 seconds.

This script created a data structure that is not identical to the structure created by GraphQL, but is similar in nature and is still parsable.
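
For example, given the device_info structure the script builds, a few lines are enough to pull out whatever slice you need, such as every interface in the ams site with no IP address configured (a small sketch, assuming device_info from the script above):

# Count the interfaces on each device that have no IP address assigned
for device, info in device_info.items():
    unnumbered = [name for name, ips in info["interface_info"].items() if not ips]
    print("{} ({}): {} interfaces without an IP address".format(device, info["role"], len(unnumbered)))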

Final Results

Method     Average Run Time    # of Queries    Time to Create Script
GraphQL    2.2 seconds         1               ~ 20 minutes
RESTful    14.9 seconds        17              ~ 200 minutes+

NOTE: These results are based on the baseline data in the Nautobot demo sandbox. If someone has modified the database, your actual results may vary a bit.

By any measure, GraphQL is the clear choice here! GraphQL allows a much simpler script that is much more efficient than REST.

Imagine your automation task running an average of 12.7 seconds faster (14.9 - 2.2 seconds) by using GraphQL.

I also don’t want to undersell the amount of time and headache required to create the RESTful script, including parsing the REST data and crafting the data structure: it was not pleasant, and we should not talk about it again. Ever.

GraphQL Considerations for Server Load

Querying with GraphQL results in much less coding and post-processing for the user and is generally much more efficient than RESTful calls that achieve the same result.

However, the load on the Nautobot server must still be considered. Depending on the data you are after and your use case, it may make sense to:

  • Use multiple, targeted GraphQL queries instead of a single GraphQL query with a large scope
  • Use RESTful queries and offload the processing from the Nautobot server, doing the post-processing on your local host

Depending on how many sites and devices you have, the example query below may put undue load on the Nautobot server:

query {
  devices {
    name
    device_role {
      name
    }
    interfaces {
      name
      ip_addresses {
        address
      }
    }
  }
}

This query could put a lot of undue load on the server because it is not scoped to a single site or group of sites.

To ease the load, you could instead do the following:

1. Make a GraphQL query to return all the site names

 query {
   sites {
     name
   }
 }

2. Make additional GraphQL queries, programmatically iterating over each site name in the query parameter devices(site:"<site_name>"); a sketch of this follows the example queries below:

     query {
       devices(site:"ams") {
         name
         device_role {
           name
         }
         interfaces {
           name
           ip_addresses {
             address
           }
         }
       }
     }
     query {
       devices(site:"bkk") {
         name
         device_role {
           name
         }
         interfaces {
           name
           ip_addresses {
             address
           }
         }
       }
     }

    et cetera . . .
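
Here is a minimal sketch of that two-step approach using pynautobot, reusing the same Nautobot URL and placeholder token as the earlier script:

import pynautobot

# Same Nautobot URL and placeholder token as the GraphQL script above
url = "https://demo.nautobot.com"
token = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"

nb = pynautobot.api(url, token)

# Step 1: one small query to get every site name
site_query = """
query {
  sites {
    name
  }
}
"""
sites = nb.graphql.query(query=site_query).json["data"]["sites"]

# Step 2: one targeted query per site instead of a single all-sites query
device_query_template = """
query {{
  devices(site:"{}") {{
    name
    device_role {{
      name
    }}
    interfaces {{
      name
      ip_addresses {{
        address
      }}
    }}
  }}
}}
"""

all_devices = {}
for site in sites:
    response = nb.graphql.query(query=device_query_template.format(site["name"]))
    all_devices[site["name"]] = response.json["data"]["devices"]

Each individual query stays small and targeted, which spreads the load on the Nautobot server while still collecting the full data set.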

Wrapping Up

This case study validates the clear advantages GraphQL offers: simpler and faster automation, higher query efficiency, less time swimming in extraneous data, and thus less time coding. The great part is that Nautobot delivers GraphQL capabilities that you can leverage right now. Please do give it a try.

If you have questions, you can check out these Network to Code resources for more info:


Conclusion

You can also hit us up on the #nautobot channel on NTC Slack.

Thank you, and have a great day!

-Tim


