Nautobot Application: BGP Models


We are happy to announce the release of a new application for Nautobot. With this application, it’s now possible to model your ASNs and BGP Peerings (internal and external) within Nautobot!

This is the first application in the Network Data Models family, and it gave us a great opportunity to exercise some of the new capabilities of the application framework introduced by Nautobot. Data modeling is an interesting exercise in its own right, and BGP being a complex ecosystem made this a particularly interesting project. This blog presents the application and some of the design principles we had in mind while developing it.


The development of this application was initially sponsored by the Riot Direct team at Riot Games. Thanks to them for contributing it back to the community.

Overview

This application adds the following new data models into Nautobot:

  • BGP Routing Instance: device-specific BGP process
  • Autonomous System: network-wide description of a BGP autonomous system (AS)
  • Peer Group Template: network-wide template for Peer Group objects
  • Peer Group: device-specific configuration for a group of functionally related BGP peers
  • Address Family: device-specific configuration of a BGP address family (AFI-SAFI)
  • Peering and Peer Endpoints: a BGP Peering is represented by a Peering object and two endpoints, each representing the configuration of one side of the BGP peering. A Peer Endpoint must be associated with a BGP Routing Instance.
  • Peering Role: describes the valid options for PeerGroup, PeerGroupTemplate, and/or Peering roles

With these new models, it’s now possible to populate the Source of Truth (SoT) with any BGP peerings, internal or external, regardless of whether both endpoints are fully defined in the SoT.

The minimum requirement to define a BGP peering is two IP addresses and one or two autonomous systems (one ASN for iBGP, two ASNs for eBGP).
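For illustration, here is roughly how the two autonomous systems of an eBGP peering could be created through the REST API. This is a minimal sketch: the hostname, token, endpoint path, and payload fields are assumptions, so refer to the application's API documentation for the authoritative schema.

import requests

NAUTOBOT_URL = "https://nautobot.example.com"  # hypothetical instance
HEADERS = {"Authorization": "Token 0123456789abcdef"}  # hypothetical API token

# Create the two autonomous systems of an eBGP peering.
# The endpoint path and payload fields below are assumptions based on the
# models described above, not a confirmed schema.
for asn, description in [(65001, "Our AS"), (65002, "Transit provider AS")]:
    response = requests.post(
        f"{NAUTOBOT_URL}/api/plugins/bgp/autonomous-systems/",
        headers=HEADERS,
        json={"asn": asn, "description": description, "status": "active"},
    )
    response.raise_for_status()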

Peering

[Figure: Peering]

Autonomous Systems

[Figure: Autonomous Systems]

Peer Endpoint

[Figure: Peer Endpoint]

Peer Group

[Figure: Peer Group]

Peering Roles

[Figure: Peering Roles]

Installing the Application

The application is available as a Python package on PyPI and can be installed atop an existing Nautobot installation using pip:

$ pip3 install nautobot-bgp-models

This application is compatible with Nautobot 1.3.0 and higher.

Once installed, the application needs to be enabled in the nautobot_config.py file:

# nautobot_config.py
PLUGINS = [
    # ...,
    "nautobot_bgp_models",
]

Design Principles

BGP is a protocol with a long and rich history of implementations. Understanding the existing limitations of data modeling for this protocol, we had to find the right balance between innovating and improving on established approaches. In this section we explain our approach to the BGP data models.

Network View and Relationship First

One of the advantages of a Source of Truth is that it captures how all objects are related to each other and then exposes those relationships via the UI and API, making it easy for users to consume that information.

Instead of modeling a BGP session from a single device’s point of view, with a local IP address and a remote IP address, we chose to model a BGP peering as a relationship between two endpoints. This way, each endpoint has a complete understanding of what is connected on the other side, and information isn’t duplicated when a session between two devices exists in the SoT.

This design also accounts for external peering sessions where the remote device is not present in Nautobot, as is often the case when you are peering with a transit provider.

Start Simple

For the first version we decided to focus on the main building blocks that compose a BGP peering. Over time the BGP application will evolve to support more information: routing policy, community, etc. Before increasing the complexity we’d love to see how our customers and the community leverage the application.

Inheritance

Many BGP implementations are based on the concept of inheritance: it’s possible to centralize almost all information in a Peer Group Template model, and all BGP endpoints associated with that Peer Group Template will inherit its attributes.

This concept maps very naturally to automation, and we wanted something similar in the SoT. As such, we implemented an inheritance system between some models:

  • PeerGroup inherits from PeerGroupTemplate.
  • PeerEndpoint inherits from PeerGroup, PeerGroupTemplate, and BGPRoutingInstance.

As an example, a PeerEndpoint associated with a PeerGroup will automatically inherit attributes of the PeerGroup that haven’t been defined at the PeerEndpoint level. If an attribute is defined on both, the value defined on the PeerEndpoint will be used.

Refer to the application documentation for full details about the implemented inheritance pattern.

The inherited values will be automatically displayed in the UI and can be retrieved from the REST API with the additional ?include_inherited=true parameter.
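For example, retrieving peer endpoints with their inherited attributes resolved could look like the sketch below. The ?include_inherited=true parameter is the one provided by the application; the hostname, token, and endpoint path are assumptions.

import requests

NAUTOBOT_URL = "https://nautobot.example.com"  # hypothetical instance
HEADERS = {"Authorization": "Token 0123456789abcdef"}  # hypothetical API token

# Ask the REST API to resolve inherited attributes for each peer endpoint.
# The path below is an assumption; include_inherited comes from the app.
response = requests.get(
    f"{NAUTOBOT_URL}/api/plugins/bgp/peer-endpoints/",
    headers=HEADERS,
    params={"include_inherited": "true"},
)
response.raise_for_status()
for endpoint in response.json()["results"]:
    print(endpoint)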

[Figure: Inheritance]

Extra Attributes

Extra attributes allow users to attach additional information to the models provided by the application. We made a design decision to let application users abstract their configuration parameters and store contextual information in this special field. What makes it truly special is its support for inheritance: extra attributes are not only inherited but also intelligently deep-merged, allowing attributes from related objects to be both inherited and overridden.
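To make the behavior concrete, here is a small, self-contained sketch of how a deep merge between inherited and local extra attributes behaves. This illustrates the semantics described above; it is not the application's actual implementation, and the attribute names are hypothetical.

def deep_merge(base, override):
    """Recursively merge two dicts; values from ``override`` win."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# A peer group defines defaults; an endpoint overrides one nested value
# while inheriting the rest (attribute names are hypothetical).
group_attrs = {"timers": {"keepalive": 30, "hold": 90}, "bfd": True}
endpoint_attrs = {"timers": {"hold": 180}}

print(deep_merge(group_attrs, endpoint_attrs))
# {'timers': {'keepalive': 30, 'hold': 180}, 'bfd': True}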

Integration with the Core Data Model

With Nautobot, one of our goals is to make it easy to extend the data model of the Source of Truth, not only by simplifying the introduction of new models but also by allowing applications to extend the core data model. In multiple places, the BGP application leverages existing core data models.

Extensibility

We designed the BGP models to provide a sane baseline that fits most use cases, and we encourage everyone to leverage the extensibility features provided by Nautobot to store and organize any additional information you need under each model, or to capture any relationship that is important to your organization.

All models introduced by this application support the same extensibility features supported by Nautobot, which include:

  • Custom fields
  • Custom links
  • Relationships
  • Change logging
  • Custom data validation logic
  • Webhooks

All of this comes in addition to the REST API and GraphQL support.

An example can be seen in the Nautobot Sandbox, where a relationship between a circuit and a BGP session was added to track their association.


Conclusion

More information on this application can be found at Nautobot BGP Plugin. You can also get a hands-on feel by visiting the public Nautobot Sandbox.

As usual, we would like to hear your feedback. Feel free to reach out to us on Network to Code’s Slack Channel!

-Damien & Marek




Upgrade Your Python Project With Poetry


Dependency management and virtual environments are integral to the Python ecosystem, yet the primary tools in use today are far from ideal. Some of the most common methods are:

  • Dependency management: pip by way of requirements.txt is still the de facto solution for most of us. While this approach has worked in the past, it has limitations when it comes to guaranteeing that the same project will be installed consistently.
  • Virtual environments: a common setup is to use virtualenv to create your virtual environment and manually activate it using source <path to venv>/activate. While this approach works, it requires the user to know which venv needs to be activated for each project, and the command to execute can be lengthy.
  • Code packaging (only applicable if you need to share your code): it is common to use setuptools in a setup.py file, but this solution also has some shortcomings.

If you are using any or all of the methods described above, you should take a look at Poetry to help you manage your Python project(s). Poetry’s goal is to simplify the management of Python packaging and dependencies. Amongst other things, Poetry can help:

  • Manage your dependencies by replacing requirements.txt
  • Manage your virtualenv by simplifying the creation and activation of a virtualenv for your project
  • Manage your Python package by replacing setup.py
  • Publish your application to PyPI
  • Turn Python functions into command line programs
  • Ensure package integrity

It sounds like magic and too good to be true, but there is really nothing magical happening here. Poetry is just a modern tool implementing best practices from Python and other ecosystems to manage a project properly. Poetry leverages two main files:

  • pyproject.toml: As the main configuration file for your Python project, this file can be edited manually and Poetry also helps to manage the file. The pyproject.toml file is not specific to Poetry and is meant to be the main configuration file for your Python project and all the tools surrounding it (Poetry, Black, etc.). It was introduced to the Python community in 2016 by PEP 518 to improve how to define Python packages, but its scope has increased year over year to become the default configuration file.
  • poetry.lock: A lock file managed by Poetry, this file should never be edited manually. With the poetry.lock file, Poetry brings a much-needed feature to Python dependency management: the ability to separately maintain the list of primary dependencies, the list of development dependencies, and the exact versions of the libraries that should be installed on a system. This feature is common in other languages but has historically been poorly served by Python’s setup.py or requirements.txt. If you have ever generated your requirements.txt file with pip freeze > requirements.txt to ensure that you’ll always install the same version of your dependencies, you should be familiar with the problem the lock file solves. While pip freeze works most of the time, it’s not a great solution, and it’s prone to version conflicts between projects, which may require manual intervention.

If you are interested in reading more about the story behind pyproject.toml, I recommend reading this blog from Brett Cannon.

Install Poetry

To install Poetry on macOS, Linux, or Windows (bash), the recommended method is to run the command below on your system:

curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python

For convenience, Poetry is also available via pip, but this is not the recommended installation method. I usually reserve it for when I need to install Poetry within a Docker container: pip install poetry.

Manage Python dependencies and virtual environment with Poetry

Below is a simple pyproject.toml file that keeps track of the dependencies for a project named mypythonproject.
This file can either be written by hand, or Poetry can generate it for you with poetry init.

[tool.poetry]
name = "mypythonproject"
version = "0.1.0"
description = "My awesome Python project"
authors = ["NTC <info@networktocode.com>"]

[tool.poetry.dependencies]
python = "^3.6"
click = "^7.1.1"

Taking a closer look at the file, the first section [tool.poetry] contains information about the project itself and the second section [tool.poetry.dependencies] defines the list of dependencies for the project, including both the Python version and the list of external dependencies that would usually be in a requirements.txt file.

The pyproject.toml file should be at the root of your project (here it’s the only file in my directory). Poetry will automatically install all the dependencies with poetry install (this replaces pip install -r requirements.txt, python setup.py install, pip install ., or pip install -e .):

➜  mypythonproject# ll
total 8
-rw-r--r--  1 damien  staff   203B May 13 09:17 pyproject.toml
➜  mypythonproject#
➜  mypythonproject# poetry install
The currently activated Python version 2.7.16 is not supported by the project (^3.6).
Trying to find and use a compatible version.
Using python3 (3.7.7)
Creating virtualenv mypythonproject-0zMZkBqq-py3.7 in /Users/damien/Library/Caches/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies... (0.2s)

Writing lock file

Package operations: 1 install, 0 updates, 0 removals

  - Installing click (7.1.2)
➜  mypythonproject#

During the installation, Poetry automatically generates the poetry.lock file to track the exact version of the dependencies that have been installed on my system. If the poetry.lock file had already been present, Poetry would have installed the exact version of click defined in the lock file instead of trying to install the latest one from PyPI.

➜  mypythonproject# ll
total 16
-rw-r--r--  1 damien  staff   606B May 13 09:43 poetry.lock
-rw-r--r--  1 damien  staff   203B May 13 09:17 pyproject.toml

➜  mypythonproject# cat poetry.lock
[[package]]
category = "main"
description = "Composable command line interface toolkit"
name = "click"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
version = "7.1.2"

[metadata]
content-hash = "1876b927e070ae12d1e9090f5ea6bcdd2bb35f09269fc2182bcb9399c5e1be2a"
python-versions = "^3.6"

[metadata.files]
click = [
    {file = "click-7.1.2-py2.py3-none-any.whl", hash = "sha256:dacca89f4bfadd5de3d7489b7c8a566eee0d3676333fbb50030263894c38c0dc"},
    {file = "click-7.1.2.tar.gz", hash = "sha256:d2b5255c7c6349bc1bd1e59e08cd12acbbd63ce649f2588755783aa94dfb6b1a"},
]

Both the pyproject.toml and the poetry.lock files should be tracked in source control (git). Notice the hash values in the lock file; they ensure that the packages installed locally are exactly the ones intended.

Also, during poetry install, Poetry created a new virtual environment for my project because it detected that no virtual environment was already associated with the project. Poetry is able to manage multiple environments per project and provides some commands to easily manage these virtual environments.

  • poetry env info to display information about the project’s virtual environment(s)
  • poetry shell to activate the default virtualenv (replaces source <path to venv>/activate, or workon <project> if you use virtualenvwrapper)
  • poetry run to run a command within the default virtual environment without activating it (a quick example follows the console output below)

➜  mypythonproject# poetry env info
 
Virtualenv
Python:         3.7.7
Implementation: CPython
Path:           /Users/damien/Library/Caches/pypoetry/virtualenvs/mypythonproject-0zMZkBqq-py3.7
Valid:          True

System
Platform: darwin
OS:       posix
Python:   /usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7

➜  mypythonproject# poetry shell
The currently activated Python version 2.7.16 is not supported by the project (^3.6).
Trying to find and use a compatible version.
Using python3 (3.7.7)
Spawning shell within /Users/damien/Library/Caches/pypoetry/virtualenvs/mypythonproject-0zMZkBqq-py3.7
➜  mypythonproject . /Users/damien/Library/Caches/pypoetry/virtualenvs/mypythonproject-0zMZkBqq-py3.7/bin/activate
(mypythonproject-0zMZkBqq-py3.7) ➜  mypythonproject# 
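As a quick example of poetry run, here is a one-off command executed within the project's virtual environment without activating it first (the version printed matches the click release installed earlier):

(mypythonproject-0zMZkBqq-py3.7) ➜  mypythonproject# poetry run python -c "import click; print(click.__version__)"
7.1.2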

It’s possible to disable the virtual environment management in Poetry with poetry config virtualenvs.create false if you want to manage your virtual environment on your own or if you don’t want to use a virtual environment at all.

Add a new dependency to your project

Poetry provides a method to easily install and track a new dependency for your project: poetry add <python package>

In the example below, I’m adding jinja2 as a dependency to my project:

(mypythonproject-0zMZkBqq-py3.7) ➜  mypythonproject# poetry add jinja2
Using version ^2.11.2 for jinja2

Updating dependencies
Resolving dependencies... (0.2s)

Writing lock file

Package operations: 2 installs, 0 updates, 0 removals

  - Installing markupsafe (1.1.1)
  - Installing jinja2 (2.11.2)

Poetry automatically updated the pyproject.toml and the poetry.lock file in the process:

[tool.poetry]
name = "mypythonproject"
version = "0.1.0"
description = "My awesome Python project"
authors = ["NTC <info@networktocode.com>"]

[tool.poetry.dependencies]
python = "^3.6"
click = "^7.1.1"
jinja2 = "^2.11.2"

Poetry can also maintain a list of dependencies specific to your development environment. To add a new dependency to the development dependencies list you need to add the option -D: poetry add -D pytest. This will create a new section [tool.poetry.dev-dependencies] in the pyproject.toml file.

[tool.poetry.dependencies]
python = "^3.6"
click = "^7.1.1"
jinja2 = "^2.11.2"

[tool.poetry.dev-dependencies]
pytest = "^5.4.2"
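Development dependencies are installed by default when running poetry install. On the Poetry 1.x releases current at the time of writing, they can be skipped, for example in a production build, with the --no-dev flag:

➜  mypythonproject# poetry install --no-dev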

Managing Python Packages with Poetry

As mentioned in the introduction, Poetry can also manage your Python package.
By default, Poetry will look for a directory with the name of the project and it will try to install it. In my example, since my project is named mypythonproject in the pyproject.toml, Poetry will automatically look for a directory with this name and install it.

I created a very simple file named cli.py in the directory mypythonproject:

# mypythonproject/cli.py 

def main():
    print("hi there")

Here is how the project looks on my file system.

(mypythonproject-0zMZkBqq-py3.7)  ➜  mypythonproject#
.
├── mypythonproject
│   └── cli.py
├── poetry.lock
└── pyproject.toml

Running poetry install again will automatically install the delta between the pyproject.toml file and my environment; here the only delta is the library mypythonproject itself.

(mypythonproject-0zMZkBqq-py3.7) ➜  mypythonproject# poetry install
Installing dependencies from lock file

No dependencies to install or update

  - Installing mypythonproject (0.1.0)

Once installed, I can access my code from anywhere as long as I’m still within the same virtual environment. In the example below, I moved outside of the project directory and imported the function main() in Python with from mypythonproject.cli import main

(mypythonproject-0zMZkBqq-py3.7) ➜  mypythonproject# cd /
(mypythonproject-0zMZkBqq-py3.7) ➜  /
(mypythonproject-0zMZkBqq-py3.7) ➜  / python
Python 3.7.7 (default, Mar 10 2020, 15:43:33)
[Clang 11.0.0 (clang-1100.0.33.17)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from mypythonproject.cli import main
>>> main()
hi there

We can also check the list of installed packages within the virtual environment with pip list:

(mypythonproject-0zMZkBqq-py3.7) ➜  / pip list | grep mypythonproject
mypythonproject       0.1.0      /Users/damien/projects/mypythonproject

If the name of your directory does not match the name of your project, you need to tell Poetry which directory to install from, using the packages key in the main [tool.poetry] section of the pyproject.toml:

[tool.poetry]
name = "mypythonproject"
version = "0.1.0"
description = "My awesome Python project"
authors = ["NTC <info@networktocode.com>"]
packages = [
    { include = "mylibraryname" },
]

Creating command line programs with Poetry

Another feature that is extremely useful in Poetry is the ability to easily turn a Python function into an executable/program that will be available in your PATH.

Building on the previous example, I can convert my function main() into a CLI tool with if __name__ == "__main__":. At this point I can execute it as a script as long as I know its exact location.

def main():
    print("hi there")

if __name__ == "__main__":
    main()

(mypythonproject-0zMZkBqq-py3.7) ➜  mypythonproject# python mypythonproject/cli.py
hi there

By leveraging the [tool.poetry.scripts] feature, I can automatically turn my function main() into an executable, here called myawesomecli:

[tool.poetry.scripts]
myawesomecli = "mypythonproject.cli:main"

After reinstalling the library with poetry install, I now have access to a new executable myawesomecli:

(mypythonproject-0zMZkBqq-py3.7) ➜  mypythonproject# myawesomecli
hi there

(mypythonproject-0zMZkBqq-py3.7) ➜  mypythonproject# which myawesomecli
/Users/damien/Library/Caches/pypoetry/virtualenvs/mypythonproject-0zMZkBqq-py3.7/bin/myawesomecli

Conclusion

I hope this introduction to Poetry has convinced you to give it a try; I know it’s sometimes hard to change our habits when it comes to tools and development environments. I wish I had tried Poetry a long time ago instead of waiting months before transitioning.
Poetry actually does even more than what we covered in this article, so I encourage you to check out the official documentation!

-Damien (@damgarros)




The State of Network Operation Through Automation / NetDevOps Survey 2019


Network automation has become prevalent in the network industry over the last few years and yet we have little data on the state of the market today. There is a lot of discussion about Ansible and Python but beyond that there is not a good source for those seeking to understand what tools are being used by different companies, what operations people are automating the most/least, or even how long it is taking on average to learn network automation.

The NetDevOps Survey project was started in 2016 to address these questions and more. The idea was to start a survey about the network automation industry to help bring clarity to these questions. Network automation is deeply rooted in open source, and it was decided to make the project open and collaborative, following the best practices from open source projects. The intention was to have the survey be both anonymous and vendor neutral.

When the initiative started in 2016, 20 of us came together to define the first set of questions. At the time, I was working at Juniper and Jason Edelman was in the early days of Network to Code, but we worked together collaboratively on the project.

After a few years of inactivity, the second edition of the survey was released in October 2019, in large part thanks to Francois Caen, who pushed for it to come back and helped organize this new edition.

As we worked on updating the survey for the 2019 edition, we tried to reuse the same questions as much as possible so we could compare the evolution of the responses over time. We also added a completely new section to understand how organizations and individuals are transitioning into network automation. This section was suggested by the community and was a welcome addition; the insights we are getting from it have been very interesting.

Participants in the 2019 NetDevOps Survey

The 2019 edition resulted in 300 responses, about the same number as the first edition in 2016.

The first set of questions was designed to give a better understanding of the type of networks and environments the participants come from.

Looking at the graphs below, there is a good distribution of participants both in terms of environment types and network sizes, with an average of around 1,000 devices. It’s interesting to note that while there is a lot of coverage (blogs/podcasts/press) around network automation, most sources are focused on data centers. 60% of the participants in this survey are also managing campus and/or WAN networks, but the data center is still the environment mentioned by most participants (~75%). This number has declined slightly since 2016, when data centers were mentioned by 80% of participants. These numbers are in line with the migration to the cloud that we here at Network to Code have observed with our customers.

[Figures: environment types, number of devices per network, and environment types compared with 2016]

State of Network Operation Through Automation

The main section of the survey is meant to understand which day-to-day operations are currently automated and which tools are used for each use case. We put together a list of 13 of the most common operations spanning topics such as configuration management, troubleshooting, and software upgrades.

While the 3 main operations automated today are focused on configuration management, it is interesting to see a significant increase around compliance checks and pre-/post-change checks. At the bottom of the graph we are also seeing a noticeable increase in responses on troubleshooting and software qualification.

[Figure: operations automated, 2016 vs. 2019]

Configuration Management

If we look specifically at configuration management, it’s interesting to see that 60% of the participants are using Ansible, and roughly the same percentage are also using scripts at various levels of abstraction. Nornir and SaltStack are each used by ~10% of the participants, an impressive achievement for these two open source projects that have been mainly driven and promoted by the community. Kudos to David Barroso, Mircea Ulinic, Kirk Byers, and Dmitry Figol.

Note: the graph is a little misleading because we split scripts into two categories this year; adding responses #2 and #3 together brings scripts close to 60% as well.

[Figure: tools used to generate and deploy configurations]

Interestingly, on average, participants selected more than two responses to this question, which means that many participants are using more than two solutions to generate and deploy configurations. This fact got me curious, so I decided to dive deeper into the responses to understand which tools people are using most in addition to Ansible.
In the graph below, I narrowed down the responses to only the participants that selected Ansible. It is interesting to note that 12% of them are also using Nornir and more than 60% are using some scripts in addition to Ansible. There is not enough information to truly explain the reasoning behind these results, but it is something that would be interesting to investigate further in the next edition.

[Figure: tools used in addition to Ansible to generate and deploy configurations]

As a side note, there are a lot of interesting analytics that haven’t been done yet on the data, such as diving deeper into each response or exploring how certain groups of participants respond to specific questions. If you are interested in doing some analysis on your own, the database and some tools are available in GitHub.
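As a starting point, a few lines of pandas are enough to begin exploring. The sketch below assumes a flat CSV export with one row per participant/answer pair; the file name and column names are hypothetical, since the repository ships the responses in its own formats.

import pandas as pd

# Hypothetical flat export of the survey responses: one row per
# participant/answer pair, with "respondent_id" and "config_tool" columns.
df = pd.read_csv("netdevops_survey_2019.csv")

# Share of participants using each configuration-management tool.
participants = df["respondent_id"].nunique()
tool_share = df.groupby("config_tool")["respondent_id"].nunique() / participants
print(tool_share.sort_values(ascending=False))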

Maturity level / Automated Changes

At Network to Code, we often refer to network automation as a journey, one that takes a couple of years on average. As part of the survey, I was personally interested in understanding the current level of maturity of our industry: how fast, or how slowly, is the market evolving? In the graph below, we can see that 37% of the participants have been leveraging automation in a significant way for less than 1 year and another 29% have been for 1 to 2 years. These numbers will be interesting to monitor year over year.

[Figure: how long teams have been leveraging automation]

Another way to measure the level of maturity is to look at how manual and automated changes coexist, or not, within an organization. Usually, in the most advanced environments, manual changes are completely forbidden. Two questions in the survey give some good insight on this topic:

  • Do you allow configuration to be manually changed in the CLI in addition to automated deployment?
  • Have you automated the decision to deploy a new configuration?

To the first question, 14.5% of the participants indicated that they don’t allow manual changes in addition to automated deployment. This marks a significant increase from 2016, where only 8.8% of the participants responded “No”. And 46% of the participants indicated that they have fully or partially automated the decision to deploy a new configuration.

[Figure: have you automated the decision to deploy a new configuration?]

Anomaly Detection / Telemetry & Analytics

There has been an increase in conversations and projects surrounding telemetry and analytics in the last few years. A lot of my friends and colleagues working for webscale companies have reported using or building new telemetry and analytics stacks that are becoming an integral part of automation platforms.

Interestingly, the two questions related to anomaly detection, telemetry, and analytics paint a different picture. The majority of participants are still leveraging traditional monitoring solutions based on SNMP/syslog and relying mostly on up/down signals to detect issues in the network; only 40% of the participants leverage flow data, and just 10% use end-to-end probes.

[Figures: anomaly detection data sources and signals]

My personal take-away is that today, telemetry and analytics is where network automation was 3-4 years ago with a significant disconnect between the most advanced companies and traditional enterprises.
A few years ago, network automation was not even a topic for most enterprise engineers, while a handful of companies were already all-in. At the pace at which the industry is moving these days, I think telemetry and analytics will make some progress in the enterprise space in the next couple of years.

Transition to Network Automation

As mentioned earlier, based on input from the community we added a new section to understand how both organizations and individuals are transitioning to network automation: how long it is taking, what strategies they are adopting, and more.

Team / Org

The results for the question “What actions did your team take to transition to network automation?” show that most enterprises don’t have a concrete strategy and are relying on their existing staff to learn on their own or are simply sending them to training. Less than 20% of the participants mentioned hiring a dedicated network automation resource, and less than 10% mentioned working with a consulting firm to help them in their automation journey.

[Figure: actions teams took to transition to network automation]

Individual

As individuals, most participants (81%) estimated that it took them less than 1,000 hours to learn network automation, and 25% even estimated less than 200 hours. The majority of participants had to invest some personal time to learn new skills, while 40% were able to learn on the job, either part-time or full-time.

Overall 34% of the participants mentioned that it took them less than 1 year to make the transition and another 45% estimated the transition at 1 to 2 years.

[Figures: hours invested, transition duration, and how participants found the time to learn]

The last section of the survey focuses on trends. What topics and tools are, or are not, top of mind right now? For this section, we selected a dozen tools and another dozen topics. For each of them we asked the participants if they are:

  • Already using them in production (dark green)
  • Currently evaluating them (green)
  • Thinking about it (light green)
  • Not interested (grey)
  • No idea (orange)

There is a lot of information in the graphs below, so it is hard to cover everything, but my personal takeaways are:

  • 35% of the participants are already using a Source of Truth (SoT) in production and another 50% are either evaluating one or thinking about it. In our experience at Network to Code, a SoT (or SoT strategy) is a critical component of a network automation strategy, and it often seems like the topic does not get enough attention. It’s very encouraging to see such a high level of interest in this topic.
  • The level of adoption for ChatOps is still relatively low, with only ~15% of the participants using it in production and almost 30% of the participants expressing no interest. At NTC, we are seeing a lot of interesting use cases that can be solved with ChatOps, and we expect this technology to be adopted more broadly in the future.
  • DevOps, Infrastructure as Code (IaC), and CI/CD are getting a lot of interest and are getting used in production more and more.

[Figure: topic trends]

On the tools side, there is even more going on. My personal takeaways are:

  • Git and Ansible are used in production at a massive scale: both solutions are used in production by ~70% of the participants.
  • Modern monitoring tools like ELK, Grafana, Prometheus & Influx are used in production by more than 30% of the participants. These numbers are encouraging but don’t necessarily align with the previous responses to the anomaly detection questions. This could be explained if both new and legacy solutions are coexisting right now and the new solutions are still mostly used for visibility but are not used yet for alerting.
  • Nornir and network verification software (Batfish, Forward Networks, etc.) have a disproportionate ratio of production deployments relative to the share of participants evaluating or considering them. These two technologies will be interesting to monitor in the upcoming months and years.

[Figure: tool trends]

Evolution over Time

Another interesting way to look at these results is to examine the evolution of the responses between 2016 and 2019. I selected a few below that I found the most interesting/surprising.

Looking at Git and Ansible, it’s interesting to see that for both technologies the level of interest was already very high in 2016, but deployments in production were significantly lower. Both have gained significant market share in the last few years.

On the other side, solutions like Chef and Puppet have followed the opposite trajectory with a significant decrease in interest and deployment in production from the participants over the last three years.

[Figure: Ansible adoption, 2016 vs. 2019]

The results surrounding event-driven automation (EDA) are surprising: while the level of interest was already very high in 2016, the number of deployments in production has not significantly increased between 2016 and 2019. One explanation could be that EDA requires a higher level of maturity and expertise to be properly deployed in production. Based on the previous results, with two-thirds of the participants using automation for less than 2 years, it’s likely that the market has not reached this level of maturity yet.

[Figure: Chef adoption, 2016 vs. 2019]

Last but not least, it’s interesting to visualize the progression of Infrastructure as Code, CI/CD, and NAPALM over the last few years. Increased interest in these topics confirms what we are witnessing every day with our customers.

[Figures: event-driven automation, CI/CD, and NAPALM trends, 2016 vs. 2019]

More graphs are available on GitHub.

NetDevOps Survey

If you’re interested in learning more about the NetDevOps Survey project, you can find the project on GitHub or join the conversation in the #netdevops_survey channel on the Network to Code Slack.

All the results are available on GitHub in different formats.

The plan is to start working on the 2020 Edition around August 2020 to have it ready to accept responses by October 2020.

How to help

If you’re interested in helping with the project or providing feedback, the best way to reach us is to open an issue on GitHub or join us in Slack.

At this point one of our biggest concerns is increasing the visibility of the project. The more participants we can get for the next edition, the deeper the insights and the better the project. Being community driven, we’ve been lacking marketing support to reach a broader and more diverse audience. Anything you can do to help here would go a long way.


Conclusion

Thanks for reading all the way to the end and for your interest in this project. If you are interested in diving deeper, the complete results of the 2019 Edition are available online.
I am personally looking forward to reading more analysis and hearing more perspectives on these results. I’m also looking forward to the next edition.

-Damien (@damgarros)


