Introducing Nautobot v2


Nautobot v2.0 was recently released, and we’re excited to share the new features and important changes it brings to the network automation community! We’re currently hard at work on the next release (v2.1), and by the end of 2023 we will provide some insight into what it and the rest of the v2 release train will bring.

IPAM Enhancements

  • Namespaces: Namespaces have been introduced to provide unambiguous uniqueness boundaries to the IPAM data model. Prefixes and VRFs are now assigned to Namespaces, which allows for a variety of data tracking use cases but primarily targets overlapping or duplicate address space needs (see the sketch after this list).
  • Prefix and IP Address Relationships: In Nautobot v2, the Prefix and IP Address hierarchy now relies on concrete parent/child relationships. In Nautobot v1, these relationships were calculated dynamically and often led to inconsistent or confusing hierarchies, especially with overlapping address space. This change ensures an always consistent data set and offers several performance improvements in the UI and API.
  • IP Address Interface Assignments: Stemming from the other data model changes, IP Addresses can now be assigned to multiple interfaces to more easily track topologies where this is required. In the past, there were special concessions for Anycast needs, but in v2 you can intuitively handle other situations such as duplicate loopback addresses or cookie-cutter network segment deployments.
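
To make the Namespace concept concrete, here is a minimal sketch of how overlapping prefixes might be created in separate Namespaces through the REST API. The endpoint paths, payload fields, instance URL, and token are illustrative assumptions based on our reading of the v2 API, so adjust them to your environment.

```python
import requests

NAUTOBOT_URL = "https://nautobot.example.com"   # hypothetical instance
HEADERS = {
    "Authorization": "Token 0123456789abcdef",  # hypothetical API token
    "Content-Type": "application/json",
}

def create(endpoint, payload):
    """POST to a Nautobot REST API list endpoint and return the created object."""
    resp = requests.post(f"{NAUTOBOT_URL}/api/{endpoint}/", json=payload, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

# Each Namespace is an independent uniqueness boundary.
ns_a = create("ipam/namespaces", {"name": "customer-a"})
ns_b = create("ipam/namespaces", {"name": "customer-b"})

# The same 10.0.0.0/24 network can now exist in both Namespaces without conflict.
for ns in (ns_a, ns_b):
    create("ipam/prefixes", {"prefix": "10.0.0.0/24", "namespace": ns["id"], "status": "Active"})
```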

Unified Role Model

The new release consolidates existing role models into a singular, streamlined approach, akin to the handling of Statuses. This change simplifies the management of user-defined roles across DCIM, IPAM, and other areas of the network data model. Like Statuses, users now define the roles they want and which models those roles apply to, in one central location.
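
As a rough illustration, defining a Role once and applying it to several models might look like the following ORM sketch (for example, from an nbshell session). The import path and fields are our assumptions from the v2 data model and should be verified against the release notes.

```python
from django.contrib.contenttypes.models import ContentType

from nautobot.dcim.models import Device
from nautobot.extras.models import Role  # assumed home of the unified Role model in v2
from nautobot.ipam.models import Prefix

# One Role definition, applied to multiple models in a single place.
role, _ = Role.objects.get_or_create(name="Core")
role.content_types.add(
    ContentType.objects.get_for_model(Device),
    ContentType.objects.get_for_model(Prefix),
)
```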

Location Model Consolidation

Nautobot v2 phases out the Site and Region models, integrating their functionalities into the Location model. This consolidation streamlines data management and reduces complexity. The Location model allows users to define a hierarchy of Location Types that is specific to their organization. Location Types also define what types of objects can be assigned to those parts of the hierarchy, such as Devices or Racks. The consolidated Location model allows for modeling physical, logical, or a mix of both types of entities. Examples might be tracking countries with assets, or defining logical areas in a data center DMZ.
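
Here is a minimal sketch of what such a hierarchy might look like via the ORM, assuming the v2 LocationType and Location models and a pre-existing "Active" Status; treat the field names as illustrative rather than authoritative.

```python
from django.contrib.contenttypes.models import ContentType

from nautobot.dcim.models import Device, Location, LocationType
from nautobot.extras.models import Status

active = Status.objects.get(name="Active")

# Hierarchy: Country -> City -> Data Center; only Data Centers may contain Devices.
country = LocationType.objects.create(name="Country")
city = LocationType.objects.create(name="City", parent=country)
dc = LocationType.objects.create(name="Data Center", parent=city)
dc.content_types.add(ContentType.objects.get_for_model(Device))

usa = Location.objects.create(name="United States", location_type=country, status=active)
nyc = Location.objects.create(name="New York", location_type=city, parent=usa, status=active)
Location.objects.create(name="NYC-DC-01", location_type=dc, parent=nyc, status=active)
```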

CSV Import/Export

Updates to CSV functionality include consistent headers across different modules and more relevant data for managing relationships, making data import/export tasks more intuitive and efficient. Nautobot v2.1 will move export operations (CSV and Export Templates) to a system-provided background Job, which means users can export large data sets without worrying that the operation might time out.
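
For example, pulling a CSV export of devices straight from the REST API might look like the sketch below; the format=csv query parameter reflects our understanding of the v2 CSV rendering, and the URL and token are placeholders.

```python
import requests

NAUTOBOT_URL = "https://nautobot.example.com"          # hypothetical instance
HEADERS = {"Authorization": "Token 0123456789abcdef"}  # hypothetical API token

# Ask the devices list endpoint to render CSV instead of JSON.
resp = requests.get(f"{NAUTOBOT_URL}/api/dcim/devices/", params={"format": "csv"}, headers=HEADERS)
resp.raise_for_status()

with open("devices.csv", "w", encoding="utf-8") as fh:
    fh.write(resp.text)
```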

REST API Improvements

  • Depth Control: This provides enhanced control over query depth in the REST API, which allows API consumers to specify the amount of data and context they need in a given request. This replaces the ?brief query parameter in the API.
  • Version Defaults: New Nautobot v2 installs will now default to the latest version of the REST API, which means consumers can always take advantage of new features by default. Administrators retain the ability to pin a specific version where required (see the sketch after this list).
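
Here is a small sketch of both features together: requesting one level of nested detail with ?depth and optionally pinning an API version via the Accept header. The instance URL and token are placeholders.

```python
import requests

resp = requests.get(
    "https://nautobot.example.com/api/dcim/devices/",  # hypothetical instance
    params={"depth": 1},  # depth=0 (the default) returns nested objects as minimal references
    headers={
        "Authorization": "Token 0123456789abcdef",      # hypothetical API token
        "Accept": "application/json; version=2.0",      # optional: pin a specific REST API version
    },
)
resp.raise_for_status()
print(resp.json()["results"][0]["location"]["name"])    # nested objects expanded one level deep
```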

Application Development APIs

  • Code Namespace Consolidation: The apps code namespace has been reorganized for better clarity and manageability. Most app development dependencies can now be imported from the nautobot.apps module.
  • Code Reorganization: As part of cleaning up the apps namespace, many items related to apps have been relocated within the core project, but most things app developers need can be found in the nautobot.apps module (see the sketch after this list).
  • Developer Documentation: We have made several improvements to the overall structure of the developer documentation and will continue to put significant effort into this area throughout the v2 release train and beyond.
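
For instance, a minimal app skeleton using the consolidated namespace might look like this; the import paths and required attributes are from our reading of the v2 app developer docs, so double-check them against the documentation for your release.

```python
# Common app development imports now live under nautobot.apps, e.g.:
#   from nautobot.apps.jobs import Job
#   from nautobot.apps.models import PrimaryModel
from nautobot.apps import NautobotAppConfig


class ExampleAppConfig(NautobotAppConfig):
    """Declaration for a hypothetical 'example_app'."""

    name = "example_app"
    verbose_name = "Example App"
    version = "0.1.0"
    author = "Example Author"
    description = "Illustrative app skeleton."
    base_url = "example-app"


config = ExampleAppConfig
```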

Jobs Updates

  • Logging Control: Logging statements in Jobs have changed to offer authors better flexibility and control. Most notably, logging now uses the standard Python logging facilities, with special arguments to specify whether a log message should be saved to the JobResult (and displayed in the UI) or simply logged to the console (see the sketch after this list).
  • Atomic Transaction Changes: In Nautobot v2, Jobs are no longer run inside an atomic transaction context manager. This means authors now have the choice to make their Job atomic or not, by implementing the context manager themselves. A common dry-run interface is provided, but it is up to the author to implement support, much like Ansible modules.
  • State Management: Similar to the atomic transaction changes, Job authors now have full control over the state of job executions. This means authors are now responsible for explicitly failing a Job, based on their desired logic.
  • File Output and Downloads: Nautobot v2.1 will introduce the capability to generate and output files from Jobs and allow users to download those files in the JobResult’s UI. This capability, built to support the export functionality explained earlier, will be offered to Job authors as an official API.
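
Putting several of these changes together, a Job in v2 might look roughly like the sketch below. The import path, the skip_db_logging logging argument, and the failure-by-exception pattern reflect our reading of the v2 Jobs documentation; verify them against your release before relying on them.

```python
from django.db import transaction

from nautobot.apps.jobs import Job, register_jobs


class AuditHostnames(Job):
    """Hypothetical Job illustrating v2 logging, transactions, and state handling."""

    class Meta:
        name = "Audit Hostnames"

    def run(self):
        problems = []

        # Standard Python logging; by default this is saved to the JobResult and shown in the UI.
        self.logger.info("Starting hostname audit")

        # Console-only message, skipping the database log entry.
        self.logger.debug("Verbose detail", extra={"skip_db_logging": True})

        # Jobs are no longer wrapped in an atomic transaction automatically; opt in if desired.
        with transaction.atomic():
            pass  # ... create/update objects here ...

        # Authors now control failure explicitly, e.g. by raising an exception.
        if problems:
            raise RuntimeError(f"Audit found {len(problems)} problem(s)")


register_jobs(AuditHostnames)
```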

Revamped User Interface

Nautobot v2.1 will give the Nautobot web UI a facelift to align with a more modern look and feel. We also hope you will enjoy the navigation’s move to a sidebar.

Looking Beyond 2.0

While we have touched on a few important features in the upcoming v2.1 release, the entire v2 release train will remain focused on several network data model enhancements and exciting new automation features. Some of the things we have planned include:

  • More device metadata like software and hardware family
  • Cloud Networking models
  • Device modules
  • Breakout cables
  • External Integrations configuration management
  • Jobs workflows

Conclusion

We hope you are as excited as we are about the future of Nautobot and invite you to try it out in our demo environments. demo.nautobot.com is the current stable release (v2.0, as of this publication) and next.demo.nautobot.com is the next release we are working on (v2.1, as of this publication).

-John Anderson (@lampwins)




Introducing the Nautobot Data Validation Engine Plugin


Data, data, and more data, but is the data good data?

Coinciding with the coming release of Nautobot v1.1.0, the team is excited to announce the public release of the Data Validation Engine Plugin! This Nautobot plugin offers data validation and enforcement rule logic that utilizes the custom data validators functionality within the platform. Data validators allow custom business logic to be enforced within Nautobot when changes to data occur. This gives organizations the ability to better integrate Nautobot into their existing ecosystems and to put guardrails around its use to ensure that network data housed within Nautobot can be trusted. One of Nautobot’s core missions is to serve as a single Source of Truth (SoT) for network data, from which automation solutions can be engineered. For network automation to be successful, the data that drives that automation must be trusted, and for the data to be trusted, there must be constraints within the data model to enforce its correctness.

The problem lies in the understanding that all organizations operate differently and each will have nuanced ways in which they manage and configure their networks that ultimately dictate constraints in their network SoT data. Something as simple as naming a device can quickly devolve into endless debate and 16 different organizational standards. As such, it is impossible for Nautobot to try to derive and implement any such direct data constraints that would apply to all users. Therefore, the custom data validators feature set exists, to empower end users to codify their business logic and be able to enforce it within Nautobot.

Data Validation Engine Plugin

So where, then, does the plugin fit? The custom data validators API is a raw Python interface that hooks into the data models’ clean() methods, allowing for ValidationErrors to be raised based on defined business logic when model instances are created or updated. If that doesn’t mean anything to you, the Data Validation Engine Plugin is for you!
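
For context, using the raw interface directly looks roughly like the sketch below (a hypothetical rule forcing lowercase device names); the class and method names follow our reading of the custom data validators documentation of the time, so confirm them against the docs for your version.

```python
from nautobot.extras.plugins import PluginCustomValidator


class DeviceNameValidator(PluginCustomValidator):
    """Hypothetical validator enforcing lowercase device names."""

    model = "dcim.device"

    def clean(self):
        device = self.context["object"]
        if device.name and not device.name.islower():
            # Raise a ValidationError against the 'name' field when the rule is violated.
            self.validation_error({"name": "Device names must be lowercase."})


custom_validators = [DeviceNameValidator]
```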

The plugin offers a Web UI (and REST API) for creating and managing no-code business rules for validation. In this initial release, two types of rules are supported: regular expression rules and min/max numeric rules.

Regular Expression Rules

Regular expressions define search patterns for matching text and are pervasive in the industry for a variety of use cases. They are often used to validate that text conforms to a pattern. Here, we use them to define rules that constrain text-based fields in Nautobot to user-defined expressions.
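
As a quick illustration of the kind of pattern a rule might hold (the hostname standard here is entirely hypothetical):

```python
import re

# Hypothetical standard: <site>-<role>-<two digits>, e.g. "nyc-leaf-01"
HOSTNAME_PATTERN = r"^[a-z]{3}-(leaf|spine|border)-\d{2}$"

for name in ("nyc-leaf-01", "NYC-LEAF-1", "den-spine-12"):
    print(name, bool(re.match(HOSTNAME_PATTERN, name)))
```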

[Screenshot: regex-rules-list]

Each rule defines the Nautobot model and the text-based field on that model to which the validation should apply. A custom error message can be defined; otherwise, a default message indicates that validation against the regular expression has failed. The rule may also be toggled on and off in real time in the Web UI.

[Screenshot: regex-rules-edit]

Once a rule has been created and enabled, it is enforced whenever an instance of the applicable model is created or updated in the Web UI or REST API. Here we can see what happens when a user attempts to create a device that does not conform to the hostname standard defined above.

[Screenshot: regex-rules-enforcement]

Min/Max Numeric Rules

While regular expression rules work on text-based fields, min/max rules work on numeric model fields.

[Screenshot: min-max-rules-list]

As the name implies, users have the ability to constrain the minimum and/or maximum values of a number-based field, and the rules are defined in the same way as regular expression rules.

[Screenshot: min-max-rules-edit]

As you might expect, enforcement works the same way. In this example, an organization wishes to ensure that no VLANs with an ID of 4000 or greater are deployed in their environment, so they create a min/max rule targeting the vid field on the VLAN model.

[Screenshot: min-max-rules-enforcement]

Install

The plugin is available as a Python package on PyPI and can be installed with pip, following the full instructions on GitHub.
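
A typical install follows the standard Nautobot plugin pattern, sketched below; confirm the exact package and module names against the instructions on GitHub.

```python
# Shell: pip install nautobot-data-validation-engine
#
# nautobot_config.py -- enable the plugin after installing the package:
PLUGINS = ["nautobot_data_validation_engine"]
```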

Final Thoughts

Data is key to network automation, and trusted, correct data is key to successful network automation. Enforcing your organization’s specific business logic constraints is an important step in building a network automation platform, and Nautobot offers the feature set to enable it. The Data Validation Engine Plugin goes one step further by providing a user-friendly, no-code solution to common data validation use cases. Give it a try and let us know what you think!




NFD21 – Network Automation Architecture


This past week I had the honor and privilege of traveling out to Santa Clara, CA with some of my esteemed colleagues to participate in Networking Field Day 21 on behalf of Network to Code. My contribution to our joint presentation was an overview of the various components that go into building a successful network automation platform. While this was only one section of our overall presentation, the delegates proved to be very engaged with these concepts. At Network to Code, we try not to focus on individual technologies, and instead focus on transformational ideas and workflows that bring value to our clients. To that end, this section dealt with the higher-level concepts and components that go into building a successful network automation platform. I want to call out a couple of sections and points here, but be sure to check out the full video from NFD21, which goes into even more detail and covers other architectural components such as Configuration Management, Orchestration, and Verification & Testing.

Human & Business Interaction

The tools and technologies that fall into this section deal with directly exposing interfaces to the automation that we build for the network. These are things like our IT Service Management (ITSM) ticketing applications, but also chat and communication platforms. ChatOps is a term used a lot in the industry and is continuing to pick up steam in network automation. Integrations with chat platforms allow a business to push network data and workflows into channels where conversations are already taking place. Perhaps more importantly, these avenues allow our network automation to be exposed to business stakeholders outside of the networking teams directly.

Data Management

If I were to pick a single section of my talk to call out as the most important, it would be this one. In terms of network automation, the industry is not talking about data enough. As with any other vertical in the tech space, data underpins everything we do, and network automation is no different. As the network automation community has grown, so has understanding of the concept of a Source of Truth (SoT): an authoritative system of record for a particular data domain. That last part is key, because we can (and realistically do) have multiple sources of truth that do not overlap. For example, our IPAM and DCIM can be different systems because they control different domains of data. This is valid as long as we do not have more than one IPAM or DCIM tool; that is what the phrase “Single Source of Truth” means, not that there is only one system in total.

Still though, having many different systems creates problems of its own. At first pass, each system in our network automation toolbox would potentially need to reference many different systems to get the data needed to perform automation. More importantly, this tightly couples the client to the data source and format. To combat this, we strive to implement an aggregation layer between the sources of truth and the systems consuming their data. This aggregation serves a couple of important use cases.

First, it creates a single pane of glass for accessing all of the data from our various authoritative systems, thus allowing our tooling to reference a single place. Second, the aggregator implements a data translation layer which transforms the data coming from each source of truth into an abstracted data model. This data model intentionally strips away any features of the data or its structure that make it identifiable with any vendor or source implementation.

In doing so, we segue into the third point, which is that the aggregator interacts with the various source of truth systems in a pluggable way. By implementing an adapter, the aggregator understands the data and format coming from an individual source of truth and how to transform that data to conform to the abstracted data model. This allows the aggregator to easily support different source of truth platforms, such that they can be swapped out at any time. If you want to switch IPAM vendors, all you have to do is create (or engage NTC to create) an adapter for the aggregator that understands what the data looks like coming out of the new IPAM.
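
To illustrate the idea (not any specific tool), an adapter contract might be sketched like this, with a vendor-neutral record type and one hypothetical IPAM adapter:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class PrefixRecord:
    """Vendor-neutral prefix representation used by the aggregation layer."""

    network: str
    description: str


class SoTAdapter(ABC):
    """Adapter contract: each source of truth implements its own translation."""

    @abstractmethod
    def load_prefixes(self) -> List[PrefixRecord]:
        ...


class VendorIPAMAdapter(SoTAdapter):
    """Hypothetical adapter for one IPAM vendor's API payload shape."""

    def __init__(self, client):
        self.client = client

    def load_prefixes(self) -> List[PrefixRecord]:
        # Translate vendor-specific fields into the abstracted data model.
        return [
            PrefixRecord(network=row["cidr"], description=row.get("comment", ""))
            for row in self.client.get("/prefixes")
        ]
```

Swapping IPAM vendors then means writing a new adapter, not touching the systems that consume the abstracted data.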

Monitoring, Alerting, and Visibility

It may seem a bit odd to be talking about monitoring and alerting in the context of network automation, but there is more to what we do than just configuration management. In fact, the core of this topic is centered around the concept of “closed loop automation,” or manufacturing a feedback loop into the automation platform that we build. In the video, you will hear me talk about the automation platform as a stack: on one side we travel down the stack to the actual network infrastructure, and on the other side, events come out of the network and travel back up the stack. Those events come in the traditional forms of SNMP polling, syslog messages, etc., but they can also come in newer forms such as time series metrics and streaming telemetry. We have also revisited the storage and processing layer to implement more modern time series databases, which allow data points to be tagged with metadata labels and open the door to more intelligent querying and visualization. Speaking of visualization, we want to empower business stakeholders with network data, and we want to do it in a self-service fashion through modern dashboarding tools. Again, this is a case of bringing the network data and workflows to the business.

Back to the storage engine, we need to get those network events out of their monitoring silos, and fed back into the automation platform. We do this with the aid of rules processing engines which assert business logic based on the metadata which is attached to the collected data points. Once the event streams have been plumbed back into our network automation platform, our feedback loop begins to take shape. We can now build automated remediation workflows which allow engineers to focus on actual engineering and architecture, and less on troubleshooting and remediating known, repeated events. In situations where human involvement is needed, the platform can at least collect important context from the network as a form of first level triage, obviating the need for manual data collection and reducing the overall time necessary to respond to network events.
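
As a toy example of label-based rules processing (the rule, labels, and action names are invented for illustration):

```python
def rule_matches(datapoint: dict, rule: dict) -> bool:
    """Return True when every label selector in the rule matches the datapoint's metadata."""
    labels = datapoint["labels"]
    return all(labels.get(key) == value for key, value in rule["match_labels"].items())


# Hypothetical rule: BGP-neighbor-down events on edge routers trigger automated remediation.
rule = {
    "match_labels": {"event": "bgp_neighbor_down", "role": "edge"},
    "action": "run_remediation_workflow",
}

datapoint = {"labels": {"event": "bgp_neighbor_down", "role": "edge", "device": "nyc-edge-01"}}

if rule_matches(datapoint, rule):
    print(f"Triggering {rule['action']} for {datapoint['labels']['device']}")
```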


Conclusion

The final topic I want to bring up is the idea that no one network automation platform will ever suit the needs of all networks and organizations. The more important takeaways from this talk are the various components that go into architecting the platform that best suits your needs. It is true that company A may be able to purchase an off the shelf automation product from a vendor and be perfectly happy, while company B may require an entirely custom solution over a longer evolution timeline. In all cases, Network to Code is here to provide training, enablement, and implementation services.

-John Anderson (@lampwins)


