Introducing the New Device Onboarding App


As Network Automation becomes more popular and companies decide on Nautobot to fill the Network Source of Truth (NSoT) component within their reference architecture, the next crucial problem to solve is data population. This gives organizations the quick automation wins that upper management wants to see! The starting point of population for most organizations is “Devices.” Up until now that process was probably a mix of manual population, CSV imports, nautobot-app-device-onboarding, and most likely the Python library “network-importer” to extend that data further. All these methods have their own pros and cons, but one of the most common asks was to make onboarding devices to Nautobot easier and more flexible. Introducing the Device Onboarding app 4.0!

This revamp of the Onboarding app exposes two new SSoT jobs to simplify the device onboarding process. The first job will onboard basic device information from an IP address. The second job extends the data gathered by pulling in Interface data including VLANs, VRFs, IP addresses (creating prefixes if needed), descriptions, and more! Onboarding 4.0 retains the original implementation for users who are making use of that framework, so you can try out the new features while retaining your existing workflow. I will discuss the new release in more detail throughout this blog post.

Why?

Populating a device inventory into Nautobot takes time. The time commitment is multiplied by the need for a number of different methods, applications, and libraries just to get a decent level of metadata assigned to devices. Onboarding 4.0 addresses these and additional concerns as outlined below.

  • The original OnboardingTask job in the plugin was capable of getting only basic device data into Nautobot.
  • Setting up network-importer as an external program felt disjointed and required additional infrastructure resources.
    • The dependency on Batfish was a challenge, as it required Batfish, and its own dependency on Docker, to be runnable in the environment.
    • The diffsync dependency didn’t have access to many of the new “contrib” features that nautobot-app-ssot exposes.
  • Adding new support for additional operating systems and data was difficult.
    • Extending an existing platform’s capabilities required additional Python modules to be installed into the environment.
      • The same challenge existed for adding new platform support.
  • The original Onboarding extension framework required a custom app and/or Python library to be available in the environment, which, depending on the deployment method used, can result in delays and complications.

What About the Original Extension Framework?

The original OnboardingTask job and its extension framework will remain available in Onboarding 4.0. We understand that this application has been around since the release of Nautobot, and many users have invested resources into extending the application using the original framework. A deprecation of the OnboardingTask job is planned for the future, but for now the only change users of the original extension framework need to be aware of is that this job is now hidden by default.

To find the hidden job, navigate to Jobs -> Jobs, click the Filter button, and select “hidden=Yes”.

Revealing the hidden job will allow you to run it and edit job attributes as usual.

First, enable the job.

Then, if you prefer, un-hide the job permanently by overriding its default “hidden” property.

The New SSoT Jobs Explained

The biggest change implemented in the 4.0 release is the use of the Single Source of Truth (SSoT) framework. The SSoT app (nautobot-app-ssot) uses a combination of diffsync, SSoT contrib, and other tools to diff inputs from disparate data sources and then sync data between those systems. This allows us to not only onboard device data but compare and update as needed. There are two new SSoT jobs to accomplish this.

  • Sync devices from network – Mimics what the original onboarding task did, including creation of the device(s), serial number, management IP, and management interface.
  • Sync data from network – Mimics what the old NTC library network-importer did: syncs interfaces and their MTU, description, IP address, type, status, etc. There is a toggle option to sync VRFs and add them to interfaces, as well as a toggle for VLANs that syncs VLANs and adds tagged/untagged VLANs to ports.

How It Works

This section will describe the newer SSoT jobs that this App exposes and how they work.

Frameworks in Use

  • Nautobot SSoT – Utilizing the existing Nautobot SSoT framework allows a common pattern to be reused and offers a path forward to add additional support and features.
  • Nautobot App Nornir – Utilized for Nornir Inventory plugins for Nautobot (specifically for Sync Network Data Job).
  • Nornir Netmiko – Used to execute commands and return results.
  • jdiff – Used to simplify parsing required data fields out of command outputs returned from command parser libraries like TextFSM, specifically via its extract_data_from_json method (see the sketch after this list).
  • Parsers – Initially NTC Templates via textFSM, but support for pyATS, TTP, etc. is planned for the future.
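
To make the jdiff piece more concrete, here is a minimal sketch of the extraction step; the parsed output and jpath below are illustrative only, not taken from the app’s actual code:

from jdiff import extract_data_from_json

# Structured data a parser such as TextFSM might return for "show version" (illustrative)
parsed_output = [{"hostname": "nyc-leaf-01", "serial_number": "ABC12345", "version": "17.3.5"}]

# The jpath would normally come from the platform YAML definition
hostnames = extract_data_from_json(parsed_output, "[*].hostname")
print(hostnames)  # e.g., ["nyc-leaf-01"]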

YAML Definition DSL

The key extensibility feature in the new release is the ability to add new platform support by creating a single YAML definition file. The application comes with some logical defaults, but these can be overloaded and new platforms can be added via Git repositories.

File Format

Let’s review a few of the components of the file:

  • ssot job name – Name of the job to define the commands and metadata needed for that job. (choices: sync_devices or sync_network_data)
  • root key data name – The name of the data attribute to populate (e.g., hostname); it is fully defined in the schema definition.
  • commands – List of commands to execute in order to get the required data.
  • command – Actual show command to execute.
  • parser – Whether to use a parser (TextFSM, pyATS, TTP, etc.). Alternatively, none can be used if the platform supports some other method to return structured data, e.g., | display json or an equivalent.
  • jpath – The JMESPath (specifically jdiff’s implementation) to extract the data from the parsed JSON returned from parser.
  • post_processor – Jinja2-capable code to further transform the returned data post jpath extraction.
  • iterable_type – An optional value to enforce type casting.

As an example:

---
sync_devices:
  hostname:
    commands:
      - command: "show version"
        parser: "textfsm"
        jpath: "[*].hostname"
        post_processor: ""
..omitted..

How the SSoT Sync Devices From Network Job Works

  1. The job is executed with inputs selected.
    • List of comma-separated IP/DNS names is provided.
    • Other required fields are selected in the job inputs form.
  2. The SSoT framework loads the Nautobot adapter information.
  3. The SSoT framework’s network adapter load() method calls Nornir functionality.
    • The job input data is passed to the InitNornir initializer. Because we only have basic information, a custom EmptyInventory Nornir inventory plugin is packaged with the App; it is initialized by InitNornir but results in a valid inventory that is simply empty.
    • Since Platform information may need to be auto-detected before adding a Nornir Host object to the inventory, a create_inventory function is executed that uses Netmiko’s SSH autodetection to try to determine the platform so it can be injected into the “Host” object.
    • Finally, all the platform-specific commands to run, plus all the jpath and post_processor information loaded from the platform-specific YAML files, must be injected into the Nornir data object to be accessible later in the extract/transform functions.
  4. Within the code block of a Nornir with_processor context manager, call the netmiko_send_commands Nornir task.
    • Access the loaded platform-specific YAML data and deduplicate commands to avoid running the same command multiple times; e.g., multiple required data attributes come from the same Show command.
  5. Utilize the native Nornir Processor functionality, overloading task_instance_completed(), to run command outputs through extract and transformation functions (a simplified sketch follows this list).
    • This essentially is our “ET” portion of an “ETL” (Extract, Transform, Load) process.
    • Next, the JSON result from the show command, after the parser (e.g., TextFSM) executes, gets run through the jdiff function extract_data_from_json() with the data and the jpath from the YAML file definition.
    • Finally, an optional post_processor Jinja2-capable execution can further transform the data for that command before passing it to finish the SSoT synchronization.
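
To tie steps 4 and 5 together, below is a simplified, illustrative sketch of what such an extract/transform processor hook can look like. The class, fields, and rule structure are assumptions for illustration, not the app’s actual implementation:

from jinja2 import Environment
from jdiff import extract_data_from_json


class ExtractTransformProcessor:
    """Illustrative Nornir processor hook: the 'ET' of ETL for one command's output."""

    def __init__(self, field, rules):
        # rules is a dict loaded from the platform YAML, e.g.:
        # {"jpath": "[*].hostname", "post_processor": "{{ obj[0] | upper }}"}
        self.field = field
        self.rules = rules
        self.extracted = {}

    # A real Nornir processor also implements the remaining Processor hooks (no-ops are fine).
    def task_instance_completed(self, task, host, result):
        parsed = result.result  # structured data returned by the parser (e.g., TextFSM)
        value = extract_data_from_json(parsed, self.rules["jpath"])
        if self.rules.get("post_processor"):
            value = Environment().from_string(self.rules["post_processor"]).render(obj=value)
        self.extracted.setdefault(host.name, {})[self.field] = value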

How the SSoT Sync Network Data From Network Job Works

For those looking to deep dive into the technical details or troubleshooting, here is how it works:

  1. The job is executed with inputs selected.
    • One or more devices are selected.
    • Other required fields are selected in the job inputs form.
    • Toggle certain metadata booleans to True if you want more data synced.
  2. The SSoT framework loads the Nautobot adapter information.
  3. The SSoT framework’s network adapter load() method calls Nornir functionality.
    • The job input data is passed to the InitNornir initializer. Because devices now exist in Nautobot, we use the NautobotORMInventory Nornir inventory plugin, which comes from nautobot-plugin-nornir.
    • Finally, all the platform-specific commands to run plus all the jpath post_processor information loaded from the platform-specific YAML files must be injected into the Nornir data object to be accessible later in the extract/transform functions.
  4. Within the code block of a Nornir with_processor context manager, call the netmiko_send_commands Nornir task.
    • Access the loaded platform-specific YAML data and deduplicate commands to avoid running the same command multiple times; e.g., multiple required data attributes come from the same Show command.
  5. Utilize native Nornir Processor to overload functionality on subtask_instance_completed() to run command outputs through extract and transformation functions.
    • This essentially is our “ET” portion of an “ETL” (Extract, Transform, Load) process.
    • Next, the JSON result from the show command, after the parser (e.g., TextFSM) executes, gets run through the jdiff function extract_data_from_json() with the data and the jpath from the YAML file definition.
    • Finally, an optional post_processor Jinja2-capable execution can further transform the data for that command before passing it to finish the SSoT synchronization.

Extending Platform Support

Adding support can be done by adding a file that parses data into the proper schema. There is a new Git datasource exposed that allows the included YAML files to be overwritten or new platform support to be added for maximum flexibility.

For simplicity, a merge was not implemented for the Git repository functionality. Any file loaded in from a Git repo is preferred. If a file in the repo exists that matches what the app exposes by default, e.g., cisco_ios.yml, the entire file from the repo becomes preferred. So keep in mind if you’re going to overload a platform exposed by the app, you must overload the full file! No merge will happen between two files that are named the same. Additionally, Git can be used to add new support. For example, if you have Aruba devices in your environment, and you want to add that functionality to device onboarding, this can be done with a custom YAML file. Simply create a Git repo and create the YAML file (name it aruba_osswitch.yml), and you’ve just added support for Aruba in your environment.

Files must be named <network_driver_name>.yml; see the configured network driver choices in the Nautobot UI under a platform definition.
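
As a rough illustration only (the command, parser, and jpath values here are placeholders, not a tested Aruba definition), such a file could mirror the structure of the example shown earlier:

---
sync_devices:
  hostname:
    commands:
      - command: "show system"
        parser: "textfsm"
        jpath: "[*].name"
        post_processor: ""
..omitted..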

Even better if you follow that up with a PR into the main application!


Conclusion

As the device onboarding application continues to mature, we expect to add further platform support to the defaults the app exposes. We hope the new DSL- and YAML-based extension framework makes it quick and easy to add support and load it in via Git.

Happy automating!

-Jeff, David, Susan




Network Automation Architecture – Automation Engine


In the previous blogs of this series about Network Automation Architecture, 1, 2, and 3, we presented the key architectural components and their respective functions. This blog expands on the role of the Automation Engine: the component that contains all the tasks that interact with the network to change its state via configuration management processes.

Introduction

The automation engine is flexible and can be built on top of many different languages and frameworks. A few examples are Python, Golang, and Rust; these can be further broken down into specific frameworks, such as Ansible, Salt, Nornir, NAPALM, and Netmiko on the Python side, Terraform for cloud, Scrapligo in the case of Golang, or even validation-focused tools like Batfish. The automation engine attempts to achieve tasks such as configuration backups/rendering/compliance/provisioning and Zero Touch Provisioning (ZTP), as well as NetDevOps principles such as Continuous Integration/Continuous Delivery (CI/CD).

The automation engine is the component that manages the network state and performs network tasks. Because this component will actively make changes to the network state, connectivity between the automation engine and in-scope devices should be permitted by security policy.

Automation Engine

The automation engine has some challenges to solve; interacting with network devices is complicated. Command line interfaces (CLIs) have been the main interface for network engineers to modify and manage network equipment. More recent trends involve an API to interact with specific devices, e.g., Arista eAPI. Alternatively, some vendors are moving toward element managers that offer APIs and handle device connections within their own frameworks. Finally, YANG via NETCONF/RESTCONF/gNMI was developed to attempt to solve vendor-independent automation, but it is still working toward gaining mass adoption.

CLIs were not built for automation, but over the years many projects have been built and open sourced to help solve these problems. Some of these were mentioned in the introduction; for the sake of clarity, Nornir, Scrapli(go), NAPALM, and Netmiko are all examples of frameworks that interact with CLIs and automate these tasks.

These projects generally require a few pieces of metadata:

  • Device platform – which is used to map the platform (or OS) to the network driver for the given framework in use.
  • Device credentials – how the automation engine authenticates to the network device.
  • Management IP address – IP address/FQDN that the automation engine can use to reach a network device.

Note: These are the bare minimum attributes, and they should be stored within the Source of Truth (SoT) component. The automation engine should have a method to query for the information.
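
As a minimal sketch of how those three attributes map onto a device connection (here using Netmiko; the values are placeholders and would ideally be queried from the SoT rather than hardcoded):

from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",  # device platform mapped to the framework's driver
    "host": "10.0.0.10",         # management IP address / FQDN
    "username": "automation",    # device credentials
    "password": "change-me",     # in practice, pulled from a secrets manager
}

with ConnectHandler(**device) as conn:
    print(conn.send_command("show version"))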

While APIs have helped aid the adoption of automation and made interaction with these devices simpler, each vendor’s API is implemented differently. The automation engine must provide a flexible interface that is capable of manipulating parameters and reading multiple returned data formats, e.g., XML and JSON.
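
For example, a RESTCONF-style request returning JSON might look like the sketch below; the URL, credentials, and data path are placeholders, and real payloads vary by vendor:

import requests

url = "https://10.0.0.10/restconf/data/ietf-interfaces:interfaces"
headers = {"Accept": "application/yang-data+json"}

response = requests.get(url, headers=headers, auth=("automation", "change-me"), verify=False)
response.raise_for_status()

# Walk the returned JSON structure (per the ietf-interfaces YANG model)
for interface in response.json()["ietf-interfaces:interfaces"]["interface"]:
    print(interface["name"], interface.get("enabled"))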

Main Challenges

While considering configuration management and the automation engine in general, some of the key challenges are listed below. This is not an exhaustive list.

  • Configuration Management:
    • Configuration Rendering: A few topics to consider: full configuration rendering, partial configuration rendering, and secrets interpolation.
      • Secrets Management: How do you pull secrets from an external secrets management system, Ansible Vault, or another source?
    • Configuration Remediation: It’s one thing to do a diff and understand what is extra and what is missing. (As an example, this is solved in Nautobot Golden Config App.) It’s a completely different challenge to remediate those configurations.
    • Configuration Deployment: The process of deploying a rendered configuration onto an element.
  • Configuration Provisioning: Creating objects, such as creating an EC2 instance, Network Functions Virtualization (NFV) Appliance, or network service (such as an AWS IGW).
  • System Load Distribution:
    • What security posture do we need to adhere to?
    • Only certain subnets can speak to management networks?
    • Only certain communication protocols are allowed?
  • Operational Actions (a small Netmiko sketch follows this list):
    • Rebooting a device.
    • Resetting an IPsec tunnel.
    • Bouncing an interface.
    • Bouncing a BGP neighbor.
  • Operational Compliance and Checks: What operational data should be collected, and how should the data be transformed?
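
As a small example of an operational action, bouncing an interface with Netmiko might look like the sketch below; the device details and interface name are placeholders:

from netmiko import ConnectHandler

device = {"device_type": "cisco_ios", "host": "10.0.0.10", "username": "automation", "password": "change-me"}

with ConnectHandler(**device) as conn:
    # Push the config lines to bounce the interface and print the device's response
    output = conn.send_config_set(["interface GigabitEthernet0/1", "shutdown", "no shutdown"])
    print(output)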

For some of the more advanced topics mentioned above, the next section provides additional details and considerations.

Challenges Clarified

Let’s dive deeper into the nuances of some of these topics.

  • Full vs Partial configuration deployments: This challenge may seem simple but it’s actually quite complex. Before you can push a configuration you must be able to render the configuration; before you can render it you must have the source of truth data. This is truly a crawl, walk, run situation. What are some things you need to consider?
    • Merge vs. Replace
      • Replace at what level? A full config replace is generally easier than a partial configuration merge; Junos allows stanza-level replacements, but most OSes do not.
    • How to push a subset of the configuration. Identify configuration snippets that are least impactful, but provide a great Return on Investment (ROI).
    • How to validate a configuration deployment via a CI/CD pipeline (Fail Fast).
      • This is also an iterative approach. Start simple and grow the complexity.
      • Check out Batfish.
  • Secrets interpolation: Most vendors have configuration lines that require credential/secret values to be populated. The rendering of configuration by the automation engine must be flexible and secure enough to do this without exposing the secrets to unintended audiences.
  • Remediating a configuration: Remediation of a configuration based on a diff of actual and intended state comes with some business requirements around what the business’s confidence level is, e.g., remediating the configuration completely (including removing “extras”) vs just adding the “missing” configuration elements.
    • An engine like hier_config can provide a remediation plan; a deliberately naive, line-based sketch of the underlying idea follows this list.
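
To illustrate the “missing vs. extra” idea in its most naive form, the sketch below does a flat, line-based comparison; real remediation engines such as hier_config are hierarchy-aware and handle ordering, negation, and dependencies:

def naive_remediation(running: str, intended: str) -> dict:
    """Flat line comparison; real remediation must respect configuration hierarchy and ordering."""
    running_lines = {line.strip() for line in running.splitlines() if line.strip()}
    intended_lines = {line.strip() for line in intended.splitlines() if line.strip()}
    return {
        "missing": sorted(intended_lines - running_lines),  # lines to add
        "extra": sorted(running_lines - intended_lines),    # removal is a business decision
    }


print(naive_remediation("ntp server 10.0.0.1\nlogging host 10.9.9.9", "ntp server 10.0.0.1\nntp server 10.0.0.2"))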

As you can see from the challenges above, there are many questions you must answer. Once these questions are answered, it becomes much easier to try to choose an automation engine that fits your organization’s goals.

Choosing an Automation Engine

One of the biggest challenges with the Automation Engine component of this architecture is picking the right tool(s) for the job. There is no shortage of open source tools that fit this component of the architecture; furthermore, there is an ever-expanding catalog of closed source / vendor specific tools that aim to accomplish the tasks.

This is an interesting topic. Throughout the years, NTC has engaged with many customers. Even customers at the most basic entry into their network automation journey are already using the automation engine element. A simple one-off script that goes and collects data off of a device fits this component. Since this component in most cases is one of the first to be selected, it’s not always easy convincing a client that other options exist.

For these and many other reasons, we’ve found that most of the automation engine options available can achieve great results if you have the rest of the automation architecture in place. Selecting the right engine for your business comes down to skill set, previous adoption, willingness to learn, and in some cases having product support, which many large enterprises rely on today.

Regardless of the application/framework in use, the automation engine communicates with network devices. And as mentioned in Network Automation Architecture – The Components, it’s important to understand the automation engine not as an isolated component, but as the final executor of the outcome of the other components.

Furthermore, there will be situations where a single automation engine does not meet the business requirements. In these circumstances multiple automation engines can be used, but effort should be made to keep the number of different automation engines to a minimum; otherwise, the learning curve and skill set required to operate and maintain this component become too complex and lead to slowed adoption.

Some of the characteristics to consider are mentioned below:

  • Does the tool have an API?
    • Most Automation Engines have an API, but is it robust? Is it RESTful, and does it support all the CRUD operations? Are there other types of APIs, like GraphQL?
  • Does the tool integrate with the SoT?
  • Does the tool have a User Interface (UI)?
  • Is the tool flexible enough to accomplish RBAC requirements for the customer?
  • Credential Management
  • The ability to create rich and complex Forms
  • Job Isolation
  • Network Device Support
  • Secrets Integration
  • Scheduler
  • Traceability / Logging

Advanced Concepts

One of the biggest challenges related to the automation engine is the connectivity conundrum that exists in enterprises. The security of networks continues to grow in complexity; the management control plane of network devices is no different. In many cases centralized applications aren’t allowed to connect to network devices. Whether that is due to DMZ design, Geo location issues, or mergers and acquisitions, the automation engine must be flexible enough to run inside those pods.

Here are some of the existing solutions to this problem.

Automation Engine    Solution
Ansible              Execution Environments
Python               Celery, Redis (RQ), Taskmaster
Saltstack            Master/Minion

Closing

To close out this blog, I want to show what a release process with validation steps might look like in a high-level diagram. This diagram came directly from one of the webinars Ken Celenza and I did, Community Webinar: Using Batfish for Network & Routing Verification.


Conclusion

Keep an eye out for the remaining parts of this series!

Cheers, -Jeff




Developing Batfish – Converting Config Text into Structured Data (Part 3)


This is part 3 of a blog series to help learn how to contribute to Batfish.

The previous posts in this series:

In this post I will be covering how to take the parsed data and apply that “text” data into a vendor-specific (VS) datamodel. I will also demonstrate how to take the extraction test we created in part 2 and extend it to test the extraction logic.

Basic Steps

  1. Create or Enhance a Datamodel
  2. Extract Text to Datamodel
  3. Add Extraction Testing

What Is the Vendor-Specific Datamodel?

The title of this blog post is Converting Config Text into Structured Data. Throughout this blog post I will be talking about the vendor-specific (VS) datamodel, which is the schema for the structured data. Modeling data is complicated; fortunately, thanks to the maturity of the Batfish project, an extensive number of datamodels already exist in the source code, which helps with enhancing the datamodel I need in order to extract the route target (RT) data for EVPN/VXLAN.

The VS datamodel is used to map/model a feature based on how a specific vendor has implemented a technology. These datamodels tend to line up closely with how that vendor’s configuration stanzas line up for that technology.

As far as terminology, within Batfish I’ve noticed the names datamodel and representation are used somewhat freely and interchangeably. I will stick to datamodel throughout the blog post to avoid confusion.

Create or Enhance a Datamodel

As I finished up part 2 of this blog series, we had updated the parsing tree to support three new commands. We added simple parsing Testconfig files to ensure that ANTLR could successfully parse the new commands. In this post I will build upon what we did previously. I will start with extending the switch-options datamodel to support the features we added parsing for. To rehash, the commands we added parsing for are below:

set switch-options vrf-target target:65320:7999999
set switch-options vrf-target auto
set switch-options vrf-target import target:65320:7999999
set switch-options vrf-target export target:65320:7999999

The current switch-options model is composed of:

public class SwitchOptions implements Serializable {

  private String _vtepSourceInterface;
  private RouteDistinguisher _routeDistinguisher;

  public String getVtepSourceInterface() {
    return _vtepSourceInterface;
  }

  public RouteDistinguisher getRouteDistinguisher() {
    return _routeDistinguisher;
  }

  public void setVtepSourceInterface(String vtepSourceInterface) {
    _vtepSourceInterface = vtepSourceInterface;
  }

  public void setRouteDistinguisher(RouteDistinguisher routeDistinguisher) {
    _routeDistinguisher = routeDistinguisher;
  }
}

This file is located in the representation directory.

The datamodel describes what Batfish supports within the Junos switch-options configuration stanza. I need to extend this to add support for vrf-target. To do this, I need to define the type of the data and create getters and setters.

The next step is to identify how to use this data and the best way to represent the data. The easiest of these would be the auto. This command will either be on or off. If we parse the configuration and we have the ANTLR token for auto, we can set that in the datamodel as true; otherwise we would have it set to false and would expect to see one of the other commands. The other commands would be of type ExtendedCommunity, which is already defined as part of the Batfish vendor-independent datamodel.

In this command stanza either the auto keyword or the community can be provided. For this I will use a representation called ExtendedCommunityOrAuto, which has already been created for this exact scenario in the Cisco NX-OS representations.

Enhance the Datamodel

Before I can extract the text data from the parsing tree and apply it to a model, the datamodel must be updated to support the additional feature set. For this example I will be adding support for vrf-target and the three different options that are possible. The result of the update is shown below:

public class SwitchOptions implements Serializable {

  private String _vtepSourceInterface;
  private RouteDistinguisher _routeDistinguisher;
  private ExtendedCommunityOrAuto _vrfTargetCommunityorAuto;
  private ExtendedCommunity _vrfTargetImport;
  private ExtendedCommunity _vrfTargetExport;

  public String getVtepSourceInterface() {
    return _vtepSourceInterface;
  }

  public RouteDistinguisher getRouteDistinguisher() {
    return _routeDistinguisher;
  }

  public ExtendedCommunityOrAuto getVrfTargetCommunityorAuto() {
    return _vrfTargetCommunityorAuto;
  }

  public ExtendedCommunity getVrfTargetImport() {
    return _vrfTargetImport;
  }

  public ExtendedCommunity getVrfTargetExport() {
    return _vrfTargetExport;
  }

  public void setVtepSourceInterface(String vtepSourceInterface) {
    _vtepSourceInterface = vtepSourceInterface;
  }

  public void setRouteDistinguisher(RouteDistinguisher routeDistinguisher) {
    _routeDistinguisher = routeDistinguisher;
  }

  public void setVrfTargetCommunityorAuto(ExtendedCommunityOrAuto vrfTargetCommunityorAuto) {
    _vrfTargetCommunityorAuto = vrfTargetCommunityorAuto;
  }

  public void setVrfTargetImport(ExtendedCommunity vrfTargetImport) {
    _vrfTargetImport = vrfTargetImport;
  }

  public void setVrfTargetExport(ExtendedCommunity vrfTargetExport) {
    _vrfTargetExport = vrfTargetExport;
  }
}

In this example I have added a getter and a setter for each new dataset. This gives me the ability to extract the data from the configuration and instantiate the switch-options vendor-specific object. One important thing to notice is the use of ExtendedCommunityOrAuto. This did not exist in the Junos representation; since it already existed in the Cisco NX-OS representation, I reused the same code.

This representation is shown below:

public final class ExtendedCommunityOrAuto implements Serializable {

  private static final ExtendedCommunityOrAuto AUTO = new ExtendedCommunityOrAuto(null);

  public static ExtendedCommunityOrAuto auto() {
    return AUTO;
  }

  public static ExtendedCommunityOrAuto of(@Nonnull ExtendedCommunity extendedCommunity) {
    return new ExtendedCommunityOrAuto(extendedCommunity);
  }

  public boolean isAuto() {
    return _extendedCommunity == null;
  }

  @Nullable
  public ExtendedCommunity getExtendedCommunity() {
    return _extendedCommunity;
  }

  //////////////////////////////////////////
  ///// Private implementation details /////
  //////////////////////////////////////////

  private ExtendedCommunityOrAuto(@Nullable ExtendedCommunity ec) {
    _extendedCommunity = ec;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    } else if (!(o instanceof ExtendedCommunityOrAuto)) {
      return false;
    }
    ExtendedCommunityOrAuto that = (ExtendedCommunityOrAuto) o;
    return Objects.equals(_extendedCommunity, that._extendedCommunity);
  }

  @Override
  public int hashCode() {
    return Objects.hashCode(_extendedCommunity);
  }

  @Nullable private final ExtendedCommunity _extendedCommunity;
}

This allows the VS model to have one field; setting it to auto or to a specific community implicitly clears the other, because both are expressed through the value of that single field.

Extract Text to Datamodel

In this section I will explain how to extract data from the parsing tree and assign it to the vendor-specific datamodel. This work is completed within the ConfigurationBuilder.java file.

ConfigurationBuilder.java is located in the grammar directory.

Note: For hierarchical configurations (Junos OS and PanOS) it’s ConfigurationBuilder.java. For most other vendors it’s actually <vendor>ControlPlaneExtractor.java. In order to see this, visit the <vendor>ControlPlaneExtractor.java (CPE) file within the grammar directory mentioned above.

The first extraction I’m going to focus on is the vrf-target auto command. In order to extract this command I need to create a Java method that takes the parser context as an input, and I will extract and analyze the data in order to assign it to the datamodel I enhanced earlier.

The first step is to import the parsing tree context.

import org.batfish.grammar.flatjuniper.FlatJuniperParser.Sovt_autoContext;

Next we can create an enter or an exit rule to extract and assign the data.

@Override
public void exitSovt_auto(Sovt_autoContext ctx) {
  if (ctx.getText() != null) {
    _currentLogicalSystem.getOrInitSwitchOptions().setVrfTargetCommunityorAuto(ExtendedCommunityOrAuto.auto());
  }
}

In this method I am accessing the Sovt_autoContext from the parser. If the ctx variable’s getText() method is not null, I assign the value of VrfTargetCommunityorAuto in the switch-options model to auto, meaning that feature is turned on.

This is something that confused me when I was initially learning how the conversions worked. I had to sit back and remember that in cases like set switch-options vrf-target auto, it will either exist in the configuration or it won’t; therefore, the parsing context would be null when it does not exist in the configuration.

It is also worth mentioning that this is an exit rule, which is the most common. If some processing is needed (e.g., set variable values) before the child rules are processed, an enter rule can be used.

To expand on an enter rule, imagine a similar configuration stanza in Junos, which is set protocols evpn vni-options vni 11009 vrf-target target:65320:11009. In this case I’d need to set a variable for the VNI that is being configured so that I can reference it later when I need to assign the route target for the VNI. This is an example where an enter rule could be used to assign the VNI as a variable that the child rules can use.

These concepts are followed in a similar manner for each extraction you need. To keep this post as terse as possible, I will not cover every extraction for the remaining commands; however, below is an example of the extraction created for the set switch-options vrf-target target:65320:7999999 command.

The interesting data from this command is the route target community. In order to extract that, I have the following method:

import org.batfish.grammar.flatjuniper.FlatJuniperParser.Sovt_communityContext;

@Override
public void exitSovt_community(Sovt_communityContext ctx) {
  if (ctx.extended_community() != null) {
    _currentLogicalSystem
        .getOrInitSwitchOptions()
        .setVrfTargetCommunityorAuto(ExtendedCommunityOrAuto.of(ExtendedCommunity.parse(ctx.extended_community().getText())));
  }
}

First I validate that the extended_community context is not null. Then I set the vrfTargetCommunity to the value that was parsed. One thing to notice in the code snippet above is that since my datamodel stores VrfTargetCommunityorAuto as an ExtendedCommunityOrAuto, I parse the getText() value into an ExtendedCommunity and wrap it with ExtendedCommunityOrAuto.of(). For the remaining few commands, the extraction methods are very similar, so I will not be showing the remaining two conversions for import and export targets.

Add Extraction Testing

Now that I have the conversions written, I need to update the tests that I wrote in part 2 of this blog series. The test I created to validate the parsing of the Testconfig file is shown below:

@Test
public void testSwitchOptionsVrfTargetAutoExtraction() {
  parseJuniperConfig("juniper-so-vrf-target-auto");
}

Now I must extend this test in order to test the extraction of the vrf-target auto configuration. The test as shown above simply validates that the configuration line can be parsed by ANTLR. It does not validate the code snippets we wrote in the previous section that are taking the “text” data and saving it to the datamodel. The test I want to write is to validate that the context extraction is working and I can assert that when the command is found it is set to auto.

@Test
public void testSwitchOptionsVrfTargetAutoExtraction() {
  JuniperConfiguration juniperConfiguration = parseJuniperConfig("juniper-so-vrf-target-auto");
  ExtendedCommunityOrAuto targetOrAuto = juniperConfiguration.getMasterLogicalSystem().getSwitchOptions().getVrfTargetCommunityorAuto();
  assertThat(ExtendedCommunityOrAuto.auto(), equalTo(targetOrAuto));
  assertThat(true, equalTo(targetOrAuto.isAuto()));
  assertThat(juniperConfiguration.getMasterLogicalSystem().getSwitchOptions().getVrfTargetExport(), nullValue());
  assertThat(
      juniperConfiguration.getMasterLogicalSystem().getSwitchOptions().getVrfTargetImport(), nullValue());
}

In order to test the conversion, I’m using the same function and just extending it to pull data out of the parsed Juniper configuration. For this Testconfig I only have the set switch-options vrf-target auto command. As seen in the extraction test, I’m asserting that isAuto is true, and that the value of targetOrAuto is ExtendedCommunityOrAuto.auto(). The remaining options are not located in that Testconfig file, and therefore I am asserting their values are null.

Since I also created and explained the vrfTargetCommunity extraction, the test for it is shown below:

@Test
public void testSwitchOptionsVrfTargetTargetExtraction() {
  JuniperConfiguration juniperConfiguration = parseJuniperConfig("juniper-so-vrf-target-target");
  ExtendedCommunityOrAuto extcomm = juniperConfiguration.getMasterLogicalSystem().getSwitchOptions().getVrfTargetCommunityorAuto();
  assertThat(ExtendedCommunity.parse("target:65320:7999999"), equalTo(extcomm.getExtendedCommunity()));
  assertThat(false, equalTo(extcomm.isAuto()));
  assertThat(
      juniperConfiguration.getMasterLogicalSystem().getSwitchOptions().getVrfTargetImport(), nullValue());
  assertThat(
      juniperConfiguration.getMasterLogicalSystem().getSwitchOptions().getVrfTargetExport(), nullValue());
}

The logic I’m using is very similar. In this case I’m testing that the extracted ExtendedCommunity matches what I have in the Testconfig file, but I’m also validating that the rest of the switch-options that do not exist in the Testconfig files are null. For the remaining import and export rules, I created similar tests to validate the extraction of those ExtendedCommunity values.

Note: Batfish developers tend to use Matchers in their tests; they almost never use assertTrue/assertNull. Often it’s assertThat(getFoo(), nullValue()). Hamcrest Matchers tend to do a better job of explaining mismatches than plain JUnit assertions (e.g., assertThat(someList(), hasSize(5)) is much better than assertTrue(someList().size() == 5)).

Summary

In this post I provided more details on what a vendor-specific datamodel is and how it fits within the Batfish application. I identified that the switch-options datamodel/representation needs to be extended to support the new variables I needed. Next, I wrote and explained how to extract the “text” data and assign it to the datamodel. And finally, I explained and showed how to write some extraction tests to validate the extractions are working as intended.


Conclusion

The last post in the series will be coming soon.

  • Developing Batfish – Converting Vendor-Specific to Vendor-Independent (Part 4)

-Jeff


