How to Save and Backup Network Configuration Files Using Ansible

  • April 30, 2016

Ever wish you could easily perform a backup of network device configs, or just do a copy run start across a number of devices because you forgot whether you saved the config after your change? If so, you are not alone.

This tutorial walks through using an Ansible module called ntc_save_config. By default, it does a copy run start (or commit, depending on the platform). You also have the option to specify a remote filename to do the equivalent of copy run <filename>. Additionally, you can specify a local filename; when present, the module backs up the running configuration to the Ansible control host.

The ntc_save_config module currently supports Cisco IOS, Cisco Nexus, Arista EOS, and Juniper Junos devices.

Create Ansible Inventory

We are going to create an Ansible inventory file called inventory with four groups, one per device type. The [all:vars] section defines the device credentials, and each group defines a net_platform variable that will be used as an input to the Ansible module.

[all:vars]
un=ntc
pwd=ntc123

[nxos]
nxos-spine1

[eos]
eos-spine1

[ios]
csr1

[junos]
vmx1

[nxos:vars]
net_platform=cisco_nxos_nxapi

[ios:vars]
net_platform=cisco_ios_ssh

[eos:vars]
net_platform=arista_eos_eapi

[junos:vars]
net_platform=juniper_junos_netconf

Create Backups Directory

We'll create a new directory called backups; it should exist in the same directory as the inventory file.

ntc@ntc:~/demo$ ls
inventory
ntc@ntc:~/demo$ mkdir backups
ntc@ntc:~/demo$ 
ntc@ntc:~/demo$ ls
backups  inventory

Create Playbook to Backup & Save Configs

At this point, our inventory is set up and the playbook needs to be built. We will have a single task that performs the save and backup across all four devices. Our playbook is saved as backup-configs.yml.

---
  - name: PLAY - BACKUP
    hosts: all
    gather_facts: no
    connection: local

    tasks:

      - name: SAVE AND BACKUP CONFIGS
        ntc_save_config:
          local_file=backups/{{ inventory_hostname }}.conf
          platform={{ net_platform }}
          host={{ inventory_hostname }}
          username={{ un }}
          password={{ pwd }}

You can see the module only takes a few parameters. We are using the optional local_file parameter because we want a backup of each config. If this parameter were omitted, the save on the device would still be performed, just without a backup.
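For example, a save-only task (commit the config on the device, with no backup to the control host) would simply drop the local_file parameter. This is a sketch based on the parameters shown above:

```yaml
      # Hypothetical variant: save the config on each device only;
      # without local_file, nothing is copied back to the control host.
      - name: SAVE CONFIGS ONLY
        ntc_save_config:
          platform={{ net_platform }}
          host={{ inventory_hostname }}
          username={{ un }}
          password={{ pwd }}
```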

Execute Playbook

Now we are ready to execute the playbook and backup our configs.

ntc@ntc:~/demo$ ansible-playbook -i inventory backup-configs.yml 

PLAY [PLAY - BACKUP] ********************************************************** 

TASK: [SAVE AND BACKUP CONFIGS] *********************************************** 
changed: [eos-spine1]
changed: [vmx1]
changed: [csr1]
changed: [nxos-spine1]

PLAY RECAP ******************************************************************** 
csr1                       : ok=1    changed=1    unreachable=0    failed=0   
eos-spine1                 : ok=1    changed=1    unreachable=0    failed=0   
nxos-spine1                : ok=1    changed=1    unreachable=0    failed=0   
vmx1                       : ok=1    changed=1    unreachable=0    failed=0   

Note: this module is not idempotent.

Verify Backups

With a quick check, we can verify all backups were successful.

ntc@ntc:~/demo$ ls backups/
csr1.conf  eos-spine1.conf  nxos-spine1.conf  vmx1.conf

Note: there are built-in Ansible variables for the date and time, so they can easily be added to the filename. Should you want to, you could also extend this play to version control each config.
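As a sketch of that idea: since the play above runs with gather_facts: no, the fact-based ansible_date_time variable would not be populated, so one option is the pipe lookup, which runs a command on the control host. The task below is a hypothetical variant of the one in the playbook:

```yaml
      # Hypothetical variant: embed the current date in each backup filename,
      # e.g. backups/csr1_2016-04-30.conf. The pipe lookup executes `date`
      # on the Ansible control host, not on the network device.
      - name: SAVE AND BACKUP CONFIGS WITH DATED FILENAMES
        ntc_save_config:
          local_file=backups/{{ inventory_hostname }}_{{ lookup('pipe', 'date +%Y-%m-%d') }}.conf
          platform={{ net_platform }}
          host={{ inventory_hostname }}
          username={{ un }}
          password={{ pwd }}
```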

-Jason

  • michael

    having some issues running this playbook from my onsite ansible control vm, is "ntc_save_config" part of ansible 2.0.1? The error I was getting initially was "ERROR! no action detected in task," pointing to "- name: SAVE AND BACKUP CONFIGS"

    I went ahead and installed "pyntc" as noted on github for module "ntc_save_config", but now I am getting "AttributeError: 'module' object has no attribute '_vendor'"

    that's where I'm at now, still working through it, it may be my control vm…been trying to use direct access via ssh to external IP addresses provided via Dashboard (which I may add is an exceptional feature, being able to directly test API calls and such!) Thanks!