In Part 3 of this series we looked at macros and filters with Jinja2 templating. Now it’s time to ramp it up to the next level and look at something a little closer to what you would see in the real world. In this post we will cover how to use complex Ansible inventories to create a hierarchy of configuration variables, and how those variables are inherited down into the resulting configuration. Using variable inheritance and host groups gives you the ability to flexibly assign variables to devices in a dynamic fashion based on group membership. We will see this in the example at the end, where we assign variables at different levels in the hierarchy. This way we can set common variables at a high level, and get more specific or override those variables at a lower point in the tree.
Ansible Inventory
In order to understand variable inheritance we will use a common example of a network with multiple regions, having sites within those regions. Ansible has the concept of host groups, which can consist of hosts or other groups. These groups provide the ability to run jobs or assign specific configuration parameters to specific groups of devices. For more advanced Ansible inventory documentation, check out the Ansible docs on inventory. Most of the advanced topics of Ansible inventory, such as dynamic inventory and using multiple inventory sources, are beyond the scope of this blog post.
We’ll start off with an inventory structure set up like this (for visualization):
└── regions
    ├── central
    │   └── sites
    │       ├── chicago
    │       ├── dallas
    │       └── minneapolis
    ├── eastern
    │   └── sites
    │       └── newyork
    └── western
        └── sites
            └── phoenix
Here we can see that regions is the top-level “grouping”. Central, eastern, and western are the regions, with sites underneath each of those. We can then build an Ansible inventory in YAML that looks like this:
# inventory
all:
  children:
    regions:
      children:
        central:
          children:
            sites:
              children:
                chicago:
                  hosts:
                    chi-router1:
                dallas:
                  hosts:
                    dal-router1:
                minneapolis:
                  hosts:
                    min-router1:
        eastern:
          children:
            sites:
              children:
                newyork:
                  hosts:
                    new-router1:
        western:
          children:
            sites:
              children:
                phoenix:
                  hosts:
                    phe-router1:
Now, this looks a little scary, but it is only expanded because of the children keywords that denote child groups of the parent groups. This adds a few extra lines, but it should still be readable. In a production environment it would be best to set up a file structure like the one in the Ansible inventory docs, where group variables go in their own files named for the group they are assigned to, rather than all in the same file; a single YAML inventory gets unwieldy rather quickly once you add a few sites and variables. Other options are dynamic inventories, or Golden Configuration-style jobs within Nautobot. In Nautobot (with the Golden Config plugin), you can create small YAML variable definitions that are pulled into “config contexts” and linked, with metadata, to various objects in Nautobot like regions, tenants, and sites. See the config context docs and the Golden Config docs for more information. For brevity and simplicity, we will stick with a single inventory file in our examples in this post. One note: be careful with spacing/indentation in YAML; it is very important. Now that we have the basic inventory structure, we can move on to assigning variables to the various groupings.
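To illustrate the file-per-group layout mentioned above, a production inventory might be split up like this (a hypothetical sketch; the file and directory names are just conventions, not taken from our example project):

```yaml
# Hypothetical split layout: the inventory file holds only groups and hosts,
# and each group's vars live in group_vars/<group_name>.yaml.
#
# inventory/
# ├── hosts.yaml            <- groups and hosts only, no vars
# └── group_vars/
#     ├── regions.yaml      <- vars applied to the regions group
#     ├── central.yaml      <- vars applied to the central region
#     └── chicago.yaml      <- vars applied to the chicago site

# group_vars/central.yaml
ntp_server: 1.1.1.1
dns_server: 1.1.2.1
```

Ansible automatically picks up `group_vars/` files that sit next to the inventory source, so no extra wiring is needed.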
Variable Assignment
In our example we are going to assign (via the vars key) an AAA server for all regions, DNS and NTP servers per region, and a syslog server per site to each of the routers in our inventory. Chicago, however, has a different DNS server that we want its devices to use instead of the regional one. Again, this is a pretty basic example, but you should begin to see the ways it can be used and adapted for different use cases and environments. One thing to note as you get into more advanced variable structures is that Ansible merges variable scopes in a specific order. The details are beyond the scope of this post, but you can read more at Ansible Variables – Merging. All we need to know for this example is that the lower down the tree a variable is assigned, the higher preference it is given, and it will override variables assigned at a higher level. In our example, the dns_server assigned at the central region will be overridden for hosts at the chicago site.
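One aside on that merge order: when a host belongs to several groups at the same depth, Ansible merges them alphabetically, with the last group merged winning ties. If that default doesn’t suit you, the built-in ansible_group_priority variable lets one group’s vars take precedence over its same-depth siblings (higher number wins). A hypothetical tweak to our inventory:

```yaml
# Hypothetical: make central's vars win ties against eastern/western,
# regardless of alphabetical merge order.
central:
  vars:
    ansible_group_priority: 10   # default is 1; higher wins among same-depth groups
    ntp_server: 1.1.1.1
    dns_server: 1.1.2.1
```

Note that ansible_group_priority must be set in the inventory source itself; it is not honored in group_vars files.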
# inventory
all:
  children:
    regions:
      vars:
        aaa_server: 4.4.4.4
      children:
        central:
          vars:
            ntp_server: 1.1.1.1
            dns_server: 1.1.2.1
          children:
            sites:
              children:
                chicago:
                  vars:
                    syslog_server: 1.1.1.2
                    dns_server: 1.1.2.2
                  hosts:
                    chi-router1:
                dallas:
                  vars:
                    syslog_server: 1.1.1.3
                  hosts:
                    dal-router1:
                minneapolis:
                  vars:
                    syslog_server: 1.1.1.4
                  hosts:
                    min-router1:
        eastern:
          vars:
            ntp_server: 2.2.2.2
            dns_server: 2.2.3.2
          children:
            sites:
              children:
                newyork:
                  vars:
                    syslog_server: 2.2.2.3
                  hosts:
                    new-router1:
        western:
          vars:
            ntp_server: 3.3.3.3
            dns_server: 3.3.4.3
          children:
            sites:
              children:
                phoenix:
                  vars:
                    syslog_server: 3.3.3.4
                  hosts:
                    phe-router1:
We can run ansible-inventory --inventory=inventory.yaml --list to validate our inventory file and also view how the variables get collapsed/merged and applied to hosts. The hostvars section shows the actual variables and values assigned to each host based on the inherited values. Notice that chi-router1 has a different dns_server from every other device: the dns_server variable assigned at the chicago site sits lower in the tree and overrides the one set by the central region. The aaa_server value is the same across all devices because it was assigned in the regions host group. One subtlety worth calling out in this output: because the group name sites is reused under every region, Ansible merges those definitions into a single sites group that is a child of central, eastern, and western, so every host inherits the vars of all three regions. For groups at the same depth the last one merged wins (alphabetically, western here), which is why every router, including the central ones, shows ntp_server 3.3.3.3 and dns_server 3.3.4.3 unless a site-level var overrides it. Giving each region’s site group a unique name (e.g., central_sites) would keep the regional values separate.
{
    "_meta": {
        "hostvars": {
            "chi-router1": {
                "aaa_server": "4.4.4.4",
                "ntp_server": "3.3.3.3",
                "dns_server": "1.1.2.2",
                "syslog_server": "1.1.1.2"
            },
            "dal-router1": {
                "aaa_server": "4.4.4.4",
                "ntp_server": "3.3.3.3",
                "dns_server": "3.3.4.3",
                "syslog_server": "1.1.1.3"
            },
            "min-router1": {
                "aaa_server": "4.4.4.4",
                "ntp_server": "3.3.3.3",
                "dns_server": "3.3.4.3",
                "syslog_server": "1.1.1.4"
            },
            "new-router1": {
                "aaa_server": "4.4.4.4",
                "ntp_server": "3.3.3.3",
                "dns_server": "3.3.4.3",
                "syslog_server": "2.2.2.3"
            },
            "phe-router1": {
                "aaa_server": "4.4.4.4",
                "ntp_server": "3.3.3.3",
                "dns_server": "3.3.4.3",
                "syslog_server": "3.3.3.4"
            }
        }
    }
    # extra lines omitted for brevity. Here you can see other information on how inventory is grouped.
}
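Host-level vars sit at the very bottom of the tree, so they beat any group value. As a hypothetical example, pinning a one-off DNS server on a single router would look like this in the inventory (the 10.0.0.53 address is made up for illustration):

```yaml
# Hypothetical host-level override: vars set directly on a host
# take precedence over every group var above it.
minneapolis:
  vars:
    syslog_server: 1.1.1.4
  hosts:
    min-router1:
      dns_server: 10.0.0.53   # only this host would use 10.0.0.53
```

This is the same precedence rule at work: the closer to the host a variable is assigned, the higher its priority.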
Templating with Inherited Variables
Now that we have our variable and inventory structure created, we can start using that inventory to create configuration sections with Jinja2 templating. We will use the same playbook from Part 3 of this series (with the inventory.yaml file above) to generate a configuration snippet. We will make one minor change to the playbook: we’re using hosts: regions, which targets the regions host group. If we had another group at the same level as regions (directly under the all group), we could target the hosts in those groups separately. With an inventory file like the one below, the playbook would run only against hosts inside the regions group, not the datacenters group.
# extra_groups_inventory.yaml
all:
  children:
    regions:
      {omitted for brevity...}
    datacenters:
      {omitted for brevity...}
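If we ever did want a single play to cover both top-level groups, Ansible’s host patterns can combine them. A quick sketch against the hypothetical inventory above:

```yaml
# Host-pattern sketches (tasks omitted):
- name: Target both top-level groups
  hosts: regions:datacenters     # union - hosts in either group
  gather_facts: false
  tasks: []

- name: Target regions except the chicago site
  hosts: regions:!chicago        # exclusion - in regions but not in chicago
  gather_facts: false
  tasks: []
```

The colon joins patterns as a union, :! excludes a group, and :& would intersect two groups.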
Now, on to our example playbook, where we will target the regions group and generate configurations for the devices within that group.
# playbook.yaml
- name: Template Generation Playbook
  hosts: regions  # <--- this is where we can limit to specific groups
  gather_facts: false
  tasks:
    - name: Generate template
      ansible.builtin.template:
        src: ./template.j2
        dest: ./configs/{{ inventory_hostname }}.cfg
      delegate_to: localhost
We will use a simple template to generate the configuration lines that set the DNS server, syslog (logging) host, NTP server, and TACACS+ (AAA) server for a Cisco device.
# template.j2
hostname {{ inventory_hostname }}
ip name-server {{ dns_server }}
logging host {{ syslog_server }}
ntp server {{ ntp_server }}
tacacs-server host {{ aaa_server }} key mysupersecretkey
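If a variable might be missing at some level of the hierarchy (say, a site with no syslog_server defined anywhere above it), Ansible’s templating fails on undefined variables by default. The default() filter we covered in Part 3 provides a fallback; a hedged variant of the template above, with made-up fallback addresses:

```jinja
# template.j2 (variant with fallbacks; the fallback addresses are illustrative only)
hostname {{ inventory_hostname }}
ip name-server {{ dns_server | default('8.8.8.8') }}
logging host {{ syslog_server | default('192.0.2.1') }}
ntp server {{ ntp_server }}
tacacs-server host {{ aaa_server }} key mysupersecretkey
```

With our inventory every variable is defined, so both templates render identically; the fallbacks only matter when a var is absent.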
We can now run the playbook against our template with the command ansible-playbook -i inventory.yaml playbook.yaml. First, we see below how the file structure looks prior to running the playbook; then we see the output of running the playbook with the command above.
# before running the playbook
.
├── configs
├── inventory.yaml
├── playbook.yaml
$ ansible-playbook -i inventory.yaml playbook.yaml
PLAY [Template Generation Playbook] **************************************************************************************************
TASK [Generate template] **************************************************************************************************
changed: [phe-router1 -> localhost]
changed: [new-router1 -> localhost]
changed: [min-router1 -> localhost]
changed: [chi-router1 -> localhost]
changed: [dal-router1 -> localhost]
PLAY RECAP **************************************************************************************************
chi-router1: ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
dal-router1: ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
min-router1: ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
new-router1: ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
phe-router1: ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
We can see the playbook generates five files in the ./configs folder, one for each router. Here we see the new file structure, and the resulting configuration files generated by our playbook.
# after running the playbook
.
├── configs
│ ├── chi-router1.cfg
│ ├── dal-router1.cfg
│ ├── min-router1.cfg
│ ├── new-router1.cfg
│ └── phe-router1.cfg
├── inventory.yaml
├── playbook.yaml
# chi-router1.cfg
hostname chi-router1
ip name-server 1.1.2.2
logging host 1.1.1.2
ntp server 3.3.3.3
tacacs-server host 4.4.4.4 key mysupersecretkey
{ other tacacs config omitted for brevity... }
# min-router1.cfg
hostname min-router1
ip name-server 3.3.4.3
logging host 1.1.1.4
ntp server 3.3.3.3
tacacs-server host 4.4.4.4 key mysupersecretkey
{ other tacacs config omitted for brevity... }
# dal-router1.cfg
hostname dal-router1
ip name-server 3.3.4.3
logging host 1.1.1.3
ntp server 3.3.3.3
tacacs-server host 4.4.4.4 key mysupersecretkey
{ other tacacs config omitted for brevity... }
# phe-router1.cfg
hostname phe-router1
ip name-server 3.3.4.3
logging host 3.3.3.4
ntp server 3.3.3.3
tacacs-server host 4.4.4.4 key mysupersecretkey
{ other tacacs config omitted for brevity... }
# new-router1.cfg
hostname new-router1
ip name-server 3.3.4.3
logging host 2.2.2.3
ntp server 3.3.3.3
tacacs-server host 4.4.4.4 key mysupersecretkey
{ other tacacs config omitted for brevity... }
In the results we can see that each router picked up its site’s syslog_server value, and chicago overrode the regional dns_server with its site-specific value. Note, though, that every router rendered ntp server 3.3.3.3 and (outside chicago) ip name-server 3.3.4.3: because the sites group name is shared by all three regions, Ansible merges it into one group parented by every region, so each host inherits all the regional vars and the last same-depth group merged (western) wins. The aaa_server variable is shared across all devices because it was assigned at the regions group level. We could override it for a specific host or site by assigning the same aaa_server variable lower in the hierarchy; or, if we had the datacenters group mentioned earlier in the post, we could give those devices a different AAA server by setting the variable there. Also note that we didn’t have to modify the variable references in the Jinja2 template at all, because Ansible decides which values get applied via the variable merge process we mentioned above. This keeps templates simple and clean, with no if/else logic along the lines of “if the device is in this region, apply DNS server X; otherwise apply DNS server Y”.