Ansible’s ability to merge variables from host and group vars intelligently, and to make the merged result accessible per host, is truly a great feature. However, you will likely run into a scenario where you need to run a task against one host while accessing variables from another host. One use case is dynamically gathering a spine’s loopback IP address from a leaf. How can this be done in Ansible?
One of Ansible’s magic variables, hostvars
(not to be confused with the host_vars
directory, which is something entirely different), provides access to the variables defined for every host in your inventory. Those variables can come from anywhere in Ansible’s variable precedence, including being registered or set as facts in a previous task. We can inspect them by running a debug task against hostvars.
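As a quick aside, a variable created at runtime is reachable the same way. Here is a minimal two-play sketch using the spine and leaf groups from the inventory that follows; the bgp_asn fact is hypothetical and not part of that inventory, it simply stands in for any registered or set value:
---
- name: "SET A FACT ON THE SPINES"
  hosts: "spine"
  gather_facts: "no"
  tasks:
    - name: "RECORD A VALUE AT RUNTIME"
      set_fact:
        bgp_asn: "65000"

- name: "READ THAT FACT FROM THE LEAFS"
  hosts: "leaf"
  gather_facts: "no"
  tasks:
    - name: "ACCESS ANOTHER HOST'S RUNTIME VARIABLE"
      debug:
        msg: "nyc-sp01 runs ASN {{ hostvars['nyc-sp01']['bgp_asn'] }}"
Because facts set during a playbook run persist in hostvars for the remainder of that run, the second play can read what the first play recorded on the spines.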
Here is the sample inventory file, which presumes your device names are resolvable:
[all]
nyc-sp01 loopback_ip=10.0.0.1
nyc-sp02 loopback_ip=10.0.0.2
nyc-lf01
nyc-lf02
[all:vars]
username=admin
password=password
ansible_network_os=ios
[spine]
nyc-sp01
nyc-sp02
[spine:vars]
role=spine
[leaf]
nyc-lf01
nyc-lf02
[leaf:vars]
role=leaf
Here is the sample playbook:
---
- name: "EXAMINE HOSTVARS OF SPINE DEVICES"
  hosts: "spine"
  gather_facts: "no"

  tasks:
    - name: "DEBUG HOSTVARS"
      debug:
        var: "hostvars"
Here are the results from the playbook:
PLAY [EXAMINE HOSTVARS OF SPINE DEVICES] *******************************************************************************************************************************************************************
TASK [DEBUG HOSTVARS] **************************************************************************************************************************************************************************************
ok: [nyc-sp01] => {
"hostvars": {
"nyc-lf01": {
"ansible_check_mode": false,
"ansible_diff_mode": false,
"ansible_facts": {},
"ansible_forks": 5,
"ansible_inventory_sources": [
"/tmp/hostvars-inventory"
],
"ansible_network_os": "ios",
"ansible_playbook_python": "/usr/bin/python",
"ansible_run_tags": [
"all"
],
"ansible_skip_tags": [],
"ansible_verbosity": 0,
"ansible_version": {
"full": "2.7.2",
"major": 2,
"minor": 7,
"revision": 2,
"string": "2.7.2"
},
"group_names": [
"leaf"
],
"groups": {
"all": [
"nyc-sp01",
"nyc-sp02",
"nyc-lf01",
"nyc-lf02"
],
"leaf": [
"nyc-lf01",
"nyc-lf02"
],
"spine": [
"nyc-sp01",
"nyc-sp02"
],
"ungrouped": []
},
"inventory_dir": "/tmp",
"inventory_file": "/tmp/hostvars-inventory",
"inventory_hostname": "nyc-lf01",
"inventory_hostname_short": "nyc-lf01",
"omit": "__omit_place_holder__dd460eebd4708cf10b36ee2e1228676b772ec0dd",
"password": "password",
"playbook_dir": "/tmp",
"role": "leaf",
"username": "admin"
},
"nyc-lf02": {
"ansible_check_mode": false,
"ansible_diff_mode": false,
"ansible_facts": {},
"ansible_forks": 5,
"ansible_inventory_sources": [
"/tmp/hostvars-inventory"
],
"ansible_network_os": "ios",
"ansible_playbook_python": "/usr/bin/python",
"ansible_run_tags": [
"all"
],
"ansible_skip_tags": [],
"ansible_verbosity": 0,
"ansible_version": {
"full": "2.7.2",
"major": 2,
"minor": 7,
"revision": 2,
"string": "2.7.2"
},
"group_names": [
"leaf"
],
"groups": {
"all": [
"nyc-sp01",
"nyc-sp02",
"nyc-lf01",
"nyc-lf02"
],
"leaf": [
"nyc-lf01",
"nyc-lf02"
],
"spine": [
"nyc-sp01",
"nyc-sp02"
],
"ungrouped": []
},
"inventory_dir": "/tmp",
"inventory_file": "/tmp/hostvars-inventory",
"inventory_hostname": "nyc-lf02",
"inventory_hostname_short": "nyc-lf02",
"omit": "__omit_place_holder__dd460eebd4708cf10b36ee2e1228676b772ec0dd",
"password": "password",
"playbook_dir": "/tmp",
"role": "leaf",
"username": "admin"
},
"nyc-sp01": {
"ansible_check_mode": false,
"ansible_diff_mode": false,
"ansible_facts": {},
"ansible_forks": 5,
"ansible_inventory_sources": [
"/tmp/hostvars-inventory"
],
"ansible_network_os": "ios",
"ansible_playbook_python": "/usr/bin/python",
"ansible_run_tags": [
"all"
],
"ansible_skip_tags": [],
"ansible_verbosity": 0,
"ansible_version": {
"full": "2.7.2",
"major": 2,
"minor": 7,
"revision": 2,
"string": "2.7.2"
},
"group_names": [
"spine"
],
"groups": {
"all": [
"nyc-sp01",
"nyc-sp02",
"nyc-lf01",
"nyc-lf02"
],
"leaf": [
"nyc-lf01",
"nyc-lf02"
],
"spine": [
"nyc-sp01",
"nyc-sp02"
],
"ungrouped": []
},
"inventory_dir": "/tmp",
"inventory_file": "/tmp/hostvars-inventory",
"inventory_hostname": "nyc-sp01",
"inventory_hostname_short": "nyc-sp01",
"loopback_ip": "10.0.0.1",
"omit": "__omit_place_holder__dd460eebd4708cf10b36ee2e1228676b772ec0dd",
"password": "password",
"playbook_dir": "/tmp",
"role": "spine",
"username": "admin"
},
"nyc-sp02": {
"ansible_check_mode": false,
"ansible_diff_mode": false,
"ansible_facts": {},
"ansible_forks": 5,
"ansible_inventory_sources": [
"/tmp/hostvars-inventory"
],
"ansible_network_os": "ios",
"ansible_playbook_python": "/usr/bin/python",
"ansible_run_tags": [
"all"
],
"ansible_skip_tags": [],
"ansible_verbosity": 0,
"ansible_version": {
"full": "2.7.2",
"major": 2,
"minor": 7,
"revision": 2,
"string": "2.7.2"
},
"group_names": [
"spine"
],
"groups": {
"all": [
"nyc-sp01",
"nyc-sp02",
"nyc-lf01",
"nyc-lf02"
],
"leaf": [
"nyc-lf01",
"nyc-lf02"
],
"spine": [
"nyc-sp01",
"nyc-sp02"
],
"ungrouped": []
},
"inventory_dir": "/tmp",
"inventory_file": "/tmp/hostvars-inventory",
"inventory_hostname": "nyc-sp02",
"inventory_hostname_short": "nyc-sp02",
"loopback_ip": "10.0.0.2",
"omit": "__omit_place_holder__dd460eebd4708cf10b36ee2e1228676b772ec0dd",
"password": "password",
"playbook_dir": "/tmp",
"role": "spine",
"username": "admin"
}
}
}
ok: [nyc-sp02] => {
"hostvars": {
"nyc-lf01": {
"ansible_check_mode": false,
"ansible_diff_mode": false,
"ansible_facts": {},
"ansible_forks": 5,
"ansible_inventory_sources": [
"/tmp/hostvars-inventory"
],
"ansible_network_os": "ios",
"ansible_playbook_python": "/usr/bin/python",
"ansible_run_tags": [
"all"
],
"ansible_skip_tags": [],
"ansible_verbosity": 0,
"ansible_version": {
"full": "2.7.2",
"major": 2,
"minor": 7,
"revision": 2,
"string": "2.7.2"
},
"group_names": [
"leaf"
],
"groups": {
"all": [
"nyc-sp01",
"nyc-sp02",
"nyc-lf01",
"nyc-lf02"
],
"leaf": [
"nyc-lf01",
"nyc-lf02"
],
"spine": [
"nyc-sp01",
"nyc-sp02"
],
"ungrouped": []
},
"inventory_dir": "/tmp",
"inventory_file": "/tmp/hostvars-inventory",
"inventory_hostname": "nyc-lf01",
"inventory_hostname_short": "nyc-lf01",
"omit": "__omit_place_holder__dd460eebd4708cf10b36ee2e1228676b772ec0dd",
"password": "password",
"playbook_dir": "/tmp",
"role": "leaf",
"username": "admin"
},
"nyc-lf02": {
"ansible_check_mode": false,
"ansible_diff_mode": false,
"ansible_facts": {},
"ansible_forks": 5,
"ansible_inventory_sources": [
"/tmp/hostvars-inventory"
],
"ansible_network_os": "ios",
"ansible_playbook_python": "/usr/bin/python",
"ansible_run_tags": [
"all"
],
"ansible_skip_tags": [],
"ansible_verbosity": 0,
"ansible_version": {
"full": "2.7.2",
"major": 2,
"minor": 7,
"revision": 2,
"string": "2.7.2"
},
"group_names": [
"leaf"
],
"groups": {
"all": [
"nyc-sp01",
"nyc-sp02",
"nyc-lf01",
"nyc-lf02"
],
"leaf": [
"nyc-lf01",
"nyc-lf02"
],
"spine": [
"nyc-sp01",
"nyc-sp02"
],
"ungrouped": []
},
"inventory_dir": "/tmp",
"inventory_file": "/tmp/hostvars-inventory",
"inventory_hostname": "nyc-lf02",
"inventory_hostname_short": "nyc-lf02",
"omit": "__omit_place_holder__dd460eebd4708cf10b36ee2e1228676b772ec0dd",
"password": "password",
"playbook_dir": "/tmp",
"role": "leaf",
"username": "admin"
},
"nyc-sp01": {
"ansible_check_mode": false,
"ansible_diff_mode": false,
"ansible_facts": {},
"ansible_forks": 5,
"ansible_inventory_sources": [
"/tmp/hostvars-inventory"
],
"ansible_network_os": "ios",
"ansible_playbook_python": "/usr/bin/python",
"ansible_run_tags": [
"all"
],
"ansible_skip_tags": [],
"ansible_verbosity": 0,
"ansible_version": {
"full": "2.7.2",
"major": 2,
"minor": 7,
"revision": 2,
"string": "2.7.2"
},
"group_names": [
"spine"
],
"groups": {
"all": [
"nyc-sp01",
"nyc-sp02",
"nyc-lf01",
"nyc-lf02"
],
"leaf": [
"nyc-lf01",
"nyc-lf02"
],
"spine": [
"nyc-sp01",
"nyc-sp02"
],
"ungrouped": []
},
"inventory_dir": "/tmp",
"inventory_file": "/tmp/hostvars-inventory",
"inventory_hostname": "nyc-sp01",
"inventory_hostname_short": "nyc-sp01",
"loopback_ip": "10.0.0.1",
"omit": "__omit_place_holder__dd460eebd4708cf10b36ee2e1228676b772ec0dd",
"password": "password",
"playbook_dir": "/tmp",
"role": "spine",
"username": "admin"
},
"nyc-sp02": {
"ansible_check_mode": false,
"ansible_diff_mode": false,
"ansible_facts": {},
"ansible_forks": 5,
"ansible_inventory_sources": [
"/tmp/hostvars-inventory"
],
"ansible_network_os": "ios",
"ansible_playbook_python": "/usr/bin/python",
"ansible_run_tags": [
"all"
],
"ansible_skip_tags": [],
"ansible_verbosity": 0,
"ansible_version": {
"full": "2.7.2",
"major": 2,
"minor": 7,
"revision": 2,
"string": "2.7.2"
},
"group_names": [
"spine"
],
"groups": {
"all": [
"nyc-sp01",
"nyc-sp02",
"nyc-lf01",
"nyc-lf02"
],
"leaf": [
"nyc-lf01",
"nyc-lf02"
],
"spine": [
"nyc-sp01",
"nyc-sp02"
],
"ungrouped": []
},
"inventory_dir": "/tmp",
"inventory_file": "/tmp/hostvars-inventory",
"inventory_hostname": "nyc-sp02",
"inventory_hostname_short": "nyc-sp02",
"loopback_ip": "10.0.0.2",
"omit": "__omit_place_holder__dd460eebd4708cf10b36ee2e1228676b772ec0dd",
"password": "password",
"playbook_dir": "/tmp",
"role": "spine",
"username": "admin"
}
}
}
PLAY RECAP *************************************************************************************************************************************************************************************************
nyc-sp01 : ok=1 changed=0 unreachable=0 failed=0
nyc-sp02 : ok=1 changed=0 unreachable=0 failed=0
With this information, we can access any host’s variables using the hostvars["inventory_hostname_of_device"]
format. For example, hostvars["nyc-sp01"]
provides access to every variable assigned to nyc-sp01, so on that host hostvars["nyc-sp01"]["ansible_network_os"]
and ansible_network_os
reference the same variable.
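Before moving on, here is a quick one-task sanity check of that equivalence, a minimal sketch limited to nyc-sp01; both references should print ios from the inventory above:
---
- name: "SANITY CHECK THE TWO REFERENCES"
  hosts: "nyc-sp01"
  gather_facts: "no"
  tasks:
    - name: "BOTH LOOKUPS RETURN THE SAME VALUE"
      debug:
        msg: "{{ ansible_network_os }} == {{ hostvars['nyc-sp01']['ansible_network_os'] }}"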
Let’s give this a try by accessing the role
defined in our inventory. We will use a debug task to print the host the play is currently running against along with each other device’s role
.
Here is the playbook:
---
- name: "ACCESS ANOTHER DEVICE'S VARIABLE"
  hosts: "spine"
  connection: "local"
  gather_facts: "no"

  tasks:
    - name: "PRINT DEVICE ROLE"
      debug:
        msg: "WHILE ACCESSING {{ inventory_hostname }} CAN STILL DETERMINE THAT {{ item }} HAS A ROLE OF: {{ hostvars[item]['role'] }}"
      with_items:
        - "nyc-sp01"
        - "nyc-sp02"
        - "nyc-lf01"
        - "nyc-lf02"
      when: "item != inventory_hostname"
The results show that we successfully accessed the other hosts’ role
variable:
PLAY [ACCESS ANOTHER DEVICE'S VARIABLE] ********************************************************************************************************************************************************************
TASK [PRINT DEVICE ROLE] ***********************************************************************************************************************************************************************************
skipping: [nyc-sp01] => (item=nyc-sp01)
ok: [nyc-sp02] => (item=nyc-sp01) => {
"msg": "WHILE ACCESSING nyc-sp02 CAN STILL DETERMINE THAT nyc-sp01 HAS A ROLE OF: spine"
}
skipping: [nyc-sp02] => (item=nyc-sp02)
ok: [nyc-sp01] => (item=nyc-sp02) => {
"msg": "WHILE ACCESSING nyc-sp01 CAN STILL DETERMINE THAT nyc-sp02 HAS A ROLE OF: spine"
}
ok: [nyc-sp02] => (item=nyc-lf01) => {
"msg": "WHILE ACCESSING nyc-sp02 CAN STILL DETERMINE THAT nyc-lf01 HAS A ROLE OF: leaf"
}
ok: [nyc-sp01] => (item=nyc-lf01) => {
"msg": "WHILE ACCESSING nyc-sp01 CAN STILL DETERMINE THAT nyc-lf01 HAS A ROLE OF: leaf"
}
ok: [nyc-sp02] => (item=nyc-lf02) => {
"msg": "WHILE ACCESSING nyc-sp02 CAN STILL DETERMINE THAT nyc-lf02 HAS A ROLE OF: leaf"
}
ok: [nyc-sp01] => (item=nyc-lf02) => {
"msg": "WHILE ACCESSING nyc-sp01 CAN STILL DETERMINE THAT nyc-lf02 HAS A ROLE OF: leaf"
}
PLAY RECAP *************************************************************************************************************************************************************************************************
nyc-sp01 : ok=1 changed=0 unreachable=0 failed=0
nyc-sp02 : ok=1 changed=0 unreachable=0 failed=0
In order to dynamically gather the spine loopback IP addresses, we can combine hostvars
with a bit of hostname parsing. The naming standard in this scenario presumes the first four characters of every hostname are the same (nyc-); we take that prefix, append the spine indicator (sp01 or sp02), and then look up that host’s loopback_ip
variable.
Here is the playbook:
---
- name: "GET LOOPBACK IP OF SPINE"
  hosts: "leaf"
  gather_facts: "no"

  tasks:
    - name: "DEBUG SPINE01 LOOPBACK"
      debug:
        msg: "{{ inventory_hostname }} SHOULD PEER WITH {{ hostvars[inventory_hostname[0:4] ~ 'sp01']['loopback_ip'] }}"

    - name: "DEBUG SPINE02 LOOPBACK"
      debug:
        msg: "{{ inventory_hostname }} SHOULD PEER WITH {{ hostvars[inventory_hostname[0:4] ~ 'sp02']['loopback_ip'] }}"
Which will result in:
PLAY [GET LOOPBACK IP OF SPINE] ****************************************************************************************************************************************************************************
TASK [DEBUG SPINE01 LOOPBACK] ******************************************************************************************************************************************************************************
ok: [nyc-lf01] => {
"msg": "nyc-lf01 SHOULD PEER WITH 10.0.0.1"
}
ok: [nyc-lf02] => {
"msg": "nyc-lf02 SHOULD PEER WITH 10.0.0.1"
}
TASK [DEBUG SPINE02 LOOPBACK] ******************************************************************************************************************************************************************************
ok: [nyc-lf01] => {
"msg": "nyc-lf01 SHOULD PEER WITH 10.0.0.2"
}
ok: [nyc-lf02] => {
"msg": "nyc-lf02 SHOULD PEER WITH 10.0.0.2"
}
PLAY RECAP *************************************************************************************************************************************************************************************************
nyc-lf01 : ok=2 changed=0 unreachable=0 failed=0
nyc-lf02 : ok=2 changed=0 unreachable=0 failed=0
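The hostname-parsing approach leans on a strict naming convention. If that convention ever becomes too brittle to rely on, the same lookup can be driven by the groups magic variable instead; here is a minimal sketch using only the inventory above, iterating over every member of the spine group:
---
- name: "GET LOOPBACK IP OF EVERY SPINE"
  hosts: "leaf"
  gather_facts: "no"
  tasks:
    - name: "DEBUG ALL SPINE LOOPBACKS"
      debug:
        msg: "{{ inventory_hostname }} SHOULD PEER WITH {{ hostvars[item]['loopback_ip'] }}"
      with_items: "{{ groups['spine'] }}"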
This example is simple by design, but it should demonstrate the capability and perhaps match a future use case of yours.
-Ken