Ansible Builder, Runner, and Execution Environments
Over the past few weeks I’ve been on two projects that involved Ansible AWX: one an upgrade from an old version that used virtual environments (venvs), the other a new deployment that required specific environments. Since both the upgrade target and the new deployment would be AWX 19+, it quickly became apparent that Ansible Execution Environments would solve the problem.
So what was the problem? The existing documentation is either thoroughly outdated, confusing, or not detailed enough to really delineate what can be achieved.
In this blog post I hope to cover Ansible Builder, Runner, and Execution Environments in a way that is verbose enough to understand them, and also provides enough detail to get started with using the tools.
Ansible Builder
Ansible Builder is simply a tool that can be used to create Execution Environments (EEs). It takes in a YAML file that defines the requirements for the EE: Python dependencies, Ansible collections, and system-level dependencies. From these it creates a container image with all the dependencies included. It’s important to note that EEs are replacing virtual environments within Ansible AWX/Tower, but EEs are NOT restricted to AWX/Tower deployments. EEs can just as easily be used from the command line via Ansible Runner, which is covered in the next section.
More details can be found in the Ansible Builder documentation.
Let’s create an EE for Nautobot Ansible. It needs to include pynautobot as a Python dependency and the networktocode.nautobot Ansible collection.
In order to run ansible-builder we need three files:
- execution-environment.yml – the default file ansible-builder looks for to create a build context.
- requirements.txt – provides the Python dependencies; can be generated from pip freeze.
- requirements.yml – installs the required Ansible collections.
# execution-environment.yml
---
version: 1
dependencies:
  galaxy: requirements.yml
  python: requirements.txt
additional_build_steps:
  prepend: |
    RUN pip install --upgrade pip setuptools
  append:
    - RUN ls -la /etc
# requirements.txt
pynautobot==1.0.3
# requirements.yml
---
collections:
  - networktocode.nautobot
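If you want the build to be fully reproducible, the collection can also be pinned in requirements.yml using the name/version form that ansible-galaxy supports (the version here is just an example):

```yaml
# requirements.yml (with an explicit version pin)
---
collections:
  - name: networktocode.nautobot
    version: 3.1.0
```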
You can see the execution-environment.yml file references the other two dependency files. The two files will not be covered in detail in this blog since their formats are well known.
Now that the files are created, we can build our EE. The two commands below are identical except that the first explicitly specifies the execution file; by default, ansible-builder looks for execution-environment.yml in the cwd.
(builder) [jeff@centos ~]$ ansible-builder build --tag jeff/nautobot-ee --container-runtime docker -f execution-environment.yml
Running command:
docker build -f context/Dockerfile -t jeff/nautobot-ee context
Complete! The build context can be found at: /home/jeff/context
(builder) [jeff@centos ~]$ ansible-builder build --tag jeff/nautobot-ee --container-runtime docker
Running command:
docker build -f context/Dockerfile -t jeff/nautobot-ee context
Complete! The build context can be found at: /home/jeff/context
The default container-runtime is podman; therefore, we pass docker instead.
The builder does two things: it creates a build context, which above is in /home/jeff/context, and it creates the image by running docker build.
context/
├── _build
│ ├── requirements.txt
│ └── requirements.yml
└── Dockerfile
1 directory, 3 files
At a high level, ansible-builder is an abstraction that creates a Dockerfile, which can then be used to build the container. In this case, the ansible-builder build command also creates the container image seen below.
(builder) [jeff@centos ~]$ docker image ls | grep nautobot
jeff/nautobot-ee latest d0ec0cd93578 27 hours ago 623MB
Next, if you’re like me, you want to validate that the container has the correct dependencies installed. This can be done using docker run.
docker run -it --rm jeff/nautobot-ee /bin/bash
bash-4.4# ansible --version
ansible [core 2.11.6.post0]
Ansible core and ansible-runner come from the base image used.
bash-4.4# pip freeze | grep pynautobot
pynautobot==1.0.3
The container has the correct pynautobot version we specified!
bash-4.4# ansible-galaxy collection list
# /usr/share/ansible/collections/ansible_collections
Collection Version
---------------------- -------
networktocode.nautobot 3.1.0
The Ansible collection was also successfully installed into the container!
Finally the base image shows:
bash-4.4# cat /etc/redhat-release
CentOS Linux release 8.4.2105
What if you need a specific Ansible version? It’s as simple as changing the base image, which can be set in execution-environment.yml.
build_arg_defaults:
  EE_BASE_IMAGE: 'quay.io/ansible/ansible-runner:stable-2.9-latest'
Once the ansible-builder build command is rerun and the container image is created, the Ansible version now comes from the new base image.
(builder) [jeff@centos ~]$ docker run -it --rm jeff/nautobot-ee /bin/bash
bash-4.4# ansible --version
ansible 2.9.27.post0
Now that we have the container image locally, it can be used locally via ansible-runner, or it can be pushed up to a container registry to be used with AWX/Tower.
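Pushing the image looks like any other container push; for example, with Docker Hub the commands would be along these lines (the registry namespace is illustrative):

```
docker tag jeff/nautobot-ee <namespace>/nautobot-ee:latest
docker push <namespace>/nautobot-ee:latest
```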
Ansible Runner
To find out more about Ansible Runner, see the documentation. I will say, the documentation is quite a bit out of date, and that is honestly the reason I started thinking about this blog post. Multiple command references are no longer valid; that, coupled with every other blog/article I found using those no-longer-existent commands, caused some headache in my initial discovery.
With that out of the way, Ansible Runner is not new: AWX/Tower have been using ansible-runner to execute jobs since well before EEs came out. Ansible Runner is an awesome tool to run Ansible via Python, and it’s a great resource for testing EEs and/or using EEs locally.
Since this blog is still focused on EEs, I will focus on testing the EE locally. To accomplish this, I will be using the ansible-runner command-line tool.
As expected, there is a standard directory structure that ansible-runner expects; this is documented in the Ansible Runner docs.
For this example, I have this structure:
.
├── inventory
│ └── hosts
└── project
└── test.yml
3 directories, 3 files
with a simple test playbook:
---
- hosts: all
  connection: local
  tasks:
    - debug:
        msg: "Simple Test Debug for EE"
with an even simpler inventory:
localhost ansible_connection=local ansible_python_interpreter=""
Now that I have a simple playbook and inventory specified, I can execute the playbook utilizing the ansible-runner command-line tool.
WARNING: This is where all the docs and blog posts show command arguments of ‘adhoc’ or ‘playbook’. Those no longer exist.
ansible-runner run --process-isolation --process-isolation-executable docker --container-image jeff/nautobot-ee -p test.yml example/
Executing the command produces the output on STDOUT shown below:
(builder) [jeff@centos ~]$ ansible-runner run --process-isolation --process-isolation-executable docker --container-image jeff/nautobot-ee -p test.yml example/
PLAY [all] *********************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": "Simple Test Debug for EE"
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
To limit the command-line args being specified, they can instead be defined in env/settings.
# env/settings
---
container_image: jeff/nautobot-ee
process_isolation_executable: docker # or podman
process_isolation: true
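With those settings in place, the earlier command shrinks to:

```
ansible-runner run example/ -p test.yml
```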
After the successful execution, you will see the magic of ansible-runner: it saves all the execution information. Look at how the tree has changed.
.
├── artifacts
│ └── d53583ca-330c-4589-ade4-b86649383cbf
│ ├── ansible_version.txt
│ ├── collections.json
│ ├── command
│ ├── env.list
│ ├── fact_cache
│ │ └── localhost
│ ├── job_events
│ │ ├── 1-d0c542b2-574c-4a17-b223-a1f5f3ce4bf5.json
│ │ ├── 2-0242ac11-0002-d90d-c5e7-000000000006.json
│ │ ├── 3-0242ac11-0002-d90d-c5e7-00000000000c.json
│ │ ├── 4-8fc529a6-77e4-47a8-9a23-ecc40e7d1375.json
│ │ ├── 5-4b31fa85-4adb-4e2e-bf3f-a64a75f4bed8.json
│ │ ├── 6-0242ac11-0002-d90d-c5e7-000000000008.json
│ │ ├── 7-77acb95e-9123-476c-b5e8-042b84aef01b.json
│ │ ├── 8-b04bba60-6a5e-41c9-bc56-749772238696.json
│ │ └── 9-199721de-2db8-4056-9f2e-35aa4c550793.json
│ ├── rc
│ ├── status
│ ├── stderr
│ └── stdout
├── env
│ └── settings
├── inventory
│ └── hosts
└── project
└── test.yml
7 directories, 21 files
This is fantastic—we now have all the information we could ever need from the playbook execution!
Each time the runner executes, a new artifact directory is created that stores all the same data. This can be incredibly helpful, whether you have an external tool you want to feed the data to or another orchestration engine using Ansible.
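As a hypothetical sketch of feeding that data elsewhere, a few lines of Python (the helper name is my own) can pull the result of the most recent run out of the artifacts directory:

```python
from pathlib import Path

def latest_run_result(private_data_dir):
    """Return (status, rc, stdout text) for the newest run under
    <private_data_dir>/artifacts/, using the status, rc, and stdout
    files that ansible-runner writes for every run."""
    artifacts = Path(private_data_dir) / "artifacts"
    # Each run gets its own identifier directory; take the newest one.
    run_dir = max(artifacts.iterdir(), key=lambda p: p.stat().st_mtime)
    status = (run_dir / "status").read_text().strip()
    rc = int((run_dir / "rc").read_text().strip())
    stdout = (run_dir / "stdout").read_text()
    return status, rc, stdout
```

Pointed at the example/ directory above, it returns the status string, the return code, and the full playbook stdout.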
Now that the EE we created is working properly locally, we can be confident that it will work when it is used by Ansible AWX/Tower.
Ansible Execution Environments (EEs)
The preceding sections cover a lot of content. At a high level, an EE is simply a locked-down environment where specific dependencies are installed and predictable. EEs are reusable and shareable: once the container is pushed to a container registry, it can be used in any environment and the execution will be identical. This is powerful, and much more flexible than venvs were in previous releases of AWX/Tower.
Using an EE in AWX
Once an EE is created in the AWX UI, it can be tied to an organization or made available globally. When creating a project in AWX, the EE can be assigned to the entire project or, alternatively, attached directly to a template.
A few additional notes to wrap up the blog. I’m running AWX on K3S. I ran some templates and was looking at the logs and couldn’t find logs relating to the download of the EE from the registry. It wasn’t until I looked into the events via kubectl get events that I realized AWX was actually spinning up a new pod with that container to do the work.
[root@localhost ~]# kubectl get event
LAST SEEN TYPE REASON OBJECT MESSAGE
20s Normal Scheduled pod/automation-job-68-x2jgm Successfully assigned default/automation-job-68-x2jgm to localhost.localdomain
19s Normal Pulling pod/automation-job-68-x2jgm Pulling image "docker.io/<redact>/nautobot-ee:latest"
19s Normal Pulled pod/automation-job-68-x2jgm Successfully pulled image "docker.io/<redact>/nautobot-ee:latest" in 500.593958ms
18s Normal Created pod/automation-job-68-x2jgm Created container worker
18s Normal Started pod/automation-job-68-x2jgm Started container worker
The pod is only used to execute the job and then it is destroyed.
Conclusion
I hope this helps explain some of the nuances I’ve found while researching and attempting to use the newer tools that Ansible offers. As EEs get more mature, I’m hoping to write some follow-up blog posts going into more detail.
-Jeff