Custom Ansible Execution Environments

Ansible Tower users are gearing up for a big migration to Ansible Automation Platform 2 (AAP). Ansible Tower 3.8 is technically AAP 1.2, which sunsets in September of 2023. AAP brings a few welcome usability updates: the improved job search lets you filter in the web UI on specific job details like the limit, and the stdout and stderr output in the UI is more readable. There's also a Private Automation Hub, which acts as a combination container registry and on-LAN Ansible Galaxy server. Automation Hub brings frequently used collections closer to the controllers, which can speed up job execution by removing the need to pull collections directly from the internet.

Automation Hub also hosts the execution environments that AAP uses to run Ansible plays in containerized runtimes. That's one of the biggest fundamental differences between AAP and the outgoing Tower product: where Tower relies on Python virtual environments to keep competing Python dependencies apart, its replacement uses container runtimes. Execution environments are more portable than the old virtual environments, which had to be created or recreated by each developer. With a container image that runs Ansible jobs, developers can pull what they need and get straight to writing their automation and configuration instead of wrangling their Python environments.

This post will walk through a typical Python virtual environment setup and then a simple execution environment build and run. At the end I'll demonstrate Ansible's text-based user interface (TUI) with ansible-navigator; there is a lot more to the tool than what I cover here. This post also assumes basic familiarity with Python, Red Hat Enterprise Linux, and container tools like Docker or Podman. I'd encourage anyone working with AAP or AWX to also look into ansible-builder for building execution environments. For now, we're just customizing containers and using them to run Ansible jobs.

Traditional Python Virtual Environments

Until recently, developing Ansible automation has been pretty straightforward once you get the hang of it. For the most part it's just a matter of writing some YAML, double-checking your spacing, maybe installing a collection or two, and testing. But what if one of your collections requires a Python package outside of the standard library? Your best bet is usually to create a Python virtual environment that contains everything your plays need. A virtual environment is exactly what it sounds like: an isolated Python environment whose packages are only available while the environment is active, which keeps unnecessary packages from polluting your system Python.

To create a virtual environment, just run:

python3 -m venv ansible-test

Then activate it by sourcing the activate script:

. ansible-test/bin/activate (note the leading dot .)
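Your shell prompt should now be prefixed with the environment name. To confirm you're on the virtual environment's interpreter, and to drop back to the system Python when you're done:

which python
deactivate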

With the virtual environment active, you can now install any prerequisite packages you might need to support a Galaxy collection, and then the collection itself. For instance, the community.mysql collection has both system and Python dependencies. On Red Hat based distros the system packages are gcc, python3-devel, and mysql-devel, and there are two Python package requirements: PyMySQL and mysqlclient. We'll install the system packages with dnf/yum, then pip install the Python dependencies. Here's our barebones requirements.txt for pip packages and requirements.yml for Galaxy content – we'll reuse both later in the execution environment:

requirements.txt

PyMySQL
mysqlclient

requirements.yml

collections:
  - name: community.mysql
    version: 3.6.0

Now we install our system dependencies with:

dnf install gcc mysql-devel python3-devel -y

And our Python dependencies with:

pip install -r requirements.txt && ansible-galaxy collection install -r requirements.yml

So, not too bad. Just set up a virtual environment, activate it, install your requirements, and get to developing. But what if you don't want to install system packages? Maybe they conflict with something else on your development machine. And how do you collaborate with your team? How do you keep everyone's virtual environments in sync? Of course, you can use Ansible itself to create the virtual environment with the pip module, something like the sketch below, but there might be a better way altogether: a containerized execution environment.
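A minimal sketch of that pip-module approach, with a hypothetical path for the environment:

- name: Install MySQL Python dependencies into a virtual environment
  ansible.builtin.pip:
    requirements: "{{ playbook_dir }}/requirements.txt"
    virtualenv: /home/devuser/ansible-test   # hypothetical location for the venv
    virtualenv_command: python3 -m venv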

Containerized Execution Environments and Ansible Content Navigator

If you're just getting started with Ansible today, or if your organization is using AAP or AWX, you might want to look at the latest Ansible content navigator tool: ansible-navigator. Ansible-navigator combines a lot of the separate CLI commands into a single executable and provides an optional TUI to drill down into playbook execution. Better still, it does away with the Python virtual environment and swaps in a more portable, modern containerized execution environment (EE). It's still on the developer to customize the execution environment, but the upside is that you can push the whole environment to a registry, and the Ansible content you write will run the same anywhere that container can run. This is how Red Hat's AAP and the upstream AWX work, so if you're using one of those, you'll want your dev environment to be consistent with your production automation platform. Developing automation with the same container image that the controller uses is the trick.

AAP comes with a few standard execution environments out of the box that automation developers can pull to their dev boxes. Each image is an Ansible-aware container with some base collections for running your playbooks. The base image I'm using in this example is quay.io/ansible/creator-ee. It ships a few base collections, ansible-core, and ansible-runner to execute the plays. All the usual container customizations apply here just as they would with any other container image. Future posts might go into using ansible-builder, but for today I'm sticking to plain vanilla container customization.
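If you want to poke around an execution environment before customizing it, ansible-navigator also has an images subcommand that lists the EE images in your local container storage and lets you drill into the collections and Python packages each one ships with:

ansible-navigator images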

Let's take that MySQL example. Here's a Containerfile we might use to get started running our community.mysql plays:

MySQLContainerfile

FROM quay.io/ansible/creator-ee

COPY requirements.txt .
COPY requirements.yml .
RUN microdnf install gcc python3-devel mysql-devel -y
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN ansible-galaxy collection install -r requirements.yml -p /usr/share/ansible/collections

Note: here we've offloaded those system-wide packages onto the container image instead of our own machines. I've also instructed ansible-galaxy to install the collection into the container's collections directory, so it persists in the image alongside the default collections like ansible.posix and kubernetes.core. That's good enough for me for now.

Save that to a Containerfile called MySQLContainerfile (or whatever you want to call it) and build your image. I'm using Podman here, but feel free to use Docker if that's your jam.

podman build -t registry.example.com/ansible-demo/mysql-ee:0.0.1 -f MySQLContainerfile
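Before going any further, a quick smoke test confirms the collection actually landed inside the image (the exact list and versions will vary):

podman run --rm registry.example.com/ansible-demo/mysql-ee:0.0.1 ansible-galaxy collection list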

Now we can create and test our plays using the new execution environment, and if all goes well, we'll push the image to our registry so other developers can use it and so it's available to AAP.

podman push registry.example.com/ansible-demo/mysql-ee:0.0.1

Let's start with a simple play that installs MariaDB (an open-source fork of MySQL), initializes a new schema, and adds a user that can connect with a password from localhost and from our Ansible controller.

Here’s the playbook itself:

mysqldb.yml

---
- name: Install and initialize a mysql database
  hosts: db
  become: true

  vars_files:
    - secrets.yml

  tasks:
    - name: Install SQL packages
      ansible.builtin.package:
        name: "{{ db_packages }}"
        state: present

    - name: Open Host Firewall for SQL Connections
      ansible.posix.firewalld:
        service: mysql
        permanent: true
        immediate: true
        state: enabled

    - name: Start SQL Server Service
      ansible.builtin.service:
        name: "{{ db_service }}"
        state: started

    - name: Create .my.cnf
      ansible.builtin.template:
        src: templates/my.j2
        dest: "{{ db_super_path }}.my.cnf"

    - name: Create Database
      community.mysql.mysql_db:
        login_unix_socket: /var/lib/mysql/mysql.sock
        name: "{{ db_name }}"
        state: present

    - name: Add user to {{ db_name }}
      community.mysql.mysql_user:
        login_unix_socket: /var/lib/mysql/mysql.sock
        name: "{{ db_user }}"
        password: "{{ db_pass }}"
        priv: '{{ db_name }}.*:ALL'
        host: "{{ item }}"
        state: present
      loop: "{{ db_hosts }}"

Next up is the secrets.yml file, which I'm decrypting by passing --vault-id to the ansible-navigator command. More on that in just a bit.

Here’s my secrets.yml with swapped out passwords. Please don’t use passwords this crummy 👎.

secrets.yml

db_user: "demo_user"
db_pass: "password123"
db_super_pass: "$uper$ecure"

Finally, we just have a simple template file to create root’s .my.cnf credential file in the fourth task.

templates/my.j2

[client]
user=root
password={{ db_super_pass }}

I'm including the secrets here because using ansible-vault with ansible-navigator can be a little tricky, but it's easy to demonstrate.

For the decryption, I've just set an environment variable:

export ANSIBLE_VAULT_PASSWORD=myverysecurepassword

And I have a simple bash script, passed to navigator, that just echoes it back out:

vault-pass.sh

#!/bin/bash

echo ${ANSIBLE_VAULT_PASSWORD}
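With the script in place (and marked executable), you can encrypt secrets.yml against that same password source so nothing has to be typed interactively:

chmod +x vault-pass.sh
ansible-vault encrypt --vault-id vault-pass.sh secrets.yml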

Now we can run our playbook with the following command, using the containerized execution environment we built and pushed earlier:

ansible-navigator run mysqldb.yml --eei registry.example.com/ansible-demo/mysql-ee:0.0.1 --vault-id "vault-pass.sh" -m stdout

PLAY [Install and initialize a mysql database] *********************************

TASK [Gathering Facts] *********************************************************
ok: [fedora1]

TASK [Install MySQL package] ***************************************************
changed: [fedora1]

TASK [Open Host Firewall for SQL Connections] **********************************
changed: [fedora1]

TASK [Start SQL Server Service] ************************************************
changed: [fedora1]

TASK [Copy .my.cnf] ************************************************************
changed: [fedora1]

TASK [Create Database] *********************************************************
changed: [fedora1]

TASK [Add user to demo_db] *****************************************************
changed: [fedora1] => (item=192.168.64.2)
changed: [fedora1] => (item=localhost)

PLAY RECAP *********************************************************************
fedora1                    : ok=8    changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0


Passing -m stdout displays the results in the traditional ansible-playbook format; to use the TUI instead, leave off -m stdout. You can also persist those flags in a settings file, as sketched below.
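A minimal sketch of a per-project ansible-navigator.yml that pins the image from this post and defaults to stdout output (key names vary a bit between ansible-navigator versions, so check yours):

ansible-navigator.yml

ansible-navigator:
  execution-environment:
    image: registry.example.com/ansible-demo/mysql-ee:0.0.1
    pull:
      policy: missing
  mode: stdout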

The TUI lets you drill into your play's execution.

Just type the number of the row you want to look into (use esc to go back).

  Play name.                                  Ok Changed  Unreachable  Failed  Skipped  Ignored  In progress  Task count   Progress
0│Install and initialize a mysql database      7       6            0       0        0        0            0           7   Complete

^b/PgUp page up         ^f/PgDn page down         ↑↓ scroll         esc back         [0-9] goto         :help help      Successful


Typing 0 here will take us into the play execution below:

  Result   Host      Number   Changed   Task                                           Task action                      Duration
0│Ok       fedora1        0   False     Gathering Facts                                gather_facts                           1s
1│Ok       fedora1        1   True      Install MySQL package                          ansible.builtin.package                9s
2│Ok       fedora1        2   True      Open Host Firewall for SQL Connections         ansible.posix.firewalld                0s
3│Ok       fedora1        3   True      Start SQL Server Service                       ansible.builtin.service                3s
4│Ok       fedora1        4   True      Copy .my.cnf                                   ansible.builtin.template               1s
5│Ok       fedora1        5   True      Create Database                                community.mysql.mysql_db               0s
6│Ok       fedora1        6   True      Add user to demo_db                            community.mysql.mysql_user             1s


^b/PgUp page up         ^f/PgDn page down         ↑↓ scroll         esc back         [0-9] goto         :help help      Successful


Now let’s look at line 5 to see more information on the Create Database task.

The output below shows us all of the parameters and results of a given task in YAML format:

Play name: Install and initialize a mysql database:5
Task name: Create Database
CHANGED: fedora1
 0│---                                                                                                                             
 1│duration: 0.392973                                                                                                              
 2│end: '2023-05-02T07:32:19.647181'                                                                                               
 3│event_loop: null                                                                                                                
 4│host: fedora1                                                                                                                   
 5│play: Install and initialize a mysql database                                                                                   
 6│play_pattern: db                                                                                                                
 7│playbook: /home/lxadmin/mysql/mysqldb.yml                                                                                       
 8│remote_addr: fedora1                                                                                                            
 9│res:                                                                                                                            
10│  _ansible_no_log: null                                                                                                         
11│  changed: true                                                                                                                 
12│  db: demo_db
13│  db_list:
14│  - demo_db
15│  executed_commands:
16│  - CREATE DATABASE `demo_db`
17│  invocation:
18│    module_args:
19│      ca_cert: null
20│      chdir: null
21│      check_hostname: null
22│      check_implicit_admin: false
23│      client_cert: null
24│      client_key: null
25│      collation: ''
^b/PgUp page up    ^f/PgDn page down    ↑↓ scroll    esc back    - previous    + next    [0-9] goto    :help help       Successful

Finally, let's test the MySQL connection from our Ansible controller, using the user we provisioned, to connect to the new database:

mysql -u demo_user -p -h fedora1
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.5.5-10.5.18-MariaDB MariaDB Server

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use demo_db;
Database changed

So there's a little more setup up front to get started with execution environments compared to the old Python virtual environments, but I think it's worth trying out, especially for teams who are really digging into AAP or AWX and want that consistent development environment.


Jon Gambino

Jon Gambino is a Lead Technical Consultant at Perficient with over 20 years of experience in small, medium, and enterprise organizations. He works with the US Cloud Platform team and specializes in Automation, Linux, and Python. Jon also holds a Bachelor of Science in Computer Science as well as Red Hat Delivery Specialist accreditations for Automation I and II.
