Ansible libvirt dynamic inventory

In this video I configure a dynamic inventory in Ansible for qemu KVM guests using community.libvirt.

https://docs.ansible.com/ansible/3/collections/community/libvirt/libvirt_qemu_connection.html

Some notes on this plugin:

  • Currently DOES NOT work with SELinux set to enforcing in the VM.
  • Requires the qemu-guest-agent installed in the VM.
  • Requires access to the qemu-ga commands guest-exec, guest-exec-status, guest-file-close, guest-file-open, guest-file-read, and guest-file-write.

This works with remote hosts and Linux containers, but in the video I did this all locally.
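On RHEL-family guests, the qemu-ga RPCs listed above are often blocked out of the box through a BLACKLIST_RPC line in /etc/sysconfig/qemu-ga. A minimal sketch of clearing it inside the VM, using a throwaway sample file since the exact contents vary by distro:

```shell
# Work on a throwaway copy; on a real guest you would edit
# /etc/sysconfig/qemu-ga directly (sample contents assumed here)
conf=$(mktemp)
cat > "$conf" <<'EOF'
BLACKLIST_RPC=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-exec,guest-exec-status
EOF

# Comment the blacklist out, the same fix the virt-builder --edit
# expression applies at build time later in these notes
sed -i 's/^BLACKLIST_RPC.*/#BLACKLIST_RPC/' "$conf"
cat "$conf"
```

After changing the real file, restart the agent with systemctl restart qemu-guest-agent so the new RPC list takes effect.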

First I installed the collection.

ansible-galaxy collection install community.libvirt

Then I created a dynamic inventory:

$ cat kvm.yml
# Connect to qemu
plugin: community.libvirt.libvirt
uri: 'qemu:///system'

Note the URI would change for LXC or remote connections.
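For example, a remote hypervisor reached over SSH would use a qemu+ssh URI (the user and host below are placeholders), and local Linux containers would use an lxc URI:

```yaml
# Connect to a remote hypervisor over SSH (placeholder user/host)
plugin: community.libvirt.libvirt
uri: 'qemu+ssh://user@kvmhost.example.com/system'

# For local Linux containers the URI would instead be:
# uri: 'lxc:///'
```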

After the inventory is set up, test it:

ansible-inventory --inventory kvm.yml --list


You should see the info about your guest. If you get red output or an error, verify that the guest agent is running and that it has access to the guest-exec and guest-file commands.

A good way to test the guest agent is with a guest-exec of ls:

virsh qemu-agent-command "Name of your Guest VM" '{"execute": "guest-exec", "arguments": { "path": "/usr/bin/ls", "arg": [ "/" ], "capture-output": true }}'
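Note that guest-exec only returns a PID; the captured output comes from a follow-up guest-exec-status call, whose out-data field is base64 encoded. A sketch of decoding it, using a canned response since the real one comes from virsh (the PID and the sample values are illustrative):

```shell
# Canned guest-exec-status reply; on a real host it would come from:
#   virsh qemu-agent-command "Name of your Guest VM" \
#     '{"execute":"guest-exec-status","arguments":{"pid":1234}}'
response='{"return":{"exitcode":0,"exited":true,"out-data":"YmluCmJvb3QKZXRjCg=="}}'

# Pull out the base64 field with sed and decode it (jq would be tidier if installed)
out=$(printf '%s' "$response" | sed 's/.*"out-data":"\([^"]*\)".*/\1/')
printf '%s' "$out" | base64 -d
```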

If you have issues take a look at my previous video where I provisioned the guest https://thenathan.net/2022/09/29/virt-builder/.

Once your inventory is working you can connect with ansible-console.

ansible-console --inventory kvm.yml -l "Name of your Guest VM"

I used -l to limit because I only had the one host running.

Now we are in the console and can run some commands to test:

ls

cat /etc/redhat-release

whoami
exit

This is the very simple playbook I used to test playbooks running against the inventory.

$ cat dnf_update_reboot.yml
---
- hosts: alma8
  gather_facts: false
  become: true
  any_errors_fatal: yes
  tasks:

  - name: DNF update the system
    dnf:
      name:  "*"
      state: latest

  - name: Install the latest version of yum-utils
    dnf:
      name: yum-utils
      state: latest

  - name: Reboot required
    command: "/usr/bin/needs-restarting -r"
    register: reboot_required
    ignore_errors: True
    changed_when: False
    failed_when: reboot_required.rc == 2

  - name: Rebooting
    reboot:
      post_reboot_delay: 60
    throttle: 1
    when: reboot_required.rc == 1 and ansible_facts['virtualization_role'] != 'host'


You can run the playbook with the ansible-playbook command:
ansible-playbook -i kvm.yml dnf_update_reboot.yml


virt-builder

The man page for virt-builder is a must-read and it's packed full of info.

man virt-builder

You can see all the available images with:

virt-builder --list

It's important to read the notes about the image:

virt-builder --notes alma-8.5

Let's build a virtual server.

time sudo virt-builder alma-8.5 --hostname alma8_server --install "@Server,qemu-guest-agent" --update --ssh-inject root:file:y.pub --firstboot-command 'nmcli device connect ens3' --edit '/etc/selinux/config: s/^SELINUX=.*/SELINUX=disabled/' --edit '/etc/sysconfig/qemu-ga: s/^BLACKLIST_RPC.*/#BLACKLIST_RPC/' --size 10G -o alma8_server.img

Once virt-builder has completed, we can install the image.

sudo virt-install --name alma8_server --ram 4096 --vcpus=2 --disk path=alma8_server.img --import --noautoconsole --channel unix,mode=bind,target_type=virtio,name=org.qemu.guest_agent.0

If it all worked correctly, we can get the IP from virsh.

virsh net-dhcp-leases default
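If you just want the bare IP for scripting, the lease table can be trimmed with awk. A sketch against canned output, since the real table comes from libvirt (the timestamp, MAC, and address here are made up):

```shell
# Canned 'virsh net-dhcp-leases default' output (layout follows libvirt's
# table; the values are invented for illustration)
leases=' Expiry Time           MAC address         Protocol   IP address           Hostname
--------------------------------------------------------------------------------------
 2022-09-29 12:00:00   52:54:00:aa:bb:cc   ipv4       192.168.122.100/24   alma8_server'

# The address sits in column 5 in CIDR form; split off the prefix length
printf '%s\n' "$leases" | awk '/ipv4/ { split($5, a, "/"); print a[1] }'
```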

We can test that the qemu agent is working and get the IP with:

virsh qemu-agent-command alma8 '{"execute":"guest-network-get-interfaces"}'

virt-builder Ansible or Bash

In this video I use an Ansible playbook and a bash script to deploy a guest virtual machine on qemu KVM and review some of the pros and cons of each.

Script and playbook I used:

build_alma8.sh

#!/bin/bash
builderimage=alma-8.5
serverimage=alma8_server.img
servername=alma8_server
clientimage=alma8_client.img
clientname=alma8_client
workingdirectory=~/kvm_images/

cd "$workingdirectory" || exit 1

#Build Server
sudo virt-builder $builderimage --hostname $servername --install "@Server,qemu-guest-agent" --ssh-inject root:file:y.pub --firstboot-command 'nmcli device connect ens3'  --edit '/etc/selinux/config: s/^SELINUX=.*/SELINUX=disabled/' --edit '/etc/sysconfig/qemu-ga: s/^BLACKLIST_RPC.*/#BLACKLIST_RPC/' --size 10G -o $serverimage

#Virt Install Server
sudo virt-install --name $servername --ram 4096 --vcpus=2 --disk path=$serverimage  --import --noautoconsole --channel unix,mode=bind,target_type=virtio,name=org.qemu.guest_agent.0

#Virsh List
virsh list


build_alma8.yml

---
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    builderimage: alma-8.5
    serverimage: alma8_server.img
    servername: alma8_server
    clientimage: alma8_client.img
    clientname: alma8_client
    workingdirectory: ~/kvm_images/
    size: 10G
  tasks:

  - name: Build Server
    shell: |
        virt-builder "{{builderimage}}" --hostname "{{servername}}" --install "@Server,qemu-guest-agent" --ssh-inject root:file:y.pub --firstboot-command 'nmcli device connect ens3'  --edit '/etc/selinux/config: s/^SELINUX=.*/SELINUX=disabled/' --edit '/etc/sysconfig/qemu-ga: s/^BLACKLIST_RPC.*/#BLACKLIST_RPC/' --size "{{size}}" -o "{{serverimage}}"
    become: true
    args:
      chdir: "{{workingdirectory}}"
      creates: "{{serverimage}}"
    tags: virtbuilder

  - name: Virt Install Server
    shell: |
            virt-install --name "{{servername}}" --ram 4096 --vcpus=2 --disk path='{{serverimage}}'  --import --noautoconsole --channel unix,mode=bind,target_type=virtio,name=org.qemu.guest_agent.0
    become: true
    args:
      chdir: "{{workingdirectory}}"
    tags: virtinstall
   
  - name: List only running VMs
    community.libvirt.virt:
      command: list_vms
      state: running
    register: running_vms

  - name: Print running VMs
    debug:
      msg: "Running VMs {{ running_vms }}"
    when: running_vms is defined