Ansible Roles Simplified

In this video I go over Ansible roles with an example installing and configuring the Apache HTTP Server.

requirements.yml

#ansible-galaxy  install -r requirements.yml  --roles-path ansible/roles/

roles:
  - src: https://github.com/thenathan-dot-net/ansibleroles.git
    name: apache

collections:
  # Install a collection from Ansible Galaxy.
  - name: community.libvirt
    source: https://galaxy.ansible.com
  - name: ansible.posix
    source: https://galaxy.ansible.com
  - name: community.general
    source: https://galaxy.ansible.com
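
Once the requirements are installed, the role can be applied from a play. This is a minimal sketch of how a role is used, not the playbook from the video; the `webservers` group name is a placeholder:

```yaml
# site.yml - apply the apache role installed by ansible-galaxy
---
- hosts: webservers
  become: true
  roles:
    - apache
```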

Ansible libvirt dynamic inventory

In this video I configure a dynamic inventory in Ansible for QEMU/KVM guests using community.libvirt.

https://docs.ansible.com/ansible/3/collections/community/libvirt/libvirt_qemu_connection.html

Some notes on this plugin:

  • Currently DOES NOT work with SELinux set to enforcing in the VM.
  • Requires the qemu-guest-agent installed in the VM.
  • Requires access to the qemu-ga commands guest-exec, guest-exec-status, guest-file-close, guest-file-open, guest-file-read, guest-file-write.

This works with remote hosts and Linux containers, but in the video I did this all locally.

First I installed the collection.

ansible-galaxy collection install community.libvirt

Then I created a dynamic inventory file.

$ cat kvm.yml
# Connect to qemu
plugin: community.libvirt.libvirt
uri: 'qemu:///system'

Note the URI would change for LXC or remote connections.
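
For example, these are standard libvirt connection URIs (the hostname below is a placeholder):

```yaml
# Local LXC containers
uri: 'lxc:///'

# Remote qemu/KVM host over SSH
uri: 'qemu+ssh://root@kvmhost.example.com/system'
```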

After the inventory is set up, test it:

ansible-inventory --inventory kvm.yml --list

You should see the info about your guest. If you get red output or an error, verify that the guest agent is running and that it has access to the guest-exec and guest-file commands.

A good way to test the guest agent is with a guest-exec of ls:

virsh qemu-agent-command "Name of your Guest VM" '{"execute": "guest-exec", "arguments": { "path": "/usr/bin/ls", "arg": [ "/" ], "capture-output": true }}'
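
guest-exec only starts the command and returns a PID, e.g. {"return":{"pid":1234}}. To see the captured output you then call guest-exec-status with that PID (1234 here is an example value); the out-data field in the reply is base64 encoded:

```shell
virsh qemu-agent-command "Name of your Guest VM" \
  '{"execute": "guest-exec-status", "arguments": { "pid": 1234 }}'

# out-data is base64; pipe it through base64 -d to read it
echo "YmluCmJvb3QK" | base64 -d
```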

If you have issues take a look at my previous video where I provisioned the guest https://thenathan.net/2022/09/29/virt-builder/.

Once your inventory is working you can connect with the console.

ansible-console --inventory kvm.yml -l "Name of your Guest VM"

I used -l to limit because I only had the one host running.

Now we are in the console and can run some commands to test:

ls

cat /etc/redhat-release

whoami
exit

This is the very simple playbook I used to test running playbooks against the inventory.

$ cat dnf_update_reboot.yml
---
- hosts: alma8
  gather_facts: false
  become: true
  any_errors_fatal: yes
  tasks:

  - name: DNF update the system
    dnf:
      name:  "*"
      state: latest

  - name: Install the latest version of yum-utils
    dnf:
      name: yum-utils
      state: latest

  - name: Reboot required
    command: "/usr/bin/needs-restarting -r"
    register: reboot_required
    ignore_errors: True
    changed_when: False
    failed_when: reboot_required.rc == 2

  - name: Rebooting
    reboot:
      post_reboot_delay: 60
    throttle: 1
    when: reboot_required.rc == 1 and ansible_facts['virtualization_role'] != 'host'


You can run the playbook with the ansible-playbook command:
ansible-playbook -i kvm.yml dnf_update_reboot.yml


virt-builder

The man page for virt-builder is a must read and it's packed full of info.

man virt-builder

You can see all the images available with

virt-builder --list

It's important to read the notes about the image:

virt-builder --notes alma-8.5

Let's build a virtual server.

time sudo virt-builder alma-8.5 --hostname alma8_server --install "@Server,qemu-guest-agent" --update --ssh-inject root:file:y.pub --firstboot-command 'nmcli device connect ens3' --edit '/etc/selinux/config: s/^SELINUX=.*/SELINUX=disabled/' --edit '/etc/sysconfig/qemu-ga: s/^BLACKLIST_RPC.*/#BLACKLIST_RPC/' --size 10G -o alma8_server.img

Once virt-builder has completed we can install the image.

sudo virt-install --name alma8_server --ram 4096 --vcpus=2 --disk path=alma8_server.img --import --noautoconsole --channel unix,mode=bind,target_type=virtio,name=org.qemu.guest_agent.0

If it all worked correctly we can get the IP from virsh.

virsh net-dhcp-leases default

We can test that the qemu agent is working and get the IP with:

virsh qemu-agent-command alma8 '{"execute":"guest-network-get-interfaces"}'

virt-builder Ansible or Bash

In this video I use an Ansible playbook and a bash script to deploy a guest virtual machine on QEMU/KVM and review some of the pros and cons of each.

Script and playbook I used:

build_alma8.sh

#!/bin/bash
builderimage=alma-8.5
serverimage=alma8_server.img
servername=alma8_server
clientimage=alma8_client.img
clientname=alma8_client
workingdirectory=~/kvm_images/

cd $workingdirectory

#Build Server
sudo virt-builder $builderimage --hostname $servername --install "@Server,qemu-guest-agent" --ssh-inject root:file:y.pub --firstboot-command 'nmcli device connect ens3'  --edit '/etc/selinux/config: s/^SELINUX=.*/SELINUX=disabled/' --edit '/etc/sysconfig/qemu-ga: s/^BLACKLIST_RPC.*/#BLACKLIST_RPC/' --size 10G -o $serverimage

#Virt Install Server
sudo virt-install --name $servername --ram 4096 --vcpus=2 --disk path=$serverimage  --import --noautoconsole --channel unix,mode=bind,target_type=virtio,name=org.qemu.guest_agent.0

#Virsh List
virsh list


build_alma8.yml

---
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    builderimage: alma-8.5
    serverimage: alma8_server.img
    servername: alma8_server
    clientimage: alma8_client.img
    clientname: alma8_client
    workingdirectory: ~/kvm_images/
    size: 10G
  tasks:

  - name: Build Server
    shell: |
        virt-builder "{{builderimage}}" --hostname "{{servername}}" --install "@Server,qemu-guest-agent" --ssh-inject root:file:y.pub --firstboot-command 'nmcli device connect ens3'  --edit '/etc/selinux/config: s/^SELINUX=.*/SELINUX=disabled/' --edit '/etc/sysconfig/qemu-ga: s/^BLACKLIST_RPC.*/#BLACKLIST_RPC/' --size "{{size}}" -o "{{serverimage}}"
    become: true
    args:
      chdir: "{{workingdirectory}}"
      creates: "{{serverimage}}"
    tags: virtbuilder

  - name: Virt Install Server
    shell: |
            virt-install --name "{{servername}}" --ram 4096 --vcpus=2 --disk path='{{serverimage}}'  --import --noautoconsole --channel unix,mode=bind,target_type=virtio,name=org.qemu.guest_agent.0
    become: true
    args:
      chdir: "{{workingdirectory}}"
    tags: virtinstall
   
  - name: List only running VMs
    community.libvirt.virt:
      command: list_vms
      state: running
    register: running_vms

  - name: Print running VMs
    debug:
      msg: "Running VMs {{running_vms}}"
    when: running_vms is defined

Yum and Dnf update and reboot with Ansible

In this video I cover some playbooks I have written to patch my Red Hat based CentOS VMs. The playbooks will enable EPEL, verify some packages/applications I use are installed, run a yum or dnf update, and reboot if a reboot is required.

The playbooks can be found below.

nathan@thenathan:~/ansible$ cat enable_epel.yml
---
- hosts: all
  gather_facts: True
  become: true
  strategy: free
  tasks:
  - name: Enable EPEL Repository on CentOS 8
    dnf:
      name: epel-release
      state: latest
    when: ansible_facts['os_family'] == 'RedHat' and ansible_facts['distribution_major_version'] >= '8'

  - name: Enable EPEL Repository on CentOS 7
    yum:
      name: epel-release
      state: latest
    when: ansible_facts['os_family'] == 'RedHat' and ansible_facts['distribution_major_version'] == '7'
nathan@thenathan:~/ansible$ cat std_packages.yml
---
- import_playbook: enable_epel.yml
- hosts: all
  gather_facts: false
  become: true
  strategy: free
  tasks:

  #RHEL based OS version 7 stuff
  - name: Packages major_version 7
    when: ansible_facts['distribution_major_version'] == "7"
    package:
      name: ['nmap-ncat', 'curl', 'rsync', 'sysstat', 'bind-utils', 'wget', 'bash-completion', 'mlocate', 'lsof', 'htop', 'sharutils', 'python2-psutil', 'yum-utils', 'ps_mem' ]
      state: present

  #RHEL based OS version 6 stuff
  - name: Packages major_version 6
    when: ansible_facts['distribution_major_version'] == "6"
    package:
      name: ['nc', 'curl', 'rsync', 'sysstat', 'bind-utils', 'wget', 'bash-completion', 'libselinux-python', 'lsof' ]
      state: present
nathan@thenathan:~/ansible$ cat yum_update_reboot.yml
---
- import_playbook: std_packages.yml
- hosts: all
  gather_facts: false
  become: true
  serial: 1
  any_errors_fatal: yes
  vars_prompt:
    name: "confirmation"
    prompt: "Are you sure you want to Update with reboots? Answer with 'YES'"
    default: "NO"
    private: no
  tasks:

  - name: Check Confirmation
    fail: msg="Playbook run confirmation failed"
    when: confirmation != "YES"

  - name: DNF update the system
    dnf:
      name:  "*"
      state: latest
    when: ansible_facts['os_family'] == 'RedHat' and ansible_facts['distribution_major_version'] >= '8'

  - name: Yum update the system
    yum:
      name: "*"
      state: latest
    when: ansible_facts['os_family'] == 'RedHat' and ansible_facts['distribution_major_version'] <= '7'

  - name: Reboot required
    command: "/usr/bin/needs-restarting -r"
    register: reboot_required
    ignore_errors: True
    changed_when: False
    failed_when: reboot_required.rc == 2
    when: ansible_facts['distribution_major_version'] == "7"

  - name: Rebooting
    reboot:
      post_reboot_delay: 60
    throttle: 1
    when: reboot_required.rc == 1 and ansible_facts['virtualization_role'] != 'host'

  - debug:
      var: reboot_required.rc
      verbosity: 2

  - name: Check the uptime post reboot
    shell: uptime
    register: UPTIME_POST_REBOOT
    when: reboot_required.rc == 1

  - debug: msg={{UPTIME_POST_REBOOT.stdout}}
    when: reboot_required.rc == 1

  - name: Wait for port 443 to become open on the host, don't start checking for 60 seconds
    wait_for:
      port: 443
      host: 0.0.0.0
      delay: 60
    when: "'web' in inventory_hostname"

Docker Compose and TIG stack

In this video I use Docker Compose to set up a TIG stack (Telegraf, InfluxDB, and Grafana).

(A copy of docker-compose.yml is also available for download as docker-compose.txt.)

$ cat docker-compose.yml
version: "2"
services:

  influxdb:
    container_name: influxdb
    image: influxdb:latest
    ports:
      - "8086:8086"
    user: "1000"
    volumes:
      - /home/nathan/tig_fun/tig-stack/volumes/influxdb:/var/lib/influxdb
    restart: always

  grafana:
    container_name: grafana
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      GF_SECURITY_ADMIN_PASSWORD: "secure"
      GF_PATHS_DATA: "/var/lib/grafana"
      GF_PATHS_LOGS: "/var/log/grafana"
    user: "1000"
    volumes:
      - /home/nathan/tig_fun/tig-stack/volumes/grafana:/var/lib/grafana
      - /home/nathan/tig_fun/tig-stack/volumes/grafana/plugins:/var/lib/grafana/plugins
      - /home/nathan/tig_fun/tig-stack/logs/grafana:/var/log/grafana
      - /home/nathan/tig_fun/tig-stack/conf/grafana_custom.ini:/etc/grafana/grafana.ini
    links:
      - influxdb
    restart: always

  telegraf:
    container_name: telegraf
    image: telegraf:latest
    network_mode: "host"
    user: "1000"
    volumes:
      - /home/nathan/tig_fun/tig-stack/conf/telegraf.conf:/etc/telegraf/telegraf.conf
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always

conf/telegraf.conf can be found at https://raw.githubusercontent.com/influxdata/telegraf/master/etc/telegraf.conf

conf/grafana_custom.ini can be found at https://raw.githubusercontent.com/grafana/grafana/master/conf/defaults.ini
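
If you only need Telegraf to ship metrics into the InfluxDB container, the relevant part of telegraf.conf is the InfluxDB output section. A minimal sketch, with the URL and database name assumed to match this stack rather than copied from my config:

```toml
# Send metrics to the InfluxDB container published on 8086
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"

# A couple of basic system metric inputs
[[inputs.cpu]]
[[inputs.mem]]
```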

Introduction to Ansible

In this video I will cover what Ansible is, how it works, its pros and cons, and where to find more information.

So what is Ansible? Ansible is an agentless automation and configuration tool. Ansible can also be used for workflow automation. It can do anything from provisioning servers to running one-off commands on many hosts or host groups in parallel.

Ansible is written in Python and uses SSH to execute commands on different systems. Ansible uses an inventory to group and nest systems.

In Ansible you use modules to create tasks that do things. You put tasks in playbooks, which are written in YAML.

An example of a one-off command would be:

ansible server1 -a "hostname"

Note: the default ad-hoc module is command. I also already have my .ansible.cfg and inventory set up, which I'll cover in a future video.
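
A minimal version of that setup might look like the following; the paths, group name, and host names here are placeholders, not my actual files:

```
# ~/.ansible.cfg
[defaults]
inventory = ~/ansible/hosts

# ~/ansible/hosts
[web]
server1
server2
```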


Now an example of a playbook I have written to do the same thing:

nathan@thenathan:~$ cat ansible/hostname.yml
---
- name: Get Hostname
  gather_facts: False
  hosts: all
  tasks:
  - name: Run Shell command hostname
    shell: "hostname"

nathan@thenathan:~$ ansible-playbook ansible/hostname.yml
server1 | SUCCESS | rc=0 >>
server1.thenathan.net

Let's cover some of the pros and cons. A pro and a con is that it's agentless. It's nice that you can use SSH and, for the most part, not have to install software or make firewall changes for off ports. That's also a con, because if a system is down or times out, the change or command won't be applied unless you rerun it.

Ansible's inventory is fantastic. It's very powerful in the way you can group and nest systems, which makes it a great tool if you want to run ad-hoc shell commands. That also makes Ansible a good fit for running alongside other tools like Chef.

Ansible does a good job of gathering information (AKA facts) from systems, which you can use to customize commands or configs.

Playbooks use YAML. It's up to you whether that's a pro or a con.

Ansible has great documentation, which is another pro. I find the documentation and examples better than SaltStack's and Chef's. You can find the documentation at https://docs.ansible.com/. This is the first in a series of videos I want to do on Ansible. For the code and commands I used in this video check out my blog post on thenathan.net. Please like, share and subscribe.

SSH Keys Simplified

In this video I try to simplify ssh keys, ssh login with/without a password and ssh agent.
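
The basic key workflow from the video can be sketched in three commands. The file name, comment, and host below are placeholders; in practice you would usually protect the key with a passphrase rather than -N "":

```shell
# Generate an ed25519 key pair (creates demo_key and demo_key.pub)
ssh-keygen -t ed25519 -f demo_key -N "" -C "demo@example"

# Copy the public key into the server's authorized_keys (host is a placeholder):
#   ssh-copy-id -i demo_key.pub user@server.example.com

# Then log in using the key:
#   ssh -i demo_key user@server.example.com
```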

.bashrc

#check that it's not sftp
if [ "$SSH_TTY" ]; then
  #set up ssh agent for keys.
  if [ -z "$SSH_AUTH_SOCK" ] ; then
    eval `ssh-agent -s`
    ssh-add
  fi
fi

.bash_logout

eval $(ssh-agent -k)

CentOS 7 Minimal Install

In this video I install CentOS 7 Minimal from ISO in QEMU/KVM.

After the install I configure the network to start on boot, bring up the network interface, do an OS update, and install some necessary packages like bind-utils, net-tools and bash-completion.

I also configure tuned with the virtual-guest profile, enable sshd at boot, and set up passwordless sudo.
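
Those steps translate roughly to the following commands on a CentOS 7 guest. This is a sketch; the interface name (eth0) and user (nathan) are assumptions and will differ on your system:

```shell
# Bring the interface up now and have it start on boot
nmcli device connect eth0
nmcli connection modify eth0 connection.autoconnect yes

# OS update and the extra packages
yum -y update
yum -y install bind-utils net-tools bash-completion

# Tune for a VM guest, enable sshd at boot, and allow passwordless sudo
tuned-adm profile virtual-guest
systemctl enable sshd
echo 'nathan ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/nathan
```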