ansible-playbook 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.5 (default, Nov 14 2023, 16:14:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
Using /etc/ansible/ansible.cfg as config file
[WARNING]: running playbook inside collection fedora.linux_system_roles
Skipping callback 'actionable', as we already have a stdout callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'null', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
Skipping callback 'selective', as we already have a stdout callback.
Skipping callback 'skippy', as we already have a stdout callback.
Skipping callback 'stderr', as we already have a stdout callback.
Skipping callback 'unixy', as we already have a stdout callback.
Skipping callback 'yaml', as we already have a stdout callback.

PLAYBOOK: tests_swap.yml *******************************************************
1 plays in /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml

PLAY [Test management of swap] *************************************************

TASK [Gathering Facts] *********************************************************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:2
Tuesday 22 July 2025 08:36:48 -0400 (0:00:00.272) 0:00:00.272 **********
ok: [managed-node12]
META: ran handlers

TASK [Include role to ensure packages are installed] ***************************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:10
Tuesday 22 July 2025 08:36:52 -0400 (0:00:04.012) 0:00:04.285 **********

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2
Tuesday 22 July 2025 08:36:52 -0400 (0:00:00.219) 0:00:04.505 **********
included: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed-node12

TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
Tuesday 22 July 2025 08:36:53 -0400 (0:00:00.698) 0:00:05.203 **********
skipping: [managed-node12] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
Tuesday 22 July 2025 08:36:53 -0400 (0:00:00.591) 0:00:05.795 **********
skipping: [managed-node12] => (item=RedHat.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item":
"RedHat.yml", "skip_reason": "Conditional result was False" } skipping: [managed-node12] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "item": "CentOS.yml", "skip_reason": "Conditional result was False" } ok: [managed-node12] => (item=CentOS_7.yml) => { "ansible_facts": { "__storage_blivet_diskvolume_mkfs_option_map": { "ext2": "-F", "ext3": "-F", "ext4": "-F" }, "blivet_package_list": [ "python-enum34", "python-blivet3", "libblockdev-crypto", "libblockdev-dm", "libblockdev-lvm", "libblockdev-mdraid", "libblockdev-swap", "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}" ] }, "ansible_included_var_files": [ "/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_7.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_7.yml" } skipping: [managed-node12] => (item=CentOS_7.9.yml) => { "ansible_loop_var": "item", "changed": false, "item": "CentOS_7.9.yml", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Check if system is ostree] *********** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25 Tuesday 22 July 2025 08:36:54 -0400 (0:00:00.855) 0:00:06.651 ********** ok: [managed-node12] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30 Tuesday 22 July 2025 08:36:56 -0400 (0:00:02.175) 0:00:08.826 ********** ok: [managed-node12] => { "ansible_facts": { "__storage_is_ostree": false }, "changed": false } TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:5 Tuesday 22 July 2025 08:36:57 -0400 (0:00:00.268) 0:00:09.095 ********** ok: [managed-node12] => { "ansible_facts": { "_storage_pools_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:9 Tuesday 22 July 2025 08:36:57 -0400 (0:00:00.084) 0:00:09.179 ********** ok: [managed-node12] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13 Tuesday 22 July 2025 08:36:57 -0400 (0:00:00.186) 0:00:09.366 ********** included: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed-node12 TASK [fedora.linux_system_roles.storage : Make sure blivet is available] ******* task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2 Tuesday 22 July 2025 08:36:58 -0400 (0:00:00.748) 0:00:10.114 ********** ok: [managed-node12] => { "changed": false, "rc": 0, "results": [ "python-enum34-1.0.4-1.el7.noarch providing python-enum34 is already installed", "1:python2-blivet3-3.1.3-3.el7.noarch providing python-blivet3 is already installed", "libblockdev-crypto-2.18-5.el7.x86_64 providing libblockdev-crypto is already installed", "libblockdev-dm-2.18-5.el7.x86_64 providing libblockdev-dm is already 
installed", "libblockdev-lvm-2.18-5.el7.x86_64 providing libblockdev-lvm is already installed", "libblockdev-mdraid-2.18-5.el7.x86_64 providing libblockdev-mdraid is already installed", "libblockdev-swap-2.18-5.el7.x86_64 providing libblockdev-swap is already installed", "libblockdev-2.18-5.el7.x86_64 providing libblockdev is already installed" ] } TASK [fedora.linux_system_roles.storage : Show storage_pools] ****************** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:9 Tuesday 22 July 2025 08:37:03 -0400 (0:00:05.535) 0:00:15.650 ********** ok: [managed-node12] => { "storage_pools | d([])": [] } TASK [fedora.linux_system_roles.storage : Show storage_volumes] **************** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14 Tuesday 22 July 2025 08:37:03 -0400 (0:00:00.237) 0:00:15.887 ********** ok: [managed-node12] => { "storage_volumes | d([])": [] } TASK [fedora.linux_system_roles.storage : Get required packages] *************** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19 Tuesday 22 July 2025 08:37:04 -0400 (0:00:00.336) 0:00:16.224 ********** ok: [managed-node12] => { "actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } TASK [fedora.linux_system_roles.storage : Enable copr repositories if needed] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32 Tuesday 22 July 2025 08:37:06 -0400 (0:00:01.968) 0:00:18.193 ********** included: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml for managed-node12 TASK [fedora.linux_system_roles.storage : Check if the COPR support packages should be installed] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:2 Tuesday 22 July 2025 08:37:06 -0400 (0:00:00.497) 0:00:18.690 ********** TASK [fedora.linux_system_roles.storage : Make sure COPR support packages are present] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:13 Tuesday 22 July 2025 08:37:06 -0400 (0:00:00.128) 0:00:18.819 ********** skipping: [managed-node12] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Enable COPRs] ************************ task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:19 Tuesday 22 July 2025 08:37:06 -0400 (0:00:00.106) 0:00:18.925 ********** TASK [fedora.linux_system_roles.storage : Make sure required packages are installed] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38 Tuesday 22 July 2025 08:37:06 -0400 (0:00:00.073) 0:00:18.998 ********** ok: [managed-node12] => { "changed": false, "rc": 0, "results": [ "kpartx-0.4.9-136.el7_9.x86_64 providing kpartx is already installed" ] } TASK [fedora.linux_system_roles.storage : Get service facts] ******************* task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52 Tuesday 22 July 2025 08:37:08 -0400 (0:00:01.142) 0:00:20.141 ********** ok: [managed-node12] => { "ansible_facts": { "services": { 
"NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "arp-ethers.service": { "name": "arp-ethers.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "blivet.service": { "name": "blivet.service", "source": "systemd", "state": "inactive", "status": "static" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "brandbot.service": { "name": "brandbot.service", "source": "systemd", "state": "inactive", "status": "static" }, "chrony-dnssrv@.service": { "name": "chrony-dnssrv@.service", "source": "systemd", "state": "unknown", "status": "static" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "console-shell.service": { "name": "console-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "cpupower.service": { "name": "cpupower.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "static" }, "dbus-org.freedesktop.import1.service": { "name": "dbus-org.freedesktop.import1.service", "source": "systemd", "state": "inactive", "status": "static" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "static" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "static" }, "dbus-org.freedesktop.machine1.service": { "name": "dbus-org.freedesktop.machine1.service", "source": "systemd", "state": "inactive", "status": "static" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": 
"dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "static" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "running", "status": "static" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dmraid-activation.service": { "name": "dmraid-activation.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "inactive", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "unknown" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "halt-local.service": { "name": "halt-local.service", "source": "systemd", "state": "inactive", "status": "static" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "iprdump.service": { "name": "iprdump.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "iprinit.service": { "name": "iprinit.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "iprupdate.service": { "name": "iprupdate.service", "source": "systemd", "state": "inactive", "status": 
"disabled" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-lvmetad.service": { "name": "lvm2-lvmetad.service", "source": "systemd", "state": "running", "status": "static" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "lvm2-pvscan@.service": { "name": "lvm2-pvscan@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdadm-grow-continue@.service": { "name": "mdadm-grow-continue@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdadm-last-resort@.service": { "name": "mdadm-last-resort@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdcheck_continue.service": { "name": "mdcheck_continue.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdcheck_start.service": { "name": "mdcheck_start.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmon@.service": { "name": "mdmon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdmonitor-oneshot.service": { "name": "mdmonitor-oneshot.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmonitor.service": { "name": "mdmonitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "messagebus.service": { "name": "messagebus.service", "source": "systemd", "state": "active", "status": "static" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "netconsole": { "name": "netconsole", "source": "sysv", "state": "stopped", "status": "disabled" }, "network": { "name": "network", "source": "sysv", "state": "running", "status": "enabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "unknown" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-config.service": { "name": "nfs-config.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-idmap.service": { "name": "nfs-idmap.service", "source": "systemd", "state": "inactive", "status": "static" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-lock.service": { "name": "nfs-lock.service", "source": "systemd", "state": "inactive", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-rquotad.service": { "name": "nfs-rquotad.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-secure.service": { "name": "nfs-secure.service", "source": "systemd", "state": "inactive", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs.service": { "name": "nfs.service", "source": "systemd", "state": "inactive", "status": "disabled" 
}, "nfslock.service": { "name": "nfslock.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-halt.service": { "name": "plymouth-halt.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "plymouth-kexec.service": { "name": "plymouth-kexec.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "plymouth-poweroff.service": { "name": "plymouth-poweroff.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "plymouth-quit.service": { "name": "plymouth-quit.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "plymouth-read-write.service": { "name": "plymouth-read-write.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "plymouth-reboot.service": { "name": "plymouth-reboot.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "plymouth-switch-root.service": { "name": "plymouth-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "running", "status": "static" }, "postfix.service": { "name": "postfix.service", "source": "systemd", "state": "running", "status": "enabled" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon.service": { "name": "quotaon.service", "source": "systemd", "state": "inactive", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rdisc.service": { "name": "rdisc.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rhel-autorelabel-mark.service": { "name": "rhel-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rhel-autorelabel.service": { "name": "rhel-autorelabel.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rhel-configure.service": { "name": "rhel-configure.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rhel-dmesg.service": { "name": "rhel-dmesg.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rhel-domainname.service": { "name": "rhel-domainname.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rhel-import-state.service": { "name": "rhel-import-state.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rhel-loadmodules.service": { "name": "rhel-loadmodules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rhel-readonly.service": { "name": "rhel-readonly.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-rquotad.service": { "name": "rpc-rquotad.service", "source": "systemd", "state": "inactive", 
"status": "disabled" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpcgssd.service": { "name": "rpcgssd.service", "source": "systemd", "state": "inactive", "status": "static" }, "rpcidmapd.service": { "name": "rpcidmapd.service", "source": "systemd", "state": "inactive", "status": "static" }, "rsyncd.service": { "name": "rsyncd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyncd@.service": { "name": "rsyncd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-policy-migrate-local-changes@.service": { "name": "selinux-policy-migrate-local-changes@.service", "source": "systemd", "state": "unknown", "status": "static" }, "selinux-policy-migrate-local-changes@targeted.service": { "name": "selinux-policy-migrate-local-changes@targeted.service", "source": "systemd", "state": "stopped", "status": "unknown" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "unknown" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "static" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-plymouth.service": { "name": "systemd-ask-password-plymouth.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bootchart.service": { "name": "systemd-bootchart.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-resume@.service": { "name": "systemd-hibernate-resume@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": 
{ "name": "systemd-hostnamed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-importd.service": { "name": "systemd-importd.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-machined.service": { "name": "systemd-machined.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-nspawn@.service": { "name": "systemd-nspawn@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-quotacheck.service": { "name": "systemd-quotacheck.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-readahead-collect.service": { "name": "systemd-readahead-collect.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-readahead-done.service": { "name": "systemd-readahead-done.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "systemd-readahead-drop.service": { "name": "systemd-readahead-drop.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "systemd-readahead-replay.service": { "name": "systemd-readahead-replay.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill@.service": { "name": "systemd-rfkill@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-shutdownd.service": { "name": "systemd-shutdownd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": 
"systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "teamd@.service": { "name": "teamd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "tuned.service": { "name": "tuned.service", "source": "systemd", "state": "running", "status": "enabled" }, "wpa_supplicant.service": { "name": "wpa_supplicant.service", "source": "systemd", "state": "inactive", "status": "disabled" } } }, "changed": false } TASK [fedora.linux_system_roles.storage : Set storage_cryptsetup_services] ***** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58 Tuesday 22 July 2025 08:37:09 -0400 (0:00:01.653) 0:00:21.794 ********** ok: [managed-node12] => { "ansible_facts": { "storage_cryptsetup_services": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Mask the systemd cryptsetup services] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:64 Tuesday 22 July 2025 08:37:09 -0400 (0:00:00.099) 0:00:21.894 ********** TASK [fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70 Tuesday 22 July 2025 08:37:09 -0400 (0:00:00.060) 0:00:21.954 ********** ok: [managed-node12] => { "actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } TASK [fedora.linux_system_roles.storage : Workaround for udev issue on some platforms] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:85 Tuesday 22 July 2025 08:37:10 -0400 (0:00:00.720) 
0:00:22.675 ********** skipping: [managed-node12] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Check if /etc/fstab is present] ****** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92 Tuesday 22 July 2025 08:37:10 -0400 (0:00:00.103) 0:00:22.778 ********** ok: [managed-node12] => { "changed": false, "stat": { "atime": 1753187042.328937, "attr_flags": "e", "attributes": [ "extents" ], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "4db69458c23204aa354c1fce8c724ba0713d6623", "ctime": 1718881114.40265, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 131078, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1718881114.40265, "nlink": 1, "path": "/etc/fstab", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 1207, "uid": 0, "version": "18446744072852913878", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Add fingerprint to /etc/fstab if present] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:97 Tuesday 22 July 2025 08:37:11 -0400 (0:00:00.613) 0:00:23.392 ********** skipping: [managed-node12] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Unmask the systemd cryptsetup services] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:115 Tuesday 22 July 2025 08:37:11 -0400 (0:00:00.120) 0:00:23.513 ********** TASK [fedora.linux_system_roles.storage : Show blivet_output] ****************** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:121 Tuesday 22 July 2025 08:37:11 -0400 (0:00:00.113) 0:00:23.626 ********** ok: [managed-node12] => { "blivet_output": { "actions": [], "changed": false, "crypts": [], "failed": false, "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } } TASK [fedora.linux_system_roles.storage : Set the list of pools for test verification] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:130 Tuesday 22 July 2025 08:37:11 -0400 (0:00:00.218) 0:00:23.845 ********** ok: [managed-node12] => { "ansible_facts": { "_storage_pools_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Set the list of volumes for test verification] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:134 Tuesday 22 July 2025 08:37:12 -0400 (0:00:00.180) 0:00:24.026 ********** ok: [managed-node12] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Remove obsolete mounts] ************** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:150 Tuesday 22 July 2025 08:37:12 -0400 (0:00:00.128) 0:00:24.154 ********** TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: 
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:161 Tuesday 22 July 2025 08:37:12 -0400 (0:00:00.174) 0:00:24.329 ********** skipping: [managed-node12] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Set up new/current mounts] *********** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:166 Tuesday 22 July 2025 08:37:12 -0400 (0:00:00.151) 0:00:24.480 ********** TASK [fedora.linux_system_roles.storage : Manage mount ownership/permissions] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:177 Tuesday 22 July 2025 08:37:12 -0400 (0:00:00.132) 0:00:24.613 ********** TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:189 Tuesday 22 July 2025 08:37:12 -0400 (0:00:00.195) 0:00:24.808 ********** skipping: [managed-node12] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197 Tuesday 22 July 2025 08:37:12 -0400 (0:00:00.173) 0:00:24.982 ********** ok: [managed-node12] => { "changed": false, "stat": { "atime": 1753187375.3991745, "attr_flags": "e", "attributes": [ "extents" ], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1718879272.062, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 131079, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0600", "mtime": 1718879026.308, "nlink": 1, "path": "/etc/crypttab", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 0, "uid": 0, "version": "18446744072852913879", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Manage /etc/crypttab to account for changes we just made] *** task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:202 Tuesday 22 July 2025 08:37:13 -0400 (0:00:00.779) 0:00:25.762 ********** TASK [fedora.linux_system_roles.storage : Update facts] ************************ task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224 Tuesday 22 July 2025 08:37:13 -0400 (0:00:00.089) 0:00:25.852 ********** ok: [managed-node12] TASK [Mark tasks to be skipped] ************************************************ task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:14 Tuesday 22 July 2025 08:37:14 -0400 (0:00:01.090) 0:00:26.942 ********** ok: [managed-node12] => { "ansible_facts": { "storage_skip_checks": [ "blivet_available", "packages_installed", "service_facts" ] }, "changed": false } TASK [Get unused disks for swap] *********************************************** task path: 
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:22
Tuesday 22 July 2025 08:37:14 -0400 (0:00:00.054) 0:00:26.996 **********
included: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml for managed-node12

TASK [Ensure test packages] ****************************************************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:2
Tuesday 22 July 2025 08:37:15 -0400 (0:00:00.092) 0:00:27.088 **********
ok: [managed-node12] => {
    "changed": false,
    "rc": 0,
    "results": [
        "util-linux-2.23.2-65.el7_9.1.x86_64 providing util-linux is already installed"
    ]
}

TASK [Find unused disks in the system] *****************************************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:11
Tuesday 22 July 2025 08:37:16 -0400 (0:00:01.309) 0:00:28.398 **********
ok: [managed-node12] => {
    "changed": false,
    "disks": "Unable to find unused disk",
    "info": [
        "Line: NAME=\"/dev/xvda\" TYPE=\"disk\" SIZE=\"268435456000\" FSTYPE=\"\" LOG-SEC=\"512\"",
        "Line: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"268434390528\" FSTYPE=\"ext4\" LOG-SEC=\"512\"",
        "Line type [part] is not disk: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"268434390528\" FSTYPE=\"ext4\" LOG-SEC=\"512\"",
        "Disk [/dev/xvda] attrs [{'fstype': '', 'type': 'disk', 'ssize': '512', 'size': '268435456000'}] size is greater than requested"
    ]
}

TASK [Debug why there are no unused disks] *************************************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:20
Tuesday 22 July 2025 08:37:17 -0400 (0:00:01.228) 0:00:29.626 **********
ok: [managed-node12] => {
    "changed": false,
    "cmd": "set -x\nexec 1>&2\nlsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC\njournalctl -ex\n",
    "delta": "0:00:00.040453",
    "end": "2025-07-22 08:37:18.509023",
    "rc": 0,
    "start": "2025-07-22 08:37:18.468570"
}

STDERR:
+ exec
+ lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
NAME="/dev/xvda" TYPE="disk" SIZE="268435456000" FSTYPE="" LOG-SEC="512"
NAME="/dev/xvda1" TYPE="part" SIZE="268434390528" FSTYPE="ext4" LOG-SEC="512"
+ journalctl -ex
-- Logs begin at Tue 2025-07-22 08:23:47 EDT, end at Tue 2025-07-22 08:37:18 EDT. --
Jul 22 08:23:50 localhost.localdomain systemd[1]: Starting Reload Configuration from the Real Root...
-- Subject: Unit initrd-parse-etc.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit initrd-parse-etc.service has begun starting up.
Jul 22 08:23:50 localhost.localdomain systemd[1]: Reloading.
Jul 22 08:23:51 localhost.localdomain systemd[1]: Started Reload Configuration from the Real Root.
-- Subject: Unit initrd-parse-etc.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit initrd-parse-etc.service has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:51 localhost.localdomain systemd[1]: Reached target Initrd File Systems.
-- Subject: Unit initrd-fs.target has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit initrd-fs.target has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:51 localhost.localdomain systemd[1]: Reached target Initrd Default Target.
-- Subject: Unit initrd.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit initrd.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:51 localhost.localdomain systemd[1]: Starting dracut pre-pivot and cleanup hook... -- Subject: Unit dracut-pre-pivot.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit dracut-pre-pivot.service has begun starting up. Jul 22 08:23:51 localhost.localdomain systemd[1]: Started dracut pre-pivot and cleanup hook. -- Subject: Unit dracut-pre-pivot.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit dracut-pre-pivot.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:51 localhost.localdomain systemd[1]: Starting Cleaning Up and Shutting Down Daemons... -- Subject: Unit initrd-cleanup.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit initrd-cleanup.service has begun starting up. Jul 22 08:23:51 localhost.localdomain systemd[1]: Starting Plymouth switch root service... -- Subject: Unit plymouth-switch-root.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit plymouth-switch-root.service has begun starting up. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped dracut pre-pivot and cleanup hook. -- Subject: Unit dracut-pre-pivot.service has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit dracut-pre-pivot.service has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped target Initrd Default Target. -- Subject: Unit initrd.target has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit initrd.target has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped target Basic System. -- Subject: Unit basic.target has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit basic.target has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped target Slices. -- Subject: Unit slices.target has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit slices.target has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped target System Initialization. -- Subject: Unit sysinit.target has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sysinit.target has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopping udev Kernel Device Manager... -- Subject: Unit systemd-udevd.service has begun shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-udevd.service has begun shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped target Local File Systems. 
-- Subject: Unit local-fs.target has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit local-fs.target has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped target Paths. -- Subject: Unit paths.target has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit paths.target has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped Apply Kernel Variables. -- Subject: Unit systemd-sysctl.service has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-sysctl.service has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped target Sockets. -- Subject: Unit sockets.target has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sockets.target has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped target Timers. -- Subject: Unit timers.target has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit timers.target has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped target Remote File Systems. -- Subject: Unit remote-fs.target has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit remote-fs.target has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped target Remote File Systems (Pre). -- Subject: Unit remote-fs-pre.target has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit remote-fs-pre.target has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped dracut initqueue hook. -- Subject: Unit dracut-initqueue.service has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit dracut-initqueue.service has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped udev Coldplug all Devices. -- Subject: Unit systemd-udev-trigger.service has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-udev-trigger.service has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped target Swap. -- Subject: Unit swap.target has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit swap.target has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped udev Kernel Device Manager. -- Subject: Unit systemd-udevd.service has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-udevd.service has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped dracut pre-udev hook. -- Subject: Unit dracut-pre-udev.service has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit dracut-pre-udev.service has finished shutting down. 
Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped dracut cmdline hook. -- Subject: Unit dracut-cmdline.service has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit dracut-cmdline.service has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped Create Static Device Nodes in /dev. -- Subject: Unit systemd-tmpfiles-setup-dev.service has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-tmpfiles-setup-dev.service has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Stopped Create list of required static device nodes for the current kernel. -- Subject: Unit kmod-static-nodes.service has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit kmod-static-nodes.service has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Closed udev Control Socket. -- Subject: Unit systemd-udevd-control.socket has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-udevd-control.socket has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Closed udev Kernel Socket. -- Subject: Unit systemd-udevd-kernel.socket has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-udevd-kernel.socket has finished shutting down. Jul 22 08:23:51 localhost.localdomain systemd[1]: Starting Cleanup udevd DB... -- Subject: Unit initrd-udevadm-cleanup-db.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit initrd-udevadm-cleanup-db.service has begun starting up. Jul 22 08:23:51 localhost.localdomain systemd[1]: Started Cleaning Up and Shutting Down Daemons. -- Subject: Unit initrd-cleanup.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit initrd-cleanup.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:51 localhost.localdomain systemd[1]: Started Cleanup udevd DB. -- Subject: Unit initrd-udevadm-cleanup-db.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit initrd-udevadm-cleanup-db.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:51 localhost.localdomain systemd[1]: Reached target Switch Root. -- Subject: Unit initrd-switch-root.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit initrd-switch-root.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:51 localhost.localdomain systemd[1]: Started Plymouth switch root service. -- Subject: Unit plymouth-switch-root.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit plymouth-switch-root.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:51 localhost.localdomain systemd[1]: Starting Switch Root... 
-- Subject: Unit initrd-switch-root.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit initrd-switch-root.service has begun starting up. Jul 22 08:23:51 localhost.localdomain systemd[1]: Switching root. Jul 22 08:23:51 localhost.localdomain unknown[175]: Journal stopped -- Subject: The journal has been stopped -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- The system journal process has shut down and closed all currently -- active journal files. Jul 22 08:23:54 localhost.localdomain systemd-journal[346]: Runtime journal is using 8.0M (max allowed 180.0M, trying to leave 270.0M free of 1.7G available → current limit 180.0M). Jul 22 08:23:54 localhost.localdomain systemd-journald[175]: Received SIGTERM from PID 1 (systemd). Jul 22 08:23:54 localhost.localdomain kernel: random: crng init done Jul 22 08:23:54 localhost.localdomain kernel: type=1404 audit(1753187032.118:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 Jul 22 08:23:54 localhost.localdomain kernel: SELinux: 2048 avtab hash slots, 112757 rules. Jul 22 08:23:54 localhost.localdomain kernel: SELinux: 2048 avtab hash slots, 112757 rules. Jul 22 08:23:54 localhost.localdomain kernel: SELinux: 8 users, 14 roles, 5049 types, 316 bools, 1 sens, 1024 cats Jul 22 08:23:54 localhost.localdomain kernel: SELinux: 130 classes, 112757 rules Jul 22 08:23:54 localhost.localdomain kernel: SELinux: Completing initialization. Jul 22 08:23:54 localhost.localdomain kernel: SELinux: Setting up existing superblocks. Jul 22 08:23:54 localhost.localdomain kernel: type=1403 audit(1753187032.550:3): policy loaded auid=4294967295 ses=4294967295 Jul 22 08:23:54 localhost.localdomain systemd[1]: Successfully loaded SELinux policy in 468.957ms. Jul 22 08:23:54 localhost.localdomain kernel: ip_tables: (C) 2000-2006 Netfilter Core Team Jul 22 08:23:54 localhost.localdomain systemd[1]: Inserted module 'ip_tables' Jul 22 08:23:54 localhost.localdomain systemd[1]: Relabelled /dev, /run and /sys/fs/cgroup in 8.758ms. Jul 22 08:23:54 localhost.localdomain kernel: EXT4-fs (xvda1): re-mounted. Opts: (null) Jul 22 08:23:54 localhost.localdomain systemd-journal[346]: Journal started -- Subject: The journal has been started -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- The system journal process has started up, opened the journal -- files for writing and is now ready to process requests. Jul 22 08:23:53 localhost.localdomain systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN) Jul 22 08:23:53 localhost.localdomain systemd[1]: Detected virtualization xen. Jul 22 08:23:53 localhost.localdomain systemd[1]: Detected architecture x86-64. Jul 22 08:23:53 localhost.localdomain systemd[1]: Set hostname to . Jul 22 08:23:53 localhost.localdomain systemd[1]: Initializing machine ID from random generator. Jul 22 08:23:53 localhost.localdomain systemd[1]: Installed transient /etc/machine-id file. Jul 22 08:23:54 localhost.localdomain systemd[1]: Starting Flush Journal to Persistent Storage... -- Subject: Unit systemd-journal-flush.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-journal-flush.service has begun starting up. 
Jul 22 08:23:54 localhost.localdomain systemd[1]: Started udev Coldplug all Devices. -- Subject: Unit systemd-udev-trigger.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-udev-trigger.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:55 localhost.localdomain systemd[1]: Started Flush Journal to Persistent Storage. -- Subject: Unit systemd-journal-flush.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-journal-flush.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:55 localhost.localdomain systemd[1]: Started Configure read-only root support. -- Subject: Unit rhel-readonly.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rhel-readonly.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:55 localhost.localdomain systemd[1]: Reached target Local File Systems. -- Subject: Unit local-fs.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit local-fs.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:55 localhost.localdomain systemd[1]: Starting Import network configuration from initramfs... -- Subject: Unit rhel-import-state.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rhel-import-state.service has begun starting up. Jul 22 08:23:55 localhost.localdomain systemd-udevd[366]: starting version 219 Jul 22 08:23:55 localhost.localdomain systemd[1]: Starting Commit a transient machine-id on disk... -- Subject: Unit systemd-machine-id-commit.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-machine-id-commit.service has begun starting up. Jul 22 08:23:55 localhost.localdomain systemd[1]: Starting Preprocess NFS configuration... -- Subject: Unit nfs-config.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit nfs-config.service has begun starting up. Jul 22 08:23:55 localhost.localdomain systemd[1]: Starting Tell Plymouth To Write Out Runtime Data... -- Subject: Unit plymouth-read-write.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit plymouth-read-write.service has begun starting up. Jul 22 08:23:55 localhost.localdomain systemd[1]: Starting Load/Save Random Seed... -- Subject: Unit systemd-random-seed.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-random-seed.service has begun starting up. Jul 22 08:23:55 localhost.localdomain systemd-udevd[366]: Network interface NamePolicy= disabled on kernel command line, ignoring. Jul 22 08:23:55 localhost.localdomain systemd[1]: Started Load/Save Random Seed. -- Subject: Unit systemd-random-seed.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-random-seed.service has finished starting up. -- -- The start-up result is done. 
Jul 22 08:23:55 localhost.localdomain systemd[1]: Started udev Kernel Device Manager. -- Subject: Unit systemd-udevd.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-udevd.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:55 localhost.localdomain systemd[1]: Started Commit a transient machine-id on disk. -- Subject: Unit systemd-machine-id-commit.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-machine-id-commit.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:55 localhost.localdomain systemd[1]: Started Preprocess NFS configuration. -- Subject: Unit nfs-config.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit nfs-config.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:55 localhost.localdomain systemd[1]: Started Tell Plymouth To Write Out Runtime Data. -- Subject: Unit plymouth-read-write.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit plymouth-read-write.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:55 localhost.localdomain systemd[1]: Started Import network configuration from initramfs. -- Subject: Unit rhel-import-state.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rhel-import-state.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:55 localhost.localdomain systemd[1]: Starting Create Volatile Files and Directories... -- Subject: Unit systemd-tmpfiles-setup.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-tmpfiles-setup.service has begun starting up. Jul 22 08:23:55 localhost.localdomain systemd[1]: Started Create Volatile Files and Directories. -- Subject: Unit systemd-tmpfiles-setup.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-tmpfiles-setup.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:55 localhost.localdomain systemd[1]: Mounting RPC Pipe File System... -- Subject: Unit var-lib-nfs-rpc_pipefs.mount has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit var-lib-nfs-rpc_pipefs.mount has begun starting up. Jul 22 08:23:55 localhost.localdomain systemd[1]: Starting Security Auditing Service... -- Subject: Unit auditd.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit auditd.service has begun starting up. Jul 22 08:23:55 localhost.localdomain systemd[1]: Found device /dev/ttyS0. -- Subject: Unit dev-ttyS0.device has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit dev-ttyS0.device has finished starting up. -- -- The start-up result is done. 
Jul 22 08:23:55 localhost.localdomain kernel: input: PC Speaker as /devices/platform/pcspkr/input/input4 Jul 22 08:23:55 localhost.localdomain kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 22 08:23:55 localhost.localdomain kernel: RPC: Registered named UNIX socket transport module. Jul 22 08:23:55 localhost.localdomain kernel: RPC: Registered udp transport module. Jul 22 08:23:55 localhost.localdomain kernel: RPC: Registered tcp transport module. Jul 22 08:23:55 localhost.localdomain kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 22 08:23:55 localhost.localdomain systemd[1]: Mounted RPC Pipe File System. -- Subject: Unit var-lib-nfs-rpc_pipefs.mount has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit var-lib-nfs-rpc_pipefs.mount has finished starting up. -- -- The start-up result is done. Jul 22 08:23:55 localhost.localdomain systemd[1]: Reached target rpc_pipefs.target. -- Subject: Unit rpc_pipefs.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rpc_pipefs.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:55 localhost.localdomain kernel: cryptd: max_cpu_qlen set to 1000 Jul 22 08:23:55 localhost.localdomain kernel: AVX2 version of gcm_enc/dec engaged. Jul 22 08:23:55 localhost.localdomain kernel: AES CTR mode by8 optimization enabled Jul 22 08:23:55 localhost.localdomain kernel: alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni) Jul 22 08:23:55 localhost.localdomain kernel: alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni) Jul 22 08:23:55 localhost.localdomain auditd[457]: Started dispatcher: /sbin/audispd pid: 459 Jul 22 08:23:55 localhost.localdomain kernel: type=1305 audit(1753187035.693:4): audit_pid=457 old=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1 Jul 22 08:23:55 localhost.localdomain audispd[459]: No plugins found, exiting Jul 22 08:23:55 localhost.localdomain kernel: EDAC sbridge: Seeking for: PCI ID 8086:2fa0 Jul 22 08:23:55 localhost.localdomain kernel: EDAC sbridge: Ver: 1.1.2 Jul 22 08:23:55 localhost.localdomain kernel: [TTM] Zone kernel: Available graphics memory: 1843226 kiB Jul 22 08:23:55 localhost.localdomain kernel: [TTM] Initializing pool allocator Jul 22 08:23:55 localhost.localdomain kernel: [TTM] Initializing DMA pool allocator Jul 22 08:23:55 localhost.localdomain kernel: [drm] fb mappable at 0xF8000000 Jul 22 08:23:55 localhost.localdomain kernel: [drm] vram aper at 0xF8000000 Jul 22 08:23:55 localhost.localdomain kernel: [drm] size 33554432 Jul 22 08:23:55 localhost.localdomain kernel: [drm] fb depth is 16 Jul 22 08:23:55 localhost.localdomain kernel: [drm] pitch is 2048 Jul 22 08:23:55 localhost.localdomain kernel: fbcon: cirrusdrmfb (fb0) is primary device Jul 22 08:23:55 localhost.localdomain kernel: ppdev: user-space parallel port driver Jul 22 08:23:55 localhost.localdomain kernel: Console: switching to colour frame buffer device 128x48 Jul 22 08:23:55 localhost.localdomain kernel: cirrus 0000:00:02.0: fb0: cirrusdrmfb frame buffer device Jul 22 08:23:55 localhost.localdomain kernel: [drm] Initialized cirrus 1.0.0 20110418 for 0000:00:02.0 on minor 0 Jul 22 08:23:55 localhost.localdomain auditd[457]: Init complete, auditd 2.8.5 listening for events (startup state enable) Jul 22 08:23:55 localhost.localdomain 
augenrules[479]: /sbin/augenrules: No change
Jul 22 08:23:55 localhost.localdomain augenrules[479]: No rules
Jul 22 08:23:55 localhost.localdomain augenrules[479]: enabled 1
Jul 22 08:23:55 localhost.localdomain augenrules[479]: failure 1
Jul 22 08:23:55 localhost.localdomain augenrules[479]: pid 457
Jul 22 08:23:55 localhost.localdomain augenrules[479]: rate_limit 0
Jul 22 08:23:55 localhost.localdomain augenrules[479]: backlog_limit 8192
Jul 22 08:23:55 localhost.localdomain augenrules[479]: lost 0
Jul 22 08:23:55 localhost.localdomain augenrules[479]: backlog 1
Jul 22 08:23:55 localhost.localdomain augenrules[479]: enabled 1
Jul 22 08:23:55 localhost.localdomain augenrules[479]: failure 1
Jul 22 08:23:55 localhost.localdomain augenrules[479]: pid 457
Jul 22 08:23:55 localhost.localdomain augenrules[479]: rate_limit 0
Jul 22 08:23:55 localhost.localdomain augenrules[479]: backlog_limit 8192
Jul 22 08:23:55 localhost.localdomain augenrules[479]: lost 0
Jul 22 08:23:55 localhost.localdomain augenrules[479]: backlog 0
Jul 22 08:23:55 localhost.localdomain systemd[1]: Started Security Auditing Service. -- Subject: Unit auditd.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit auditd.service has finished starting up. -- -- The start-up result is done.
Jul 22 08:23:55 localhost.localdomain systemd[1]: Starting Update UTMP about System Boot/Shutdown... -- Subject: Unit systemd-update-utmp.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-update-utmp.service has begun starting up.
Jul 22 08:23:55 localhost.localdomain systemd[1]: Started Update UTMP about System Boot/Shutdown. -- Subject: Unit systemd-update-utmp.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-update-utmp.service has finished starting up. -- -- The start-up result is done.
Jul 22 08:23:55 localhost.localdomain systemd[1]: Reached target System Initialization. -- Subject: Unit sysinit.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sysinit.target has finished starting up. -- -- The start-up result is done.
Jul 22 08:23:55 localhost.localdomain systemd[1]: Listening on RPCbind Server Activation Socket. -- Subject: Unit rpcbind.socket has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rpcbind.socket has finished starting up. -- -- The start-up result is done.
Jul 22 08:23:56 localhost.localdomain systemd[1]: Starting RPC bind service... -- Subject: Unit rpcbind.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rpcbind.service has begun starting up.
Jul 22 08:23:56 localhost.localdomain systemd[1]: Listening on D-Bus System Message Bus Socket. -- Subject: Unit dbus.socket has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit dbus.socket has finished starting up. -- -- The start-up result is done.
Jul 22 08:23:56 localhost.localdomain systemd[1]: Reached target Sockets.
-- Subject: Unit sockets.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sockets.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:56 localhost.localdomain systemd[1]: Starting Initial cloud-init job (pre-networking)... -- Subject: Unit cloud-init-local.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-init-local.service has begun starting up. Jul 22 08:23:56 localhost.localdomain systemd[1]: Reached target Basic System. -- Subject: Unit basic.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit basic.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:56 localhost.localdomain systemd[1]: Started D-Bus System Message Bus. -- Subject: Unit dbus.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit dbus.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:56 localhost.localdomain systemd[1]: Starting Login Service... -- Subject: Unit systemd-logind.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-logind.service has begun starting up. Jul 22 08:23:56 localhost.localdomain systemd[1]: Starting GSSAPI Proxy Daemon... -- Subject: Unit gssproxy.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit gssproxy.service has begun starting up. Jul 22 08:23:56 localhost.localdomain systemd[1]: Starting Dump dmesg to /var/log/dmesg... -- Subject: Unit rhel-dmesg.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rhel-dmesg.service has begun starting up. Jul 22 08:23:56 localhost.localdomain systemd[1]: Started irqbalance daemon. -- Subject: Unit irqbalance.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit irqbalance.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:56 localhost.localdomain systemd[1]: Started Hardware RNG Entropy Gatherer Daemon. -- Subject: Unit rngd.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rngd.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:56 localhost.localdomain systemd[1]: Starting NTP client/server... -- Subject: Unit chronyd.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit chronyd.service has begun starting up. Jul 22 08:23:56 localhost.localdomain systemd[1]: Starting Authorization Manager... -- Subject: Unit polkit.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit polkit.service has begun starting up. Jul 22 08:23:56 localhost.localdomain systemd[1]: Started Daily Cleanup of Temporary Directories. 
-- Subject: Unit systemd-tmpfiles-clean.timer has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-tmpfiles-clean.timer has finished starting up. -- -- The start-up result is done. Jul 22 08:23:56 localhost.localdomain systemd[1]: Reached target Timers. -- Subject: Unit timers.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit timers.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:56 localhost.localdomain systemd[1]: Started RPC bind service. -- Subject: Unit rpcbind.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rpcbind.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:56 localhost.localdomain systemd[1]: Started Login Service. -- Subject: Unit systemd-logind.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-logind.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:56 localhost.localdomain systemd-logind[505]: New seat seat0. -- Subject: A new seat seat0 is now available -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new seat seat0 has been configured and is now available. Jul 22 08:23:56 localhost.localdomain systemd-logind[505]: Watching system buttons on /dev/input/event0 (Power Button) Jul 22 08:23:56 localhost.localdomain systemd-logind[505]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 22 08:23:56 localhost.localdomain chronyd[523]: chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 +DEBUG) Jul 22 08:23:56 localhost.localdomain systemd[1]: Started Dump dmesg to /var/log/dmesg. -- Subject: Unit rhel-dmesg.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rhel-dmesg.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:56 localhost.localdomain chronyd[523]: Frequency 0.000 +/- 1000000.000 ppm read from /var/lib/chrony/drift Jul 22 08:23:56 localhost.localdomain systemd[1]: Started NTP client/server. -- Subject: Unit chronyd.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit chronyd.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:56 localhost.localdomain systemd[1]: Started GSSAPI Proxy Daemon. -- Subject: Unit gssproxy.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit gssproxy.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:56 localhost.localdomain systemd[1]: Reached target NFS client services. -- Subject: Unit nfs-client.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit nfs-client.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:56 localhost.localdomain systemd[1]: Reached target Remote File Systems (Pre). 
-- Subject: Unit remote-fs-pre.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit remote-fs-pre.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:56 localhost.localdomain systemd[1]: Reached target Remote File Systems. -- Subject: Unit remote-fs.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit remote-fs.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:56 localhost.localdomain polkitd[511]: Started polkitd version 0.112 Jul 22 08:23:57 localhost.localdomain rngd[509]: Initalizing available sources Jul 22 08:23:57 localhost.localdomain rngd[509]: Failed to init entropy source 0: Hardware RNG Device Jul 22 08:23:57 localhost.localdomain rngd[509]: Enabling RDRAND rng support Jul 22 08:23:57 localhost.localdomain rngd[509]: Initalizing entropy source Intel RDRAND Instruction RNG Jul 22 08:23:57 localhost.localdomain polkitd[511]: Loading rules from directory /etc/polkit-1/rules.d Jul 22 08:23:57 localhost.localdomain polkitd[511]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 22 08:23:57 localhost.localdomain polkitd[511]: Finished loading, compiling and executing 2 rules Jul 22 08:23:57 localhost.localdomain polkitd[511]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 22 08:23:57 localhost.localdomain systemd[1]: Started Authorization Manager. -- Subject: Unit polkit.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit polkit.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:58 localhost.localdomain rngd[509]: Enabling JITTER rng support Jul 22 08:23:58 localhost.localdomain rngd[509]: Initalizing entropy source JITTER Entropy generator Jul 22 08:23:58 localhost.localdomain cloud-init[520]: Cloud-init v. 0.7.9 running 'init-local' at Tue, 22 Jul 2025 12:23:58 +0000. Up 12.01 seconds. Jul 22 08:23:58 localhost.localdomain kernel: floppy0: no floppy controllers found Jul 22 08:23:58 localhost.localdomain kernel: work still pending Jul 22 08:23:59 localhost.localdomain systemd[1]: Started Initial cloud-init job (pre-networking). -- Subject: Unit cloud-init-local.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-init-local.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:59 localhost.localdomain systemd[1]: Reached target Network (Pre). -- Subject: Unit network-pre.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit network-pre.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:59 localhost.localdomain systemd[1]: Starting Network Manager... -- Subject: Unit NetworkManager.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager.service has begun starting up. Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.4261] NetworkManager (version 1.18.8-2.el7_9) is starting... 
(for the first time) Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.4268] Read config: /etc/NetworkManager/NetworkManager.conf (lib: 10-slaves-order.conf) Jul 22 08:23:59 localhost.localdomain systemd[1]: Started Network Manager. -- Subject: Unit NetworkManager.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:59 localhost.localdomain systemd[1]: Starting Network Manager Wait Online... -- Subject: Unit NetworkManager-wait-online.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager-wait-online.service has begun starting up. Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.4405] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.4445] manager[0x55b69b0b7090]: monitoring kernel firmware directory '/lib/firmware'. Jul 22 08:23:59 localhost.localdomain dbus[501]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' Jul 22 08:23:59 localhost.localdomain systemd[1]: Starting Hostname Service... -- Subject: Unit systemd-hostnamed.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-hostnamed.service has begun starting up. Jul 22 08:23:59 localhost.localdomain dbus[501]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 22 08:23:59 localhost.localdomain systemd[1]: Started Hostname Service. -- Subject: Unit systemd-hostnamed.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-hostnamed.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.5017] hostname: hostname: using hostnamed Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.5019] hostname: hostname changed from (none) to "localhost.localdomain" Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.5026] dns-mgr[0x55b69b09f220]: init: dns=default,systemd-resolved rc-manager=file Jul 22 08:23:59 localhost.localdomain dbus[501]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' Jul 22 08:23:59 localhost.localdomain systemd[1]: Starting Network Manager Script Dispatcher Service... -- Subject: Unit NetworkManager-dispatcher.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager-dispatcher.service has begun starting up. Jul 22 08:23:59 localhost.localdomain dbus[501]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Jul 22 08:23:59 localhost.localdomain systemd[1]: Started Network Manager Script Dispatcher Service. -- Subject: Unit NetworkManager-dispatcher.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager-dispatcher.service has finished starting up. -- -- The start-up result is done. 
Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.5545] settings: Loaded settings plugin: SettingsPluginIfcfg ("/usr/lib64/NetworkManager/1.18.8-2.el7_9/libnm-settings-plugin-ifcfg-rh.so") Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.5564] settings: Loaded settings plugin: NMSIbftPlugin ("/usr/lib64/NetworkManager/1.18.8-2.el7_9/libnm-settings-plugin-ibft.so") Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.5564] settings: Loaded settings plugin: NMSKeyfilePlugin (internal) Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.5608] ifcfg-rh: new connection /etc/sysconfig/network-scripts/ifcfg-eth0 (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03,"System eth0") Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.5641] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.5642] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.5642] manager: Networking is enabled by state file Jul 22 08:23:59 localhost.localdomain nm-dispatcher[582]: req:1 'hostname': new request (4 scripts) Jul 22 08:23:59 localhost.localdomain nm-dispatcher[582]: req:1 'hostname': start running ordered scripts... Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.5665] dhcp-init: Using DHCP client 'dhclient' Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.6043] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.18.8-2.el7_9/libnm-device-plugin-team.so) Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.6059] device (lo): carrier: link connected Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.6062] manager: (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1) Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.6071] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.6083] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jul 22 08:23:59 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.6109] device (eth0): carrier: link connected Jul 22 08:23:59 localhost.localdomain nm-dispatcher[582]: req:2 'connectivity-change': new request (4 scripts) Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.6185] device (eth0): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed') Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.6192] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.6215] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.6220] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.6229] manager: NetworkManager state is now CONNECTING Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.6266] device (eth0): state change: prepare -> config 
(reason 'none', sys-iface-state: 'managed') Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.6270] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.6275] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.6352] dhcp4 (eth0): dhclient started with pid 601 Jul 22 08:23:59 localhost.localdomain nm-dispatcher[582]: req:2 'connectivity-change': start running ordered scripts... Jul 22 08:23:59 localhost.localdomain dhclient[601]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 6 (xid=0x661f75da) Jul 22 08:23:59 localhost.localdomain dhclient[601]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x661f75da) Jul 22 08:23:59 localhost.localdomain dhclient[601]: DHCPOFFER from 10.31.40.1 Jul 22 08:23:59 localhost.localdomain dhclient[601]: DHCPACK from 10.31.40.1 (xid=0x661f75da) Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8368] dhcp4 (eth0): address 10.31.40.191 Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8368] dhcp4 (eth0): plen 22 (255.255.252.0) Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8368] dhcp4 (eth0): gateway 10.31.40.1 Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8368] dhcp4 (eth0): lease time 3600 Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8368] dhcp4 (eth0): hostname 'ip-10-31-40-191' Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8368] dhcp4 (eth0): nameserver '10.29.169.13' Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8368] dhcp4 (eth0): nameserver '10.29.170.12' Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8368] dhcp4 (eth0): nameserver '10.2.32.1' Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8369] dhcp4 (eth0): domain name 'testing-farm.us-east-1.aws.redhat.com' Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8369] dhcp4 (eth0): state changed unknown -> bound Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8377] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8385] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8387] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8391] manager: NetworkManager state is now CONNECTED_LOCAL Jul 22 08:23:59 localhost.localdomain dhclient[601]: bound to 10.31.40.191 -- renewal in 1497 seconds. 
Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8447] manager: NetworkManager state is now CONNECTED_SITE Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8447] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS Jul 22 08:23:59 localhost.localdomain NetworkManager[576]: [1753187039.8448] policy: set-hostname: set hostname to 'ip-10-31-40-191' (from DHCPv4) Jul 22 08:23:59 ip-10-31-40-191 systemd-hostnamed[581]: Changed host name to 'ip-10-31-40-191' Jul 22 08:23:59 ip-10-31-40-191 NetworkManager[576]: [1753187039.8512] device (eth0): Activation: successful, device activated. Jul 22 08:23:59 ip-10-31-40-191 NetworkManager[576]: [1753187039.8521] manager: NetworkManager state is now CONNECTED_GLOBAL Jul 22 08:23:59 ip-10-31-40-191 NetworkManager[576]: [1753187039.8527] manager: startup complete Jul 22 08:23:59 ip-10-31-40-191 nm-dispatcher[582]: req:3 'up' [eth0]: new request (4 scripts) Jul 22 08:23:59 ip-10-31-40-191 nm-dispatcher[582]: req:3 'up' [eth0]: start running ordered scripts... Jul 22 08:23:59 ip-10-31-40-191 nm-dispatcher[582]: req:4 'connectivity-change': new request (4 scripts) Jul 22 08:23:59 ip-10-31-40-191 nm-dispatcher[582]: req:5 'hostname': new request (4 scripts) Jul 22 08:23:59 ip-10-31-40-191 systemd[1]: Started Network Manager Wait Online. -- Subject: Unit NetworkManager-wait-online.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager-wait-online.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:59 ip-10-31-40-191 systemd[1]: Starting LSB: Bring up/down networking... -- Subject: Unit network.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit network.service has begun starting up. Jul 22 08:23:59 ip-10-31-40-191 nm-dispatcher[582]: req:4 'connectivity-change': start running ordered scripts... Jul 22 08:24:00 ip-10-31-40-191 nm-dispatcher[582]: req:5 'hostname': start running ordered scripts... Jul 22 08:24:00 ip-10-31-40-191 network[642]: Bringing up loopback interface: [ OK ] Jul 22 08:24:00 ip-10-31-40-191 network[642]: Bringing up interface eth0: [ OK ] Jul 22 08:24:00 ip-10-31-40-191 systemd[1]: Started LSB: Bring up/down networking. -- Subject: Unit network.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit network.service has finished starting up. -- -- The start-up result is done. Jul 22 08:24:00 ip-10-31-40-191 systemd[1]: Starting Initial cloud-init job (metadata service crawler)... -- Subject: Unit cloud-init.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-init.service has begun starting up. Jul 22 08:24:00 ip-10-31-40-191 systemd[1]: Reached target Network. -- Subject: Unit network.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit network.target has finished starting up. -- -- The start-up result is done. Jul 22 08:24:00 ip-10-31-40-191 systemd[1]: Starting Dynamic System Tuning Daemon... -- Subject: Unit tuned.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit tuned.service has begun starting up. 
Jul 22 08:24:00 ip-10-31-40-191 systemd[1]: Starting Postfix Mail Transport Agent... -- Subject: Unit postfix.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit postfix.service has begun starting up.
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: Cloud-init v. 0.7.9 running 'init' at Tue, 22 Jul 2025 12:24:00 +0000. Up 13.91 seconds.
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: ++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: +--------+------+--------------+---------------+-------+-------------------+
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: +--------+------+--------------+---------------+-------+-------------------+
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: | lo: | True | 127.0.0.1 | 255.0.0.0 | . | . |
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: | lo: | True | . | . | d | . |
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: | eth0: | True | 10.31.40.191 | 255.255.252.0 | . | 0e:f2:64:8d:63:bf |
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: | eth0: | True | . | . | d | 0e:f2:64:8d:63:bf |
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: +--------+------+--------------+---------------+-------+-------------------+
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: ++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: +-------+-------------+------------+---------------+-----------+-------+
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: +-------+-------------+------------+---------------+-----------+-------+
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: | 0 | 0.0.0.0 | 10.31.40.1 | 0.0.0.0 | eth0 | UG |
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: | 1 | 10.31.40.0 | 0.0.0.0 | 255.255.252.0 | eth0 | U |
Jul 22 08:24:00 ip-10-31-40-191 cloud-init[850]: ci-info: +-------+-------------+------------+---------------+-----------+-------+
Jul 22 08:24:01 ip-10-31-40-191 postfix/postfix-script[1077]: starting the Postfix mail system
Jul 22 08:24:01 ip-10-31-40-191 postfix/master[1119]: daemon started -- version 2.10.1, configuration /etc/postfix
Jul 22 08:24:01 ip-10-31-40-191 systemd[1]: Started Postfix Mail Transport Agent. -- Subject: Unit postfix.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit postfix.service has finished starting up. -- -- The start-up result is done.
Jul 22 08:24:01 ip-10-31-40-191 systemd[1]: Started Dynamic System Tuning Daemon. -- Subject: Unit tuned.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit tuned.service has finished starting up. -- -- The start-up result is done.
Jul 22 08:24:01 ip-10-31-40-191 kernel: EXT4-fs (xvda1): resizing filesystem from 1048320 to 65535739 blocks Jul 22 08:24:01 ip-10-31-40-191 kernel: EXT4-fs (xvda1): resized filesystem to 65535739 Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd-hostnamed[581]: Changed static host name to 'ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com' Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd-hostnamed[581]: Changed host name to 'ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com' Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com NetworkManager[576]: [1753187041.5952] hostname: hostname changed from "localhost.localdomain" to "ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com" Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com nm-dispatcher[582]: req:6 'hostname': new request (4 scripts) Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com nm-dispatcher[582]: req:6 'hostname': start running ordered scripts... Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com NetworkManager[576]: [1753187041.6094] policy: set-hostname: set hostname to 'ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com' (from system configuration) Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com nm-dispatcher[582]: req:7 'hostname': new request (4 scripts) Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com nm-dispatcher[582]: req:7 'hostname': start running ordered scripts... Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Initial cloud-init job (metadata service crawler). -- Subject: Unit cloud-init.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-init.service has finished starting up. -- -- The start-up result is done. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target Cloud-config availability. -- Subject: Unit cloud-config.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-config.target has finished starting up. -- -- The start-up result is done. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Permit User Sessions... -- Subject: Unit systemd-user-sessions.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-user-sessions.service has begun starting up. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target Network is Online. -- Subject: Unit network-online.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit network-online.target has finished starting up. -- -- The start-up result is done. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting System Logging Service... -- Subject: Unit rsyslog.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rsyslog.service has begun starting up. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Notify NFS peers of a restart... 
-- Subject: Unit rpc-statd-notify.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rpc-statd-notify.service has begun starting up. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Crash recovery kernel arming... -- Subject: Unit kdump.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit kdump.service has begun starting up. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Apply the settings specified in cloud-config... -- Subject: Unit cloud-config.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-config.service has begun starting up. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting The restraint harness.... -- Subject: Unit restraintd.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit restraintd.service has begun starting up. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sm-notify[1221]: Version 1.3.0 starting Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting OpenSSH server daemon... -- Subject: Unit sshd.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sshd.service has begun starting up. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Permit User Sessions. -- Subject: Unit systemd-user-sessions.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-user-sessions.service has finished starting up. -- -- The start-up result is done. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Terminate Plymouth Boot Screen... -- Subject: Unit plymouth-quit.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit plymouth-quit.service has begun starting up. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Wait for Plymouth Boot Screen to Quit... -- Subject: Unit plymouth-quit-wait.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit plymouth-quit-wait.service has begun starting up. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Command Scheduler. -- Subject: Unit crond.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit crond.service has finished starting up. -- -- The start-up result is done. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Notify NFS peers of a restart. -- Subject: Unit rpc-statd-notify.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rpc-statd-notify.service has finished starting up. -- -- The start-up result is done. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Received SIGRTMIN+21 from PID 230 (plymouthd). 
Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com crond[1229]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 71% if used.) Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Wait for Plymouth Boot Screen to Quit. -- Subject: Unit plymouth-quit-wait.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit plymouth-quit-wait.service has finished starting up. -- -- The start-up result is done. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Getty on tty1. -- Subject: Unit getty@tty1.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit getty@tty1.service has finished starting up. -- -- The start-up result is done. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Serial Getty on ttyS0. -- Subject: Unit serial-getty@ttyS0.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit serial-getty@ttyS0.service has finished starting up. -- -- The start-up result is done. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target Login Prompts. -- Subject: Unit getty.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit getty.target has finished starting up. -- -- The start-up result is done. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Terminate Plymouth Boot Screen. -- Subject: Unit plymouth-quit.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit plymouth-quit.service has finished starting up. -- -- The start-up result is done. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started The restraint harness.. -- Subject: Unit restraintd.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit restraintd.service has finished starting up. -- -- The start-up result is done. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com rsyslogd[1220]: [origin software="rsyslogd" swVersion="8.24.0-57.el7_9.3" x-pid="1220" x-info="http://www.rsyslog.com"] start Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com restraintd[1238]: Listening on http://localhost:8081 Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started System Logging Service. -- Subject: Unit rsyslog.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rsyslog.service has finished starting up. -- -- The start-up result is done. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[1225]: Server listening on 0.0.0.0 port 22. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[1225]: Server listening on :: port 22. Jul 22 08:24:01 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started OpenSSH server daemon. -- Subject: Unit sshd.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sshd.service has finished starting up. 
-- -- The start-up result is done. Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com crond[1229]: (CRON) INFO (running with inotify support) Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com cloud-init[1223]: Cloud-init v. 0.7.9 running 'modules:config' at Tue, 22 Jul 2025 12:24:01 +0000. Up 15.42 seconds. Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[1225]: Received signal 15; terminating. Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Stopping OpenSSH server daemon... -- Subject: Unit sshd.service has begun shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sshd.service has begun shutting down. Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Stopped OpenSSH server daemon. -- Subject: Unit sshd.service has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sshd.service has finished shutting down. Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com kdumpctl[1222]: No kdump initial ramdisk found. Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com kdumpctl[1222]: Rebuilding /boot/initramfs-3.10.0-1160.119.1.el7.x86_64kdump.img Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting OpenSSH server daemon... -- Subject: Unit sshd.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sshd.service has begun starting up. Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[1315]: Server listening on 0.0.0.0 port 22. Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[1315]: Server listening on :: port 22. Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started OpenSSH server daemon. -- Subject: Unit sshd.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sshd.service has finished starting up. -- -- The start-up result is done. Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Apply the settings specified in cloud-config. -- Subject: Unit cloud-config.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-config.service has finished starting up. -- -- The start-up result is done. Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Execute cloud user/final scripts... -- Subject: Unit cloud-final.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-final.service has begun starting up. Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com cloud-init[1358]: Cloud-init v. 0.7.9 running 'modules:final' at Tue, 22 Jul 2025 12:24:02 +0000. Up 15.83 seconds. 
Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com ec2[1526]:
Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com ec2[1526]: #############################################################
Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com ec2[1526]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com ec2[1526]: 256 SHA256:6XI06CKZZBRV/e3/IgrDLqYzZkOtPFSrZnF8OajHEho no comment (ECDSA)
Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com ec2[1526]: 256 SHA256:JrbbBdywcMOrldKxyOAXBx69rlIzfdGLj+pLfM+mDE0 no comment (ED25519)
Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com ec2[1526]: 2048 SHA256:PpouZOZBkzpae6L06EXka+DhqyUqJ1ceG3xB7C5VtE0 no comment (RSA)
Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com ec2[1526]: -----END SSH HOST KEY FINGERPRINTS-----
Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com ec2[1526]: #############################################################
Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com cloud-init[1358]: Cloud-init v. 0.7.9 finished at Tue, 22 Jul 2025 12:24:02 +0000. Datasource DataSourceEc2. Up 15.95 seconds
Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Execute cloud user/final scripts. -- Subject: Unit cloud-final.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-final.service has finished starting up. -- -- The start-up result is done.
Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target Multi-User System. -- Subject: Unit multi-user.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit multi-user.target has finished starting up. -- -- The start-up result is done.
Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Update UTMP about System Runlevel Changes... -- Subject: Unit systemd-update-utmp-runlevel.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-update-utmp-runlevel.service has begun starting up.
Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Update UTMP about System Runlevel Changes. -- Subject: Unit systemd-update-utmp-runlevel.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-update-utmp-runlevel.service has finished starting up. -- -- The start-up result is done.
Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1554]: dracut-033-572.el7
Jul 22 08:24:02 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: Executing: /usr/sbin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict -o "plymouth dash resume ifcfg" --mount "/dev/disk/by-uuid/c7b7d6a5-fd01-4b9b-bcca-153eaff9d312 /sysroot ext4 defaults" --no-hostonly-default-device -f /boot/initramfs-3.10.0-1160.119.1.el7.x86_64kdump.img 3.10.0-1160.119.1.el7.x86_64
Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'ifcfg' will not be installed, because it's in the list to be omitted! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'plymouth' will not be installed, because it's in the list to be omitted! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'crypt' will not be installed, because command 'cryptsetup' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'dmsquash-live-ntfs' will not be installed, because command 'ntfs-3g' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'multipath' will not be installed, because command 'multipath' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'iscsi' will not be installed, because command 'iscsistart' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'resume' will not be installed, because it's in the list to be omitted! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'crypt' will not be installed, because command 'cryptsetup' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'dmsquash-live-ntfs' will not be installed, because command 'ntfs-3g' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'multipath' will not be installed, because command 'multipath' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found! 
Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'iscsi' will not be installed, because command 'iscsistart' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found! Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: bash *** Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: nss-softokn *** Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: i18n *** Jul 22 08:24:03 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: drm *** Jul 22 08:24:04 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: kernel-modules *** Jul 22 08:24:05 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com chronyd[523]: Selected source 198.46.254.130 Jul 22 08:24:11 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: fstab-sys *** Jul 22 08:24:11 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: rootfs-block *** Jul 22 08:24:11 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: terminfo *** Jul 22 08:24:11 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: udev-rules *** Jul 22 08:24:11 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: Skipping udev rule: 40-redhat-cpu-hotplug.rules Jul 22 08:24:11 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: Skipping udev rule: 91-permissions.rules Jul 22 08:24:11 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: biosdevname *** Jul 22 08:24:11 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: systemd *** Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: usrmount *** Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: base *** Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: fs-lib *** Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: kdumpbase *** Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: microcode_ctl-fw_dir_override *** Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl module: mangling fw_dir Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware" Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"... Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: intel: caveats check for kernel version "3.10.0-1160.119.1.el7.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel" to fw_dir variable Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"... 
Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: configuration "intel-06-2d-07" is ignored Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"... Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: configuration "intel-06-4e-03" is ignored Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"... Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: configuration "intel-06-4f-01" is ignored Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"... Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: configuration "intel-06-55-04" is ignored Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"... Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: configuration "intel-06-5e-03" is ignored Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"... Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: configuration "intel-06-8c-01" is ignored Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: final fw_dir: "/usr/share/microcode_ctl/ucode_with_caveats/intel /lib/firmware/updates /lib/firmware" Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: shutdown *** Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including modules done *** Jul 22 08:24:12 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Installing kernel module dependencies and firmware *** Jul 22 08:24:13 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Installing kernel module dependencies and firmware done *** Jul 22 08:24:13 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Resolving executable dependencies *** Jul 22 08:24:13 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Resolving executable dependencies done*** Jul 22 08:24:13 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Hardlinking files *** Jul 22 08:24:13 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Hardlinking files done *** Jul 22 08:24:13 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Stripping files *** Jul 22 08:24:13 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Stripping files done *** Jul 22 08:24:13 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Generating early-microcode cpio image contents *** Jul 22 08:24:13 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Constructing GenuineIntel.bin **** Jul 22 08:24:13 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com 
dracut[1556]: *** Store current command line parameters *** Jul 22 08:24:13 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Creating image file *** Jul 22 08:24:13 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Creating microcode section *** Jul 22 08:24:13 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Created microcode section *** Jul 22 08:24:18 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Creating image file done *** Jul 22 08:24:18 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Creating initramfs image file '/boot/initramfs-3.10.0-1160.119.1.el7.x86_64kdump.img' done *** Jul 22 08:24:18 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com kdumpctl[1222]: kexec: loaded kdump kernel Jul 22 08:24:18 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com kdumpctl[1222]: Starting kdump: [OK] Jul 22 08:24:18 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Crash recovery kernel arming. -- Subject: Unit kdump.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit kdump.service has finished starting up. -- -- The start-up result is done. Jul 22 08:24:18 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Startup finished in 955ms (kernel) + 4.570s (initrd) + 26.738s (userspace) = 32.264s. -- Subject: System start-up is now complete -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- All system services necessary queued for starting at boot have been -- successfully started. Note that this does not mean that the machine is -- now idle as services might still be busy with completing start-up. -- -- Kernel start-up required 955430 microseconds. -- -- Initial RAM disk start-up required 4570471 microseconds. -- -- Userspace start-up required 26738444 microseconds. Jul 22 08:26:20 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[7773]: Accepted publickey for root from 10.30.33.122 port 60622 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 22 08:26:20 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Created slice User Slice of root. -- Subject: Unit user-0.slice has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit user-0.slice has finished starting up. -- -- The start-up result is done. Jul 22 08:26:20 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd-logind[505]: New session 1 of user root. -- Subject: A new session 1 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 1 has been created for the user root. -- -- The leading process of the session is 7773. Jul 22 08:26:20 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Session 1 of user root. -- Subject: Unit session-1.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-1.scope has finished starting up. -- -- The start-up result is done. 
Jul 22 08:26:20 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[7773]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:26:20 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[7773]: Received disconnect from 10.30.33.122 port 60622:11: disconnected by user Jul 22 08:26:20 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[7773]: Disconnected from 10.30.33.122 port 60622 Jul 22 08:26:20 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[7773]: pam_unix(sshd:session): session closed for user root Jul 22 08:26:20 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd-logind[505]: Removed session 1. -- Subject: Session 1 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 1 has been terminated. Jul 22 08:26:20 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Removed slice User Slice of root. -- Subject: Unit user-0.slice has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit user-0.slice has finished shutting down. Jul 22 08:26:25 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[7786]: reverse mapping checking getaddrinfo for b7034625-83bc-41d8-963c-73ccfab3bff8.testing-farm.us-east-1.aws.redhat.com [10.31.9.79] failed - POSSIBLE BREAK-IN ATTEMPT! Jul 22 08:26:25 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[7787]: reverse mapping checking getaddrinfo for b7034625-83bc-41d8-963c-73ccfab3bff8.testing-farm.us-east-1.aws.redhat.com [10.31.9.79] failed - POSSIBLE BREAK-IN ATTEMPT! Jul 22 08:26:25 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[7787]: Accepted publickey for root from 10.31.9.79 port 51394 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 22 08:26:25 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[7786]: Accepted publickey for root from 10.31.9.79 port 51378 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 22 08:26:25 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Created slice User Slice of root. -- Subject: Unit user-0.slice has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit user-0.slice has finished starting up. -- -- The start-up result is done. Jul 22 08:26:25 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Session 2 of user root. -- Subject: Unit session-2.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-2.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:26:25 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd-logind[505]: New session 2 of user root. -- Subject: A new session 2 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 2 has been created for the user root. -- -- The leading process of the session is 7786. Jul 22 08:26:25 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd-logind[505]: New session 3 of user root. 
-- Subject: A new session 3 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 3 has been created for the user root. -- -- The leading process of the session is 7787. Jul 22 08:26:25 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Session 3 of user root. -- Subject: Unit session-3.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-3.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:26:25 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[7786]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:26:25 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[7787]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:26:25 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[7787]: Received disconnect from 10.31.9.79 port 51394:11: disconnected by user Jul 22 08:26:25 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[7787]: Disconnected from 10.31.9.79 port 51394 Jul 22 08:26:25 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com sshd[7787]: pam_unix(sshd:session): session closed for user root Jul 22 08:26:25 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd-logind[505]: Removed session 3. -- Subject: Session 3 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 3 has been terminated. Jul 22 08:27:55 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com unknown: Running test '/Prepare-managed-node/tests/prep_managed_node' (serial number 1) with reboot count 0 and test restart count 0. (Be aware the test name is sanitized!) Jul 22 08:27:56 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dbus[501]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' Jul 22 08:27:56 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Hostname Service... -- Subject: Unit systemd-hostnamed.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-hostnamed.service has begun starting up. Jul 22 08:27:56 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com dbus[501]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 22 08:27:56 ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Hostname Service. -- Subject: Unit systemd-hostnamed.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-hostnamed.service has finished starting up. -- -- The start-up result is done. 
Jul 22 08:27:56 managed-node12 systemd-hostnamed[8782]: Changed static host name to 'managed-node12' Jul 22 08:27:56 managed-node12 systemd-hostnamed[8782]: Changed host name to 'managed-node12' Jul 22 08:27:56 managed-node12 NetworkManager[576]: [1753187276.1632] hostname: hostname changed from "ip-10-31-40-191.testing-farm.us-east-1.aws.redhat.com" to "managed-node12" Jul 22 08:27:56 managed-node12 dbus[501]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' Jul 22 08:27:56 managed-node12 systemd[1]: Starting Network Manager Script Dispatcher Service... -- Subject: Unit NetworkManager-dispatcher.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager-dispatcher.service has begun starting up. Jul 22 08:27:56 managed-node12 NetworkManager[576]: [1753187276.1703] policy: set-hostname: set hostname to 'managed-node12' (from system configuration) Jul 22 08:27:56 managed-node12 dbus[501]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Jul 22 08:27:56 managed-node12 systemd[1]: Started Network Manager Script Dispatcher Service. -- Subject: Unit NetworkManager-dispatcher.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager-dispatcher.service has finished starting up. -- -- The start-up result is done. Jul 22 08:27:56 managed-node12 nm-dispatcher[8784]: req:1 'hostname': new request (4 scripts) Jul 22 08:27:56 managed-node12 nm-dispatcher[8784]: req:1 'hostname': start running ordered scripts... Jul 22 08:27:56 managed-node12 nm-dispatcher[8784]: req:2 'hostname': new request (4 scripts) Jul 22 08:27:56 managed-node12 nm-dispatcher[8784]: req:2 'hostname': start running ordered scripts... Jul 22 08:27:56 managed-node12 unknown: Leaving test '/Prepare-managed-node/tests/prep_managed_node' (serial number 1). (Be aware the test name is sanitized!) Jul 22 08:28:59 managed-node12 sshd[9585]: Accepted publickey for root from 10.31.42.107 port 47634 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:28:59 managed-node12 systemd-logind[505]: New session 4 of user root. -- Subject: A new session 4 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 4 has been created for the user root. -- -- The leading process of the session is 9585. Jul 22 08:28:59 managed-node12 sshd[9585]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:28:59 managed-node12 systemd[1]: Started Session 4 of user root. -- Subject: Unit session-4.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-4.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:28:59 managed-node12 sshd[9585]: Received disconnect from 10.31.42.107 port 47634:11: disconnected by user Jul 22 08:28:59 managed-node12 sshd[9585]: Disconnected from 10.31.42.107 port 47634 Jul 22 08:28:59 managed-node12 sshd[9585]: pam_unix(sshd:session): session closed for user root Jul 22 08:28:59 managed-node12 systemd-logind[505]: Removed session 4. 
-- Subject: Session 4 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 4 has been terminated. Jul 22 08:29:00 managed-node12 sshd[9594]: Accepted publickey for root from 10.31.42.107 port 47636 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:00 managed-node12 systemd-logind[505]: New session 5 of user root. -- Subject: A new session 5 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 5 has been created for the user root. -- -- The leading process of the session is 9594. Jul 22 08:29:00 managed-node12 systemd[1]: Started Session 5 of user root. -- Subject: Unit session-5.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-5.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:29:00 managed-node12 sshd[9594]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:29:00 managed-node12 sshd[9594]: Received disconnect from 10.31.42.107 port 47636:11: disconnected by user Jul 22 08:29:00 managed-node12 sshd[9594]: Disconnected from 10.31.42.107 port 47636 Jul 22 08:29:00 managed-node12 sshd[9594]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:00 managed-node12 systemd-logind[505]: Removed session 5. -- Subject: Session 5 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 5 has been terminated. Jul 22 08:29:17 managed-node12 sshd[9605]: Accepted publickey for root from 10.31.42.107 port 47700 ssh2: ECDSA SHA256:VQ9XK4k6Vt8UPOycnISNyAPDgpYR9n+H9XDBRYiSHdA Jul 22 08:29:17 managed-node12 systemd-logind[505]: New session 6 of user root. -- Subject: A new session 6 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 6 has been created for the user root. -- -- The leading process of the session is 9605. Jul 22 08:29:17 managed-node12 systemd[1]: Started Session 6 of user root. -- Subject: Unit session-6.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-6.scope has finished starting up. -- -- The start-up result is done. 
Jul 22 08:29:17 managed-node12 sshd[9605]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:29:18 managed-node12 sudo[9662]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ztyznyfvhzymowzctsmnljeatlbqiatw ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187357.98-10730-82406003385936/AnsiballZ_setup.py Jul 22 08:29:18 managed-node12 sudo[9662]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:18 managed-node12 ansible-setup[9665]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:29:18 managed-node12 sudo[9662]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:19 managed-node12 sudo[9739]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bsygmtyrdgojxkttydugwcdixmvxdrjh ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187359.5-10825-276467645390014/AnsiballZ_stat.py Jul 22 08:29:19 managed-node12 sudo[9739]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:19 managed-node12 ansible-stat[9742]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/run/ostree-booted get_md5=False get_mime=True get_attributes=True Jul 22 08:29:19 managed-node12 sudo[9739]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:20 managed-node12 sudo[9791]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-petyeargzcaxfcfizizeitvugidkkaud ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187360.35-10875-122290133405489/AnsiballZ_yum.py Jul 22 08:29:20 managed-node12 sudo[9791]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:20 managed-node12 ansible-yum[9794]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['python-enum34', 'python-blivet3', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'libblockdev'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:29:34 managed-node12 yum[9837]: Installed: libblockdev-utils-2.18-5.el7.x86_64 Jul 22 08:29:34 managed-node12 yum[9837]: Installed: 7:device-mapper-event-libs-1.02.170-6.el7_9.5.x86_64 Jul 22 08:29:35 managed-node12 yum[9837]: Installed: libsolv-0.6.34-4.el7.x86_64 Jul 22 08:29:35 managed-node12 yum[9837]: Installed: libaio-0.3.109-13.el7.x86_64 Jul 22 08:29:35 managed-node12 yum[9837]: Installed: librepo-1.8.1-8.el7_9.x86_64 Jul 22 08:29:35 managed-node12 yum[9837]: Installed: libmodulemd-1.6.3-1.el7.x86_64 Jul 22 08:29:35 managed-node12 yum[9837]: Installed: libdnf-0.22.5-2.el7_9.x86_64 Jul 22 08:29:35 managed-node12 yum[9837]: Installed: device-mapper-persistent-data-0.8.5-3.el7_9.2.x86_64 Jul 22 08:29:35 managed-node12 systemd[1]: Reloading. Jul 22 08:29:35 managed-node12 systemd[1]: Reloading. Jul 22 08:29:35 managed-node12 systemd[1]: Listening on Device-mapper event daemon FIFOs. -- Subject: Unit dm-event.socket has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit dm-event.socket has finished starting up. 
-- -- The start-up result is done. Jul 22 08:29:35 managed-node12 yum[9837]: Installed: 7:device-mapper-event-1.02.170-6.el7_9.5.x86_64 Jul 22 08:29:35 managed-node12 yum[9837]: Installed: libbytesize-1.2-1.el7.x86_64 Jul 22 08:29:35 managed-node12 yum[9837]: Installed: python2-bytesize-1.2-1.el7.x86_64 Jul 22 08:29:35 managed-node12 yum[9837]: Installed: 7:lvm2-libs-2.02.187-6.el7_9.5.x86_64 Jul 22 08:29:35 managed-node12 systemd[1]: Reloading. Jul 22 08:29:35 managed-node12 systemd[1]: Reloading. Jul 22 08:29:36 managed-node12 systemd[1]: Listening on LVM2 metadata daemon socket. -- Subject: Unit lvm2-lvmetad.socket has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit lvm2-lvmetad.socket has finished starting up. -- -- The start-up result is done. Jul 22 08:29:36 managed-node12 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... -- Subject: Unit lvm2-monitor.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit lvm2-monitor.service has begun starting up. Jul 22 08:29:36 managed-node12 systemd[1]: Started LVM2 metadata daemon. -- Subject: Unit lvm2-lvmetad.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit lvm2-lvmetad.service has finished starting up. -- -- The start-up result is done. Jul 22 08:29:36 managed-node12 systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. -- Subject: Unit lvm2-monitor.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit lvm2-monitor.service has finished starting up. -- -- The start-up result is done. Jul 22 08:29:36 managed-node12 systemd[1]: Reloading. Jul 22 08:29:36 managed-node12 systemd[1]: Reloading. Jul 22 08:29:36 managed-node12 systemd[1]: Reloading. Jul 22 08:29:36 managed-node12 systemd[1]: Reloading. Jul 22 08:29:36 managed-node12 systemd[1]: Listening on LVM2 poll daemon socket. -- Subject: Unit lvm2-lvmpolld.socket has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit lvm2-lvmpolld.socket has finished starting up. -- -- The start-up result is done. Jul 22 08:29:36 managed-node12 yum[9837]: Installed: 7:lvm2-2.02.187-6.el7_9.5.x86_64 Jul 22 08:29:36 managed-node12 yum[9837]: Installed: python2-libdnf-0.22.5-2.el7_9.x86_64 Jul 22 08:29:36 managed-node12 yum[9837]: Installed: python2-hawkey-0.22.5-2.el7_9.x86_64 Jul 22 08:29:36 managed-node12 yum[9837]: Installed: libblockdev-2.18-5.el7.x86_64 Jul 22 08:29:36 managed-node12 yum[9837]: Installed: python2-blockdev-2.18-5.el7.x86_64 Jul 22 08:29:36 managed-node12 yum[9837]: Installed: 1:pyparted-3.9-15.el7.x86_64 Jul 22 08:29:36 managed-node12 yum[9837]: Installed: sgpio-1.2.0.10-13.el7.x86_64 Jul 22 08:29:36 managed-node12 systemd[1]: Reloading. Jul 22 08:29:36 managed-node12 yum[9837]: Installed: dmraid-1.0.0.rc16-28.el7.x86_64 Jul 22 08:29:36 managed-node12 yum[9837]: Installed: dmraid-events-1.0.0.rc16-28.el7.x86_64 Jul 22 08:29:36 managed-node12 yum[9837]: Installed: volume_key-libs-0.3.9-9.el7.x86_64 Jul 22 08:29:36 managed-node12 yum[9837]: Installed: libreport-filesystem-2.1.11-53.el7.centos.x86_64 Jul 22 08:29:36 managed-node12 systemd[1]: Reloading. 
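The ansible-yum call logged above (ansible-yum[9794], state=present) pulls in the blivet and libblockdev dependencies, and the surrounding yum[9837]: Installed: entries record each package as it lands. A minimal task that would produce an equivalent invocation looks roughly like the sketch below; the package list is copied from the logged call, the task name is illustrative, and every other module option is left at its default.

    - name: Install blivet and libblockdev dependencies (illustrative sketch)
      yum:
        state: present
        name:
          - python-enum34
          - python-blivet3
          - libblockdev-crypto
          - libblockdev-dm
          - libblockdev-lvm
          - libblockdev-mdraid
          - libblockdev-swap
          - libblockdev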
Jul 22 08:29:36 managed-node12 yum[9837]: Installed: mdadm-4.1-9.el7_9.x86_64 Jul 22 08:29:36 managed-node12 dbus[501]: [system] Reloaded configuration Jul 22 08:29:36 managed-node12 dbus[501]: [system] Reloaded configuration Jul 22 08:29:36 managed-node12 dbus[501]: [system] Reloaded configuration Jul 22 08:29:36 managed-node12 yum[9837]: Installed: 1:blivet3-data-3.1.3-3.el7.noarch Jul 22 08:29:36 managed-node12 yum[9837]: Installed: lsof-4.87-6.el7.x86_64 Jul 22 08:29:37 managed-node12 yum[9837]: Installed: 1:python2-blivet3-3.1.3-3.el7.noarch Jul 22 08:29:37 managed-node12 yum[9837]: Installed: libblockdev-mdraid-2.18-5.el7.x86_64 Jul 22 08:29:37 managed-node12 yum[9837]: Installed: libblockdev-crypto-2.18-5.el7.x86_64 Jul 22 08:29:37 managed-node12 yum[9837]: Installed: libblockdev-dm-2.18-5.el7.x86_64 Jul 22 08:29:37 managed-node12 yum[9837]: Installed: libblockdev-lvm-2.18-5.el7.x86_64 Jul 22 08:29:37 managed-node12 yum[9837]: Installed: libblockdev-swap-2.18-5.el7.x86_64 Jul 22 08:29:37 managed-node12 yum[9837]: Installed: python-enum34-1.0.4-1.el7.noarch Jul 22 08:29:37 managed-node12 sudo[9791]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:39 managed-node12 sudo[10124]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qqrnrwaokzonhmydoxxwghjcdbixhwzb ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187378.54-12234-224916127537624/AnsiballZ_blivet.py Jul 22 08:29:39 managed-node12 sudo[10124]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:40 managed-node12 ansible-fedora.linux_system_roles.blivet[10127]: Invoked with packages_only=True uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:29:40 managed-node12 sudo[10124]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:41 managed-node12 sudo[10182]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-okczxfqngrviihgfrmuyaaibslflkzsq ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187381.44-12421-17670071717558/AnsiballZ_yum.py Jul 22 08:29:41 managed-node12 sudo[10182]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:41 managed-node12 ansible-yum[10185]: Invoked with 
lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['kpartx'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:29:42 managed-node12 sudo[10182]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:43 managed-node12 sudo[10238]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ccelcbnmooaeajzwpvnsotbocjidjxos ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187382.38-12554-34034073082156/AnsiballZ_service_facts.py Jul 22 08:29:43 managed-node12 sudo[10238]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:43 managed-node12 ansible-service_facts[10241]: Invoked Jul 22 08:29:43 managed-node12 sudo[10238]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:45 managed-node12 sudo[10403]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ifjehihbxdnrhwildegrbtwpyixuwvpo ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187384.81-12794-189849665012883/AnsiballZ_blivet.py Jul 22 08:29:45 managed-node12 sudo[10403]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:45 managed-node12 ansible-fedora.linux_system_roles.blivet[10406]: Invoked with packages_only=False uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={'ext4': '-F', 'ext3': '-F', 'ext2': '-F'} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:29:45 managed-node12 sudo[10403]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:46 managed-node12 sudo[10461]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-korccvmvkwhbmgwmgtrqduqpecbxvsju ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187386.06-12936-104773501559087/AnsiballZ_stat.py Jul 22 08:29:46 managed-node12 sudo[10461]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:46 managed-node12 ansible-stat[10464]: Invoked with checksum_algorithm=sha1 
get_checksum=True follow=False path=/etc/fstab get_md5=False get_mime=True get_attributes=True Jul 22 08:29:46 managed-node12 sudo[10461]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:48 managed-node12 sudo[10515]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pvgahqfkiooxogwpwiuxmlpwfzplznco ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187388.29-13230-20186914232299/AnsiballZ_stat.py Jul 22 08:29:48 managed-node12 sudo[10515]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:48 managed-node12 ansible-stat[10518]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/crypttab get_md5=False get_mime=True get_attributes=True Jul 22 08:29:48 managed-node12 sudo[10515]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:49 managed-node12 sudo[10569]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xjpiewvhzysqbyjqhelzxavchlxmahvr ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187389.44-13322-151072817404070/AnsiballZ_setup.py Jul 22 08:29:49 managed-node12 sudo[10569]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:50 managed-node12 ansible-setup[10572]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:29:50 managed-node12 sudo[10569]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:52 managed-node12 sudo[10652]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ugydqrdgzavkbghsbuulhbkxcdgczxbw ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187392.27-13476-107809868324232/AnsiballZ_yum.py Jul 22 08:29:52 managed-node12 sudo[10652]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:53 managed-node12 ansible-yum[10655]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['util-linux'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:29:53 managed-node12 sudo[10652]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:55 managed-node12 sudo[10708]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hunjibflkocwxfsbmyjjhmsxjkwadspq ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187393.63-13726-66748761657150/AnsiballZ_find_unused_disk.py Jul 22 08:29:55 managed-node12 sudo[10708]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:55 managed-node12 ansible-fedora.linux_system_roles.find_unused_disk[10711]: Invoked with min_size=5g max_return=1 max_size=0 with_interface=None match_sector_size=False Jul 22 08:29:55 managed-node12 sudo[10708]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:57 managed-node12 sudo[10762]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iemdaidalhesbqfotpilswbuwuhghoyc ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187396.01-13904-87748334206181/AnsiballZ_command.py Jul 22 08:29:57 managed-node12 sudo[10762]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:57 managed-node12 
ansible-command[10765]: Invoked with creates=None executable=None _uses_shell=True strip_empty_ends=True _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None Jul 22 08:29:57 managed-node12 sudo[10762]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:59 managed-node12 sshd[10776]: Accepted publickey for root from 10.31.42.107 port 47772 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:59 managed-node12 systemd-logind[505]: New session 7 of user root. -- Subject: A new session 7 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 7 has been created for the user root. -- -- The leading process of the session is 10776. Jul 22 08:29:59 managed-node12 systemd[1]: Started Session 7 of user root. -- Subject: Unit session-7.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-7.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:29:59 managed-node12 sshd[10776]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:29:59 managed-node12 sshd[10776]: Received disconnect from 10.31.42.107 port 47772:11: disconnected by user Jul 22 08:29:59 managed-node12 sshd[10776]: Disconnected from 10.31.42.107 port 47772 Jul 22 08:29:59 managed-node12 sshd[10776]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:59 managed-node12 systemd-logind[505]: Removed session 7. -- Subject: Session 7 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 7 has been terminated. Jul 22 08:30:00 managed-node12 sshd[10786]: Accepted publickey for root from 10.31.42.107 port 47776 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:00 managed-node12 systemd[1]: Started Session 8 of user root. -- Subject: Unit session-8.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-8.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:30:00 managed-node12 systemd-logind[505]: New session 8 of user root. -- Subject: A new session 8 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 8 has been created for the user root. -- -- The leading process of the session is 10786. Jul 22 08:30:00 managed-node12 sshd[10786]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:30:00 managed-node12 sshd[10786]: Received disconnect from 10.31.42.107 port 47776:11: disconnected by user Jul 22 08:30:00 managed-node12 sshd[10786]: Disconnected from 10.31.42.107 port 47776 Jul 22 08:30:00 managed-node12 sshd[10786]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:00 managed-node12 systemd-logind[505]: Removed session 8. 
-- Subject: Session 8 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 8 has been terminated. Jul 22 08:30:10 managed-node12 sshd[10797]: Accepted publickey for root from 10.31.42.107 port 47794 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:11 managed-node12 systemd-logind[505]: New session 9 of user root. -- Subject: A new session 9 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 9 has been created for the user root. -- -- The leading process of the session is 10797. Jul 22 08:30:11 managed-node12 systemd[1]: Started Session 9 of user root. -- Subject: Unit session-9.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-9.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:30:11 managed-node12 sshd[10797]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:30:11 managed-node12 sshd[10797]: Received disconnect from 10.31.42.107 port 47794:11: disconnected by user Jul 22 08:30:11 managed-node12 sshd[10797]: Disconnected from 10.31.42.107 port 47794 Jul 22 08:30:11 managed-node12 sshd[10797]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:11 managed-node12 systemd-logind[505]: Removed session 9. -- Subject: Session 9 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 9 has been terminated. 
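In the run above the blivet module is invoked twice, first with packages_only=True and then with packages_only=False, both times with empty pools and volumes lists, after which find_unused_disk is asked for a single disk of at least 5g. When a play asks the role to put, for example, a swap filesystem on such a disk, the driving variables would look roughly like the sketch below; the role name is taken from the module prefix seen in the log, while the volume name, the unused_disks variable, and fs_type=swap are illustrative assumptions rather than values taken from this log.

    - name: Drive the storage role with an example volume (illustrative sketch)
      include_role:
        name: fedora.linux_system_roles.storage
      vars:
        storage_volumes:
          - name: test1                     # hypothetical volume name
            type: disk
            disks: "{{ unused_disks }}"     # assumed to hold the disk found by find_unused_disk
            fs_type: swap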
Jul 22 08:30:20 managed-node12 sudo[10861]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jvrttquedewibnysxlweyyakvbldstzc ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187417.36-16048-180153636723329/AnsiballZ_setup.py Jul 22 08:30:20 managed-node12 sudo[10861]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:20 managed-node12 ansible-setup[10864]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:30:21 managed-node12 sudo[10861]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:23 managed-node12 sudo[10944]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iptfczxcttfmysflnoyfqurufcqfpjwl ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187422.27-16651-230992089069696/AnsiballZ_stat.py Jul 22 08:30:23 managed-node12 sudo[10944]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:23 managed-node12 ansible-stat[10947]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/run/ostree-booted get_md5=False get_mime=True get_attributes=True Jul 22 08:30:23 managed-node12 sudo[10944]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:27 managed-node12 sudo[10996]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dbcszabstxumahyaxvdcinrjltzoabxn ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187425.52-16990-20232595419963/AnsiballZ_yum.py Jul 22 08:30:27 managed-node12 sudo[10996]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:27 managed-node12 ansible-yum[10999]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['python-enum34', 'python-blivet3', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'libblockdev'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:30:31 managed-node12 sudo[10996]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:34 managed-node12 sudo[11073]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sqvemkobvltzhqtqeqtzirdyjddltidb ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187432.41-17648-112843223142554/AnsiballZ_blivet.py Jul 22 08:30:34 managed-node12 sudo[11073]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:34 managed-node12 ansible-fedora.linux_system_roles.blivet[11076]: Invoked with packages_only=True uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 
'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=True Jul 22 08:30:34 managed-node12 sudo[11073]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:37 managed-node12 sudo[11131]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rntfgguargnzgaolyeohmmhqbybmaxgm ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187436.33-18147-6298477086869/AnsiballZ_yum.py Jul 22 08:30:37 managed-node12 sudo[11131]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:37 managed-node12 ansible-yum[11134]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['kpartx'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:30:37 managed-node12 sudo[11131]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:39 managed-node12 sudo[11187]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fpqtufswcnzmdmlffndvyzipcwgdwsdk ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187437.78-18293-139962615953973/AnsiballZ_service_facts.py Jul 22 08:30:39 managed-node12 sudo[11187]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:39 managed-node12 ansible-service_facts[11190]: Invoked Jul 22 08:30:40 managed-node12 sudo[11187]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:42 managed-node12 sudo[11352]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fzehwxdxkpwbvsnqnwyoregqsioxohdg ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187442.05-18605-7434516341377/AnsiballZ_blivet.py Jul 22 08:30:42 managed-node12 sudo[11352]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:42 managed-node12 ansible-fedora.linux_system_roles.blivet[11355]: Invoked with packages_only=False uses_kmod_kvdo=True disklabel_type=None safe_mode=False diskvolume_mkfs_option_map={'ext4': '-F', 'ext3': '-F', 'ext2': '-F'} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 
'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=True Jul 22 08:30:42 managed-node12 sudo[11352]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:44 managed-node12 sudo[11410]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qjmredcvukpvqasbrcduqrvcdvuvjykn ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187443.82-18748-44894670378566/AnsiballZ_stat.py Jul 22 08:30:44 managed-node12 sudo[11410]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:44 managed-node12 ansible-stat[11413]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/fstab get_md5=False get_mime=True get_attributes=True Jul 22 08:30:44 managed-node12 sudo[11410]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:48 managed-node12 sudo[11464]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wyrtrnfuhiunytcnrnmyjdkqmbndqsnu ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187447.86-19125-28333838716623/AnsiballZ_stat.py Jul 22 08:30:48 managed-node12 sudo[11464]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:48 managed-node12 ansible-stat[11467]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/crypttab get_md5=False get_mime=True get_attributes=True Jul 22 08:30:48 managed-node12 sudo[11464]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:50 managed-node12 sudo[11518]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yccvruzjnvuddtpgatvygfsazoccwuuo ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187449.4-19264-235634989173859/AnsiballZ_setup.py Jul 22 08:30:50 managed-node12 sudo[11518]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:50 managed-node12 ansible-setup[11521]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:30:50 managed-node12 sudo[11518]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:55 managed-node12 sudo[11601]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjvpbkhpmcsrhpkvjmepiwnwuwlrkqoi ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187451.2-19462-37945010486140/AnsiballZ_package_facts.py Jul 22 08:30:55 managed-node12 sudo[11601]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:55 managed-node12 ansible-package_facts[11604]: Invoked with manager=['auto'] strategy=first Jul 22 08:30:55 managed-node12 sudo[11601]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:04 managed-node12 sudo[11655]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cgxrwaoiiqlxbipgiabgtxxlbgbdjiej ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187462.39-20364-182079313735200/AnsiballZ_command.py Jul 22 08:31:04 
managed-node12 sudo[11655]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:04 managed-node12 ansible-command[11658]: Invoked with creates=None executable=None _uses_shell=False strip_empty_ends=True _raw_params=modprobe --dry-run kvdo removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None Jul 22 08:31:04 managed-node12 sudo[11655]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:06 managed-node12 sudo[11707]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xlhrublxukwnbiknlxigwpmneklxakfh ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187465.73-20672-243493274236485/AnsiballZ_command.py Jul 22 08:31:06 managed-node12 sudo[11707]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:06 managed-node12 ansible-command[11710]: Invoked with creates=None executable=None _uses_shell=False strip_empty_ends=True _raw_params=modprobe --dry-run dm-vdo removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None Jul 22 08:31:06 managed-node12 sudo[11707]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:16 managed-node12 sshd[11719]: Accepted publickey for root from 10.31.42.107 port 47928 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:31:16 managed-node12 systemd-logind[505]: New session 10 of user root. -- Subject: A new session 10 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 10 has been created for the user root. -- -- The leading process of the session is 11719. Jul 22 08:31:16 managed-node12 systemd[1]: Started Session 10 of user root. -- Subject: Unit session-10.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-10.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:31:16 managed-node12 sshd[11719]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:31:16 managed-node12 sshd[11719]: Received disconnect from 10.31.42.107 port 47928:11: disconnected by user Jul 22 08:31:16 managed-node12 sshd[11719]: Disconnected from 10.31.42.107 port 47928 Jul 22 08:31:16 managed-node12 sshd[11719]: pam_unix(sshd:session): session closed for user root Jul 22 08:31:16 managed-node12 systemd-logind[505]: Removed session 10. -- Subject: Session 10 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 10 has been terminated. Jul 22 08:31:24 managed-node12 sshd[11729]: Accepted publickey for root from 10.31.42.107 port 47946 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:31:25 managed-node12 systemd[1]: Started Session 11 of user root. -- Subject: Unit session-11.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-11.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:31:25 managed-node12 systemd-logind[505]: New session 11 of user root. 
-- Subject: A new session 11 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 11 has been created for the user root. -- -- The leading process of the session is 11729. Jul 22 08:31:25 managed-node12 sshd[11729]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:31:25 managed-node12 sshd[11729]: Received disconnect from 10.31.42.107 port 47946:11: disconnected by user Jul 22 08:31:25 managed-node12 sshd[11729]: Disconnected from 10.31.42.107 port 47946 Jul 22 08:31:25 managed-node12 sshd[11729]: pam_unix(sshd:session): session closed for user root Jul 22 08:31:25 managed-node12 systemd-logind[505]: Removed session 11. -- Subject: Session 11 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 11 has been terminated. Jul 22 08:31:35 managed-node12 sudo[11793]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kddhqnstejmtikqhljrjinuoufboaahv ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187490.33-23250-151593431846837/AnsiballZ_setup.py Jul 22 08:31:35 managed-node12 sudo[11793]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:35 managed-node12 ansible-setup[11796]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:31:35 managed-node12 sudo[11793]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:41 managed-node12 sudo[11876]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qushspdlcdhydhjtqdghxaqhicpnewtt ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187499.38-23939-73678596900722/AnsiballZ_stat.py Jul 22 08:31:41 managed-node12 sudo[11876]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:41 managed-node12 ansible-stat[11879]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/run/ostree-booted get_md5=False get_mime=True get_attributes=True Jul 22 08:31:41 managed-node12 sudo[11876]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:48 managed-node12 sudo[11928]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wgjumcqecmkcspyposujezaalifnaqae ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187505.33-24356-46899576590525/AnsiballZ_yum.py Jul 22 08:31:48 managed-node12 sudo[11928]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:48 managed-node12 ansible-yum[11931]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['python-enum34', 'python-blivet3', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'libblockdev'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:31:52 managed-node12 sudo[11928]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:55 managed-node12 
sudo[12005]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cphhxytckafglpplmxwdrwzfdgqctbaq ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187513.97-25204-210083074677830/AnsiballZ_blivet.py Jul 22 08:31:55 managed-node12 sudo[12005]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:55 managed-node12 ansible-fedora.linux_system_roles.blivet[12008]: Invoked with packages_only=True uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:31:55 managed-node12 sudo[12005]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:59 managed-node12 sudo[12063]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yjjpsnzbegjtocspclbiublnldxkkwbo ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187518.62-25695-106744797056522/AnsiballZ_yum.py Jul 22 08:31:59 managed-node12 sudo[12063]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:59 managed-node12 ansible-yum[12066]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['kpartx'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:31:59 managed-node12 sudo[12063]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:01 managed-node12 sudo[12119]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ayaqhxqnuxyroovxtodrtrlxvzznqxgm ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187520.04-26103-167884736633912/AnsiballZ_service_facts.py Jul 22 08:32:01 managed-node12 sudo[12119]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:01 managed-node12 ansible-service_facts[12122]: Invoked Jul 22 08:32:02 managed-node12 sudo[12119]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:04 managed-node12 sudo[12284]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 
echo BECOME-SUCCESS-szvoxnrploaoyhcznrqtpxorladnnshy ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187523.89-26552-207539694928655/AnsiballZ_blivet.py Jul 22 08:32:04 managed-node12 sudo[12284]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:04 managed-node12 ansible-fedora.linux_system_roles.blivet[12287]: Invoked with packages_only=False uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={'ext4': '-F', 'ext3': '-F', 'ext2': '-F'} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:32:04 managed-node12 sudo[12284]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:06 managed-node12 sudo[12342]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djnyzfclarejjsospsjdlcdusubgcaky ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187525.7-26797-10650294757252/AnsiballZ_stat.py Jul 22 08:32:06 managed-node12 sudo[12342]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:06 managed-node12 ansible-stat[12345]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/fstab get_md5=False get_mime=True get_attributes=True Jul 22 08:32:06 managed-node12 sudo[12342]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:10 managed-node12 sudo[12396]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ouwmrpvuegkybzpjgajtkpvudxmmoilm ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187529.35-27289-54497353508751/AnsiballZ_stat.py Jul 22 08:32:10 managed-node12 sudo[12396]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:10 managed-node12 ansible-stat[12399]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/crypttab get_md5=False get_mime=True get_attributes=True Jul 22 08:32:10 managed-node12 sudo[12396]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:11 managed-node12 sudo[12450]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aopojgcphhjracqmfmrorusubbeeyojr ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187530.98-27421-62145316656403/AnsiballZ_setup.py Jul 22 08:32:11 managed-node12 sudo[12450]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 
08:32:11 managed-node12 ansible-setup[12453]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:32:12 managed-node12 sudo[12450]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:14 managed-node12 sudo[12533]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-snejtqvqhhwsggvuniaslhljpghdzwle ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187533.5-27654-204376602529535/AnsiballZ_blivet.py Jul 22 08:32:14 managed-node12 sudo[12533]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:14 managed-node12 ansible-fedora.linux_system_roles.blivet[12536]: Invoked with packages_only=True uses_kmod_kvdo=False disklabel_type=None volumes=[] diskvolume_mkfs_option_map={} safe_mode=True pools=[{'shared': None, 'raid_device_count': None, 'grow_to_fill': None, 'name': 'foo', 'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_key_size': None, 'disks': [], 'encryption_key': None, 'encryption_luks_version': None, 'raid_spare_count': None, 'state': 'present', 'volumes': [{'fs_type': 'xfs', 'mount_options': None, 'size': None, 'mount_point': '/foo', 'compression': None, 'encryption_password': None, 'encryption': False, 'raid_level': None, 'state': 'present', 'vdo_pool_size': None, 'mount_mode': None, 'thin_pool_name': None, 'type': 'lvm', 'encryption_cipher': None, 'deduplication': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': None, 'encryption_luks_version': None, 'raid_stripe_size': None, 'mount_user': None, 'raid_disks': [], 'cache_mode': None, 'name': 'test1', 'mount_group': None, 'thin_pool_size': None, 'cached': None, 'thin': False, 'cache_size': None, 'cache_devices': [], 'fs_create_options': None}], 'encryption_tang_url': None, 'encryption_cipher': None, 'raid_chunk_size': None, 'encryption_clevis_pin': None, 'type': 'lvm', 'raid_level': None, 'encryption_tang_thumbprint': None}] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:32:14 managed-node12 systemd-udevd[366]: Network interface NamePolicy= disabled on kernel command line, ignoring. 
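The blivet call just above (packages_only=True, a pool named 'foo' of type lvm holding a volume 'test1' with fs_type xfs and mount_point /foo) is the package-resolution pass the storage role runs before it changes anything. As a minimal sketch only, assuming the collection's documented storage_pools interface and an illustrative unused_disks variable for the disk list, a role invocation that produces a call like this would look roughly like:

    - name: Create LVM pool foo with an xfs volume mounted at /foo (illustrative sketch)
      include_role:
        name: fedora.linux_system_roles.storage
      vars:
        storage_pools:
          - name: foo
            disks: "{{ unused_disks }}"   # hypothetical variable holding previously discovered disks
            volumes:
              - name: test1
                fs_type: xfs
                mount_point: /foo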
Jul 22 08:32:17 managed-node12 kernel: SGI XFS with ACLs, security attributes, no debug enabled Jul 22 08:32:17 managed-node12 sudo[12533]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:19 managed-node12 sudo[12607]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-waglhotbleqwjwigjswdqukvxbmrfofz ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187538.9-28211-5876962623764/AnsiballZ_blivet.py Jul 22 08:32:19 managed-node12 sudo[12607]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:19 managed-node12 ansible-fedora.linux_system_roles.blivet[12610]: Invoked with packages_only=True uses_kmod_kvdo=False disklabel_type=None volumes=[{'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'ext4', 'mount_options': None, 'size': None, 'mount_point': None, 'encryption_password': None, 'encryption': False, 'raid_chunk_size': None, 'raid_device_count': None, 'state': 'present', 'mount_mode': None, 'type': 'disk', 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': None, 'encryption_luks_version': None, 'mount_user': None, 'raid_spare_count': None, 'name': 'foo', 'mount_group': None, 'disks': [], 'fs_create_options': None}] diskvolume_mkfs_option_map={} safe_mode=True pools=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:32:23 managed-node12 sudo[12607]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:24 managed-node12 sudo[12673]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bgjimjxgxclfsxzfgfmtjqjawwmpqynj ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187544.32-28578-160712093182076/AnsiballZ_blivet.py Jul 22 08:32:24 managed-node12 sudo[12673]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:25 managed-node12 ansible-fedora.linux_system_roles.blivet[12676]: Invoked with packages_only=True uses_kmod_kvdo=False disklabel_type=None volumes=[{'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'swap', 'mount_options': None, 'size': None, 'mount_point': None, 'encryption_password': None, 'encryption': False, 'raid_chunk_size': None, 'raid_device_count': None, 'state': 'present', 'mount_mode': None, 'type': 'disk', 'encryption_cipher': None, 'encryption_key_size': None, 
'encryption_key': None, 'fs_label': None, 'encryption_luks_version': None, 'mount_user': None, 'raid_spare_count': None, 'name': 'foo', 'mount_group': None, 'disks': [], 'fs_create_options': None}] diskvolume_mkfs_option_map={} safe_mode=True pools=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:32:28 managed-node12 sudo[12673]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:31 managed-node12 sshd[12699]: Accepted publickey for root from 10.31.42.107 port 48086 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:32:31 managed-node12 systemd-logind[505]: New session 12 of user root. -- Subject: A new session 12 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 12 has been created for the user root. -- -- The leading process of the session is 12699. Jul 22 08:32:31 managed-node12 systemd[1]: Started Session 12 of user root. -- Subject: Unit session-12.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-12.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:32:31 managed-node12 sshd[12699]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:32:31 managed-node12 sshd[12699]: Received disconnect from 10.31.42.107 port 48086:11: disconnected by user Jul 22 08:32:31 managed-node12 sshd[12699]: Disconnected from 10.31.42.107 port 48086 Jul 22 08:32:31 managed-node12 sshd[12699]: pam_unix(sshd:session): session closed for user root Jul 22 08:32:31 managed-node12 systemd-logind[505]: Removed session 12. -- Subject: Session 12 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 12 has been terminated. 
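The blivet invocation recorded just above describes a single whole-disk volume named 'foo' with type disk and fs_type swap, i.e. the kind of configuration a swap-management test requests. A minimal sketch of such a role invocation, again assuming the documented storage_volumes interface; unused_disks is illustrative:

    - name: Format an unused disk as swap (illustrative sketch)
      include_role:
        name: fedora.linux_system_roles.storage
      vars:
        storage_volumes:
          - name: foo
            type: disk
            disks: "{{ unused_disks }}"   # hypothetical: disks found earlier with find_unused_disk
            fs_type: swap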
Jul 22 08:32:40 managed-node12 ansible-setup[12763]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:32:43 managed-node12 ansible-setup[12843]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:33:13 managed-node12 ansible-stat[12923]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/run/ostree-booted get_md5=False get_mime=True get_attributes=True Jul 22 08:33:20 managed-node12 ansible-yum[12972]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['python-enum34', 'python-blivet3', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'libblockdev'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:33:27 managed-node12 ansible-fedora.linux_system_roles.blivet[13046]: Invoked with packages_only=True uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:33:31 managed-node12 ansible-yum[13101]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['kpartx'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:33:33 managed-node12 ansible-service_facts[13154]: Invoked Jul 22 08:33:37 managed-node12 ansible-fedora.linux_system_roles.blivet[13316]: Invoked with packages_only=False uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={'ext4': '-F', 'ext3': '-F', 'ext2': '-F'} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': 
False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:33:39 managed-node12 ansible-stat[13371]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/fstab get_md5=False get_mime=True get_attributes=True Jul 22 08:33:45 managed-node12 ansible-stat[13422]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/crypttab get_md5=False get_mime=True get_attributes=True Jul 22 08:33:47 managed-node12 ansible-setup[13473]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:33:49 managed-node12 sshd[13513]: Accepted publickey for root from 10.31.42.107 port 48238 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:33:49 managed-node12 systemd-logind[505]: New session 13 of user root. -- Subject: A new session 13 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 13 has been created for the user root. -- -- The leading process of the session is 13513. Jul 22 08:33:49 managed-node12 systemd[1]: Started Session 13 of user root. -- Subject: Unit session-13.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-13.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:33:49 managed-node12 sshd[13513]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:33:49 managed-node12 sshd[13513]: Received disconnect from 10.31.42.107 port 48238:11: disconnected by user Jul 22 08:33:49 managed-node12 sshd[13513]: Disconnected from 10.31.42.107 port 48238 Jul 22 08:33:49 managed-node12 sshd[13513]: pam_unix(sshd:session): session closed for user root Jul 22 08:33:49 managed-node12 systemd-logind[505]: Removed session 13. -- Subject: Session 13 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 13 has been terminated. 
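The ansible-yum entries above repeatedly install the same dependency set with state=present: python-enum34, python-blivet3, the libblockdev plugins, plus kpartx in a separate call. Written out as a task, such an invocation corresponds roughly to the following sketch (the task name is illustrative):

    - name: Make sure blivet and its libblockdev plugins are installed (illustrative sketch)
      yum:
        name:
          - python-enum34
          - python-blivet3
          - libblockdev-crypto
          - libblockdev-dm
          - libblockdev-lvm
          - libblockdev-mdraid
          - libblockdev-swap
          - libblockdev
        state: present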
Jul 22 08:34:01 managed-node12 sudo[13577]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-admadsufnloyvkolbufscosyuwulutwj ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187638.3-5690-23802950434255/AnsiballZ_setup.py Jul 22 08:34:01 managed-node12 sudo[13577]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:01 managed-node12 ansible-setup[13580]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:34:01 managed-node12 sudo[13577]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:06 managed-node12 sudo[13660]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vltpwkzhlwocsfsqgmppjzueqffzebuz ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187644.99-6394-194396731724035/AnsiballZ_stat.py Jul 22 08:34:06 managed-node12 sudo[13660]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:06 managed-node12 ansible-stat[13663]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/run/ostree-booted get_md5=False get_mime=True get_attributes=True Jul 22 08:34:06 managed-node12 sudo[13660]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:11 managed-node12 sudo[13712]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvzxtkknkjoiovelajstgomvpdttfgts ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187650.0-6860-20895251279755/AnsiballZ_yum.py Jul 22 08:34:11 managed-node12 sudo[13712]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:12 managed-node12 ansible-yum[13715]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['python-enum34', 'python-blivet3', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'libblockdev'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:34:15 managed-node12 sudo[13712]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:19 managed-node12 sudo[13789]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cowagqkzeuqamulslltxygoyakwsozjp ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187657.54-7869-82758375365019/AnsiballZ_blivet.py Jul 22 08:34:19 managed-node12 sudo[13789]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:19 managed-node12 ansible-fedora.linux_system_roles.blivet[13792]: Invoked with packages_only=True uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 
'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:34:19 managed-node12 sudo[13789]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:22 managed-node12 sudo[13847]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqryrmnhuabajkugkacrdpijaabovzol ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187661.73-8254-230251087096550/AnsiballZ_yum.py Jul 22 08:34:22 managed-node12 sudo[13847]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:22 managed-node12 ansible-yum[13850]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['kpartx'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:34:22 managed-node12 sudo[13847]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:24 managed-node12 sudo[13903]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fiabydzchuobhdqsakgsnmuiptzfrpbo ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187662.99-8539-13456724513550/AnsiballZ_service_facts.py Jul 22 08:34:24 managed-node12 sudo[13903]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:24 managed-node12 ansible-service_facts[13906]: Invoked Jul 22 08:34:25 managed-node12 sudo[13903]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:27 managed-node12 sudo[14068]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-siaxxuoagbjqazxipikklqgzofjgxlzr ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187667.14-8892-62088847466024/AnsiballZ_blivet.py Jul 22 08:34:27 managed-node12 sudo[14068]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:27 managed-node12 ansible-fedora.linux_system_roles.blivet[14071]: Invoked with packages_only=False uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={'ext4': '-F', 'ext3': '-F', 'ext2': '-F'} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 
'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:34:27 managed-node12 sudo[14068]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:28 managed-node12 sudo[14126]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ddtfgiiritlognfratpdrafioflkiqfx ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187668.4-9079-96356703391353/AnsiballZ_stat.py Jul 22 08:34:28 managed-node12 sudo[14126]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:28 managed-node12 ansible-stat[14129]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/fstab get_md5=False get_mime=True get_attributes=True Jul 22 08:34:28 managed-node12 sudo[14126]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:31 managed-node12 sudo[14180]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cbmzecehcbsaasgsymjcprechgxjoquc ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187671.34-9491-218830372423403/AnsiballZ_stat.py Jul 22 08:34:31 managed-node12 sudo[14180]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:32 managed-node12 ansible-stat[14183]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/crypttab get_md5=False get_mime=True get_attributes=True Jul 22 08:34:32 managed-node12 sudo[14180]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:33 managed-node12 sudo[14234]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vxcpunidoddliuazofyluvhxjnxnblwr ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187672.98-9709-191487334396121/AnsiballZ_setup.py Jul 22 08:34:33 managed-node12 sudo[14234]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:33 managed-node12 ansible-setup[14237]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:34:33 managed-node12 sudo[14234]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:37 managed-node12 sudo[14317]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zggnsrlurwdqbdgddofbzsrrqdzqmjjt ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187676.66-9865-28476874798133/AnsiballZ_yum.py Jul 22 08:34:37 managed-node12 sudo[14317]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:37 managed-node12 ansible-yum[14320]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['util-linux'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] 
security=False validate_certs=True Jul 22 08:34:37 managed-node12 sudo[14317]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:39 managed-node12 sudo[14373]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pawkjgpkxhacmisctajgkegegzqhxcrf ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187677.88-10102-25710953709718/AnsiballZ_find_unused_disk.py Jul 22 08:34:39 managed-node12 sudo[14373]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:39 managed-node12 ansible-fedora.linux_system_roles.find_unused_disk[14376]: Invoked with min_size=10g max_return=1 max_size=0 with_interface=None match_sector_size=False Jul 22 08:34:39 managed-node12 sudo[14373]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:41 managed-node12 sudo[14427]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uslqbzoixrggwfqpaxvrtsxodruzxaai ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187679.91-10283-3649001784675/AnsiballZ_command.py Jul 22 08:34:41 managed-node12 sudo[14427]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:41 managed-node12 ansible-command[14430]: Invoked with creates=None executable=None _uses_shell=True strip_empty_ends=True _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None Jul 22 08:34:41 managed-node12 sudo[14427]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:44 managed-node12 sshd[14441]: Accepted publickey for root from 10.31.42.107 port 48374 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:34:44 managed-node12 systemd-logind[505]: New session 14 of user root. -- Subject: A new session 14 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 14 has been created for the user root. -- -- The leading process of the session is 14441. Jul 22 08:34:44 managed-node12 systemd[1]: Started Session 14 of user root. -- Subject: Unit session-14.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-14.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:34:44 managed-node12 sshd[14441]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:34:44 managed-node12 sshd[14441]: Received disconnect from 10.31.42.107 port 48374:11: disconnected by user Jul 22 08:34:44 managed-node12 sshd[14441]: Disconnected from 10.31.42.107 port 48374 Jul 22 08:34:44 managed-node12 sshd[14441]: pam_unix(sshd:session): session closed for user root Jul 22 08:34:44 managed-node12 systemd-logind[505]: Removed session 14. -- Subject: Session 14 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 14 has been terminated. Jul 22 08:34:45 managed-node12 sshd[14451]: Accepted publickey for root from 10.31.42.107 port 48378 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:34:45 managed-node12 systemd[1]: Started Session 15 of user root. 
-- Subject: Unit session-15.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-15.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:34:45 managed-node12 systemd-logind[505]: New session 15 of user root. -- Subject: A new session 15 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 15 has been created for the user root. -- -- The leading process of the session is 14451. Jul 22 08:34:45 managed-node12 sshd[14451]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:34:45 managed-node12 sshd[14451]: Received disconnect from 10.31.42.107 port 48378:11: disconnected by user Jul 22 08:34:45 managed-node12 sshd[14451]: Disconnected from 10.31.42.107 port 48378 Jul 22 08:34:45 managed-node12 sshd[14451]: pam_unix(sshd:session): session closed for user root Jul 22 08:34:45 managed-node12 systemd-logind[505]: Removed session 15. -- Subject: Session 15 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 15 has been terminated. Jul 22 08:34:53 managed-node12 ansible-setup[14515]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:34:55 managed-node12 sudo[14595]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zglrkdtcwdcxqvxqiplxzhivtlwkafid ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187694.19-12135-141022628089182/AnsiballZ_setup.py Jul 22 08:34:55 managed-node12 sudo[14595]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:55 managed-node12 ansible-setup[14598]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:34:55 managed-node12 sudo[14595]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:59 managed-node12 sudo[14678]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-prcdcvaqccjaozjzlaeycgpnarlezvgn ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187697.79-12554-111486064534879/AnsiballZ_stat.py Jul 22 08:34:59 managed-node12 sudo[14678]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:59 managed-node12 ansible-stat[14681]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/run/ostree-booted get_md5=False get_mime=True get_attributes=True Jul 22 08:34:59 managed-node12 sudo[14678]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:03 managed-node12 sudo[14730]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ahfisgsidgckowgforqfcysaoshkmsst ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187701.93-12853-18391217287015/AnsiballZ_yum.py Jul 22 08:35:03 managed-node12 sudo[14730]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:03 managed-node12 ansible-yum[14733]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] 
installroot=/ install_weak_deps=True name=['python-enum34', 'python-blivet3', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'libblockdev'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:35:07 managed-node12 sudo[14730]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:10 managed-node12 sudo[14807]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-apmqhubnevywrljoldujxzuozdwiulyz ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187708.37-13599-46347760283703/AnsiballZ_blivet.py Jul 22 08:35:10 managed-node12 sudo[14807]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:10 managed-node12 ansible-fedora.linux_system_roles.blivet[14810]: Invoked with packages_only=True uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:35:10 managed-node12 sudo[14807]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:13 managed-node12 sudo[14865]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fusqbzvwjqgqgtwepkcpppfupkllpvqo ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187713.27-13812-65163490147436/AnsiballZ_yum.py Jul 22 08:35:13 managed-node12 sudo[14865]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:14 managed-node12 ansible-yum[14868]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['kpartx'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:35:14 managed-node12 sudo[14865]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:15 managed-node12 sudo[14921]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ztgmwqhixjxpclaskozxzhcrrsqdwelo ; /usr/bin/python 
/root/.ansible/tmp/ansible-tmp-1753187714.59-13957-51547518324838/AnsiballZ_service_facts.py Jul 22 08:35:15 managed-node12 sudo[14921]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:16 managed-node12 ansible-service_facts[14924]: Invoked Jul 22 08:35:16 managed-node12 sudo[14921]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:19 managed-node12 sudo[15086]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bzzbdgfeqkknmamotxviuajlnezenopv ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187718.83-14215-225605693883080/AnsiballZ_blivet.py Jul 22 08:35:19 managed-node12 sudo[15086]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:20 managed-node12 ansible-fedora.linux_system_roles.blivet[15089]: Invoked with packages_only=False uses_kmod_kvdo=True disklabel_type=None safe_mode=False diskvolume_mkfs_option_map={'ext4': '-F', 'ext3': '-F', 'ext2': '-F'} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:35:20 managed-node12 sudo[15086]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:21 managed-node12 sudo[15144]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-egffplldonsdzfrlitdgxhjgovvtumov ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187720.87-14368-140778745288531/AnsiballZ_stat.py Jul 22 08:35:21 managed-node12 sudo[15144]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:21 managed-node12 ansible-stat[15147]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/fstab get_md5=False get_mime=True get_attributes=True Jul 22 08:35:21 managed-node12 sudo[15144]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:25 managed-node12 sudo[15198]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fkwszvyxchvrcylqgpwwzparmyfoygar ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187725.27-14652-90844270183471/AnsiballZ_stat.py Jul 22 08:35:25 managed-node12 sudo[15198]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:25 managed-node12 ansible-stat[15201]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/crypttab get_md5=False get_mime=True 
get_attributes=True Jul 22 08:35:26 managed-node12 sudo[15198]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:27 managed-node12 sudo[15252]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pwnnibgmgxeftpjjbzwdwwsrcdhylklv ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187726.99-14794-209918270581941/AnsiballZ_setup.py Jul 22 08:35:27 managed-node12 sudo[15252]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:27 managed-node12 ansible-setup[15255]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:35:27 managed-node12 sudo[15252]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:31 managed-node12 sudo[15335]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-avxxkuoxwzwjzboelpqjtlffjlbvpvos ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187730.6-15071-206007643526555/AnsiballZ_yum.py Jul 22 08:35:31 managed-node12 sudo[15335]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:31 managed-node12 ansible-yum[15338]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['util-linux'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:35:31 managed-node12 sudo[15335]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:33 managed-node12 sudo[15391]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pekwwkueirbcjbdcmbulbociywbsxrtm ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187732.3-15321-38700087750231/AnsiballZ_find_unused_disk.py Jul 22 08:35:33 managed-node12 sudo[15391]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:34 managed-node12 ansible-fedora.linux_system_roles.find_unused_disk[15394]: Invoked with min_size=5g max_return=1 with_interface=scsi max_size=0 match_sector_size=False Jul 22 08:35:34 managed-node12 sudo[15391]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:36 managed-node12 sudo[15445]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dbcizlyuqnozpmvkmsrqhgvimyenslba ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187734.7-15547-242223474404358/AnsiballZ_command.py Jul 22 08:35:36 managed-node12 sudo[15445]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:36 managed-node12 ansible-command[15448]: Invoked with creates=None executable=None _uses_shell=True strip_empty_ends=True _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None Jul 22 08:35:36 managed-node12 sudo[15445]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:40 managed-node12 sshd[15459]: Accepted publickey for root from 10.31.42.107 port 48484 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:35:40 managed-node12 systemd-logind[505]: New session 16 of user root. 
-- Subject: A new session 16 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 16 has been created for the user root. -- -- The leading process of the session is 15459. Jul 22 08:35:40 managed-node12 sshd[15459]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:35:40 managed-node12 systemd[1]: Started Session 16 of user root. -- Subject: Unit session-16.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-16.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:35:40 managed-node12 sshd[15459]: Received disconnect from 10.31.42.107 port 48484:11: disconnected by user Jul 22 08:35:40 managed-node12 sshd[15459]: Disconnected from 10.31.42.107 port 48484 Jul 22 08:35:40 managed-node12 sshd[15459]: pam_unix(sshd:session): session closed for user root Jul 22 08:35:40 managed-node12 systemd-logind[505]: Removed session 16. -- Subject: Session 16 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 16 has been terminated. Jul 22 08:35:41 managed-node12 sshd[15469]: Accepted publickey for root from 10.31.42.107 port 48486 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:35:41 managed-node12 systemd[1]: Started Session 17 of user root. -- Subject: Unit session-17.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-17.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:35:41 managed-node12 systemd-logind[505]: New session 17 of user root. -- Subject: A new session 17 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 17 has been created for the user root. -- -- The leading process of the session is 15469. Jul 22 08:35:41 managed-node12 sshd[15469]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:35:41 managed-node12 sshd[15469]: Received disconnect from 10.31.42.107 port 48486:11: disconnected by user Jul 22 08:35:41 managed-node12 sshd[15469]: Disconnected from 10.31.42.107 port 48486 Jul 22 08:35:41 managed-node12 sshd[15469]: pam_unix(sshd:session): session closed for user root Jul 22 08:35:41 managed-node12 systemd-logind[505]: Removed session 17. -- Subject: Session 17 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 17 has been terminated. 
Jul 22 08:35:51 managed-node12 sudo[15533]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zyuoszlawvgpafijszugnzbtlsjbfwin ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187747.21-16618-50355112455526/AnsiballZ_setup.py Jul 22 08:35:51 managed-node12 sudo[15533]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:51 managed-node12 ansible-setup[15536]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:35:51 managed-node12 sudo[15533]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:56 managed-node12 sudo[15616]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hvdydeimpetrcywonlwenqscubhdpqfm ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187755.07-17305-103953543094133/AnsiballZ_stat.py Jul 22 08:35:56 managed-node12 sudo[15616]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:56 managed-node12 ansible-stat[15619]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/run/ostree-booted get_md5=False get_mime=True get_attributes=True Jul 22 08:35:56 managed-node12 sudo[15616]: pam_unix(sudo:session): session closed for user root Jul 22 08:36:02 managed-node12 sudo[15668]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzalruahwfttnlybpocmehehjjuxziwc ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187760.62-17742-175652469266240/AnsiballZ_yum.py Jul 22 08:36:02 managed-node12 sudo[15668]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:36:03 managed-node12 ansible-yum[15671]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['python-enum34', 'python-blivet3', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'libblockdev'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:36:06 managed-node12 sudo[15668]: pam_unix(sudo:session): session closed for user root Jul 22 08:36:11 managed-node12 sudo[15745]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yabvhpwhoxoeumezxobmevxekdrbrprz ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187768.78-18559-94624983028805/AnsiballZ_blivet.py Jul 22 08:36:11 managed-node12 sudo[15745]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:36:11 managed-node12 ansible-fedora.linux_system_roles.blivet[15748]: Invoked with packages_only=True uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 
'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=True Jul 22 08:36:11 managed-node12 sudo[15745]: pam_unix(sudo:session): session closed for user root Jul 22 08:36:14 managed-node12 sudo[15803]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ilupippmuydhvrglgfhrqapqsuornvan ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187774.1-19099-109923517572915/AnsiballZ_yum.py Jul 22 08:36:14 managed-node12 sudo[15803]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:36:15 managed-node12 ansible-yum[15806]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['kpartx'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:36:15 managed-node12 sudo[15803]: pam_unix(sudo:session): session closed for user root Jul 22 08:36:17 managed-node12 sudo[15859]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gzpfosspnnnrzhhiralougagndmckunb ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187775.83-19395-43592406509346/AnsiballZ_service_facts.py Jul 22 08:36:17 managed-node12 sudo[15859]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:36:17 managed-node12 ansible-service_facts[15862]: Invoked Jul 22 08:36:18 managed-node12 sudo[15859]: pam_unix(sudo:session): session closed for user root Jul 22 08:36:20 managed-node12 sudo[16024]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hkwxkjfwkuxzlqtygloftzudlpryxaiy ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187780.19-19941-226963478806483/AnsiballZ_blivet.py Jul 22 08:36:20 managed-node12 sudo[16024]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:36:21 managed-node12 ansible-fedora.linux_system_roles.blivet[16027]: Invoked with packages_only=False uses_kmod_kvdo=True disklabel_type=None safe_mode=False diskvolume_mkfs_option_map={'ext4': '-F', 'ext3': '-F', 'ext2': '-F'} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': 
None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=True Jul 22 08:36:21 managed-node12 sudo[16024]: pam_unix(sudo:session): session closed for user root Jul 22 08:36:22 managed-node12 sudo[16082]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ozboklggctarwndezdcrdxdkqfgjphbr ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187782.07-20159-54574231285843/AnsiballZ_stat.py Jul 22 08:36:22 managed-node12 sudo[16082]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:36:22 managed-node12 ansible-stat[16085]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/fstab get_md5=False get_mime=True get_attributes=True Jul 22 08:36:22 managed-node12 sudo[16082]: pam_unix(sudo:session): session closed for user root Jul 22 08:36:27 managed-node12 sudo[16136]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fagvsfyknurrdcltjojzrpahaeojuhwv ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187786.55-20489-5747266627918/AnsiballZ_stat.py Jul 22 08:36:27 managed-node12 sudo[16136]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:36:27 managed-node12 ansible-stat[16139]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/crypttab get_md5=False get_mime=True get_attributes=True Jul 22 08:36:27 managed-node12 sudo[16136]: pam_unix(sudo:session): session closed for user root Jul 22 08:36:29 managed-node12 sudo[16190]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oselatzsywtblqjibtdkeldvzvldisqp ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187788.51-20622-110513803345698/AnsiballZ_setup.py Jul 22 08:36:29 managed-node12 sudo[16190]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:36:29 managed-node12 ansible-setup[16193]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:36:29 managed-node12 sudo[16190]: pam_unix(sudo:session): session closed for user root Jul 22 08:36:32 managed-node12 sudo[16273]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pdscvnuyzrdortegcljgriiuhlwzjwhy ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187791.66-20841-264823389448953/AnsiballZ_yum.py Jul 22 08:36:32 managed-node12 sudo[16273]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:36:32 managed-node12 ansible-yum[16276]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['util-linux'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] 
enablerepo=[] security=False validate_certs=True Jul 22 08:36:32 managed-node12 sudo[16273]: pam_unix(sudo:session): session closed for user root Jul 22 08:36:35 managed-node12 sudo[16329]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ffxwwsbsskoineujsdfewqrwmceqyftc ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187793.32-21105-194168092509193/AnsiballZ_find_unused_disk.py Jul 22 08:36:35 managed-node12 sudo[16329]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:36:35 managed-node12 ansible-fedora.linux_system_roles.find_unused_disk[16332]: Invoked with min_size=0 max_return=3 max_size=0 with_interface=None match_sector_size=False Jul 22 08:36:35 managed-node12 sudo[16329]: pam_unix(sudo:session): session closed for user root Jul 22 08:36:37 managed-node12 sudo[16383]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-svxakbdhsfqauymnwpxyzxgzobntawkh ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187796.03-21383-18163475001771/AnsiballZ_command.py Jul 22 08:36:37 managed-node12 sudo[16383]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:36:37 managed-node12 ansible-command[16386]: Invoked with creates=None executable=None _uses_shell=True strip_empty_ends=True _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None Jul 22 08:36:37 managed-node12 sudo[16383]: pam_unix(sudo:session): session closed for user root Jul 22 08:36:40 managed-node12 sshd[16397]: Accepted publickey for root from 10.31.42.107 port 48618 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:36:41 managed-node12 systemd-logind[505]: New session 18 of user root. -- Subject: A new session 18 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 18 has been created for the user root. -- -- The leading process of the session is 16397. Jul 22 08:36:41 managed-node12 systemd[1]: Started Session 18 of user root. -- Subject: Unit session-18.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-18.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:36:41 managed-node12 sshd[16397]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:36:41 managed-node12 sshd[16397]: Received disconnect from 10.31.42.107 port 48618:11: disconnected by user Jul 22 08:36:41 managed-node12 sshd[16397]: Disconnected from 10.31.42.107 port 48618 Jul 22 08:36:41 managed-node12 sshd[16397]: pam_unix(sshd:session): session closed for user root Jul 22 08:36:41 managed-node12 systemd-logind[505]: Removed session 18. -- Subject: Session 18 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 18 has been terminated. Jul 22 08:36:41 managed-node12 sshd[16407]: Accepted publickey for root from 10.31.42.107 port 48622 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:36:41 managed-node12 systemd[1]: Started Session 19 of user root. 
-- Subject: Unit session-19.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-19.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:36:41 managed-node12 systemd-logind[505]: New session 19 of user root. -- Subject: A new session 19 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 19 has been created for the user root. -- -- The leading process of the session is 16407. Jul 22 08:36:41 managed-node12 sshd[16407]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:36:41 managed-node12 sshd[16407]: Received disconnect from 10.31.42.107 port 48622:11: disconnected by user Jul 22 08:36:41 managed-node12 sshd[16407]: Disconnected from 10.31.42.107 port 48622 Jul 22 08:36:41 managed-node12 sshd[16407]: pam_unix(sshd:session): session closed for user root Jul 22 08:36:41 managed-node12 systemd-logind[505]: Removed session 19. -- Subject: Session 19 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 19 has been terminated. Jul 22 08:36:51 managed-node12 ansible-setup[16471]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:36:56 managed-node12 ansible-stat[16551]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/run/ostree-booted get_md5=False get_mime=True get_attributes=True Jul 22 08:37:00 managed-node12 ansible-yum[16600]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['python-enum34', 'python-blivet3', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'libblockdev'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:37:05 managed-node12 ansible-fedora.linux_system_roles.blivet[16674]: Invoked with packages_only=True uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 
'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:37:07 managed-node12 ansible-yum[16729]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['kpartx'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:37:09 managed-node12 ansible-service_facts[16782]: Invoked Jul 22 08:37:10 managed-node12 ansible-fedora.linux_system_roles.blivet[16944]: Invoked with packages_only=False uses_kmod_kvdo=True disklabel_type=None safe_mode=False diskvolume_mkfs_option_map={'ext4': '-F', 'ext3': '-F', 'ext2': '-F'} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:37:11 managed-node12 ansible-stat[16999]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/fstab get_md5=False get_mime=True get_attributes=True Jul 22 08:37:13 managed-node12 ansible-stat[17050]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/crypttab get_md5=False get_mime=True get_attributes=True Jul 22 08:37:14 managed-node12 ansible-setup[17101]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:37:16 managed-node12 ansible-yum[17181]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['util-linux'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:37:17 managed-node12 
ansible-fedora.linux_system_roles.find_unused_disk[17234]: Invoked with min_size=5g max_return=1 max_size=127g with_interface=None match_sector_size=False
Jul 22 08:37:18 managed-node12 ansible-command[17285]: Invoked with creates=None executable=None _uses_shell=True strip_empty_ends=True _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None

TASK [Set unused_disks if necessary] *******************************************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:29
Tuesday 22 July 2025 08:37:18 -0400 (0:00:01.258) 0:00:30.885 **********
skipping: [managed-node12] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [Exit playbook when there's not enough unused disks in the system] ********
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:34
Tuesday 22 July 2025 08:37:19 -0400 (0:00:00.139) 0:00:31.024 **********
fatal: [managed-node12]: FAILED! => {
    "changed": false
}

MSG:

Unable to find enough unused disks. Exiting playbook.

PLAY RECAP *********************************************************************
managed-node12             : ok=28   changed=0    unreachable=0    failed=1    skipped=15   rescued=0    ignored=0

SYSTEM ROLES ERRORS BEGIN v1
[
  {
    "ansible_version": "2.9.27",
    "end_time": "2025-07-22T12:37:19.061740Z",
    "host": "managed-node12",
    "message": "Unable to find enough unused disks. Exiting playbook.",
    "start_time": "2025-07-22T12:37:19.008134Z",
    "task_name": "Exit playbook when there's not enough unused disks in the system",
    "task_path": "/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:34"
  }
]
SYSTEM ROLES ERRORS END v1

TASKS RECAP ********************************************************************
Tuesday 22 July 2025 08:37:19 -0400 (0:00:00.081) 0:00:31.106 **********
===============================================================================
fedora.linux_system_roles.storage : Make sure blivet is available ------- 5.54s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
Gathering Facts --------------------------------------------------------- 4.01s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:2
fedora.linux_system_roles.storage : Check if system is ostree ----------- 2.18s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
fedora.linux_system_roles.storage : Get required packages --------------- 1.97s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19
fedora.linux_system_roles.storage : Get service facts ------------------- 1.65s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
Ensure test packages ---------------------------------------------------- 1.31s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:2
Debug why there are no unused disks ------------------------------------- 1.26s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:20
Find unused disks in the system ----------------------------------------- 1.23s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:11
fedora.linux_system_roles.storage : Make sure required packages are installed --- 1.14s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38
fedora.linux_system_roles.storage : Update facts ------------------------ 1.09s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224
fedora.linux_system_roles.storage : Set platform/version specific variables --- 0.86s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file --- 0.78s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197
fedora.linux_system_roles.storage : Include the appropriate provider tasks --- 0.75s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state --- 0.72s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70
fedora.linux_system_roles.storage : Set platform/version specific variables --- 0.70s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2
fedora.linux_system_roles.storage : Check if /etc/fstab is present ------ 0.61s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92
fedora.linux_system_roles.storage : Ensure ansible_facts used by role --- 0.59s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
fedora.linux_system_roles.storage : Enable copr repositories if needed --- 0.50s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32
fedora.linux_system_roles.storage : Show storage_volumes ---------------- 0.34s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14
fedora.linux_system_roles.storage : Set flag to indicate system is ostree --- 0.27s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30

-- Logs begin at Tue 2025-07-22 08:23:47 EDT, end at Tue 2025-07-22 08:37:20 EDT.
-- Jul 22 08:36:51 managed-node12 ansible-setup[16471]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:36:56 managed-node12 ansible-stat[16551]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/run/ostree-booted get_md5=False get_mime=True get_attributes=True Jul 22 08:37:00 managed-node12 ansible-yum[16600]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['python-enum34', 'python-blivet3', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'libblockdev'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:37:05 managed-node12 ansible-fedora.linux_system_roles.blivet[16674]: Invoked with packages_only=True uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:37:07 managed-node12 ansible-yum[16729]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['kpartx'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:37:09 managed-node12 ansible-service_facts[16782]: Invoked Jul 22 08:37:10 managed-node12 ansible-fedora.linux_system_roles.blivet[16944]: Invoked with packages_only=False uses_kmod_kvdo=True disklabel_type=None safe_mode=False diskvolume_mkfs_option_map={'ext4': '-F', 'ext3': '-F', 'ext2': '-F'} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': 
None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:37:11 managed-node12 ansible-stat[16999]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/fstab get_md5=False get_mime=True get_attributes=True Jul 22 08:37:13 managed-node12 ansible-stat[17050]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/crypttab get_md5=False get_mime=True get_attributes=True Jul 22 08:37:14 managed-node12 ansible-setup[17101]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:37:16 managed-node12 ansible-yum[17181]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['util-linux'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:37:17 managed-node12 ansible-fedora.linux_system_roles.find_unused_disk[17234]: Invoked with min_size=5g max_return=1 max_size=127g with_interface=None match_sector_size=False Jul 22 08:37:18 managed-node12 ansible-command[17285]: Invoked with creates=None executable=None _uses_shell=True strip_empty_ends=True _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None Jul 22 08:37:19 managed-node12 sshd[17296]: Accepted publickey for root from 10.31.42.107 port 48708 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:37:19 managed-node12 systemd-logind[505]: New session 20 of user root. -- Subject: A new session 20 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 20 has been created for the user root. -- -- The leading process of the session is 17296. Jul 22 08:37:19 managed-node12 systemd[1]: Started Session 20 of user root. -- Subject: Unit session-20.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-20.scope has finished starting up. -- -- The start-up result is done. 
Jul 22 08:37:19 managed-node12 sshd[17296]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:37:19 managed-node12 sshd[17296]: Received disconnect from 10.31.42.107 port 48708:11: disconnected by user Jul 22 08:37:19 managed-node12 sshd[17296]: Disconnected from 10.31.42.107 port 48708 Jul 22 08:37:19 managed-node12 sshd[17296]: pam_unix(sshd:session): session closed for user root Jul 22 08:37:19 managed-node12 systemd-logind[505]: Removed session 20. -- Subject: Session 20 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 20 has been terminated. Jul 22 08:37:19 managed-node12 sshd[17306]: Accepted publickey for root from 10.31.42.107 port 48712 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:37:20 managed-node12 systemd[1]: Started Session 21 of user root. -- Subject: Unit session-21.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-21.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:37:20 managed-node12 systemd-logind[505]: New session 21 of user root. -- Subject: A new session 21 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 21 has been created for the user root. -- -- The leading process of the session is 17306. Jul 22 08:37:20 managed-node12 sshd[17306]: pam_unix(sshd:session): session opened for user root by (uid=0)
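
The fatal result recorded above is raised by the task "Exit playbook when there's not enough unused disks in the system" at tests/storage/get_unused_disk.yml:34: find_unused_disk was invoked with min_size=5g (and max_size=127g on the final attempt) and returned no candidate device, so the test aborts rather than operate on disks that are in use. A minimal sketch of how such a guard task can be written, assuming illustrative variable names (unused_disks, disks_needed) that are not taken from the actual test file:

    - name: Exit playbook when there's not enough unused disks in the system
      fail:
        msg: "Unable to find enough unused disks. Exiting playbook."
      when: unused_disks | default([]) | length < disks_needed | default(1)

To let the test proceed, the managed node needs an additional blank disk in the 5g-127g range with no filesystem signature; the same command the debug task runs (lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC) can be used to confirm the device appears with an empty FSTYPE.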