ansible-playbook 2.9.27
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.9.19 (main, May 16 2024, 11:40:09) [GCC 8.5.0 20210514 (Red Hat 8.5.0-22)]
No config file found; using defaults
[WARNING]: running playbook inside collection fedora.linux_system_roles
Skipping callback 'actionable', as we already have a stdout callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'null', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
Skipping callback 'selective', as we already have a stdout callback.
Skipping callback 'skippy', as we already have a stdout callback.
Skipping callback 'stderr', as we already have a stdout callback.
Skipping callback 'unixy', as we already have a stdout callback.
Skipping callback 'yaml', as we already have a stdout callback.

PLAYBOOK: tests_swap.yml *******************************************************
1 plays in /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml

PLAY [Test management of swap] *************************************************

TASK [Gathering Facts] *********************************************************
task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:2
Tuesday 22 July 2025 08:35:27 -0400 (0:00:00.039) 0:00:00.039 **********
ok: [managed-node11]
META: ran handlers

TASK [Include role to ensure packages are installed] ***************************
task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:10
Tuesday 22 July 2025 08:35:28 -0400 (0:00:01.299) 0:00:01.339 **********

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2
Tuesday 22 July 2025 08:35:28 -0400 (0:00:00.158) 0:00:01.498 **********
included: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed-node11

TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] ***
task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
Tuesday 22 July 2025 08:35:29 -0400 (0:00:00.228) 0:00:01.726 **********
skipping: [managed-node11] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
Tuesday 22 July 2025 08:35:29 -0400 (0:00:00.208) 0:00:01.934 **********
skipping: [managed-node11] => (item=RedHat.yml) => { "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml",
"skip_reason": "Conditional result was False" } skipping: [managed-node11] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "item": "CentOS.yml", "skip_reason": "Conditional result was False" } ok: [managed-node11] => (item=CentOS_8.yml) => { "ansible_facts": { "blivet_package_list": [ "python3-blivet", "libblockdev-crypto", "libblockdev-dm", "libblockdev-lvm", "libblockdev-mdraid", "libblockdev-swap", "vdo", "kmod-kvdo", "xfsprogs", "stratisd", "stratis-cli", "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}" ] }, "ansible_included_var_files": [ "/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_8.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_8.yml" } ok: [managed-node11] => (item=CentOS_8.yml) => { "ansible_facts": { "blivet_package_list": [ "python3-blivet", "libblockdev-crypto", "libblockdev-dm", "libblockdev-lvm", "libblockdev-mdraid", "libblockdev-swap", "vdo", "kmod-kvdo", "xfsprogs", "stratisd", "stratis-cli", "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}" ] }, "ansible_included_var_files": [ "/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_8.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_8.yml" } TASK [fedora.linux_system_roles.storage : Check if system is ostree] *********** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25 Tuesday 22 July 2025 08:35:29 -0400 (0:00:00.294) 0:00:02.229 ********** ok: [managed-node11] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30 Tuesday 22 July 2025 08:35:30 -0400 (0:00:01.157) 0:00:03.386 ********** ok: [managed-node11] => { "ansible_facts": { "__storage_is_ostree": false }, "changed": false } TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:5 Tuesday 22 July 2025 08:35:30 -0400 (0:00:00.295) 0:00:03.682 ********** ok: [managed-node11] => { "ansible_facts": { "_storage_pools_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:9 Tuesday 22 July 2025 08:35:31 -0400 (0:00:00.116) 0:00:03.798 ********** ok: [managed-node11] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13 Tuesday 22 July 2025 08:35:31 -0400 (0:00:00.088) 0:00:03.887 ********** included: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed-node11 TASK [fedora.linux_system_roles.storage : Make sure blivet is available] ******* task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2 Tuesday 22 July 2025 08:35:31 -0400 (0:00:00.565) 0:00:04.452 ********** ok: [managed-node11] => { "changed": false, "rc": 0, "results": [] } MSG: 
Nothing to do TASK [fedora.linux_system_roles.storage : Show storage_pools] ****************** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:9 Tuesday 22 July 2025 08:35:35 -0400 (0:00:03.764) 0:00:08.216 ********** ok: [managed-node11] => { "storage_pools | d([])": [] } TASK [fedora.linux_system_roles.storage : Show storage_volumes] **************** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14 Tuesday 22 July 2025 08:35:35 -0400 (0:00:00.124) 0:00:08.341 ********** ok: [managed-node11] => { "storage_volumes | d([])": [] } TASK [fedora.linux_system_roles.storage : Get required packages] *************** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19 Tuesday 22 July 2025 08:35:35 -0400 (0:00:00.123) 0:00:08.465 ********** ok: [managed-node11] => { "actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } TASK [fedora.linux_system_roles.storage : Enable copr repositories if needed] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32 Tuesday 22 July 2025 08:35:37 -0400 (0:00:01.404) 0:00:09.870 ********** included: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml for managed-node11 TASK [fedora.linux_system_roles.storage : Check if the COPR support packages should be installed] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:2 Tuesday 22 July 2025 08:35:37 -0400 (0:00:00.260) 0:00:10.130 ********** TASK [fedora.linux_system_roles.storage : Make sure COPR support packages are present] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:13 Tuesday 22 July 2025 08:35:37 -0400 (0:00:00.117) 0:00:10.248 ********** skipping: [managed-node11] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Enable COPRs] ************************ task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:19 Tuesday 22 July 2025 08:35:37 -0400 (0:00:00.057) 0:00:10.305 ********** TASK [fedora.linux_system_roles.storage : Make sure required packages are installed] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38 Tuesday 22 July 2025 08:35:37 -0400 (0:00:00.090) 0:00:10.396 ********** ok: [managed-node11] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do TASK [fedora.linux_system_roles.storage : Get service facts] ******************* task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52 Tuesday 22 July 2025 08:35:40 -0400 (0:00:03.255) 0:00:13.652 ********** ok: [managed-node11] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" 
}, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "blivet.service": { "name": "blivet.service", "source": "systemd", "state": "inactive", "status": "static" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "chrony-dnssrv@.service": { "name": "chrony-dnssrv@.service", "source": "systemd", "state": "unknown", "status": "static" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "cpupower.service": { "name": "cpupower.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "static" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "static" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "static" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "dbus-org.freedesktop.portable1.service": { "name": "dbus-org.freedesktop.portable1.service", "source": "systemd", "state": "inactive", "status": "static" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "running", "status": "static" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": 
"dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "inactive", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "unknown" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "halt-local.service": { "name": "halt-local.service", "source": "systemd", "state": "inactive", "status": "static" }, "import-state.service": { "name": "import-state.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "iprdump.service": { "name": "iprdump.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "iprinit.service": { "name": "iprinit.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "iprupdate.service": { "name": "iprupdate.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": 
"running", "status": "enabled" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "loadmodules.service": { "name": "loadmodules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "lvm2-pvscan@.service": { "name": "lvm2-pvscan@.service", "source": "systemd", "state": "unknown", "status": "static" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "mdadm-grow-continue@.service": { "name": "mdadm-grow-continue@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdadm-last-resort@.service": { "name": "mdadm-last-resort@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdcheck_continue.service": { "name": "mdcheck_continue.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdcheck_start.service": { "name": "mdcheck_start.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmon@.service": { "name": "mdmon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdmonitor-oneshot.service": { "name": "mdmonitor-oneshot.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmonitor.service": { "name": "mdmonitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "messagebus.service": { "name": "messagebus.service", "source": "systemd", "state": "active", "status": "static" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "multipathd.service": { "name": "multipathd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "ndctl-monitor.service": { "name": "ndctl-monitor.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-convert.service": { "name": "nfs-convert.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "inactive", 
"status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "oddjobd.service": { "name": "oddjobd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "plymouth-halt.service": { "name": "plymouth-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-kexec.service": { "name": "plymouth-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-poweroff.service": { "name": "plymouth-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "static" }, "plymouth-quit.service": { "name": "plymouth-quit.service", "source": "systemd", "state": "stopped", "status": "static" }, "plymouth-read-write.service": { "name": "plymouth-read-write.service", "source": "systemd", "state": "stopped", "status": "static" }, "plymouth-reboot.service": { "name": "plymouth-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "static" }, "plymouth-switch-root-initramfs.service": { "name": "plymouth-switch-root-initramfs.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-switch-root.service": { "name": "plymouth-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "running", "status": "static" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon.service": { "name": "quotaon.service", "source": "systemd", "state": "inactive", "status": "static" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rdisc.service": { "name": "rdisc.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "running", "status": "enabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": 
"systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "unknown" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "unknown" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "unknown" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "stratis-fstab-setup@.service": { "name": "stratis-fstab-setup@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratisd-min-postinitrd.service": { "name": "stratisd-min-postinitrd.service", "source": "systemd", "state": "inactive", "status": "static" }, "stratisd.service": { "name": "stratisd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "active", "status": "enabled" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-plymouth.service": { "name": "systemd-ask-password-plymouth.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": 
"unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-resume@.service": { "name": "systemd-hibernate-resume@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-portabled.service": { "name": "systemd-portabled.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck.service": { "name": "systemd-quotacheck.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-resolved.service": { "name": "systemd-resolved.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": 
"systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "masked" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "tcsd.service": { "name": "tcsd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "teamd@.service": { "name": "teamd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "timedatex.service": { "name": "timedatex.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "tuned.service": { "name": "tuned.service", "source": "systemd", "state": "running", "status": "enabled" }, "unbound-anchor.service": { "name": "unbound-anchor.service", "source": "systemd", "state": "stopped", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "unknown" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "unknown" }, "vdo-start-by-dev@.service": { "name": "vdo-start-by-dev@.service", "source": "systemd", "state": "unknown", "status": "static" }, "vdo.service": { "name": "vdo.service", "source": "systemd", "state": "stopped", "status": "enabled" } } }, "changed": false } TASK [fedora.linux_system_roles.storage : Set storage_cryptsetup_services] ***** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58 Tuesday 22 July 2025 08:35:42 
-0400 (0:00:01.971) 0:00:15.623 ********** ok: [managed-node11] => { "ansible_facts": { "storage_cryptsetup_services": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Mask the systemd cryptsetup services] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:64 Tuesday 22 July 2025 08:35:43 -0400 (0:00:00.070) 0:00:15.694 ********** TASK [fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70 Tuesday 22 July 2025 08:35:43 -0400 (0:00:00.068) 0:00:15.762 ********** ok: [managed-node11] => { "actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } TASK [fedora.linux_system_roles.storage : Workaround for udev issue on some platforms] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:85 Tuesday 22 July 2025 08:35:43 -0400 (0:00:00.841) 0:00:16.604 ********** skipping: [managed-node11] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Check if /etc/fstab is present] ****** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92 Tuesday 22 July 2025 08:35:43 -0400 (0:00:00.044) 0:00:16.649 ********** ok: [managed-node11] => { "changed": false, "stat": { "atime": 1753187083.8119779, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "4644c54c9838cbc512975c626cc4f46832c5b181", "ctime": 1716969427.210901, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 134, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1716969427.210901, "nlink": 1, "path": "/etc/fstab", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 1321, "uid": 0, "version": "1528175122", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Add fingerprint to /etc/fstab if present] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:97 Tuesday 22 July 2025 08:35:44 -0400 (0:00:00.731) 0:00:17.380 ********** skipping: [managed-node11] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Unmask the systemd cryptsetup services] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:115 Tuesday 22 July 2025 08:35:44 -0400 (0:00:00.106) 0:00:17.487 ********** TASK [fedora.linux_system_roles.storage : Show blivet_output] ****************** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:121 Tuesday 22 July 2025 08:35:44 -0400 (0:00:00.065) 0:00:17.553 ********** ok: [managed-node11] => { "blivet_output": { "actions": [], "changed": false, "crypts": [], "failed": false, "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } } TASK [fedora.linux_system_roles.storage : Set the list of pools for test 
verification] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:130 Tuesday 22 July 2025 08:35:44 -0400 (0:00:00.071) 0:00:17.624 ********** ok: [managed-node11] => { "ansible_facts": { "_storage_pools_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Set the list of volumes for test verification] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:134 Tuesday 22 July 2025 08:35:44 -0400 (0:00:00.064) 0:00:17.689 ********** ok: [managed-node11] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Remove obsolete mounts] ************** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:150 Tuesday 22 July 2025 08:35:45 -0400 (0:00:00.129) 0:00:17.819 ********** TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:161 Tuesday 22 July 2025 08:35:45 -0400 (0:00:00.058) 0:00:17.878 ********** skipping: [managed-node11] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Set up new/current mounts] *********** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:166 Tuesday 22 July 2025 08:35:45 -0400 (0:00:00.052) 0:00:17.930 ********** TASK [fedora.linux_system_roles.storage : Manage mount ownership/permissions] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:177 Tuesday 22 July 2025 08:35:45 -0400 (0:00:00.048) 0:00:17.979 ********** TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:189 Tuesday 22 July 2025 08:35:45 -0400 (0:00:00.042) 0:00:18.021 ********** skipping: [managed-node11] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file] *** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197 Tuesday 22 July 2025 08:35:45 -0400 (0:00:00.045) 0:00:18.067 ********** ok: [managed-node11] => { "changed": false, "stat": { "atime": 1753187388.4106774, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1716968941.893, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 135, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0600", "mtime": 1716968586.525, "nlink": 1, "path": "/etc/crypttab", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 0, "uid": 0, "version": "1157759751", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Manage /etc/crypttab to account for changes we just made] *** task path: 
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:202 Tuesday 22 July 2025 08:35:45 -0400 (0:00:00.429) 0:00:18.497 ********** TASK [fedora.linux_system_roles.storage : Update facts] ************************ task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224 Tuesday 22 July 2025 08:35:45 -0400 (0:00:00.022) 0:00:18.519 ********** ok: [managed-node11] TASK [Mark tasks to be skipped] ************************************************ task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:14 Tuesday 22 July 2025 08:35:46 -0400 (0:00:00.729) 0:00:19.248 ********** ok: [managed-node11] => { "ansible_facts": { "storage_skip_checks": [ "blivet_available", "packages_installed", "service_facts" ] }, "changed": false } TASK [Get unused disks for swap] *********************************************** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:22 Tuesday 22 July 2025 08:35:46 -0400 (0:00:00.043) 0:00:19.291 ********** included: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml for managed-node11 TASK [Ensure test packages] **************************************************** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:2 Tuesday 22 July 2025 08:35:46 -0400 (0:00:00.044) 0:00:19.336 ********** ok: [managed-node11] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do TASK [Find unused disks in the system] ***************************************** task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:11 Tuesday 22 July 2025 08:35:49 -0400 (0:00:02.907) 0:00:22.243 ********** ok: [managed-node11] => { "changed": false, "disks": "Unable to find unused disk", "info": [ "Line: NAME=\"/dev/xvda\" TYPE=\"disk\" SIZE=\"268435456000\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"268434390528\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [part] is not disk: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"268434390528\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "filename [xvda1] is a partition", "Disk [/dev/xvda] attrs [{'type': 'disk', 'size': '268435456000', 'fstype': '', 'ssize': '512'}] has partitions" ] } TASK [Debug why there are no unused disks] ************************************* task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:20 Tuesday 22 July 2025 08:35:50 -0400 (0:00:00.573) 0:00:22.817 ********** ok: [managed-node11] => { "changed": false, "cmd": "set -x\nexec 1>&2\nlsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC\njournalctl -ex\n", "delta": "0:00:00.022060", "end": "2025-07-22 08:35:50.650292", "rc": 0, "start": "2025-07-22 08:35:50.628232" } STDERR: + exec + lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC NAME="/dev/xvda" TYPE="disk" SIZE="268435456000" FSTYPE="" LOG-SEC="512" NAME="/dev/xvda1" TYPE="part" SIZE="268434390528" FSTYPE="xfs" LOG-SEC="512" + journalctl -ex -- Logs begin at Tue 2025-07-22 08:24:28 EDT, end at Tue 2025-07-22 08:35:50 EDT. 
-- Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 33288 Jan 15 2024 usr/lib/udev/cdrom_id Jul 22 08:30:18 managed-node11 dracut[8687]: drwxr-xr-x 2 root root 0 Jan 15 2024 usr/lib/udev/rules.d Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 1834 Jan 15 2024 usr/lib/udev/rules.d/40-redhat.rules Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 3750 Jan 15 2024 usr/lib/udev/rules.d/50-udev-default.rules Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 626 Jan 15 2024 usr/lib/udev/rules.d/60-block.rules Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 6528 Jun 22 2018 usr/lib/udev/rules.d/60-persistent-storage.rules Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 2671 Jun 22 2018 usr/lib/udev/rules.d/70-uaccess.rules Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 995 May 11 2019 usr/lib/udev/rules.d/71-biosdevname.rules Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 2758 Jan 15 2024 usr/lib/udev/rules.d/71-seat.rules Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 636 Jan 15 2024 usr/lib/udev/rules.d/73-seat-late.rules Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 452 Jun 22 2018 usr/lib/udev/rules.d/75-net-description.rules Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 615 Jun 22 2018 usr/lib/udev/rules.d/80-drivers.rules Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 292 Jun 22 2018 usr/lib/udev/rules.d/80-net-setup-link.rules Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 2013 Oct 11 2022 usr/lib/udev/rules.d/85-nm-unmanaged.rules Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 510 Jan 15 2024 usr/lib/udev/rules.d/90-vconsole.rules Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 4367 Jan 15 2024 usr/lib/udev/rules.d/99-systemd.rules Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 54976 Jan 15 2024 usr/lib/udev/scsi_id Jul 22 08:30:18 managed-node11 dracut[8687]: drwxr-xr-x 5 root root 0 Jan 15 2024 usr/lib64 Jul 22 08:30:18 managed-node11 dracut[8687]: drwxr-xr-x 3 root root 0 Jan 15 2024 usr/lib64/NetworkManager Jul 22 08:30:18 managed-node11 dracut[8687]: drwxr-xr-x 2 root root 0 Jan 15 2024 usr/lib64/NetworkManager/1.40.16-15.el8 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 49264 Jan 15 2024 usr/lib64/NetworkManager/1.40.16-15.el8/libnm-device-plugin-team.so Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 306184 Jan 15 2024 usr/lib64/NetworkManager/1.40.16-15.el8/libnm-settings-plugin-ifcfg-rh.so Jul 22 08:30:18 managed-node11 dracut[8687]: drwxr-xr-x 2 root root 0 Jan 15 2024 usr/lib64/bind9-export Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 25 Jan 15 2024 usr/lib64/bind9-export/libdns-export.so.1115 -> libdns-export.so.1115.0.3 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 2369968 Jan 15 2024 usr/lib64/bind9-export/libdns-export.so.1115.0.3 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 24 Jan 15 2024 usr/lib64/bind9-export/libirs-export.so.161 -> libirs-export.so.161.0.1 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 49072 Jan 15 2024 usr/lib64/bind9-export/libirs-export.so.161.0.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 25 Jan 15 2024 usr/lib64/bind9-export/libisc-export.so.1107 -> libisc-export.so.1107.0.7 
Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 471832 Jan 15 2024 usr/lib64/bind9-export/libisc-export.so.1107.0.7 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 27 Jan 15 2024 usr/lib64/bind9-export/libisccfg-export.so.163 -> libisccfg-export.so.163.0.8 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 186520 Jan 15 2024 usr/lib64/bind9-export/libisccfg-export.so.163.0.8 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 226472 Jan 15 2024 usr/lib64/ld-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 10 Jan 15 2024 usr/lib64/ld-linux-x86-64.so.2 -> ld-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 18 Jan 15 2024 usr/lib64/libacl.so.1 -> libacl.so.1.1.2253 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 37232 Oct 6 2023 usr/lib64/libacl.so.1.1.2253 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 19 Jan 15 2024 usr/lib64/libattr.so.1 -> libattr.so.1.1.2448 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 24616 May 10 2019 usr/lib64/libattr.so.1.1.2448 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libaudit.so.1 -> libaudit.so.1.0.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 135344 Nov 6 2023 usr/lib64/libaudit.so.1.0.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libblkid.so.1 -> libblkid.so.1.1.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 343144 Jan 15 2024 usr/lib64/libblkid.so.1.1.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 15 Jan 15 2024 usr/lib64/libbpf.so.0 -> libbpf.so.0.5.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 307768 Jul 1 2022 usr/lib64/libbpf.so.0.5.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 24 Jan 15 2024 usr/lib64/libbrotlicommon.so.1 -> libbrotlicommon.so.1.0.6 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 134944 Jan 12 2021 usr/lib64/libbrotlicommon.so.1.0.6 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 21 Jan 15 2024 usr/lib64/libbrotlidec.so.1 -> libbrotlidec.so.1.0.6 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 53192 Jan 12 2021 usr/lib64/libbrotlidec.so.1.0.6 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 15 Jan 15 2024 usr/lib64/libbz2.so.1 -> libbz2.so.1.0.6 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 72816 May 10 2019 usr/lib64/libbz2.so.1.0.6 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 2164600 Jan 15 2024 usr/lib64/libc-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 253 Jan 15 2024 usr/lib64/libc.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 12 Jan 15 2024 usr/lib64/libc.so.6 -> libc-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 18 Jan 15 2024 usr/lib64/libcap-ng.so.0 -> libcap-ng.so.0.0.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 24368 Jun 7 2021 usr/lib64/libcap-ng.so.0.0.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 14 Jan 15 2024 usr/lib64/libcap.so.2 -> libcap.so.2.48 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 33208 Jul 12 2023 usr/lib64/libcap.so.2.48 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libcom_err.so -> 
libcom_err.so.2.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libcom_err.so.2 -> libcom_err.so.2.1 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 16168 May 22 2022 usr/lib64/libcom_err.so.2.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libcrypt.so -> libcrypt.so.1.1.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libcrypt.so.1 -> libcrypt.so.1.1.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 135616 May 5 2021 usr/lib64/libcrypt.so.1.1.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 19 Jan 15 2024 usr/lib64/libcrypto.so -> libcrypto.so.1.1.1k Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 19 Jan 15 2024 usr/lib64/libcrypto.so.1.1 -> libcrypto.so.1.1.1k Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 3087096 Nov 30 2023 usr/lib64/libcrypto.so.1.1.1k Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 23 Jan 15 2024 usr/lib64/libcryptsetup.so.12 -> libcryptsetup.so.12.6.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 525864 Jul 11 2023 usr/lib64/libcryptsetup.so.12.6.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/lib64/libcurl.so.4 -> libcurl.so.4.5.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 596088 Dec 11 2023 usr/lib64/libcurl.so.4.5.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 18 Jan 15 2024 usr/lib64/libdaemon.so.0 -> libdaemon.so.0.5.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 28672 May 11 2019 usr/lib64/libdaemon.so.0.5.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 19 Jan 15 2024 usr/lib64/libdbus-1.so.3 -> libdbus-1.so.3.19.7 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 351032 Jun 19 2023 usr/lib64/libdbus-1.so.3.19.7 Jul 22 08:30:18 managed-node11 dracut[8687]: -r-xr-xr-x 1 root root 370608 Jan 15 2024 usr/lib64/libdevmapper.so.1.02 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 19104 Jan 15 2024 usr/lib64/libdl-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 13 Jan 15 2024 usr/lib64/libdl.so -> libdl-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 13 Jan 15 2024 usr/lib64/libdl.so.2 -> libdl-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 678192 Dec 12 2023 usr/lib64/libdw-0.190.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 14 Jan 15 2024 usr/lib64/libdw.so -> libdw-0.190.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 14 Jan 15 2024 usr/lib64/libdw.so.1 -> libdw-0.190.so Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 108368 Dec 12 2023 usr/lib64/libelf-0.190.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 15 Jan 15 2024 usr/lib64/libelf.so -> libelf-0.190.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 15 Jan 15 2024 usr/lib64/libelf.so.1 -> libelf-0.190.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 15 Jan 15 2024 usr/lib64/libffi.so.6 -> libffi.so.6.0.2 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 36600 Dec 6 2022 usr/lib64/libffi.so.6.0.2 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 11752 Jan 15 2024 usr/lib64/libfreebl3.so Jul 22 08:30:18 
managed-node11 dracut[8687]: -rw-r--r-- 1 root root 84 Jan 15 2024 usr/lib64/libfreeblpriv3.chk Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 824008 Jan 15 2024 usr/lib64/libfreeblpriv3.so Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 99632 Jan 15 2024 usr/lib64/libgcc_s-8-20210514.so.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 24 Jan 15 2024 usr/lib64/libgcc_s.so.1 -> libgcc_s-8-20210514.so.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 19 Jan 15 2024 usr/lib64/libgcrypt.so.20 -> libgcrypt.so.20.2.5 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 1186624 Jun 27 2022 usr/lib64/libgcrypt.so.20.2.5 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 22 Jan 15 2024 usr/lib64/libgio-2.0.so.0 -> libgio-2.0.so.0.5600.4 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 1769696 Jan 15 2024 usr/lib64/libgio-2.0.so.0.5600.4 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 23 Jan 15 2024 usr/lib64/libglib-2.0.so.0 -> libglib-2.0.so.0.5600.4 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 1171328 Jan 15 2024 usr/lib64/libglib-2.0.so.0.5600.4 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 26 Jan 15 2024 usr/lib64/libgmodule-2.0.so.0 -> libgmodule-2.0.so.0.5600.4 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 16064 Jan 15 2024 usr/lib64/libgmodule-2.0.so.0.5600.4 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/lib64/libgmp.so.10 -> libgmp.so.10.3.2 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 628752 Jan 15 2024 usr/lib64/libgmp.so.10.3.2 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 20 Jan 15 2024 usr/lib64/libgnutls.so.30 -> libgnutls.so.30.28.2 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 2051280 Jan 15 2024 usr/lib64/libgnutls.so.30.28.2 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 26 Jan 15 2024 usr/lib64/libgobject-2.0.so.0 -> libgobject-2.0.so.0.5600.4 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 346760 Jan 15 2024 usr/lib64/libgobject-2.0.so.0.5600.4 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 22 Jan 15 2024 usr/lib64/libgpg-error.so.0 -> libgpg-error.so.0.24.2 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 138368 May 10 2019 usr/lib64/libgpg-error.so.0.24.2 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 21 Jan 15 2024 usr/lib64/libgssapi_krb5.so -> libgssapi_krb5.so.2.2 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 21 Jan 15 2024 usr/lib64/libgssapi_krb5.so.2 -> libgssapi_krb5.so.2.2 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 355432 Jan 15 2024 usr/lib64/libgssapi_krb5.so.2.2 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libhogweed.so.4 -> libhogweed.so.4.5 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 197048 Jul 15 2021 usr/lib64/libhogweed.so.4.5 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/lib64/libidn2.so.0 -> libidn2.so.0.3.6 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 123248 Nov 8 2019 usr/lib64/libidn2.so.0.3.6 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 20 Jan 15 2024 usr/lib64/libjansson.so.4 -> libjansson.so.4.14.0 Jul 22 08:30:18 managed-node11 
dracut[8687]: -rwxr-xr-x 1 root root 58432 Dec 2 2021 usr/lib64/libjansson.so.4.14.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 18 Jan 15 2024 usr/lib64/libjson-c.so.4 -> libjson-c.so.4.0.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 66456 Nov 11 2021 usr/lib64/libjson-c.so.4.0.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 18 Jan 15 2024 usr/lib64/libk5crypto.so -> libk5crypto.so.3.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 18 Jan 15 2024 usr/lib64/libk5crypto.so.3 -> libk5crypto.so.3.1 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 95792 Jan 15 2024 usr/lib64/libk5crypto.so.3.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 18 Jan 15 2024 usr/lib64/libkeyutils.so -> libkeyutils.so.1.6 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 18 Jan 15 2024 usr/lib64/libkeyutils.so.1 -> libkeyutils.so.1.6 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 16192 Jun 19 2021 usr/lib64/libkeyutils.so.1.6 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/lib64/libkmod.so.2 -> libkmod.so.2.3.3 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 99832 Oct 24 2023 usr/lib64/libkmod.so.2.3.3 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 14 Jan 15 2024 usr/lib64/libkrb5.so -> libkrb5.so.3.3 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 14 Jan 15 2024 usr/lib64/libkrb5.so.3 -> libkrb5.so.3.3 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 975040 Jan 15 2024 usr/lib64/libkrb5.so.3.3 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 21 Jan 15 2024 usr/lib64/libkrb5support.so -> libkrb5support.so.0.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 21 Jan 15 2024 usr/lib64/libkrb5support.so.0 -> libkrb5support.so.0.1 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 70928 Jan 15 2024 usr/lib64/libkrb5support.so.0.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 21 Jan 15 2024 usr/lib64/liblber-2.4.so.2 -> liblber-2.4.so.2.10.9 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 66808 Aug 10 2021 usr/lib64/liblber-2.4.so.2.10.9 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 21 Jan 15 2024 usr/lib64/libldap-2.4.so.2 -> libldap-2.4.so.2.10.9 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 328600 Aug 10 2021 usr/lib64/libldap-2.4.so.2.10.9 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 15 Jan 15 2024 usr/lib64/liblz4.so.1 -> liblz4.so.1.8.3 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 119336 Jun 29 2021 usr/lib64/liblz4.so.1.8.3 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/lib64/liblzma.so -> liblzma.so.5.2.4 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/lib64/liblzma.so.5 -> liblzma.so.5.2.4 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 161568 Jun 27 2022 usr/lib64/liblzma.so.5.2.4 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 1597840 Jan 15 2024 usr/lib64/libm-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 141 Jan 15 2024 usr/lib64/libm.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 12 Jan 15 2024 usr/lib64/libm.so.6 -> libm-2.28.so Jul 22 08:30:18 managed-node11 
dracut[8687]: lrwxrwxrwx 1 root root 15 Jan 15 2024 usr/lib64/libmnl.so.0 -> libmnl.so.0.2.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 24688 May 11 2019 usr/lib64/libmnl.so.0.2.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libmount.so.1 -> libmount.so.1.1.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 370952 Jan 15 2024 usr/lib64/libmount.so.1.1.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libncurses.so.6 -> libncurses.so.6.1 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 179816 Aug 15 2023 usr/lib64/libncurses.so.6.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 15 Jan 15 2024 usr/lib64/libndp.so.0 -> libndp.so.0.1.1 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 28696 May 2 2021 usr/lib64/libndp.so.0.1.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/lib64/libnettle.so.6 -> libnettle.so.6.5 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 238640 Jul 15 2021 usr/lib64/libnettle.so.6.5 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 21 Jan 15 2024 usr/lib64/libnghttp2.so.14 -> libnghttp2.so.14.17.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 166616 Jan 15 2024 usr/lib64/libnghttp2.so.14.17.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 19 Jan 15 2024 usr/lib64/libnl-3.so.200 -> libnl-3.so.200.26.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 146264 Jul 7 2022 usr/lib64/libnl-3.so.200.26.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 23 Jan 15 2024 usr/lib64/libnl-cli-3.so.200 -> libnl-cli-3.so.200.26.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 49152 Jul 7 2022 usr/lib64/libnl-cli-3.so.200.26.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 24 Jan 15 2024 usr/lib64/libnl-genl-3.so.200 -> libnl-genl-3.so.200.26.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 29744 Jul 7 2022 usr/lib64/libnl-genl-3.so.200.26.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 22 Jan 15 2024 usr/lib64/libnl-nf-3.so.200 -> libnl-nf-3.so.200.26.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 115184 Jul 7 2022 usr/lib64/libnl-nf-3.so.200.26.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 25 Jan 15 2024 usr/lib64/libnl-route-3.so.200 -> libnl-route-3.so.200.26.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 600504 Jul 7 2022 usr/lib64/libnl-route-3.so.200.26.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 28792 Jan 15 2024 usr/lib64/libnss_dns-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 18 Jan 15 2024 usr/lib64/libnss_dns.so.2 -> libnss_dns-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 53888 Jan 15 2024 usr/lib64/libnss_files-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 20 Jan 15 2024 usr/lib64/libnss_files.so.2 -> libnss_files-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libomapi.so.0 -> libomapi.so.0.0.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 136368 Jan 15 2024 usr/lib64/libomapi.so.0.0.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 19 Jan 15 2024 
usr/lib64/libp11-kit.so.0 -> libp11-kit.so.0.3.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 1258912 Nov 29 2023 usr/lib64/libp11-kit.so.0.3.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/lib64/libpam.so.0 -> libpam.so.0.84.2 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 65960 Jan 15 2024 usr/lib64/libpam.so.0.84.2 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 15 Jan 15 2024 usr/lib64/libpci.so.3 -> libpci.so.3.7.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 58968 Sep 23 2022 usr/lib64/libpci.so.3.7.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libpcre.so.1 -> libpcre.so.1.2.10 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 464256 Jun 7 2021 usr/lib64/libpcre.so.1.2.10 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 19 Jan 15 2024 usr/lib64/libpcre2-8.so -> libpcre2-8.so.0.7.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 19 Jan 15 2024 usr/lib64/libpcre2-8.so.0 -> libpcre2-8.so.0.7.1 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 542840 Jun 27 2022 usr/lib64/libpcre2-8.so.0.7.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 27 Jan 15 2024 usr/lib64/libply-splash-core.so.5 -> libply-splash-core.so.5.0.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 143576 Feb 25 2022 usr/lib64/libply-splash-core.so.5.0.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 15 Jan 15 2024 usr/lib64/libply.so.5 -> libply.so.5.0.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 115456 Feb 25 2022 usr/lib64/libply.so.5.0.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 18 Jan 15 2024 usr/lib64/libprocps.so.7 -> libprocps.so.7.1.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 82848 Aug 15 2023 usr/lib64/libprocps.so.7.1.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 15 Jan 15 2024 usr/lib64/libpsl.so.5 -> libpsl.so.5.3.1 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 69672 Jun 16 2020 usr/lib64/libpsl.so.5.3.1 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 149960 Jan 15 2024 usr/lib64/libpthread-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 18 Jan 15 2024 usr/lib64/libpthread.so -> libpthread-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 18 Jan 15 2024 usr/lib64/libpthread.so.0 -> libpthread-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 18 Jan 15 2024 usr/lib64/libreadline.so.7 -> libreadline.so.7.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 324792 May 10 2019 usr/lib64/libreadline.so.7.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 92328 Jan 15 2024 usr/lib64/libresolv-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libresolv.so -> libresolv-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libresolv.so.2 -> libresolv-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 42720 Jan 15 2024 usr/lib64/librt-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 13 Jan 15 2024 usr/lib64/librt.so -> librt-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 13 Jan 15 
2024 usr/lib64/librt.so.1 -> librt-2.28.so Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libsasl2.so.3 -> libsasl2.so.3.0.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 124904 Feb 24 2022 usr/lib64/libsasl2.so.3.0.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 19 Jan 15 2024 usr/lib64/libseccomp.so.2 -> libseccomp.so.2.5.2 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 128328 Nov 11 2021 usr/lib64/libseccomp.so.2.5.2 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 15 Jan 15 2024 usr/lib64/libselinux.so -> libselinux.so.1 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 171480 Dec 13 2022 usr/lib64/libselinux.so.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 13 Jan 15 2024 usr/lib64/libsepol.so -> libsepol.so.1 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 734512 Aug 24 2021 usr/lib64/libsepol.so.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 21 Jan 15 2024 usr/lib64/libsmartcols.so.1 -> libsmartcols.so.1.1.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 221096 Jan 15 2024 usr/lib64/libsmartcols.so.1.1.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 15 Jan 15 2024 usr/lib64/libssh.so.4 -> libssh.so.4.8.7 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 468264 Jan 15 2024 usr/lib64/libssh.so.4.8.7 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/lib64/libssl.so -> libssl.so.1.1.1k Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/lib64/libssl.so.1.1 -> libssl.so.1.1.1k Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 615200 Nov 30 2023 usr/lib64/libssl.so.1.1.1k Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 20 Jan 15 2024 usr/lib64/libsystemd.so.0 -> libsystemd.so.0.23.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 1386936 Jan 15 2024 usr/lib64/libsystemd.so.0.23.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libtasn1.so.6 -> libtasn1.so.6.5.5 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 78344 Jan 17 2023 usr/lib64/libtasn1.so.6.5.5 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/lib64/libteam.so.5 -> libteam.so.5.6.1 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 57448 Dec 8 2022 usr/lib64/libteam.so.5.6.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 20 Jan 15 2024 usr/lib64/libteamdctl.so.0 -> libteamdctl.so.0.1.5 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 24064 Dec 8 2022 usr/lib64/libteamdctl.so.0.1.5 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 15 Jan 15 2024 usr/lib64/libtinfo.so.6 -> libtinfo.so.6.1 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 187088 Aug 15 2023 usr/lib64/libtinfo.so.6.1 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 17 Jan 15 2024 usr/lib64/libudev.so.1 -> libudev.so.1.6.11 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 661792 Jan 15 2024 usr/lib64/libudev.so.1.6.11 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 21 Jan 15 2024 usr/lib64/libunistring.so.2 -> libunistring.so.2.1.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 1580256 May 10 2019 
usr/lib64/libunistring.so.2.1.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/lib64/libuuid.so.1 -> libuuid.so.1.3.0 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 32864 Jan 15 2024 usr/lib64/libuuid.so.1.3.0 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/lib64/libxml2.so.2 -> libxml2.so.2.9.7 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 1502896 Sep 20 2023 usr/lib64/libxml2.so.2.9.7 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 14 Jan 15 2024 usr/lib64/libz.so -> libz.so.1.2.11 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 14 Jan 15 2024 usr/lib64/libz.so.1 -> libz.so.1.2.11 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 99112 May 17 2023 usr/lib64/libz.so.1.2.11 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/lib64/libzstd.so -> libzstd.so.1.4.4 Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/lib64/libzstd.so.1 -> libzstd.so.1.4.4 Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 678016 Jun 17 2020 usr/lib64/libzstd.so.1.4.4 Jul 22 08:30:18 managed-node11 dracut[8687]: drwxr-xr-x 2 root root 0 Jan 15 2024 usr/lib64/plymouth Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 16312 Feb 25 2022 usr/lib64/plymouth/details.so Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 20496 Feb 25 2022 usr/lib64/plymouth/text.so Jul 22 08:30:18 managed-node11 dracut[8687]: drwxr-xr-x 2 root root 0 Jan 15 2024 usr/libexec Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 16568 Jan 15 2024 usr/libexec/nm-dhcp-helper Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 757736 Jan 15 2024 usr/libexec/nm-initrd-generator Jul 22 08:30:18 managed-node11 dracut[8687]: drwxr-xr-x 2 root root 0 Jan 15 2024 usr/sbin Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 3578904 Jan 15 2024 usr/sbin/NetworkManager Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 42192 May 11 2019 usr/sbin/biosdevname Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 100352 Jan 15 2024 usr/sbin/blkid Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 41920 Jan 18 2023 usr/sbin/chroot Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 11 Jan 15 2024 usr/sbin/depmod -> ../bin/kmod Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 464032 Jan 15 2024 usr/sbin/dhclient Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 54096 Jan 15 2024 usr/sbin/fsck Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 1968 Jun 8 2023 usr/sbin/fsck.xfs Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/sbin/halt -> ../bin/systemctl Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 22 Jan 15 2024 usr/sbin/init -> ../lib/systemd/systemd Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 1163 Oct 8 2018 usr/sbin/initqueue Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 11 Jan 15 2024 usr/sbin/insmod -> ../bin/kmod Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 193 Oct 8 2018 usr/sbin/insmodpost.sh Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 709320 Sep 25 2023 usr/sbin/ip Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 199104 
Jan 15 2024 usr/sbin/kexec Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 492 Oct 8 2018 usr/sbin/loginit Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 92104 Jan 15 2024 usr/sbin/losetup Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 11 Jan 15 2024 usr/sbin/lsmod -> ../bin/kmod Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 11 Jan 15 2024 usr/sbin/modinfo -> ../bin/kmod Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 11 Jan 15 2024 usr/sbin/modprobe -> ../bin/kmod Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 2677 Jan 15 2024 usr/sbin/netroot Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 12088 Jan 15 2024 usr/sbin/nologin Jul 22 08:30:18 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 144656 Feb 25 2022 usr/sbin/plymouthd Jul 22 08:30:18 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/sbin/poweroff -> ../bin/systemctl Jul 22 08:30:19 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 1346 Oct 8 2018 usr/sbin/rdsosreport Jul 22 08:30:19 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 16 Jan 15 2024 usr/sbin/reboot -> ../bin/systemctl Jul 22 08:30:19 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 11 Jan 15 2024 usr/sbin/rmmod -> ../bin/kmod Jul 22 08:30:19 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 119360 Mar 7 2023 usr/sbin/rngd Jul 22 08:30:19 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 20600 Jan 15 2024 usr/sbin/swapoff Jul 22 08:30:19 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 14 Jan 15 2024 usr/sbin/udevadm -> ../bin/udevadm Jul 22 08:30:19 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 778024 Jun 8 2023 usr/sbin/xfs_db Jul 22 08:30:19 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 782 Jun 8 2023 usr/sbin/xfs_metadump Jul 22 08:30:19 managed-node11 dracut[8687]: -rwxr-xr-x 1 root root 731720 Jun 8 2023 usr/sbin/xfs_repair Jul 22 08:30:19 managed-node11 dracut[8687]: drwxr-xr-x 4 root root 0 Jan 15 2024 usr/share Jul 22 08:30:19 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 25 Jan 15 2024 usr/share/consolefonts -> /usr/lib/kbd/consolefonts Jul 22 08:30:19 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 25 Jan 15 2024 usr/share/consoletrans -> /usr/lib/kbd/consoletrans Jul 22 08:30:19 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 20 Jan 15 2024 usr/share/keymaps -> /usr/lib/kbd/keymaps Jul 22 08:30:19 managed-node11 dracut[8687]: drwxr-xr-x 3 root root 0 Jan 15 2024 usr/share/plymouth Jul 22 08:30:19 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 134 Feb 25 2022 usr/share/plymouth/plymouthd.defaults Jul 22 08:30:19 managed-node11 dracut[8687]: drwxr-xr-x 4 root root 0 Jan 15 2024 usr/share/plymouth/themes Jul 22 08:30:19 managed-node11 dracut[8687]: drwxr-xr-x 2 root root 0 Jan 15 2024 usr/share/plymouth/themes/details Jul 22 08:30:19 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 84 May 30 2020 usr/share/plymouth/themes/details/details.plymouth Jul 22 08:30:19 managed-node11 dracut[8687]: drwxr-xr-x 2 root root 0 Jan 15 2024 usr/share/plymouth/themes/text Jul 22 08:30:19 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 94 May 30 2020 usr/share/plymouth/themes/text/text.plymouth Jul 22 08:30:19 managed-node11 dracut[8687]: drwxr-xr-x 4 root root 0 Jan 15 2024 usr/share/terminfo Jul 22 08:30:19 managed-node11 dracut[8687]: drwxr-xr-x 2 root root 0 Jan 15 2024 usr/share/terminfo/l Jul 22 08:30:19 managed-node11 dracut[8687]: -rw-r--r-- 1 root 
root 1822 Aug 15 2023 usr/share/terminfo/l/linux Jul 22 08:30:19 managed-node11 dracut[8687]: drwxr-xr-x 2 root root 0 Jan 15 2024 usr/share/terminfo/v Jul 22 08:30:19 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 1190 Aug 15 2023 usr/share/terminfo/v/vt100 Jul 22 08:30:19 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 1184 Aug 15 2023 usr/share/terminfo/v/vt102 Jul 22 08:30:19 managed-node11 dracut[8687]: -rw-r--r-- 1 root root 1377 Aug 15 2023 usr/share/terminfo/v/vt220 Jul 22 08:30:19 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 20 Jan 15 2024 usr/share/unimaps -> /usr/lib/kbd/unimaps Jul 22 08:30:19 managed-node11 dracut[8687]: drwxr-xr-x 3 root root 0 Jan 15 2024 var Jul 22 08:30:19 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 11 Jan 15 2024 var/lock -> ../run/lock Jul 22 08:30:19 managed-node11 dracut[8687]: lrwxrwxrwx 1 root root 6 Jan 15 2024 var/run -> ../run Jul 22 08:30:19 managed-node11 dracut[8687]: drwxr-xr-x 2 root root 0 Jan 15 2024 var/tmp Jul 22 08:30:19 managed-node11 dracut[8687]: ======================================================================== Jul 22 08:30:19 managed-node11 dracut[8687]: *** Creating initramfs image file '/boot/initramfs-4.18.0-553.5.1.el8.x86_64.tmp' done *** Jul 22 08:30:21 managed-node11 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. -- Subject: Unit run-reba4d6c3d4b6446187a01bf3ad5784cc.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit run-reba4d6c3d4b6446187a01bf3ad5784cc.service has finished starting up. -- -- The start-up result is done. Jul 22 08:30:21 managed-node11 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 22 08:30:21 managed-node11 systemd[1]: Starting man-db-cache-update.service... -- Subject: Unit man-db-cache-update.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has begun starting up. Jul 22 08:30:21 managed-node11 systemd[1]: Reloading. Jul 22 08:30:22 managed-node11 sudo[8284]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:22 managed-node11 systemd[1]: man-db-cache-update.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit man-db-cache-update.service has successfully entered the 'dead' state. Jul 22 08:30:22 managed-node11 systemd[1]: Started man-db-cache-update.service. -- Subject: Unit man-db-cache-update.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has finished starting up. -- -- The start-up result is done. Jul 22 08:30:22 managed-node11 systemd[1]: run-reba4d6c3d4b6446187a01bf3ad5784cc.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-reba4d6c3d4b6446187a01bf3ad5784cc.service has successfully entered the 'dead' state. 
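A hedged aside, not part of the captured run: the dracut job above finishes by writing the temporary image '/boot/initramfs-4.18.0-553.5.1.el8.x86_64.tmp'. If you later need to confirm what was packed into the finished initramfs, dracut's lsinitrd tool prints a file listing much like the one logged here. The task below is only an assumed verification step; the final image name (.img rather than .tmp) should be checked against the host.

- name: List the contents of the rebuilt initramfs (hypothetical verification step)
  ansible.builtin.command: lsinitrd /boot/initramfs-4.18.0-553.5.1.el8.x86_64.img
  register: initramfs_listing   # register name is an assumption, not from the log
  changed_when: false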
Jul 22 08:30:23 managed-node11 sudo[17848]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xflyumojhjzbyxqxngpahnkvdnrxlgli ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187422.8569753-12244-195363186560141/AnsiballZ_blivet.py' Jul 22 08:30:23 managed-node11 sudo[17848]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:23 managed-node11 kernel: device-mapper: uevent: version 1.0.3 Jul 22 08:30:23 managed-node11 kernel: device-mapper: ioctl: 4.46.0-ioctl (2022-02-22) initialised: dm-devel@redhat.com Jul 22 08:30:23 managed-node11 systemd-udevd[532]: Network interface NamePolicy= disabled on kernel command line, ignoring. Jul 22 08:30:23 managed-node11 platform-python[17851]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=True safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:30:23 managed-node11 sudo[17848]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:24 managed-node11 sudo[17979]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbemattwstvidknrnmgphkvvuttmymcz ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187423.939213-12335-2293783786678/AnsiballZ_dnf.py' Jul 22 08:30:24 managed-node11 sudo[17979]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:24 managed-node11 platform-python[17982]: ansible-dnf Invoked with name=['kpartx'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:30:26 managed-node11 sudo[17979]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:27 managed-node11 sudo[18105]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmcabcxxzngxnrdldvhywjbplfnaxeem ; /usr/libexec/platform-python 
/root/.ansible/tmp/ansible-tmp-1753187426.8560867-12578-4979764391676/AnsiballZ_service_facts.py' Jul 22 08:30:27 managed-node11 sudo[18105]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:27 managed-node11 platform-python[18108]: ansible-service_facts Invoked Jul 22 08:30:28 managed-node11 sudo[18105]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:29 managed-node11 sudo[18325]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcrzfybvnjblcdoekmnprjwqjpkdvmbg ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187428.9631567-12698-174223978345603/AnsiballZ_blivet.py' Jul 22 08:30:29 managed-node11 sudo[18325]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:29 managed-node11 platform-python[18328]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=True packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:30:29 managed-node11 sudo[18325]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:30 managed-node11 sudo[18453]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjlbbjicbbvhuqnfeowhtygnkuovwpyv ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187430.152333-12827-249947039778999/AnsiballZ_stat.py' Jul 22 08:30:30 managed-node11 sudo[18453]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:30 managed-node11 platform-python[18456]: ansible-stat Invoked with path=/etc/fstab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:30 managed-node11 sudo[18453]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:31 managed-node11 sudo[18581]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feqvbnffnsaidruojdfcarfghzjeorwi ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187431.2673516-12996-208530061267268/AnsiballZ_stat.py' Jul 22 08:30:31 managed-node11 sudo[18581]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:31 managed-node11 platform-python[18584]: ansible-stat Invoked with path=/etc/crypttab follow=False 
get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:31 managed-node11 sudo[18581]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:32 managed-node11 sudo[18709]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfodndxbwfvvipnerquijvmbumypscgu ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187431.9822953-13086-255923010330560/AnsiballZ_setup.py' Jul 22 08:30:32 managed-node11 sudo[18709]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:32 managed-node11 platform-python[18712]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:30:32 managed-node11 sudo[18709]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:33 managed-node11 sudo[18863]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfdtqtivexlbigjjsmgwfaecearguphn ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187433.1661184-13210-207756059416231/AnsiballZ_dnf.py' Jul 22 08:30:33 managed-node11 sudo[18863]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:33 managed-node11 platform-python[18866]: ansible-dnf Invoked with name=['util-linux'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:30:35 managed-node11 sudo[18863]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:36 managed-node11 sudo[18989]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxnnnbqffqjnbvviuixocjlrnnuhxgws ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187436.2710984-13593-90190900004378/AnsiballZ_find_unused_disk.py' Jul 22 08:30:36 managed-node11 sudo[18989]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:36 managed-node11 platform-python[18992]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 with_interface=scsi max_size=0 match_sector_size=False Jul 22 08:30:36 managed-node11 sudo[18989]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:37 managed-node11 sudo[19117]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jusccfvcylvodyhacmmuydovaonxinza ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187437.0695906-13643-243091887011699/AnsiballZ_command.py' Jul 22 08:30:37 managed-node11 sudo[19117]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:38 managed-node11 platform-python[19120]: ansible-command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:30:38 managed-node11 sudo[19117]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:38 managed-node11 sshd[19143]: Accepted publickey for root from 10.31.12.181 port 50578 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE 
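A rough, non-authoritative sketch of the diagnostic captured in the ansible-command entry at 08:30:38 above: a shell step that traces itself, redirects stdout to stderr, dumps an lsblk snapshot of every block device, and tails the journal. The lsblk options are taken verbatim from the log; the task name and changed_when handling are assumptions.

- name: Dump block-device state and journal for debugging (assumed wording)
  ansible.builtin.shell: |
    set -x
    exec 1>&2
    lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
    journalctl -ex
  changed_when: false   # purely diagnostic, never reports a change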
Jul 22 08:30:38 managed-node11 systemd-logind[602]: New session 9 of user root. -- Subject: A new session 9 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 9 has been created for the user root. -- -- The leading process of the session is 19143. Jul 22 08:30:38 managed-node11 sshd[19143]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:30:38 managed-node11 systemd[1]: Started Session 9 of user root. -- Subject: Unit session-9.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-9.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:30:39 managed-node11 sshd[19146]: Received disconnect from 10.31.12.181 port 50578:11: disconnected by user Jul 22 08:30:39 managed-node11 sshd[19146]: Disconnected from user root 10.31.12.181 port 50578 Jul 22 08:30:39 managed-node11 sshd[19143]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:39 managed-node11 systemd[1]: session-9.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-9.scope has successfully entered the 'dead' state. Jul 22 08:30:39 managed-node11 systemd-logind[602]: Session 9 logged out. Waiting for processes to exit. Jul 22 08:30:39 managed-node11 systemd-logind[602]: Removed session 9. -- Subject: Session 9 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 9 has been terminated. Jul 22 08:30:39 managed-node11 sshd[19167]: Accepted publickey for root from 10.31.12.181 port 50588 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:39 managed-node11 systemd[1]: Started Session 10 of user root. -- Subject: Unit session-10.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-10.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:30:39 managed-node11 systemd-logind[602]: New session 10 of user root. -- Subject: A new session 10 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 10 has been created for the user root. -- -- The leading process of the session is 19167. Jul 22 08:30:39 managed-node11 sshd[19167]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:30:39 managed-node11 sshd[19170]: Received disconnect from 10.31.12.181 port 50588:11: disconnected by user Jul 22 08:30:39 managed-node11 sshd[19170]: Disconnected from user root 10.31.12.181 port 50588 Jul 22 08:30:39 managed-node11 sshd[19167]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:39 managed-node11 systemd[1]: session-10.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-10.scope has successfully entered the 'dead' state. Jul 22 08:30:39 managed-node11 systemd-logind[602]: Session 10 logged out. Waiting for processes to exit. Jul 22 08:30:39 managed-node11 systemd-logind[602]: Removed session 10. 
-- Subject: Session 10 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 10 has been terminated. Jul 22 08:30:42 managed-node11 sudo[19332]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phkrmsrixszhwblcsueklnsnthrezebt ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187441.4065847-14053-265682826692384/AnsiballZ_setup.py' Jul 22 08:30:42 managed-node11 sudo[19332]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:42 managed-node11 platform-python[19335]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:30:42 managed-node11 sudo[19332]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:44 managed-node11 sudo[19486]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiojfkphqvvjwzmdewpnnmjejpowyzgx ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187443.916845-14356-67644190439999/AnsiballZ_stat.py' Jul 22 08:30:44 managed-node11 sudo[19486]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:44 managed-node11 platform-python[19489]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:44 managed-node11 sudo[19486]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:46 managed-node11 sudo[19612]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoanjqvxaipzjvljadelbdlhrwncbpnd ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187445.7165647-14593-217576760624547/AnsiballZ_dnf.py' Jul 22 08:30:46 managed-node11 sudo[19612]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:46 managed-node11 platform-python[19615]: ansible-dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'vdo', 'kmod-kvdo', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:30:49 managed-node11 sudo[19612]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:50 managed-node11 sudo[19738]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgamluyoffdamdmwneekbvxcyryihosd ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187450.0772572-15076-257795353039120/AnsiballZ_blivet.py' Jul 22 08:30:50 managed-node11 sudo[19738]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:50 managed-node11 platform-python[19741]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': 
None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=True safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:30:51 managed-node11 sudo[19738]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:52 managed-node11 sudo[19866]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afsmzqhvnkgfidoreyrjguzkukffnkhk ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187451.7554514-15285-183064931564468/AnsiballZ_dnf.py' Jul 22 08:30:52 managed-node11 sudo[19866]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:52 managed-node11 platform-python[19869]: ansible-dnf Invoked with name=['kpartx'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:30:54 managed-node11 sudo[19866]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:55 managed-node11 sudo[19992]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkeawrbajcjzvgpunqpcwssmuddsprru ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187455.0714626-15644-270807092190421/AnsiballZ_service_facts.py' Jul 22 08:30:55 managed-node11 sudo[19992]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:56 managed-node11 platform-python[19995]: ansible-service_facts Invoked Jul 22 08:30:57 managed-node11 sudo[19992]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:58 managed-node11 sudo[20212]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhvkolcaqkkoqsglvwucsnzmqcucuslt ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187457.994115-15994-274586902294019/AnsiballZ_blivet.py' Jul 22 08:30:58 managed-node11 sudo[20212]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:58 managed-node11 platform-python[20215]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 
'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=True packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:30:58 managed-node11 sudo[20212]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:00 managed-node11 sudo[20340]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tenlfbelkljbtbsvzujiqljdeeszzqtb ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187459.5342493-16188-257812781159862/AnsiballZ_stat.py' Jul 22 08:31:00 managed-node11 sudo[20340]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:00 managed-node11 platform-python[20343]: ansible-stat Invoked with path=/etc/fstab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:00 managed-node11 sudo[20340]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:03 managed-node11 sudo[20468]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vygfamprdbfsboeezmelgqkkbdgfceyq ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187462.6762831-16656-38047181699806/AnsiballZ_stat.py' Jul 22 08:31:03 managed-node11 sudo[20468]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:03 managed-node11 platform-python[20471]: ansible-stat Invoked with path=/etc/crypttab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:03 managed-node11 sudo[20468]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:04 managed-node11 sudo[20596]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-negxeumspofzmjglqiwmeptyhelgehwq ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187464.0810738-16839-91392603542945/AnsiballZ_setup.py' Jul 22 08:31:04 managed-node11 sudo[20596]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:04 managed-node11 platform-python[20599]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:31:05 managed-node11 sudo[20596]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:07 managed-node11 sudo[20750]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihelukztftaxlbvckfhkjiympsxazfme ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187465.7395153-17076-187856227149117/AnsiballZ_package_facts.py' 
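Around each blivet pass the play stats /etc/fstab and /etc/crypttab with checksum and MIME detection enabled, so it can tell afterwards whether the run touched either file. A minimal sketch of an equivalent task, assuming a loop and a register name that do not appear in the log:

- name: Capture fstab and crypttab state (assumed task name)
  ansible.builtin.stat:
    path: "{{ item }}"
    get_checksum: true
    get_mime: true
    checksum_algorithm: sha1
  loop:
    - /etc/fstab
    - /etc/crypttab
  register: storage_config_stat   # hypothetical variable name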
Jul 22 08:31:07 managed-node11 sudo[20750]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:07 managed-node11 platform-python[20753]: ansible-package_facts Invoked with manager=['auto'] strategy=first Jul 22 08:31:08 managed-node11 sudo[20750]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:13 managed-node11 sudo[20878]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrbgjjikwwogpuwweoaioaaszsjdxxrg ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187471.4965212-17773-218033214895515/AnsiballZ_command.py' Jul 22 08:31:13 managed-node11 sudo[20878]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:13 managed-node11 platform-python[20881]: ansible-command Invoked with _raw_params=modprobe --dry-run kvdo warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:31:13 managed-node11 sudo[20878]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:14 managed-node11 sudo[21005]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvdbhhrkmogpjxlfgkagkhrflexgljtg ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187473.7100077-18063-195978065168792/AnsiballZ_command.py' Jul 22 08:31:14 managed-node11 sudo[21005]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:14 managed-node11 platform-python[21008]: ansible-command Invoked with _raw_params=modprobe --dry-run dm-vdo warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:31:14 managed-node11 sudo[21005]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:16 managed-node11 sudo[21132]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-latflhrmpemzmzyyacjzrygxaehfgnvr ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187475.9161906-18280-190442160656364/AnsiballZ_dnf.py' Jul 22 08:31:16 managed-node11 sudo[21132]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:16 managed-node11 platform-python[21135]: ansible-dnf Invoked with name=['util-linux'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:31:19 managed-node11 sudo[21132]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:21 managed-node11 sudo[21258]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dunsrfthmapelmkuhzwtswddpkdfmgvy ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187480.0922537-18896-102098716036784/AnsiballZ_find_unused_disk.py' Jul 22 08:31:21 managed-node11 sudo[21258]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:21 managed-node11 platform-python[21261]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=10g max_return=1 max_size=0 match_sector_size=False with_interface=None Jul 22 08:31:21 managed-node11 sudo[21258]: pam_unix(sudo:session): 
session closed for user root Jul 22 08:31:22 managed-node11 sudo[21386]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otgchukrsfkueturgjzwgvvlvaqpqpni ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187482.2409515-19212-198283903011597/AnsiballZ_command.py' Jul 22 08:31:22 managed-node11 sudo[21386]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:22 managed-node11 platform-python[21389]: ansible-command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:31:22 managed-node11 sudo[21386]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:25 managed-node11 sshd[21412]: Accepted publickey for root from 10.31.12.181 port 37858 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:31:25 managed-node11 systemd-logind[602]: New session 11 of user root. -- Subject: A new session 11 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 11 has been created for the user root. -- -- The leading process of the session is 21412. Jul 22 08:31:25 managed-node11 systemd[1]: Started Session 11 of user root. -- Subject: Unit session-11.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-11.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:31:25 managed-node11 sshd[21412]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:31:25 managed-node11 sshd[21415]: Received disconnect from 10.31.12.181 port 37858:11: disconnected by user Jul 22 08:31:25 managed-node11 sshd[21415]: Disconnected from user root 10.31.12.181 port 37858 Jul 22 08:31:25 managed-node11 sshd[21412]: pam_unix(sshd:session): session closed for user root Jul 22 08:31:25 managed-node11 systemd[1]: session-11.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-11.scope has successfully entered the 'dead' state. Jul 22 08:31:25 managed-node11 systemd-logind[602]: Session 11 logged out. Waiting for processes to exit. Jul 22 08:31:25 managed-node11 systemd-logind[602]: Removed session 11. -- Subject: Session 11 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 11 has been terminated. Jul 22 08:31:26 managed-node11 sshd[21437]: Accepted publickey for root from 10.31.12.181 port 37866 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:31:26 managed-node11 systemd[1]: Started Session 12 of user root. -- Subject: Unit session-12.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-12.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:31:26 managed-node11 systemd-logind[602]: New session 12 of user root. 
-- Subject: A new session 12 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 12 has been created for the user root. -- -- The leading process of the session is 21437. Jul 22 08:31:26 managed-node11 sshd[21437]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:31:26 managed-node11 sshd[21440]: Received disconnect from 10.31.12.181 port 37866:11: disconnected by user Jul 22 08:31:26 managed-node11 sshd[21440]: Disconnected from user root 10.31.12.181 port 37866 Jul 22 08:31:26 managed-node11 sshd[21437]: pam_unix(sshd:session): session closed for user root Jul 22 08:31:26 managed-node11 systemd[1]: session-12.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-12.scope has successfully entered the 'dead' state. Jul 22 08:31:26 managed-node11 systemd-logind[602]: Session 12 logged out. Waiting for processes to exit. Jul 22 08:31:26 managed-node11 systemd-logind[602]: Removed session 12. -- Subject: Session 12 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 12 has been terminated. Jul 22 08:31:30 managed-node11 platform-python[21602]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:31:32 managed-node11 sudo[21753]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pervwpftxqesdmgpgvzsektlfyftkhsd ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187491.874416-20514-243518420144745/AnsiballZ_setup.py' Jul 22 08:31:32 managed-node11 sudo[21753]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:32 managed-node11 platform-python[21756]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:31:32 managed-node11 sudo[21753]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:35 managed-node11 sudo[21907]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwbuoaidpptsuzkxswlyyaecnryrklhp ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187494.257182-20955-229664809279593/AnsiballZ_stat.py' Jul 22 08:31:35 managed-node11 sudo[21907]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:35 managed-node11 platform-python[21910]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:35 managed-node11 sudo[21907]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:38 managed-node11 sudo[22033]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gacdmaxiwrkcigoqnrkeyazymktoqhje ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187497.052526-21367-8304224834326/AnsiballZ_dnf.py' Jul 22 08:31:38 managed-node11 sudo[22033]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:38 managed-node11 platform-python[22036]: ansible-dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'vdo', 'kmod-kvdo', 'xfsprogs', 
'stratisd', 'stratis-cli', 'libblockdev'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:31:41 managed-node11 sudo[22033]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:43 managed-node11 sudo[22159]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhgxvltsbmmvbnvfiivfrwfrvohbgmum ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187502.211873-22075-268848314605493/AnsiballZ_blivet.py' Jul 22 08:31:43 managed-node11 sudo[22159]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:44 managed-node11 platform-python[22162]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=True safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:31:44 managed-node11 sudo[22159]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:45 managed-node11 sudo[22287]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osfsgntozxaqyhbmyvbolvnioqwapdyb ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187504.8973243-22298-167488311857627/AnsiballZ_dnf.py' Jul 22 08:31:45 managed-node11 sudo[22287]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:45 managed-node11 platform-python[22290]: ansible-dnf Invoked with name=['kpartx'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:31:48 managed-node11 sudo[22287]: pam_unix(sudo:session): session closed for user root 
Jul 22 08:31:49 managed-node11 sudo[22413]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhtypyfphowcpkwwonrjnuwhwjyhuocz ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187508.3957121-22758-223757364413594/AnsiballZ_service_facts.py' Jul 22 08:31:49 managed-node11 sudo[22413]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:49 managed-node11 platform-python[22416]: ansible-service_facts Invoked Jul 22 08:31:51 managed-node11 sudo[22413]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:52 managed-node11 sudo[22633]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejpwgsifdufgurthhxdoiaqxvtltivxd ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187511.8222656-23237-234172767792877/AnsiballZ_blivet.py' Jul 22 08:31:52 managed-node11 sudo[22633]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:52 managed-node11 platform-python[22636]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=True packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:31:52 managed-node11 sudo[22633]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:53 managed-node11 sudo[22761]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmmgqpibvvkhcwfvlsgfihyoznsqvmvv ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187513.290937-23358-216832064063735/AnsiballZ_stat.py' Jul 22 08:31:53 managed-node11 sudo[22761]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:53 managed-node11 platform-python[22764]: ansible-stat Invoked with path=/etc/fstab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:53 managed-node11 sudo[22761]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:56 managed-node11 sudo[22889]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhgshpqibdunjjlwsfolxsgsgdxsxvdm ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187515.834358-23645-55089544191904/AnsiballZ_stat.py' Jul 22 08:31:56 managed-node11 
sudo[22889]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:56 managed-node11 platform-python[22892]: ansible-stat Invoked with path=/etc/crypttab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:56 managed-node11 sudo[22889]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:57 managed-node11 sudo[23017]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzzzocznzktesddtfwyrsrlmbudyxszv ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187516.7406487-23766-9179818028532/AnsiballZ_setup.py' Jul 22 08:31:57 managed-node11 sudo[23017]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:57 managed-node11 platform-python[23020]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:31:57 managed-node11 sudo[23017]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:59 managed-node11 sudo[23171]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdhctwuzsfotojawdrppkorndmnegfjz ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187519.0400856-23976-44241112053764/AnsiballZ_dnf.py' Jul 22 08:31:59 managed-node11 sudo[23171]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:31:59 managed-node11 platform-python[23174]: ansible-dnf Invoked with name=['util-linux'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:32:02 managed-node11 sudo[23171]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:03 managed-node11 sudo[23297]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkojfqanyjwszjuwpnxuxlqxwcgknrez ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187522.3545222-24697-223040686694492/AnsiballZ_find_unused_disk.py' Jul 22 08:32:03 managed-node11 sudo[23297]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:03 managed-node11 platform-python[23300]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with max_return=3 with_interface=scsi match_sector_size=True min_size=0 max_size=0 Jul 22 08:32:03 managed-node11 sudo[23297]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:04 managed-node11 sshd[23323]: Accepted publickey for root from 10.31.12.181 port 57890 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:32:04 managed-node11 systemd-logind[602]: New session 13 of user root. -- Subject: A new session 13 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 13 has been created for the user root. -- -- The leading process of the session is 23323. Jul 22 08:32:04 managed-node11 systemd[1]: Started Session 13 of user root. 
-- Subject: Unit session-13.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-13.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:32:04 managed-node11 sshd[23323]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:32:04 managed-node11 sshd[23326]: Received disconnect from 10.31.12.181 port 57890:11: disconnected by user Jul 22 08:32:04 managed-node11 sshd[23326]: Disconnected from user root 10.31.12.181 port 57890 Jul 22 08:32:04 managed-node11 sshd[23323]: pam_unix(sshd:session): session closed for user root Jul 22 08:32:04 managed-node11 systemd[1]: session-13.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-13.scope has successfully entered the 'dead' state. Jul 22 08:32:04 managed-node11 systemd-logind[602]: Session 13 logged out. Waiting for processes to exit. Jul 22 08:32:04 managed-node11 systemd-logind[602]: Removed session 13. -- Subject: Session 13 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 13 has been terminated. Jul 22 08:32:04 managed-node11 sshd[23347]: Accepted publickey for root from 10.31.12.181 port 57904 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:32:04 managed-node11 systemd[1]: Started Session 14 of user root. -- Subject: Unit session-14.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-14.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:32:04 managed-node11 systemd-logind[602]: New session 14 of user root. -- Subject: A new session 14 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 14 has been created for the user root. -- -- The leading process of the session is 23347. Jul 22 08:32:04 managed-node11 sshd[23347]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:32:04 managed-node11 sshd[23350]: Received disconnect from 10.31.12.181 port 57904:11: disconnected by user Jul 22 08:32:04 managed-node11 sshd[23350]: Disconnected from user root 10.31.12.181 port 57904 Jul 22 08:32:04 managed-node11 sshd[23347]: pam_unix(sshd:session): session closed for user root Jul 22 08:32:04 managed-node11 systemd[1]: session-14.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-14.scope has successfully entered the 'dead' state. Jul 22 08:32:04 managed-node11 systemd-logind[602]: Session 14 logged out. Waiting for processes to exit. Jul 22 08:32:04 managed-node11 systemd-logind[602]: Removed session 14. -- Subject: Session 14 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 14 has been terminated. 
Jul 22 08:32:10 managed-node11 sudo[23512]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsrlszjxochqyfhaxcytjmhcwjzqkvor ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187529.416791-26085-119578749779490/AnsiballZ_setup.py' Jul 22 08:32:10 managed-node11 sudo[23512]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:10 managed-node11 platform-python[23515]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:32:10 managed-node11 sudo[23512]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:11 managed-node11 sudo[23666]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipzimgfkbiufejnxizlktejjxaxogyza ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187531.1765046-26258-96332613814639/AnsiballZ_stat.py' Jul 22 08:32:11 managed-node11 sudo[23666]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:11 managed-node11 platform-python[23669]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:32:11 managed-node11 sudo[23666]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:13 managed-node11 sudo[23792]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luqjvfkkbyulxtenkbquhknusfeqytns ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187532.7547367-26425-169654844430844/AnsiballZ_dnf.py' Jul 22 08:32:13 managed-node11 sudo[23792]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:13 managed-node11 platform-python[23795]: ansible-dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'vdo', 'kmod-kvdo', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:32:16 managed-node11 sudo[23792]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:18 managed-node11 sudo[23918]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xikboamwomkiyslevgocpowuxysokobc ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187537.0067272-26823-123969451444932/AnsiballZ_blivet.py' Jul 22 08:32:18 managed-node11 sudo[23918]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:18 managed-node11 platform-python[23921]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 
'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=True safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:32:18 managed-node11 sudo[23918]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:19 managed-node11 sudo[24046]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlchhscblkuqkfjmeszmtunxnxchqebv ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187539.4745188-27026-235899643345276/AnsiballZ_dnf.py' Jul 22 08:32:19 managed-node11 sudo[24046]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:20 managed-node11 platform-python[24049]: ansible-dnf Invoked with name=['kpartx'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:32:22 managed-node11 sudo[24046]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:23 managed-node11 sudo[24172]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yufabwcvgddeoacnetyeuzroauzoufnf ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187542.780219-27502-257009046164136/AnsiballZ_service_facts.py' Jul 22 08:32:23 managed-node11 sudo[24172]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:23 managed-node11 platform-python[24175]: ansible-service_facts Invoked Jul 22 08:32:25 managed-node11 sudo[24172]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:26 managed-node11 sudo[24392]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofqgfbtcuafrqifqulkyxlfhbucgygkh ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187546.2012963-27921-111636732865824/AnsiballZ_blivet.py' Jul 22 08:32:26 managed-node11 sudo[24392]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:26 managed-node11 platform-python[24395]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} 
volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=True packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:32:26 managed-node11 sudo[24392]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:27 managed-node11 sudo[24520]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezmnfbzpvnvxzmucugqjgxxgqszevpgy ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187547.3310242-28184-182685194497268/AnsiballZ_stat.py' Jul 22 08:32:27 managed-node11 sudo[24520]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:27 managed-node11 platform-python[24523]: ansible-stat Invoked with path=/etc/fstab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:32:27 managed-node11 sudo[24520]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:29 managed-node11 sudo[24648]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxsgfobhtmgeuklutkcxnugazamecpob ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187548.7647352-28490-19899238775783/AnsiballZ_stat.py' Jul 22 08:32:29 managed-node11 sudo[24648]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:29 managed-node11 platform-python[24651]: ansible-stat Invoked with path=/etc/crypttab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:32:29 managed-node11 sudo[24648]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:30 managed-node11 sudo[24776]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddtoksvtfddgrenxwkqdfrrtpqclxcta ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187549.63576-28639-119017345416735/AnsiballZ_setup.py' Jul 22 08:32:30 managed-node11 sudo[24776]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:30 managed-node11 platform-python[24779]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:32:30 managed-node11 sudo[24776]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:31 managed-node11 sudo[24930]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmrdepvwqdwwquvsfoqklqnolcltygxj ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187551.4992313-28841-172574020754871/AnsiballZ_dnf.py' Jul 22 08:32:31 managed-node11 sudo[24930]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:32 managed-node11 platform-python[24933]: ansible-dnf Invoked with name=['util-linux'] state=present 
allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:32:34 managed-node11 sudo[24930]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:35 managed-node11 sudo[25056]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcsvhheghntrgraxepiszndzzxxdirkw ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187554.7154427-29386-43666786157872/AnsiballZ_find_unused_disk.py' Jul 22 08:32:35 managed-node11 sudo[25056]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:35 managed-node11 platform-python[25059]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with max_return=2 match_sector_size=True min_size=0 max_size=0 with_interface=None Jul 22 08:32:35 managed-node11 sudo[25056]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:36 managed-node11 sshd[25082]: Accepted publickey for root from 10.31.12.181 port 43464 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:32:36 managed-node11 systemd-logind[602]: New session 15 of user root. -- Subject: A new session 15 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 15 has been created for the user root. -- -- The leading process of the session is 25082. Jul 22 08:32:36 managed-node11 systemd[1]: Started Session 15 of user root. -- Subject: Unit session-15.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-15.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:32:36 managed-node11 sshd[25082]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:32:36 managed-node11 sshd[25085]: Received disconnect from 10.31.12.181 port 43464:11: disconnected by user Jul 22 08:32:36 managed-node11 sshd[25085]: Disconnected from user root 10.31.12.181 port 43464 Jul 22 08:32:36 managed-node11 sshd[25082]: pam_unix(sshd:session): session closed for user root Jul 22 08:32:36 managed-node11 systemd[1]: session-15.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-15.scope has successfully entered the 'dead' state. Jul 22 08:32:36 managed-node11 systemd-logind[602]: Session 15 logged out. Waiting for processes to exit. Jul 22 08:32:36 managed-node11 systemd-logind[602]: Removed session 15. -- Subject: Session 15 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 15 has been terminated. Jul 22 08:32:37 managed-node11 sshd[25106]: Accepted publickey for root from 10.31.12.181 port 43474 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:32:37 managed-node11 systemd[1]: Started Session 16 of user root. 
-- Subject: Unit session-16.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-16.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:32:37 managed-node11 systemd-logind[602]: New session 16 of user root. -- Subject: A new session 16 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 16 has been created for the user root. -- -- The leading process of the session is 25106. Jul 22 08:32:37 managed-node11 sshd[25106]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:32:37 managed-node11 sshd[25109]: Received disconnect from 10.31.12.181 port 43474:11: disconnected by user Jul 22 08:32:37 managed-node11 sshd[25109]: Disconnected from user root 10.31.12.181 port 43474 Jul 22 08:32:37 managed-node11 sshd[25106]: pam_unix(sshd:session): session closed for user root Jul 22 08:32:37 managed-node11 systemd[1]: session-16.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-16.scope has successfully entered the 'dead' state. Jul 22 08:32:37 managed-node11 systemd-logind[602]: Session 16 logged out. Waiting for processes to exit. Jul 22 08:32:37 managed-node11 systemd-logind[602]: Removed session 16. -- Subject: Session 16 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 16 has been terminated. Jul 22 08:32:39 managed-node11 sshd[25130]: Accepted publickey for root from 10.31.12.181 port 43476 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:32:39 managed-node11 systemd[1]: Started Session 17 of user root. -- Subject: Unit session-17.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-17.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:32:39 managed-node11 systemd-logind[602]: New session 17 of user root. -- Subject: A new session 17 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 17 has been created for the user root. -- -- The leading process of the session is 25130. Jul 22 08:32:39 managed-node11 sshd[25130]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:32:39 managed-node11 sshd[25133]: Received disconnect from 10.31.12.181 port 43476:11: disconnected by user Jul 22 08:32:39 managed-node11 sshd[25133]: Disconnected from user root 10.31.12.181 port 43476 Jul 22 08:32:39 managed-node11 sshd[25130]: pam_unix(sshd:session): session closed for user root Jul 22 08:32:39 managed-node11 systemd[1]: session-17.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-17.scope has successfully entered the 'dead' state. Jul 22 08:32:39 managed-node11 systemd-logind[602]: Session 17 logged out. Waiting for processes to exit. Jul 22 08:32:39 managed-node11 systemd-logind[602]: Removed session 17. 
-- Subject: Session 17 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 17 has been terminated. Jul 22 08:32:45 managed-node11 platform-python[25295]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:32:46 managed-node11 sudo[25446]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjxarcsavrojyyjexvpbhfafnimznziq ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187566.3956413-31033-110641970945344/AnsiballZ_setup.py' Jul 22 08:32:46 managed-node11 sudo[25446]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:47 managed-node11 platform-python[25449]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:32:47 managed-node11 sudo[25446]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:50 managed-node11 sudo[25600]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clktyanwciuopjothaqrgbdfczgacifo ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187569.9884672-31511-195690546135290/AnsiballZ_stat.py' Jul 22 08:32:50 managed-node11 sudo[25600]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:51 managed-node11 platform-python[25603]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:32:51 managed-node11 sudo[25600]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:53 managed-node11 sudo[25726]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awlqvtcbusitsslebzgalmmoriwsntqx ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187572.7238727-31796-147980486627395/AnsiballZ_dnf.py' Jul 22 08:32:53 managed-node11 sudo[25726]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:54 managed-node11 platform-python[25729]: ansible-dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'vdo', 'kmod-kvdo', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:32:56 managed-node11 sudo[25726]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:58 managed-node11 sudo[25852]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrwsimqjygtaturbcdkijtcptjcscbmr ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187577.0158892-32530-142216366880508/AnsiballZ_blivet.py' Jul 22 08:32:58 managed-node11 sudo[25852]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:32:58 managed-node11 platform-python[25855]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 
'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=True safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:32:58 managed-node11 sudo[25852]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:00 managed-node11 sudo[25980]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aowvlaqfepciklxfswxawemqxohvwulr ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187579.631394-32791-107179007878438/AnsiballZ_dnf.py' Jul 22 08:33:00 managed-node11 sudo[25980]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:00 managed-node11 platform-python[25983]: ansible-dnf Invoked with name=['kpartx'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:33:02 managed-node11 sudo[25980]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:03 managed-node11 sudo[26106]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txpettqsyqzakgjfbqtsymkvdpdcypyj ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187582.9467072-33216-68528713852395/AnsiballZ_service_facts.py' Jul 22 08:33:03 managed-node11 sudo[26106]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:03 managed-node11 platform-python[26109]: ansible-service_facts Invoked Jul 22 08:33:05 managed-node11 sudo[26106]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:06 managed-node11 sudo[26326]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghpjvhbqwrrosabutlmuhaadpybcdvuk ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187585.6325228-33633-5885188976629/AnsiballZ_blivet.py' Jul 22 08:33:06 managed-node11 sudo[26326]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:06 managed-node11 platform-python[26329]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None 
pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=True packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:33:06 managed-node11 sudo[26326]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:07 managed-node11 sudo[26454]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtqaeowvekslfiftpbgirukppkcqhdqx ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187586.9231386-33788-230828483532752/AnsiballZ_stat.py' Jul 22 08:33:07 managed-node11 sudo[26454]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:07 managed-node11 platform-python[26457]: ansible-stat Invoked with path=/etc/fstab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:33:07 managed-node11 sudo[26454]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:09 managed-node11 sudo[26582]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjtgtrbjiuqijsdsmtjntplphkvxnwwk ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187589.4672937-34028-143732559055177/AnsiballZ_stat.py' Jul 22 08:33:09 managed-node11 sudo[26582]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:09 managed-node11 platform-python[26585]: ansible-stat Invoked with path=/etc/crypttab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:33:09 managed-node11 sudo[26582]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:10 managed-node11 sudo[26710]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kipqqhojslwdiuofhynqosnqroiuzkrq ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187590.341506-34178-82869983321475/AnsiballZ_setup.py' Jul 22 08:33:10 managed-node11 sudo[26710]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:11 managed-node11 platform-python[26713]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:33:11 managed-node11 sudo[26710]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:12 managed-node11 sudo[26864]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo 
BECOME-SUCCESS-wtlhoufvyguxuhthzaxvffrvjryscxgz ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187592.0991719-34379-135468582259686/AnsiballZ_dnf.py' Jul 22 08:33:12 managed-node11 sudo[26864]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:12 managed-node11 platform-python[26867]: ansible-dnf Invoked with name=['util-linux'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:33:14 managed-node11 sudo[26864]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:16 managed-node11 sudo[26990]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgyhdfvlrolritskqowvhzzksxiubama ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187595.256032-34895-86952645765041/AnsiballZ_find_unused_disk.py' Jul 22 08:33:16 managed-node11 sudo[26990]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:16 managed-node11 platform-python[26993]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 with_interface=scsi max_size=0 match_sector_size=False Jul 22 08:33:16 managed-node11 sudo[26990]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:18 managed-node11 sudo[27118]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-botbwzdxelxxqmlcboydwrwiujtaffor ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187596.7055423-34987-267936848190882/AnsiballZ_command.py' Jul 22 08:33:18 managed-node11 sudo[27118]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:18 managed-node11 platform-python[27121]: ansible-command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:33:18 managed-node11 sudo[27118]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:20 managed-node11 sshd[27144]: Accepted publickey for root from 10.31.12.181 port 48114 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:33:20 managed-node11 systemd-logind[602]: New session 18 of user root. -- Subject: A new session 18 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 18 has been created for the user root. -- -- The leading process of the session is 27144. Jul 22 08:33:20 managed-node11 systemd[1]: Started Session 18 of user root. -- Subject: Unit session-18.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-18.scope has finished starting up. -- -- The start-up result is done. 
Jul 22 08:33:20 managed-node11 sshd[27144]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:33:20 managed-node11 sshd[27147]: Received disconnect from 10.31.12.181 port 48114:11: disconnected by user Jul 22 08:33:20 managed-node11 sshd[27147]: Disconnected from user root 10.31.12.181 port 48114 Jul 22 08:33:20 managed-node11 sshd[27144]: pam_unix(sshd:session): session closed for user root Jul 22 08:33:20 managed-node11 systemd[1]: session-18.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-18.scope has successfully entered the 'dead' state. Jul 22 08:33:20 managed-node11 systemd-logind[602]: Session 18 logged out. Waiting for processes to exit. Jul 22 08:33:20 managed-node11 systemd-logind[602]: Removed session 18. -- Subject: Session 18 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 18 has been terminated. Jul 22 08:33:20 managed-node11 sshd[27168]: Accepted publickey for root from 10.31.12.181 port 48116 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:33:20 managed-node11 systemd[1]: Started Session 19 of user root. -- Subject: Unit session-19.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-19.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:33:20 managed-node11 systemd-logind[602]: New session 19 of user root. -- Subject: A new session 19 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 19 has been created for the user root. -- -- The leading process of the session is 27168. Jul 22 08:33:20 managed-node11 sshd[27168]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:33:20 managed-node11 sshd[27171]: Received disconnect from 10.31.12.181 port 48116:11: disconnected by user Jul 22 08:33:20 managed-node11 sshd[27171]: Disconnected from user root 10.31.12.181 port 48116 Jul 22 08:33:20 managed-node11 sshd[27168]: pam_unix(sshd:session): session closed for user root Jul 22 08:33:20 managed-node11 systemd[1]: session-19.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-19.scope has successfully entered the 'dead' state. Jul 22 08:33:20 managed-node11 systemd-logind[602]: Session 19 logged out. Waiting for processes to exit. Jul 22 08:33:20 managed-node11 systemd-logind[602]: Removed session 19. -- Subject: Session 19 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 19 has been terminated. 
Jul 22 08:33:22 managed-node11 sudo[27333]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufivwqztustyokuzgenxbcdurcpqzphl ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187602.0330656-35770-202725020063776/AnsiballZ_setup.py' Jul 22 08:33:22 managed-node11 sudo[27333]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:23 managed-node11 platform-python[27336]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:33:23 managed-node11 sudo[27333]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:26 managed-node11 sudo[27487]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqmhutjxoisvxrtrudybviqvsgamfqfd ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187604.9000778-36080-202842972570296/AnsiballZ_stat.py' Jul 22 08:33:26 managed-node11 sudo[27487]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:26 managed-node11 platform-python[27490]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:33:26 managed-node11 sudo[27487]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:28 managed-node11 sudo[27613]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spvbjsnvgnwhswhovjgnooofosnjfyut ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187607.822392-36381-240403542873792/AnsiballZ_dnf.py' Jul 22 08:33:28 managed-node11 sudo[27613]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:28 managed-node11 platform-python[27616]: ansible-dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'vdo', 'kmod-kvdo', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:33:31 managed-node11 sudo[27613]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:32 managed-node11 sudo[27739]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajktkktlucizzyccbvkvbasiapwhuvju ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187611.6580715-37170-19744510129820/AnsiballZ_blivet.py' Jul 22 08:33:32 managed-node11 sudo[27739]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:32 managed-node11 platform-python[27742]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 
'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=True safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:33:32 managed-node11 sudo[27739]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:34 managed-node11 sudo[27867]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjpqotecgbltvhvtzpmiulivqeqbsqle ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187613.7055752-37355-253896694796085/AnsiballZ_dnf.py' Jul 22 08:33:34 managed-node11 sudo[27867]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:34 managed-node11 platform-python[27870]: ansible-dnf Invoked with name=['kpartx'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:33:36 managed-node11 sudo[27867]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:37 managed-node11 sudo[27993]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdxyruicjjqlhzwdomtizttmbrrhxyqr ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187617.011613-37897-104752162928090/AnsiballZ_service_facts.py' Jul 22 08:33:37 managed-node11 sudo[27993]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:37 managed-node11 platform-python[27996]: ansible-service_facts Invoked Jul 22 08:33:39 managed-node11 sudo[27993]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:40 managed-node11 sudo[28213]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiewqgtuimgysdtqrxrsgfbhzfgngkco ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187619.6899374-38294-109487890257853/AnsiballZ_blivet.py' Jul 22 08:33:40 managed-node11 sudo[28213]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:40 managed-node11 platform-python[28216]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} 
volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=True packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:33:40 managed-node11 sudo[28213]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:41 managed-node11 sudo[28341]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frqaswxzkblehjmypkanenyzbnwujeix ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187620.6663997-38496-265609254604055/AnsiballZ_stat.py' Jul 22 08:33:41 managed-node11 sudo[28341]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:41 managed-node11 platform-python[28344]: ansible-stat Invoked with path=/etc/fstab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:33:41 managed-node11 sudo[28341]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:43 managed-node11 sudo[28469]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwqszjsynvomonkkdqgmsowspdqgjepn ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187623.1008155-38787-170839621124355/AnsiballZ_stat.py' Jul 22 08:33:43 managed-node11 sudo[28469]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:43 managed-node11 platform-python[28472]: ansible-stat Invoked with path=/etc/crypttab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:33:43 managed-node11 sudo[28469]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:44 managed-node11 sudo[28597]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpquoouemnvjmjboparldwgmmvfymqhm ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187624.0029192-38909-183891809804975/AnsiballZ_setup.py' Jul 22 08:33:44 managed-node11 sudo[28597]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:44 managed-node11 platform-python[28600]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:33:44 managed-node11 sudo[28597]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:46 managed-node11 sudo[28751]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlgaznztmnunnjfhyzhweeyswkrjzedl ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187625.5283751-39108-119086021127028/AnsiballZ_dnf.py' Jul 22 08:33:46 managed-node11 sudo[28751]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:46 managed-node11 platform-python[28754]: ansible-dnf Invoked with name=['util-linux'] state=present 
allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:33:48 managed-node11 sudo[28751]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:49 managed-node11 sudo[28877]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbmcdeqfkwqfqiwmhhioaturwmoanqqf ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187628.9716809-39615-40081685083369/AnsiballZ_find_unused_disk.py' Jul 22 08:33:49 managed-node11 sudo[28877]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:49 managed-node11 platform-python[28880]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=2 match_sector_size=True max_size=0 with_interface=None Jul 22 08:33:49 managed-node11 sudo[28877]: pam_unix(sudo:session): session closed for user root Jul 22 08:33:50 managed-node11 sshd[28903]: Accepted publickey for root from 10.31.12.181 port 37294 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:33:50 managed-node11 systemd-logind[602]: New session 20 of user root. -- Subject: A new session 20 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 20 has been created for the user root. -- -- The leading process of the session is 28903. Jul 22 08:33:50 managed-node11 systemd[1]: Started Session 20 of user root. -- Subject: Unit session-20.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-20.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:33:50 managed-node11 sshd[28903]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:33:50 managed-node11 sshd[28906]: Received disconnect from 10.31.12.181 port 37294:11: disconnected by user Jul 22 08:33:50 managed-node11 sshd[28906]: Disconnected from user root 10.31.12.181 port 37294 Jul 22 08:33:50 managed-node11 sshd[28903]: pam_unix(sshd:session): session closed for user root Jul 22 08:33:50 managed-node11 systemd[1]: session-20.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-20.scope has successfully entered the 'dead' state. Jul 22 08:33:50 managed-node11 systemd-logind[602]: Session 20 logged out. Waiting for processes to exit. Jul 22 08:33:50 managed-node11 systemd-logind[602]: Removed session 20. -- Subject: Session 20 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 20 has been terminated. Jul 22 08:33:51 managed-node11 sshd[28927]: Accepted publickey for root from 10.31.12.181 port 37306 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:33:51 managed-node11 systemd[1]: Started Session 21 of user root. 
-- Subject: Unit session-21.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-21.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:33:51 managed-node11 systemd-logind[602]: New session 21 of user root. -- Subject: A new session 21 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 21 has been created for the user root. -- -- The leading process of the session is 28927. Jul 22 08:33:51 managed-node11 sshd[28927]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:33:51 managed-node11 sshd[28930]: Received disconnect from 10.31.12.181 port 37306:11: disconnected by user Jul 22 08:33:51 managed-node11 sshd[28930]: Disconnected from user root 10.31.12.181 port 37306 Jul 22 08:33:51 managed-node11 sshd[28927]: pam_unix(sshd:session): session closed for user root Jul 22 08:33:51 managed-node11 systemd[1]: session-21.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-21.scope has successfully entered the 'dead' state. Jul 22 08:33:51 managed-node11 systemd-logind[602]: Session 21 logged out. Waiting for processes to exit. Jul 22 08:33:51 managed-node11 systemd-logind[602]: Removed session 21. -- Subject: Session 21 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 21 has been terminated. Jul 22 08:33:53 managed-node11 sshd[28951]: Accepted publickey for root from 10.31.12.181 port 37316 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:33:53 managed-node11 systemd[1]: Started Session 22 of user root. -- Subject: Unit session-22.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-22.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:33:53 managed-node11 systemd-logind[602]: New session 22 of user root. -- Subject: A new session 22 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 22 has been created for the user root. -- -- The leading process of the session is 28951. Jul 22 08:33:53 managed-node11 sshd[28951]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:33:53 managed-node11 sshd[28954]: Received disconnect from 10.31.12.181 port 37316:11: disconnected by user Jul 22 08:33:53 managed-node11 sshd[28954]: Disconnected from user root 10.31.12.181 port 37316 Jul 22 08:33:53 managed-node11 sshd[28951]: pam_unix(sshd:session): session closed for user root Jul 22 08:33:53 managed-node11 systemd[1]: session-22.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-22.scope has successfully entered the 'dead' state. Jul 22 08:33:53 managed-node11 systemd-logind[602]: Session 22 logged out. Waiting for processes to exit. Jul 22 08:33:53 managed-node11 systemd-logind[602]: Removed session 22. 
-- Subject: Session 22 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 22 has been terminated. Jul 22 08:33:56 managed-node11 platform-python[29116]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:33:57 managed-node11 sudo[29267]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-verarasedezhjiitfizyoyoozizuqicq ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187637.2906656-40782-239982761864193/AnsiballZ_setup.py' Jul 22 08:33:57 managed-node11 sudo[29267]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:33:57 managed-node11 platform-python[29270]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:33:58 managed-node11 sudo[29267]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:00 managed-node11 sudo[29421]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxexvqnqwdxbzhrlwndigagiykcqdhxt ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187639.1725426-40990-48941867014980/AnsiballZ_stat.py' Jul 22 08:34:00 managed-node11 sudo[29421]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:00 managed-node11 platform-python[29424]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:34:00 managed-node11 sudo[29421]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:02 managed-node11 sudo[29547]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-magrwasoghcirsfqefbfariwmfkxanyo ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187641.7125092-41301-87024448645394/AnsiballZ_dnf.py' Jul 22 08:34:02 managed-node11 sudo[29547]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:03 managed-node11 platform-python[29550]: ansible-dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'vdo', 'kmod-kvdo', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:34:05 managed-node11 sudo[29547]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:07 managed-node11 sudo[29673]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrnprckqfyygezyzaqngbuaknpfybbha ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187646.157847-41918-268931579839469/AnsiballZ_blivet.py' Jul 22 08:34:07 managed-node11 sudo[29673]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:07 managed-node11 platform-python[29676]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 
'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=True safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:34:07 managed-node11 sudo[29673]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:08 managed-node11 sudo[29801]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hukamykrfhyxlttzcfkspnbkkrulbwug ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187648.340653-42281-68192535685573/AnsiballZ_dnf.py' Jul 22 08:34:08 managed-node11 sudo[29801]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:08 managed-node11 platform-python[29804]: ansible-dnf Invoked with name=['kpartx'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:34:11 managed-node11 sudo[29801]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:12 managed-node11 sudo[29927]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwocvjjdcixmxviupceykturrvvotcmf ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187651.584859-43102-1671820918405/AnsiballZ_service_facts.py' Jul 22 08:34:12 managed-node11 sudo[29927]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:12 managed-node11 platform-python[29930]: ansible-service_facts Invoked Jul 22 08:34:13 managed-node11 sudo[29927]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:14 managed-node11 sudo[30147]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzztetudpgvyubdwpmcfngprzjgxmxme ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187654.3647246-43442-217080079302659/AnsiballZ_blivet.py' Jul 22 08:34:14 managed-node11 sudo[30147]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:15 managed-node11 platform-python[30150]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 
'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=True packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:34:15 managed-node11 sudo[30147]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:15 managed-node11 sudo[30275]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cowkvljvmeafssidqupmqndnliqfvvry ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187655.4070745-43638-66963897063566/AnsiballZ_stat.py' Jul 22 08:34:15 managed-node11 sudo[30275]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:15 managed-node11 platform-python[30278]: ansible-stat Invoked with path=/etc/fstab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:34:15 managed-node11 sudo[30275]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:17 managed-node11 sudo[30403]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvyfgnbbodilunmxdlcvmfbgjdgftnad ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187657.029141-43913-111374474491469/AnsiballZ_stat.py' Jul 22 08:34:17 managed-node11 sudo[30403]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:17 managed-node11 platform-python[30406]: ansible-stat Invoked with path=/etc/crypttab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:34:17 managed-node11 sudo[30403]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:17 managed-node11 sudo[30531]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmjtpnowiyyrskejkjagogoqbgsjfade ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187657.6055279-43981-80510721947806/AnsiballZ_setup.py' Jul 22 08:34:17 managed-node11 sudo[30531]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:18 managed-node11 platform-python[30534]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:34:18 managed-node11 sudo[30531]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:19 managed-node11 sudo[30685]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo 
BECOME-SUCCESS-iuucstmxxzrsjzqyhogpjruyipnbwwbq ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187658.7288861-44043-140387055311116/AnsiballZ_dnf.py' Jul 22 08:34:19 managed-node11 sudo[30685]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:19 managed-node11 platform-python[30688]: ansible-dnf Invoked with name=['util-linux'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:34:21 managed-node11 sudo[30685]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:22 managed-node11 sudo[30811]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvbvsyrrfgxbirttmobwcvunfsufakic ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187661.8611393-44441-11169140768171/AnsiballZ_find_unused_disk.py' Jul 22 08:34:22 managed-node11 sudo[30811]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:22 managed-node11 platform-python[30814]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=10g max_return=1 with_interface=scsi max_size=0 match_sector_size=False Jul 22 08:34:22 managed-node11 sudo[30811]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:23 managed-node11 sudo[30939]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oitjarsqbflmfcckdofwzsmidqfdrovv ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187662.777435-44630-197218903116176/AnsiballZ_command.py' Jul 22 08:34:23 managed-node11 sudo[30939]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:23 managed-node11 platform-python[30942]: ansible-command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:34:23 managed-node11 sudo[30939]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:25 managed-node11 sshd[30965]: Accepted publickey for root from 10.31.12.181 port 57774 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:34:25 managed-node11 systemd-logind[602]: New session 23 of user root. -- Subject: A new session 23 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 23 has been created for the user root. -- -- The leading process of the session is 30965. Jul 22 08:34:25 managed-node11 systemd[1]: Started Session 23 of user root. -- Subject: Unit session-23.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-23.scope has finished starting up. -- -- The start-up result is done. 
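The fedora.linux_system_roles.find_unused_disk invocation recorded just above (min_size=10g, max_return=1, with_interface=scsi, match_sector_size=False) is consistent with a test task roughly like the following sketch; the task name and the registered variable are illustrative assumptions, not taken from the test source:

- name: Find an unused 10 GiB SCSI disk (illustrative sketch)
  fedora.linux_system_roles.find_unused_disk:
    min_size: "10g"
    max_return: 1
    with_interface: scsi
    match_sector_size: false
  register: unused_disks_return  # variable name assumed for illustration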
Jul 22 08:34:25 managed-node11 sshd[30965]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:34:25 managed-node11 sshd[30968]: Received disconnect from 10.31.12.181 port 57774:11: disconnected by user Jul 22 08:34:25 managed-node11 sshd[30968]: Disconnected from user root 10.31.12.181 port 57774 Jul 22 08:34:25 managed-node11 sshd[30965]: pam_unix(sshd:session): session closed for user root Jul 22 08:34:25 managed-node11 systemd[1]: session-23.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-23.scope has successfully entered the 'dead' state. Jul 22 08:34:25 managed-node11 systemd-logind[602]: Session 23 logged out. Waiting for processes to exit. Jul 22 08:34:25 managed-node11 systemd-logind[602]: Removed session 23. -- Subject: Session 23 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 23 has been terminated. Jul 22 08:34:26 managed-node11 sshd[30989]: Accepted publickey for root from 10.31.12.181 port 57780 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:34:26 managed-node11 systemd[1]: Started Session 24 of user root. -- Subject: Unit session-24.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-24.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:34:26 managed-node11 systemd-logind[602]: New session 24 of user root. -- Subject: A new session 24 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 24 has been created for the user root. -- -- The leading process of the session is 30989. Jul 22 08:34:26 managed-node11 sshd[30989]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:34:26 managed-node11 sshd[30992]: Received disconnect from 10.31.12.181 port 57780:11: disconnected by user Jul 22 08:34:26 managed-node11 sshd[30992]: Disconnected from user root 10.31.12.181 port 57780 Jul 22 08:34:26 managed-node11 sshd[30989]: pam_unix(sshd:session): session closed for user root Jul 22 08:34:26 managed-node11 systemd[1]: session-24.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-24.scope has successfully entered the 'dead' state. Jul 22 08:34:26 managed-node11 systemd-logind[602]: Session 24 logged out. Waiting for processes to exit. Jul 22 08:34:26 managed-node11 systemd-logind[602]: Removed session 24. -- Subject: Session 24 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 24 has been terminated. Jul 22 08:34:28 managed-node11 sshd[31013]: Accepted publickey for root from 10.31.12.181 port 57782 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:34:28 managed-node11 systemd[1]: Started Session 25 of user root. -- Subject: Unit session-25.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-25.scope has finished starting up. -- -- The start-up result is done. 
Jul 22 08:34:28 managed-node11 systemd-logind[602]: New session 25 of user root. -- Subject: A new session 25 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 25 has been created for the user root. -- -- The leading process of the session is 31013. Jul 22 08:34:28 managed-node11 sshd[31013]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:34:28 managed-node11 sshd[31016]: Received disconnect from 10.31.12.181 port 57782:11: disconnected by user Jul 22 08:34:28 managed-node11 sshd[31016]: Disconnected from user root 10.31.12.181 port 57782 Jul 22 08:34:28 managed-node11 sshd[31013]: pam_unix(sshd:session): session closed for user root Jul 22 08:34:28 managed-node11 systemd[1]: session-25.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-25.scope has successfully entered the 'dead' state. Jul 22 08:34:28 managed-node11 systemd-logind[602]: Session 25 logged out. Waiting for processes to exit. Jul 22 08:34:28 managed-node11 systemd-logind[602]: Removed session 25. -- Subject: Session 25 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 25 has been terminated. Jul 22 08:34:30 managed-node11 platform-python[31178]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:34:31 managed-node11 sudo[31329]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naexuphhrfgykiznxvahpjamitptgnko ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187670.7884104-45889-198972992858840/AnsiballZ_setup.py' Jul 22 08:34:31 managed-node11 sudo[31329]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:31 managed-node11 platform-python[31332]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:34:31 managed-node11 sudo[31329]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:32 managed-node11 sudo[31483]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aklketurksxfaiyqoupvxaqwljbeyime ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187672.0400093-45954-106868721525521/AnsiballZ_stat.py' Jul 22 08:34:32 managed-node11 sudo[31483]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:32 managed-node11 platform-python[31486]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:34:32 managed-node11 sudo[31483]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:34 managed-node11 sudo[31609]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkebpkdyqyfvfmnpngqeqwptuphskdoj ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187673.4203525-46095-235370986837900/AnsiballZ_dnf.py' Jul 22 08:34:34 managed-node11 sudo[31609]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:34 managed-node11 platform-python[31612]: ansible-dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 
'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'vdo', 'kmod-kvdo', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:34:36 managed-node11 sudo[31609]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:37 managed-node11 sudo[31735]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rruijlappemtgkbpjupdqnsvjrfzeuyx ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187677.3175235-46603-55817453974029/AnsiballZ_blivet.py' Jul 22 08:34:37 managed-node11 sudo[31735]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:38 managed-node11 platform-python[31738]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=True safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:34:38 managed-node11 sudo[31735]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:39 managed-node11 sudo[31863]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oasjhlhnqysaukvnrydynnnlqbkxgswp ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187678.6649199-46890-60117662216455/AnsiballZ_dnf.py' Jul 22 08:34:39 managed-node11 sudo[31863]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:39 managed-node11 platform-python[31866]: ansible-dnf Invoked with name=['kpartx'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 
08:34:41 managed-node11 sudo[31863]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:42 managed-node11 sudo[31989]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdzpthnknrushcozfhzbbggcxrmgkirs ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187681.7219906-47197-130522047506301/AnsiballZ_service_facts.py' Jul 22 08:34:42 managed-node11 sudo[31989]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:42 managed-node11 platform-python[31992]: ansible-service_facts Invoked Jul 22 08:34:43 managed-node11 sudo[31989]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:44 managed-node11 sudo[32209]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbqngktmiuntwethtrtwokuopgtrwezk ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187684.3486395-47686-271646246805616/AnsiballZ_blivet.py' Jul 22 08:34:44 managed-node11 sudo[32209]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:44 managed-node11 platform-python[32212]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=True packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:34:44 managed-node11 sudo[32209]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:45 managed-node11 sudo[32337]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neifwdnqmkodiffjbizylqmflhtgkhpz ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187685.2749627-47866-212021261405714/AnsiballZ_stat.py' Jul 22 08:34:45 managed-node11 sudo[32337]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:45 managed-node11 platform-python[32340]: ansible-stat Invoked with path=/etc/fstab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:34:45 managed-node11 sudo[32337]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:46 managed-node11 sudo[32465]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abxqyqeuicjqspdfhdepfcqhfozwhofx ; /usr/libexec/platform-python 
/root/.ansible/tmp/ansible-tmp-1753187686.3980687-47993-118463063190187/AnsiballZ_stat.py' Jul 22 08:34:46 managed-node11 sudo[32465]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:46 managed-node11 platform-python[32468]: ansible-stat Invoked with path=/etc/crypttab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:34:46 managed-node11 sudo[32465]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:47 managed-node11 sudo[32593]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpgbnsbctbrhhzbbrseyatbvranhmdbr ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187687.326273-48097-18593699838396/AnsiballZ_setup.py' Jul 22 08:34:47 managed-node11 sudo[32593]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:47 managed-node11 platform-python[32596]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:34:48 managed-node11 sudo[32593]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:49 managed-node11 sudo[32747]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozbtlxxpgsxnlgtjoelneeuchbbfakad ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187689.0057292-48187-19250741530700/AnsiballZ_dnf.py' Jul 22 08:34:49 managed-node11 sudo[32747]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:49 managed-node11 platform-python[32750]: ansible-dnf Invoked with name=['util-linux'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:34:51 managed-node11 sudo[32747]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:52 managed-node11 sudo[32873]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jucbwgsynywxfuxvxqfssgloclouyxom ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187692.1041193-48763-139709129527746/AnsiballZ_find_unused_disk.py' Jul 22 08:34:52 managed-node11 sudo[32873]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:52 managed-node11 platform-python[32876]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with max_return=3 with_interface=scsi min_size=0 max_size=0 match_sector_size=False Jul 22 08:34:52 managed-node11 sudo[32873]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:53 managed-node11 sudo[33001]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrzdrfutlvtfavrxtrwznyurdvotgrns ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187693.0448284-48891-53863864735376/AnsiballZ_command.py' Jul 22 08:34:53 managed-node11 sudo[33001]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:53 managed-node11 platform-python[33004]: ansible-command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None 
executable=None creates=None removes=None stdin=None Jul 22 08:34:53 managed-node11 sudo[33001]: pam_unix(sudo:session): session closed for user root Jul 22 08:34:55 managed-node11 sshd[33027]: Accepted publickey for root from 10.31.12.181 port 49604 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:34:55 managed-node11 systemd-logind[602]: New session 26 of user root. -- Subject: A new session 26 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 26 has been created for the user root. -- -- The leading process of the session is 33027. Jul 22 08:34:55 managed-node11 systemd[1]: Started Session 26 of user root. -- Subject: Unit session-26.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-26.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:34:55 managed-node11 sshd[33027]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:34:55 managed-node11 sshd[33030]: Received disconnect from 10.31.12.181 port 49604:11: disconnected by user Jul 22 08:34:55 managed-node11 sshd[33030]: Disconnected from user root 10.31.12.181 port 49604 Jul 22 08:34:55 managed-node11 sshd[33027]: pam_unix(sshd:session): session closed for user root Jul 22 08:34:55 managed-node11 systemd[1]: session-26.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-26.scope has successfully entered the 'dead' state. Jul 22 08:34:55 managed-node11 systemd-logind[602]: Session 26 logged out. Waiting for processes to exit. Jul 22 08:34:55 managed-node11 systemd-logind[602]: Removed session 26. -- Subject: Session 26 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 26 has been terminated. Jul 22 08:34:55 managed-node11 sshd[33051]: Accepted publickey for root from 10.31.12.181 port 49616 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:34:55 managed-node11 systemd[1]: Started Session 27 of user root. -- Subject: Unit session-27.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-27.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:34:55 managed-node11 systemd-logind[602]: New session 27 of user root. -- Subject: A new session 27 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 27 has been created for the user root. -- -- The leading process of the session is 33051. Jul 22 08:34:55 managed-node11 sshd[33051]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:34:55 managed-node11 sshd[33054]: Received disconnect from 10.31.12.181 port 49616:11: disconnected by user Jul 22 08:34:55 managed-node11 sshd[33054]: Disconnected from user root 10.31.12.181 port 49616 Jul 22 08:34:55 managed-node11 sshd[33051]: pam_unix(sshd:session): session closed for user root Jul 22 08:34:55 managed-node11 systemd[1]: session-27.scope: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-27.scope has successfully entered the 'dead' state. Jul 22 08:34:55 managed-node11 systemd-logind[602]: Session 27 logged out. Waiting for processes to exit. Jul 22 08:34:55 managed-node11 systemd-logind[602]: Removed session 27. -- Subject: Session 27 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 27 has been terminated. Jul 22 08:34:58 managed-node11 platform-python[33216]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:34:59 managed-node11 sudo[33367]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hawpapjxwcsabdwasnrmhsrlookmhebz ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187698.9697537-49793-80284891252868/AnsiballZ_setup.py' Jul 22 08:34:59 managed-node11 sudo[33367]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:34:59 managed-node11 platform-python[33370]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:34:59 managed-node11 sudo[33367]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:01 managed-node11 sudo[33521]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkkygotmpohkxtmfllbvehticihijvdf ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187700.4241412-49924-128951828488894/AnsiballZ_stat.py' Jul 22 08:35:01 managed-node11 sudo[33521]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:01 managed-node11 platform-python[33524]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:35:01 managed-node11 sudo[33521]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:03 managed-node11 sudo[33647]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrtytwozfbmywqqfrvdmmuhfhxeauzpz ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187702.651606-50110-267088201171923/AnsiballZ_dnf.py' Jul 22 08:35:03 managed-node11 sudo[33647]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:04 managed-node11 platform-python[33650]: ansible-dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'vdo', 'kmod-kvdo', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:35:06 managed-node11 sudo[33647]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:07 managed-node11 sudo[33773]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmawjfxwwsqfamifbecqkdjbedpchpll ; /usr/libexec/platform-python 
/root/.ansible/tmp/ansible-tmp-1753187706.9571342-50844-189618535443806/AnsiballZ_blivet.py' Jul 22 08:35:07 managed-node11 sudo[33773]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:07 managed-node11 platform-python[33776]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=True safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:35:07 managed-node11 sudo[33773]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:09 managed-node11 sudo[33901]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycxlxghsvwzdjatmxzghicpipwxbmhce ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187708.707086-51028-147944877637278/AnsiballZ_dnf.py' Jul 22 08:35:09 managed-node11 sudo[33901]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:09 managed-node11 platform-python[33904]: ansible-dnf Invoked with name=['kpartx'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:35:11 managed-node11 sudo[33901]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:12 managed-node11 sudo[34027]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noptilravzlrudtexpjwoxmaebsthbsg ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187711.799982-51407-15893985040154/AnsiballZ_service_facts.py' Jul 22 08:35:12 managed-node11 sudo[34027]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:12 managed-node11 platform-python[34030]: ansible-service_facts Invoked Jul 22 08:35:13 managed-node11 sudo[34027]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:14 managed-node11 sudo[34247]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijytvotahruftgxkrmcuucmzekcfhvwn ; 
/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187714.20313-51813-211524304997206/AnsiballZ_blivet.py' Jul 22 08:35:14 managed-node11 sudo[34247]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:14 managed-node11 platform-python[34250]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=True packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:35:14 managed-node11 sudo[34247]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:15 managed-node11 sudo[34375]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzqqgotxukiwculsnewugcktudmokhcm ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187715.060817-51987-139055208687488/AnsiballZ_stat.py' Jul 22 08:35:15 managed-node11 sudo[34375]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:15 managed-node11 platform-python[34378]: ansible-stat Invoked with path=/etc/fstab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:35:15 managed-node11 sudo[34375]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:17 managed-node11 sudo[34503]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxfpzojztsvyeonwfyhwoqzdqizyvhgw ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187717.030464-52157-43928677376860/AnsiballZ_stat.py' Jul 22 08:35:17 managed-node11 sudo[34503]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:17 managed-node11 platform-python[34506]: ansible-stat Invoked with path=/etc/crypttab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:35:17 managed-node11 sudo[34503]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:18 managed-node11 sudo[34631]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utajgqzxytbojozcuuivmapcprpskvdk ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187717.9536536-52318-204018151479327/AnsiballZ_setup.py' Jul 22 08:35:18 managed-node11 sudo[34631]: pam_unix(sudo:session): session opened for 
user root by root(uid=0) Jul 22 08:35:18 managed-node11 platform-python[34634]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Jul 22 08:35:18 managed-node11 sudo[34631]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:19 managed-node11 sudo[34785]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rguwewogecczzuudsgpufdzlpaorucjy ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187719.3969467-52452-121899206037609/AnsiballZ_dnf.py' Jul 22 08:35:19 managed-node11 sudo[34785]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:19 managed-node11 platform-python[34788]: ansible-dnf Invoked with name=['util-linux'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Jul 22 08:35:22 managed-node11 sudo[34785]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:23 managed-node11 sudo[34911]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcuxsokduuxqggzgqktrknbyiskvngqc ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187722.5881538-53012-115811185432020/AnsiballZ_find_unused_disk.py' Jul 22 08:35:23 managed-node11 sudo[34911]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:23 managed-node11 platform-python[34914]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with max_return=1 with_interface=scsi min_size=0 max_size=0 match_sector_size=False Jul 22 08:35:23 managed-node11 sudo[34911]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:24 managed-node11 sudo[35039]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brkwtkluuawuqinwjdhxvefmrlmgnfkm ; /usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1753187723.9832623-53174-226239376619298/AnsiballZ_command.py' Jul 22 08:35:24 managed-node11 sudo[35039]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:35:24 managed-node11 platform-python[35042]: ansible-command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:35:24 managed-node11 sudo[35039]: pam_unix(sudo:session): session closed for user root Jul 22 08:35:25 managed-node11 sshd[35065]: Accepted publickey for root from 10.31.12.181 port 51006 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:35:25 managed-node11 systemd[1]: Started Session 28 of user root. -- Subject: Unit session-28.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-28.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:35:25 managed-node11 systemd-logind[602]: New session 28 of user root. 
-- Subject: A new session 28 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 28 has been created for the user root. -- -- The leading process of the session is 35065. Jul 22 08:35:25 managed-node11 sshd[35065]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:35:26 managed-node11 sshd[35068]: Received disconnect from 10.31.12.181 port 51006:11: disconnected by user Jul 22 08:35:26 managed-node11 sshd[35068]: Disconnected from user root 10.31.12.181 port 51006 Jul 22 08:35:26 managed-node11 sshd[35065]: pam_unix(sshd:session): session closed for user root Jul 22 08:35:26 managed-node11 systemd[1]: session-28.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-28.scope has successfully entered the 'dead' state. Jul 22 08:35:26 managed-node11 systemd-logind[602]: Session 28 logged out. Waiting for processes to exit. Jul 22 08:35:26 managed-node11 systemd-logind[602]: Removed session 28. -- Subject: Session 28 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 28 has been terminated. Jul 22 08:35:26 managed-node11 sshd[35089]: Accepted publickey for root from 10.31.12.181 port 51022 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:35:26 managed-node11 systemd[1]: Started Session 29 of user root. -- Subject: Unit session-29.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-29.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:35:26 managed-node11 systemd-logind[602]: New session 29 of user root. -- Subject: A new session 29 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 29 has been created for the user root. -- -- The leading process of the session is 35089. Jul 22 08:35:26 managed-node11 sshd[35089]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:35:26 managed-node11 sshd[35092]: Received disconnect from 10.31.12.181 port 51022:11: disconnected by user Jul 22 08:35:26 managed-node11 sshd[35092]: Disconnected from user root 10.31.12.181 port 51022 Jul 22 08:35:26 managed-node11 sshd[35089]: pam_unix(sshd:session): session closed for user root Jul 22 08:35:26 managed-node11 systemd[1]: session-29.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-29.scope has successfully entered the 'dead' state. Jul 22 08:35:26 managed-node11 systemd-logind[602]: Session 29 logged out. Waiting for processes to exit. Jul 22 08:35:26 managed-node11 systemd-logind[602]: Removed session 29. -- Subject: Session 29 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 29 has been terminated. 
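The recurring ansible-command entries in this journal with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex and _uses_shell=True are a diagnostic step; the journal strips the newlines, so the underlying shell snippet was presumably several separate commands, roughly as in this hedged sketch (task name assumed):

- name: Dump block device layout and recent journal for debugging (sketch)
  ansible.builtin.shell: |
    set -x
    exec 1>&2   # redirect stdout to stderr for the rest of the script
    lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
    journalctl -ex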
Jul 22 08:35:28 managed-node11 platform-python[35254]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d
Jul 22 08:35:30 managed-node11 platform-python[35405]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:35:33 managed-node11 platform-python[35528]: ansible-dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'vdo', 'kmod-kvdo', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jul 22 08:35:36 managed-node11 platform-python[35651]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=True safe_mode=True diskvolume_mkfs_option_map={}
Jul 22 08:35:38 managed-node11 platform-python[35776]: ansible-dnf Invoked with name=['kpartx'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jul 22 08:35:41 managed-node11 platform-python[35899]: ansible-service_facts Invoked
Jul 22 08:35:43 managed-node11 platform-python[36116]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=True packages_only=False diskvolume_mkfs_option_map={}
Jul 22 08:35:44 managed-node11 platform-python[36241]: ansible-stat Invoked with path=/etc/fstab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:35:45 managed-node11 platform-python[36366]: ansible-stat Invoked with path=/etc/crypttab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:35:46 managed-node11 platform-python[36491]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d
Jul 22 08:35:47 managed-node11 platform-python[36642]: ansible-dnf Invoked with name=['util-linux'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jul 22 08:35:50 managed-node11 platform-python[36765]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_size=0 max_return=1 match_sector_size=False with_interface=None
Jul 22 08:35:50 managed-node11 platform-python[36890]: ansible-command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None

TASK [Set unused_disks if necessary] *******************************************
task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:29
Tuesday 22 July 2025 08:35:50 -0400 (0:00:00.663) 0:00:23.481 **********
skipping: [managed-node11] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [Exit playbook when there's not enough unused disks in the system] ********
task path: /tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:34
Tuesday 22 July 2025 08:35:50 -0400 (0:00:00.034) 0:00:23.515 **********
fatal: [managed-node11]: FAILED! => {
    "changed": false
}

MSG:

Unable to find enough unused disks. Exiting playbook.
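The failure above originates in the shared helper tests/storage/get_unused_disk.yml: the test asks for one unused disk of at least 5 GiB (min_size=5g, max_return=1, as recorded in the journal) and aborts when none is found. A minimal sketch of that task flow is shown below; the register and variable names and the exact conditions are illustrative assumptions, while the module name, parameters, task names, and failure message come straight from the log:

```yaml
# Sketch of the disk-discovery flow in get_unused_disk.yml (assumed layout;
# module parameters and messages are as recorded in the journal above).
- name: Find unused disks in the system
  fedora.linux_system_roles.find_unused_disk:
    min_size: "5g"
    max_return: 1
  register: unused_disk_return          # hypothetical register name

- name: Set unused_disks if necessary
  set_fact:
    unused_disks: "{{ unused_disk_return.disks | d([]) }}"   # assumed return key

- name: Exit playbook when there's not enough unused disks in the system
  fail:
    msg: "Unable to find enough unused disks. Exiting playbook."
  when: unused_disks | length < 1
```

With nothing on managed-node11 matching the 5 GiB minimum, the fail task fires and the play ends with failed=1, as the recap below shows.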
PLAY RECAP *********************************************************************
managed-node11 : ok=28 changed=0 unreachable=0 failed=1 skipped=15 rescued=0 ignored=0

SYSTEM ROLES ERRORS BEGIN v1
[
    {
        "ansible_version": "2.9.27",
        "end_time": "2025-07-22T12:35:50.841802+00:00Z",
        "host": "managed-node11",
        "message": "Unable to find enough unused disks. Exiting playbook.",
        "start_time": "2025-07-22T12:35:50.823587+00:00Z",
        "task_name": "Exit playbook when there's not enough unused disks in the system",
        "task_path": "/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:34"
    }
]
SYSTEM ROLES ERRORS END v1

TASKS RECAP ********************************************************************
Tuesday 22 July 2025 08:35:50 -0400 (0:00:00.028) 0:00:23.543 **********
===============================================================================
fedora.linux_system_roles.storage : Make sure blivet is available ------- 3.76s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
fedora.linux_system_roles.storage : Make sure required packages are installed --- 3.26s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38
Ensure test packages ---------------------------------------------------- 2.91s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:2
fedora.linux_system_roles.storage : Get service facts ------------------- 1.97s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
fedora.linux_system_roles.storage : Get required packages --------------- 1.40s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19
Gathering Facts --------------------------------------------------------- 1.30s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:2
fedora.linux_system_roles.storage : Check if system is ostree ----------- 1.16s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state --- 0.84s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70
fedora.linux_system_roles.storage : Check if /etc/fstab is present ------ 0.73s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92
fedora.linux_system_roles.storage : Update facts ------------------------ 0.73s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224
Debug why there are no unused disks ------------------------------------- 0.66s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:20
Find unused disks in the system ----------------------------------------- 0.57s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:11
fedora.linux_system_roles.storage : Include the appropriate provider tasks --- 0.57s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file --- 0.43s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197
fedora.linux_system_roles.storage : Set flag to indicate system is ostree --- 0.30s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30
fedora.linux_system_roles.storage : Set platform/version specific variables --- 0.29s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
fedora.linux_system_roles.storage : Enable copr repositories if needed --- 0.26s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32
fedora.linux_system_roles.storage : Set platform/version specific variables --- 0.23s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2
fedora.linux_system_roles.storage : Ensure ansible_facts used by role --- 0.21s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
Include role to ensure packages are installed --------------------------- 0.16s
/tmp/collections-bHg/ansible_collections/fedora/linux_system_roles/tests/storage/tests_swap.yml:10

-- Logs begin at Tue 2025-07-22 08:24:28 EDT, end at Tue 2025-07-22 08:35:51 EDT. --
Jul 22 08:35:26 managed-node11 sshd[35068]: Received disconnect from 10.31.12.181 port 51006:11: disconnected by user
Jul 22 08:35:26 managed-node11 sshd[35068]: Disconnected from user root 10.31.12.181 port 51006
Jul 22 08:35:26 managed-node11 sshd[35065]: pam_unix(sshd:session): session closed for user root
Jul 22 08:35:26 managed-node11 systemd[1]: session-28.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit session-28.scope has successfully entered the 'dead' state.
Jul 22 08:35:26 managed-node11 systemd-logind[602]: Session 28 logged out. Waiting for processes to exit.
Jul 22 08:35:26 managed-node11 systemd-logind[602]: Removed session 28.
-- Subject: Session 28 has been terminated
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A session with the ID 28 has been terminated.
Jul 22 08:35:26 managed-node11 sshd[35089]: Accepted publickey for root from 10.31.12.181 port 51022 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:35:26 managed-node11 systemd[1]: Started Session 29 of user root.
-- Subject: Unit session-29.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit session-29.scope has finished starting up.
--
-- The start-up result is done.
Jul 22 08:35:26 managed-node11 systemd-logind[602]: New session 29 of user root.
-- Subject: A new session 29 has been created for user root
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID 29 has been created for the user root.
--
-- The leading process of the session is 35089.
Jul 22 08:35:26 managed-node11 sshd[35089]: pam_unix(sshd:session): session opened for user root by (uid=0)
Jul 22 08:35:26 managed-node11 sshd[35092]: Received disconnect from 10.31.12.181 port 51022:11: disconnected by user
Jul 22 08:35:26 managed-node11 sshd[35092]: Disconnected from user root 10.31.12.181 port 51022
Jul 22 08:35:26 managed-node11 sshd[35089]: pam_unix(sshd:session): session closed for user root
Jul 22 08:35:26 managed-node11 systemd[1]: session-29.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit session-29.scope has successfully entered the 'dead' state.
Jul 22 08:35:26 managed-node11 systemd-logind[602]: Session 29 logged out. Waiting for processes to exit.
Jul 22 08:35:26 managed-node11 systemd-logind[602]: Removed session 29.
-- Subject: Session 29 has been terminated
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A session with the ID 29 has been terminated.
Jul 22 08:35:28 managed-node11 platform-python[35254]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d
Jul 22 08:35:30 managed-node11 platform-python[35405]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:35:33 managed-node11 platform-python[35528]: ansible-dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'vdo', 'kmod-kvdo', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jul 22 08:35:36 managed-node11 platform-python[35651]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=True safe_mode=True diskvolume_mkfs_option_map={}
Jul 22 08:35:38 managed-node11 platform-python[35776]: ansible-dnf Invoked with name=['kpartx'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jul 22 08:35:41 managed-node11 platform-python[35899]: ansible-service_facts Invoked
Jul 22 08:35:43 managed-node11 platform-python[36116]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=True packages_only=False diskvolume_mkfs_option_map={}
Jul 22 08:35:44 managed-node11 platform-python[36241]: ansible-stat Invoked with path=/etc/fstab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:35:45 managed-node11 platform-python[36366]: ansible-stat Invoked with path=/etc/crypttab follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:35:46 managed-node11 platform-python[36491]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d
Jul 22 08:35:47 managed-node11 platform-python[36642]: ansible-dnf Invoked with name=['util-linux'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jul 22 08:35:50 managed-node11 platform-python[36765]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_size=0 max_return=1 match_sector_size=False with_interface=None
Jul 22 08:35:50 managed-node11 platform-python[36890]: ansible-command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 22 08:35:51 managed-node11 sshd[36913]: Accepted publickey for root from 10.31.12.181 port 55426 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:35:51 managed-node11 systemd-logind[602]: New session 30 of user root.
-- Subject: A new session 30 has been created for user root
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID 30 has been created for the user root.
--
-- The leading process of the session is 36913.
Jul 22 08:35:51 managed-node11 systemd[1]: Started Session 30 of user root.
-- Subject: Unit session-30.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit session-30.scope has finished starting up.
--
-- The start-up result is done.
Jul 22 08:35:51 managed-node11 sshd[36913]: pam_unix(sshd:session): session opened for user root by (uid=0)
Jul 22 08:35:51 managed-node11 sshd[36916]: Received disconnect from 10.31.12.181 port 55426:11: disconnected by user
Jul 22 08:35:51 managed-node11 sshd[36916]: Disconnected from user root 10.31.12.181 port 55426
Jul 22 08:35:51 managed-node11 sshd[36913]: pam_unix(sshd:session): session closed for user root
Jul 22 08:35:51 managed-node11 systemd[1]: session-30.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit session-30.scope has successfully entered the 'dead' state.
Jul 22 08:35:51 managed-node11 systemd-logind[602]: Session 30 logged out. Waiting for processes to exit.
Jul 22 08:35:51 managed-node11 systemd-logind[602]: Removed session 30.
-- Subject: Session 30 has been terminated
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A session with the ID 30 has been terminated.
Jul 22 08:35:51 managed-node11 sshd[36937]: Accepted publickey for root from 10.31.12.181 port 55436 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:35:51 managed-node11 systemd[1]: Started Session 31 of user root.
-- Subject: Unit session-31.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit session-31.scope has finished starting up.
--
-- The start-up result is done.
Jul 22 08:35:51 managed-node11 systemd-logind[602]: New session 31 of user root.
-- Subject: A new session 31 has been created for user root
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID 31 has been created for the user root.
--
-- The leading process of the session is 36937.
Jul 22 08:35:51 managed-node11 sshd[36937]: pam_unix(sshd:session): session opened for user root by (uid=0)
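The journal excerpt above is what the "Debug why there are no unused disks" step (get_unused_disk.yml:20 in the recap) prints when disk discovery comes up empty. Reconstructed from the ansible-command parameters recorded in the journal, that step corresponds roughly to the shell task below; the task wrapper is an assumption, while the commands are as logged:

```yaml
# Sketch only: the task wrapper is assumed; the command text matches the
# _raw_params recorded by ansible-command in the journal.
- name: Debug why there are no unused disks
  shell: |
    set -x
    exec 1>&2
    lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
    journalctl -ex
  changed_when: false   # assumption: diagnostic only, makes no changes
```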