ansible-playbook [core 2.17.13]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /tmp/collections-nSC
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.12.11 (main, Jun 4 2025, 00:00:00) [GCC 14.2.1 20250110 (Red Hat 14.2.1-8)] (/usr/bin/python3.12)
  jinja version = 3.1.6
  libyaml = True
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_misc.yml *******************************************************
1 plays in /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_misc.yml

PLAY [Test misc features of the storage role] **********************************

TASK [Gathering Facts] *********************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_misc.yml:2
Tuesday 22 July 2025 08:32:36 -0400 (0:00:00.031) 0:00:00.031 **********
[WARNING]: Platform linux on host managed-node1 is using the discovered Python
interpreter at /usr/bin/python3.12, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html
for more information.
ok: [managed-node1] TASK [Include the role to ensure packages are installed] *********************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_misc.yml:15 Tuesday 22 July 2025 08:32:37 -0400 (0:00:01.671) 0:00:01.703 ********** included: fedora.linux_system_roles.storage for managed-node1 TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2 Tuesday 22 July 2025 08:32:37 -0400 (0:00:00.142) 0:00:01.846 ********** included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed-node1 TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2 Tuesday 22 July 2025 08:32:37 -0400 (0:00:00.070) 0:00:01.916 ********** skipping: [managed-node1] => { "changed": false, "false_condition": "__storage_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7 Tuesday 22 July 2025 08:32:38 -0400 (0:00:00.192) 0:00:02.109 ********** skipping: [managed-node1] => (item=RedHat.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "RedHat.yml", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False" } ok: [managed-node1] => (item=CentOS_10.yml) => { "ansible_facts": { "blivet_package_list": [ "python3-blivet", "libblockdev-crypto", "libblockdev-dm", "libblockdev-fs", "libblockdev-lvm", "libblockdev-mdraid", "libblockdev-swap", "xfsprogs", "stratisd", "stratis-cli", "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}", "vdo" ] }, "ansible_included_var_files": [ "/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_10.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_10.yml" } ok: [managed-node1] => (item=CentOS_10.yml) => { "ansible_facts": { "blivet_package_list": [ "python3-blivet", "libblockdev-crypto", "libblockdev-dm", "libblockdev-fs", "libblockdev-lvm", "libblockdev-mdraid", "libblockdev-swap", "xfsprogs", "stratisd", "stratis-cli", "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}", "vdo" ] }, "ansible_included_var_files": [ "/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_10.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_10.yml" } TASK [fedora.linux_system_roles.storage : Check if system is ostree] *********** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25 Tuesday 22 July 2025 08:32:38 -0400 (0:00:00.243) 0:00:02.352 ********** ok: [managed-node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] *** task path: 
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30 Tuesday 22 July 2025 08:32:39 -0400 (0:00:01.289) 0:00:03.642 ********** ok: [managed-node1] => { "ansible_facts": { "__storage_is_ostree": false }, "changed": false } TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:5 Tuesday 22 July 2025 08:32:39 -0400 (0:00:00.080) 0:00:03.722 ********** ok: [managed-node1] => { "ansible_facts": { "_storage_pools_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:9 Tuesday 22 July 2025 08:32:39 -0400 (0:00:00.049) 0:00:03.772 ********** ok: [managed-node1] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13 Tuesday 22 July 2025 08:32:39 -0400 (0:00:00.024) 0:00:03.796 ********** redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed-node1 TASK [fedora.linux_system_roles.storage : Make sure blivet is available] ******* task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2 Tuesday 22 July 2025 08:32:39 -0400 (0:00:00.168) 0:00:03.964 ********** ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do TASK [fedora.linux_system_roles.storage : Show storage_pools] ****************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:9 Tuesday 22 July 2025 08:32:41 -0400 (0:00:01.824) 0:00:05.789 ********** ok: [managed-node1] => { "storage_pools | d([])": [] } TASK [fedora.linux_system_roles.storage : Show storage_volumes] **************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14 Tuesday 22 July 2025 08:32:41 -0400 (0:00:00.206) 0:00:05.996 ********** ok: [managed-node1] => { "storage_volumes | d([])": [] } TASK [fedora.linux_system_roles.storage : Get required packages] *************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19 Tuesday 22 July 2025 08:32:42 -0400 (0:00:00.193) 0:00:06.189 ********** [WARNING]: Module invocation had junk after the JSON data: sys:1: DeprecationWarning: builtin type swigvarlink has no __module__ attribute ok: [managed-node1] => { "actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } TASK [fedora.linux_system_roles.storage : Enable copr repositories if needed] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32 Tuesday 22 July 2025 08:32:43 -0400 (0:00:01.341) 0:00:07.531 ********** included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml for managed-node1 TASK 
[fedora.linux_system_roles.storage : Check if the COPR support packages should be installed] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:2 Tuesday 22 July 2025 08:32:43 -0400 (0:00:00.182) 0:00:07.713 ********** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Make sure COPR support packages are present] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:13 Tuesday 22 July 2025 08:32:43 -0400 (0:00:00.139) 0:00:07.852 ********** skipping: [managed-node1] => { "changed": false, "false_condition": "install_copr | d(false) | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Enable COPRs] ************************ task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:19 Tuesday 22 July 2025 08:32:43 -0400 (0:00:00.139) 0:00:07.992 ********** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Make sure required packages are installed] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38 Tuesday 22 July 2025 08:32:44 -0400 (0:00:00.158) 0:00:08.151 ********** ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do TASK [fedora.linux_system_roles.storage : Get service facts] ******************* task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52 Tuesday 22 July 2025 08:32:45 -0400 (0:00:01.036) 0:00:09.188 ********** ok: [managed-node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "apt-daily.service": { "name": "apt-daily.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blivet.service": { "name": "blivet.service", "source": "systemd", "state": "inactive", "status": "static" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": 
"systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, 
"dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "fcoe.service": { "name": "fcoe.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "fips-crypto-policy-overlay.service": { "name": "fips-crypto-policy-overlay.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "iscsi-shutdown.service": { "name": "iscsi-shutdown.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsi.service": { "name": "iscsi.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsid.service": { "name": "iscsid.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", 
"source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm-devices-import.service": { "name": "lvm-devices-import.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "lvm2-activation-early.service": { "name": "lvm2-activation-early.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "mdadm-grow-continue@.service": { "name": "mdadm-grow-continue@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdadm-last-resort@.service": { "name": "mdadm-last-resort@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdcheck_continue.service": { "name": "mdcheck_continue.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdcheck_start.service": { "name": "mdcheck_start.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmon@.service": { "name": "mdmon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdmonitor-oneshot.service": { "name": "mdmonitor-oneshot.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmonitor.service": { "name": "mdmonitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_multipath.service": { "name": "modprobe@dm_multipath.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "multipathd.service": { "name": "multipathd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": 
"systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "pcscd.service": { "name": "pcscd.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "inactive", "status": "static" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "raid-check.service": { "name": "raid-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "rbdmap.service": { "name": "rbdmap.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", 
"state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-unix-local@.service": { "name": "sshd-unix-local@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd-vsock@.service": { "name": "sshd-vsock@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "stratis-fstab-setup-with-network@.service": { "name": "stratis-fstab-setup-with-network@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratis-fstab-setup@.service": { "name": "stratis-fstab-setup@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratisd-min-postinitrd.service": { "name": "stratisd-min-postinitrd.service", "source": "systemd", "state": "inactive", "status": "static" }, "stratisd.service": { "name": "stratisd.service", "source": "systemd", "state": 
"stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { 
"name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": 
"systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", 
"state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-userdbd.service": { "name": "systemd-userdbd.service", "source": "systemd", "state": "running", "status": "indirect" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.storage : Set storage_cryptsetup_services] ***** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58 Tuesday 22 July 2025 08:32:48 -0400 (0:00:03.046) 0:00:12.236 ********** ok: [managed-node1] => { "ansible_facts": { "storage_cryptsetup_services": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Mask the systemd cryptsetup services] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:64 Tuesday 22 July 2025 08:32:48 -0400 (0:00:00.298) 0:00:12.535 ********** 
skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70 Tuesday 22 July 2025 08:32:48 -0400 (0:00:00.120) 0:00:12.655 ********** ok: [managed-node1] => { "actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } TASK [fedora.linux_system_roles.storage : Workaround for udev issue on some platforms] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:85 Tuesday 22 July 2025 08:32:49 -0400 (0:00:01.031) 0:00:13.686 ********** skipping: [managed-node1] => { "changed": false, "false_condition": "storage_udevadm_trigger | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Check if /etc/fstab is present] ****** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92 Tuesday 22 July 2025 08:32:49 -0400 (0:00:00.179) 0:00:13.866 ********** ok: [managed-node1] => { "changed": false, "stat": { "atime": 1753187118.1160562, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "016bd7ce6cb6b233647ba6b5c21ac99bb7146610", "ctime": 1750750281.8033595, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 4194435, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1750750281.8033595, "nlink": 1, "path": "/etc/fstab", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 1344, "uid": 0, "version": "3162749339", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Add fingerprint to /etc/fstab if present] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:97 Tuesday 22 July 2025 08:32:50 -0400 (0:00:00.688) 0:00:14.554 ********** skipping: [managed-node1] => { "changed": false, "false_condition": "blivet_output is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Unmask the systemd cryptsetup services] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:115 Tuesday 22 July 2025 08:32:50 -0400 (0:00:00.209) 0:00:14.764 ********** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Show blivet_output] ****************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:121 Tuesday 22 July 2025 08:32:50 -0400 (0:00:00.156) 0:00:14.920 ********** ok: [managed-node1] => { "blivet_output": { "actions": [], "changed": false, "crypts": [], "failed": false, "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } } TASK [fedora.linux_system_roles.storage : Set the list of pools for test verification] *** task path: 
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:130 Tuesday 22 July 2025 08:32:51 -0400 (0:00:00.156) 0:00:15.076 ********** ok: [managed-node1] => { "ansible_facts": { "_storage_pools_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Set the list of volumes for test verification] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:134 Tuesday 22 July 2025 08:32:51 -0400 (0:00:00.149) 0:00:15.226 ********** ok: [managed-node1] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Remove obsolete mounts] ************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:150 Tuesday 22 July 2025 08:32:51 -0400 (0:00:00.120) 0:00:15.347 ********** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:161 Tuesday 22 July 2025 08:32:51 -0400 (0:00:00.190) 0:00:15.537 ********** skipping: [managed-node1] => { "changed": false, "false_condition": "blivet_output['mounts'] | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Set up new/current mounts] *********** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:166 Tuesday 22 July 2025 08:32:51 -0400 (0:00:00.058) 0:00:15.595 ********** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage mount ownership/permissions] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:177 Tuesday 22 July 2025 08:32:51 -0400 (0:00:00.083) 0:00:15.678 ********** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:189 Tuesday 22 July 2025 08:32:51 -0400 (0:00:00.131) 0:00:15.810 ********** skipping: [managed-node1] => { "changed": false, "false_condition": "blivet_output['mounts'] | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197 Tuesday 22 July 2025 08:32:52 -0400 (0:00:00.209) 0:00:16.019 ********** ok: [managed-node1] => { "changed": false, "stat": { "atime": 1753187312.818527, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1750749389.405, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 4194436, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0600", "mtime": 1750749068.122, "nlink": 1, "path": "/etc/crypttab", "pw_name": "root", 
"readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 0, "uid": 0, "version": "1830666913", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Manage /etc/crypttab to account for changes we just made] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:202 Tuesday 22 July 2025 08:32:52 -0400 (0:00:00.659) 0:00:16.679 ********** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Update facts] ************************ task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224 Tuesday 22 July 2025 08:32:52 -0400 (0:00:00.066) 0:00:16.745 ********** ok: [managed-node1] TASK [Mark tasks to be skipped] ************************************************ task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_misc.yml:19 Tuesday 22 July 2025 08:32:53 -0400 (0:00:01.256) 0:00:18.001 ********** ok: [managed-node1] => { "ansible_facts": { "storage_skip_checks": [ "blivet_available", "packages_installed", "service_facts" ] }, "changed": false } TASK [Get unused disks for test] *********************************************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_misc.yml:26 Tuesday 22 July 2025 08:32:54 -0400 (0:00:00.174) 0:00:18.176 ********** included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml for managed-node1 TASK [Ensure test packages] **************************************************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:2 Tuesday 22 July 2025 08:32:54 -0400 (0:00:00.141) 0:00:18.317 ********** ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do TASK [Find unused disks in the system] ***************************************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:11 Tuesday 22 July 2025 08:32:55 -0400 (0:00:01.253) 0:00:19.571 ********** ok: [managed-node1] => { "changed": false, "disks": "Unable to find unused disk", "info": [ "Line: NAME=\"/dev/xvda\" TYPE=\"disk\" SIZE=\"268435456000\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line type [part] is not disk: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/xvda2\" TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [part] is not disk: NAME=\"/dev/xvda2\" TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "filename [xvda2] is a partition", "filename [xvda1] is a partition", "Disk [/dev/xvda] attrs [{'type': 'disk', 'size': '268435456000', 'fstype': '', 'ssize': '512'}] has partitions" ] } TASK [Debug why there are no unused disks] ************************************* task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:20 Tuesday 22 July 2025 08:32:56 -0400 (0:00:01.185) 0:00:20.757 ********** ok: [managed-node1] => { "changed": false, "cmd": "set -x\nexec 1>&2\nlsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC\njournalctl -ex\n", "delta": 
"0:00:00.030160", "end": "2025-07-22 08:32:57.881064", "rc": 0, "start": "2025-07-22 08:32:57.850904" } STDERR: + exec + lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC NAME="/dev/xvda" TYPE="disk" SIZE="268435456000" FSTYPE="" LOG-SEC="512" NAME="/dev/xvda1" TYPE="part" SIZE="1048576" FSTYPE="" LOG-SEC="512" NAME="/dev/xvda2" TYPE="part" SIZE="268433341952" FSTYPE="xfs" LOG-SEC="512" + journalctl -ex Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service. ░░ Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has finished successfully. ░░ ░░ The job identifier is 404. Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com NetworkManager[720]: [1753187116.8032] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external') Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com NetworkManager[720]: [1753187116.8038] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external') Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com NetworkManager[720]: [1753187116.8043] device (lo): Activation: successful, device activated. Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started gssproxy.service - GSSAPI Proxy Daemon. ░░ Subject: A start job for unit gssproxy.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit gssproxy.service has finished successfully. ░░ ░░ The job identifier is 231. Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: rpc-gssd.service - RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab). ░░ Subject: A start job for unit rpc-gssd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpc-gssd.service has finished successfully. ░░ ░░ The job identifier is 232. Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target nfs-client.target - NFS client services. ░░ Subject: A start job for unit nfs-client.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit nfs-client.target has finished successfully. ░░ ░░ The job identifier is 229. Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. ░░ Subject: A start job for unit remote-fs-pre.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit remote-fs-pre.target has finished successfully. ░░ ░░ The job identifier is 236. Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. ░░ Subject: A start job for unit remote-cryptsetup.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit remote-cryptsetup.target has finished successfully. ░░ ░░ The job identifier is 278. 
Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target remote-fs.target - Remote File Systems. ░░ Subject: A start job for unit remote-fs.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit remote-fs.target has finished successfully. ░░ ░░ The job identifier is 247. Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: systemd-pcrphase.service - TPM PCR Barrier (User) was skipped because of an unmet condition check (ConditionSecurity=measured-uki). ░░ Subject: A start job for unit systemd-pcrphase.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-pcrphase.service has finished successfully. ░░ ░░ The job identifier is 139. Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com NetworkManager[720]: [1753187116.8815] dhcp4 (eth0): state changed new lease, address=10.31.45.60 Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com NetworkManager[720]: [1753187116.8825] policy: set 'cloud-init eth0' (eth0) as default for IPv4 routing and DNS Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com NetworkManager[720]: [1753187116.8868] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full') Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com NetworkManager[720]: [1753187116.8898] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full') Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com NetworkManager[720]: [1753187116.8903] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full') Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com NetworkManager[720]: [1753187116.8908] manager: NetworkManager state is now CONNECTED_SITE Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com NetworkManager[720]: [1753187116.8911] device (eth0): Activation: successful, device activated. Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com NetworkManager[720]: [1753187116.8916] manager: NetworkManager state is now CONNECTED_GLOBAL Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com NetworkManager[720]: [1753187116.8918] manager: startup complete Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished NetworkManager-wait-online.service - Network Manager Wait Online. ░░ Subject: A start job for unit NetworkManager-wait-online.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-wait-online.service has finished successfully. ░░ ░░ The job identifier is 205. Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting cloud-init.service - Cloud-init: Network Stage... ░░ Subject: A start job for unit cloud-init.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-init.service has begun execution. ░░ ░░ The job identifier is 274. 
Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com chronyd[678]: Added source 10.11.160.238 Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com chronyd[678]: Added source 10.18.100.10 Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com chronyd[678]: Added source 10.2.32.37 Jul 22 08:25:16 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com chronyd[678]: Added source 10.2.32.38 Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: Cloud-init v. 24.4-5.el10 running 'init' at Tue, 22 Jul 2025 12:25:17 +0000. Up 19.59 seconds. Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++++++++++++ Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: +--------+------+----------------------------+---------------+--------+-------------------+ Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address | Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: +--------+------+----------------------------+---------------+--------+-------------------+ Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: | eth0 | True | 10.31.45.60 | 255.255.252.0 | global | 02:a9:49:9a:ed:c1 | Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: | eth0 | True | fe80::a9:49ff:fe9a:edc1/64 | . | link | 02:a9:49:9a:ed:c1 | Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . | Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: | lo | True | ::1/128 | . | host | . 
| Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: +--------+------+----------------------------+---------------+--------+-------------------+ Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: ++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++ Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: +-------+-------------+------------+---------------+-----------+-------+ Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags | Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: +-------+-------------+------------+---------------+-----------+-------+ Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: | 0 | 0.0.0.0 | 10.31.44.1 | 0.0.0.0 | eth0 | UG | Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: | 1 | 10.31.44.0 | 0.0.0.0 | 255.255.252.0 | eth0 | U | Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: +-------+-------------+------------+---------------+-----------+-------+ Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++ Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: +-------+-------------+---------+-----------+-------+ Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: | Route | Destination | Gateway | Interface | Flags | Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: +-------+-------------+---------+-----------+-------+ Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: | 0 | fe80::/64 | :: | eth0 | U | Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: | 2 | multicast | :: | eth0 | U | Jul 22 08:25:17 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: ci-info: +-------+-------------+---------+-----------+-------+ Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: Generating public/private rsa key pair. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: The key fingerprint is: Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: SHA256:6huW2TcQDZqcTxdzNWTYZYztvf1p1FBROQrGGVssXXg root@ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: The key's randomart image is: Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: +---[RSA 3072]----+ Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | . +.**BBB| Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | . 
+ o X+=oE+| Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | = o +.o ooo| Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | o o .. o| Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | S .+| Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | = . oo| Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | * . o . o| Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | o . . . o.| Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | o. . | Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: +----[SHA256]-----+ Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: Generating public/private ecdsa key pair. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: The key fingerprint is: Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: SHA256:00qzX8meWABT/y8TXwJjy5TJMYc2JuNVol8EYutqxak root@ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: The key's randomart image is: Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: +---[ECDSA 256]---+ Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | + =++ | Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | oo**O | Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | o.o*X.. | Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | *.* * | Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | S B + + .| Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | . O o . =.| Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | E = o o| Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | . . = . o | Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | o o | Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: +----[SHA256]-----+ Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: Generating public/private ed25519 key pair. 
Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: The key fingerprint is: Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: SHA256:3E/tN08ZG+sgsEMtMOHakGLrtV5hyaZZg72LVWOA0Mg root@ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: The key's randomart image is: Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: +--[ED25519 256]--+ Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | ..o . | Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | E..+ . | Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | o + = | Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | . o B.=.. . | Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | . + @SB... .o | Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | . . B B =o . *| Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | . + + o ....=o| Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | . + . . . ooo| Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: | o . ..| Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[808]: +----[SHA256]-----+ Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished cloud-init.service - Cloud-init: Network Stage. ░░ Subject: A start job for unit cloud-init.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-init.service has finished successfully. ░░ ░░ The job identifier is 274. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target cloud-config.target - Cloud-config availability. ░░ Subject: A start job for unit cloud-config.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-config.target has finished successfully. ░░ ░░ The job identifier is 276. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target network-online.target - Network is Online. ░░ Subject: A start job for unit network-online.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit network-online.target has finished successfully. ░░ ░░ The job identifier is 204. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting cloud-config.service - Cloud-init: Config Stage... ░░ Subject: A start job for unit cloud-config.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-config.service has begun execution. ░░ ░░ The job identifier is 275. 
Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: Cannot change IRQ 0 affinity: Permission denied Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: IRQ 0 affinity is now unmanaged Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: Cannot change IRQ 48 affinity: Permission denied Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: IRQ 48 affinity is now unmanaged Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: Cannot change IRQ 49 affinity: Permission denied Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: IRQ 49 affinity is now unmanaged Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: Cannot change IRQ 50 affinity: Permission denied Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: IRQ 50 affinity is now unmanaged Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: Cannot change IRQ 51 affinity: Permission denied Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: IRQ 51 affinity is now unmanaged Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: Cannot change IRQ 52 affinity: Permission denied Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: IRQ 52 affinity is now unmanaged Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: Cannot change IRQ 53 affinity: Permission denied Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: IRQ 53 affinity is now unmanaged Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: Cannot change IRQ 54 affinity: Permission denied Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: IRQ 54 affinity is now unmanaged Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: Cannot change IRQ 55 affinity: Permission denied Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: IRQ 55 affinity is now unmanaged Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: Cannot change IRQ 56 affinity: Permission denied Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: IRQ 56 affinity is now unmanaged Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: Cannot change IRQ 57 affinity: Permission denied Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: IRQ 57 affinity is now unmanaged Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: Cannot change IRQ 58 affinity: Permission denied Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: IRQ 58 affinity is now unmanaged Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: Cannot change IRQ 59 affinity: Permission denied Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com irqbalance[662]: IRQ 59 affinity is now unmanaged Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting kdump.service - Crash recovery kernel arming... 
░░ Subject: A start job for unit kdump.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit kdump.service has begun execution. ░░ ░░ The job identifier is 239. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting restraintd.service - The restraint harness.... ░░ Subject: A start job for unit restraintd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit restraintd.service has begun execution. ░░ ░░ The job identifier is 271. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting rpc-statd-notify.service - Notify NFS peers of a restart... ░░ Subject: A start job for unit rpc-statd-notify.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpc-statd-notify.service has begun execution. ░░ ░░ The job identifier is 237. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting rsyslog.service - System Logging Service... ░░ Subject: A start job for unit rsyslog.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rsyslog.service has begun execution. ░░ ░░ The job identifier is 248. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sm-notify[893]: Version 2.8.3 starting Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting sshd.service - OpenSSH server daemon... ░░ Subject: A start job for unit sshd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has begun execution. ░░ ░░ The job identifier is 249. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... ░░ Subject: A start job for unit systemd-user-sessions.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-user-sessions.service has begun execution. ░░ ░░ The job identifier is 280. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started rpc-statd-notify.service - Notify NFS peers of a restart. ░░ Subject: A start job for unit rpc-statd-notify.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpc-statd-notify.service has finished successfully. ░░ ░░ The job identifier is 237. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd[895]: Server listening on 0.0.0.0 port 22. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd[895]: Server listening on :: port 22. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started sshd.service - OpenSSH server daemon. ░░ Subject: A start job for unit sshd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has finished successfully. ░░ ░░ The job identifier is 249. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started restraintd.service - The restraint harness.. 
░░ Subject: A start job for unit restraintd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit restraintd.service has finished successfully. ░░ ░░ The job identifier is 271. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. ░░ Subject: A start job for unit systemd-user-sessions.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-user-sessions.service has finished successfully. ░░ ░░ The job identifier is 280. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started crond.service - Command Scheduler. ░░ Subject: A start job for unit crond.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit crond.service has finished successfully. ░░ ░░ The job identifier is 256. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started getty@tty1.service - Getty on tty1. ░░ Subject: A start job for unit getty@tty1.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit getty@tty1.service has finished successfully. ░░ ░░ The job identifier is 266. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. ░░ Subject: A start job for unit serial-getty@ttyS0.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit serial-getty@ttyS0.service has finished successfully. ░░ ░░ The job identifier is 261. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target getty.target - Login Prompts. ░░ Subject: A start job for unit getty.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit getty.target has finished successfully. ░░ ░░ The job identifier is 260. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com rsyslogd[894]: [origin software="rsyslogd" swVersion="8.2506.0-1.el10" x-pid="894" x-info="https://www.rsyslog.com"] start Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started rsyslog.service - System Logging Service. ░░ Subject: A start job for unit rsyslog.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rsyslog.service has finished successfully. ░░ ░░ The job identifier is 248. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target multi-user.target - Multi-User System. ░░ Subject: A start job for unit multi-user.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit multi-user.target has finished successfully. ░░ ░░ The job identifier is 121. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com crond[916]: (CRON) STARTUP (1.7.0) Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com crond[916]: (CRON) INFO (Syslog will be used instead of sendmail.) Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com crond[916]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 32% if used.) 
Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com crond[916]: (CRON) INFO (running with inotify support) Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... ░░ Subject: A start job for unit systemd-update-utmp-runlevel.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-update-utmp-runlevel.service has begun execution. ░░ ░░ The job identifier is 245. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit systemd-update-utmp-runlevel.service has successfully entered the 'dead' state. Jul 22 08:25:18 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. ░░ Subject: A start job for unit systemd-update-utmp-runlevel.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-update-utmp-runlevel.service has finished successfully. ░░ ░░ The job identifier is 245. Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com rsyslogd[894]: imjournal: journal files changed, reloading... [v8.2506.0-1.el10 try https://www.rsyslog.com/e/0 ] Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[951]: Cloud-init v. 24.4-5.el10 running 'modules:config' at Tue, 22 Jul 2025 12:25:19 +0000. Up 21.44 seconds. Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd[895]: Received signal 15; terminating. Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Stopping sshd.service - OpenSSH server daemon... ░░ Subject: A stop job for unit sshd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit sshd.service has begun execution. ░░ ░░ The job identifier is 507. Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: sshd.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit sshd.service has successfully entered the 'dead' state. Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Stopped sshd.service - OpenSSH server daemon. ░░ Subject: A stop job for unit sshd.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit sshd.service has finished. ░░ ░░ The job identifier is 507 and the job result is done. Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting sshd.service - OpenSSH server daemon... ░░ Subject: A start job for unit sshd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has begun execution. ░░ ░░ The job identifier is 507. Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd[955]: Server listening on 0.0.0.0 port 22. Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd[955]: Server listening on :: port 22. 
Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started sshd.service - OpenSSH server daemon. ░░ Subject: A start job for unit sshd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has finished successfully. ░░ ░░ The job identifier is 507. Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished cloud-config.service - Cloud-init: Config Stage. ░░ Subject: A start job for unit cloud-config.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-config.service has finished successfully. ░░ ░░ The job identifier is 275. Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting cloud-final.service - Cloud-init: Final Stage... ░░ Subject: A start job for unit cloud-final.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-final.service has begun execution. ░░ ░░ The job identifier is 277. Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com restraintd[904]: Listening on http://localhost:8081 Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[992]: Cloud-init v. 24.4-5.el10 running 'modules:final' at Tue, 22 Jul 2025 12:25:19 +0000. Up 21.86 seconds. Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[1016]: ############################################################# Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[1017]: -----BEGIN SSH HOST KEY FINGERPRINTS----- Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[1019]: 256 SHA256:00qzX8meWABT/y8TXwJjy5TJMYc2JuNVol8EYutqxak root@ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com (ECDSA) Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[1021]: 256 SHA256:3E/tN08ZG+sgsEMtMOHakGLrtV5hyaZZg72LVWOA0Mg root@ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com (ED25519) Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[1023]: 3072 SHA256:6huW2TcQDZqcTxdzNWTYZYztvf1p1FBROQrGGVssXXg root@ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com (RSA) Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[1024]: -----END SSH HOST KEY FINGERPRINTS----- Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[1026]: ############################################################# Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com kdumpctl[909]: kdump: Detected change(s) in the following file(s): /etc/fstab Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com cloud-init[992]: Cloud-init v. 24.4-5.el10 finished at Tue, 22 Jul 2025 12:25:19 +0000. Datasource DataSourceEc2Local. Up 21.97 seconds Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished cloud-final.service - Cloud-init: Final Stage. ░░ Subject: A start job for unit cloud-final.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-final.service has finished successfully. ░░ ░░ The job identifier is 277. Jul 22 08:25:19 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target cloud-init.target - Cloud-init target. 
░░ Subject: A start job for unit cloud-init.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-init.target has finished successfully. ░░ ░░ The job identifier is 272. Jul 22 08:25:21 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com kernel: block xvda: the capability attribute has been deprecated. Jul 22 08:25:21 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com kdumpctl[909]: kdump: Rebuilding /boot/initramfs-6.12.0-98.el10.x86_64kdump.img Jul 22 08:25:22 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1428]: dracut-105-4.el10 Jul 22 08:25:22 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1431]: Executing: /usr/bin/dracut --list-modules Jul 22 08:25:22 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1487]: dracut-105-4.el10 Jul 22 08:25:22 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1489]: Executing: /usr/bin/dracut --list-modules Jul 22 08:25:22 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1551]: dracut-105-4.el10 Jul 22 08:25:22 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics --aggressive-strip --mount "/dev/disk/by-uuid/0a4c0384-ac05-49a1-bf2b-0105495224f1 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --add squash-erofs --squash-compressor lzma --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-6.12.0-98.el10.x86_64kdump.img 6.12.0-98.el10.x86_64 Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com chronyd[678]: Selected source 10.2.32.37 Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-bsod' will not be installed, because command '/usr/lib/systemd/systemd-bsod' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-pcrphase' will not be installed, because command '/usr/lib/systemd/systemd-pcrphase' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-portabled' will not be installed, because command 'portablectl' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-portabled' will not be installed, because command '/usr/lib/systemd/systemd-portabled' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found! 
Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'connman' will not be installed, because command 'connmand' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'connman' will not be installed, because command 'connmanctl' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'connman' will not be installed, because command 'connmand-wait-online' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'plymouth' will not be installed, because it's in the list to be omitted! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'btrfs' will not be installed, because command 'btrfs' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'dmraid' will not be installed, because command 'dmraid' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'mdraid' will not be installed, because command 'mdadm' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'multipath' will not be installed, because command 'multipath' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'crypt-gpg' will not be installed, because command 'gpg' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'pcsc' will not be installed, because command 'pcscd' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'cifs' will not be installed, because command 'mount.cifs' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'hwdb' will not be installed, because it's in the list to be omitted! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'iscsi' will not be installed, because command 'iscsiadm' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'iscsi' will not be installed, because command 'iscsid' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'nvmf' will not be installed, because command 'nvme' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'resume' will not be installed, because it's in the list to be omitted! 
Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'squash-squashfs' will not be installed, because command 'mksquashfs' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'squash-squashfs' will not be installed, because command 'unsquashfs' could not be found! Jul 22 08:25:23 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'biosdevname' will not be installed, because command 'biosdevname' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'busybox' will not be installed, because command 'busybox' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'earlykdump' will not be installed, because it's in the list to be omitted! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-bsod' will not be installed, because command '/usr/lib/systemd/systemd-bsod' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-pcrphase' will not be installed, because command '/usr/lib/systemd/systemd-pcrphase' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-portabled' will not be installed, because command 'portablectl' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-portabled' will not be installed, because command '/usr/lib/systemd/systemd-portabled' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'connman' will not be installed, because command 'connmand' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'connman' will not be installed, because command 'connmanctl' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'connman' will not be installed, because command 'connmand-wait-online' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'btrfs' will not be installed, because command 'btrfs' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'dmraid' will not be installed, because command 'dmraid' could not be found! 
Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'mdraid' will not be installed, because command 'mdadm' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'multipath' will not be installed, because command 'multipath' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'crypt-gpg' will not be installed, because command 'gpg' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'pcsc' will not be installed, because command 'pcscd' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'cifs' will not be installed, because command 'mount.cifs' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'iscsi' will not be installed, because command 'iscsiadm' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'iscsi' will not be installed, because command 'iscsid' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'nvmf' will not be installed, because command 'nvme' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'squash-squashfs' will not be installed, because command 'mksquashfs' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'squash-squashfs' will not be installed, because command 'unsquashfs' could not be found! Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Module 'busybox' will not be installed, because command 'busybox' could not be found! 
Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: bash *** Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: shell-interpreter *** Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: systemd *** Jul 22 08:25:24 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: fips *** Jul 22 08:25:25 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: fips-crypto-policies *** Jul 22 08:25:25 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: systemd-ask-password *** Jul 22 08:25:25 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: systemd-initrd *** Jul 22 08:25:25 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: systemd-journald *** Jul 22 08:25:25 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: systemd-modules-load *** Jul 22 08:25:25 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: systemd-sysctl *** Jul 22 08:25:25 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: systemd-sysusers *** Jul 22 08:25:25 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: systemd-tmpfiles *** Jul 22 08:25:25 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: systemd-udevd *** Jul 22 08:25:25 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: rngd *** Jul 22 08:25:25 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: i18n *** Jul 22 08:25:25 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: drm *** Jul 22 08:25:25 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: prefixdevname *** Jul 22 08:25:25 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: kernel-modules *** Jul 22 08:25:26 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: kernel-modules-extra *** Jul 22 08:25:26 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: kernel-modules-extra: configuration source "/run/depmod.d" does not exist Jul 22 08:25:26 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: kernel-modules-extra: configuration source "/lib/depmod.d" does not exist Jul 22 08:25:26 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf" Jul 22 08:25:26 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories Jul 22 08:25:26 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: fstab-sys *** Jul 22 08:25:26 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: rootfs-block *** Jul 22 08:25:26 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: squash-erofs *** Jul 22 08:25:26 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: terminfo *** Jul 22 08:25:26 
ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: udev-rules *** Jul 22 08:25:26 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: dracut-systemd *** Jul 22 08:25:26 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: usrmount *** Jul 22 08:25:26 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: base *** Jul 22 08:25:26 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: fs-lib *** Jul 22 08:25:26 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: kdumpbase *** Jul 22 08:25:26 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: memstrack *** Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: microcode_ctl-fw_dir_override *** Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: microcode_ctl module: mangling fw_dir Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware" Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"... Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: microcode_ctl: intel: caveats check for kernel version "6.12.0-98.el10.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel" to fw_dir variable Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"... Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: microcode_ctl: configuration "intel-06-4f-01" is ignored Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"... 
Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: microcode_ctl: configuration "intel-06-8f-08" is ignored Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: microcode_ctl: final fw_dir: "/usr/share/microcode_ctl/ucode_with_caveats/intel /lib/firmware/updates /lib/firmware" Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: openssl *** Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: shutdown *** Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including module: squash-lib *** Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Including modules done *** Jul 22 08:25:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Installing kernel module dependencies *** Jul 22 08:25:28 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Installing kernel module dependencies done *** Jul 22 08:25:28 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Resolving executable dependencies *** Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Resolving executable dependencies done *** Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Hardlinking files *** Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Mode: real Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Method: sha256 Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Files: 550 Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Linked: 24 files Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Compared: 0 xattrs Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Compared: 42 files Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Saved: 14.22 MiB Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Duration: 0.159279 seconds Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Hardlinking files done *** Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Generating early-microcode cpio image *** Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Constructing GenuineIntel.bin *** Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Constructing GenuineIntel.bin *** Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Store current command line parameters *** Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: Stored kernel commandline: Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: No dracut internal kernel commandline stored in the initramfs Jul 22 08:25:29 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Squashing the files inside the initramfs *** Jul 22 08:25:43 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Squashing the files inside the initramfs done *** Jul 22 08:25:43 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Creating image file '/boot/initramfs-6.12.0-98.el10.x86_64kdump.img' *** Jul 22 08:25:44 
ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com dracut[1554]: *** Creating initramfs image file '/boot/initramfs-6.12.0-98.el10.x86_64kdump.img' done *** Jul 22 08:25:44 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com kdumpctl[909]: kdump: kexec: loaded kdump kernel Jul 22 08:25:44 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com kdumpctl[909]: kdump: Starting kdump: [OK] Jul 22 08:25:44 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished kdump.service - Crash recovery kernel arming. ░░ Subject: A start job for unit kdump.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit kdump.service has finished successfully. ░░ ░░ The job identifier is 239. Jul 22 08:25:44 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Startup finished in 1.238s (kernel) + 4.190s (initrd) + 41.947s (userspace) = 47.375s. ░░ Subject: System start-up is now complete ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ All system services necessary queued for starting at boot have been ░░ started. Note that this does not mean that the machine is now idle as services ░░ might still be busy with completing start-up. ░░ ░░ Kernel start-up required 1238157 microseconds. ░░ ░░ Initrd start-up required 4190069 microseconds. ░░ ░░ Userspace start-up required 41947219 microseconds. Jul 22 08:25:46 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: systemd-hostnamed.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit systemd-hostnamed.service has successfully entered the 'dead' state. Jul 22 08:26:28 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com chronyd[678]: Selected source 208.113.130.146 (2.centos.pool.ntp.org) Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd-session[4455]: Accepted publickey for root from 10.30.34.89 port 34758 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Created slice user-0.slice - User Slice of UID 0. ░░ Subject: A start job for unit user-0.slice has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user-0.slice has finished successfully. ░░ ░░ The job identifier is 586. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0... ░░ Subject: A start job for unit user-runtime-dir@0.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user-runtime-dir@0.service has begun execution. ░░ ░░ The job identifier is 508. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: New session 1 of user root. ░░ Subject: A new session 1 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 1 has been created for the user root. ░░ ░░ The leading process of the session is 4455. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0. 
░░ Subject: A start job for unit user-runtime-dir@0.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user-runtime-dir@0.service has finished successfully. ░░ ░░ The job identifier is 508. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting user@0.service - User Manager for UID 0... ░░ Subject: A start job for unit user@0.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user@0.service has begun execution. ░░ ░░ The job identifier is 588. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: New session 2 of user root. ░░ Subject: A new session 2 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 2 has been created for the user root. ░░ ░░ The leading process of the session is 4460. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com (systemd)[4460]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Queued start job for default target default.target. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Created slice app.slice - User Application Slice. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 4. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[4460]: grub-boot-success.timer - Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 9. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 8. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Reached target paths.target - Paths. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 10. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Reached target timers.target - Timers. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 7. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Starting dbus.socket - D-Bus User Message Bus Socket... 
░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 12. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 3. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Listening on dbus.socket - D-Bus User Message Bus Socket. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 12. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 3. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Reached target sockets.target - Sockets. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 11. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Reached target basic.target - Basic System. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 2. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Reached target default.target - Main User Target. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 1. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[4460]: Startup finished in 116ms. ░░ Subject: User manager start-up is now complete ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The user manager instance for user 0 has been started. All services queued ░░ for starting have been started. Note that other services might still be starting ░░ up or be started at any later time. ░░ ░░ Startup of the manager took 116152 microseconds. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started user@0.service - User Manager for UID 0. ░░ Subject: A start job for unit user@0.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit user@0.service has finished successfully. ░░ ░░ The job identifier is 588. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started session-1.scope - Session 1 of User root. 
░░ Subject: A start job for unit session-1.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-1.scope has finished successfully. ░░ ░░ The job identifier is 669. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd-session[4455]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd-session[4471]: Received disconnect from 10.30.34.89 port 34758:11: disconnected by user Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd-session[4471]: Disconnected from user root 10.30.34.89 port 34758 Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd-session[4455]: pam_unix(sshd:session): session closed for user root Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: session-1.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-1.scope has successfully entered the 'dead' state. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: Session 1 logged out. Waiting for processes to exit. Jul 22 08:27:03 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: Removed session 1. ░░ Subject: Session 1 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 1 has been terminated. Jul 22 08:27:10 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd-session[4508]: Accepted publickey for root from 10.31.9.41 port 42804 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 22 08:27:10 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd-session[4509]: Accepted publickey for root from 10.31.9.41 port 42820 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 22 08:27:10 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: New session 3 of user root. ░░ Subject: A new session 3 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 3 has been created for the user root. ░░ ░░ The leading process of the session is 4508. Jul 22 08:27:10 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started session-3.scope - Session 3 of User root. ░░ Subject: A start job for unit session-3.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-3.scope has finished successfully. ░░ ░░ The job identifier is 751. Jul 22 08:27:10 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: New session 4 of user root. ░░ Subject: A new session 4 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 4 has been created for the user root. ░░ ░░ The leading process of the session is 4509. Jul 22 08:27:10 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd-session[4508]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:27:10 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started session-4.scope - Session 4 of User root. 
░░ Subject: A start job for unit session-4.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-4.scope has finished successfully. ░░ ░░ The job identifier is 833. Jul 22 08:27:10 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd-session[4509]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:27:10 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd-session[4515]: Received disconnect from 10.31.9.41 port 42820:11: disconnected by user Jul 22 08:27:10 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd-session[4515]: Disconnected from user root 10.31.9.41 port 42820 Jul 22 08:27:10 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com sshd-session[4509]: pam_unix(sshd:session): session closed for user root Jul 22 08:27:10 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: session-4.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-4.scope has successfully entered the 'dead' state. Jul 22 08:27:10 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: Session 4 logged out. Waiting for processes to exit. Jul 22 08:27:10 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd-logind[665]: Removed session 4. ░░ Subject: Session 4 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 4 has been terminated. Jul 22 08:27:27 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com unknown: Running test '/Prepare-managed-node/tests/prep_managed_node' (serial number 1) with reboot count 0 and test restart count 0. (Be aware the test name is sanitized!) Jul 22 08:27:28 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting systemd-hostnamed.service - Hostname Service... ░░ Subject: A start job for unit systemd-hostnamed.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-hostnamed.service has begun execution. ░░ ░░ The job identifier is 915. Jul 22 08:27:28 ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started systemd-hostnamed.service - Hostname Service. ░░ Subject: A start job for unit systemd-hostnamed.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-hostnamed.service has finished successfully. ░░ ░░ The job identifier is 915. Jul 22 08:27:28 managed-node1 systemd-hostnamed[6377]: Hostname set to (static) Jul 22 08:27:28 managed-node1 NetworkManager[720]: [1753187248.0997] hostname: static hostname changed from "ip-10-31-45-60.testing-farm.us-east-1.aws.redhat.com" to "managed-node1" Jul 22 08:27:28 managed-node1 systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service... ░░ Subject: A start job for unit NetworkManager-dispatcher.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has begun execution. ░░ ░░ The job identifier is 993. Jul 22 08:27:28 managed-node1 systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service. 
░░ Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has finished successfully. ░░ ░░ The job identifier is 993. Jul 22 08:27:29 managed-node1 unknown: Leaving test '/Prepare-managed-node/tests/prep_managed_node' (serial number 1). (Be aware the test name is sanitized!) Jul 22 08:27:38 managed-node1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Jul 22 08:27:58 managed-node1 systemd[1]: systemd-hostnamed.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit systemd-hostnamed.service has successfully entered the 'dead' state. Jul 22 08:27:59 managed-node1 sshd-session[7425]: Accepted publickey for root from 10.31.42.212 port 49974 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:27:59 managed-node1 systemd-logind[665]: New session 5 of user root. ░░ Subject: A new session 5 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 5 has been created for the user root. ░░ ░░ The leading process of the session is 7425. Jul 22 08:27:59 managed-node1 systemd[1]: Started session-5.scope - Session 5 of User root. ░░ Subject: A start job for unit session-5.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-5.scope has finished successfully. ░░ ░░ The job identifier is 1072. Jul 22 08:27:59 managed-node1 sshd-session[7425]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:27:59 managed-node1 sshd-session[7428]: Received disconnect from 10.31.42.212 port 49974:11: disconnected by user Jul 22 08:27:59 managed-node1 sshd-session[7428]: Disconnected from user root 10.31.42.212 port 49974 Jul 22 08:27:59 managed-node1 sshd-session[7425]: pam_unix(sshd:session): session closed for user root Jul 22 08:27:59 managed-node1 systemd[1]: session-5.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-5.scope has successfully entered the 'dead' state. Jul 22 08:27:59 managed-node1 systemd-logind[665]: Session 5 logged out. Waiting for processes to exit. Jul 22 08:27:59 managed-node1 systemd-logind[665]: Removed session 5. ░░ Subject: Session 5 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 5 has been terminated. Jul 22 08:27:59 managed-node1 sshd-session[7453]: Accepted publickey for root from 10.31.42.212 port 49982 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:27:59 managed-node1 systemd-logind[665]: New session 6 of user root. ░░ Subject: A new session 6 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 6 has been created for the user root. ░░ ░░ The leading process of the session is 7453. 
Jul 22 08:27:59 managed-node1 systemd[1]: Started session-6.scope - Session 6 of User root. ░░ Subject: A start job for unit session-6.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-6.scope has finished successfully. ░░ ░░ The job identifier is 1154. Jul 22 08:27:59 managed-node1 sshd-session[7453]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:27:59 managed-node1 sshd-session[7456]: Received disconnect from 10.31.42.212 port 49982:11: disconnected by user Jul 22 08:27:59 managed-node1 sshd-session[7456]: Disconnected from user root 10.31.42.212 port 49982 Jul 22 08:27:59 managed-node1 sshd-session[7453]: pam_unix(sshd:session): session closed for user root Jul 22 08:27:59 managed-node1 systemd-logind[665]: Session 6 logged out. Waiting for processes to exit. Jul 22 08:27:59 managed-node1 systemd[1]: session-6.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-6.scope has successfully entered the 'dead' state. Jul 22 08:27:59 managed-node1 systemd-logind[665]: Removed session 6. ░░ Subject: Session 6 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 6 has been terminated. Jul 22 08:28:05 managed-node1 sshd-session[7485]: Accepted publickey for root from 10.31.42.212 port 52332 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:28:05 managed-node1 systemd-logind[665]: New session 7 of user root. ░░ Subject: A new session 7 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 7 has been created for the user root. ░░ ░░ The leading process of the session is 7485. Jul 22 08:28:05 managed-node1 systemd[1]: Started session-7.scope - Session 7 of User root. ░░ Subject: A start job for unit session-7.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-7.scope has finished successfully. ░░ ░░ The job identifier is 1236. Jul 22 08:28:05 managed-node1 sshd-session[7485]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:05 managed-node1 sshd-session[7488]: Received disconnect from 10.31.42.212 port 52332:11: disconnected by user Jul 22 08:28:05 managed-node1 sshd-session[7488]: Disconnected from user root 10.31.42.212 port 52332 Jul 22 08:28:05 managed-node1 sshd-session[7485]: pam_unix(sshd:session): session closed for user root Jul 22 08:28:05 managed-node1 systemd[1]: session-7.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-7.scope has successfully entered the 'dead' state. Jul 22 08:28:05 managed-node1 systemd-logind[665]: Session 7 logged out. Waiting for processes to exit. Jul 22 08:28:05 managed-node1 systemd-logind[665]: Removed session 7. ░░ Subject: Session 7 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 7 has been terminated. 
Jul 22 08:28:07 managed-node1 sshd-session[7514]: Accepted publickey for root from 10.31.42.212 port 52340 ssh2: ECDSA SHA256:WU7noZiQSxkQHAT4JsTwkz7sTow5ig7aO2gcgaqEwOg Jul 22 08:28:07 managed-node1 systemd-logind[665]: New session 8 of user root. ░░ Subject: A new session 8 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 8 has been created for the user root. ░░ ░░ The leading process of the session is 7514. Jul 22 08:28:07 managed-node1 systemd[1]: Started session-8.scope - Session 8 of User root. ░░ Subject: A start job for unit session-8.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-8.scope has finished successfully. ░░ ░░ The job identifier is 1318. Jul 22 08:28:07 managed-node1 sshd-session[7514]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:08 managed-node1 python3.12[7691]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:28:09 managed-node1 sudo[7869]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxikejpgydftscznbkusctescnvtkajm ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187289.5050106-7893-102154301681293/AnsiballZ_setup.py' Jul 22 08:28:09 managed-node1 sudo[7869]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:09 managed-node1 python3.12[7872]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:28:10 managed-node1 sudo[7869]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:10 managed-node1 sudo[8050]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvrszaeoopzhvwlkopgakxsnxnexhthm ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187290.5827734-7909-126970147825577/AnsiballZ_stat.py' Jul 22 08:28:10 managed-node1 sudo[8050]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:11 managed-node1 python3.12[8053]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:28:11 managed-node1 sudo[8050]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:11 managed-node1 sudo[8202]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsjtwriplkseqhwydkjmknfhxcotycip ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187291.3191879-7973-33589094560243/AnsiballZ_dnf.py' Jul 22 08:28:11 managed-node1 sudo[8202]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:11 managed-node1 python3.12[8205]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True 
lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:28:22 managed-node1 groupadd[8311]: group added to /etc/group: name=clevis, GID=993 Jul 22 08:28:22 managed-node1 groupadd[8311]: group added to /etc/gshadow: name=clevis Jul 22 08:28:22 managed-node1 groupadd[8311]: new group: name=clevis, GID=993 Jul 22 08:28:22 managed-node1 useradd[8313]: new user: name=clevis, UID=993, GID=993, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none Jul 22 08:28:23 managed-node1 usermod[8317]: add 'clevis' to group 'tss' Jul 22 08:28:23 managed-node1 usermod[8317]: add 'clevis' to shadow group 'tss' Jul 22 08:28:23 managed-node1 dbus-broker-launch[620]: Noticed file-system modification, trigger reload. ░░ Subject: A configuration directory was written to ░░ Defined-By: dbus-broker ░░ Support: https://groups.google.com/forum/#!forum/bus1-devel ░░ ░░ A write was detected to one of the directories containing D-Bus configuration ░░ files, triggering a configuration reload. ░░ ░░ This functionality exists for backwards compatibility to pick up changes to ░░ D-Bus configuration without an explicit reload request. Typically when ░░ installing or removing third-party software causes D-Bus configuration files ░░ to be added or removed. ░░ ░░ It is worth noting that this may cause partial configuration to be loaded in ░░ case dispatching this notification races with the writing of the configuration ░░ files. However, a future notification will then cause the configuration to be ░░ reloaded again. Jul 22 08:28:23 managed-node1 dbus-broker-launch[620]: Noticed file-system modification, trigger reload. Jul 22 08:28:23 managed-node1 groupadd[8324]: group added to /etc/group: name=polkitd, GID=114 Jul 22 08:28:23 managed-node1 groupadd[8324]: group added to /etc/gshadow: name=polkitd Jul 22 08:28:23 managed-node1 groupadd[8324]: new group: name=polkitd, GID=114 Jul 22 08:28:23 managed-node1 useradd[8327]: new user: name=polkitd, UID=114, GID=114, home=/, shell=/sbin/nologin, from=none Jul 22 08:28:23 managed-node1 dbus-broker-launch[620]: Noticed file-system modification, trigger reload.
Jul 22 08:28:23 managed-node1 dbus-broker-launch[620]: Noticed file-system modification, trigger reload. Jul 22 08:28:23 managed-node1 dbus-broker-launch[620]: Noticed file-system modification, trigger reload. Jul 22 08:28:23 managed-node1 systemd[1]: Listening on pcscd.socket - PC/SC Smart Card Daemon Activation Socket. ░░ Subject: A start job for unit pcscd.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit pcscd.socket has finished successfully. ░░ ░░ The job identifier is 1405. Jul 22 08:28:23 managed-node1 dbus-broker-launch[620]: Noticed file-system modification, trigger reload. Jul 22 08:28:23 managed-node1 dbus-broker-launch[620]: Noticed file-system modification, trigger reload.
Jul 22 08:28:24 managed-node1 systemd[1]: Started run-p8357-i8657.service - [systemd-run] /usr/bin/systemctl start man-db-cache-update. ░░ Subject: A start job for unit run-p8357-i8657.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit run-p8357-i8657.service has finished successfully. ░░ ░░ The job identifier is 1486. Jul 22 08:28:24 managed-node1 systemctl[8358]: Warning: The unit file, source configuration file or drop-ins of man-db-cache-update.service changed on disk. Run 'systemctl daemon-reload' to reload units. Jul 22 08:28:24 managed-node1 systemd[1]: Starting man-db-cache-update.service... ░░ Subject: A start job for unit man-db-cache-update.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit man-db-cache-update.service has begun execution. ░░ ░░ The job identifier is 1564. Jul 22 08:28:24 managed-node1 systemd[1]: Reload requested from client PID 8361 ('systemctl') (unit session-8.scope)... Jul 22 08:28:24 managed-node1 systemd[1]: Reloading... Jul 22 08:28:24 managed-node1 systemd-rc-local-generator[8402]: /etc/rc.d/rc.local is not marked executable, skipping. Jul 22 08:28:24 managed-node1 systemd[1]: Reloading finished in 248 ms. Jul 22 08:28:24 managed-node1 systemd[1]: Queuing reload/restart jobs for marked units… Jul 22 08:28:24 managed-node1 systemd[1]: Reloading user@0.service - User Manager for UID 0... ░░ Subject: A reload job for unit user@0.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A reload job for unit user@0.service has begun execution. ░░ ░░ The job identifier is 1642. Jul 22 08:28:24 managed-node1 systemd[4460]: Received SIGRTMIN+25 from PID 1 (systemd). Jul 22 08:28:24 managed-node1 systemd[4460]: Reexecuting. Jul 22 08:28:24 managed-node1 systemd[1]: Reloaded user@0.service - User Manager for UID 0. ░░ Subject: A reload job for unit user@0.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A reload job for unit user@0.service has finished. ░░ ░░ The job identifier is 1642 and the job result is done.
Jul 22 08:28:25 managed-node1 sudo[8202]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:26 managed-node1 sudo[9261]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upizijoqkjppznlpskqjjewhhlxkzanu ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187305.98794-8925-62695392723685/AnsiballZ_blivet.py' Jul 22 08:28:26 managed-node1 sudo[9261]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:26 managed-node1 python3.12[9264]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:28:26 managed-node1 sudo[9261]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:27 managed-node1 sudo[9425]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwtfzmccrzbzqskbebtyqmgnlydrpxok ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187307.1885679-8978-191341318344529/AnsiballZ_dnf.py' Jul 22 08:28:27 managed-node1 sudo[9425]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:27 managed-node1 python3.12[9428]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:28:28 managed-node1 systemd[1]: man-db-cache-update.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit man-db-cache-update.service has successfully entered the 'dead' state. Jul 22 08:28:28 managed-node1 systemd[1]: Finished man-db-cache-update.service. 
░░ Subject: A start job for unit man-db-cache-update.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit man-db-cache-update.service has finished successfully. ░░ ░░ The job identifier is 1564. Jul 22 08:28:28 managed-node1 systemd[1]: man-db-cache-update.service: Consumed 1.079s CPU time, 37.5M memory peak. ░░ Subject: Resources consumed by unit runtime ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit man-db-cache-update.service completed and consumed the indicated resources. Jul 22 08:28:28 managed-node1 systemd[1]: run-p8357-i8657.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit run-p8357-i8657.service has successfully entered the 'dead' state. Jul 22 08:28:28 managed-node1 sudo[9425]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:28 managed-node1 sudo[9588]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzisitxutpdkanfpoquckzgemebfravn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187308.2232907-9060-74153210161909/AnsiballZ_service_facts.py' Jul 22 08:28:28 managed-node1 sudo[9588]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:28 managed-node1 python3.12[9591]: ansible-service_facts Invoked Jul 22 08:28:30 managed-node1 sudo[9588]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:30 managed-node1 sudo[9858]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyhdbmjddmnwinvlpactgnwuftxfuizt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187310.724813-9186-257100635106626/AnsiballZ_blivet.py' Jul 22 08:28:30 managed-node1 sudo[9858]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:31 managed-node1 python3.12[9861]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:28:31 managed-node1 sudo[9858]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:31 managed-node1 sudo[10018]: root : TTY=pts/0 ; PWD=/root ; 
USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydxcvotceqrubiqzzfzrkouofwkmuxrp ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187311.3730543-9281-254627723584894/AnsiballZ_stat.py' Jul 22 08:28:31 managed-node1 sudo[10018]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:31 managed-node1 python3.12[10021]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:28:31 managed-node1 sudo[10018]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:32 managed-node1 sudo[10178]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzpiqxuesyerqsdadosmnahxoqpctkue ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187312.503086-9451-194467746706407/AnsiballZ_stat.py' Jul 22 08:28:32 managed-node1 sudo[10178]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:32 managed-node1 python3.12[10181]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:28:32 managed-node1 sudo[10178]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:33 managed-node1 sudo[10338]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfkciwgunouaaoievhzykgmomiuwqcfp ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187312.9871826-9465-230117686682464/AnsiballZ_setup.py' Jul 22 08:28:33 managed-node1 sudo[10338]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:33 managed-node1 python3.12[10341]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:28:33 managed-node1 sudo[10338]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:34 managed-node1 sudo[10525]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idssafxkiksfalftapqgeunxcofqazyi ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187314.0017989-9492-184214269743989/AnsiballZ_dnf.py' Jul 22 08:28:34 managed-node1 sudo[10525]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:34 managed-node1 python3.12[10528]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:28:34 managed-node1 sudo[10525]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:35 managed-node1 sudo[10684]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmibtqvecxfbnjufopgowqxilxujanpi ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187314.9198892-9573-108125138990038/AnsiballZ_find_unused_disk.py' Jul 22 08:28:35 managed-node1 sudo[10684]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:35 managed-node1 python3.12[10687]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g 
max_return=1 with_interface=scsi max_size=0 match_sector_size=False Jul 22 08:28:35 managed-node1 sudo[10684]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:36 managed-node1 sudo[10844]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otiltntsomkatultdlleeopwvvajvipb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187315.738383-9609-16893807277818/AnsiballZ_command.py' Jul 22 08:28:36 managed-node1 sudo[10844]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:36 managed-node1 python3.12[10847]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:28:36 managed-node1 sudo[10844]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:37 managed-node1 sshd-session[10875]: Accepted publickey for root from 10.31.42.212 port 46806 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:28:37 managed-node1 systemd-logind[665]: New session 9 of user root. ░░ Subject: A new session 9 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 9 has been created for the user root. ░░ ░░ The leading process of the session is 10875. Jul 22 08:28:37 managed-node1 systemd[1]: Started session-9.scope - Session 9 of User root. ░░ Subject: A start job for unit session-9.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-9.scope has finished successfully. ░░ ░░ The job identifier is 1643. Jul 22 08:28:37 managed-node1 sshd-session[10875]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:37 managed-node1 sshd-session[10878]: Received disconnect from 10.31.42.212 port 46806:11: disconnected by user Jul 22 08:28:37 managed-node1 sshd-session[10878]: Disconnected from user root 10.31.42.212 port 46806 Jul 22 08:28:37 managed-node1 sshd-session[10875]: pam_unix(sshd:session): session closed for user root Jul 22 08:28:37 managed-node1 systemd[1]: session-9.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-9.scope has successfully entered the 'dead' state. Jul 22 08:28:37 managed-node1 systemd-logind[665]: Session 9 logged out. Waiting for processes to exit. Jul 22 08:28:37 managed-node1 systemd-logind[665]: Removed session 9. ░░ Subject: Session 9 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 9 has been terminated. Jul 22 08:28:37 managed-node1 sshd-session[10905]: Accepted publickey for root from 10.31.42.212 port 46814 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:28:37 managed-node1 systemd-logind[665]: New session 10 of user root. ░░ Subject: A new session 10 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 10 has been created for the user root. ░░ ░░ The leading process of the session is 10905. 
Jul 22 08:28:37 managed-node1 systemd[1]: Started session-10.scope - Session 10 of User root. ░░ Subject: A start job for unit session-10.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-10.scope has finished successfully. ░░ ░░ The job identifier is 1728. Jul 22 08:28:37 managed-node1 sshd-session[10905]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:37 managed-node1 sshd-session[10908]: Received disconnect from 10.31.42.212 port 46814:11: disconnected by user Jul 22 08:28:37 managed-node1 sshd-session[10908]: Disconnected from user root 10.31.42.212 port 46814 Jul 22 08:28:37 managed-node1 sshd-session[10905]: pam_unix(sshd:session): session closed for user root Jul 22 08:28:37 managed-node1 systemd-logind[665]: Session 10 logged out. Waiting for processes to exit. Jul 22 08:28:37 managed-node1 systemd[1]: session-10.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-10.scope has successfully entered the 'dead' state. Jul 22 08:28:37 managed-node1 systemd-logind[665]: Removed session 10. ░░ Subject: Session 10 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 10 has been terminated. Jul 22 08:28:42 managed-node1 python3.12[11115]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:28:43 managed-node1 sudo[11299]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viyuqkcsfohnvpytvllyyuazsvbponri ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187323.0883253-10453-239366003414476/AnsiballZ_setup.py' Jul 22 08:28:43 managed-node1 sudo[11299]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:43 managed-node1 python3.12[11302]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:28:43 managed-node1 sudo[11299]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:45 managed-node1 sudo[11486]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ablciaphfmjrcmddvffkujkupsdnbhfm ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187324.573618-10589-146512168430482/AnsiballZ_stat.py' Jul 22 08:28:45 managed-node1 sudo[11486]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:45 managed-node1 python3.12[11490]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:28:45 managed-node1 sudo[11486]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:47 managed-node1 sudo[11645]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urfahawsyrywoqmckybxpuwpnrfvtrdk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187325.9661164-10653-255646615336625/AnsiballZ_dnf.py' Jul 22 08:28:47 managed-node1 sudo[11645]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:47 managed-node1 python3.12[11648]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 
'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:28:47 managed-node1 sudo[11645]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:49 managed-node1 sudo[11804]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fagdtwimjsdyotqxbcxoltdcafguokwb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187328.5271795-10965-111226221590789/AnsiballZ_blivet.py' Jul 22 08:28:49 managed-node1 sudo[11804]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:50 managed-node1 python3.12[11807]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:28:50 managed-node1 sudo[11804]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:51 managed-node1 sudo[11964]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txonksbyqyovulvbyvqokeckhaccokfg ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187331.4028208-11155-250456690835032/AnsiballZ_dnf.py' Jul 22 08:28:51 managed-node1 sudo[11964]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:51 managed-node1 python3.12[11967]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None 
conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:28:52 managed-node1 sudo[11964]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:53 managed-node1 sudo[12123]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poplcikqzfhownxqqmkedwbugncwjopr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187332.5205944-11278-178792943131756/AnsiballZ_service_facts.py' Jul 22 08:28:53 managed-node1 sudo[12123]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:53 managed-node1 python3.12[12128]: ansible-service_facts Invoked Jul 22 08:28:55 managed-node1 sudo[12123]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:57 managed-node1 sudo[12395]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxdrmjkvpdayhcwzvxvntglrgsuafjjw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187336.7665045-11775-27750162210296/AnsiballZ_blivet.py' Jul 22 08:28:57 managed-node1 sudo[12395]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:57 managed-node1 python3.12[12398]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:28:57 managed-node1 sudo[12395]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:58 managed-node1 sudo[12555]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bifaxhadejaswkabwmkqzibggvluxrbp ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187338.2451663-11954-67558209381648/AnsiballZ_stat.py' Jul 22 08:28:58 managed-node1 sudo[12555]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:58 managed-node1 python3.12[12558]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:28:58 managed-node1 sudo[12555]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:01 managed-node1 sudo[12715]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adlwjtnropkojfzalctxwyvwenbyqyla ; 
/usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187341.649897-12480-247948466222221/AnsiballZ_stat.py' Jul 22 08:29:01 managed-node1 sudo[12715]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:02 managed-node1 python3.12[12718]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:02 managed-node1 sudo[12715]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:02 managed-node1 sudo[12875]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcqznlpidabqwitybaewnkfvwpugqwgo ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187342.4708774-12539-37943812504949/AnsiballZ_setup.py' Jul 22 08:29:02 managed-node1 sudo[12875]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:03 managed-node1 python3.12[12878]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:29:03 managed-node1 sudo[12875]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:05 managed-node1 sudo[13062]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdtgjeinluxngeqssiazwgnyqmmgwily ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187344.6465297-12693-199330469470943/AnsiballZ_dnf.py' Jul 22 08:29:05 managed-node1 sudo[13062]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:05 managed-node1 python3.12[13065]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:29:05 managed-node1 sudo[13062]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:07 managed-node1 sudo[13221]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxhqmfmtkzatdyusabsqiudmrcopefno ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187346.1515398-12859-127729480702695/AnsiballZ_find_unused_disk.py' Jul 22 08:29:07 managed-node1 sudo[13221]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:07 managed-node1 python3.12[13224]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with max_return=1 with_interface=scsi min_size=0 max_size=0 match_sector_size=False Jul 22 08:29:07 managed-node1 sudo[13221]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:09 managed-node1 sudo[13381]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tseuxhccsnsehrrwqbkufyfxvrbewltg ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187347.9289756-13009-103912644170296/AnsiballZ_command.py' Jul 22 08:29:09 managed-node1 sudo[13381]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:09 managed-node1 python3.12[13384]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex 
_uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:29:09 managed-node1 sudo[13381]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:11 managed-node1 sshd-session[13412]: Accepted publickey for root from 10.31.42.212 port 54262 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:11 managed-node1 systemd-logind[665]: New session 11 of user root. ░░ Subject: A new session 11 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 11 has been created for the user root. ░░ ░░ The leading process of the session is 13412. Jul 22 08:29:11 managed-node1 systemd[1]: Started session-11.scope - Session 11 of User root. ░░ Subject: A start job for unit session-11.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-11.scope has finished successfully. ░░ ░░ The job identifier is 1813. Jul 22 08:29:11 managed-node1 sshd-session[13412]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:11 managed-node1 sshd-session[13415]: Received disconnect from 10.31.42.212 port 54262:11: disconnected by user Jul 22 08:29:11 managed-node1 sshd-session[13415]: Disconnected from user root 10.31.42.212 port 54262 Jul 22 08:29:11 managed-node1 sshd-session[13412]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:11 managed-node1 systemd[1]: session-11.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-11.scope has successfully entered the 'dead' state. Jul 22 08:29:11 managed-node1 systemd-logind[665]: Session 11 logged out. Waiting for processes to exit. Jul 22 08:29:11 managed-node1 systemd-logind[665]: Removed session 11. ░░ Subject: Session 11 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 11 has been terminated. Jul 22 08:29:11 managed-node1 sshd-session[13442]: Accepted publickey for root from 10.31.42.212 port 47338 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:11 managed-node1 systemd-logind[665]: New session 12 of user root. ░░ Subject: A new session 12 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 12 has been created for the user root. ░░ ░░ The leading process of the session is 13442. Jul 22 08:29:11 managed-node1 systemd[1]: Started session-12.scope - Session 12 of User root. ░░ Subject: A start job for unit session-12.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-12.scope has finished successfully. ░░ ░░ The job identifier is 1898. 
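The ansible-ansible.legacy.command call captured above runs a small diagnostic script (lsblk plus journalctl) with _uses_shell=True and the output redirected to stderr. A minimal sketch of a task that would produce an equivalent invocation, assuming a plain shell task; the task name is illustrative and not taken from the test, while the commands are copied from the logged _raw_params:

- name: Dump block device layout and journal for debugging (illustrative)
  ansible.builtin.shell: |
    set -x
    exec 1>&2
    lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
    journalctl -ex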
Jul 22 08:29:11 managed-node1 sshd-session[13442]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:11 managed-node1 sshd-session[13445]: Received disconnect from 10.31.42.212 port 47338:11: disconnected by user Jul 22 08:29:11 managed-node1 sshd-session[13445]: Disconnected from user root 10.31.42.212 port 47338 Jul 22 08:29:11 managed-node1 sshd-session[13442]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:11 managed-node1 systemd[1]: session-12.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-12.scope has successfully entered the 'dead' state. Jul 22 08:29:11 managed-node1 systemd-logind[665]: Session 12 logged out. Waiting for processes to exit. Jul 22 08:29:11 managed-node1 systemd-logind[665]: Removed session 12. ░░ Subject: Session 12 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 12 has been terminated. Jul 22 08:29:18 managed-node1 sudo[13652]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhyccpahpvrwmyedmwmvplgyjqbvvsqc ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187356.3744054-14307-147534878327385/AnsiballZ_setup.py' Jul 22 08:29:18 managed-node1 sudo[13652]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:18 managed-node1 python3.12[13655]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:29:18 managed-node1 sudo[13652]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:20 managed-node1 sudo[13839]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckvbteqhsfwvdktdmwslbmzmhzrvjubm ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187360.1138167-14768-73942653243505/AnsiballZ_stat.py' Jul 22 08:29:20 managed-node1 sudo[13839]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:21 managed-node1 python3.12[13842]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:21 managed-node1 sudo[13839]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:23 managed-node1 sudo[13997]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umxhtdafhsbfuybbrndwtdosqfsxnast ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187362.272365-14987-218390404353201/AnsiballZ_dnf.py' Jul 22 08:29:23 managed-node1 sudo[13997]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:23 managed-node1 python3.12[14000]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None 
disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:29:23 managed-node1 sudo[13997]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:26 managed-node1 sudo[14156]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjaxofzmeswnnstazprssarymxpmvqkp ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187364.5937047-15140-225801201265033/AnsiballZ_blivet.py' Jul 22 08:29:26 managed-node1 sudo[14156]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:26 managed-node1 python3.12[14159]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:29:26 managed-node1 sudo[14156]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:28 managed-node1 sudo[14316]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbazmvtbrtplqhxilcptqthwldfmqdhr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187367.8104692-15343-256893736084436/AnsiballZ_dnf.py' Jul 22 08:29:28 managed-node1 sudo[14316]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:28 managed-node1 python3.12[14319]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:29:28 managed-node1 sudo[14316]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:30 managed-node1 sudo[14475]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqtmqbkxyoojalptcxfrrpuoihphzlgo ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187369.1471193-15484-179023358169874/AnsiballZ_service_facts.py' Jul 22 08:29:30 managed-node1 
sudo[14475]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:30 managed-node1 python3.12[14478]: ansible-service_facts Invoked Jul 22 08:29:32 managed-node1 sudo[14475]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:34 managed-node1 sudo[14745]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdxkhbipuvzjebnfcywqjgkgvffkhcta ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187373.7400515-15862-5715228022565/AnsiballZ_blivet.py' Jul 22 08:29:34 managed-node1 sudo[14745]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:34 managed-node1 python3.12[14748]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:29:34 managed-node1 sudo[14745]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:35 managed-node1 sudo[14905]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfvmqbfdwmvufufrpllvtyvzugiypjzz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187375.161194-16060-200436638440645/AnsiballZ_stat.py' Jul 22 08:29:35 managed-node1 sudo[14905]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:35 managed-node1 python3.12[14908]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:35 managed-node1 sudo[14905]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:38 managed-node1 sudo[15065]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlujxpfbeifkubqvzvgkjztldoifvavo ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187378.5936067-16397-246914680323863/AnsiballZ_stat.py' Jul 22 08:29:38 managed-node1 sudo[15065]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:39 managed-node1 python3.12[15068]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:39 managed-node1 sudo[15065]: pam_unix(sudo:session): session closed for user root Jul 
22 08:29:40 managed-node1 sudo[15225]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyfsvwgejbwcgehvobmtulikrifgbpbe ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187379.6793857-16485-141753480769954/AnsiballZ_setup.py' Jul 22 08:29:40 managed-node1 sudo[15225]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:40 managed-node1 python3.12[15228]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:29:40 managed-node1 sudo[15225]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:42 managed-node1 sudo[15412]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjpcmxchcfgdylbjimexhixgwbjojeoq ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187382.1756427-16682-277283411214680/AnsiballZ_dnf.py' Jul 22 08:29:42 managed-node1 sudo[15412]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:42 managed-node1 python3.12[15415]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:29:43 managed-node1 sudo[15412]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:45 managed-node1 sudo[15571]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmipwrszmskhabvkgbjnonxhpgypfytz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187383.675105-16775-242069300989646/AnsiballZ_find_unused_disk.py' Jul 22 08:29:45 managed-node1 sudo[15571]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:45 managed-node1 python3.12[15574]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=10g max_return=1 max_size=0 match_sector_size=False with_interface=None Jul 22 08:29:45 managed-node1 sudo[15571]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:47 managed-node1 sudo[15731]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vojezwffialkptaanorhwufsctnuetxe ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187385.6810286-16993-6161900817509/AnsiballZ_command.py' Jul 22 08:29:47 managed-node1 sudo[15731]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:47 managed-node1 python3.12[15734]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:29:47 managed-node1 sudo[15731]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:49 managed-node1 sshd-session[15762]: Accepted publickey for root from 10.31.42.212 port 48828 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:49 managed-node1 systemd-logind[665]: New session 13 of user root. 
░░ Subject: A new session 13 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 13 has been created for the user root. ░░ ░░ The leading process of the session is 15762. Jul 22 08:29:49 managed-node1 systemd[1]: Started session-13.scope - Session 13 of User root. ░░ Subject: A start job for unit session-13.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-13.scope has finished successfully. ░░ ░░ The job identifier is 1983. Jul 22 08:29:49 managed-node1 sshd-session[15762]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:49 managed-node1 sshd-session[15765]: Received disconnect from 10.31.42.212 port 48828:11: disconnected by user Jul 22 08:29:49 managed-node1 sshd-session[15765]: Disconnected from user root 10.31.42.212 port 48828 Jul 22 08:29:49 managed-node1 sshd-session[15762]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:49 managed-node1 systemd[1]: session-13.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-13.scope has successfully entered the 'dead' state. Jul 22 08:29:49 managed-node1 systemd-logind[665]: Session 13 logged out. Waiting for processes to exit. Jul 22 08:29:49 managed-node1 systemd-logind[665]: Removed session 13. ░░ Subject: Session 13 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 13 has been terminated. Jul 22 08:29:50 managed-node1 sshd-session[15792]: Accepted publickey for root from 10.31.42.212 port 48834 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:50 managed-node1 systemd-logind[665]: New session 14 of user root. ░░ Subject: A new session 14 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 14 has been created for the user root. ░░ ░░ The leading process of the session is 15792. Jul 22 08:29:50 managed-node1 systemd[1]: Started session-14.scope - Session 14 of User root. ░░ Subject: A start job for unit session-14.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-14.scope has finished successfully. ░░ ░░ The job identifier is 2068. Jul 22 08:29:50 managed-node1 sshd-session[15792]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:50 managed-node1 sshd-session[15795]: Received disconnect from 10.31.42.212 port 48834:11: disconnected by user Jul 22 08:29:50 managed-node1 sshd-session[15795]: Disconnected from user root 10.31.42.212 port 48834 Jul 22 08:29:50 managed-node1 sshd-session[15792]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:50 managed-node1 systemd-logind[665]: Session 14 logged out. Waiting for processes to exit. Jul 22 08:29:50 managed-node1 systemd[1]: session-14.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-14.scope has successfully entered the 'dead' state. Jul 22 08:29:50 managed-node1 systemd-logind[665]: Removed session 14. 
░░ Subject: Session 14 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 14 has been terminated. Jul 22 08:29:55 managed-node1 sshd-session[15822]: Accepted publickey for root from 10.31.42.212 port 44940 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:55 managed-node1 systemd-logind[665]: New session 15 of user root. ░░ Subject: A new session 15 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 15 has been created for the user root. ░░ ░░ The leading process of the session is 15822. Jul 22 08:29:55 managed-node1 systemd[1]: Started session-15.scope - Session 15 of User root. ░░ Subject: A start job for unit session-15.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-15.scope has finished successfully. ░░ ░░ The job identifier is 2153. Jul 22 08:29:55 managed-node1 sshd-session[15822]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:55 managed-node1 sshd-session[15825]: Received disconnect from 10.31.42.212 port 44940:11: disconnected by user Jul 22 08:29:55 managed-node1 sshd-session[15825]: Disconnected from user root 10.31.42.212 port 44940 Jul 22 08:29:55 managed-node1 sshd-session[15822]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:55 managed-node1 systemd[1]: session-15.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-15.scope has successfully entered the 'dead' state. Jul 22 08:29:55 managed-node1 systemd-logind[665]: Session 15 logged out. Waiting for processes to exit. Jul 22 08:29:55 managed-node1 systemd-logind[665]: Removed session 15. ░░ Subject: Session 15 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 15 has been terminated. 
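The find_unused_disk invocation at 08:29:45 asks for a single unused disk of at least 10g. A minimal sketch of such a call, assuming it is issued as an ordinary task; the task name and register variable are illustrative, and only the parameters shown in the logged call are used:

- name: Find one unused disk of at least 10 GiB (illustrative)
  fedora.linux_system_roles.find_unused_disk:
    min_size: "10g"
    max_return: 1
  register: unused_disks   # hypothetical variable name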
Jul 22 08:30:02 managed-node1 python3.12[16032]: ansible-setup Invoked with gather_subset=['!all', '!min', 'architecture', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:30:05 managed-node1 python3.12[16190]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:07 managed-node1 python3.12[16345]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Jul 22 08:30:08 managed-node1 python3.12[16425]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:30:11 managed-node1 python3.12[16581]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:30:14 managed-node1 python3.12[16738]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Jul 22 08:30:14 managed-node1 python3.12[16818]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False 
update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:30:16 managed-node1 python3.12[16974]: ansible-service_facts Invoked Jul 22 08:30:20 managed-node1 python3.12[17241]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:30:21 managed-node1 python3.12[17399]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:23 managed-node1 python3.12[17556]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:24 managed-node1 python3.12[17713]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:30:25 managed-node1 sshd-session[17768]: Accepted publickey for root from 10.31.42.212 port 49450 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:25 managed-node1 systemd-logind[665]: New session 16 of user root. ░░ Subject: A new session 16 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 16 has been created for the user root. ░░ ░░ The leading process of the session is 17768. Jul 22 08:30:25 managed-node1 systemd[1]: Started session-16.scope - Session 16 of User root. ░░ Subject: A start job for unit session-16.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-16.scope has finished successfully. ░░ ░░ The job identifier is 2238. 
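The dnf invocation at 08:30:08 installs the blivet and Stratis tooling needed by the role. A hedged sketch of an equivalent package task; the task name is illustrative, and the package names are copied verbatim from the logged call:

- name: Ensure the blivet tooling packages are present (illustrative)
  ansible.builtin.dnf:
    name:
      - python3-blivet
      - libblockdev-crypto
      - libblockdev-dm
      - libblockdev-fs
      - libblockdev-lvm
      - libblockdev-mdraid
      - libblockdev-swap
      - xfsprogs
      - stratisd
      - stratis-cli
      - libblockdev
      - vdo
    state: present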
Jul 22 08:30:25 managed-node1 sshd-session[17768]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:25 managed-node1 sshd-session[17771]: Received disconnect from 10.31.42.212 port 49450:11: disconnected by user Jul 22 08:30:25 managed-node1 sshd-session[17771]: Disconnected from user root 10.31.42.212 port 49450 Jul 22 08:30:25 managed-node1 sshd-session[17768]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:25 managed-node1 systemd[1]: session-16.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-16.scope has successfully entered the 'dead' state. Jul 22 08:30:25 managed-node1 systemd-logind[665]: Session 16 logged out. Waiting for processes to exit. Jul 22 08:30:25 managed-node1 systemd-logind[665]: Removed session 16. ░░ Subject: Session 16 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 16 has been terminated. Jul 22 08:30:30 managed-node1 sudo[17978]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkjcjymtcfjzsaidltkzsatdgdigapkl ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187429.7277029-22605-169324923559528/AnsiballZ_setup.py' Jul 22 08:30:30 managed-node1 sudo[17978]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:31 managed-node1 python3.12[17981]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:30:31 managed-node1 sudo[17978]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:33 managed-node1 sudo[18165]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egngofuphkeayedvukzfuexvjzouiomz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187432.8886745-22912-145767608330960/AnsiballZ_stat.py' Jul 22 08:30:33 managed-node1 sudo[18165]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:33 managed-node1 python3.12[18170]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:33 managed-node1 sudo[18165]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:35 managed-node1 sudo[18325]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tavxbahbmgbhybwccqbzwifbctolavyi ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187434.555066-23069-137148777169442/AnsiballZ_dnf.py' Jul 22 08:30:35 managed-node1 sudo[18325]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:35 managed-node1 python3.12[18328]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None 
disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:30:35 managed-node1 sudo[18325]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:37 managed-node1 sudo[18484]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxwmvpqnlilfpuaraahmenmhszyvgrno ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187436.8286252-23298-267926382363176/AnsiballZ_blivet.py' Jul 22 08:30:37 managed-node1 sudo[18484]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:38 managed-node1 python3.12[18487]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:30:38 managed-node1 sudo[18484]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:39 managed-node1 sudo[18644]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtqwonnimzbjrtbjrinlzfkwljhmzbap ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187439.3901389-23656-230843946274519/AnsiballZ_dnf.py' Jul 22 08:30:39 managed-node1 sudo[18644]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:39 managed-node1 python3.12[18647]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:30:40 managed-node1 sudo[18644]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:41 managed-node1 sudo[18803]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iomjoaoycwctuzojatoteihgqmtfanti ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187440.5945814-23736-107349519350648/AnsiballZ_service_facts.py' Jul 22 08:30:41 managed-node1 
sudo[18803]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:41 managed-node1 python3.12[18806]: ansible-service_facts Invoked Jul 22 08:30:43 managed-node1 sudo[18803]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:44 managed-node1 sudo[19073]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utezzhgfcrigqomwxrfsnnxfgvtjtblt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187444.4858382-24269-220127278983784/AnsiballZ_blivet.py' Jul 22 08:30:44 managed-node1 sudo[19073]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:45 managed-node1 python3.12[19076]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:30:45 managed-node1 sudo[19073]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:45 managed-node1 sudo[19233]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iposxkresyrabjkyxqvoqprzlqhwrboj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187445.5346177-24445-280069227930865/AnsiballZ_stat.py' Jul 22 08:30:45 managed-node1 sudo[19233]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:45 managed-node1 python3.12[19236]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:45 managed-node1 sudo[19233]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:48 managed-node1 sudo[19393]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwkqczdythvktxjssdohyomfylzgtpwr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187448.1529999-24701-56627844494914/AnsiballZ_stat.py' Jul 22 08:30:48 managed-node1 sudo[19393]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:48 managed-node1 python3.12[19396]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:48 managed-node1 sudo[19393]: pam_unix(sudo:session): session closed for user root 
Jul 22 08:30:49 managed-node1 sudo[19553]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vagyzpkdmbbbjliraitghfokdtxphzxm ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187449.0775352-24843-158471205585106/AnsiballZ_setup.py' Jul 22 08:30:49 managed-node1 sudo[19553]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:49 managed-node1 python3.12[19556]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:30:49 managed-node1 sudo[19553]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:51 managed-node1 sudo[19740]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhcwlkwtoiuliozgeqltvmqurpxdcpul ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187450.9811902-25181-83315031814396/AnsiballZ_dnf.py' Jul 22 08:30:51 managed-node1 sudo[19740]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:51 managed-node1 python3.12[19743]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:30:51 managed-node1 sudo[19740]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:52 managed-node1 sudo[19899]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gssvbthrgsxjbrwgnzdmtbzefbkptabx ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187452.0067797-25287-231615887582331/AnsiballZ_find_unused_disk.py' Jul 22 08:30:52 managed-node1 sudo[19899]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:53 managed-node1 python3.12[19902]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with max_return=2 match_sector_size=True min_size=0 max_size=0 with_interface=None Jul 22 08:30:54 managed-node1 sudo[19899]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:54 managed-node1 sshd-session[19930]: Accepted publickey for root from 10.31.42.212 port 46544 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:54 managed-node1 systemd-logind[665]: New session 17 of user root. ░░ Subject: A new session 17 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 17 has been created for the user root. ░░ ░░ The leading process of the session is 19930. Jul 22 08:30:54 managed-node1 systemd[1]: Started session-17.scope - Session 17 of User root. ░░ Subject: A start job for unit session-17.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-17.scope has finished successfully. ░░ ░░ The job identifier is 2323. 
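Each blivet invocation above carries the full pool/volume defaults; the values that change between calls are packages_only and safe_mode (compare the 08:30:38 call, packages_only=True safe_mode=True, with the 08:30:45 call, packages_only=False safe_mode=False). A minimal sketch of the two call shapes, assuming direct module use; task names are illustrative and only parameters present in the logged dumps are shown:

- name: Query which packages the requested layout would need (illustrative)
  fedora.linux_system_roles.blivet:
    pools: []
    volumes: []
    packages_only: true
    safe_mode: true

- name: Apply the requested layout (illustrative)
  fedora.linux_system_roles.blivet:
    pools: []
    volumes: []
    packages_only: false
    safe_mode: false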
Jul 22 08:30:54 managed-node1 sshd-session[19930]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:54 managed-node1 sshd-session[19933]: Received disconnect from 10.31.42.212 port 46544:11: disconnected by user Jul 22 08:30:54 managed-node1 sshd-session[19933]: Disconnected from user root 10.31.42.212 port 46544 Jul 22 08:30:54 managed-node1 sshd-session[19930]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:54 managed-node1 systemd-logind[665]: Session 17 logged out. Waiting for processes to exit. Jul 22 08:30:54 managed-node1 systemd[1]: session-17.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-17.scope has successfully entered the 'dead' state. Jul 22 08:30:54 managed-node1 systemd-logind[665]: Removed session 17. ░░ Subject: Session 17 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 17 has been terminated. Jul 22 08:30:55 managed-node1 sshd-session[19960]: Accepted publickey for root from 10.31.42.212 port 46550 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:55 managed-node1 systemd-logind[665]: New session 18 of user root. ░░ Subject: A new session 18 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 18 has been created for the user root. ░░ ░░ The leading process of the session is 19960. Jul 22 08:30:55 managed-node1 systemd[1]: Started session-18.scope - Session 18 of User root. ░░ Subject: A start job for unit session-18.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-18.scope has finished successfully. ░░ ░░ The job identifier is 2408. Jul 22 08:30:55 managed-node1 sshd-session[19960]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:55 managed-node1 sshd-session[19963]: Received disconnect from 10.31.42.212 port 46550:11: disconnected by user Jul 22 08:30:55 managed-node1 sshd-session[19963]: Disconnected from user root 10.31.42.212 port 46550 Jul 22 08:30:55 managed-node1 sshd-session[19960]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:55 managed-node1 systemd[1]: session-18.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-18.scope has successfully entered the 'dead' state. Jul 22 08:30:55 managed-node1 systemd-logind[665]: Session 18 logged out. Waiting for processes to exit. Jul 22 08:30:55 managed-node1 systemd-logind[665]: Removed session 18. ░░ Subject: Session 18 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 18 has been terminated. 
Jul 22 08:30:58 managed-node1 python3.12[20170]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:30:59 managed-node1 python3.12[20354]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:31:13 managed-node1 python3.12[20538]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:15 managed-node1 python3.12[20693]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:31:18 managed-node1 python3.12[20849]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:31:19 managed-node1 python3.12[21006]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:31:21 managed-node1 python3.12[21162]: ansible-service_facts Invoked Jul 22 08:31:23 managed-node1 
python3.12[21429]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:31:24 managed-node1 python3.12[21586]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:28 managed-node1 python3.12[21743]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:29 managed-node1 python3.12[21900]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:31:30 managed-node1 sshd-session[21955]: Accepted publickey for root from 10.31.42.212 port 55388 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:31:30 managed-node1 systemd-logind[665]: New session 19 of user root. ░░ Subject: A new session 19 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 19 has been created for the user root. ░░ ░░ The leading process of the session is 21955. Jul 22 08:31:30 managed-node1 systemd[1]: Started session-19.scope - Session 19 of User root. ░░ Subject: A start job for unit session-19.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-19.scope has finished successfully. ░░ ░░ The job identifier is 2493. Jul 22 08:31:30 managed-node1 sshd-session[21955]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:30 managed-node1 sshd-session[21958]: Received disconnect from 10.31.42.212 port 55388:11: disconnected by user Jul 22 08:31:30 managed-node1 sshd-session[21958]: Disconnected from user root 10.31.42.212 port 55388 Jul 22 08:31:30 managed-node1 sshd-session[21955]: pam_unix(sshd:session): session closed for user root Jul 22 08:31:30 managed-node1 systemd[1]: session-19.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-19.scope has successfully entered the 'dead' state. 
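The ansible-stat calls at 08:31:24 and 08:31:28 record sha1 checksums of /etc/fstab and /etc/crypttab. A minimal sketch of such a check, assuming a plain stat task; the task name and register variable are illustrative, the parameters mirror the logged call:

- name: Record the current state of /etc/fstab (illustrative)
  ansible.builtin.stat:
    path: /etc/fstab
    checksum_algorithm: sha1
  register: fstab_stat   # hypothetical variable name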
Jul 22 08:31:30 managed-node1 systemd-logind[665]: Session 19 logged out. Waiting for processes to exit. Jul 22 08:31:30 managed-node1 systemd-logind[665]: Removed session 19. ░░ Subject: Session 19 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 19 has been terminated. Jul 22 08:31:33 managed-node1 sshd-session[21985]: Accepted publickey for root from 10.31.42.212 port 51334 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:31:33 managed-node1 systemd-logind[665]: New session 20 of user root. ░░ Subject: A new session 20 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 20 has been created for the user root. ░░ ░░ The leading process of the session is 21985. Jul 22 08:31:33 managed-node1 systemd[1]: Started session-20.scope - Session 20 of User root. ░░ Subject: A start job for unit session-20.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-20.scope has finished successfully. ░░ ░░ The job identifier is 2578. Jul 22 08:31:33 managed-node1 sshd-session[21985]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:33 managed-node1 sshd-session[21988]: Received disconnect from 10.31.42.212 port 51334:11: disconnected by user Jul 22 08:31:33 managed-node1 sshd-session[21988]: Disconnected from user root 10.31.42.212 port 51334 Jul 22 08:31:33 managed-node1 sshd-session[21985]: pam_unix(sshd:session): session closed for user root Jul 22 08:31:33 managed-node1 systemd[1]: session-20.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-20.scope has successfully entered the 'dead' state. Jul 22 08:31:33 managed-node1 systemd-logind[665]: Session 20 logged out. Waiting for processes to exit. Jul 22 08:31:33 managed-node1 systemd-logind[665]: Removed session 20. ░░ Subject: Session 20 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 20 has been terminated. 
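[Editor's note] Both blivet calls above run with pools=[] and volumes=[], i.e. the role is exercised with nothing to manage. A hedged sketch of a play that would produce this pattern; the actual task list in tests_misc.yml is not reproduced in this journal, so the wording is illustrative:

    - hosts: managed-node1
      become: true    # the journal shows every module running under sudo
      tasks:
        - name: Include the storage role with empty pool/volume lists
          ansible.builtin.include_role:
            name: fedora.linux_system_roles.storage
          vars:
            storage_pools: []
            storage_volumes: []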
Jul 22 08:31:39 managed-node1 sudo[22195]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmvjildnrqnhlbtnnbmtpyyndijxarqg ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187498.220983-30745-251852744374365/AnsiballZ_setup.py' Jul 22 08:31:39 managed-node1 sudo[22195]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:40 managed-node1 python3.12[22198]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:31:40 managed-node1 sudo[22195]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:42 managed-node1 sudo[22382]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayisjfhryxlqstdcpdthroqmxyvfmvet ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187501.73536-30996-275755916118926/AnsiballZ_stat.py' Jul 22 08:31:42 managed-node1 sudo[22382]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:42 managed-node1 python3.12[22385]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:42 managed-node1 sudo[22382]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:44 managed-node1 sudo[22540]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlslfocfskdmodruyjhtqkqdedllzmpn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187503.6865232-31294-278182915417160/AnsiballZ_dnf.py' Jul 22 08:31:44 managed-node1 sudo[22540]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:44 managed-node1 python3.12[22543]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:31:45 managed-node1 sudo[22540]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:47 managed-node1 sudo[22699]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyowddefffvllcayjqczwtwyegrpkdar ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187505.9346056-31594-140083708201553/AnsiballZ_blivet.py' Jul 22 08:31:47 managed-node1 sudo[22699]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:47 managed-node1 python3.12[22702]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 
'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:31:47 managed-node1 sudo[22699]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:49 managed-node1 sudo[22859]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okapkuetovcbpnazthamtchcpmygboqr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187508.8322294-31841-43052225814864/AnsiballZ_dnf.py' Jul 22 08:31:49 managed-node1 sudo[22859]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:49 managed-node1 python3.12[22862]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:31:49 managed-node1 sudo[22859]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:50 managed-node1 sudo[23018]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaikormvyjgwqgqwimsrwvgodqltfoni ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187509.9976726-32068-178379251812481/AnsiballZ_service_facts.py' Jul 22 08:31:50 managed-node1 sudo[23018]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:51 managed-node1 python3.12[23021]: ansible-service_facts Invoked Jul 22 08:31:52 managed-node1 sudo[23018]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:54 managed-node1 sudo[23288]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnfereohvzefxbtuifgkpzmnqdchtvmi ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187513.829118-32528-220714967588984/AnsiballZ_blivet.py' Jul 22 08:31:54 managed-node1 sudo[23288]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:54 managed-node1 python3.12[23291]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': 
None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:31:54 managed-node1 sudo[23288]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:55 managed-node1 sudo[23448]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jehspfgdjpytxhwriuvutubimlknevco ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187514.9048011-32596-224261858530589/AnsiballZ_stat.py' Jul 22 08:31:55 managed-node1 sudo[23448]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:55 managed-node1 python3.12[23451]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:55 managed-node1 sudo[23448]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:57 managed-node1 sudo[23608]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqfywsxjptqumxvzuyqiaxorhkknfabr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187517.4276502-32894-63938633615768/AnsiballZ_stat.py' Jul 22 08:31:57 managed-node1 sudo[23608]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:57 managed-node1 python3.12[23611]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:57 managed-node1 sudo[23608]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:58 managed-node1 sudo[23768]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcooxejkmvopaizenzrveikkyxnbbxqm ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187518.2735956-33007-151301397206817/AnsiballZ_setup.py' Jul 22 08:31:58 managed-node1 sudo[23768]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:58 managed-node1 python3.12[23771]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:31:59 managed-node1 sudo[23768]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:00 managed-node1 sudo[23956]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckujvccflfwmtycohurnafnkkloaboqm ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187519.9143977-33156-240635731629412/AnsiballZ_dnf.py' Jul 22 08:32:00 managed-node1 sudo[23956]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:00 managed-node1 python3.12[23959]: 
ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:32:00 managed-node1 sudo[23956]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:01 managed-node1 sudo[24115]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjmcaxqzyqslluuqbpqnmvccbxekwngc ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187521.2361045-33378-220431241718648/AnsiballZ_find_unused_disk.py' Jul 22 08:32:01 managed-node1 sudo[24115]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:02 managed-node1 python3.12[24118]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=10g max_return=1 max_size=0 match_sector_size=False with_interface=None Jul 22 08:32:02 managed-node1 sudo[24115]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:03 managed-node1 sudo[24275]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahnuaiiypqtnvfarxdgnhtjpanuqemyu ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187522.367874-33495-186548938042830/AnsiballZ_command.py' Jul 22 08:32:03 managed-node1 sudo[24275]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:03 managed-node1 python3.12[24278]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:32:03 managed-node1 sudo[24275]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:05 managed-node1 sshd-session[24306]: Accepted publickey for root from 10.31.42.212 port 53782 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:32:05 managed-node1 systemd-logind[665]: New session 21 of user root. ░░ Subject: A new session 21 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 21 has been created for the user root. ░░ ░░ The leading process of the session is 24306. Jul 22 08:32:05 managed-node1 systemd[1]: Started session-21.scope - Session 21 of User root. ░░ Subject: A start job for unit session-21.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-21.scope has finished successfully. ░░ ░░ The job identifier is 2663. 
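[Editor's note] The util-linux-core install at 08:32:00 corresponds to the "Ensure test packages" task timed in the recap (get_unused_disk.yml:2). A plausible reconstruction, inferred from the module arguments rather than copied from the test file:

    - name: Ensure test packages
      ansible.builtin.package:
        name: util-linux-core   # assumed to be needed for the lsblk-based debug step seen below
        state: present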
Jul 22 08:32:05 managed-node1 sshd-session[24306]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:05 managed-node1 sshd-session[24309]: Received disconnect from 10.31.42.212 port 53782:11: disconnected by user Jul 22 08:32:05 managed-node1 sshd-session[24309]: Disconnected from user root 10.31.42.212 port 53782 Jul 22 08:32:05 managed-node1 sshd-session[24306]: pam_unix(sshd:session): session closed for user root Jul 22 08:32:05 managed-node1 systemd[1]: session-21.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-21.scope has successfully entered the 'dead' state. Jul 22 08:32:05 managed-node1 systemd-logind[665]: Session 21 logged out. Waiting for processes to exit. Jul 22 08:32:05 managed-node1 systemd-logind[665]: Removed session 21. ░░ Subject: Session 21 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 21 has been terminated. Jul 22 08:32:05 managed-node1 sshd-session[24336]: Accepted publickey for root from 10.31.42.212 port 53788 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:32:05 managed-node1 systemd-logind[665]: New session 22 of user root. ░░ Subject: A new session 22 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 22 has been created for the user root. ░░ ░░ The leading process of the session is 24336. Jul 22 08:32:05 managed-node1 systemd[1]: Started session-22.scope - Session 22 of User root. ░░ Subject: A start job for unit session-22.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-22.scope has finished successfully. ░░ ░░ The job identifier is 2748. Jul 22 08:32:05 managed-node1 sshd-session[24336]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:05 managed-node1 sshd-session[24340]: Received disconnect from 10.31.42.212 port 53788:11: disconnected by user Jul 22 08:32:05 managed-node1 sshd-session[24340]: Disconnected from user root 10.31.42.212 port 53788 Jul 22 08:32:05 managed-node1 sshd-session[24336]: pam_unix(sshd:session): session closed for user root Jul 22 08:32:05 managed-node1 systemd[1]: session-22.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-22.scope has successfully entered the 'dead' state. Jul 22 08:32:05 managed-node1 systemd-logind[665]: Session 22 logged out. Waiting for processes to exit. Jul 22 08:32:05 managed-node1 systemd-logind[665]: Removed session 22. ░░ Subject: Session 22 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 22 has been terminated. 
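[Editor's note] At 08:32:02 the test calls the collection's find_unused_disk module asking for a single disk of at least 10g. An approximate reconstruction of that task ("Find unused disks in the system", get_unused_disk.yml:11 in the recap); the register name is taken from the skip condition printed later in the play output:

    - name: Find unused disks in the system
      fedora.linux_system_roles.find_unused_disk:
        min_size: "10g"
        max_return: 1
      register: unused_disks_return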
Jul 22 08:32:09 managed-node1 sudo[24548]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjzklstzzkeeonihxwlfutctgkyfqcgg ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187527.7136889-34187-188463079789123/AnsiballZ_setup.py' Jul 22 08:32:09 managed-node1 sudo[24548]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:09 managed-node1 python3.12[24551]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:32:10 managed-node1 sudo[24548]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:12 managed-node1 sudo[24735]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwjdzcminfubdvzvzenliwukolhqwaqz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187531.5762882-34625-21958428842353/AnsiballZ_stat.py' Jul 22 08:32:12 managed-node1 sudo[24735]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:12 managed-node1 python3.12[24738]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:32:12 managed-node1 sudo[24735]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:14 managed-node1 sudo[24893]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpydaclpfdfkyydtkfshjxtfrdywfgyk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187533.4157908-35038-237265268089893/AnsiballZ_dnf.py' Jul 22 08:32:14 managed-node1 sudo[24893]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:14 managed-node1 python3.12[24896]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:32:14 managed-node1 sudo[24893]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:16 managed-node1 sudo[25052]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhiounxswicwlisakapnajsxmdytpaox ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187535.1559052-35103-152852521728232/AnsiballZ_blivet.py' Jul 22 08:32:16 managed-node1 sudo[25052]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:16 managed-node1 python3.12[25055]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 
'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:32:16 managed-node1 sudo[25052]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:18 managed-node1 sudo[25212]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyxjazwwsdpbnneqtlqiyeoyeopixqwz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187537.7516932-35260-163978836494256/AnsiballZ_dnf.py' Jul 22 08:32:18 managed-node1 sudo[25212]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:18 managed-node1 python3.12[25215]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:32:18 managed-node1 sudo[25212]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:20 managed-node1 sudo[25371]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptfefxszjmvoltbovrrcsacchpstujix ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187538.9893339-35431-134410450634941/AnsiballZ_service_facts.py' Jul 22 08:32:20 managed-node1 sudo[25371]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:20 managed-node1 python3.12[25374]: ansible-service_facts Invoked Jul 22 08:32:21 managed-node1 sudo[25371]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:22 managed-node1 sudo[25641]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdhvdyubnafldwolijhdaraduugfbhxb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187542.6290746-35996-181136214293786/AnsiballZ_blivet.py' Jul 22 08:32:22 managed-node1 sudo[25641]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:23 managed-node1 python3.12[25644]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': 
None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:32:23 managed-node1 sudo[25641]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:23 managed-node1 sudo[25801]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxlgelgdbjiicfquzgrxtxrneqzsuojo ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187543.607004-36083-80378421253856/AnsiballZ_stat.py' Jul 22 08:32:23 managed-node1 sudo[25801]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:24 managed-node1 python3.12[25804]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:32:24 managed-node1 sudo[25801]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:25 managed-node1 systemd[4460]: Created slice background.slice - User Background Tasks Slice. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 14. Jul 22 08:32:25 managed-node1 systemd[4460]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... ░░ Subject: A start job for unit UNIT has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has begun execution. ░░ ░░ The job identifier is 13. Jul 22 08:32:25 managed-node1 systemd[4460]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. ░░ Subject: A start job for unit UNIT has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit UNIT has finished successfully. ░░ ░░ The job identifier is 13. 
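[Editor's note] Unlike the earlier passes, the blivet call at 08:32:23 runs with safe_mode=False. In the role this switch is normally exposed as the storage_safe_mode variable, so the invocation presumably looks roughly like the sketch below; how the test actually sets it is not visible in this journal:

    - name: Include the storage role with safe mode disabled
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.storage
      vars:
        storage_safe_mode: false
        storage_pools: []
        storage_volumes: []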
Jul 22 08:32:26 managed-node1 sudo[25963]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlepaxwxcvumyrjanqtftmevmbrbvmaq ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187546.5193486-36328-40700754592066/AnsiballZ_stat.py' Jul 22 08:32:26 managed-node1 sudo[25963]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:26 managed-node1 python3.12[25966]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:32:26 managed-node1 sudo[25963]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:27 managed-node1 sudo[26123]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqoysnywkbhojxnvjaakvkmlndjfphxy ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187547.2686357-36415-100195800552587/AnsiballZ_setup.py' Jul 22 08:32:27 managed-node1 sudo[26123]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:27 managed-node1 python3.12[26126]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:32:28 managed-node1 sudo[26123]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:28 managed-node1 sudo[26310]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzmyoubtqhgakmvmiqaoggoxsidmvwsd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187548.6690793-36699-237558520385656/AnsiballZ_dnf.py' Jul 22 08:32:28 managed-node1 sudo[26310]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:29 managed-node1 python3.12[26313]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:32:29 managed-node1 sudo[26310]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:30 managed-node1 sudo[26469]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcyfsrcmengjrflzscacjelyehjnutsi ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187549.9040072-36871-182403922529287/AnsiballZ_find_unused_disk.py' Jul 22 08:32:30 managed-node1 sudo[26469]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:30 managed-node1 python3.12[26472]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=10g max_return=1 max_size=0 match_sector_size=False with_interface=None Jul 22 08:32:30 managed-node1 sudo[26469]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:32 managed-node1 sudo[26629]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brmkezigqcjjpjplrimzbebbeqayhtpb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187551.1223876-36982-219410507638334/AnsiballZ_command.py' Jul 22 08:32:32 managed-node1 sudo[26629]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:32 managed-node1 
python3.12[26632]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:32:32 managed-node1 sudo[26629]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:33 managed-node1 sshd-session[26660]: Accepted publickey for root from 10.31.42.212 port 37930 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:32:33 managed-node1 systemd-logind[665]: New session 23 of user root. ░░ Subject: A new session 23 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 23 has been created for the user root. ░░ ░░ The leading process of the session is 26660. Jul 22 08:32:33 managed-node1 systemd[1]: Started session-23.scope - Session 23 of User root. ░░ Subject: A start job for unit session-23.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-23.scope has finished successfully. ░░ ░░ The job identifier is 2833. Jul 22 08:32:33 managed-node1 sshd-session[26660]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:33 managed-node1 sshd-session[26663]: Received disconnect from 10.31.42.212 port 37930:11: disconnected by user Jul 22 08:32:33 managed-node1 sshd-session[26663]: Disconnected from user root 10.31.42.212 port 37930 Jul 22 08:32:33 managed-node1 sshd-session[26660]: pam_unix(sshd:session): session closed for user root Jul 22 08:32:33 managed-node1 systemd-logind[665]: Session 23 logged out. Waiting for processes to exit. Jul 22 08:32:33 managed-node1 systemd[1]: session-23.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-23.scope has successfully entered the 'dead' state. Jul 22 08:32:33 managed-node1 systemd-logind[665]: Removed session 23. ░░ Subject: Session 23 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 23 has been terminated. Jul 22 08:32:34 managed-node1 sshd-session[26690]: Accepted publickey for root from 10.31.42.212 port 37946 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:32:34 managed-node1 systemd-logind[665]: New session 24 of user root. ░░ Subject: A new session 24 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 24 has been created for the user root. ░░ ░░ The leading process of the session is 26690. Jul 22 08:32:34 managed-node1 systemd[1]: Started session-24.scope - Session 24 of User root. ░░ Subject: A start job for unit session-24.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-24.scope has finished successfully. ░░ ░░ The job identifier is 2918. 
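[Editor's note] The ansible.legacy.command call above (lsblk plus journalctl, redirected to stderr) is the "Debug why there are no unused disks" task from get_unused_disk.yml:20. Its likely shape, reconstructed from the recorded _raw_params:

    - name: Debug why there are no unused disks
      ansible.builtin.shell: |
        set -x
        exec 1>&2
        lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
        journalctl -ex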
Jul 22 08:32:34 managed-node1 sshd-session[26690]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:34 managed-node1 sshd-session[26693]: Received disconnect from 10.31.42.212 port 37946:11: disconnected by user Jul 22 08:32:34 managed-node1 sshd-session[26693]: Disconnected from user root 10.31.42.212 port 37946 Jul 22 08:32:34 managed-node1 sshd-session[26690]: pam_unix(sshd:session): session closed for user root Jul 22 08:32:34 managed-node1 systemd[1]: session-24.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-24.scope has successfully entered the 'dead' state. Jul 22 08:32:34 managed-node1 systemd-logind[665]: Session 24 logged out. Waiting for processes to exit. Jul 22 08:32:34 managed-node1 systemd-logind[665]: Removed session 24. ░░ Subject: Session 24 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 24 has been terminated. Jul 22 08:32:36 managed-node1 sudo[26900]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epksubbottkvhcmwftshdzuafvjizijv ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187556.0548923-37958-30743675281504/AnsiballZ_setup.py' Jul 22 08:32:36 managed-node1 sudo[26900]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:37 managed-node1 python3.12[26903]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:32:37 managed-node1 sudo[26900]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:39 managed-node1 sudo[27087]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypiemkqukrquinyaqmseerhebxzfrmwj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187558.5200486-38083-105726272264862/AnsiballZ_stat.py' Jul 22 08:32:39 managed-node1 sudo[27087]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:39 managed-node1 python3.12[27090]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:32:39 managed-node1 sudo[27087]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:41 managed-node1 sudo[27245]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajqlyrsvckoakspzidgxvxsdoopzextj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187560.2093935-38292-62605543080824/AnsiballZ_dnf.py' Jul 22 08:32:41 managed-node1 sudo[27245]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:41 managed-node1 python3.12[27248]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None 
disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:32:41 managed-node1 sudo[27245]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:43 managed-node1 sudo[27404]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsxigvimdepmmjzxzxzkixrqheworfuc ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187562.4055035-38511-43036034194764/AnsiballZ_blivet.py' Jul 22 08:32:43 managed-node1 sudo[27404]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:43 managed-node1 python3.12[27407]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:32:43 managed-node1 sudo[27404]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:44 managed-node1 sudo[27564]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlswpfinpkrhaftemcedrgfpjowyzkoa ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187564.2797809-38867-215063390384850/AnsiballZ_dnf.py' Jul 22 08:32:44 managed-node1 sudo[27564]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:44 managed-node1 python3.12[27567]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:32:45 managed-node1 sudo[27564]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:46 managed-node1 sudo[27723]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnavlfkokmfsnndhigmnujqvmmcfrrsf ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187565.2815034-38925-140776897449191/AnsiballZ_service_facts.py' Jul 22 08:32:46 managed-node1 
sudo[27723]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:46 managed-node1 python3.12[27726]: ansible-service_facts Invoked Jul 22 08:32:47 managed-node1 sudo[27723]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:49 managed-node1 sudo[27993]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wksokntmesalliifqnepvnofbqxqpzum ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187568.9494967-39270-265640134171612/AnsiballZ_blivet.py' Jul 22 08:32:49 managed-node1 sudo[27993]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:49 managed-node1 python3.12[27996]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:32:49 managed-node1 sudo[27993]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:50 managed-node1 sudo[28153]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldssyghuutwbolmjwvxhdsgmqxetwpiw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187569.950067-39332-159332036947937/AnsiballZ_stat.py' Jul 22 08:32:50 managed-node1 sudo[28153]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:50 managed-node1 python3.12[28156]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:32:50 managed-node1 sudo[28153]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:52 managed-node1 sudo[28313]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxsbfepqkkywggmnrxjftfedwzqhswmy ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187572.1230147-39494-89191869188075/AnsiballZ_stat.py' Jul 22 08:32:52 managed-node1 sudo[28313]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:52 managed-node1 python3.12[28316]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:32:52 managed-node1 sudo[28313]: pam_unix(sudo:session): session closed for user root 
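[Editor's note] The play output further below shows "Set unused_disks if necessary" being skipped because the module returned no usable disk, followed by a fatal "Exit playbook when there's not enough unused disks in the system" task. A hedged reconstruction of those two tasks (get_unused_disk.yml:29 and :34); the when: condition on the first task is copied from the printed false_condition, while the real exit task almost certainly carries a disk-count guard that this journal does not reveal:

    - name: Set unused_disks if necessary
      ansible.builtin.set_fact:
        unused_disks: "{{ unused_disks_return.disks }}"
      when: "'Unable to find unused disk' not in unused_disks_return.disks"

    - name: Exit playbook when there's not enough unused disks in the system
      ansible.builtin.fail:
        msg: "Unable to find enough unused disks. Exiting playbook."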
Jul 22 08:32:53 managed-node1 sudo[28473]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbutrqbgbbytsydqktkddywajjurrlhd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187572.8413265-39541-70110967278941/AnsiballZ_setup.py' Jul 22 08:32:53 managed-node1 sudo[28473]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:53 managed-node1 python3.12[28476]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:32:53 managed-node1 sudo[28473]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:54 managed-node1 sudo[28660]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yavempgabheufixbwyvpfdmiqloxrsxe ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187574.5314772-39769-148718007146632/AnsiballZ_dnf.py' Jul 22 08:32:54 managed-node1 sudo[28660]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:55 managed-node1 python3.12[28663]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:32:55 managed-node1 sudo[28660]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:56 managed-node1 sudo[28819]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgemhyszrlbxpdfnynymcgbqllpjeszn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187575.7844272-40012-85186726516115/AnsiballZ_find_unused_disk.py' Jul 22 08:32:56 managed-node1 sudo[28819]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:56 managed-node1 python3.12[28823]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 max_size=0 match_sector_size=False with_interface=None Jul 22 08:32:56 managed-node1 sudo[28819]: pam_unix(sudo:session): session closed for user root Jul 22 08:32:57 managed-node1 sudo[28980]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoeqqyjrnbybgpaqbprezivfqvzygabg ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187576.9013078-40194-125664680743472/AnsiballZ_command.py' Jul 22 08:32:57 managed-node1 sudo[28980]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:57 managed-node1 python3.12[28983]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None TASK [Set unused_disks if necessary] ******************************************* task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:29 Tuesday 22 July 2025 08:32:58 -0400 (0:00:01.453) 0:00:22.210 ********** skipping: [managed-node1] => { "changed": false, "false_condition": "'Unable to 
find unused disk' not in unused_disks_return.disks", "skip_reason": "Conditional result was False" } TASK [Exit playbook when there's not enough unused disks in the system] ******** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:34 Tuesday 22 July 2025 08:32:58 -0400 (0:00:00.133) 0:00:22.344 ********** fatal: [managed-node1]: FAILED! => { "changed": false } MSG: Unable to find enough unused disks. Exiting playbook. PLAY RECAP ********************************************************************* managed-node1 : ok=29 changed=0 unreachable=0 failed=1 skipped=15 rescued=0 ignored=0 SYSTEM ROLES ERRORS BEGIN v1 [ { "ansible_version": "2.17.13", "end_time": "2025-07-22T12:32:58.516800+00:00Z", "host": "managed-node1", "message": "Unable to find enough unused disks. Exiting playbook.", "start_time": "2025-07-22T12:32:58.330609+00:00Z", "task_name": "Exit playbook when there's not enough unused disks in the system", "task_path": "/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:34" } ] SYSTEM ROLES ERRORS END v1 TASKS RECAP ******************************************************************** Tuesday 22 July 2025 08:32:58 -0400 (0:00:00.205) 0:00:22.549 ********** =============================================================================== fedora.linux_system_roles.storage : Get service facts ------------------- 3.05s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52 fedora.linux_system_roles.storage : Make sure blivet is available ------- 1.82s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2 Gathering Facts --------------------------------------------------------- 1.67s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_misc.yml:2 Debug why there are no unused disks ------------------------------------- 1.45s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:20 fedora.linux_system_roles.storage : Get required packages --------------- 1.34s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19 fedora.linux_system_roles.storage : Check if system is ostree ----------- 1.29s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25 fedora.linux_system_roles.storage : Update facts ------------------------ 1.26s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224 Ensure test packages ---------------------------------------------------- 1.25s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:2 Find unused disks in the system ----------------------------------------- 1.19s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:11 fedora.linux_system_roles.storage : Make sure required packages are installed --- 1.04s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38 fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state --- 1.03s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70 fedora.linux_system_roles.storage : Check if /etc/fstab is present ------ 0.69s 
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92 fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file --- 0.66s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197 fedora.linux_system_roles.storage : Set storage_cryptsetup_services ----- 0.30s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58 fedora.linux_system_roles.storage : Set platform/version specific variables --- 0.24s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7 fedora.linux_system_roles.storage : Add fingerprint to /etc/fstab if present --- 0.21s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:97 fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab --- 0.21s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:189 fedora.linux_system_roles.storage : Show storage_pools ------------------ 0.21s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:9 Exit playbook when there's not enough unused disks in the system -------- 0.21s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:34 fedora.linux_system_roles.storage : Show storage_volumes ---------------- 0.19s /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14 Jul 22 08:32:34 managed-node1 sshd-session[26690]: Accepted publickey for root from 10.31.42.212 port 37946 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:32:34 managed-node1 systemd-logind[665]: New session 24 of user root. ░░ Subject: A new session 24 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 24 has been created for the user root. ░░ ░░ The leading process of the session is 26690. Jul 22 08:32:34 managed-node1 systemd[1]: Started session-24.scope - Session 24 of User root. ░░ Subject: A start job for unit session-24.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-24.scope has finished successfully. ░░ ░░ The job identifier is 2918. Jul 22 08:32:34 managed-node1 sshd-session[26690]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:32:34 managed-node1 sshd-session[26693]: Received disconnect from 10.31.42.212 port 37946:11: disconnected by user Jul 22 08:32:34 managed-node1 sshd-session[26693]: Disconnected from user root 10.31.42.212 port 37946 Jul 22 08:32:34 managed-node1 sshd-session[26690]: pam_unix(sshd:session): session closed for user root Jul 22 08:32:34 managed-node1 systemd[1]: session-24.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-24.scope has successfully entered the 'dead' state. Jul 22 08:32:34 managed-node1 systemd-logind[665]: Session 24 logged out. Waiting for processes to exit. Jul 22 08:32:34 managed-node1 systemd-logind[665]: Removed session 24. 
Jul 22 08:32:34 managed-node1 sshd-session[26690]: Accepted publickey for root from 10.31.42.212 port 37946 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:32:34 managed-node1 systemd-logind[665]: New session 24 of user root.
░░ Subject: A new session 24 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 24 has been created for the user root.
░░
░░ The leading process of the session is 26690.
Jul 22 08:32:34 managed-node1 systemd[1]: Started session-24.scope - Session 24 of User root.
░░ Subject: A start job for unit session-24.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-24.scope has finished successfully.
░░
░░ The job identifier is 2918.
Jul 22 08:32:34 managed-node1 sshd-session[26690]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:34 managed-node1 sshd-session[26693]: Received disconnect from 10.31.42.212 port 37946:11: disconnected by user
Jul 22 08:32:34 managed-node1 sshd-session[26693]: Disconnected from user root 10.31.42.212 port 37946
Jul 22 08:32:34 managed-node1 sshd-session[26690]: pam_unix(sshd:session): session closed for user root
Jul 22 08:32:34 managed-node1 systemd[1]: session-24.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-24.scope has successfully entered the 'dead' state.
Jul 22 08:32:34 managed-node1 systemd-logind[665]: Session 24 logged out. Waiting for processes to exit.
Jul 22 08:32:34 managed-node1 systemd-logind[665]: Removed session 24.
░░ Subject: Session 24 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 24 has been terminated.
Jul 22 08:32:36 managed-node1 sudo[26900]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epksubbottkvhcmwftshdzuafvjizijv ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187556.0548923-37958-30743675281504/AnsiballZ_setup.py'
Jul 22 08:32:36 managed-node1 sudo[26900]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:37 managed-node1 python3.12[26903]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:32:37 managed-node1 sudo[26900]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:39 managed-node1 sudo[27087]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypiemkqukrquinyaqmseerhebxzfrmwj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187558.5200486-38083-105726272264862/AnsiballZ_stat.py'
Jul 22 08:32:39 managed-node1 sudo[27087]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:39 managed-node1 python3.12[27090]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:32:39 managed-node1 sudo[27087]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:41 managed-node1 sudo[27245]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajqlyrsvckoakspzidgxvxsdoopzextj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187560.2093935-38292-62605543080824/AnsiballZ_dnf.py'
Jul 22 08:32:41 managed-node1 sudo[27245]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:41 managed-node1 python3.12[27248]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:32:41 managed-node1 sudo[27245]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:43 managed-node1 sudo[27404]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsxigvimdepmmjzxzxzkixrqheworfuc ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187562.4055035-38511-43036034194764/AnsiballZ_blivet.py'
Jul 22 08:32:43 managed-node1 sudo[27404]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:43 managed-node1 python3.12[27407]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={}
Jul 22 08:32:43 managed-node1 sudo[27404]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:44 managed-node1 sudo[27564]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlswpfinpkrhaftemcedrgfpjowyzkoa ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187564.2797809-38867-215063390384850/AnsiballZ_dnf.py'
Jul 22 08:32:44 managed-node1 sudo[27564]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:44 managed-node1 python3.12[27567]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:32:45 managed-node1 sudo[27564]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:46 managed-node1 sudo[27723]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnavlfkokmfsnndhigmnujqvmmcfrrsf ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187565.2815034-38925-140776897449191/AnsiballZ_service_facts.py'
Jul 22 08:32:46 managed-node1 sudo[27723]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:46 managed-node1 python3.12[27726]: ansible-service_facts Invoked
Jul 22 08:32:47 managed-node1 sudo[27723]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:49 managed-node1 sudo[27993]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wksokntmesalliifqnepvnofbqxqpzum ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187568.9494967-39270-265640134171612/AnsiballZ_blivet.py'
Jul 22 08:32:49 managed-node1 sudo[27993]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:49 managed-node1 python3.12[27996]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={}
Jul 22 08:32:49 managed-node1 sudo[27993]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:50 managed-node1 sudo[28153]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldssyghuutwbolmjwvxhdsgmqxetwpiw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187569.950067-39332-159332036947937/AnsiballZ_stat.py'
Jul 22 08:32:50 managed-node1 sudo[28153]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:50 managed-node1 python3.12[28156]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:32:50 managed-node1 sudo[28153]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:52 managed-node1 sudo[28313]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxsbfepqkkywggmnrxjftfedwzqhswmy ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187572.1230147-39494-89191869188075/AnsiballZ_stat.py'
Jul 22 08:32:52 managed-node1 sudo[28313]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:52 managed-node1 python3.12[28316]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:32:52 managed-node1 sudo[28313]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:53 managed-node1 sudo[28473]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbutrqbgbbytsydqktkddywajjurrlhd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187572.8413265-39541-70110967278941/AnsiballZ_setup.py'
Jul 22 08:32:53 managed-node1 sudo[28473]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:53 managed-node1 python3.12[28476]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:32:53 managed-node1 sudo[28473]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:54 managed-node1 sudo[28660]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yavempgabheufixbwyvpfdmiqloxrsxe ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187574.5314772-39769-148718007146632/AnsiballZ_dnf.py'
Jul 22 08:32:54 managed-node1 sudo[28660]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:55 managed-node1 python3.12[28663]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:32:55 managed-node1 sudo[28660]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:56 managed-node1 sudo[28819]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgemhyszrlbxpdfnynymcgbqllpjeszn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187575.7844272-40012-85186726516115/AnsiballZ_find_unused_disk.py'
Jul 22 08:32:56 managed-node1 sudo[28819]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:56 managed-node1 python3.12[28823]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 max_size=0 match_sector_size=False with_interface=None
Jul 22 08:32:56 managed-node1 sudo[28819]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:57 managed-node1 sudo[28980]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoeqqyjrnbybgpaqbprezivfqvzygabg ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187576.9013078-40194-125664680743472/AnsiballZ_command.py'
Jul 22 08:32:57 managed-node1 sudo[28980]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:57 managed-node1 python3.12[28983]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 22 08:32:57 managed-node1 sudo[28980]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:59 managed-node1 sshd-session[29011]: Accepted publickey for root from 10.31.42.212 port 58312 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:32:59 managed-node1 systemd-logind[665]: New session 25 of user root.
░░ Subject: A new session 25 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 25 has been created for the user root.
░░
░░ The leading process of the session is 29011.
Jul 22 08:32:59 managed-node1 systemd[1]: Started session-25.scope - Session 25 of User root.
░░ Subject: A start job for unit session-25.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-25.scope has finished successfully.
░░
░░ The job identifier is 3003.
Jul 22 08:32:59 managed-node1 sshd-session[29011]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:32:59 managed-node1 sshd-session[29014]: Received disconnect from 10.31.42.212 port 58312:11: disconnected by user
Jul 22 08:32:59 managed-node1 sshd-session[29014]: Disconnected from user root 10.31.42.212 port 58312
Jul 22 08:32:59 managed-node1 sshd-session[29011]: pam_unix(sshd:session): session closed for user root
Jul 22 08:32:59 managed-node1 systemd[1]: session-25.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-25.scope has successfully entered the 'dead' state.
Jul 22 08:32:59 managed-node1 systemd-logind[665]: Session 25 logged out. Waiting for processes to exit.
Jul 22 08:32:59 managed-node1 systemd-logind[665]: Removed session 25.
░░ Subject: Session 25 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 25 has been terminated.
Jul 22 08:32:59 managed-node1 sshd-session[29041]: Accepted publickey for root from 10.31.42.212 port 58318 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:32:59 managed-node1 systemd-logind[665]: New session 26 of user root.
░░ Subject: A new session 26 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 26 has been created for the user root.
░░
░░ The leading process of the session is 29041.
Jul 22 08:32:59 managed-node1 systemd[1]: Started session-26.scope - Session 26 of User root.
░░ Subject: A start job for unit session-26.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-26.scope has finished successfully.
░░
░░ The job identifier is 3088.
Jul 22 08:32:59 managed-node1 sshd-session[29041]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)