ansible-playbook [core 2.17.13]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /tmp/collections-nSC
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.12.11 (main, Jun 4 2025, 00:00:00) [GCC 14.2.1 20250110 (Red Hat 14.2.1-8)] (/usr/bin/python3.12)
  jinja version = 3.1.6
  libyaml = True
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/create-test-file.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/verify-data-preservation.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/create-test-file.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/verify-data-preservation.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/create-test-file.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/verify-data-preservation.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/create-test-file.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/verify-data-preservation.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/create-test-file.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/verify-data-preservation.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/create-test-file.yml
statically imported: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/verify-data-preservation.yml
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_luks.yml *******************************************************
1 plays in /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks.yml

PLAY [Test LUKS] ***************************************************************

TASK [Gathering Facts] *********************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks.yml:2
Tuesday 22 July 2025 08:31:22 -0400 (0:00:00.046) 0:00:00.046 **********
[WARNING]: Platform linux on host managed-node12 is using the discovered Python
interpreter at /usr/bin/python3.12, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html
for more information.
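The interpreter-discovery warning above can be avoided by pinning the interpreter for the host. A minimal inventory sketch, assuming a YAML inventory (the variable name is standard Ansible, but this snippet is illustrative and not part of the captured run):

    all:
      hosts:
        managed-node12:
          # Pin the Python interpreter so that installing another Python
          # later cannot change which interpreter Ansible selects.
          ansible_python_interpreter: /usr/bin/python3.12

With the variable set, interpreter discovery is skipped for this host and the warning no longer appears.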
ok: [managed-node12]

TASK [Enable FIPS mode] ********************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks.yml:20
Tuesday 22 July 2025 08:31:24 -0400 (0:00:02.066) 0:00:02.113 **********
skipping: [managed-node12] => {
    "changed": false,
    "false_condition": "lookup(\"env\", \"SYSTEM_ROLES_TEST_FIPS\") == \"true\"",
    "skip_reason": "Conditional result was False"
}

TASK [Reboot] ******************************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks.yml:28
Tuesday 22 July 2025 08:31:24 -0400 (0:00:00.117) 0:00:02.230 **********
skipping: [managed-node12] => {
    "changed": false,
    "false_condition": "lookup(\"env\", \"SYSTEM_ROLES_TEST_FIPS\") == \"true\"",
    "skip_reason": "Conditional result was False"
}

TASK [Enable FIPS mode - 2] ****************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks.yml:39
Tuesday 22 July 2025 08:31:24 -0400 (0:00:00.139) 0:00:02.370 **********
skipping: [managed-node12] => {
    "changed": false,
    "false_condition": "lookup(\"env\", \"SYSTEM_ROLES_TEST_FIPS\") == \"true\"",
    "skip_reason": "Conditional result was False"
}

TASK [Reboot - 2] **************************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks.yml:43
Tuesday 22 July 2025 08:31:24 -0400 (0:00:00.106) 0:00:02.476 **********
skipping: [managed-node12] => {
    "changed": false,
    "false_condition": "lookup(\"env\", \"SYSTEM_ROLES_TEST_FIPS\") == \"true\"",
    "skip_reason": "Conditional result was False"
}

TASK [Ensure dracut-fips] ******************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks.yml:53
Tuesday 22 July 2025 08:31:24 -0400 (0:00:00.131) 0:00:02.608 **********
skipping: [managed-node12] => {
    "changed": false,
    "false_condition": "lookup(\"env\", \"SYSTEM_ROLES_TEST_FIPS\") == \"true\"",
    "skip_reason": "Conditional result was False"
}

TASK [Configure boot for FIPS] *************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks.yml:59
Tuesday 22 July 2025 08:31:24 -0400 (0:00:00.081) 0:00:02.689 **********
skipping: [managed-node12] => {
    "changed": false,
    "false_condition": "lookup(\"env\", \"SYSTEM_ROLES_TEST_FIPS\") == \"true\"",
    "skip_reason": "Conditional result was False"
}

TASK [Reboot - 3] **************************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks.yml:68
Tuesday 22 July 2025 08:31:24 -0400 (0:00:00.083) 0:00:02.773 **********
skipping: [managed-node12] => {
    "changed": false,
    "false_condition": "lookup(\"env\", \"SYSTEM_ROLES_TEST_FIPS\") == \"true\"",
    "skip_reason": "Conditional result was False"
}

TASK [Run the role] ************************************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks.yml:72
Tuesday 22 July 2025 08:31:24 -0400 (0:00:00.086) 0:00:02.860 **********
included: fedora.linux_system_roles.storage for managed-node12

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2
Tuesday 22 July 2025 08:31:25 -0400 (0:00:00.271) 0:00:03.131 **********
included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed-node12

TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
Tuesday 22 July 2025 08:31:25 -0400 (0:00:00.182) 0:00:03.314 **********
skipping: [managed-node12] => {
    "changed": false,
    "false_condition": "__storage_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
Tuesday 22 July 2025 08:31:25 -0400 (0:00:00.279) 0:00:03.593 **********
skipping: [managed-node12] => (item=RedHat.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node12] => (item=CentOS.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "CentOS.yml",
    "skip_reason": "Conditional result was False"
}
ok: [managed-node12] => (item=CentOS_10.yml) => {
    "ansible_facts": {
        "blivet_package_list": [
            "python3-blivet",
            "libblockdev-crypto",
            "libblockdev-dm",
            "libblockdev-fs",
            "libblockdev-lvm",
            "libblockdev-mdraid",
            "libblockdev-swap",
            "xfsprogs",
            "stratisd",
            "stratis-cli",
            "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}",
            "vdo"
        ]
    },
    "ansible_included_var_files": [
        "/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_10.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_10.yml"
}
ok: [managed-node12] => (item=CentOS_10.yml) => {
    "ansible_facts": {
        "blivet_package_list": [
            "python3-blivet",
            "libblockdev-crypto",
            "libblockdev-dm",
            "libblockdev-fs",
            "libblockdev-lvm",
            "libblockdev-mdraid",
            "libblockdev-swap",
            "xfsprogs",
            "stratisd",
            "stratis-cli",
            "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}",
            "vdo"
        ]
    },
    "ansible_included_var_files": [
        "/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_10.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_10.yml"
}

TASK [fedora.linux_system_roles.storage : Check if system is ostree] ***********
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
Tuesday 22 July 2025 08:31:26 -0400 (0:00:00.432) 0:00:04.026 **********
ok: [managed-node12] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}

TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30
Tuesday 22 July 2025 08:31:27 -0400 (0:00:01.449) 0:00:05.476 **********
ok: [managed-node12] => {
    "ansible_facts": {
        "__storage_is_ostree": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:5
Tuesday 22 July 2025 08:31:27 -0400 (0:00:00.123) 0:00:05.600 **********
ok: [managed-node12] => {
    "ansible_facts": {
        "_storage_pools_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:9
Tuesday 22 July 2025 08:31:27 -0400 (0:00:00.096) 0:00:05.696 **********
ok: [managed-node12] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
Tuesday 22 July 2025 08:31:27 -0400 (0:00:00.119) 0:00:05.815 **********
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed-node12

TASK [fedora.linux_system_roles.storage : Make sure blivet is available] *******
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
Tuesday 22 July 2025 08:31:28 -0400 (0:00:00.329) 0:00:06.145 **********
ok: [managed-node12] => {
    "changed": false,
    "rc": 0,
    "results": []
}

MSG:

Nothing to do

TASK [fedora.linux_system_roles.storage : Show storage_pools] ******************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:9
Tuesday 22 July 2025 08:31:29 -0400 (0:00:01.736) 0:00:07.883 **********
ok: [managed-node12] => {
    "storage_pools | d([])": []
}

TASK [fedora.linux_system_roles.storage : Show storage_volumes] ****************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14
Tuesday 22 July 2025 08:31:30 -0400 (0:00:00.164) 0:00:08.048 **********
ok: [managed-node12] => {
    "storage_volumes | d([])": []
}

TASK [fedora.linux_system_roles.storage : Get required packages] ***************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19
Tuesday 22 July 2025 08:31:30 -0400 (0:00:00.159) 0:00:08.207 **********
[WARNING]: Module invocation had junk after the JSON data: sys:1:
DeprecationWarning: builtin type swigvarlink has no __module__ attribute
ok: [managed-node12] => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "leaves": [],
    "mounts": [],
    "packages": [],
    "pools": [],
    "volumes": []
}

TASK [fedora.linux_system_roles.storage : Enable copr repositories if needed] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32
Tuesday 22 July 2025 08:31:32 -0400 (0:00:01.815) 0:00:10.023 **********
included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml for managed-node12

TASK [fedora.linux_system_roles.storage : Check if the COPR support packages should be installed] ***
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:2
Tuesday 22 July 2025 08:31:32 -0400 (0:00:00.209) 0:00:10.232 **********
skipping: [managed-node12] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Make sure COPR support packages are present] ***
task path:
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:13 Tuesday 22 July 2025 08:31:32 -0400 (0:00:00.140) 0:00:10.373 ********** skipping: [managed-node12] => { "changed": false, "false_condition": "install_copr | d(false) | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Enable COPRs] ************************ task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:19 Tuesday 22 July 2025 08:31:32 -0400 (0:00:00.183) 0:00:10.556 ********** skipping: [managed-node12] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Make sure required packages are installed] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38 Tuesday 22 July 2025 08:31:32 -0400 (0:00:00.107) 0:00:10.663 ********** ok: [managed-node12] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do TASK [fedora.linux_system_roles.storage : Get service facts] ******************* task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52 Tuesday 22 July 2025 08:31:33 -0400 (0:00:01.110) 0:00:11.773 ********** ok: [managed-node12] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "apt-daily.service": { "name": "apt-daily.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blivet.service": { "name": "blivet.service", "source": "systemd", "state": "inactive", "status": "static" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": 
"cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": 
"systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "fcoe.service": { "name": "fcoe.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "fips-crypto-policy-overlay.service": { "name": "fips-crypto-policy-overlay.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "iscsi-shutdown.service": { "name": "iscsi-shutdown.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsi.service": { "name": "iscsi.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsid.service": { "name": "iscsid.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm-devices-import.service": { "name": "lvm-devices-import.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "lvm2-activation-early.service": { 
"name": "lvm2-activation-early.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "mdadm-grow-continue@.service": { "name": "mdadm-grow-continue@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdadm-last-resort@.service": { "name": "mdadm-last-resort@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdcheck_continue.service": { "name": "mdcheck_continue.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdcheck_start.service": { "name": "mdcheck_start.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmon@.service": { "name": "mdmon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdmonitor-oneshot.service": { "name": "mdmonitor-oneshot.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmonitor.service": { "name": "mdmonitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_multipath.service": { "name": "modprobe@dm_multipath.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "multipathd.service": { "name": "multipathd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": 
"systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "pcscd.service": { "name": "pcscd.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "inactive", "status": "static" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "raid-check.service": { "name": "raid-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "rbdmap.service": { "name": "rbdmap.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpmdb-migrate.service": { "name": "rpmdb-migrate.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", 
"status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-unix-local@.service": { "name": "sshd-unix-local@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd-vsock@.service": { "name": "sshd-vsock@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "stratis-fstab-setup-with-network@.service": { "name": "stratis-fstab-setup-with-network@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratis-fstab-setup@.service": { "name": "stratis-fstab-setup@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratisd-min-postinitrd.service": { "name": "stratisd-min-postinitrd.service", "source": "systemd", "state": "inactive", "status": "static" }, "stratisd.service": { "name": "stratisd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, 
"systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": 
"systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": 
"systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": 
"systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-userdbd.service": { "name": "systemd-userdbd.service", "source": "systemd", "state": "running", "status": "indirect" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.storage : Set storage_cryptsetup_services] ***** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58 Tuesday 22 July 2025 08:31:37 -0400 (0:00:03.456) 0:00:15.230 ********** ok: [managed-node12] => { "ansible_facts": { "storage_cryptsetup_services": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Mask the systemd cryptsetup services] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:64 Tuesday 22 July 2025 08:31:37 -0400 (0:00:00.267) 0:00:15.497 ********** skipping: [managed-node12] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70 Tuesday 22 July 2025 08:31:37 -0400 (0:00:00.086) 0:00:15.584 ********** ok: 
[managed-node12] => { "actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } TASK [fedora.linux_system_roles.storage : Workaround for udev issue on some platforms] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:85 Tuesday 22 July 2025 08:31:38 -0400 (0:00:01.058) 0:00:16.643 ********** skipping: [managed-node12] => { "changed": false, "false_condition": "storage_udevadm_trigger | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Check if /etc/fstab is present] ****** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92 Tuesday 22 July 2025 08:31:38 -0400 (0:00:00.230) 0:00:16.873 ********** ok: [managed-node12] => { "changed": false, "stat": { "atime": 1753187096.8499324, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "016bd7ce6cb6b233647ba6b5c21ac99bb7146610", "ctime": 1750750281.8033595, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 4194435, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1750750281.8033595, "nlink": 1, "path": "/etc/fstab", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 1344, "uid": 0, "version": "3162749339", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Add fingerprint to /etc/fstab if present] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:97 Tuesday 22 July 2025 08:31:39 -0400 (0:00:00.641) 0:00:17.514 ********** skipping: [managed-node12] => { "changed": false, "false_condition": "blivet_output is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Unmask the systemd cryptsetup services] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:115 Tuesday 22 July 2025 08:31:39 -0400 (0:00:00.133) 0:00:17.648 ********** skipping: [managed-node12] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Show blivet_output] ****************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:121 Tuesday 22 July 2025 08:31:39 -0400 (0:00:00.089) 0:00:17.738 ********** ok: [managed-node12] => { "blivet_output": { "actions": [], "changed": false, "crypts": [], "failed": false, "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } } TASK [fedora.linux_system_roles.storage : Set the list of pools for test verification] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:130 Tuesday 22 July 2025 08:31:39 -0400 (0:00:00.144) 0:00:17.882 ********** ok: [managed-node12] => { "ansible_facts": { "_storage_pools_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Set the list of volumes for test verification] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:134 
Tuesday 22 July 2025 08:31:40 -0400 (0:00:00.132) 0:00:18.015 ********** ok: [managed-node12] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Remove obsolete mounts] ************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:150 Tuesday 22 July 2025 08:31:40 -0400 (0:00:00.165) 0:00:18.180 ********** skipping: [managed-node12] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:161 Tuesday 22 July 2025 08:31:40 -0400 (0:00:00.302) 0:00:18.483 ********** skipping: [managed-node12] => { "changed": false, "false_condition": "blivet_output['mounts'] | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Set up new/current mounts] *********** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:166 Tuesday 22 July 2025 08:31:40 -0400 (0:00:00.354) 0:00:18.837 ********** skipping: [managed-node12] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage mount ownership/permissions] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:177 Tuesday 22 July 2025 08:31:41 -0400 (0:00:00.337) 0:00:19.175 ********** skipping: [managed-node12] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:189 Tuesday 22 July 2025 08:31:41 -0400 (0:00:00.259) 0:00:19.434 ********** skipping: [managed-node12] => { "changed": false, "false_condition": "blivet_output['mounts'] | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197 Tuesday 22 July 2025 08:31:41 -0400 (0:00:00.235) 0:00:19.669 ********** ok: [managed-node12] => { "changed": false, "stat": { "atime": 1753187325.915505, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1750749389.405, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 4194436, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0600", "mtime": 1750749068.122, "nlink": 1, "path": "/etc/crypttab", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 0, "uid": 0, "version": "1830666913", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Manage /etc/crypttab to account for changes we just made] *** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:202 Tuesday 22 July 2025 08:31:42 -0400 
(0:00:00.666) 0:00:20.336 ********** skipping: [managed-node12] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Update facts] ************************ task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224 Tuesday 22 July 2025 08:31:42 -0400 (0:00:00.078) 0:00:20.415 ********** ok: [managed-node12] TASK [Get unused disks] ******************************************************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks.yml:76 Tuesday 22 July 2025 08:31:43 -0400 (0:00:01.453) 0:00:21.868 ********** included: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml for managed-node12 TASK [Ensure test packages] **************************************************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:2 Tuesday 22 July 2025 08:31:44 -0400 (0:00:00.215) 0:00:22.090 ********** ok: [managed-node12] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do TASK [Find unused disks in the system] ***************************************** task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:11 Tuesday 22 July 2025 08:31:45 -0400 (0:00:01.249) 0:00:23.340 ********** ok: [managed-node12] => { "changed": false, "disks": "Unable to find unused disk", "info": [ "Line: NAME=\"/dev/xvda\" TYPE=\"disk\" SIZE=\"268435456000\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line type [part] is not disk: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/xvda2\" TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [part] is not disk: NAME=\"/dev/xvda2\" TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "filename [xvda2] is a partition", "filename [xvda1] is a partition", "Disk [/dev/xvda] attrs [{'type': 'disk', 'size': '268435456000', 'fstype': '', 'ssize': '512'}] has partitions" ] } TASK [Debug why there are no unused disks] ************************************* task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:20 Tuesday 22 July 2025 08:31:46 -0400 (0:00:01.528) 0:00:24.868 ********** ok: [managed-node12] => { "changed": false, "cmd": "set -x\nexec 1>&2\nlsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC\njournalctl -ex\n", "delta": "0:00:00.029828", "end": "2025-07-22 08:31:48.009938", "rc": 0, "start": "2025-07-22 08:31:47.980110" } STDERR: + exec + lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC NAME="/dev/xvda" TYPE="disk" SIZE="268435456000" FSTYPE="" LOG-SEC="512" NAME="/dev/xvda1" TYPE="part" SIZE="1048576" FSTYPE="" LOG-SEC="512" NAME="/dev/xvda2" TYPE="part" SIZE="268433341952" FSTYPE="xfs" LOG-SEC="512" + journalctl -ex Jul 22 08:24:45 localhost augenrules[632]: lost 0 Jul 22 08:24:45 localhost augenrules[632]: backlog 3 Jul 22 08:24:45 localhost augenrules[632]: backlog_wait_time 60000 Jul 22 08:24:45 localhost augenrules[632]: backlog_wait_time_actual 0 Jul 22 08:24:45 localhost augenrules[632]: enabled 1 Jul 22 08:24:45 localhost augenrules[632]: failure 1 Jul 22 08:24:45 localhost augenrules[632]: pid 581 Jul 22 08:24:45 localhost augenrules[632]: rate_limit 0 Jul 22 08:24:45 localhost 
augenrules[632]: backlog_limit 8192 Jul 22 08:24:45 localhost augenrules[632]: lost 0 Jul 22 08:24:45 localhost augenrules[632]: backlog 4 Jul 22 08:24:45 localhost augenrules[632]: backlog_wait_time 60000 Jul 22 08:24:45 localhost augenrules[632]: backlog_wait_time_actual 0 Jul 22 08:24:45 localhost augenrules[632]: enabled 1 Jul 22 08:24:45 localhost augenrules[632]: failure 1 Jul 22 08:24:45 localhost augenrules[632]: pid 581 Jul 22 08:24:45 localhost augenrules[632]: rate_limit 0 Jul 22 08:24:45 localhost augenrules[632]: backlog_limit 8192 Jul 22 08:24:45 localhost augenrules[632]: lost 0 Jul 22 08:24:45 localhost augenrules[632]: backlog 4 Jul 22 08:24:45 localhost augenrules[632]: backlog_wait_time 60000 Jul 22 08:24:45 localhost augenrules[632]: backlog_wait_time_actual 0 Jul 22 08:24:45 localhost systemd[1]: audit-rules.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit audit-rules.service has successfully entered the 'dead' state. Jul 22 08:24:45 localhost systemd[1]: Finished audit-rules.service - Load Audit Rules. ░░ Subject: A start job for unit audit-rules.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit audit-rules.service has finished successfully. ░░ ░░ The job identifier is 274. Jul 22 08:24:45 localhost kernel: cirrus-qemu 0000:00:02.0: vgaarb: deactivate vga console Jul 22 08:24:45 localhost kernel: Console: switching to colour dummy device 80x25 Jul 22 08:24:45 localhost kernel: [drm] Initialized cirrus-qemu 2.0.0 for 0000:00:02.0 on minor 0 Jul 22 08:24:45 localhost kernel: fbcon: cirrus-qemudrmf (fb0) is primary device Jul 22 08:24:45 localhost kernel: Console: switching to colour frame buffer device 128x48 Jul 22 08:24:45 localhost kernel: cirrus-qemu 0000:00:02.0: [drm] fb0: cirrus-qemudrmf frame buffer device Jul 22 08:24:45 localhost (udev-worker)[606]: Network interface NamePolicy= disabled on kernel command line. Jul 22 08:24:45 localhost systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... ░░ Subject: A start job for unit systemd-vconsole-setup.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-vconsole-setup.service has begun execution. ░░ ░░ The job identifier is 308. Jul 22 08:24:45 localhost kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 655360 ms ovfl timer Jul 22 08:24:45 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit systemd-vconsole-setup.service has successfully entered the 'dead' state. Jul 22 08:24:45 localhost systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. ░░ Subject: A stop job for unit systemd-vconsole-setup.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit systemd-vconsole-setup.service has finished. ░░ ░░ The job identifier is 308 and the job result is done. Jul 22 08:24:45 localhost systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... ░░ Subject: A start job for unit systemd-vconsole-setup.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-vconsole-setup.service has begun execution. 
░░ ░░ The job identifier is 308. Jul 22 08:24:45 localhost systemd[1]: Started dbus-broker.service - D-Bus System Message Bus. ░░ Subject: A start job for unit dbus-broker.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dbus-broker.service has finished successfully. ░░ ░░ The job identifier is 207. Jul 22 08:24:45 localhost dbus-broker-launch[620]: Ready Jul 22 08:24:45 localhost systemd[1]: Reached target basic.target - Basic System. ░░ Subject: A start job for unit basic.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit basic.target has finished successfully. ░░ ░░ The job identifier is 122. Jul 22 08:24:45 localhost systemd[1]: Starting chronyd.service - NTP client/server... ░░ Subject: A start job for unit chronyd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit chronyd.service has begun execution. ░░ ░░ The job identifier is 275. Jul 22 08:24:45 localhost systemd[1]: Starting cloud-init-local.service - Cloud-init: Local Stage (pre-network)... ░░ Subject: A start job for unit cloud-init-local.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-init-local.service has begun execution. ░░ ░░ The job identifier is 230. Jul 22 08:24:45 localhost systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... ░░ Subject: A start job for unit dracut-shutdown.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dracut-shutdown.service has begun execution. ░░ ░░ The job identifier is 168. Jul 22 08:24:45 localhost systemd[1]: Started irqbalance.service - irqbalance daemon. ░░ Subject: A start job for unit irqbalance.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit irqbalance.service has finished successfully. ░░ ░░ The job identifier is 281. Jul 22 08:24:45 localhost systemd[1]: Started rngd.service - Hardware RNG Entropy Gatherer Daemon. ░░ Subject: A start job for unit rngd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rngd.service has finished successfully. ░░ ░░ The job identifier is 279. Jul 22 08:24:45 localhost systemd[1]: ssh-host-keys-migration.service - Update OpenSSH host key permissions was skipped because of an unmet condition check (ConditionPathExists=!/var/lib/.ssh-host-keys-migration). ░░ Subject: A start job for unit ssh-host-keys-migration.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit ssh-host-keys-migration.service has finished successfully. ░░ ░░ The job identifier is 235. Jul 22 08:24:45 localhost systemd[1]: sshd-keygen@ecdsa.service - OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). ░░ Subject: A start job for unit sshd-keygen@ecdsa.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen@ecdsa.service has finished successfully. ░░ ░░ The job identifier is 239. 
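The sshd-keygen@ecdsa.service and ssh-host-keys-migration.service entries just above are not failures: systemd skipped them because a ConditionPathExists= check did not pass. On a host like this one, the condition directives and the result of their last evaluation can be inspected with systemctl (a generic sketch; these commands are not part of this transcript):

    # show the Condition*= directives in the unit file
    systemctl cat sshd-keygen@rsa.service | grep -i '^Condition'
    # show whether the conditions passed on the most recent start attempt
    systemctl show sshd-keygen@rsa.service -p ConditionResult -p ConditionTimestamp

A failed condition leaves the unit inactive without marking it failed, which is why the journal still reports the start job as "finished successfully".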
Jul 22 08:24:45 localhost systemd[1]: sshd-keygen@ed25519.service - OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). ░░ Subject: A start job for unit sshd-keygen@ed25519.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen@ed25519.service has finished successfully. ░░ ░░ The job identifier is 240. Jul 22 08:24:45 localhost systemd[1]: sshd-keygen@rsa.service - OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). ░░ Subject: A start job for unit sshd-keygen@rsa.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen@rsa.service has finished successfully. ░░ ░░ The job identifier is 237. Jul 22 08:24:45 localhost systemd[1]: Reached target sshd-keygen.target. ░░ Subject: A start job for unit sshd-keygen.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd-keygen.target has finished successfully. ░░ ░░ The job identifier is 236. Jul 22 08:24:45 localhost systemd[1]: sssd.service - System Security Services Daemon was skipped because no trigger condition checks were met. ░░ Subject: A start job for unit sssd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sssd.service has finished successfully. ░░ ░░ The job identifier is 267. Jul 22 08:24:45 localhost systemd[1]: Reached target nss-user-lookup.target - User and Group Name Lookups. ░░ Subject: A start job for unit nss-user-lookup.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit nss-user-lookup.target has finished successfully. ░░ ░░ The job identifier is 268. Jul 22 08:24:45 localhost systemd[1]: Starting systemd-logind.service - User Login Management... ░░ Subject: A start job for unit systemd-logind.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-logind.service has begun execution. ░░ ░░ The job identifier is 242. Jul 22 08:24:45 localhost systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. ░░ Subject: A start job for unit dracut-shutdown.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit dracut-shutdown.service has finished successfully. ░░ ░░ The job identifier is 168. Jul 22 08:24:45 localhost systemd-logind[663]: New seat seat0. ░░ Subject: A new seat seat0 is now available ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new seat seat0 has been configured and is now available. Jul 22 08:24:45 localhost systemd-logind[663]: Watching system buttons on /dev/input/event0 (Power Button) Jul 22 08:24:45 localhost systemd-logind[663]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 22 08:24:45 localhost systemd-logind[663]: Watching system buttons on /dev/input/event2 (AT Translated Set 2 keyboard) Jul 22 08:24:45 localhost systemd[1]: Started systemd-logind.service - User Login Management. 
░░ Subject: A start job for unit systemd-logind.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-logind.service has finished successfully. ░░ ░░ The job identifier is 242. Jul 22 08:24:45 localhost systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. ░░ Subject: A start job for unit systemd-vconsole-setup.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-vconsole-setup.service has finished successfully. ░░ ░░ The job identifier is 308. Jul 22 08:24:46 localhost rngd[660]: Disabling 7: PKCS11 Entropy generator (pkcs11) Jul 22 08:24:46 localhost rngd[660]: Disabling 5: NIST Network Entropy Beacon (nist) Jul 22 08:24:46 localhost rngd[660]: Disabling 9: Qrypt quantum entropy beacon (qrypt) Jul 22 08:24:46 localhost rngd[660]: Disabling 10: Named pipe entropy input (namedpipe) Jul 22 08:24:46 localhost rngd[660]: Initializing available sources Jul 22 08:24:46 localhost rngd[660]: [hwrng ]: Initialization Failed Jul 22 08:24:46 localhost rngd[660]: [rdrand]: Enabling RDRAND rng support Jul 22 08:24:46 localhost rngd[660]: [rdrand]: Initialized Jul 22 08:24:46 localhost rngd[660]: [jitter]: JITTER timeout set to 5 sec Jul 22 08:24:46 localhost chronyd[678]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Jul 22 08:24:46 localhost rngd[660]: [jitter]: Initializing AES buffer Jul 22 08:24:46 localhost chronyd[678]: Frequency 0.000 +/- 1000000.000 ppm read from /var/lib/chrony/drift Jul 22 08:24:46 localhost chronyd[678]: Loaded seccomp filter (level 2) Jul 22 08:24:46 localhost systemd[1]: Started chronyd.service - NTP client/server. ░░ Subject: A start job for unit chronyd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit chronyd.service has finished successfully. ░░ ░░ The job identifier is 275. Jul 22 08:24:51 localhost rngd[660]: [jitter]: Unable to obtain AES key, disabling JITTER source Jul 22 08:24:51 localhost rngd[660]: [jitter]: Initialization Failed Jul 22 08:24:51 localhost rngd[660]: Process privileges have been dropped to 2:2 Jul 22 08:24:52 localhost cloud-init[684]: Cloud-init v. 24.4-5.el10 running 'init-local' at Tue, 22 Jul 2025 12:24:52 +0000. Up 17.74 seconds. Jul 22 08:24:52 localhost dhcpcd[687]: dhcpcd-10.0.6 starting Jul 22 08:24:52 localhost kernel: 8021q: 802.1Q VLAN Support v1.8 Jul 22 08:24:52 localhost systemd[1]: Listening on systemd-rfkill.socket - Load/Save RF Kill Switch Status /dev/rfkill Watch. ░░ Subject: A start job for unit systemd-rfkill.socket has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-rfkill.socket has finished successfully. ░░ ░░ The job identifier is 317. 
Jul 22 08:24:52 localhost kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database Jul 22 08:24:52 localhost kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7' Jul 22 08:24:52 localhost kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600' Jul 22 08:24:52 localhost dhcpcd[690]: DUID 00:01:00:01:30:12:3f:94:12:25:92:53:66:05 Jul 22 08:24:52 localhost dhcpcd[690]: eth0: IAID 92:53:66:05 Jul 22 08:24:52 localhost kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 Jul 22 08:24:52 localhost kernel: cfg80211: failed to load regulatory.db Jul 22 08:24:54 localhost dhcpcd[690]: eth0: soliciting a DHCP lease Jul 22 08:24:54 localhost dhcpcd[690]: eth0: offered 10.31.11.8 from 10.31.8.1 Jul 22 08:24:54 localhost dhcpcd[690]: eth0: leased 10.31.11.8 for 3600 seconds Jul 22 08:24:54 localhost dhcpcd[690]: eth0: adding route to 10.31.8.0/22 Jul 22 08:24:54 localhost dhcpcd[690]: eth0: adding default route via 10.31.8.1 Jul 22 08:24:54 localhost dhcpcd[690]: control command: dhcpcd --dumplease --ipv4only eth0 Jul 22 08:24:54 localhost systemd[1]: Starting systemd-hostnamed.service - Hostname Service... ░░ Subject: A start job for unit systemd-hostnamed.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-hostnamed.service has begun execution. ░░ ░░ The job identifier is 326. Jul 22 08:24:54 localhost systemd[1]: Started systemd-hostnamed.service - Hostname Service. ░░ Subject: A start job for unit systemd-hostnamed.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-hostnamed.service has finished successfully. ░░ ░░ The job identifier is 326. Jul 22 08:24:54 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd-hostnamed[711]: Hostname set to (static) Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished cloud-init-local.service - Cloud-init: Local Stage (pre-network). ░░ Subject: A start job for unit cloud-init-local.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-init-local.service has finished successfully. ░░ ░░ The job identifier is 230. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target network-pre.target - Preparation for Network. ░░ Subject: A start job for unit network-pre.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit network-pre.target has finished successfully. ░░ ░░ The job identifier is 180. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting NetworkManager.service - Network Manager... ░░ Subject: A start job for unit NetworkManager.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager.service has begun execution. ░░ ░░ The job identifier is 206. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.4376] NetworkManager (version 1.53.91-1.el10) is starting... 
(boot:0bae23eb-05e0-4323-bdec-21f629e29e9a) Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.4378] Read config: /etc/NetworkManager/NetworkManager.conf, /etc/NetworkManager/conf.d/30-cloud-init-ip6-addr-gen-mode.conf Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.4727] manager[0x55d7e82549c0]: monitoring kernel firmware directory '/lib/firmware'. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.4755] hostname: hostname: using hostnamed Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.4756] hostname: static hostname changed from (none) to "ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com" Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.4764] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto) Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.4770] manager[0x55d7e82549c0]: rfkill: Wi-Fi hardware radio set enabled Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.4770] manager[0x55d7e82549c0]: rfkill: WWAN hardware radio set enabled Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.4817] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.4818] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.4819] manager: Networking is enabled by state file Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.4825] settings: Loaded settings plugin: keyfile (internal) Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service... ░░ Subject: A start job for unit NetworkManager-dispatcher.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has begun execution. ░░ ░░ The job identifier is 404. 
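NetworkManager logged above that it merged /etc/NetworkManager/NetworkManager.conf with the cloud-init drop-in under conf.d. When it is unclear which file a setting came from, the daemon can print its fully merged configuration (a standard NetworkManager option, not something this run shows):

    # dump the effective configuration after all conf.d drop-ins are applied
    NetworkManager --print-config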
Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5345] dhcp: init: Using DHCP client 'internal' Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5347] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1) Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5358] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5364] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5369] device (lo): Activation: starting connection 'lo' (2bcefe6a-855a-4fc0-8a75-8e1e5e473870) Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5376] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5379] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started NetworkManager.service - Network Manager. ░░ Subject: A start job for unit NetworkManager.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager.service has finished successfully. ░░ ░░ The job identifier is 206. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5403] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target network.target - Network. ░░ Subject: A start job for unit network.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit network.target has finished successfully. ░░ ░░ The job identifier is 209. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5434] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5440] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5442] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting NetworkManager-wait-online.service - Network Manager Wait Online... ░░ Subject: A start job for unit NetworkManager-wait-online.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-wait-online.service has begun execution. ░░ ░░ The job identifier is 205. 
Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5465] device (eth0): carrier: link connected Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5486] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5498] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5503] policy: auto-activating connection 'cloud-init eth0' (1dd9a779-d327-56e1-8454-c65e2556c12c) Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5511] device (eth0): Activation: starting connection 'cloud-init eth0' (1dd9a779-d327-56e1-8454-c65e2556c12c) Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5514] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting gssproxy.service - GSSAPI Proxy Daemon... ░░ Subject: A start job for unit gssproxy.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit gssproxy.service has begun execution. ░░ ░░ The job identifier is 253. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5529] manager: NetworkManager state is now CONNECTING Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5534] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5545] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5551] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.5565] dhcp4 (eth0): state changed new lease, address=10.31.11.8, acd pending Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service. ░░ Subject: A start job for unit NetworkManager-dispatcher.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-dispatcher.service has finished successfully. ░░ ░░ The job identifier is 404. 
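The eth0 lines above trace NetworkManager's normal activation sequence (disconnected, prepare, config, ip-config, ip-check, secondaries, activated) using its internal DHCP client. The same information can be read back interactively with nmcli; the field list below is illustrative, not taken from this log:

    # current state plus the DHCP-assigned address and gateway
    nmcli -f GENERAL.STATE,IP4.ADDRESS,IP4.GATEWAY device show eth0
    # stream state-change events comparable to the journal lines above
    nmcli monitor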
Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.6195] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.6199] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: Cannot change IRQ 0 affinity: Permission denied Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: IRQ 0 affinity is now unmanaged Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: Cannot change IRQ 48 affinity: Permission denied Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: IRQ 48 affinity is now unmanaged Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: Cannot change IRQ 49 affinity: Permission denied Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: IRQ 49 affinity is now unmanaged Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: Cannot change IRQ 50 affinity: Permission denied Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: IRQ 50 affinity is now unmanaged Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.6212] device (lo): Activation: successful, device activated. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: Cannot change IRQ 51 affinity: Permission denied Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: IRQ 51 affinity is now unmanaged Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: Cannot change IRQ 52 affinity: Permission denied Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: IRQ 52 affinity is now unmanaged Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: Cannot change IRQ 53 affinity: Permission denied Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: IRQ 53 affinity is now unmanaged Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: Cannot change IRQ 54 affinity: Permission denied Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: IRQ 54 affinity is now unmanaged Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: Cannot change IRQ 55 affinity: Permission denied Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: IRQ 55 affinity is now unmanaged Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: Cannot change IRQ 56 affinity: Permission denied Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: IRQ 56 affinity is now unmanaged Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: Cannot change IRQ 57 affinity: Permission denied Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: IRQ 57 affinity is now unmanaged Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: Cannot change IRQ 58 affinity: Permission denied Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: 
IRQ 58 affinity is now unmanaged Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: Cannot change IRQ 59 affinity: Permission denied Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com irqbalance[659]: IRQ 59 affinity is now unmanaged Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started gssproxy.service - GSSAPI Proxy Daemon. ░░ Subject: A start job for unit gssproxy.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit gssproxy.service has finished successfully. ░░ ░░ The job identifier is 253. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: rpc-gssd.service - RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab). ░░ Subject: A start job for unit rpc-gssd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpc-gssd.service has finished successfully. ░░ ░░ The job identifier is 249. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target nfs-client.target - NFS client services. ░░ Subject: A start job for unit nfs-client.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit nfs-client.target has finished successfully. ░░ ░░ The job identifier is 245. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. ░░ Subject: A start job for unit remote-fs-pre.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit remote-fs-pre.target has finished successfully. ░░ ░░ The job identifier is 247. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. ░░ Subject: A start job for unit remote-cryptsetup.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit remote-cryptsetup.target has finished successfully. ░░ ░░ The job identifier is 278. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target remote-fs.target - Remote File Systems. ░░ Subject: A start job for unit remote-fs.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit remote-fs.target has finished successfully. ░░ ░░ The job identifier is 280. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: systemd-pcrphase.service - TPM PCR Barrier (User) was skipped because of an unmet condition check (ConditionSecurity=measured-uki). ░░ Subject: A start job for unit systemd-pcrphase.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-pcrphase.service has finished successfully. ░░ ░░ The job identifier is 182. 
Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.7095] dhcp4 (eth0): state changed new lease, address=10.31.11.8 Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.7105] policy: set 'cloud-init eth0' (eth0) as default for IPv4 routing and DNS Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.7153] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.7183] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.7186] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full') Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.7191] manager: NetworkManager state is now CONNECTED_SITE Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.7195] device (eth0): Activation: successful, device activated. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.7202] manager: NetworkManager state is now CONNECTED_GLOBAL Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com NetworkManager[718]: [1753187095.7207] manager: startup complete Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished NetworkManager-wait-online.service - Network Manager Wait Online. ░░ Subject: A start job for unit NetworkManager-wait-online.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit NetworkManager-wait-online.service has finished successfully. ░░ ░░ The job identifier is 205. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting cloud-init.service - Cloud-init: Network Stage... ░░ Subject: A start job for unit cloud-init.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-init.service has begun execution. ░░ ░░ The job identifier is 233. Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com chronyd[678]: Added source 10.11.160.238 Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com chronyd[678]: Added source 10.18.100.10 Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com chronyd[678]: Added source 10.2.32.37 Jul 22 08:24:55 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com chronyd[678]: Added source 10.2.32.38 Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: Cloud-init v. 24.4-5.el10 running 'init' at Tue, 22 Jul 2025 12:24:56 +0000. Up 21.40 seconds. 
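chronyd has now registered its four NTP sources; it settles on one a few seconds later ("Selected source 66.59.198.94" further down). To verify source selection and clock offset on such a host, the usual chronyc queries would be (not part of this transcript):

    chronyc sources -v   # lists sources; '*' marks the currently selected one
    chronyc tracking     # offset and frequency relative to the selected source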
Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++ Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+ Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address | Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+ Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: | eth0 | True | 10.31.11.8 | 255.255.252.0 | global | 12:25:92:53:66:05 | Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: | eth0 | True | fe80::1025:92ff:fe53:6605/64 | . | link | 12:25:92:53:66:05 | Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . | Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: | lo | True | ::1/128 | . | host | . | Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+ Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: ++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++ Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: +-------+-------------+-----------+---------------+-----------+-------+ Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags | Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: +-------+-------------+-----------+---------------+-----------+-------+ Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: | 0 | 0.0.0.0 | 10.31.8.1 | 0.0.0.0 | eth0 | UG | Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: | 1 | 10.31.8.0 | 0.0.0.0 | 255.255.252.0 | eth0 | U | Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: +-------+-------------+-----------+---------------+-----------+-------+ Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++ Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: +-------+-------------+---------+-----------+-------+ Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: | Route | Destination | Gateway | Interface | Flags | Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: +-------+-------------+---------+-----------+-------+ Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: | 0 | fe80::/64 | :: | eth0 | U | Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: ci-info: | 2 | multicast | :: | eth0 | U | Jul 22 08:24:56 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com 
cloud-init[806]: ci-info: +-------+-------------+---------+-----------+-------+ Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: Generating public/private rsa key pair. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: The key fingerprint is: Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: SHA256:bUbaAkgZdlIvOBWHua7F+bTegpIeTEiqNJceZaPE+w0 root@ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: The key's randomart image is: Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: +---[RSA 3072]----+ Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: | =+=+. | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: | .o.*oo | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: | .o++o.. . | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: | o..*.oo = | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |.o.*.E .S = | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |o +oo B .+ | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |. .o= = . | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: | +.. +. | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: | ... .... | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: +----[SHA256]-----+ Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: Generating public/private ecdsa key pair. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: The key fingerprint is: Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: SHA256:KB7WhiTG0YpD+Bv0ytm6sszOqqOmTGi7m1tme/WwDUE root@ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: The key's randomart image is: Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: +---[ECDSA 256]---+ Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |. .. | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |.o... E | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |.++o. . | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |o.+o.o o | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: | o *= + S | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |. 
=o.+ + | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |.o =. . * | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |X.B .. . o | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |&/=o. | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: +----[SHA256]-----+ Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: Generating public/private ed25519 key pair. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: The key fingerprint is: Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: SHA256:8WeNU/OzoYikEmXVGOUfxCrigoxrAOqAfMiXvII++gU root@ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: The key's randomart image is: Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: +--[ED25519 256]--+ Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: | o=... | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: | ...... | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: | o. ...o | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |. o. + ..+.o | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |* E o.. S.o =..o.| Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |== O ...o .o... +| Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |= + o... . . . . | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |.* o . | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: |=o+ | Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[806]: +----[SHA256]-----+ Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished cloud-init.service - Cloud-init: Network Stage. ░░ Subject: A start job for unit cloud-init.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-init.service has finished successfully. ░░ ░░ The job identifier is 233. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target cloud-config.target - Cloud-config availability. ░░ Subject: A start job for unit cloud-config.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-config.target has finished successfully. ░░ ░░ The job identifier is 232. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target network-online.target - Network is Online. ░░ Subject: A start job for unit network-online.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit network-online.target has finished successfully. ░░ ░░ The job identifier is 204. 
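cloud-init has just generated the RSA, ECDSA, and ED25519 host key pairs and logged their SHA256 fingerprints; the same three fingerprints reappear in the console banner printed near the end of this boot. They can be re-derived from the public keys on disk with ssh-keygen (a generic sketch):

    # print a SHA256 fingerprint for each host key; these should match the logged values
    for k in /etc/ssh/ssh_host_*_key.pub; do
        ssh-keygen -lf "$k"
    done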
Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting cloud-config.service - Cloud-init: Config Stage... ░░ Subject: A start job for unit cloud-config.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-config.service has begun execution. ░░ ░░ The job identifier is 231. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting kdump.service - Crash recovery kernel arming... ░░ Subject: A start job for unit kdump.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit kdump.service has begun execution. ░░ ░░ The job identifier is 258. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting restraintd.service - The restraint harness.... ░░ Subject: A start job for unit restraintd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit restraintd.service has begun execution. ░░ ░░ The job identifier is 254. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting rpc-statd-notify.service - Notify NFS peers of a restart... ░░ Subject: A start job for unit rpc-statd-notify.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpc-statd-notify.service has begun execution. ░░ ░░ The job identifier is 246. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting rsyslog.service - System Logging Service... ░░ Subject: A start job for unit rsyslog.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rsyslog.service has begun execution. ░░ ░░ The job identifier is 257. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sm-notify[889]: Version 2.8.3 starting Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting sshd.service - OpenSSH server daemon... ░░ Subject: A start job for unit sshd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has begun execution. ░░ ░░ The job identifier is 234. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... ░░ Subject: A start job for unit systemd-user-sessions.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-user-sessions.service has begun execution. ░░ ░░ The job identifier is 256. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd[891]: Server listening on 0.0.0.0 port 22. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd[891]: Server listening on :: port 22. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started sshd.service - OpenSSH server daemon. ░░ Subject: A start job for unit sshd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has finished successfully. ░░ ░░ The job identifier is 234. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started rpc-statd-notify.service - Notify NFS peers of a restart. 
░░ Subject: A start job for unit rpc-statd-notify.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rpc-statd-notify.service has finished successfully. ░░ ░░ The job identifier is 246. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. ░░ Subject: A start job for unit systemd-user-sessions.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-user-sessions.service has finished successfully. ░░ ░░ The job identifier is 256. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started restraintd.service - The restraint harness.. ░░ Subject: A start job for unit restraintd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit restraintd.service has finished successfully. ░░ ░░ The job identifier is 254. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started crond.service - Command Scheduler. ░░ Subject: A start job for unit crond.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit crond.service has finished successfully. ░░ ░░ The job identifier is 270. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started getty@tty1.service - Getty on tty1. ░░ Subject: A start job for unit getty@tty1.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit getty@tty1.service has finished successfully. ░░ ░░ The job identifier is 260. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. ░░ Subject: A start job for unit serial-getty@ttyS0.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit serial-getty@ttyS0.service has finished successfully. ░░ ░░ The job identifier is 264. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target getty.target - Login Prompts. ░░ Subject: A start job for unit getty.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit getty.target has finished successfully. ░░ ░░ The job identifier is 259. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com rsyslogd[890]: [origin software="rsyslogd" swVersion="8.2506.0-1.el10" x-pid="890" x-info="https://www.rsyslog.com"] start Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started rsyslog.service - System Logging Service. ░░ Subject: A start job for unit rsyslog.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit rsyslog.service has finished successfully. ░░ ░░ The job identifier is 257. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target multi-user.target - Multi-User System. ░░ Subject: A start job for unit multi-user.target has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit multi-user.target has finished successfully. ░░ ░░ The job identifier is 121. 
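multi-user.target, reached just above, is a synchronization point rather than a service: reaching it means every unit it wants (sshd, crond, rsyslog, the getty units, and so on) has been queued. Two generic ways to see what a target pulls in and what delayed it, assuming a systemd host like this one:

    systemctl list-dependencies multi-user.target      # units the target wants/requires
    systemd-analyze critical-chain multi-user.target   # slowest path to reaching it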
Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com crond[903]: (CRON) STARTUP (1.7.0) Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... ░░ Subject: A start job for unit systemd-update-utmp-runlevel.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-update-utmp-runlevel.service has begun execution. ░░ ░░ The job identifier is 271. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com crond[903]: (CRON) INFO (Syslog will be used instead of sendmail.) Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com crond[903]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 1% if used.) Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com crond[903]: (CRON) INFO (running with inotify support) Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com rsyslogd[890]: imjournal: journal files changed, reloading... [v8.2506.0-1.el10 try https://www.rsyslog.com/e/0 ] Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit systemd-update-utmp-runlevel.service has successfully entered the 'dead' state. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. ░░ Subject: A start job for unit systemd-update-utmp-runlevel.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit systemd-update-utmp-runlevel.service has finished successfully. ░░ ░░ The job identifier is 271. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com restraintd[896]: Listening on http://localhost:8081 Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[948]: Cloud-init v. 24.4-5.el10 running 'modules:config' at Tue, 22 Jul 2025 12:24:57 +0000. Up 23.28 seconds. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Stopping sshd.service - OpenSSH server daemon... ░░ Subject: A stop job for unit sshd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit sshd.service has begun execution. ░░ ░░ The job identifier is 507. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd[891]: Received signal 15; terminating. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: sshd.service: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit sshd.service has successfully entered the 'dead' state. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Stopped sshd.service - OpenSSH server daemon. ░░ Subject: A stop job for unit sshd.service has finished ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A stop job for unit sshd.service has finished. ░░ ░░ The job identifier is 507 and the job result is done. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting sshd.service - OpenSSH server daemon... 
░░ Subject: A start job for unit sshd.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has begun execution. ░░ ░░ The job identifier is 507. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd[952]: Server listening on 0.0.0.0 port 22. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd[952]: Server listening on :: port 22. Jul 22 08:24:57 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started sshd.service - OpenSSH server daemon. ░░ Subject: A start job for unit sshd.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit sshd.service has finished successfully. ░░ ░░ The job identifier is 507. Jul 22 08:24:58 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished cloud-config.service - Cloud-init: Config Stage. ░░ Subject: A start job for unit cloud-config.service has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-config.service has finished successfully. ░░ ░░ The job identifier is 231. Jul 22 08:24:58 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting cloud-final.service - Cloud-init: Final Stage... ░░ Subject: A start job for unit cloud-final.service has begun execution ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit cloud-final.service has begun execution. ░░ ░░ The job identifier is 241. Jul 22 08:24:58 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com kdumpctl[908]: kdump: Detected change(s) in the following file(s): /etc/fstab Jul 22 08:24:58 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[1044]: Cloud-init v. 24.4-5.el10 running 'modules:final' at Tue, 22 Jul 2025 12:24:58 +0000. Up 23.74 seconds. Jul 22 08:24:58 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[1084]: ############################################################# Jul 22 08:24:58 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[1087]: -----BEGIN SSH HOST KEY FINGERPRINTS----- Jul 22 08:24:58 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[1099]: 256 SHA256:KB7WhiTG0YpD+Bv0ytm6sszOqqOmTGi7m1tme/WwDUE root@ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com (ECDSA) Jul 22 08:24:58 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[1105]: 256 SHA256:8WeNU/OzoYikEmXVGOUfxCrigoxrAOqAfMiXvII++gU root@ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com (ED25519) Jul 22 08:24:58 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[1107]: 3072 SHA256:bUbaAkgZdlIvOBWHua7F+bTegpIeTEiqNJceZaPE+w0 root@ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com (RSA) Jul 22 08:24:58 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[1108]: -----END SSH HOST KEY FINGERPRINTS----- Jul 22 08:24:58 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[1109]: ############################################################# Jul 22 08:24:58 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com cloud-init[1044]: Cloud-init v. 24.4-5.el10 finished at Tue, 22 Jul 2025 12:24:58 +0000. Datasource DataSourceEc2Local. Up 23.88 seconds Jul 22 08:24:58 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished cloud-final.service - Cloud-init: Final Stage. 
Jul 22 08:24:58 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target cloud-init.target - Cloud-init target.
Jul 22 08:25:00 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com kernel: block xvda: the capability attribute has been deprecated.
Jul 22 08:25:00 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com kdumpctl[908]: kdump: Rebuilding /boot/initramfs-6.12.0-98.el10.x86_64kdump.img
Jul 22 08:25:00 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1424]: dracut-105-4.el10
Jul 22 08:25:01 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1427]: Executing: /usr/bin/dracut --list-modules
Jul 22 08:25:01 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1483]: dracut-105-4.el10
Jul 22 08:25:01 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1486]: Executing: /usr/bin/dracut --list-modules
Jul 22 08:25:01 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1547]: dracut-105-4.el10
Jul 22 08:25:01 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics --aggressive-strip --mount "/dev/disk/by-uuid/0a4c0384-ac05-49a1-bf2b-0105495224f1 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --add squash-erofs --squash-compressor lzma --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-6.12.0-98.el10.x86_64kdump.img 6.12.0-98.el10.x86_64
Jul 22 08:25:01 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com chronyd[678]: Selected source 66.59.198.94 (2.centos.pool.ntp.org)
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-bsod' will not be installed, because command '/usr/lib/systemd/systemd-bsod' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-pcrphase' will not be installed, because command '/usr/lib/systemd/systemd-pcrphase' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-portabled' will not be installed, because command 'portablectl' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-portabled' will not be installed, because command '/usr/lib/systemd/systemd-portabled' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'connman' will not be installed, because command 'connmand' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'multipath' will not be installed, because command 'multipath' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'crypt-gpg' will not be installed, because command 'gpg' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'hwdb' will not be installed, because it's in the list to be omitted!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'squash-squashfs' will not be installed, because command 'mksquashfs' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'squash-squashfs' will not be installed, because command 'unsquashfs' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'busybox' will not be installed, because command 'busybox' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-bsod' will not be installed, because command '/usr/lib/systemd/systemd-bsod' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-pcrphase' will not be installed, because command '/usr/lib/systemd/systemd-pcrphase' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-portabled' will not be installed, because command 'portablectl' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-portabled' will not be installed, because command '/usr/lib/systemd/systemd-portabled' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'connman' will not be installed, because command 'connmand' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'multipath' will not be installed, because command 'multipath' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'crypt-gpg' will not be installed, because command 'gpg' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'squash-squashfs' will not be installed, because command 'mksquashfs' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'squash-squashfs' will not be installed, because command 'unsquashfs' could not be found!
Jul 22 08:25:02 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Module 'busybox' will not be installed, because command 'busybox' could not be found!
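Each of the dracut skip messages above has the same machine-parseable shape. As an illustration only, here is a minimal Python sketch that tallies which modules dracut skipped and why; the journal-excerpt filename and the regular expression are assumptions for this sketch, not part of the log or of dracut itself:

import re

# Matches journal lines of the form seen above, e.g.
#   ... dracut[1550]: Module 'mdraid' will not be installed, because command 'mdadm' could not be found!
#   ... dracut[1550]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
SKIP_RE = re.compile(
    r"dracut\[\d+\]: Module '(?P<module>[^']+)' will not be installed, "
    r"because (?P<reason>.+)$"
)

def skipped_modules(journal_text):
    """Return a mapping of module name -> list of skip reasons, in log order."""
    skips = {}
    for line in journal_text.splitlines():
        m = SKIP_RE.search(line)
        if m:
            skips.setdefault(m.group("module"), []).append(m.group("reason"))
    return skips

if __name__ == "__main__":
    # "journal-excerpt.txt" is a hypothetical file holding lines like the ones above.
    with open("journal-excerpt.txt") as f:
        for module, reasons in skipped_modules(f.read()).items():
            print(f"{module}: skipped {len(reasons)} time(s); first reason: {reasons[0]}")

Run against this excerpt, the sketch would show, for example, that 'iscsi' is skipped three times (once per missing command) in each of dracut's two resolution passes.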
Jul 22 08:25:03 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: bash ***
Jul 22 08:25:03 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: shell-interpreter ***
Jul 22 08:25:03 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: systemd ***
Jul 22 08:25:03 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: fips ***
Jul 22 08:25:03 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: fips-crypto-policies ***
Jul 22 08:25:03 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: systemd-ask-password ***
Jul 22 08:25:03 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: systemd-initrd ***
Jul 22 08:25:03 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: systemd-journald ***
Jul 22 08:25:03 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: systemd-modules-load ***
Jul 22 08:25:03 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: systemd-sysctl ***
Jul 22 08:25:03 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: systemd-sysusers ***
Jul 22 08:25:03 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: systemd-tmpfiles ***
Jul 22 08:25:03 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: systemd-udevd ***
Jul 22 08:25:03 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: rngd ***
Jul 22 08:25:03 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: i18n ***
Jul 22 08:25:04 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: drm ***
Jul 22 08:25:04 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: prefixdevname ***
Jul 22 08:25:04 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: kernel-modules ***
Jul 22 08:25:04 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: kernel-modules-extra ***
Jul 22 08:25:04 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jul 22 08:25:04 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jul 22 08:25:04 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jul 22 08:25:04 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jul 22 08:25:04 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: fstab-sys ***
Jul 22 08:25:04 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: rootfs-block ***
Jul 22 08:25:04 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: squash-erofs ***
Jul 22 08:25:04 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: terminfo ***
Jul 22 08:25:04 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: udev-rules ***
Jul 22 08:25:05 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: dracut-systemd ***
Jul 22 08:25:05 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: usrmount ***
Jul 22 08:25:05 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: base ***
Jul 22 08:25:05 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: fs-lib ***
Jul 22 08:25:05 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: kdumpbase ***
Jul 22 08:25:05 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jul 22 08:25:05 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: memstrack ***
Jul 22 08:25:05 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: microcode_ctl-fw_dir_override ***
Jul 22 08:25:05 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: microcode_ctl module: mangling fw_dir
Jul 22 08:25:05 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jul 22 08:25:05 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jul 22 08:25:05 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: microcode_ctl: intel: caveats check for kernel version "6.12.0-98.el10.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel" to fw_dir variable
Jul 22 08:25:05 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jul 22 08:25:06 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: microcode_ctl: configuration "intel-06-4f-01" is ignored
Jul 22 08:25:06 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jul 22 08:25:06 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: microcode_ctl: configuration "intel-06-8f-08" is ignored
Jul 22 08:25:06 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: microcode_ctl: final fw_dir: "/usr/share/microcode_ctl/ucode_with_caveats/intel /lib/firmware/updates /lib/firmware"
Jul 22 08:25:06 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: openssl ***
Jul 22 08:25:06 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: shutdown ***
Jul 22 08:25:06 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including module: squash-lib ***
Jul 22 08:25:06 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Including modules done ***
Jul 22 08:25:06 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Installing kernel module dependencies ***
Jul 22 08:25:06 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Installing kernel module dependencies done ***
Jul 22 08:25:06 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Resolving executable dependencies ***
Jul 22 08:25:07 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Resolving executable dependencies done ***
Jul 22 08:25:07 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Hardlinking files ***
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Mode: real
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Method: sha256
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Files: 550
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Linked: 26 files
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Compared: 0 xattrs
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Compared: 44 files
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Saved: 14.25 MiB
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Duration: 0.179732 seconds
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Hardlinking files done ***
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Generating early-microcode cpio image ***
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Constructing GenuineIntel.bin ***
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Constructing GenuineIntel.bin ***
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Store current command line parameters ***
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: Stored kernel commandline:
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: No dracut internal kernel commandline stored in the initramfs
Jul 22 08:25:08 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Squashing the files inside the initramfs ***
Jul 22 08:25:22 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Squashing the files inside the initramfs done ***
Jul 22 08:25:22 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Creating image file '/boot/initramfs-6.12.0-98.el10.x86_64kdump.img' ***
Jul 22 08:25:23 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com dracut[1550]: *** Creating initramfs image file '/boot/initramfs-6.12.0-98.el10.x86_64kdump.img' done ***
Jul 22 08:25:23 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com kdumpctl[908]: kdump: kexec: loaded kdump kernel
Jul 22 08:25:23 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com kdumpctl[908]: kdump: Starting kdump: [OK]
Jul 22 08:25:23 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished kdump.service - Crash recovery kernel arming.
Jul 22 08:25:23 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Startup finished in 1.240s (kernel) + 4.214s (initrd) + 43.930s (userspace) = 49.385s.
░░ Subject: System start-up is now complete
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ All system services necessary queued for starting at boot have been
░░ started. Note that this does not mean that the machine is now idle as services
░░ might still be busy with completing start-up.
░░
░░ Kernel start-up required 1240194 microseconds.
░░
░░ Initrd start-up required 4214855 microseconds.
░░
░░ Userspace start-up required 43930358 microseconds.
Jul 22 08:25:25 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 22 08:27:11 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd-session[4452]: Accepted publickey for root from 10.30.33.122 port 48780 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU
Jul 22 08:27:11 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd-logind[663]: New session 1 of user root.
Jul 22 08:27:11 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Created slice user-0.slice - User Slice of UID 0.
Jul 22 08:27:11 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0...
Jul 22 08:27:11 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0.
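The "Startup finished" entry above is internally consistent: the microsecond figures journald records for the three boot phases add up to the printed total. A quick Python check, with the numbers copied from the log entry (truncating to whole milliseconds is an assumption that happens to reproduce the displayed values):

# Per-phase start-up times reported for this boot, in microseconds (from the log above).
kernel_us = 1_240_194
initrd_us = 4_214_855
userspace_us = 43_930_358

total_us = kernel_us + initrd_us + userspace_us
assert total_us == 49_385_407  # i.e. the "= 49.385s" shown in the summary line

# Truncating to milliseconds matches the summary: 4_214_855 us prints as 4.214s, not 4.215s.
for name, us in [("kernel", kernel_us), ("initrd", initrd_us),
                 ("userspace", userspace_us), ("total", total_us)]:
    print(f"{name}: {us // 1000 / 1000:.3f}s")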
Jul 22 08:27:11 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting user@0.service - User Manager for UID 0...
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd-logind[663]: New session 2 of user root.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com (systemd)[4457]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[4457]: Queued start job for default target default.target.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[4457]: Created slice app.slice - User Application Slice.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[4457]: grub-boot-success.timer - Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[4457]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[4457]: Reached target paths.target - Paths.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[4457]: Reached target timers.target - Timers.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[4457]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[4457]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[4457]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[4457]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[4457]: Reached target sockets.target - Sockets.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[4457]: Reached target basic.target - Basic System.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[4457]: Reached target default.target - Main User Target.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[4457]: Startup finished in 105ms.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started user@0.service - User Manager for UID 0.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started session-1.scope - Session 1 of User root.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd-session[4452]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd-session[4468]: Received disconnect from 10.30.33.122 port 48780:11: disconnected by user
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd-session[4468]: Disconnected from user root 10.30.33.122 port 48780
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd-session[4452]: pam_unix(sshd:session): session closed for user root
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: session-1.scope: Deactivated successfully.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd-logind[663]: Session 1 logged out. Waiting for processes to exit.
Jul 22 08:27:12 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd-logind[663]: Removed session 1.
Jul 22 08:27:20 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd-session[4505]: Accepted publickey for root from 10.31.9.41 port 54516 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU
Jul 22 08:27:20 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd-session[4506]: Accepted publickey for root from 10.31.9.41 port 54520 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU
Jul 22 08:27:20 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd-logind[663]: New session 3 of user root.
Jul 22 08:27:20 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started session-3.scope - Session 3 of User root.
Jul 22 08:27:20 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd-logind[663]: New session 4 of user root.
Jul 22 08:27:20 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started session-4.scope - Session 4 of User root.
Jul 22 08:27:20 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd-session[4505]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:27:20 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd-session[4506]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:27:20 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd-session[4512]: Received disconnect from 10.31.9.41 port 54520:11: disconnected by user
Jul 22 08:27:20 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd-session[4512]: Disconnected from user root 10.31.9.41 port 54520
Jul 22 08:27:20 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com sshd-session[4506]: pam_unix(sshd:session): session closed for user root
Jul 22 08:27:20 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: session-4.scope: Deactivated successfully.
Jul 22 08:27:20 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd-logind[663]: Session 4 logged out. Waiting for processes to exit.
Jul 22 08:27:20 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd-logind[663]: Removed session 4.
Jul 22 08:27:27 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com unknown: Running test '/Prepare-managed-node/tests/prep_managed_node' (serial number 1) with reboot count 0 and test restart count 0. (Be aware the test name is sanitized!)
Jul 22 08:27:28 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 22 08:27:28 ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 22 08:27:28 managed-node12 systemd-hostnamed[6375]: Hostname set to (static)
Jul 22 08:27:28 managed-node12 NetworkManager[718]: [1753187248.0603] hostname: static hostname changed from "ip-10-31-11-8.testing-farm.us-east-1.aws.redhat.com" to "managed-node12"
Jul 22 08:27:28 managed-node12 systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service...
Jul 22 08:27:28 managed-node12 systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service.
Jul 22 08:27:29 managed-node12 unknown: Leaving test '/Prepare-managed-node/tests/prep_managed_node' (serial number 1). (Be aware the test name is sanitized!)
Jul 22 08:27:38 managed-node12 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jul 22 08:27:58 managed-node12 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 22 08:28:00 managed-node12 sshd-session[7432]: Accepted publickey for root from 10.31.42.212 port 52474 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:28:00 managed-node12 systemd-logind[663]: New session 5 of user root.
Jul 22 08:28:00 managed-node12 systemd[1]: Started session-5.scope - Session 5 of User root.
Jul 22 08:28:00 managed-node12 sshd-session[7432]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:28:00 managed-node12 sshd-session[7435]: Received disconnect from 10.31.42.212 port 52474:11: disconnected by user
Jul 22 08:28:00 managed-node12 sshd-session[7435]: Disconnected from user root 10.31.42.212 port 52474
Jul 22 08:28:00 managed-node12 sshd-session[7432]: pam_unix(sshd:session): session closed for user root
Jul 22 08:28:00 managed-node12 systemd[1]: session-5.scope: Deactivated successfully.
Jul 22 08:28:00 managed-node12 systemd-logind[663]: Session 5 logged out. Waiting for processes to exit.
Jul 22 08:28:00 managed-node12 systemd-logind[663]: Removed session 5.
Jul 22 08:28:00 managed-node12 sshd-session[7460]: Accepted publickey for root from 10.31.42.212 port 52484 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:28:00 managed-node12 systemd-logind[663]: New session 6 of user root.
Jul 22 08:28:00 managed-node12 systemd[1]: Started session-6.scope - Session 6 of User root.
Jul 22 08:28:00 managed-node12 sshd-session[7460]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:28:00 managed-node12 sshd-session[7463]: Received disconnect from 10.31.42.212 port 52484:11: disconnected by user
Jul 22 08:28:00 managed-node12 sshd-session[7463]: Disconnected from user root 10.31.42.212 port 52484
Jul 22 08:28:00 managed-node12 sshd-session[7460]: pam_unix(sshd:session): session closed for user root
Jul 22 08:28:00 managed-node12 systemd[1]: session-6.scope: Deactivated successfully.
Jul 22 08:28:00 managed-node12 systemd-logind[663]: Session 6 logged out. Waiting for processes to exit.
Jul 22 08:28:00 managed-node12 systemd-logind[663]: Removed session 6.
Jul 22 08:28:15 managed-node12 sshd-session[7490]: Accepted publickey for root from 10.31.42.212 port 39946 ssh2: ECDSA SHA256:WU7noZiQSxkQHAT4JsTwkz7sTow5ig7aO2gcgaqEwOg
Jul 22 08:28:15 managed-node12 systemd-logind[663]: New session 7 of user root.
Jul 22 08:28:15 managed-node12 systemd[1]: Started session-7.scope - Session 7 of User root.
Jul 22 08:28:15 managed-node12 sshd-session[7490]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:28:17 managed-node12 sudo[7667]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nletgbtfmlzkyoswyqgajbaqiwkqytiu ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187296.3526263-8235-236960251229972/AnsiballZ_setup.py'
Jul 22 08:28:17 managed-node12 sudo[7667]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:28:17 managed-node12 python3.12[7670]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:28:17 managed-node12 sudo[7667]: pam_unix(sudo:session): session closed for user root
Jul 22 08:28:18 managed-node12 sudo[7848]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qghczhczlnwzpdnwegxrgwgcxgcfpjmt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187298.1620102-8426-119315829818735/AnsiballZ_stat.py'
Jul 22 08:28:18 managed-node12 sudo[7848]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:28:18 managed-node12 python3.12[7851]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:28:18 managed-node12 sudo[7848]: pam_unix(sudo:session): session closed for user root
Jul 22 08:28:19 managed-node12 sudo[8000]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chlkjovdvhqztjaypewowwkqfroqujmb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187298.7461765-8448-129987442455556/AnsiballZ_dnf.py'
Jul 22 08:28:19 managed-node12 sudo[8000]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:28:19 managed-node12 python3.12[8003]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:28:30 managed-node12 groupadd[8108]: group added to /etc/group: name=clevis, GID=993
Jul 22 08:28:30 managed-node12 groupadd[8108]: group added to /etc/gshadow: name=clevis
Jul 22 08:28:30 managed-node12 groupadd[8108]: new group: name=clevis, GID=993
Jul 22 08:28:30 managed-node12 useradd[8110]: new user: name=clevis, UID=993, GID=993, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jul 22 08:28:30 managed-node12 usermod[8114]: add 'clevis' to group 'tss'
Jul 22 08:28:30 managed-node12 usermod[8114]: add 'clevis' to shadow group 'tss'
Jul 22 08:28:30 managed-node12 dbus-broker-launch[620]: Noticed file-system modification, trigger reload.
░░ Subject: A configuration directory was written to
░░ Defined-By: dbus-broker
░░ Support: https://groups.google.com/forum/#!forum/bus1-devel
░░
░░ A write was detected to one of the directories containing D-Bus configuration
░░ files, triggering a configuration reload.
░░
░░ This functionality exists for backwards compatibility to pick up changes to
░░ D-Bus configuration without an explicit reload request. Typically when
░░ installing or removing third-party software causes D-Bus configuration files
░░ to be added or removed.
░░
░░ It is worth noting that this may cause partial configuration to be loaded in
░░ case dispatching this notification races with the writing of the configuration
░░ files. However, a future notification will then cause the configuration to be
░░ reloaded again.
Jul 22 08:28:30 managed-node12 dbus-broker-launch[620]: Noticed file-system modification, trigger reload.
Jul 22 08:28:30 managed-node12 groupadd[8121]: group added to /etc/group: name=polkitd, GID=114
Jul 22 08:28:30 managed-node12 groupadd[8121]: group added to /etc/gshadow: name=polkitd
Jul 22 08:28:30 managed-node12 groupadd[8121]: new group: name=polkitd, GID=114
Jul 22 08:28:31 managed-node12 useradd[8124]: new user: name=polkitd, UID=114, GID=114, home=/, shell=/sbin/nologin, from=none
Jul 22 08:28:31 managed-node12 dbus-broker-launch[620]: Noticed file-system modification, trigger reload.
Jul 22 08:28:31 managed-node12 dbus-broker-launch[620]: Noticed file-system modification, trigger reload.
Jul 22 08:28:31 managed-node12 systemd[1]: Listening on pcscd.socket - PC/SC Smart Card Daemon Activation Socket.
Jul 22 08:28:31 managed-node12 dbus-broker-launch[620]: Noticed file-system modification, trigger reload.
Jul 22 08:28:31 managed-node12 dbus-broker-launch[620]: Noticed file-system modification, trigger reload.
Jul 22 08:28:35 managed-node12 systemd[1]: Started run-p8154-i8454.service - [systemd-run] /usr/bin/systemctl start man-db-cache-update.
Jul 22 08:28:35 managed-node12 systemd[1]: Reload requested from client PID 8158 ('systemctl') (unit session-7.scope)...
Jul 22 08:28:35 managed-node12 systemd[1]: Reloading...
Jul 22 08:28:35 managed-node12 systemd-rc-local-generator[8195]: /etc/rc.d/rc.local is not marked executable, skipping.
Jul 22 08:28:35 managed-node12 systemd[1]: Reloading finished in 198 ms.
Jul 22 08:28:35 managed-node12 systemd[1]: Starting man-db-cache-update.service...
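The dbus-broker-launch entries above spell out a classic watch-and-reload design: any write to a watched configuration directory triggers a reload, a reload may race with a writer and pick up partial configuration, and correctness is restored because the next change notification forces another reload. A generic Python sketch of that pattern follows; it polls mtimes instead of using inotify for portability, and the directory path and key=value fragment format are assumptions for illustration. This is not dbus-broker's code:

import os
import time

CONF_DIR = "/tmp/conf.d"  # hypothetical configuration directory to watch

def snapshot_mtimes(path):
    """Record the mtime of every file currently in the directory."""
    return {name: os.stat(os.path.join(path, name)).st_mtime
            for name in sorted(os.listdir(path))}

def load_config(path):
    """Re-read every fragment in the directory.
    A writer racing with this read may leave us with a partial file; that is
    tolerated because the write bumps the mtime again, so the next poll
    triggers another, now-consistent reload (the behaviour described above)."""
    config = {}
    for name in sorted(os.listdir(path)):
        with open(os.path.join(path, name)) as f:
            for line in f:
                key, _, value = line.strip().partition("=")
                if key:
                    config[key] = value
    return config

def watch(path, interval=1.0):
    """Poll the directory and reload whenever any mtime changes."""
    seen = {}
    while True:
        current = snapshot_mtimes(path)
        if current != seen:
            seen = current
            config = load_config(path)
            print(f"reloaded {len(config)} settings")
        time.sleep(interval)

if __name__ == "__main__":
    os.makedirs(CONF_DIR, exist_ok=True)
    watch(CONF_DIR)

In the log, package installation (the clevis and polkitd scriptlets) is exactly the kind of writer that fires these notifications several times in a row.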
Jul 22 08:28:35 managed-node12 systemd[1]: Queuing reload/restart jobs for marked units…
Jul 22 08:28:35 managed-node12 systemd[1]: Reloading user@0.service - User Manager for UID 0...
Jul 22 08:28:35 managed-node12 systemd[4457]: Received SIGRTMIN+25 from PID 1 (systemd).
Jul 22 08:28:35 managed-node12 systemd[4457]: Reexecuting.
Jul 22 08:28:35 managed-node12 systemd[1]: Reloaded user@0.service - User Manager for UID 0.
Jul 22 08:28:36 managed-node12 sudo[8000]: pam_unix(sudo:session): session closed for user root
Jul 22 08:28:37 managed-node12 sudo[8750]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mygojpahswuobiqvujgaqfqaigogfuxi ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187316.7781928-9659-209068170690998/AnsiballZ_blivet.py'
Jul 22 08:28:37 managed-node12 sudo[8750]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:28:38 managed-node12 python3.12[8753]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={}
Jul 22 08:28:38 managed-node12 sudo[8750]: pam_unix(sudo:session): session closed for user root
Jul 22 08:28:39 managed-node12 sudo[8914]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slxdnbpwqcqfpqimccuqfdnxjrydvkxa ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187319.0663211-9893-27069676392682/AnsiballZ_dnf.py'
Jul 22 08:28:39 managed-node12 sudo[8914]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:28:39 managed-node12 python3.12[8917]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jul 22 08:28:40 managed-node12 sudo[8914]: pam_unix(sudo:session): session closed for user root
Jul 22 08:28:40 managed-node12 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jul 22 08:28:40 managed-node12 systemd[1]: Finished man-db-cache-update.service.
Jul 22 08:28:40 managed-node12 systemd[1]: run-p8154-i8454.service: Deactivated successfully.
Jul 22 08:28:40 managed-node12 sudo[9077]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-forhhudiskvnescihpkutdzivqzqaxnb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187320.2406383-10089-54346220762519/AnsiballZ_service_facts.py'
Jul 22 08:28:40 managed-node12 sudo[9077]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:28:40 managed-node12 python3.12[9080]: ansible-service_facts Invoked
Jul 22 08:28:42 managed-node12 sudo[9077]: pam_unix(sudo:session): session closed for user root
Jul 22 08:28:43 managed-node12 sudo[9347]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znomilciqdobjuqqoyedqfibuxeepwmz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187323.0702999-10430-52332909408/AnsiballZ_blivet.py'
Jul 22 08:28:43 managed-node12 sudo[9347]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:28:43 managed-node12 python3.12[9350]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={}
Jul 22 08:28:43 managed-node12 sudo[9347]: pam_unix(sudo:session): session closed for user root
Jul 22 08:28:44 managed-node12 sudo[9507]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzuckiawmqijanmyfthbjwpctjriqfwb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187323.8330836-10521-160921096237366/AnsiballZ_stat.py'
Jul 22 08:28:44 managed-node12 sudo[9507]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:28:44 managed-node12 python3.12[9510]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:28:44 managed-node12 sudo[9507]: pam_unix(sudo:session): session closed for user root
Jul 22 08:28:45 managed-node12 sudo[9667]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdivupiksurtkmgmgfazzndlyomqkqlr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187325.5619192-10631-264268707390536/AnsiballZ_stat.py'
Jul 22 08:28:45 managed-node12 sudo[9667]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:28:45 managed-node12 python3.12[9670]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jul 22 08:28:45 managed-node12 sudo[9667]: pam_unix(sudo:session): session closed for user root
Jul 22 08:28:46 managed-node12 sudo[9827]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfypylrayfahchavlmyzruqbcokxkiqh ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187326.3263235-10701-18152902840379/AnsiballZ_setup.py'
Jul 22 08:28:46 managed-node12 sudo[9827]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:28:46 managed-node12 python3.12[9830]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jul 22 08:28:47 managed-node12 sudo[9827]: pam_unix(sudo:session): session closed for user root
Jul 22 08:28:47 managed-node12 sudo[10014]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raaxevhaxflvpvjrscioapimhasbmgwb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187327.6835806-10886-216547091600496/AnsiballZ_dnf.py'
Jul 22 08:28:47 managed-node12 sudo[10014]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jul 22 08:28:48 managed-node12 python3.12[10017]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True
lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:28:48 managed-node12 sudo[10014]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:50 managed-node12 sudo[10173]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlpthbhovabuhyuvsdffjbnqrezoreqc ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187329.0085914-10983-232312183891591/AnsiballZ_find_unused_disk.py' Jul 22 08:28:50 managed-node12 sudo[10173]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:50 managed-node12 python3.12[10176]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 max_size=0 match_sector_size=False with_interface=None Jul 22 08:28:50 managed-node12 sudo[10173]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:51 managed-node12 sudo[10333]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlamfbiobgrjpwawvkqqmqernozmbuam ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187330.5133302-11123-189234312465411/AnsiballZ_command.py' Jul 22 08:28:51 managed-node12 sudo[10333]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:51 managed-node12 python3.12[10336]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:28:51 managed-node12 sudo[10333]: pam_unix(sudo:session): session closed for user root Jul 22 08:28:53 managed-node12 sshd-session[10364]: Accepted publickey for root from 10.31.42.212 port 34536 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:28:53 managed-node12 systemd-logind[663]: New session 8 of user root. ░░ Subject: A new session 8 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 8 has been created for the user root. ░░ ░░ The leading process of the session is 10364. Jul 22 08:28:53 managed-node12 systemd[1]: Started session-8.scope - Session 8 of User root. ░░ Subject: A start job for unit session-8.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-8.scope has finished successfully. ░░ ░░ The job identifier is 1563. Jul 22 08:28:53 managed-node12 sshd-session[10364]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:53 managed-node12 sshd-session[10367]: Received disconnect from 10.31.42.212 port 34536:11: disconnected by user Jul 22 08:28:53 managed-node12 sshd-session[10367]: Disconnected from user root 10.31.42.212 port 34536 Jul 22 08:28:53 managed-node12 sshd-session[10364]: pam_unix(sshd:session): session closed for user root Jul 22 08:28:53 managed-node12 systemd[1]: session-8.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-8.scope has successfully entered the 'dead' state. Jul 22 08:28:53 managed-node12 systemd-logind[663]: Session 8 logged out. Waiting for processes to exit. Jul 22 08:28:53 managed-node12 systemd-logind[663]: Removed session 8. 
░░ Subject: Session 8 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 8 has been terminated. Jul 22 08:28:53 managed-node12 sshd-session[10394]: Accepted publickey for root from 10.31.42.212 port 34538 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:28:53 managed-node12 systemd-logind[663]: New session 9 of user root. ░░ Subject: A new session 9 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 9 has been created for the user root. ░░ ░░ The leading process of the session is 10394. Jul 22 08:28:53 managed-node12 systemd[1]: Started session-9.scope - Session 9 of User root. ░░ Subject: A start job for unit session-9.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-9.scope has finished successfully. ░░ ░░ The job identifier is 1648. Jul 22 08:28:53 managed-node12 sshd-session[10394]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:28:53 managed-node12 sshd-session[10397]: Received disconnect from 10.31.42.212 port 34538:11: disconnected by user Jul 22 08:28:53 managed-node12 sshd-session[10397]: Disconnected from user root 10.31.42.212 port 34538 Jul 22 08:28:53 managed-node12 sshd-session[10394]: pam_unix(sshd:session): session closed for user root Jul 22 08:28:53 managed-node12 systemd[1]: session-9.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-9.scope has successfully entered the 'dead' state. Jul 22 08:28:53 managed-node12 systemd-logind[663]: Session 9 logged out. Waiting for processes to exit. Jul 22 08:28:53 managed-node12 systemd-logind[663]: Removed session 9. ░░ Subject: Session 9 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 9 has been terminated. 
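Note, not captured log output: the _raw_params string passed to ansible.legacy.command above (and repeated at each verification step) is a flattened multi-line shell script that the test uses to dump block-device state and the journal for debugging. Unflattened, it reads:

    set -x
    exec 1>&2    # redirect stdout to stderr so all output lands in one stream
    lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
    journalctl -ex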
Jul 22 08:29:02 managed-node12 python3.12[10604]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:29:03 managed-node12 sudo[10788]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhlczfvfglirvqwvprwtarxihfkbgory ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187343.2140403-12613-87357199387234/AnsiballZ_setup.py' Jul 22 08:29:03 managed-node12 sudo[10788]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:03 managed-node12 python3.12[10792]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:29:04 managed-node12 sudo[10788]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:06 managed-node12 sudo[10976]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beigwhbxenmfozxhqhodhgjcyfzeanjo ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187345.4491827-12826-262784654586620/AnsiballZ_stat.py' Jul 22 08:29:06 managed-node12 sudo[10976]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:06 managed-node12 python3.12[10979]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:06 managed-node12 sudo[10976]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:09 managed-node12 sudo[11134]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afoyprnqiwgalujyfxzvzvcnwqvpthim ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187348.0980816-13006-143967474758861/AnsiballZ_dnf.py' Jul 22 08:29:09 managed-node12 sudo[11134]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:09 managed-node12 python3.12[11137]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:29:10 managed-node12 sudo[11134]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:12 managed-node12 sudo[11293]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohvuzsfmbkbgwfpunvmipmesneernkyf ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187351.1391966-13292-82808335488666/AnsiballZ_blivet.py' Jul 22 08:29:12 managed-node12 sudo[11293]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:12 managed-node12 python3.12[11296]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 
'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:29:12 managed-node12 sudo[11293]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:14 managed-node12 sudo[11453]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdfedkjcneubyvpbnjdpqmpwnjybaiin ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187354.207348-14026-193647181272961/AnsiballZ_dnf.py' Jul 22 08:29:14 managed-node12 sudo[11453]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:14 managed-node12 python3.12[11456]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:29:15 managed-node12 sudo[11453]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:16 managed-node12 sudo[11612]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xevlhjvpizuujbrfmhmvcfyvkkqfjmba ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187355.2468772-14164-153718175425388/AnsiballZ_service_facts.py' Jul 22 08:29:16 managed-node12 sudo[11612]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:16 managed-node12 python3.12[11616]: ansible-service_facts Invoked Jul 22 08:29:17 managed-node12 sudo[11612]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:19 managed-node12 sudo[11883]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smzfqzaftuzryfogyzzfbpzpvtzjpxzo ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187359.2213848-14594-97041287549764/AnsiballZ_blivet.py' Jul 22 08:29:19 managed-node12 sudo[11883]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:19 managed-node12 python3.12[11887]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 
'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:29:19 managed-node12 sudo[11883]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:20 managed-node12 sudo[12044]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuvvihqnigwybvhsvszimgiottdswola ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187360.35107-14779-40802599141681/AnsiballZ_stat.py' Jul 22 08:29:20 managed-node12 sudo[12044]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:20 managed-node12 python3.12[12047]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:20 managed-node12 sudo[12044]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:23 managed-node12 sudo[12204]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqsxhlkxkbxdlduzfimqlhhxusecmitx ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187362.9420452-15017-85578760896198/AnsiballZ_stat.py' Jul 22 08:29:23 managed-node12 sudo[12204]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:23 managed-node12 python3.12[12207]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:23 managed-node12 sudo[12204]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:23 managed-node12 sudo[12364]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxrzqwadoaexpxmhlmbdulikhsvpppgk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187363.6854064-15068-195981157691181/AnsiballZ_setup.py' Jul 22 08:29:24 managed-node12 sudo[12364]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:24 managed-node12 python3.12[12367]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:29:24 managed-node12 sudo[12364]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:26 managed-node12 sudo[12551]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbvmsjammfqmmugxphztydakahxvemgd ; /usr/bin/python3.12 
/root/.ansible/tmp/ansible-tmp-1753187365.9755447-15191-35522789581812/AnsiballZ_dnf.py' Jul 22 08:29:26 managed-node12 sudo[12551]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:26 managed-node12 python3.12[12554]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:29:26 managed-node12 sudo[12551]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:28 managed-node12 sudo[12710]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbkmdtdyvewcrlmxutabboggnvvkyqig ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187367.5120945-15332-101359185648095/AnsiballZ_find_unused_disk.py' Jul 22 08:29:28 managed-node12 sudo[12710]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:29 managed-node12 python3.12[12713]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=10g max_return=1 with_interface=scsi max_size=0 match_sector_size=False Jul 22 08:29:29 managed-node12 sudo[12710]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:30 managed-node12 sudo[12870]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svyrbytimjpooqbcyxnjdsldpscocusv ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187369.5099466-15516-3115110961744/AnsiballZ_command.py' Jul 22 08:29:30 managed-node12 sudo[12870]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:30 managed-node12 python3.12[12873]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:29:31 managed-node12 sudo[12870]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:32 managed-node12 sshd-session[12901]: Accepted publickey for root from 10.31.42.212 port 60934 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:32 managed-node12 systemd-logind[663]: New session 10 of user root. ░░ Subject: A new session 10 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 10 has been created for the user root. ░░ ░░ The leading process of the session is 12901. Jul 22 08:29:32 managed-node12 systemd[1]: Started session-10.scope - Session 10 of User root. ░░ Subject: A start job for unit session-10.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-10.scope has finished successfully. ░░ ░░ The job identifier is 1733. 
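Note, not captured log output: the fedora.linux_system_roles.find_unused_disk calls above (min_size=5g earlier, min_size=10g with_interface=scsi here) scan the node for blank disks the LUKS test can safely use. A rough by-hand first pass, a sketch assuming only standard lsblk column semantics, would be:

    # List whole disks that carry no filesystem signature; the module applies
    # further checks (size bounds, partitions/holders, interface) on top of this.
    lsblk -p --noheadings -o NAME,TYPE,SIZE,FSTYPE | awk '$2 == "disk" && $4 == ""'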
Jul 22 08:29:32 managed-node12 sshd-session[12901]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:33 managed-node12 sshd-session[12904]: Received disconnect from 10.31.42.212 port 60934:11: disconnected by user Jul 22 08:29:33 managed-node12 sshd-session[12904]: Disconnected from user root 10.31.42.212 port 60934 Jul 22 08:29:33 managed-node12 sshd-session[12901]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:33 managed-node12 systemd[1]: session-10.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-10.scope has successfully entered the 'dead' state. Jul 22 08:29:33 managed-node12 systemd-logind[663]: Session 10 logged out. Waiting for processes to exit. Jul 22 08:29:33 managed-node12 systemd-logind[663]: Removed session 10. ░░ Subject: Session 10 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 10 has been terminated. Jul 22 08:29:33 managed-node12 sshd-session[12931]: Accepted publickey for root from 10.31.42.212 port 60946 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:33 managed-node12 systemd-logind[663]: New session 11 of user root. ░░ Subject: A new session 11 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 11 has been created for the user root. ░░ ░░ The leading process of the session is 12931. Jul 22 08:29:33 managed-node12 systemd[1]: Started session-11.scope - Session 11 of User root. ░░ Subject: A start job for unit session-11.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-11.scope has finished successfully. ░░ ░░ The job identifier is 1818. Jul 22 08:29:33 managed-node12 sshd-session[12931]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:33 managed-node12 sshd-session[12934]: Received disconnect from 10.31.42.212 port 60946:11: disconnected by user Jul 22 08:29:33 managed-node12 sshd-session[12934]: Disconnected from user root 10.31.42.212 port 60946 Jul 22 08:29:33 managed-node12 sshd-session[12931]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:33 managed-node12 systemd-logind[663]: Session 11 logged out. Waiting for processes to exit. Jul 22 08:29:33 managed-node12 systemd[1]: session-11.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-11.scope has successfully entered the 'dead' state. Jul 22 08:29:33 managed-node12 systemd-logind[663]: Removed session 11. ░░ Subject: Session 11 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 11 has been terminated. Jul 22 08:29:38 managed-node12 sshd-session[12961]: Accepted publickey for root from 10.31.42.212 port 60960 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:38 managed-node12 systemd-logind[663]: New session 12 of user root. ░░ Subject: A new session 12 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 12 has been created for the user root. 
░░ ░░ The leading process of the session is 12961. Jul 22 08:29:38 managed-node12 systemd[1]: Started session-12.scope - Session 12 of User root. ░░ Subject: A start job for unit session-12.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-12.scope has finished successfully. ░░ ░░ The job identifier is 1903. Jul 22 08:29:38 managed-node12 sshd-session[12961]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:38 managed-node12 sshd-session[12964]: Received disconnect from 10.31.42.212 port 60960:11: disconnected by user Jul 22 08:29:38 managed-node12 sshd-session[12964]: Disconnected from user root 10.31.42.212 port 60960 Jul 22 08:29:38 managed-node12 sshd-session[12961]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:38 managed-node12 systemd-logind[663]: Session 12 logged out. Waiting for processes to exit. Jul 22 08:29:38 managed-node12 systemd[1]: session-12.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-12.scope has successfully entered the 'dead' state. Jul 22 08:29:38 managed-node12 systemd-logind[663]: Removed session 12. ░░ Subject: Session 12 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 12 has been terminated. Jul 22 08:29:44 managed-node12 sudo[13171]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoplzkifamnbybuauxbuldteingjxqfs ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187381.5628417-16658-215054840484750/AnsiballZ_setup.py' Jul 22 08:29:44 managed-node12 sudo[13171]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:44 managed-node12 python3.12[13174]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:29:44 managed-node12 sudo[13171]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:48 managed-node12 sudo[13358]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipmhandgzqjjisqgasdstggykdqmgike ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187386.9650419-17184-236788743613810/AnsiballZ_stat.py' Jul 22 08:29:48 managed-node12 sudo[13358]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:48 managed-node12 python3.12[13361]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:29:48 managed-node12 sudo[13358]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:51 managed-node12 sudo[13516]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqjmtaerocdsdfjbopsbztpdvghbcsyc ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187390.0310044-17550-33709480781104/AnsiballZ_dnf.py' Jul 22 08:29:51 managed-node12 sudo[13516]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:51 managed-node12 python3.12[13519]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False 
allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:29:52 managed-node12 sudo[13516]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:54 managed-node12 sudo[13675]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eefunsaunbyqnisxijakzlsyhhdfdllb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187393.2844284-18158-181557127667373/AnsiballZ_blivet.py' Jul 22 08:29:54 managed-node12 sudo[13675]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:54 managed-node12 python3.12[13678]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:29:54 managed-node12 sudo[13675]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:55 managed-node12 sudo[13835]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaktblagrwkbajvxrqzwwfjpxsscfuix ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187395.7050936-18500-128420374797804/AnsiballZ_dnf.py' Jul 22 08:29:55 managed-node12 sudo[13835]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:56 managed-node12 python3.12[13838]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None 
releasever=None Jul 22 08:29:56 managed-node12 sudo[13835]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:57 managed-node12 sudo[13994]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmmkpznhltskpdmlimvlxubpefjdwjgl ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187396.7871468-18659-62556598330625/AnsiballZ_service_facts.py' Jul 22 08:29:57 managed-node12 sudo[13994]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:29:57 managed-node12 python3.12[13997]: ansible-service_facts Invoked Jul 22 08:29:59 managed-node12 sudo[13994]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:01 managed-node12 sudo[14264]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aisvderqefcvlmyyrvjodlnnwcvrxmvr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187400.7576227-19007-58039563271382/AnsiballZ_blivet.py' Jul 22 08:30:01 managed-node12 sudo[14264]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:01 managed-node12 python3.12[14267]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=True disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=False uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:30:01 managed-node12 sudo[14264]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:02 managed-node12 sudo[14424]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuzjeiwtntqdrdjwimebzlmnjvgxtxky ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187402.0691054-19234-103860448059442/AnsiballZ_stat.py' Jul 22 08:30:02 managed-node12 sudo[14424]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:02 managed-node12 python3.12[14427]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:02 managed-node12 sudo[14424]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:05 managed-node12 sudo[14584]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojaandczdncjxgmwvlboollqejpakpeu ; /usr/bin/python3.12 
/root/.ansible/tmp/ansible-tmp-1753187405.1624281-19583-6005366543196/AnsiballZ_stat.py' Jul 22 08:30:05 managed-node12 sudo[14584]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:05 managed-node12 python3.12[14587]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:05 managed-node12 sudo[14584]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:06 managed-node12 sudo[14744]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkdehlmgdaclwtkpquxsulxfcnpdrqhm ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187406.0599916-19629-244360955227013/AnsiballZ_setup.py' Jul 22 08:30:06 managed-node12 sudo[14744]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:06 managed-node12 python3.12[14747]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:30:07 managed-node12 sudo[14744]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:08 managed-node12 sudo[14931]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpjpihcyfipkleilgghwchqqvmagcofz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187408.0686984-19741-262972595930622/AnsiballZ_dnf.py' Jul 22 08:30:08 managed-node12 sudo[14931]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:08 managed-node12 python3.12[14934]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:30:08 managed-node12 sudo[14931]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:10 managed-node12 sudo[15090]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjltugeetiemwqssvcvzzpurtknbmasw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187409.3796942-19812-48459579608172/AnsiballZ_find_unused_disk.py' Jul 22 08:30:10 managed-node12 sudo[15090]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:10 managed-node12 python3.12[15093]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with max_return=2 min_size=0 max_size=0 match_sector_size=False with_interface=None Jul 22 08:30:11 managed-node12 sudo[15090]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:13 managed-node12 sudo[15250]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwyotssskutqpkwvfyvwxtbdgqmuyevt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187411.9659946-20069-271937252143562/AnsiballZ_command.py' Jul 22 08:30:13 managed-node12 sudo[15250]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:13 managed-node12 python3.12[15253]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex 
_uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:30:13 managed-node12 sudo[15250]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:15 managed-node12 sshd-session[15281]: Accepted publickey for root from 10.31.42.212 port 46522 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:15 managed-node12 systemd-logind[663]: New session 13 of user root. ░░ Subject: A new session 13 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 13 has been created for the user root. ░░ ░░ The leading process of the session is 15281. Jul 22 08:30:15 managed-node12 systemd[1]: Started session-13.scope - Session 13 of User root. ░░ Subject: A start job for unit session-13.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-13.scope has finished successfully. ░░ ░░ The job identifier is 1988. Jul 22 08:30:15 managed-node12 sshd-session[15281]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:15 managed-node12 sshd-session[15284]: Received disconnect from 10.31.42.212 port 46522:11: disconnected by user Jul 22 08:30:15 managed-node12 sshd-session[15284]: Disconnected from user root 10.31.42.212 port 46522 Jul 22 08:30:15 managed-node12 sshd-session[15281]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:15 managed-node12 systemd[1]: session-13.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-13.scope has successfully entered the 'dead' state. Jul 22 08:30:15 managed-node12 systemd-logind[663]: Session 13 logged out. Waiting for processes to exit. Jul 22 08:30:15 managed-node12 systemd-logind[663]: Removed session 13. ░░ Subject: Session 13 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 13 has been terminated. Jul 22 08:30:16 managed-node12 sshd-session[15311]: Accepted publickey for root from 10.31.42.212 port 46524 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:16 managed-node12 systemd-logind[663]: New session 14 of user root. ░░ Subject: A new session 14 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 14 has been created for the user root. ░░ ░░ The leading process of the session is 15311. Jul 22 08:30:16 managed-node12 systemd[1]: Started session-14.scope - Session 14 of User root. ░░ Subject: A start job for unit session-14.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-14.scope has finished successfully. ░░ ░░ The job identifier is 2073. 
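Note, not captured log output: each cycle above stats /etc/fstab and /etc/crypttab with get_checksum=True and checksum_algorithm=sha1, apparently so the test can compare the files before and after the role runs (creating or removing a LUKS volume is expected to edit /etc/crypttab). What those tasks record is essentially the by-hand equivalent of:

    # Capture checksums before and after a step to detect edits by the storage role.
    sha1sum /etc/fstab /etc/crypttab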
Jul 22 08:30:16 managed-node12 sshd-session[15311]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:16 managed-node12 sshd-session[15314]: Received disconnect from 10.31.42.212 port 46524:11: disconnected by user Jul 22 08:30:16 managed-node12 sshd-session[15314]: Disconnected from user root 10.31.42.212 port 46524 Jul 22 08:30:16 managed-node12 sshd-session[15311]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:16 managed-node12 systemd[1]: session-14.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-14.scope has successfully entered the 'dead' state. Jul 22 08:30:16 managed-node12 systemd-logind[663]: Session 14 logged out. Waiting for processes to exit. Jul 22 08:30:16 managed-node12 systemd-logind[663]: Removed session 14. ░░ Subject: Session 14 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 14 has been terminated. Jul 22 08:30:23 managed-node12 python3.12[15521]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:30:24 managed-node12 sudo[15705]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psexbajhmdpnsqxvkkmopmtrrxzcvmrq ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187424.4927926-21961-16065437194513/AnsiballZ_setup.py' Jul 22 08:30:24 managed-node12 sudo[15705]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:25 managed-node12 python3.12[15709]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:30:25 managed-node12 sudo[15705]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:27 managed-node12 sudo[15893]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbpmqpkqrylfqdbfyhglfihauuhgcjer ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187426.4102612-22277-144056283655581/AnsiballZ_stat.py' Jul 22 08:30:27 managed-node12 sudo[15893]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:27 managed-node12 python3.12[15896]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:27 managed-node12 sudo[15893]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:29 managed-node12 sudo[16051]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxiqwlqxonwvgbvzjjuycyfnpcbsteef ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187428.2450566-22518-132778686250342/AnsiballZ_dnf.py' Jul 22 08:30:29 managed-node12 sudo[16051]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:29 managed-node12 python3.12[16054]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True 
install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:30:29 managed-node12 sudo[16051]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:31 managed-node12 sudo[16210]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebsifymetnqhrcjwirccwabvabxbhrec ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187430.4171627-22661-85135179147598/AnsiballZ_blivet.py' Jul 22 08:30:31 managed-node12 sudo[16210]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:31 managed-node12 python3.12[16213]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:30:31 managed-node12 sudo[16210]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:32 managed-node12 sudo[16370]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tloohgvkcxvytishjkdgpwutlvckdnej ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187432.3887148-22830-105603714238949/AnsiballZ_dnf.py' Jul 22 08:30:32 managed-node12 sudo[16370]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:32 managed-node12 python3.12[16373]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:30:33 managed-node12 sudo[16370]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:34 managed-node12 sudo[16529]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo 
BECOME-SUCCESS-jlsudenjqkptjgotqecethmtooektsdj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187433.5204585-23026-233356277231799/AnsiballZ_service_facts.py' Jul 22 08:30:34 managed-node12 sudo[16529]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:34 managed-node12 python3.12[16532]: ansible-service_facts Invoked Jul 22 08:30:36 managed-node12 sudo[16529]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:37 managed-node12 sudo[16800]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khauvolmyionhgijcxgeyijlzjitfyzl ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187437.3017886-23388-61608644135891/AnsiballZ_blivet.py' Jul 22 08:30:37 managed-node12 sudo[16800]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:37 managed-node12 python3.12[16804]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:30:37 managed-node12 sudo[16800]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:38 managed-node12 sudo[16961]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxgtsvpojgttrfmgvsegurneeapovbgt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187438.4024286-23619-156165966528137/AnsiballZ_stat.py' Jul 22 08:30:38 managed-node12 sudo[16961]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:38 managed-node12 python3.12[16964]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:38 managed-node12 sudo[16961]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:41 managed-node12 sudo[17121]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzufomzldxtgphrqpmjcqpjlepwypadi ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187440.8607109-23785-273030261516100/AnsiballZ_stat.py' Jul 22 08:30:41 managed-node12 sudo[17121]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:41 managed-node12 python3.12[17124]: ansible-stat 
Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:41 managed-node12 sudo[17121]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:42 managed-node12 sudo[17281]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcigryzamljeufvigngiqtyqncokxiav ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187441.729605-23849-271642758010976/AnsiballZ_setup.py' Jul 22 08:30:42 managed-node12 sudo[17281]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:42 managed-node12 python3.12[17284]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:30:42 managed-node12 sudo[17281]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:43 managed-node12 sudo[17468]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpbgnjqanphbhpwkfsoxemnsiysbonhs ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187443.4897237-24100-94983145089427/AnsiballZ_dnf.py' Jul 22 08:30:43 managed-node12 sudo[17468]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:43 managed-node12 python3.12[17471]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:30:44 managed-node12 sudo[17468]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:45 managed-node12 sudo[17627]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvomyefxiginbjaurvpsoxmarnshhwqd ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187444.7849467-24277-177549560082155/AnsiballZ_find_unused_disk.py' Jul 22 08:30:45 managed-node12 sudo[17627]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:45 managed-node12 python3.12[17630]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 with_interface=scsi max_size=0 match_sector_size=False Jul 22 08:30:45 managed-node12 sudo[17627]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:46 managed-node12 sudo[17787]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhgocthizecyaqpcbmlkkrltxamlssve ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187446.0775106-24524-3966026378842/AnsiballZ_command.py' Jul 22 08:30:46 managed-node12 sudo[17787]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:47 managed-node12 python3.12[17790]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:30:47 managed-node12 sudo[17787]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:48 
managed-node12 sshd-session[17818]: Accepted publickey for root from 10.31.42.212 port 44276 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:48 managed-node12 systemd-logind[663]: New session 15 of user root. ░░ Subject: A new session 15 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 15 has been created for the user root. ░░ ░░ The leading process of the session is 17818. Jul 22 08:30:48 managed-node12 systemd[1]: Started session-15.scope - Session 15 of User root. ░░ Subject: A start job for unit session-15.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-15.scope has finished successfully. ░░ ░░ The job identifier is 2158. Jul 22 08:30:48 managed-node12 sshd-session[17818]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:48 managed-node12 sshd-session[17821]: Received disconnect from 10.31.42.212 port 44276:11: disconnected by user Jul 22 08:30:48 managed-node12 sshd-session[17821]: Disconnected from user root 10.31.42.212 port 44276 Jul 22 08:30:48 managed-node12 sshd-session[17818]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:48 managed-node12 systemd-logind[663]: Session 15 logged out. Waiting for processes to exit. Jul 22 08:30:48 managed-node12 systemd[1]: session-15.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-15.scope has successfully entered the 'dead' state. Jul 22 08:30:48 managed-node12 systemd-logind[663]: Removed session 15. ░░ Subject: Session 15 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 15 has been terminated. Jul 22 08:30:49 managed-node12 sshd-session[17848]: Accepted publickey for root from 10.31.42.212 port 44286 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:49 managed-node12 systemd-logind[663]: New session 16 of user root. ░░ Subject: A new session 16 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 16 has been created for the user root. ░░ ░░ The leading process of the session is 17848. Jul 22 08:30:49 managed-node12 systemd[1]: Started session-16.scope - Session 16 of User root. ░░ Subject: A start job for unit session-16.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-16.scope has finished successfully. ░░ ░░ The job identifier is 2243. Jul 22 08:30:49 managed-node12 sshd-session[17848]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:49 managed-node12 sshd-session[17851]: Received disconnect from 10.31.42.212 port 44286:11: disconnected by user Jul 22 08:30:49 managed-node12 sshd-session[17851]: Disconnected from user root 10.31.42.212 port 44286 Jul 22 08:30:49 managed-node12 sshd-session[17848]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:49 managed-node12 systemd[1]: session-16.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-16.scope has successfully entered the 'dead' state. 
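For reference, the disk-discovery step recorded above (the fedora.linux_system_roles.find_unused_disk call with min_size=5g, max_return=1, with_interface=scsi) corresponds to a test task of roughly the following shape. This is a sketch reconstructed from the logged module arguments; the task name matches the "Find unused disks in the system" recap entry later in this log, and the unused_disks_return register name comes from the skip condition reported at the end of the run, but the exact YAML layout is an assumption.

    - name: Find unused disks in the system
      fedora.linux_system_roles.find_unused_disk:
        min_size: 5g          # arguments as recorded in the journal entry above
        max_return: 1
        with_interface: scsi  # this first attempt asks for a SCSI disk; later attempts drop the filter
      register: unused_disks_return  # name taken from the skip condition logged later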
Jul 22 08:30:49 managed-node12 systemd-logind[663]: Session 16 logged out. Waiting for processes to exit. Jul 22 08:30:49 managed-node12 systemd-logind[663]: Removed session 16. ░░ Subject: Session 16 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 16 has been terminated. Jul 22 08:30:53 managed-node12 sudo[18058]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amgdumiwtjbunucznyiqniqtuprnxlnt ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187451.784597-25282-57169976248609/AnsiballZ_setup.py' Jul 22 08:30:53 managed-node12 sudo[18058]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:53 managed-node12 python3.12[18061]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:30:53 managed-node12 sudo[18058]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:55 managed-node12 sudo[18245]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynhyvdielzjajniglpjbjqvcduqcyksh ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187454.8897502-25563-130899003572423/AnsiballZ_stat.py' Jul 22 08:30:55 managed-node12 sudo[18245]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:55 managed-node12 python3.12[18248]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:30:55 managed-node12 sudo[18245]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:57 managed-node12 sudo[18403]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uehzaumapsqjbohkmlfhredhrsczqwnr ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187456.704238-25883-156668227823879/AnsiballZ_dnf.py' Jul 22 08:30:57 managed-node12 sudo[18403]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:57 managed-node12 python3.12[18406]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:30:58 managed-node12 sudo[18403]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:59 managed-node12 sudo[18562]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmtogcayfboyqnprlmycgaylesmmotps ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187458.8491778-26122-117915022066637/AnsiballZ_blivet.py' Jul 22 08:30:59 managed-node12 sudo[18562]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:30:59 managed-node12 python3.12[18565]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None 
disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:31:00 managed-node12 sudo[18562]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:01 managed-node12 sudo[18722]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxdimbmvsgrgcoghjunmlicxyfeafyyu ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187461.2228281-26282-151535537449772/AnsiballZ_dnf.py' Jul 22 08:31:01 managed-node12 sudo[18722]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:01 managed-node12 python3.12[18725]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:31:02 managed-node12 sudo[18722]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:03 managed-node12 sudo[18881]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qixgqjleorffxdifozwctfqakmrnadql ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187462.4694788-26395-166125324605259/AnsiballZ_service_facts.py' Jul 22 08:31:03 managed-node12 sudo[18881]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:03 managed-node12 python3.12[18884]: ansible-service_facts Invoked Jul 22 08:31:05 managed-node12 sudo[18881]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:06 managed-node12 sudo[19151]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boaleoimhkyzstqzbdtmqrorgsxtrtpz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187466.5736668-27034-29042463086825/AnsiballZ_blivet.py' Jul 22 08:31:06 managed-node12 sudo[19151]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:07 managed-node12 
python3.12[19154]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:31:07 managed-node12 sudo[19151]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:07 managed-node12 sudo[19311]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjofgkfbjvypplsnkgrpkglkxjfcwitn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187467.6430044-27127-28496304968322/AnsiballZ_stat.py' Jul 22 08:31:07 managed-node12 sudo[19311]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:08 managed-node12 python3.12[19314]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:08 managed-node12 sudo[19311]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:10 managed-node12 sudo[19471]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyzruavhbptvqtrsnuxjnunzekqfkckp ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187470.3593578-27302-259951036081504/AnsiballZ_stat.py' Jul 22 08:31:10 managed-node12 sudo[19471]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:10 managed-node12 python3.12[19474]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:10 managed-node12 sudo[19471]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:11 managed-node12 sudo[19631]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cycjrjqmbffpjiyzyesxamxqlkzlsltp ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187471.305025-27388-139351418826203/AnsiballZ_setup.py' Jul 22 08:31:11 managed-node12 sudo[19631]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:11 managed-node12 python3.12[19634]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:31:12 managed-node12 sudo[19631]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:13 managed-node12 
sudo[19818]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xavyqmlstwtktoflsiipyykztyaatnmj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187473.48329-27740-257789428230864/AnsiballZ_dnf.py' Jul 22 08:31:13 managed-node12 sudo[19818]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:14 managed-node12 python3.12[19821]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:31:14 managed-node12 sudo[19818]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:15 managed-node12 sudo[19977]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwafgnocfmfddphoyzqsdpzhxomnkbyk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187474.8536568-27981-255807436297791/AnsiballZ_find_unused_disk.py' Jul 22 08:31:15 managed-node12 sudo[19977]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:15 managed-node12 python3.12[19980]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with max_return=1 min_size=0 max_size=0 match_sector_size=False with_interface=None Jul 22 08:31:15 managed-node12 sudo[19977]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:16 managed-node12 sudo[20137]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyyqhngrmmmdumrdyenrjkbbarjapvyb ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187475.995491-28171-66597706382739/AnsiballZ_command.py' Jul 22 08:31:16 managed-node12 sudo[20137]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:17 managed-node12 python3.12[20140]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Jul 22 08:31:17 managed-node12 sudo[20137]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:18 managed-node12 sshd-session[20168]: Accepted publickey for root from 10.31.42.212 port 60522 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:31:18 managed-node12 systemd-logind[663]: New session 17 of user root. ░░ Subject: A new session 17 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 17 has been created for the user root. ░░ ░░ The leading process of the session is 20168. Jul 22 08:31:18 managed-node12 systemd[1]: Started session-17.scope - Session 17 of User root. ░░ Subject: A start job for unit session-17.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-17.scope has finished successfully. ░░ ░░ The job identifier is 2328. 
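The ansible.legacy.command invocation above ("Debug why there are no unused disks" in the recap) logs its _raw_params flattened onto one line. Read back as a script, it is plausibly a small shell block like the following sketch; the command text is taken from the log, while the YAML layout is assumed.

    - name: Debug why there are no unused disks
      ansible.builtin.shell: |
        set -x
        exec 1>&2
        lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
        journalctl -ex

The exec 1>&2 redirection sends all subsequent stdout to stderr, so the lsblk table and the journal excerpt arrive together on the task's stderr stream, which is how they end up interleaved in this log.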
Jul 22 08:31:18 managed-node12 sshd-session[20168]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:18 managed-node12 sshd-session[20171]: Received disconnect from 10.31.42.212 port 60522:11: disconnected by user Jul 22 08:31:18 managed-node12 sshd-session[20171]: Disconnected from user root 10.31.42.212 port 60522 Jul 22 08:31:18 managed-node12 sshd-session[20168]: pam_unix(sshd:session): session closed for user root Jul 22 08:31:18 managed-node12 systemd[1]: session-17.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-17.scope has successfully entered the 'dead' state. Jul 22 08:31:18 managed-node12 systemd-logind[663]: Session 17 logged out. Waiting for processes to exit. Jul 22 08:31:18 managed-node12 systemd-logind[663]: Removed session 17. ░░ Subject: Session 17 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 17 has been terminated. Jul 22 08:31:18 managed-node12 sshd-session[20198]: Accepted publickey for root from 10.31.42.212 port 60534 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:31:18 managed-node12 systemd-logind[663]: New session 18 of user root. ░░ Subject: A new session 18 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 18 has been created for the user root. ░░ ░░ The leading process of the session is 20198. Jul 22 08:31:18 managed-node12 systemd[1]: Started session-18.scope - Session 18 of User root. ░░ Subject: A start job for unit session-18.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-18.scope has finished successfully. ░░ ░░ The job identifier is 2413. Jul 22 08:31:18 managed-node12 sshd-session[20198]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:18 managed-node12 sshd-session[20201]: Received disconnect from 10.31.42.212 port 60534:11: disconnected by user Jul 22 08:31:18 managed-node12 sshd-session[20201]: Disconnected from user root 10.31.42.212 port 60534 Jul 22 08:31:18 managed-node12 sshd-session[20198]: pam_unix(sshd:session): session closed for user root Jul 22 08:31:18 managed-node12 systemd[1]: session-18.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-18.scope has successfully entered the 'dead' state. Jul 22 08:31:18 managed-node12 systemd-logind[663]: Session 18 logged out. Waiting for processes to exit. Jul 22 08:31:18 managed-node12 systemd-logind[663]: Removed session 18. ░░ Subject: Session 18 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 18 has been terminated. 
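Each pass of the test re-runs the storage role's bootstrap, which is why the same dnf transactions repeat in the journal below. The package set in the upcoming ansible.legacy.dnf entry ("Make sure blivet is available" in the recap) pins down the role's provider dependencies. A sketch of the equivalent task follows; the package list and state are taken from the logged arguments, while the module spelling is an assumption (the role may invoke the generic package module, which resolves to dnf on this host).

    - name: Make sure blivet is available
      ansible.builtin.dnf:
        name:
          - python3-blivet
          - libblockdev-crypto
          - libblockdev-dm
          - libblockdev-fs
          - libblockdev-lvm
          - libblockdev-mdraid
          - libblockdev-swap
          - xfsprogs
          - stratisd
          - stratis-cli
          - libblockdev
          - vdo
        state: present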
Jul 22 08:31:23 managed-node12 sudo[20408]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbprgxqymuuwfecdlddyxgjtuyxmrbyz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187482.1271799-28857-53603674760018/AnsiballZ_setup.py' Jul 22 08:31:23 managed-node12 sudo[20408]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:23 managed-node12 python3.12[20412]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:31:23 managed-node12 sudo[20408]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:27 managed-node12 sudo[20596]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smpcosneojapciyengsfwlxnpgjkdffl ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187486.228287-29150-249357175352697/AnsiballZ_stat.py' Jul 22 08:31:27 managed-node12 sudo[20596]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:27 managed-node12 python3.12[20599]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:27 managed-node12 sudo[20596]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:29 managed-node12 sudo[20754]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itbfgplcglvyuphavlryczwyegrmewcx ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187488.4425972-29414-211618724628510/AnsiballZ_dnf.py' Jul 22 08:31:29 managed-node12 sudo[20754]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:29 managed-node12 python3.12[20757]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-fs', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev', 'vdo'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:31:29 managed-node12 sudo[20754]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:31 managed-node12 sudo[20913]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipatxornhrppcvakbcqgfjrhmwjdyyjk ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187490.5656242-29664-172857751526422/AnsiballZ_blivet.py' Jul 22 08:31:31 managed-node12 sudo[20913]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:31 managed-node12 python3.12[20916]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 
'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=False safe_mode=True diskvolume_mkfs_option_map={} Jul 22 08:31:31 managed-node12 sudo[20913]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:33 managed-node12 sudo[21073]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgbzemttzzqvevguuijaqxaotovedvbh ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187492.8642375-29969-43159084411532/AnsiballZ_dnf.py' Jul 22 08:31:33 managed-node12 sudo[21073]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:33 managed-node12 python3.12[21076]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:31:33 managed-node12 sudo[21073]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:35 managed-node12 sudo[21232]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieyodbatzgvdehdbhdagmrutecyptftn ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187493.9835353-30099-82550404033705/AnsiballZ_service_facts.py' Jul 22 08:31:35 managed-node12 sudo[21232]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:35 managed-node12 python3.12[21235]: ansible-service_facts Invoked Jul 22 08:31:36 managed-node12 sudo[21232]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:38 managed-node12 sudo[21502]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abeigssxfhvmuicydelmikeqfmmmpcav ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187497.8987617-30613-241470706517178/AnsiballZ_blivet.py' Jul 22 08:31:38 managed-node12 sudo[21502]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:38 managed-node12 python3.12[21506]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 
'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=False packages_only=False diskvolume_mkfs_option_map={} Jul 22 08:31:38 managed-node12 sudo[21502]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:39 managed-node12 sudo[21663]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veuzohzvchsuxbgchjsmqrrioytxwoyq ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187498.9814034-30789-71884367601442/AnsiballZ_stat.py' Jul 22 08:31:39 managed-node12 sudo[21663]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:39 managed-node12 python3.12[21666]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:39 managed-node12 sudo[21663]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:42 managed-node12 sudo[21824]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlzjzgabcyuaymbnazvjvlrgevqwesft ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187501.8199399-31019-157428640136307/AnsiballZ_stat.py' Jul 22 08:31:42 managed-node12 sudo[21824]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:42 managed-node12 python3.12[21827]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Jul 22 08:31:42 managed-node12 sudo[21824]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:42 managed-node12 sudo[21984]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-humzbkjtkwuqsdjyrlktfyfzvbwehyvw ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187502.580713-31121-114100829355208/AnsiballZ_setup.py' Jul 22 08:31:42 managed-node12 sudo[21984]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:43 managed-node12 python3.12[21987]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Jul 22 08:31:43 managed-node12 sudo[21984]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:44 managed-node12 sudo[22171]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnpueawhwuawdwcynzznpghjpdvjamvz ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187504.3955383-31383-82381016253171/AnsiballZ_dnf.py' Jul 22 08:31:44 managed-node12 sudo[22171]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 
22 08:31:44 managed-node12 python3.12[22174]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Jul 22 08:31:45 managed-node12 sudo[22171]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:46 managed-node12 sudo[22330]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpngvwlxcicwdgvppzdqfkgfnbjghkcy ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187505.6003256-31577-107370731081078/AnsiballZ_find_unused_disk.py' Jul 22 08:31:46 managed-node12 sudo[22330]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:46 managed-node12 python3.12[22333]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with min_size=5g max_return=1 max_size=0 match_sector_size=False with_interface=None Jul 22 08:31:46 managed-node12 sudo[22330]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:47 managed-node12 sudo[22490]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wawkollqjjjsjrqoyrpjwapxnxoozwlj ; /usr/bin/python3.12 /root/.ansible/tmp/ansible-tmp-1753187507.0185835-31689-264410958835245/AnsiballZ_command.py' Jul 22 08:31:47 managed-node12 sudo[22490]: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:47 managed-node12 python3.12[22493]: ansible-ansible.legacy.command Invoked with _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None

TASK [Set unused_disks if necessary] *******************************************
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:29
Tuesday 22 July 2025 08:31:48 -0400 (0:00:01.477) 0:00:26.346 **********
skipping: [managed-node12] => {
    "changed": false,
    "false_condition": "'Unable to find unused disk' not in unused_disks_return.disks",
    "skip_reason": "Conditional result was False"
}

TASK [Exit playbook when there's not enough unused disks in the system] ********
task path: /tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:34
Tuesday 22 July 2025 08:31:48 -0400 (0:00:00.156) 0:00:26.502 **********
fatal: [managed-node12]: FAILED! => {
    "changed": false
}

MSG:

Unable to find enough unused disks. Exiting playbook.

PLAY RECAP *********************************************************************
managed-node12             : ok=28   changed=0    unreachable=0    failed=1    skipped=22   rescued=0    ignored=0

SYSTEM ROLES ERRORS BEGIN v1
[
    {
        "ansible_version": "2.17.13",
        "end_time": "2025-07-22T12:31:48.786271+00:00Z",
        "host": "managed-node12",
        "message": "Unable to find enough unused disks. Exiting playbook.",
        "start_time": "2025-07-22T12:31:48.525305+00:00Z",
        "task_name": "Exit playbook when there's not enough unused disks in the system",
        "task_path": "/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:34"
    }
]
SYSTEM ROLES ERRORS END v1

TASKS RECAP ********************************************************************
Tuesday 22 July 2025 08:31:48 -0400 (0:00:00.270) 0:00:26.773 **********
===============================================================================
fedora.linux_system_roles.storage : Get service facts ------------------- 3.46s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
Gathering Facts --------------------------------------------------------- 2.07s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks.yml:2
fedora.linux_system_roles.storage : Get required packages --------------- 1.82s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19
fedora.linux_system_roles.storage : Make sure blivet is available ------- 1.74s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
Find unused disks in the system ----------------------------------------- 1.53s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:11
Debug why there are no unused disks ------------------------------------- 1.48s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:20
fedora.linux_system_roles.storage : Update facts ------------------------ 1.45s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224
fedora.linux_system_roles.storage : Check if system is ostree ----------- 1.45s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
Ensure test packages ---------------------------------------------------- 1.25s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:2
fedora.linux_system_roles.storage : Make sure required packages are installed --- 1.11s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38
fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state --- 1.06s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70
fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file --- 0.67s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197
fedora.linux_system_roles.storage : Check if /etc/fstab is present ------ 0.64s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92
fedora.linux_system_roles.storage : Set platform/version specific variables --- 0.43s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab --- 0.35s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:161
fedora.linux_system_roles.storage : Set up new/current mounts ----------- 0.34s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:166
fedora.linux_system_roles.storage : Include the appropriate provider tasks --- 0.33s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
fedora.linux_system_roles.storage : Remove obsolete mounts -------------- 0.30s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:150
fedora.linux_system_roles.storage : Ensure ansible_facts used by role --- 0.28s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
Run the role ------------------------------------------------------------ 0.27s
/tmp/collections-nSC/ansible_collections/fedora/linux_system_roles/tests/storage/tests_luks.yml:72

Jul 22 08:31:48 managed-node12 sudo[22490]: pam_unix(sudo:session): session closed for user root Jul 22 08:31:49 managed-node12 sshd-session[22521]: Accepted publickey for root from 10.31.42.212 port 37930 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:31:49 managed-node12 systemd-logind[663]: New session 19 of user root. ░░ Subject: A new session 19 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 19 has been created for the user root. ░░ ░░ The leading process of the session is 22521.
Jul 22 08:31:49 managed-node12 systemd[1]: Started session-19.scope - Session 19 of User root. ░░ Subject: A start job for unit session-19.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-19.scope has finished successfully. ░░ ░░ The job identifier is 2498. Jul 22 08:31:49 managed-node12 sshd-session[22521]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Jul 22 08:31:49 managed-node12 sshd-session[22524]: Received disconnect from 10.31.42.212 port 37930:11: disconnected by user Jul 22 08:31:49 managed-node12 sshd-session[22524]: Disconnected from user root 10.31.42.212 port 37930 Jul 22 08:31:49 managed-node12 sshd-session[22521]: pam_unix(sshd:session): session closed for user root Jul 22 08:31:49 managed-node12 systemd[1]: session-19.scope: Deactivated successfully. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ The unit session-19.scope has successfully entered the 'dead' state. Jul 22 08:31:49 managed-node12 systemd-logind[663]: Session 19 logged out. Waiting for processes to exit. Jul 22 08:31:49 managed-node12 systemd-logind[663]: Removed session 19. ░░ Subject: Session 19 has been terminated ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A session with the ID 19 has been terminated. Jul 22 08:31:49 managed-node12 sshd-session[22551]: Accepted publickey for root from 10.31.42.212 port 37940 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:31:49 managed-node12 systemd-logind[663]: New session 20 of user root. ░░ Subject: A new session 20 has been created for user root ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ Documentation: sd-login(3) ░░ ░░ A new session with the ID 20 has been created for the user root. ░░ ░░ The leading process of the session is 22551. Jul 22 08:31:49 managed-node12 systemd[1]: Started session-20.scope - Session 20 of User root. ░░ Subject: A start job for unit session-20.scope has finished successfully ░░ Defined-By: systemd ░░ Support: https://access.redhat.com/support ░░ ░░ A start job for unit session-20.scope has finished successfully. ░░ ░░ The job identifier is 2583. Jul 22 08:31:49 managed-node12 sshd-session[22551]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
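In sum, the run fails because every find_unused_disk attempt, with and without the SCSI filter, reports "Unable to find unused disk". The guard in get_unused_disk.yml that turns that marker into the fatal result above behaves roughly like the following sketch. The task names and the skip condition are taken verbatim from this log; the set_fact value and the final when expression are assumptions, since only the failure path is visible in this run.

    - name: Set unused_disks if necessary
      ansible.builtin.set_fact:
        unused_disks: "{{ unused_disks_return.disks }}"  # assumed assignment; only the condition below is logged
      when: "'Unable to find unused disk' not in unused_disks_return.disks"

    - name: Exit playbook when there's not enough unused disks in the system
      ansible.builtin.fail:
        msg: Unable to find enough unused disks. Exiting playbook.
      when: unused_disks is not defined  # assumed guard; the exact expression is not shown in this log

On this host the practical remedy is environmental rather than a code change: the LUKS scenario needs at least one unused disk of min_size=5g or larger attached to managed-node12 before the test can proceed.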