ansible-playbook [core 2.17.14]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /tmp/collections-H6s
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.12.12 (main, Nov 14 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-14)] (/usr/bin/python3.12)
  jinja version = 3.1.6
  libyaml = True
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_skip_toolkit.yml ***********************************************
1 plays in /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml

PLAY [Verify if role configures a custom storage properly] *********************

TASK [Gathering Facts] *********************************************************
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:3
Saturday 13 December 2025 06:28:09 -0500 (0:00:00.017) 0:00:00.017 *****
[WARNING]: Platform linux on host managed-node1 is using the discovered Python
interpreter at /usr/bin/python3.9, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html
for more information.
ok: [managed-node1]
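Note: the interpreter-discovery warning above goes away if the interpreter is pinned per host instead of discovered. A minimal sketch as a YAML inventory (the host name and interpreter path come from the log; the pinning itself is not something this test run does):

all:
  hosts:
    managed-node1:
      # Pin the discovered interpreter so a later Python install cannot
      # change what the discovered path resolves to between runs.
      ansible_python_interpreter: /usr/bin/python3.9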
TASK [Skip unsupported architectures] ******************************************
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:23
Saturday 13 December 2025 06:28:10 -0500 (0:00:01.068) 0:00:01.086 *****
included: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/skip_unsupported_archs.yml for managed-node1

TASK [Gather architecture facts] ***********************************************
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/skip_unsupported_archs.yml:3
Saturday 13 December 2025 06:28:10 -0500 (0:00:00.015) 0:00:01.101 *****
skipping: [managed-node1] => { "changed": false, "false_condition": "\"architecture\" not in ansible_facts.keys() | list", "skip_reason": "Conditional result was False" }

TASK [Skip unsupported architectures] ******************************************
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/skip_unsupported_archs.yml:8
Saturday 13 December 2025 06:28:10 -0500 (0:00:00.031) 0:00:01.133 *****
META: end_host conditional evaluated to False, continuing execution for managed-node1
skipping: [managed-node1] => { "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node1" }
MSG: end_host conditional evaluated to false, continuing execution for managed-node1
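Note: the two tasks above from tasks/skip_unsupported_archs.yml form a guard-and-bail pattern: gather the architecture fact only when it is missing, then end the play for the host if the architecture is unsupported. A minimal sketch of that pattern (the supported-architecture list and the gather_subset tokens are assumptions; only the task names and the first when-condition are visible in the log):

- name: Gather architecture facts
  ansible.builtin.setup:
    gather_subset:
      - "!all"
      - "!min"
      - architecture  # assumed subset token, implied by the false_condition above
  when: '"architecture" not in ansible_facts.keys() | list'

- name: Skip unsupported architectures
  ansible.builtin.meta: end_host
  when: ansible_facts["architecture"] not in __supported_archs  # __supported_archs is hypothetical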
TASK [Ensure test packages] ****************************************************
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:31
Saturday 13 December 2025 06:28:10 -0500 (0:00:00.007) 0:00:01.140 *****
ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] }
MSG: Nothing to do
lsrpackages: util-linux-core

TASK [Find unused disks in the system] *****************************************
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:40
Saturday 13 December 2025 06:28:12 -0500 (0:00:01.455) 0:00:02.595 *****
ok: [managed-node1] => { "changed": false, "disks": [ "sda", "sdb" ], "info": [ "Line: NAME=\"/dev/sda\" TYPE=\"disk\" SIZE=\"10737418240\" FSTYPE=\"\" LOG_SEC=\"512\"", "Line: NAME=\"/dev/sdb\" TYPE=\"disk\" SIZE=\"10737418240\" FSTYPE=\"\" LOG_SEC=\"512\"", "Line: NAME=\"/dev/xvda\" TYPE=\"disk\" SIZE=\"268435456000\" FSTYPE=\"\" LOG_SEC=\"512\"", "Line: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"268434390528\" FSTYPE=\"xfs\" LOG_SEC=\"512\"", "Line type [part] is not disk: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"268434390528\" FSTYPE=\"xfs\" LOG_SEC=\"512\"", "filename [xvda1] is a partition", "Disk [/dev/xvda] attrs [{'type': 'disk', 'size': '268435456000', 'fstype': '', 'ssize': '512'}] has partitions" ] }

TASK [Debug why there are no unused disks] *************************************
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:49
Saturday 13 December 2025 06:28:13 -0500 (0:00:01.461) 0:00:04.057 *****
skipping: [managed-node1] => { "changed": false, "false_condition": "'Unable to find unused disk' in unused_disks_return.disks", "skip_reason": "Conditional result was False" }

TASK [Set unused_disks if necessary] *******************************************
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:58
Saturday 13 December 2025 06:28:13 -0500 (0:00:00.013) 0:00:04.070 *****
ok: [managed-node1] => { "ansible_facts": { "unused_disks": [ "sda", "sdb" ] }, "changed": false }

TASK [Exit playbook when there's not enough unused disks in the system] ********
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:63
Saturday 13 December 2025 06:28:13 -0500 (0:00:00.015) 0:00:04.086 *****
skipping: [managed-node1] => { "changed": false, "false_condition": "unused_disks | d([]) | length < disks_needed | d(1)", "skip_reason": "Conditional result was False" }

TASK [Prepare storage] *********************************************************
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:68
Saturday 13 December 2025 06:28:13 -0500 (0:00:00.030) 0:00:04.117 *****
included: fedora.linux_system_roles.storage for managed-node1

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2
Saturday 13 December 2025 06:28:13 -0500 (0:00:00.023) 0:00:04.140 *****
included: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed-node1

TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] ***
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
Saturday 13 December 2025 06:28:13 -0500 (0:00:00.018) 0:00:04.159 *****
skipping: [managed-node1] => { "changed": false, "false_condition": "__storage_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
Saturday 13 December 2025 06:28:13 -0500 (0:00:00.032) 0:00:04.191 *****
skipping: [managed-node1] => (item=RedHat.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "RedHat.yml", "skip_reason": "Conditional result was False" }
skipping: [managed-node1] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False" }
ok: [managed-node1] => (item=CentOS_9.yml) => { "ansible_facts": { "blivet_package_list": [ "python3-blivet", "libblockdev-crypto", "libblockdev-dm", "libblockdev-lvm", "libblockdev-mdraid", "libblockdev-swap", "vdo", "kmod-kvdo", "xfsprogs", "stratisd", "stratis-cli", "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}" ] }, "ansible_included_var_files": [ "/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" }
ok: [managed-node1] => (item=CentOS_9.yml) => { "ansible_facts": { "blivet_package_list": [ "python3-blivet", "libblockdev-crypto", "libblockdev-dm", "libblockdev-lvm", "libblockdev-mdraid", "libblockdev-swap", "vdo", "kmod-kvdo", "xfsprogs", "stratisd", "stratis-cli", "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}" ] }, "ansible_included_var_files": [ "/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" }
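Note: the vars-loading loop above checks each candidate file with `__vars_file is file` and includes the ones that exist; CentOS_9.yml is included twice because two candidates resolve to the same file name. A minimal sketch of the pattern (the exact candidate list is an assumption inferred from the items RedHat.yml, CentOS.yml, and the duplicated CentOS_9.yml):

- name: Set platform/version specific variables
  ansible.builtin.include_vars: "{{ __vars_file }}"
  loop:
    - "{{ ansible_facts['os_family'] }}.yml"
    - "{{ ansible_facts['distribution'] }}.yml"
    - "{{ ansible_facts['distribution'] }}_{{ ansible_facts['distribution_major_version'] }}.yml"
    - "{{ ansible_facts['distribution'] }}_{{ ansible_facts['distribution_version'] }}.yml"
  vars:
    __vars_file: "{{ role_path }}/vars/{{ item }}"
  when: __vars_file is file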
TASK [fedora.linux_system_roles.storage : Check if system is ostree] ***********
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
Saturday 13 December 2025 06:28:14 -0500 (0:00:00.039) 0:00:04.230 *****
ok: [managed-node1] => { "changed": false, "stat": { "exists": false } }

TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] ***
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30
Saturday 13 December 2025 06:28:14 -0500 (0:00:00.424) 0:00:04.655 *****
ok: [managed-node1] => { "ansible_facts": { "__storage_is_ostree": false }, "changed": false }

TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] ***
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:5
Saturday 13 December 2025 06:28:14 -0500 (0:00:00.021) 0:00:04.676 *****
ok: [managed-node1] => { "ansible_facts": { "_storage_pools_list": [] }, "changed": false }

TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] ***
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:9
Saturday 13 December 2025 06:28:14 -0500 (0:00:00.014) 0:00:04.690 *****
ok: [managed-node1] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false }

TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] ***
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
Saturday 13 December 2025 06:28:14 -0500 (0:00:00.013) 0:00:04.704 *****
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
included: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed-node1

TASK [fedora.linux_system_roles.storage : Make sure blivet is available] *******
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
Saturday 13 December 2025 06:28:14 -0500 (0:00:00.039) 0:00:04.743 *****
changed: [managed-node1] => { "changed": true, "rc": 0, "results": [ "Installed: ndctl-libs-82-1.el9.x86_64", "Installed: libbytesize-2.5-3.el9.x86_64", "Installed: jose-14-1.el9.x86_64", "Installed: libblockdev-kbd-2.28-16.el9.x86_64", "Installed: stratis-cli-3.7.0-1.el9.noarch", "Installed: stratisd-3.7.3-1.el9.x86_64", "Installed: libblockdev-loop-2.28-16.el9.x86_64", "Installed: lsof-4.94.0-3.el9.x86_64", "Installed: daxctl-libs-82-1.el9.x86_64", "Installed: python3-psutil-5.8.0-12.el9.x86_64", "Installed: libaio-0.3.111-13.el9.x86_64", "Installed: blivet-data-1:3.6.0-29.el9.noarch", "Installed: libblockdev-lvm-2.28-16.el9.x86_64", "Installed: lvm2-9:2.03.32-2.el9.x86_64", "Installed: python3-wcwidth-0.2.5-8.el9.noarch", "Installed: libluksmeta-10-1.el9.x86_64", "Installed: python3-pyparted-1:3.12.0-1.el9.x86_64", "Installed: lvm2-libs-9:2.03.32-2.el9.x86_64", "Installed: libblockdev-mdraid-2.28-16.el9.x86_64", "Installed: python3-into-dbus-python-0.8.2-1.el9.noarch", "Installed: kmod-kvdo-8.2.6.3-182.el9.x86_64", "Installed: python3-dbus-signature-pyparsing-0.4.1-1.el9.noarch", "Installed: libblockdev-mpath-2.28-16.el9.x86_64", "Installed: device-mapper-event-9:1.02.206-2.el9.x86_64", "Installed: volume_key-libs-0.3.12-16.el9.x86_64", "Installed: libblockdev-nvdimm-2.28-16.el9.x86_64", "Installed: device-mapper-event-libs-9:1.02.206-2.el9.x86_64", "Installed: cryptsetup-2.8.1-2.el9.x86_64", "Installed: vdo-8.2.2.2-1.el9.x86_64", "Installed: python3-blivet-1:3.6.0-29.el9.noarch", "Installed: mdadm-4.4-3.el9.x86_64", "Installed: libnvme-1.16.1-2.el9.x86_64", "Installed: python3-blockdev-2.28-16.el9.x86_64", "Installed: python3-justbases-0.15.2-1.el9.noarch", "Installed: python3-justbytes-0.15.2-1.el9.noarch", "Installed: device-mapper-multipath-0.8.7-39.el9.x86_64", "Installed: libblockdev-part-2.28-16.el9.x86_64", "Installed: python3-bytesize-2.5-3.el9.x86_64", "Installed: device-mapper-multipath-libs-0.8.7-39.el9.x86_64", "Installed: libblockdev-2.28-16.el9.x86_64", "Installed: device-mapper-persistent-data-1.1.0-1.el9.x86_64", "Installed: clevis-21-208.el9.x86_64", "Installed: libblockdev-swap-2.28-16.el9.x86_64", "Installed: cxl-libs-82-1.el9.x86_64", "Installed: libblockdev-crypto-2.28-16.el9.x86_64", "Installed: libjose-14-1.el9.x86_64", "Installed: python3-packaging-20.9-5.el9.noarch", "Installed: clevis-luks-21-208.el9.x86_64", "Installed: tpm2-tools-5.2-7.el9.x86_64", "Installed: libblockdev-dm-2.28-16.el9.x86_64", "Installed: libblockdev-utils-2.28-16.el9.x86_64", "Installed: python3-dbus-client-gen-0.5.1-1.el9.noarch", "Installed: python3-dbus-python-client-gen-0.8.3-1.el9.noarch", "Installed: ndctl-82-1.el9.x86_64", "Installed: luksmeta-10-1.el9.x86_64", "Installed: libblockdev-fs-2.28-16.el9.x86_64" ] }
lsrpackages: kmod-kvdo libblockdev libblockdev-crypto libblockdev-dm libblockdev-lvm libblockdev-mdraid libblockdev-swap python3-blivet stratis-cli stratisd vdo xfsprogs

TASK [fedora.linux_system_roles.storage : Show storage_pools] ******************
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:9
Saturday 13 December 2025 06:28:47 -0500 (0:00:32.643) 0:00:37.387 *****
ok: [managed-node1] => { "storage_pools | d([])": [ { "disks": [ "sda", "sdb" ], "grow_to_fill": true, "name": "rootvg", "volumes": [ { "mount_point": "/hpc-test1", "name": "rootlv", "size": "2G" }, { "mount_point": "/hpc-test2", "name": "usrlv", "size": "1G" } ] } ] }

TASK [fedora.linux_system_roles.storage : Show storage_volumes] ****************
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14
Saturday 13 December 2025 06:28:47 -0500 (0:00:00.039) 0:00:37.426 *****
ok: [managed-node1] => { "storage_volumes | d([])": [] }

TASK [fedora.linux_system_roles.storage : Get required packages] ***************
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19
Saturday 13 December 2025 06:28:47 -0500 (0:00:00.033) 0:00:37.460 *****
ok: [managed-node1] => { "actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": [ "lvm2" ], "pools": [], "volumes": [] }

TASK [fedora.linux_system_roles.storage : Enable copr repositories if needed] ***
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32
Saturday 13 December 2025 06:28:48 -0500 (0:00:01.001) 0:00:38.462 *****
included: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml for managed-node1
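Note: the storage_pools value printed by "Show storage_pools" above is the input this test hands to the storage role. A minimal sketch of that invocation as a playbook task (the include mechanism is inferred from the earlier "Prepare storage" include; the variable values are copied from the log):

- name: Prepare storage
  ansible.builtin.include_role:
    name: fedora.linux_system_roles.storage
  vars:
    storage_pools:
      - name: rootvg
        disks: "{{ unused_disks }}"  # resolved earlier to ['sda', 'sdb']
        grow_to_fill: true
        volumes:
          - name: rootlv
            size: "2G"
            mount_point: /hpc-test1
          - name: usrlv
            size: "1G"
            mount_point: /hpc-test2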
TASK [fedora.linux_system_roles.storage : Check if the COPR support packages should be installed] ***
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:2
Saturday 13 December 2025 06:28:48 -0500 (0:00:00.027) 0:00:38.489 *****
skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" }

TASK [fedora.linux_system_roles.storage : Make sure COPR support packages are present] ***
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:13
Saturday 13 December 2025 06:28:48 -0500 (0:00:00.029) 0:00:38.519 *****
skipping: [managed-node1] => { "changed": false, "false_condition": "install_copr | d(false) | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.storage : Enable COPRs] ************************
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:19
Saturday 13 December 2025 06:28:48 -0500 (0:00:00.031) 0:00:38.550 *****
skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" }

TASK [fedora.linux_system_roles.storage : Make sure required packages are installed] ***
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38
Saturday 13 December 2025 06:28:48 -0500 (0:00:00.029) 0:00:38.580 *****
ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] }
MSG: Nothing to do
lsrpackages: kpartx lvm2

TASK [fedora.linux_system_roles.storage : Get service facts] *******************
task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
Saturday 13 December 2025 06:28:49 -0500 (0:00:01.288) 0:00:39.868 *****
ok: [managed-node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "apt-daily.service": { "name": "apt-daily.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blivet.service": { "name": "blivet.service", "source": "systemd", "state": "inactive", "status": "static" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": {
"name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "cpupower.service": { "name": "cpupower.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": 
"dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "fcoe.service": { "name": "fcoe.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "inactive", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "iscsi-shutdown.service": { "name": "iscsi-shutdown.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsi.service": { "name": "iscsi.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsid.service": { "name": "iscsid.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm-devices-import.service": { "name": "lvm-devices-import.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "lvm2-activation-early.service": { "name": "lvm2-activation-early.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": 
"static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "mdadm-grow-continue@.service": { "name": "mdadm-grow-continue@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdadm-last-resort@.service": { "name": "mdadm-last-resort@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdcheck_continue.service": { "name": "mdcheck_continue.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdcheck_start.service": { "name": "mdcheck_start.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmon@.service": { "name": "mdmon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdmonitor-oneshot.service": { "name": "mdmonitor-oneshot.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmonitor.service": { "name": "mdmonitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "multipathd.service": { "name": "multipathd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "ndctl-monitor.service": { "name": "ndctl-monitor.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": 
"stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "oddjobd.service": { "name": "oddjobd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon.service": { "name": "quotaon.service", "source": "systemd", "state": "inactive", "status": "static" }, "raid-check.service": { "name": "raid-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "rbdmap.service": { "name": "rbdmap.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rdisc.service": { "name": "rdisc.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" 
}, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "stratis-fstab-setup@.service": { "name": "stratis-fstab-setup@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratisd-min-postinitrd.service": { "name": "stratisd-min-postinitrd.service", "source": "systemd", "state": "inactive", "status": "static" }, "stratisd.service": { "name": "stratisd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", 
"status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-resume@.service": { "name": "systemd-hibernate-resume@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { 
"name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck.service": { "name": "systemd-quotacheck.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles.service": { "name": "systemd-tmpfiles.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", 
"state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "target.service": { "name": "target.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "targetclid.service": { "name": "targetclid.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "teamd@.service": { "name": "teamd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "yppasswdd.service": { "name": "yppasswdd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypserv.service": { "name": "ypserv.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypxfrd.service": { "name": "ypxfrd.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.storage : Set storage_cryptsetup_services] ***** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58 Saturday 13 December 2025 06:28:53 -0500 (0:00:03.973) 0:00:43.841 ***** ok: [managed-node1] => { "ansible_facts": { "storage_cryptsetup_services": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Mask the systemd cryptsetup services] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:64 Saturday 13 December 2025 06:28:53 -0500 (0:00:00.047) 0:00:43.889 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70 Saturday 13 December 2025 06:28:53 -0500 (0:00:00.015) 0:00:43.905 ***** changed: [managed-node1] => { "actions": [ { "action": "create format", "device": "/dev/sdb", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sda", "fs_type": "lvmpv" }, { "action": "create device", "device": "/dev/rootvg", "fs_type": null }, { "action": "create device", "device": "/dev/mapper/rootvg-usrlv", "fs_type": 
null }, { "action": "create format", "device": "/dev/mapper/rootvg-usrlv", "fs_type": "xfs" }, { "action": "create device", "device": "/dev/mapper/rootvg-rootlv", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/rootvg-rootlv", "fs_type": "xfs" } ], "changed": true, "crypts": [], "leaves": [ "/dev/xvda1", "/dev/mapper/rootvg-rootlv", "/dev/mapper/rootvg-usrlv" ], "mounts": [ { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": "/hpc-test1", "src": "/dev/mapper/rootvg-rootlv", "state": "mounted" }, { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": "/hpc-test2", "src": "/dev/mapper/rootvg-usrlv", "state": "mounted" } ], "packages": [ "lvm2", "xfsprogs" ], "pools": [ { "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": true, "name": "rootvg", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/rootvg-rootlv", "_kernel_device": "/dev/dm-1", "_mount_id": "/dev/mapper/rootvg-rootlv", "_raw_device": "/dev/mapper/rootvg-rootlv", "_raw_kernel_device": "/dev/dm-1", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test1", "mount_user": null, "name": "rootlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "2G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/rootvg-usrlv", "_kernel_device": "/dev/dm-0", "_mount_id": "/dev/mapper/rootvg-usrlv", "_raw_device": "/dev/mapper/rootvg-usrlv", "_raw_kernel_device": "/dev/dm-0", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test2", "mount_user": null, "name": "usrlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "1G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", 
"vdo_pool_size": null } ] } ], "volumes": [] } TASK [fedora.linux_system_roles.storage : Workaround for udev issue on some platforms] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:85 Saturday 13 December 2025 06:28:56 -0500 (0:00:02.972) 0:00:46.877 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "storage_udevadm_trigger | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Check if /etc/fstab is present] ****** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92 Saturday 13 December 2025 06:28:56 -0500 (0:00:00.033) 0:00:46.911 ***** ok: [managed-node1] => { "changed": false, "stat": { "atime": 1765624882.119, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "8d3c587214ce2f8a5aa935bc649710346d11296a", "ctime": 1764330634.272, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 4194435, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1764330634.272, "nlink": 1, "path": "/etc/fstab", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 1344, "uid": 0, "version": "496830786", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Add fingerprint to /etc/fstab if present] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:97 Saturday 13 December 2025 06:28:57 -0500 (0:00:00.359) 0:00:47.270 ***** changed: [managed-node1] => { "backup": "", "changed": true } MSG: line added TASK [fedora.linux_system_roles.storage : Unmask the systemd cryptsetup services] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:115 Saturday 13 December 2025 06:28:57 -0500 (0:00:00.437) 0:00:47.708 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Show blivet_output] ****************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:121 Saturday 13 December 2025 06:28:57 -0500 (0:00:00.016) 0:00:47.724 ***** ok: [managed-node1] => { "blivet_output": { "actions": [ { "action": "create format", "device": "/dev/sdb", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sda", "fs_type": "lvmpv" }, { "action": "create device", "device": "/dev/rootvg", "fs_type": null }, { "action": "create device", "device": "/dev/mapper/rootvg-usrlv", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/rootvg-usrlv", "fs_type": "xfs" }, { "action": "create device", "device": "/dev/mapper/rootvg-rootlv", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/rootvg-rootlv", "fs_type": "xfs" } ], "changed": true, "crypts": [], "failed": false, "leaves": [ "/dev/xvda1", "/dev/mapper/rootvg-rootlv", "/dev/mapper/rootvg-usrlv" ], "mounts": [ { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": "/hpc-test1", "src": "/dev/mapper/rootvg-rootlv", "state": 
"mounted" }, { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": "/hpc-test2", "src": "/dev/mapper/rootvg-usrlv", "state": "mounted" } ], "packages": [ "lvm2", "xfsprogs" ], "pools": [ { "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": true, "name": "rootvg", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/rootvg-rootlv", "_kernel_device": "/dev/dm-1", "_mount_id": "/dev/mapper/rootvg-rootlv", "_raw_device": "/dev/mapper/rootvg-rootlv", "_raw_kernel_device": "/dev/dm-1", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test1", "mount_user": null, "name": "rootlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "2G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/rootvg-usrlv", "_kernel_device": "/dev/dm-0", "_mount_id": "/dev/mapper/rootvg-usrlv", "_raw_device": "/dev/mapper/rootvg-usrlv", "_raw_kernel_device": "/dev/dm-0", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test2", "mount_user": null, "name": "usrlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "1G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] } ], "volumes": [] } } TASK [fedora.linux_system_roles.storage : Set the list of pools for test verification] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:130 Saturday 13 December 2025 06:28:57 -0500 (0:00:00.025) 0:00:47.749 ***** ok: [managed-node1] => { "ansible_facts": { "_storage_pools_list": [ { "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, 
"encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": true, "name": "rootvg", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/rootvg-rootlv", "_kernel_device": "/dev/dm-1", "_mount_id": "/dev/mapper/rootvg-rootlv", "_raw_device": "/dev/mapper/rootvg-rootlv", "_raw_kernel_device": "/dev/dm-1", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test1", "mount_user": null, "name": "rootlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "2G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/rootvg-usrlv", "_kernel_device": "/dev/dm-0", "_mount_id": "/dev/mapper/rootvg-usrlv", "_raw_device": "/dev/mapper/rootvg-usrlv", "_raw_kernel_device": "/dev/dm-0", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test2", "mount_user": null, "name": "usrlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "1G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] } ] }, "changed": false } TASK [fedora.linux_system_roles.storage : Set the list of volumes for test verification] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:134 Saturday 13 December 2025 06:28:57 -0500 (0:00:00.022) 0:00:47.772 ***** ok: [managed-node1] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Remove obsolete mounts] ************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:150 Saturday 13 December 2025 06:28:57 -0500 (0:00:00.018) 0:00:47.790 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: 
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:161 Saturday 13 December 2025 06:28:57 -0500 (0:00:00.035) 0:00:47.826 ***** ok: [managed-node1] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.storage : Set up new/current mounts] *********** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:166 Saturday 13 December 2025 06:28:58 -0500 (0:00:00.866) 0:00:48.692 ***** redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount changed: [managed-node1] => (item={'src': '/dev/mapper/rootvg-rootlv', 'path': '/hpc-test1', 'fstype': 'xfs', 'opts': 'defaults', 'dump': 0, 'passno': 0, 'state': 'mounted', 'owner': None, 'group': None, 'mode': None}) => { "ansible_loop_var": "mount_info", "backup_file": "", "boot": "yes", "changed": true, "dump": "0", "fstab": "/etc/fstab", "fstype": "xfs", "mount_info": { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": "/hpc-test1", "src": "/dev/mapper/rootvg-rootlv", "state": "mounted" }, "name": "/hpc-test1", "opts": "defaults", "passno": "0", "src": "/dev/mapper/rootvg-rootlv" } redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount changed: [managed-node1] => (item={'src': '/dev/mapper/rootvg-usrlv', 'path': '/hpc-test2', 'fstype': 'xfs', 'opts': 'defaults', 'dump': 0, 'passno': 0, 'state': 'mounted', 'owner': None, 'group': None, 'mode': None}) => { "ansible_loop_var": "mount_info", "backup_file": "", "boot": "yes", "changed": true, "dump": "0", "fstab": "/etc/fstab", "fstype": "xfs", "mount_info": { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": "/hpc-test2", "src": "/dev/mapper/rootvg-usrlv", "state": "mounted" }, "name": "/hpc-test2", "opts": "defaults", "passno": "0", "src": "/dev/mapper/rootvg-usrlv" } TASK [fedora.linux_system_roles.storage : Manage mount ownership/permissions] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:177 Saturday 13 December 2025 06:28:59 -0500 (0:00:00.829) 0:00:49.522 ***** skipping: [managed-node1] => (item={'src': '/dev/mapper/rootvg-rootlv', 'path': '/hpc-test1', 'fstype': 'xfs', 'opts': 'defaults', 'dump': 0, 'passno': 0, 'state': 'mounted', 'owner': None, 'group': None, 'mode': None}) => { "ansible_loop_var": "mount_info", "changed": false, "false_condition": "mount_info['owner'] != none or mount_info['group'] != none or mount_info['mode'] != none", "mount_info": { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": "/hpc-test1", "src": "/dev/mapper/rootvg-rootlv", "state": "mounted" }, "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item={'src': '/dev/mapper/rootvg-usrlv', 'path': '/hpc-test2', 'fstype': 'xfs', 'opts': 'defaults', 'dump': 0, 'passno': 0, 'state': 'mounted', 'owner': None, 'group': None, 'mode': None}) => { "ansible_loop_var": "mount_info", "changed": false, "false_condition": "mount_info['owner'] != none or mount_info['group'] != none or mount_info['mode'] != none", "mount_info": { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": 
"/hpc-test2", "src": "/dev/mapper/rootvg-usrlv", "state": "mounted" }, "skip_reason": "Conditional result was False" } skipping: [managed-node1] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:189 Saturday 13 December 2025 06:28:59 -0500 (0:00:00.050) 0:00:49.573 ***** ok: [managed-node1] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197 Saturday 13 December 2025 06:29:00 -0500 (0:00:00.681) 0:00:50.254 ***** ok: [managed-node1] => { "changed": false, "stat": { "atime": 1765625057.9956648, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1764328113.166, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 4194436, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0600", "mtime": 1764327821.524, "nlink": 1, "path": "/etc/crypttab", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 0, "uid": 0, "version": "3963487230", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Manage /etc/crypttab to account for changes we just made] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:202 Saturday 13 December 2025 06:29:00 -0500 (0:00:00.351) 0:00:50.605 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Update facts] ************************ task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224 Saturday 13 December 2025 06:29:00 -0500 (0:00:00.015) 0:00:50.620 ***** ok: [managed-node1] TASK [Run the role] ************************************************************ task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:84 Saturday 13 December 2025 06:29:01 -0500 (0:00:00.867) 0:00:51.488 ***** included: fedora.linux_system_roles.hpc for managed-node1 TASK [fedora.linux_system_roles.hpc : Set platform/version specific variables] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:3 Saturday 13 December 2025 06:29:01 -0500 (0:00:00.108) 0:00:51.596 ***** included: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml for managed-node1 TASK [fedora.linux_system_roles.hpc : Ensure ansible_facts used by role] ******* task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:2 Saturday 13 December 2025 06:29:01 -0500 (0:00:00.028) 0:00:51.624 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "__hpc_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK 
[fedora.linux_system_roles.hpc : Check if system is ostree] *************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:10 Saturday 13 December 2025 06:29:01 -0500 (0:00:00.039) 0:00:51.664 ***** ok: [managed-node1] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.hpc : Set flag to indicate system is ostree] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:15 Saturday 13 December 2025 06:29:01 -0500 (0:00:00.349) 0:00:52.014 ***** ok: [managed-node1] => { "ansible_facts": { "__hpc_is_ostree": false }, "changed": false } TASK [fedora.linux_system_roles.hpc : Set platform/version specific variables] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/set_vars.yml:19 Saturday 13 December 2025 06:29:01 -0500 (0:00:00.025) 0:00:52.040 ***** skipping: [managed-node1] => (item=RedHat.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "RedHat.yml", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False" } ok: [managed-node1] => (item=CentOS_9.yml) => { "ansible_facts": { "__hpc_cuda_toolkit_packages": [ "cuda-toolkit-12-9" ], "__hpc_gdrcopy_info": { "distribution": "el9", "name": "gdrcopy", "sha256": "d6c61358adb52d9a9b71d4e05cb6ac9288ac18274e8d8177a6f197f0ee006130", "url": "https://github.com/NVIDIA/gdrcopy/archive/0f7366e73b019e7facf907381f6b0b2f5a1576e4.tar.gz", "version": "2.5.1-1" }, "__hpc_hpcx_info": { "name": "hpcx", "sha256": "92f746dd8cf293cf5b3955a0addd92e162dd012e1f8f728983a85c6c134e33b0", "url": "https://content.mellanox.com/hpc/hpc-x/v2.24.1_cuda12/hpcx-v2.24.1-gcc-inbox-redhat9-cuda12-x86_64.tbz", "version": "2.24.1" }, "__hpc_microsoft_prod_repo": { "baseurl": "https://packages.microsoft.com/rhel/9/prod/", "description": "Microsoft Production repository", "key": "https://packages.microsoft.com/keys/microsoft.asc", "name": "microsoft-prod" }, "__hpc_nvidia_cuda_repo": { "baseurl": "https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64", "description": "NVIDIA CUDA repository", "key": "https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/D42D0685.pub", "name": "nvidia-cuda" }, "__hpc_nvidia_driver_module": "nvidia-driver:575-dkms", "__hpc_nvidia_nccl_packages": [ "libnccl-2.27.5-1+cuda12.9", "libnccl-devel-2.27.5-1+cuda12.9" ], "__hpc_openmpi_info": { "name": "openmpi", "sha256": "53131e1a57e7270f645707f8b0b65ba56048f5b5ac3f68faabed3eb0d710e449", "url": "https://download.open-mpi.org/release/open-mpi/v5.0/openmpi-5.0.8.tar.bz2", "version": "5.0.8" }, "__hpc_pmix_info": { "name": "pmix", "sha256": "6b11f4fd5c9d7f8e55fc6ebdee9af04b839f44d06044e58cea38c87c168784b3", "url": "https://github.com/openpmix/openpmix/releases/download/v4.2.9/pmix-4.2.9.tar.bz2", "version": "4.2.9" }, "__hpc_rhel_epel_repo": { "description": "RHEL EPEL repository", "key": "https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9", "name": "RHEL EPEL repository", "rpm": "https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm" }, "__hpc_rhui_azure_rhel_9_eus_repo": { "baseurl": "https://rhui4-1.microsoft.com/pulp/repos/unprotected/microsoft-azure-rhel9-eus", "description": "Microsoft Azure RPMs for Red Hat 
Enterprise Linux 9 EUS", "key": "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-microsoft-azure-release", "name": "rhui-microsoft-azure-rhel9-eus" } }, "ansible_included_var_files": [ "/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" } ok: [managed-node1] => (item=CentOS_9.yml) => { "ansible_facts": { "__hpc_cuda_toolkit_packages": [ "cuda-toolkit-12-9" ], "__hpc_gdrcopy_info": { "distribution": "el9", "name": "gdrcopy", "sha256": "d6c61358adb52d9a9b71d4e05cb6ac9288ac18274e8d8177a6f197f0ee006130", "url": "https://github.com/NVIDIA/gdrcopy/archive/0f7366e73b019e7facf907381f6b0b2f5a1576e4.tar.gz", "version": "2.5.1-1" }, "__hpc_hpcx_info": { "name": "hpcx", "sha256": "92f746dd8cf293cf5b3955a0addd92e162dd012e1f8f728983a85c6c134e33b0", "url": "https://content.mellanox.com/hpc/hpc-x/v2.24.1_cuda12/hpcx-v2.24.1-gcc-inbox-redhat9-cuda12-x86_64.tbz", "version": "2.24.1" }, "__hpc_microsoft_prod_repo": { "baseurl": "https://packages.microsoft.com/rhel/9/prod/", "description": "Microsoft Production repository", "key": "https://packages.microsoft.com/keys/microsoft.asc", "name": "microsoft-prod" }, "__hpc_nvidia_cuda_repo": { "baseurl": "https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64", "description": "NVIDIA CUDA repository", "key": "https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/D42D0685.pub", "name": "nvidia-cuda" }, "__hpc_nvidia_driver_module": "nvidia-driver:575-dkms", "__hpc_nvidia_nccl_packages": [ "libnccl-2.27.5-1+cuda12.9", "libnccl-devel-2.27.5-1+cuda12.9" ], "__hpc_openmpi_info": { "name": "openmpi", "sha256": "53131e1a57e7270f645707f8b0b65ba56048f5b5ac3f68faabed3eb0d710e449", "url": "https://download.open-mpi.org/release/open-mpi/v5.0/openmpi-5.0.8.tar.bz2", "version": "5.0.8" }, "__hpc_pmix_info": { "name": "pmix", "sha256": "6b11f4fd5c9d7f8e55fc6ebdee9af04b839f44d06044e58cea38c87c168784b3", "url": "https://github.com/openpmix/openpmix/releases/download/v4.2.9/pmix-4.2.9.tar.bz2", "version": "4.2.9" }, "__hpc_rhel_epel_repo": { "description": "RHEL EPEL repository", "key": "https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9", "name": "RHEL EPEL repository", "rpm": "https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm" }, "__hpc_rhui_azure_rhel_9_eus_repo": { "baseurl": "https://rhui4-1.microsoft.com/pulp/repos/unprotected/microsoft-azure-rhel9-eus", "description": "Microsoft Azure RPMs for Red Hat Enterprise Linux 9 EUS", "key": "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-microsoft-azure-release", "name": "rhui-microsoft-azure-rhel9-eus" } }, "ansible_included_var_files": [ "/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" } TASK [fedora.linux_system_roles.hpc : Fail on unsupported architectures] ******* task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:6 Saturday 13 December 2025 06:29:01 -0500 (0:00:00.050) 0:00:52.091 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "ansible_architecture != 'x86_64'", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Fail if role installs openmpi without cuda toolkit] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:14 Saturday 13 December 2025 06:29:01 -0500 (0:00:00.017) 0:00:52.108 ***** 
skipping: [managed-node1] => { "changed": false, "false_condition": "hpc_build_openmpi_w_nvidia_gpu_support", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Deploy GPG keys for repositories] ******** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:27 Saturday 13 December 2025 06:29:01 -0500 (0:00:00.016) 0:00:52.125 ***** ok: [managed-node1] => (item={'name': 'RHEL EPEL repository', 'description': 'RHEL EPEL repository', 'key': 'https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9', 'rpm': 'https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm'}) => { "ansible_loop_var": "item", "changed": false, "item": { "description": "RHEL EPEL repository", "key": "https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9", "name": "RHEL EPEL repository", "rpm": "https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm" } } ok: [managed-node1] => (item={'name': 'nvidia-cuda', 'description': 'NVIDIA CUDA repository', 'key': 'https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/D42D0685.pub', 'baseurl': 'https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64'}) => { "ansible_loop_var": "item", "changed": false, "item": { "baseurl": "https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64", "description": "NVIDIA CUDA repository", "key": "https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/D42D0685.pub", "name": "nvidia-cuda" } } ok: [managed-node1] => (item={'name': 'microsoft-prod', 'description': 'Microsoft Production repository', 'key': 'https://packages.microsoft.com/keys/microsoft.asc', 'baseurl': 'https://packages.microsoft.com/rhel/9/prod/'}) => { "ansible_loop_var": "item", "changed": false, "item": { "baseurl": "https://packages.microsoft.com/rhel/9/prod/", "description": "Microsoft Production repository", "key": "https://packages.microsoft.com/keys/microsoft.asc", "name": "microsoft-prod" } } TASK [fedora.linux_system_roles.hpc : Install EPEL release package] ************ task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:37 Saturday 13 December 2025 06:29:03 -0500 (0:00:01.526) 0:00:53.651 ***** ok: [managed-node1] => { "changed": false, "rc": 0, "results": [ "Installed /root/.ansible/tmp/ansible-tmp-1765625343.5108445-10505-166299159647803/epel-release-latest-9.noarchaao1fecr.rpm" ] } MSG: Nothing to do lsrpackages: https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm TASK [fedora.linux_system_roles.hpc : Configure repositories] ****************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:43 Saturday 13 December 2025 06:29:05 -0500 (0:00:01.605) 0:00:55.257 ***** redirecting (type: action) ansible.builtin.yum to ansible.builtin.dnf ok: [managed-node1] => (item={'name': 'nvidia-cuda', 'description': 'NVIDIA CUDA repository', 'key': 'https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/D42D0685.pub', 'baseurl': 'https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64'}) => { "ansible_loop_var": "item", "changed": false, "item": { "baseurl": "https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64", "description": "NVIDIA CUDA repository", "key": "https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/D42D0685.pub", "name": "nvidia-cuda" }, "repo": "nvidia-cuda", "state": "present" } redirecting (type: action) 
ansible.builtin.yum to ansible.builtin.dnf ok: [managed-node1] => (item={'name': 'microsoft-prod', 'description': 'Microsoft Production repository', 'key': 'https://packages.microsoft.com/keys/microsoft.asc', 'baseurl': 'https://packages.microsoft.com/rhel/9/prod/'}) => { "ansible_loop_var": "item", "changed": false, "item": { "baseurl": "https://packages.microsoft.com/rhel/9/prod/", "description": "Microsoft Production repository", "key": "https://packages.microsoft.com/keys/microsoft.asc", "name": "microsoft-prod" }, "repo": "microsoft-prod", "state": "present" } TASK [fedora.linux_system_roles.hpc : Get list of installed repositories] ****** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:56 Saturday 13 December 2025 06:29:05 -0500 (0:00:00.773) 0:00:56.030 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "hpc_enable_eus_repo", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Ensure that the non-EUS RHUI Azure repository is not installed] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:61 Saturday 13 December 2025 06:29:05 -0500 (0:00:00.017) 0:00:56.048 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "hpc_enable_eus_repo", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Create a temp file for the EUS repository configuration] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:72 Saturday 13 December 2025 06:29:05 -0500 (0:00:00.016) 0:00:56.065 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "hpc_enable_eus_repo", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Generate the repository configuration template] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:79 Saturday 13 December 2025 06:29:05 -0500 (0:00:00.016) 0:00:56.082 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "hpc_enable_eus_repo", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Add EUS repository] ********************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:87 Saturday 13 December 2025 06:29:05 -0500 (0:00:00.016) 0:00:56.098 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "hpc_enable_eus_repo", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Lock the RHEL minor release to the current minor release] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:94 Saturday 13 December 2025 06:29:05 -0500 (0:00:00.016) 0:00:56.114 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "hpc_enable_eus_repo", "skip_reason": "Conditional result was False" } TASK [Configure firewall to use trusted zone as default] *********************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:102 Saturday 13 December 2025 06:29:05 -0500 (0:00:00.017) 0:00:56.132 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "hpc_manage_firewall", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Install lvm2 to get lvs command] ********* task path: 
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:114 Saturday 13 December 2025 06:29:05 -0500 (0:00:00.035) 0:00:56.168 ***** ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: lvm2 TASK [fedora.linux_system_roles.hpc : Get current LV size of rootlv] *********** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:120 Saturday 13 December 2025 06:29:07 -0500 (0:00:01.319) 0:00:57.488 ***** ok: [managed-node1] => { "changed": false, "cmd": [ "lvs", "--noheadings", "--units", "g", "--nosuffix", "-o", "lv_size", "/dev/mapper/rootvg-rootlv" ], "delta": "0:00:00.030844", "end": "2025-12-13 06:29:07.690052", "rc": 0, "start": "2025-12-13 06:29:07.659208" } STDOUT: 2.00 TASK [fedora.linux_system_roles.hpc : Get current LV size of usrlv] ************ task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:127 Saturday 13 December 2025 06:29:07 -0500 (0:00:00.460) 0:00:57.948 ***** ok: [managed-node1] => { "changed": false, "cmd": [ "lvs", "--noheadings", "--units", "g", "--nosuffix", "-o", "lv_size", "/dev/mapper/rootvg-usrlv" ], "delta": "0:00:00.028380", "end": "2025-12-13 06:29:08.061883", "rc": 0, "start": "2025-12-13 06:29:08.033503" } STDOUT: 1.00 TASK [Configure storage] ******************************************************* task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:134 Saturday 13 December 2025 06:29:08 -0500 (0:00:00.369) 0:00:58.318 ***** included: fedora.linux_system_roles.storage for managed-node1 TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2 Saturday 13 December 2025 06:29:08 -0500 (0:00:00.032) 0:00:58.350 ***** included: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed-node1 TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2 Saturday 13 December 2025 06:29:08 -0500 (0:00:00.027) 0:00:58.378 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "__storage_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7 Saturday 13 December 2025 06:29:08 -0500 (0:00:00.038) 0:00:58.417 ***** skipping: [managed-node1] => (item=RedHat.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "RedHat.yml", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False" } ok: [managed-node1] => (item=CentOS_9.yml) => { "ansible_facts": { "blivet_package_list": [ "python3-blivet", "libblockdev-crypto", "libblockdev-dm", "libblockdev-lvm", "libblockdev-mdraid", "libblockdev-swap", "vdo", "kmod-kvdo", "xfsprogs", "stratisd", "stratis-cli", "{{ 'libblockdev-s390' if ansible_architecture == 
's390x' else 'libblockdev' }}" ] }, "ansible_included_var_files": [ "/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" } ok: [managed-node1] => (item=CentOS_9.yml) => { "ansible_facts": { "blivet_package_list": [ "python3-blivet", "libblockdev-crypto", "libblockdev-dm", "libblockdev-lvm", "libblockdev-mdraid", "libblockdev-swap", "vdo", "kmod-kvdo", "xfsprogs", "stratisd", "stratis-cli", "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}" ] }, "ansible_included_var_files": [ "/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" } TASK [fedora.linux_system_roles.storage : Check if system is ostree] *********** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25 Saturday 13 December 2025 06:29:08 -0500 (0:00:00.049) 0:00:58.466 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "not __storage_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30 Saturday 13 December 2025 06:29:08 -0500 (0:00:00.021) 0:00:58.487 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "not __storage_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:5 Saturday 13 December 2025 06:29:08 -0500 (0:00:00.022) 0:00:58.510 ***** ok: [managed-node1] => { "ansible_facts": { "_storage_pools_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:9 Saturday 13 December 2025 06:29:08 -0500 (0:00:00.020) 0:00:58.530 ***** ok: [managed-node1] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13 Saturday 13 December 2025 06:29:08 -0500 (0:00:00.020) 0:00:58.550 ***** redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount included: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed-node1 TASK [fedora.linux_system_roles.storage : Make sure blivet is available] ******* task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2 Saturday 13 December 2025 06:29:08 -0500 (0:00:00.047) 0:00:58.597 ***** ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: kmod-kvdo libblockdev libblockdev-crypto libblockdev-dm libblockdev-lvm libblockdev-mdraid libblockdev-swap python3-blivet stratis-cli stratisd vdo xfsprogs TASK [fedora.linux_system_roles.storage : Show storage_pools] 
****************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:9 Saturday 13 December 2025 06:29:09 -0500 (0:00:01.270) 0:00:59.868 ***** ok: [managed-node1] => { "storage_pools | d([])": [ { "grow_to_fill": true, "name": "rootvg", "volumes": [ { "mount_point": "/hpc-test1", "name": "rootlv", "size": "2G" }, { "mount_point": "/hpc-test2", "name": "usrlv", "size": "2G" } ] } ] } TASK [fedora.linux_system_roles.storage : Show storage_volumes] **************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14 Saturday 13 December 2025 06:29:09 -0500 (0:00:00.051) 0:00:59.919 ***** ok: [managed-node1] => { "storage_volumes | d([])": [] } TASK [fedora.linux_system_roles.storage : Get required packages] *************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19 Saturday 13 December 2025 06:29:09 -0500 (0:00:00.039) 0:00:59.958 ***** ok: [managed-node1] => { "actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": [ "lvm2" ], "pools": [], "volumes": [] } TASK [fedora.linux_system_roles.storage : Enable copr repositories if needed] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32 Saturday 13 December 2025 06:29:10 -0500 (0:00:01.229) 0:01:01.188 ***** included: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml for managed-node1 TASK [fedora.linux_system_roles.storage : Check if the COPR support packages should be installed] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:2 Saturday 13 December 2025 06:29:11 -0500 (0:00:00.036) 0:01:01.225 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Make sure COPR support packages are present] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:13 Saturday 13 December 2025 06:29:11 -0500 (0:00:00.036) 0:01:01.262 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "install_copr | d(false) | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Enable COPRs] ************************ task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:19 Saturday 13 December 2025 06:29:11 -0500 (0:00:00.036) 0:01:01.299 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Make sure required packages are installed] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38 Saturday 13 December 2025 06:29:11 -0500 (0:00:00.034) 0:01:01.333 ***** ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: kpartx lvm2 TASK [fedora.linux_system_roles.storage : Get service facts] ******************* task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52 Saturday 13 December 2025 06:29:12 -0500 (0:00:01.309) 0:01:02.642 ***** ok: [managed-node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": 
"NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "apt-daily.service": { "name": "apt-daily.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blivet.service": { "name": "blivet.service", "source": "systemd", "state": "inactive", "status": "static" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "cpupower.service": { "name": "cpupower.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": 
"inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "fcoe.service": { "name": "fcoe.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "inactive", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, 
"initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "iscsi-shutdown.service": { "name": "iscsi-shutdown.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsi.service": { "name": "iscsi.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsid.service": { "name": "iscsid.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm-devices-import.service": { "name": "lvm-devices-import.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "lvm2-activation-early.service": { "name": "lvm2-activation-early.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "mdadm-grow-continue@.service": { "name": "mdadm-grow-continue@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdadm-last-resort@.service": { "name": "mdadm-last-resort@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdcheck_continue.service": { "name": "mdcheck_continue.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdcheck_start.service": { "name": "mdcheck_start.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmon@.service": { "name": "mdmon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdmonitor-oneshot.service": { "name": "mdmonitor-oneshot.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmonitor.service": { "name": "mdmonitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": 
"modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "multipathd.service": { "name": "multipathd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "ndctl-monitor.service": { "name": "ndctl-monitor.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "oddjobd.service": { "name": "oddjobd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon.service": { "name": "quotaon.service", "source": "systemd", "state": "inactive", "status": "static" }, "raid-check.service": { "name": "raid-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "rbdmap.service": { "name": "rbdmap.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rdisc.service": { "name": "rdisc.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": 
"rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "stratis-fstab-setup@.service": { "name": "stratis-fstab-setup@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratisd-min-postinitrd.service": { "name": "stratisd-min-postinitrd.service", "source": "systemd", "state": "inactive", "status": "static" }, 
"stratisd.service": { "name": "stratisd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-resume@.service": { "name": "systemd-hibernate-resume@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": 
"systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck.service": { "name": "systemd-quotacheck.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": 
"stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles.service": { "name": "systemd-tmpfiles.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "target.service": { "name": "target.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "targetclid.service": { "name": "targetclid.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "teamd@.service": { "name": "teamd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", 
"status": "not-found" }, "yppasswdd.service": { "name": "yppasswdd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypserv.service": { "name": "ypserv.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypxfrd.service": { "name": "ypxfrd.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.storage : Set storage_cryptsetup_services] ***** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58 Saturday 13 December 2025 06:29:14 -0500 (0:00:01.742) 0:01:04.385 ***** ok: [managed-node1] => { "ansible_facts": { "storage_cryptsetup_services": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Mask the systemd cryptsetup services] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:64 Saturday 13 December 2025 06:29:14 -0500 (0:00:00.052) 0:01:04.437 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70 Saturday 13 December 2025 06:29:14 -0500 (0:00:00.016) 0:01:04.454 ***** changed: [managed-node1] => { "actions": [ { "action": "resize device", "device": "/dev/mapper/rootvg-usrlv", "fs_type": null }, { "action": "resize format", "device": "/dev/mapper/rootvg-usrlv", "fs_type": "xfs" } ], "changed": true, "crypts": [], "leaves": [ "/dev/mapper/rootvg-rootlv", "/dev/mapper/rootvg-usrlv", "/dev/xvda1" ], "mounts": [ { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": "/hpc-test1", "src": "/dev/mapper/rootvg-rootlv", "state": "mounted" }, { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": "/hpc-test2", "src": "/dev/mapper/rootvg-usrlv", "state": "mounted" } ], "packages": [ "xfsprogs", "lvm2" ], "pools": [ { "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": true, "name": "rootvg", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/rootvg-rootlv", "_kernel_device": "/dev/dm-1", "_mount_id": "/dev/mapper/rootvg-rootlv", "_raw_device": "/dev/mapper/rootvg-rootlv", "_raw_kernel_device": "/dev/dm-1", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test1", "mount_user": null, "name": "rootlv", "part_type": null, "raid_chunk_size": 
null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "2G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/rootvg-usrlv", "_kernel_device": "/dev/dm-0", "_mount_id": "/dev/mapper/rootvg-usrlv", "_raw_device": "/dev/mapper/rootvg-usrlv", "_raw_kernel_device": "/dev/dm-0", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test2", "mount_user": null, "name": "usrlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "2G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] } ], "volumes": [] } TASK [fedora.linux_system_roles.storage : Workaround for udev issue on some platforms] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:85 Saturday 13 December 2025 06:29:15 -0500 (0:00:01.666) 0:01:06.120 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "storage_udevadm_trigger | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Check if /etc/fstab is present] ****** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92 Saturday 13 December 2025 06:29:15 -0500 (0:00:00.039) 0:01:06.160 ***** ok: [managed-node1] => { "changed": false, "stat": { "atime": 1765625339.2583668, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "9377586f1d232bf923bd728d683ed25ea92a0654", "ctime": 1765625339.2543669, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 71305522, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1765625339.2543669, "nlink": 1, "path": "/etc/fstab", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 1473, "uid": 0, "version": "84410504", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Add fingerprint to /etc/fstab if present] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:97 Saturday 13 December 2025 06:29:16 -0500 (0:00:00.368) 0:01:06.529 ***** ok: [managed-node1] => { "backup": "", "changed": false } TASK [fedora.linux_system_roles.storage : Unmask the systemd cryptsetup services] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:115 
Saturday 13 December 2025 06:29:16 -0500 (0:00:00.357) 0:01:06.886 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Show blivet_output] ****************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:121 Saturday 13 December 2025 06:29:16 -0500 (0:00:00.016) 0:01:06.903 ***** ok: [managed-node1] => { "blivet_output": { "actions": [ { "action": "resize device", "device": "/dev/mapper/rootvg-usrlv", "fs_type": null }, { "action": "resize format", "device": "/dev/mapper/rootvg-usrlv", "fs_type": "xfs" } ], "changed": true, "crypts": [], "failed": false, "leaves": [ "/dev/mapper/rootvg-rootlv", "/dev/mapper/rootvg-usrlv", "/dev/xvda1" ], "mounts": [ { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": "/hpc-test1", "src": "/dev/mapper/rootvg-rootlv", "state": "mounted" }, { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": "/hpc-test2", "src": "/dev/mapper/rootvg-usrlv", "state": "mounted" } ], "packages": [ "xfsprogs", "lvm2" ], "pools": [ { "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": true, "name": "rootvg", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/rootvg-rootlv", "_kernel_device": "/dev/dm-1", "_mount_id": "/dev/mapper/rootvg-rootlv", "_raw_device": "/dev/mapper/rootvg-rootlv", "_raw_kernel_device": "/dev/dm-1", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test1", "mount_user": null, "name": "rootlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "2G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/rootvg-usrlv", "_kernel_device": "/dev/dm-0", "_mount_id": "/dev/mapper/rootvg-usrlv", "_raw_device": "/dev/mapper/rootvg-usrlv", "_raw_kernel_device": "/dev/dm-0", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, 
"mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test2", "mount_user": null, "name": "usrlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "2G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] } ], "volumes": [] } } TASK [fedora.linux_system_roles.storage : Set the list of pools for test verification] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:130 Saturday 13 December 2025 06:29:16 -0500 (0:00:00.026) 0:01:06.929 ***** ok: [managed-node1] => { "ansible_facts": { "_storage_pools_list": [ { "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": true, "name": "rootvg", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/rootvg-rootlv", "_kernel_device": "/dev/dm-1", "_mount_id": "/dev/mapper/rootvg-rootlv", "_raw_device": "/dev/mapper/rootvg-rootlv", "_raw_kernel_device": "/dev/dm-1", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test1", "mount_user": null, "name": "rootlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "2G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/rootvg-usrlv", "_kernel_device": "/dev/dm-0", "_mount_id": "/dev/mapper/rootvg-usrlv", "_raw_device": "/dev/mapper/rootvg-usrlv", "_raw_kernel_device": "/dev/dm-0", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test2", "mount_user": null, "name": "usrlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "2G", "state": "present", "thin": false, "thin_pool_name": null, 
"thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] } ] }, "changed": false } TASK [fedora.linux_system_roles.storage : Set the list of volumes for test verification] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:134 Saturday 13 December 2025 06:29:16 -0500 (0:00:00.025) 0:01:06.954 ***** ok: [managed-node1] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Remove obsolete mounts] ************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:150 Saturday 13 December 2025 06:29:16 -0500 (0:00:00.022) 0:01:06.977 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:161 Saturday 13 December 2025 06:29:16 -0500 (0:00:00.039) 0:01:07.016 ***** ok: [managed-node1] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.storage : Set up new/current mounts] *********** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:166 Saturday 13 December 2025 06:29:17 -0500 (0:00:00.685) 0:01:07.702 ***** redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount ok: [managed-node1] => (item={'src': '/dev/mapper/rootvg-rootlv', 'path': '/hpc-test1', 'fstype': 'xfs', 'opts': 'defaults', 'dump': 0, 'passno': 0, 'state': 'mounted', 'owner': None, 'group': None, 'mode': None}) => { "ansible_loop_var": "mount_info", "backup_file": "", "boot": "yes", "changed": false, "dump": "0", "fstab": "/etc/fstab", "fstype": "xfs", "mount_info": { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": "/hpc-test1", "src": "/dev/mapper/rootvg-rootlv", "state": "mounted" }, "name": "/hpc-test1", "opts": "defaults", "passno": "0", "src": "/dev/mapper/rootvg-rootlv" } redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount ok: [managed-node1] => (item={'src': '/dev/mapper/rootvg-usrlv', 'path': '/hpc-test2', 'fstype': 'xfs', 'opts': 'defaults', 'dump': 0, 'passno': 0, 'state': 'mounted', 'owner': None, 'group': None, 'mode': None}) => { "ansible_loop_var": "mount_info", "backup_file": "", "boot": "yes", "changed": false, "dump": "0", "fstab": "/etc/fstab", "fstype": "xfs", "mount_info": { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": "/hpc-test2", "src": "/dev/mapper/rootvg-usrlv", "state": "mounted" }, "name": "/hpc-test2", "opts": "defaults", "passno": "0", "src": "/dev/mapper/rootvg-usrlv" } TASK [fedora.linux_system_roles.storage : Manage mount ownership/permissions] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:177 Saturday 13 December 2025 06:29:18 -0500 (0:00:00.715) 0:01:08.417 ***** skipping: [managed-node1] => (item={'src': '/dev/mapper/rootvg-rootlv', 'path': '/hpc-test1', 'fstype': 'xfs', 'opts': 'defaults', 'dump': 0, 'passno': 0, 'state': 'mounted', 'owner': None, 'group': None, 'mode': None}) => { 
"ansible_loop_var": "mount_info", "changed": false, "false_condition": "mount_info['owner'] != none or mount_info['group'] != none or mount_info['mode'] != none", "mount_info": { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": "/hpc-test1", "src": "/dev/mapper/rootvg-rootlv", "state": "mounted" }, "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item={'src': '/dev/mapper/rootvg-usrlv', 'path': '/hpc-test2', 'fstype': 'xfs', 'opts': 'defaults', 'dump': 0, 'passno': 0, 'state': 'mounted', 'owner': None, 'group': None, 'mode': None}) => { "ansible_loop_var": "mount_info", "changed": false, "false_condition": "mount_info['owner'] != none or mount_info['group'] != none or mount_info['mode'] != none", "mount_info": { "dump": 0, "fstype": "xfs", "group": null, "mode": null, "opts": "defaults", "owner": null, "passno": 0, "path": "/hpc-test2", "src": "/dev/mapper/rootvg-usrlv", "state": "mounted" }, "skip_reason": "Conditional result was False" } skipping: [managed-node1] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:189 Saturday 13 December 2025 06:29:18 -0500 (0:00:00.049) 0:01:08.467 ***** ok: [managed-node1] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197 Saturday 13 December 2025 06:29:18 -0500 (0:00:00.673) 0:01:09.141 ***** ok: [managed-node1] => { "changed": false, "stat": { "atime": 1765625057.9956648, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1764328113.166, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 4194436, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0600", "mtime": 1764327821.524, "nlink": 1, "path": "/etc/crypttab", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 0, "uid": 0, "version": "3963487230", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Manage /etc/crypttab to account for changes we just made] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:202 Saturday 13 December 2025 06:29:19 -0500 (0:00:00.354) 0:01:09.495 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Update facts] ************************ task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224 Saturday 13 December 2025 06:29:19 -0500 (0:00:00.017) 0:01:09.513 ***** ok: [managed-node1] TASK [fedora.linux_system_roles.hpc : Force install kernel version] ************ task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:161 Saturday 13 December 2025 06:29:20 -0500 (0:00:00.876) 0:01:10.390 
***** skipping: [managed-node1] => { "changed": false, "false_condition": "__hpc_force_kernel_version is not none", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Update kernel] *************************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:169 Saturday 13 December 2025 06:29:20 -0500 (0:00:00.022) 0:01:10.413 ***** ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: kernel TASK [fedora.linux_system_roles.hpc : Get package facts] *********************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:178 Saturday 13 December 2025 06:29:21 -0500 (0:00:01.237) 0:01:11.650 ***** ok: [managed-node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.hpc : Install kernel-devel and kernel-headers packages for all kernels] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:183 Saturday 13 December 2025 06:29:22 -0500 (0:00:01.112) 0:01:12.763 ***** ok: [managed-node1] => (item={'name': 'kernel', 'version': '5.14.0', 'release': '642.el9', 'epoch': None, 'arch': 'x86_64', 'source': 'rpm'}) => { "ansible_loop_var": "item", "changed": false, "item": { "arch": "x86_64", "epoch": null, "name": "kernel", "release": "642.el9", "source": "rpm", "version": "5.14.0" }, "rc": 0, "results": [] } MSG: Nothing to do ok: [managed-node1] => (item={'name': 'kernel', 'version': '5.14.0', 'release': '648.el9', 'epoch': None, 'arch': 'x86_64', 'source': 'rpm'}) => { "ansible_loop_var": "item", "changed": false, "item": { "arch": "x86_64", "epoch": null, "name": "kernel", "release": "648.el9", "source": "rpm", "version": "5.14.0" }, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: kernel-devel-5.14.0-642.el9 kernel-devel-5.14.0-648.el9 kernel-headers-5.14.0-642.el9 kernel-headers-5.14.0-648.el9 TASK [fedora.linux_system_roles.hpc : Ensure that dnf-command(versionlock) is installed] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:194 Saturday 13 December 2025 06:29:25 -0500 (0:00:02.681) 0:01:15.444 ***** ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: dnf-command(versionlock) TASK [fedora.linux_system_roles.hpc : Check if kernel versionlock entries exist] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:199 Saturday 13 December 2025 06:29:26 -0500 (0:00:01.258) 0:01:16.703 ***** ok: [managed-node1] => { "changed": false, "stat": { "atime": 1765625292.252396, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1765625287.9793987, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 843067, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0644", "mtime": 1765625287.9793987, "nlink": 1, "path": "/etc/dnf/plugins/versionlock.list", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 0, "uid": 0, "version": "2734467333", "wgrp": false, "woth": false, 
"writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.hpc : Prevent installation of all kernel packages of a different version] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:208 Saturday 13 December 2025 06:29:26 -0500 (0:00:00.368) 0:01:17.071 ***** changed: [managed-node1] => (item=kernel) => { "ansible_loop_var": "item", "changed": true, "cmd": [ "dnf", "versionlock", "add", "kernel" ], "delta": "0:00:01.009197", "end": "2025-12-13 06:29:28.170474", "item": "kernel", "rc": 0, "start": "2025-12-13 06:29:27.161277" } STDOUT: Beaker Client - RedHatEnterpriseLinux9 0.0 B/s | 0 B 00:00 Last metadata expiration check: 0:01:21 ago on Sat 13 Dec 2025 06:28:06 AM EST. Adding versionlock on: kernel-0:5.14.0-642.el9.* Adding versionlock on: kernel-0:5.14.0-648.el9.* STDERR: Errors during downloading metadata for repository 'beaker-client': - Curl error (6): Couldn't resolve host name for http://download.eng.bos.redhat.com/beakerrepos/client/RedHatEnterpriseLinux9/repodata/repomd.xml [Could not resolve host: download.eng.bos.redhat.com] Error: Failed to download metadata for repo 'beaker-client': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried Ignoring repositories: beaker-client changed: [managed-node1] => (item=kernel-core) => { "ansible_loop_var": "item", "changed": true, "cmd": [ "dnf", "versionlock", "add", "kernel-core" ], "delta": "0:00:01.045683", "end": "2025-12-13 06:29:29.548471", "item": "kernel-core", "rc": 0, "start": "2025-12-13 06:29:28.502788" } STDOUT: Beaker Client - RedHatEnterpriseLinux9 0.0 B/s | 0 B 00:00 Last metadata expiration check: 0:01:23 ago on Sat 13 Dec 2025 06:28:06 AM EST. Adding versionlock on: kernel-core-0:5.14.0-648.el9.* Adding versionlock on: kernel-core-0:5.14.0-642.el9.* STDERR: Errors during downloading metadata for repository 'beaker-client': - Curl error (6): Couldn't resolve host name for http://download.eng.bos.redhat.com/beakerrepos/client/RedHatEnterpriseLinux9/repodata/repomd.xml [Could not resolve host: download.eng.bos.redhat.com] Error: Failed to download metadata for repo 'beaker-client': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried Ignoring repositories: beaker-client changed: [managed-node1] => (item=kernel-modules) => { "ansible_loop_var": "item", "changed": true, "cmd": [ "dnf", "versionlock", "add", "kernel-modules" ], "delta": "0:00:00.902369", "end": "2025-12-13 06:29:30.782670", "item": "kernel-modules", "rc": 0, "start": "2025-12-13 06:29:29.880301" } STDOUT: Beaker Client - RedHatEnterpriseLinux9 0.0 B/s | 0 B 00:00 Last metadata expiration check: 0:01:24 ago on Sat 13 Dec 2025 06:28:06 AM EST. 
Adding versionlock on: kernel-modules-0:5.14.0-642.el9.* Adding versionlock on: kernel-modules-0:5.14.0-648.el9.* STDERR: Errors during downloading metadata for repository 'beaker-client': - Curl error (6): Couldn't resolve host name for http://download.eng.bos.redhat.com/beakerrepos/client/RedHatEnterpriseLinux9/repodata/repomd.xml [Could not resolve host: download.eng.bos.redhat.com] Error: Failed to download metadata for repo 'beaker-client': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried Ignoring repositories: beaker-client changed: [managed-node1] => (item=kernel-modules-extra) => { "ansible_loop_var": "item", "changed": true, "cmd": [ "dnf", "versionlock", "add", "kernel-modules-extra" ], "delta": "0:00:01.020510", "end": "2025-12-13 06:29:32.132621", "item": "kernel-modules-extra", "rc": 0, "start": "2025-12-13 06:29:31.112111" } STDOUT: Beaker Client - RedHatEnterpriseLinux9 0.0 B/s | 0 B 00:00 Last metadata expiration check: 0:01:25 ago on Sat 13 Dec 2025 06:28:06 AM EST. Adding versionlock on: kernel-modules-extra-0:5.14.0-648.el9.* Adding versionlock on: kernel-modules-extra-0:5.14.0-635.el9.* Adding versionlock on: kernel-modules-extra-0:5.14.0-642.el9.* Adding versionlock on: kernel-modules-extra-0:5.14.0-639.el9.* Adding versionlock on: kernel-modules-extra-0:5.14.0-645.el9.* STDERR: Errors during downloading metadata for repository 'beaker-client': - Curl error (6): Couldn't resolve host name for http://download.eng.bos.redhat.com/beakerrepos/client/RedHatEnterpriseLinux9/repodata/repomd.xml [Could not resolve host: download.eng.bos.redhat.com] Error: Failed to download metadata for repo 'beaker-client': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried Ignoring repositories: beaker-client changed: [managed-node1] => (item=kernel-devel) => { "ansible_loop_var": "item", "changed": true, "cmd": [ "dnf", "versionlock", "add", "kernel-devel" ], "delta": "0:00:00.948395", "end": "2025-12-13 06:29:33.407735", "item": "kernel-devel", "rc": 0, "start": "2025-12-13 06:29:32.459340" } STDOUT: Beaker Client - RedHatEnterpriseLinux9 0.0 B/s | 0 B 00:00 Last metadata expiration check: 0:01:27 ago on Sat 13 Dec 2025 06:28:06 AM EST. Adding versionlock on: kernel-devel-0:5.14.0-648.el9.* Adding versionlock on: kernel-devel-0:5.14.0-642.el9.* STDERR: Errors during downloading metadata for repository 'beaker-client': - Curl error (6): Couldn't resolve host name for http://download.eng.bos.redhat.com/beakerrepos/client/RedHatEnterpriseLinux9/repodata/repomd.xml [Could not resolve host: download.eng.bos.redhat.com] Error: Failed to download metadata for repo 'beaker-client': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried Ignoring repositories: beaker-client changed: [managed-node1] => (item=kernel-headers) => { "ansible_loop_var": "item", "changed": true, "cmd": [ "dnf", "versionlock", "add", "kernel-headers" ], "delta": "0:00:00.884295", "end": "2025-12-13 06:29:34.622654", "item": "kernel-headers", "rc": 0, "start": "2025-12-13 06:29:33.738359" } STDOUT: Beaker Client - RedHatEnterpriseLinux9 0.0 B/s | 0 B 00:00 Last metadata expiration check: 0:01:28 ago on Sat 13 Dec 2025 06:28:06 AM EST. 
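
The kernel-headers lock lines follow below; taken together, this loop runs dnf versionlock add once per installed kernel package, and the resulting locks land in the versionlock.list file checked earlier (/etc/dnf/plugins/versionlock.list). A sketch of an equivalent task; the role's actual implementation may differ:

    - name: Prevent installation of kernel packages of a different version
      ansible.builtin.command: dnf versionlock add {{ item }}
      changed_when: true
      loop:
        - kernel
        - kernel-core
        - kernel-modules
        - kernel-modules-extra
        - kernel-devel
        - kernel-headers
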
Adding versionlock on: kernel-headers-0:5.14.0-648.el9.* STDERR: Errors during downloading metadata for repository 'beaker-client': - Curl error (6): Couldn't resolve host name for http://download.eng.bos.redhat.com/beakerrepos/client/RedHatEnterpriseLinux9/repodata/repomd.xml [Could not resolve host: download.eng.bos.redhat.com] Error: Failed to download metadata for repo 'beaker-client': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried Ignoring repositories: beaker-client
TASK [fedora.linux_system_roles.hpc : Update all packages to bring system to the latest state] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:216 Saturday 13 December 2025 06:29:34 -0500 (0:00:07.827) 0:01:24.898 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "hpc_update_all_packages", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.hpc : Get list of dnf modules] ***************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:227 Saturday 13 December 2025 06:29:34 -0500 (0:00:00.032) 0:01:24.931 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "ansible_system_vendor == \"Microsoft Corporation\"", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.hpc : Reset nvidia-driver module if it is enabled with a different version] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:232 Saturday 13 December 2025 06:29:34 -0500 (0:00:00.032) 0:01:24.963 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "ansible_system_vendor == \"Microsoft Corporation\"", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.hpc : Enable NVIDIA driver module] ************* task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:237 Saturday 13 December 2025 06:29:34 -0500 (0:00:00.032) 0:01:24.996 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "ansible_system_vendor == \"Microsoft Corporation\"", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.hpc : Install dkms] **************************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:246 Saturday 13 December 2025 06:29:34 -0500 (0:00:00.032) 0:01:25.029 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "ansible_system_vendor == \"Microsoft Corporation\"", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.hpc : Install NVIDIA driver] ******************* task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:254 Saturday 13 December 2025 06:29:34 -0500 (0:00:00.031) 0:01:25.060 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "ansible_system_vendor == \"Microsoft Corporation\"", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.hpc : Restart dkms service to make it build NVIDIA drivers for all kernels] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:264 Saturday 13 December 2025 06:29:34 -0500 (0:00:00.032) 0:01:25.093 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "ansible_system_vendor == \"Microsoft Corporation\"", "skip_reason":
"Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Install CUDA driver] ********************* task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:275 Saturday 13 December 2025 06:29:34 -0500 (0:00:00.033) 0:01:25.127 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "ansible_system_vendor == \"Microsoft Corporation\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Enable nvidia-persistenced.service] ****** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:283 Saturday 13 December 2025 06:29:34 -0500 (0:00:00.032) 0:01:25.159 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "ansible_system_vendor == \"Microsoft Corporation\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Install CUDA Toolkit] ******************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:293 Saturday 13 December 2025 06:29:34 -0500 (0:00:00.031) 0:01:25.190 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "hpc_install_cuda_toolkit", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.hpc : Prevent update of CUDA Toolkit packages] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:302 Saturday 13 December 2025 06:29:35 -0500 (0:00:00.036) 0:01:25.227 ***** skipping: [managed-node1] => (item=kernel) => { "ansible_loop_var": "item", "changed": false, "false_condition": "hpc_install_cuda_toolkit", "item": "kernel", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item=kernel-core) => { "ansible_loop_var": "item", "changed": false, "false_condition": "hpc_install_cuda_toolkit", "item": "kernel-core", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item=kernel-modules) => { "ansible_loop_var": "item", "changed": false, "false_condition": "hpc_install_cuda_toolkit", "item": "kernel-modules", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item=kernel-modules-extra) => { "ansible_loop_var": "item", "changed": false, "false_condition": "hpc_install_cuda_toolkit", "item": "kernel-modules-extra", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item=kernel-devel) => { "ansible_loop_var": "item", "changed": false, "false_condition": "hpc_install_cuda_toolkit", "item": "kernel-devel", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item=kernel-headers) => { "ansible_loop_var": "item", "changed": false, "false_condition": "hpc_install_cuda_toolkit", "item": "kernel-headers", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.hpc : Install NVIDIA NCCL] ********************* task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:313 Saturday 13 December 2025 06:29:35 -0500 (0:00:00.074) 0:01:25.301 ***** changed: [managed-node1] => { "attempts": 1, "changed": true, "rc": 0, "results": [ "Installed: libnccl-2.27.5-1+cuda12.9.x86_64", "Installed: libnccl-devel-2.27.5-1+cuda12.9.x86_64" ] } lsrpackages: libnccl-2.27.5-1+cuda12.9 libnccl-devel-2.27.5-1+cuda12.9 TASK [fedora.linux_system_roles.hpc : Prevent update of NVIDIA NCCL 
packages] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:322 Saturday 13 December 2025 06:30:01 -0500 (0:00:26.209) 0:01:51.511 ***** changed: [managed-node1] => (item=libnccl-2.27.5-1+cuda12.9) => { "ansible_loop_var": "item", "changed": true, "cmd": [ "dnf", "versionlock", "add", "libnccl-2.27.5-1+cuda12.9" ], "delta": "0:00:01.080009", "end": "2025-12-13 06:30:02.686534", "item": "libnccl-2.27.5-1+cuda12.9", "rc": 0, "start": "2025-12-13 06:30:01.606525" } STDOUT: Beaker Client - RedHatEnterpriseLinux9 0.0 B/s | 0 B 00:00 Last metadata expiration check: 0:01:56 ago on Sat 13 Dec 2025 06:28:06 AM EST. Adding versionlock on: libnccl-0:2.27.5-1+cuda12.9.* STDERR: Errors during downloading metadata for repository 'beaker-client': - Curl error (6): Couldn't resolve host name for http://download.eng.bos.redhat.com/beakerrepos/client/RedHatEnterpriseLinux9/repodata/repomd.xml [Could not resolve host: download.eng.bos.redhat.com] Error: Failed to download metadata for repo 'beaker-client': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried Ignoring repositories: beaker-client changed: [managed-node1] => (item=libnccl-devel-2.27.5-1+cuda12.9) => { "ansible_loop_var": "item", "changed": true, "cmd": [ "dnf", "versionlock", "add", "libnccl-devel-2.27.5-1+cuda12.9" ], "delta": "0:00:01.080791", "end": "2025-12-13 06:30:04.094057", "item": "libnccl-devel-2.27.5-1+cuda12.9", "rc": 0, "start": "2025-12-13 06:30:03.013266" } STDOUT: Beaker Client - RedHatEnterpriseLinux9 0.0 B/s | 0 B 00:00 Last metadata expiration check: 0:01:57 ago on Sat 13 Dec 2025 06:28:06 AM EST. Adding versionlock on: libnccl-devel-0:2.27.5-1+cuda12.9.* STDERR: Errors during downloading metadata for repository 'beaker-client': - Curl error (6): Couldn't resolve host name for http://download.eng.bos.redhat.com/beakerrepos/client/RedHatEnterpriseLinux9/repodata/repomd.xml [Could not resolve host: download.eng.bos.redhat.com] Error: Failed to download metadata for repo 'beaker-client': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried Ignoring repositories: beaker-client TASK [fedora.linux_system_roles.hpc : Install NVIDIA Fabric Manager] *********** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:333 Saturday 13 December 2025 06:30:04 -0500 (0:00:02.857) 0:01:54.368 ***** FAILED - RETRYING: [managed-node1]: Install NVIDIA Fabric Manager (3 retries left). FAILED - RETRYING: [managed-node1]: Install NVIDIA Fabric Manager (2 retries left). FAILED - RETRYING: [managed-node1]: Install NVIDIA Fabric Manager (1 retries left). fatal: [managed-node1]: FAILED! 
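
The result payload follows below: "All matches were filtered out by modular filtering" means the nvidia-fabric-manager RPM is visible in an enabled repository but belongs to a dnf module stream that was never enabled, which is consistent with every module-related task earlier in this run skipping because ansible_system_vendor is not "Microsoft Corporation". A hedged remediation sketch; the stream name is an assumption, not taken from this log:

    - name: Enable the NVIDIA driver module stream before installing fabric manager
      ansible.builtin.command: dnf -y module enable nvidia-driver:latest-dkms   # stream name assumed
      changed_when: true
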
=> { "attempts": 3, "changed": false, "failures": [ "nvidia-fabric-manager All matches were filtered out by modular filtering for argument: nvidia-fabric-manager" ], "rc": 1, "results": [] } MSG: Failed to install some of the specified packages TASK [Remove both of the LVM logical volumes in 'foo' created above] *********** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:151 Saturday 13 December 2025 06:30:24 -0500 (0:00:20.220) 0:02:14.589 ***** included: fedora.linux_system_roles.storage for managed-node1 TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2 Saturday 13 December 2025 06:30:24 -0500 (0:00:00.107) 0:02:14.697 ***** included: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed-node1 TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2 Saturday 13 December 2025 06:30:24 -0500 (0:00:00.054) 0:02:14.751 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "__storage_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7 Saturday 13 December 2025 06:30:24 -0500 (0:00:00.057) 0:02:14.808 ***** skipping: [managed-node1] => (item=RedHat.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "RedHat.yml", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False" } ok: [managed-node1] => (item=CentOS_9.yml) => { "ansible_facts": { "blivet_package_list": [ "python3-blivet", "libblockdev-crypto", "libblockdev-dm", "libblockdev-lvm", "libblockdev-mdraid", "libblockdev-swap", "vdo", "kmod-kvdo", "xfsprogs", "stratisd", "stratis-cli", "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}" ] }, "ansible_included_var_files": [ "/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" } ok: [managed-node1] => (item=CentOS_9.yml) => { "ansible_facts": { "blivet_package_list": [ "python3-blivet", "libblockdev-crypto", "libblockdev-dm", "libblockdev-lvm", "libblockdev-mdraid", "libblockdev-swap", "vdo", "kmod-kvdo", "xfsprogs", "stratisd", "stratis-cli", "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}" ] }, "ansible_included_var_files": [ "/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" } TASK [fedora.linux_system_roles.storage : Check if system is ostree] *********** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25 Saturday 13 December 2025 06:30:24 -0500 (0:00:00.074) 0:02:14.883 ***** skipping: [managed-node1] => 
{ "changed": false, "false_condition": "not __storage_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30 Saturday 13 December 2025 06:30:24 -0500 (0:00:00.036) 0:02:14.919 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "not __storage_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:5 Saturday 13 December 2025 06:30:24 -0500 (0:00:00.035) 0:02:14.955 ***** ok: [managed-node1] => { "ansible_facts": { "_storage_pools_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:9 Saturday 13 December 2025 06:30:24 -0500 (0:00:00.033) 0:02:14.989 ***** ok: [managed-node1] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13 Saturday 13 December 2025 06:30:24 -0500 (0:00:00.033) 0:02:15.023 ***** redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount included: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed-node1 TASK [fedora.linux_system_roles.storage : Make sure blivet is available] ******* task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2 Saturday 13 December 2025 06:30:24 -0500 (0:00:00.107) 0:02:15.131 ***** ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: kmod-kvdo libblockdev libblockdev-crypto libblockdev-dm libblockdev-lvm libblockdev-mdraid libblockdev-swap python3-blivet stratis-cli stratisd vdo xfsprogs TASK [fedora.linux_system_roles.storage : Show storage_pools] ****************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:9 Saturday 13 December 2025 06:30:26 -0500 (0:00:01.407) 0:02:16.538 ***** ok: [managed-node1] => { "storage_pools | d([])": [ { "disks": [ "sda", "sdb" ], "grow_to_fill": true, "name": "rootvg", "state": "absent", "volumes": [ { "mount_point": "/hpc-test1", "name": "rootlv", "size": "2G" }, { "mount_point": "/hpc-test2", "name": "usrlv", "size": "1G" } ] } ] } TASK [fedora.linux_system_roles.storage : Show storage_volumes] **************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14 Saturday 13 December 2025 06:30:26 -0500 (0:00:00.064) 0:02:16.603 ***** ok: [managed-node1] => { "storage_volumes | d([])": [] } TASK [fedora.linux_system_roles.storage : Get required packages] *************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19 Saturday 13 December 2025 06:30:26 -0500 (0:00:00.054) 0:02:16.658 ***** ok: [managed-node1] => 
{ "actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } TASK [fedora.linux_system_roles.storage : Enable copr repositories if needed] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32 Saturday 13 December 2025 06:30:27 -0500 (0:00:01.271) 0:02:17.929 ***** included: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml for managed-node1 TASK [fedora.linux_system_roles.storage : Check if the COPR support packages should be installed] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:2 Saturday 13 December 2025 06:30:27 -0500 (0:00:00.063) 0:02:17.993 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Make sure COPR support packages are present] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:13 Saturday 13 December 2025 06:30:27 -0500 (0:00:00.049) 0:02:18.043 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "install_copr | d(false) | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Enable COPRs] ************************ task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:19 Saturday 13 December 2025 06:30:27 -0500 (0:00:00.051) 0:02:18.094 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Make sure required packages are installed] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38 Saturday 13 December 2025 06:30:27 -0500 (0:00:00.052) 0:02:18.146 ***** ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: kpartx TASK [fedora.linux_system_roles.storage : Get service facts] ******************* task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52 Saturday 13 December 2025 06:30:29 -0500 (0:00:01.349) 0:02:19.496 ***** ok: [managed-node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "apt-daily.service": { "name": "apt-daily.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autofs.service": { "name": "autofs.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blivet.service": { "name": "blivet.service", "source": "systemd", "state": "inactive", "status": "static" }, 
"blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "cpupower.service": { "name": "cpupower.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": 
"dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "fcoe.service": { "name": "fcoe.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "inactive", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" }, "iscsi-shutdown.service": { "name": "iscsi-shutdown.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsi.service": { "name": "iscsi.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsid.service": { "name": "iscsid.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "kvm_stat.service": { "name": "kvm_stat.service", "source": "systemd", "state": "inactive", 
"status": "disabled" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm-devices-import.service": { "name": "lvm-devices-import.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "lvm2-activation-early.service": { "name": "lvm2-activation-early.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "mdadm-grow-continue@.service": { "name": "mdadm-grow-continue@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdadm-last-resort@.service": { "name": "mdadm-last-resort@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdcheck_continue.service": { "name": "mdcheck_continue.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdcheck_start.service": { "name": "mdcheck_start.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmon@.service": { "name": "mdmon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdmonitor-oneshot.service": { "name": "mdmonitor-oneshot.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmonitor.service": { "name": "mdmonitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@efi_pstore.service": { "name": "modprobe@efi_pstore.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "multipathd.service": { "name": "multipathd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "ndctl-monitor.service": { "name": "ndctl-monitor.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { 
"name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "oddjobd.service": { "name": "oddjobd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "quotaon.service": { "name": "quotaon.service", "source": "systemd", "state": "inactive", "status": "static" }, "raid-check.service": { "name": "raid-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "rbdmap.service": { "name": "rbdmap.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rdisc.service": { "name": "rdisc.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": 
"inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "stratis-fstab-setup@.service": { "name": "stratis-fstab-setup@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratisd-min-postinitrd.service": { "name": "stratisd-min-postinitrd.service", "source": "systemd", "state": "inactive", "status": "static" }, "stratisd.service": { "name": "stratisd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, 
"systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-resume@.service": { "name": "systemd-hibernate-resume@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": 
"systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-quotacheck.service": { "name": "systemd-quotacheck.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", 
"source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles.service": { "name": "systemd-tmpfiles.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "target.service": { "name": "target.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "targetclid.service": { "name": "targetclid.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "teamd@.service": { "name": "teamd@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" }, "ypbind.service": { "name": "ypbind.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "yppasswdd.service": { "name": "yppasswdd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypserv.service": { "name": "ypserv.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ypxfrd.service": { "name": "ypxfrd.service", "source": "systemd", "state": "stopped", "status": "not-found" } } }, "changed": false } TASK [fedora.linux_system_roles.storage : Set storage_cryptsetup_services] ***** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58 Saturday 13 December 2025 06:30:31 -0500 (0:00:01.824) 0:02:21.321 ***** ok: [managed-node1] => { "ansible_facts": { "storage_cryptsetup_services": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Mask the systemd cryptsetup services] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:64 Saturday 13 December 2025 06:30:31 -0500 (0:00:00.070) 0:02:21.391 ***** skipping: [managed-node1] => { 
"changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70 Saturday 13 December 2025 06:30:31 -0500 (0:00:00.063) 0:02:21.455 ***** changed: [managed-node1] => { "actions": [ { "action": "destroy format", "device": "/dev/mapper/rootvg-usrlv", "fs_type": "xfs" }, { "action": "destroy device", "device": "/dev/mapper/rootvg-usrlv", "fs_type": null }, { "action": "destroy format", "device": "/dev/mapper/rootvg-rootlv", "fs_type": "xfs" }, { "action": "destroy device", "device": "/dev/mapper/rootvg-rootlv", "fs_type": null }, { "action": "destroy device", "device": "/dev/rootvg", "fs_type": null }, { "action": "destroy format", "device": "/dev/sdb", "fs_type": "lvmpv" }, { "action": "destroy format", "device": "/dev/sda", "fs_type": "lvmpv" } ], "changed": true, "crypts": [], "leaves": [ "/dev/sda", "/dev/sdb", "/dev/xvda1" ], "mounts": [ { "fstype": "xfs", "path": "/hpc-test2", "src": "/dev/mapper/rootvg-usrlv", "state": "absent" }, { "fstype": "xfs", "path": "/hpc-test1", "src": "/dev/mapper/rootvg-rootlv", "state": "absent" } ], "packages": [ "xfsprogs" ], "pools": [ { "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": true, "name": "rootvg", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "absent", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/rootvg-rootlv", "_mount_id": "/dev/mapper/rootvg-rootlv", "_raw_device": "/dev/mapper/rootvg-rootlv", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test1", "mount_user": null, "name": "rootlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "2G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/rootvg-usrlv", "_mount_id": "/dev/mapper/rootvg-usrlv", "_raw_device": "/dev/mapper/rootvg-usrlv", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 
0, "mount_point": "/hpc-test2", "mount_user": null, "name": "usrlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "1G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] } ], "volumes": [] } TASK [fedora.linux_system_roles.storage : Workaround for udev issue on some platforms] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:85 Saturday 13 December 2025 06:30:33 -0500 (0:00:02.413) 0:02:23.868 ***** skipping: [managed-node1] => { "changed": false, "false_condition": "storage_udevadm_trigger | d(false)", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Check if /etc/fstab is present] ****** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92 Saturday 13 December 2025 06:30:33 -0500 (0:00:00.054) 0:02:23.922 ***** ok: [managed-node1] => { "changed": false, "stat": { "atime": 1765625339.2583668, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "9377586f1d232bf923bd728d683ed25ea92a0654", "ctime": 1765625339.2543669, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 71305522, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1765625339.2543669, "nlink": 1, "path": "/etc/fstab", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 1473, "uid": 0, "version": "84410504", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Add fingerprint to /etc/fstab if present] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:97 Saturday 13 December 2025 06:30:34 -0500 (0:00:00.374) 0:02:24.297 ***** ok: [managed-node1] => { "backup": "", "changed": false } TASK [fedora.linux_system_roles.storage : Unmask the systemd cryptsetup services] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:115 Saturday 13 December 2025 06:30:34 -0500 (0:00:00.378) 0:02:24.676 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Show blivet_output] ****************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:121 Saturday 13 December 2025 06:30:34 -0500 (0:00:00.030) 0:02:24.706 ***** ok: [managed-node1] => { "blivet_output": { "actions": [ { "action": "destroy format", "device": "/dev/mapper/rootvg-usrlv", "fs_type": "xfs" }, { "action": "destroy device", "device": "/dev/mapper/rootvg-usrlv", "fs_type": null }, { "action": "destroy format", "device": "/dev/mapper/rootvg-rootlv", "fs_type": "xfs" }, { "action": "destroy device", "device": "/dev/mapper/rootvg-rootlv", "fs_type": null }, { "action": "destroy device", "device": "/dev/rootvg", "fs_type": null }, { "action": "destroy format", "device": "/dev/sdb", "fs_type": "lvmpv" }, { "action": "destroy format", 
"device": "/dev/sda", "fs_type": "lvmpv" } ], "changed": true, "crypts": [], "failed": false, "leaves": [ "/dev/sda", "/dev/sdb", "/dev/xvda1" ], "mounts": [ { "fstype": "xfs", "path": "/hpc-test2", "src": "/dev/mapper/rootvg-usrlv", "state": "absent" }, { "fstype": "xfs", "path": "/hpc-test1", "src": "/dev/mapper/rootvg-rootlv", "state": "absent" } ], "packages": [ "xfsprogs" ], "pools": [ { "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": true, "name": "rootvg", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "absent", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/rootvg-rootlv", "_mount_id": "/dev/mapper/rootvg-rootlv", "_raw_device": "/dev/mapper/rootvg-rootlv", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test1", "mount_user": null, "name": "rootlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "2G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/rootvg-usrlv", "_mount_id": "/dev/mapper/rootvg-usrlv", "_raw_device": "/dev/mapper/rootvg-usrlv", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test2", "mount_user": null, "name": "usrlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "1G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] } ], "volumes": [] } } TASK [fedora.linux_system_roles.storage : Set the list of pools for test verification] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:130 Saturday 13 December 2025 06:30:34 -0500 (0:00:00.042) 0:02:24.749 ***** ok: [managed-node1] => { "ansible_facts": { "_storage_pools_list": [ { "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, 
"encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": true, "name": "rootvg", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "absent", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/rootvg-rootlv", "_mount_id": "/dev/mapper/rootvg-rootlv", "_raw_device": "/dev/mapper/rootvg-rootlv", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test1", "mount_user": null, "name": "rootlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "2G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/rootvg-usrlv", "_mount_id": "/dev/mapper/rootvg-usrlv", "_raw_device": "/dev/mapper/rootvg-usrlv", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [ "sda", "sdb" ], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "/hpc-test2", "mount_user": null, "name": "usrlv", "part_type": null, "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "1G", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] } ] }, "changed": false } TASK [fedora.linux_system_roles.storage : Set the list of volumes for test verification] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:134 Saturday 13 December 2025 06:30:34 -0500 (0:00:00.038) 0:02:24.787 ***** ok: [managed-node1] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Remove obsolete mounts] ************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:150 Saturday 13 December 2025 06:30:34 -0500 (0:00:00.034) 0:02:24.821 ***** redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount changed: [managed-node1] => (item={'src': '/dev/mapper/rootvg-usrlv', 'path': '/hpc-test2', 'state': 'absent', 'fstype': 'xfs'}) => { "ansible_loop_var": "mount_info", "backup_file": "", "boot": "yes", "changed": 
true, "dump": "0", "fstab": "/etc/fstab", "fstype": "xfs", "mount_info": { "fstype": "xfs", "path": "/hpc-test2", "src": "/dev/mapper/rootvg-usrlv", "state": "absent" }, "name": "/hpc-test2", "opts": "defaults", "passno": "0", "src": "/dev/mapper/rootvg-usrlv" } redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount changed: [managed-node1] => (item={'src': '/dev/mapper/rootvg-rootlv', 'path': '/hpc-test1', 'state': 'absent', 'fstype': 'xfs'}) => { "ansible_loop_var": "mount_info", "backup_file": "", "boot": "yes", "changed": true, "dump": "0", "fstab": "/etc/fstab", "fstype": "xfs", "mount_info": { "fstype": "xfs", "path": "/hpc-test1", "src": "/dev/mapper/rootvg-rootlv", "state": "absent" }, "name": "/hpc-test1", "opts": "defaults", "passno": "0", "src": "/dev/mapper/rootvg-rootlv" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:161 Saturday 13 December 2025 06:30:35 -0500 (0:00:00.733) 0:02:25.555 ***** ok: [managed-node1] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.storage : Set up new/current mounts] *********** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:166 Saturday 13 December 2025 06:30:36 -0500 (0:00:00.698) 0:02:26.253 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage mount ownership/permissions] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:177 Saturday 13 December 2025 06:30:36 -0500 (0:00:00.054) 0:02:26.308 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:189 Saturday 13 December 2025 06:30:36 -0500 (0:00:00.053) 0:02:26.362 ***** ok: [managed-node1] => { "changed": false, "name": null, "status": {} } TASK [fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197 Saturday 13 December 2025 06:30:36 -0500 (0:00:00.701) 0:02:27.064 ***** ok: [managed-node1] => { "changed": false, "stat": { "atime": 1765625057.9956648, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1764328113.166, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 4194436, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0600", "mtime": 1764327821.524, "nlink": 1, "path": "/etc/crypttab", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 0, "uid": 0, "version": "3963487230", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Manage /etc/crypttab to account for 
changes we just made] *** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:202 Saturday 13 December 2025 06:30:37 -0500 (0:00:00.370) 0:02:27.434 ***** skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Update facts] ************************ task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224 Saturday 13 December 2025 06:30:37 -0500 (0:00:00.030) 0:02:27.465 ***** ok: [managed-node1] TASK [Clean up after the role invocation] ************************************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:168 Saturday 13 December 2025 06:30:38 -0500 (0:00:00.947) 0:02:28.412 ***** included: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/cleanup.yml for managed-node1 TASK [Check if versionlock entries exist] ************************************** task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/cleanup.yml:3 Saturday 13 December 2025 06:30:38 -0500 (0:00:00.078) 0:02:28.491 ***** ok: [managed-node1] => { "changed": false, "stat": { "atime": 1765625405.3333032, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "e9d583e97721d8377ac06980b307c295dab4c4ae", "ctime": 1765625404.0563045, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 843067, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1765625404.0563045, "nlink": 1, "path": "/etc/dnf/plugins/versionlock.list", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 882, "uid": 0, "version": "2734467333", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [Clear dnf versionlock entries] ******************************************* task path: /tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tasks/cleanup.yml:8 Saturday 13 December 2025 06:30:38 -0500 (0:00:00.376) 0:02:28.867 ***** changed: [managed-node1] => { "changed": true, "cmd": [ "dnf", "versionlock", "clear" ], "delta": "0:00:00.921850", "end": "2025-12-13 06:30:39.884134", "rc": 0, "start": "2025-12-13 06:30:38.962284" } STDOUT: Beaker Client - RedHatEnterpriseLinux9 0.0 B/s | 0 B 00:00 Last metadata expiration check: 0:02:33 ago on Sat 13 Dec 2025 06:28:06 AM EST. 
STDERR:

Errors during downloading metadata for repository 'beaker-client':
  - Curl error (6): Couldn't resolve host name for http://download.eng.bos.redhat.com/beakerrepos/client/RedHatEnterpriseLinux9/repodata/repomd.xml [Could not resolve host: download.eng.bos.redhat.com]
Error: Failed to download metadata for repo 'beaker-client': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Ignoring repositories: beaker-client

PLAY RECAP *********************************************************************
managed-node1              : ok=104  changed=11   unreachable=0    failed=1    skipped=59   rescued=0    ignored=0

SYSTEM ROLES ERRORS BEGIN v1
[
  {
    "ansible_version": "2.17.14",
    "attempts": 3,
    "end_time": "2025-12-13T11:30:24.359115+00:00Z",
    "host": "managed-node1",
    "message": "Failed to install some of the specified packages",
    "rc": 1,
    "start_time": "2025-12-13T11:30:04.166723+00:00Z",
    "task_name": "Install NVIDIA Fabric Manager",
    "task_path": "/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:333"
  }
]
SYSTEM ROLES ERRORS END v1

TASKS RECAP ********************************************************************
Saturday 13 December 2025 06:30:39 -0500 (0:00:01.269)       0:02:30.136 *****
===============================================================================
fedora.linux_system_roles.storage : Make sure blivet is available ------ 32.64s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
fedora.linux_system_roles.hpc : Install NVIDIA NCCL -------------------- 26.21s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:313
fedora.linux_system_roles.hpc : Install NVIDIA Fabric Manager ---------- 20.22s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:333
fedora.linux_system_roles.hpc : Prevent installation of all kernel packages of a different version --- 7.83s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:208
fedora.linux_system_roles.storage : Get service facts ------------------- 3.97s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state --- 2.97s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70
fedora.linux_system_roles.hpc : Prevent update of NVIDIA NCCL packages --- 2.86s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:322
fedora.linux_system_roles.hpc : Install kernel-devel and kernel-headers packages for all kernels --- 2.68s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:183
fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state --- 2.41s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70
fedora.linux_system_roles.storage : Get service facts ------------------- 1.82s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
fedora.linux_system_roles.storage : Get service facts ------------------- 1.74s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state --- 1.67s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70
fedora.linux_system_roles.hpc : Install EPEL release package ------------ 1.61s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:37
fedora.linux_system_roles.hpc : Deploy GPG keys for repositories -------- 1.53s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:27
Find unused disks in the system ----------------------------------------- 1.46s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:40
Ensure test packages ---------------------------------------------------- 1.46s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/tests/hpc/tests_skip_toolkit.yml:31
fedora.linux_system_roles.storage : Make sure blivet is available ------- 1.41s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
fedora.linux_system_roles.storage : Make sure required packages are installed --- 1.35s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38
fedora.linux_system_roles.hpc : Install lvm2 to get lvs command --------- 1.32s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/hpc/tasks/main.yml:114
fedora.linux_system_roles.storage : Make sure required packages are installed --- 1.31s
/tmp/collections-H6s/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38

Dec 13 06:28:09 managed-node1 sshd-session[53501]: Accepted publickey for root from 10.31.14.156 port 44710 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Dec 13 06:28:09 managed-node1 systemd-logind[608]: New session 15 of user root.
β–‘β–‘ Subject: A new session 15 has been created for user root
β–‘β–‘ Defined-By: systemd
β–‘β–‘ Support: https://access.redhat.com/support
β–‘β–‘ Documentation: sd-login(3)
β–‘β–‘
β–‘β–‘ A new session with the ID 15 has been created for the user root.
β–‘β–‘
β–‘β–‘ The leading process of the session is 53501.
Dec 13 06:28:09 managed-node1 systemd[1]: Started Session 15 of User root.
β–‘β–‘ Subject: A start job for unit session-15.scope has finished successfully
β–‘β–‘ Defined-By: systemd
β–‘β–‘ Support: https://access.redhat.com/support
β–‘β–‘
β–‘β–‘ A start job for unit session-15.scope has finished successfully.
β–‘β–‘
β–‘β–‘ The job identifier is 1865.
Dec 13 06:28:09 managed-node1 sshd-session[53501]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Dec 13 06:28:09 managed-node1 sshd-session[53504]: Received disconnect from 10.31.14.156 port 44710:11: disconnected by user
Dec 13 06:28:09 managed-node1 sshd-session[53504]: Disconnected from user root 10.31.14.156 port 44710
Dec 13 06:28:09 managed-node1 sshd-session[53501]: pam_unix(sshd:session): session closed for user root
Dec 13 06:28:09 managed-node1 systemd[1]: session-15.scope: Deactivated successfully.
β–‘β–‘ Subject: Unit succeeded
β–‘β–‘ Defined-By: systemd
β–‘β–‘ Support: https://access.redhat.com/support
β–‘β–‘
β–‘β–‘ The unit session-15.scope has successfully entered the 'dead' state.
Dec 13 06:28:09 managed-node1 systemd-logind[608]: Session 15 logged out. Waiting for processes to exit.
Dec 13 06:28:09 managed-node1 systemd-logind[608]: Removed session 15.
β–‘β–‘ Subject: Session 15 has been terminated
β–‘β–‘ Defined-By: systemd
β–‘β–‘ Support: https://access.redhat.com/support
β–‘β–‘ Documentation: sd-login(3)
β–‘β–‘
β–‘β–‘ A session with the ID 15 has been terminated.
Dec 13 06:28:10 managed-node1 python3.9[53702]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 06:28:11 managed-node1 python3.9[53879]: ansible-ansible.legacy.dnf Invoked with name=['util-linux-core'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 06:28:12 managed-node1 python3.9[54033]: ansible-fedora.linux_system_roles.find_unused_disk Invoked with max_return=10 min_size=0 max_size=0 match_sector_size=False with_interface=None
Dec 13 06:28:14 managed-node1 python3.9[54186]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 06:28:14 managed-node1 python3.9[54335]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'vdo', 'kmod-kvdo', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 06:28:19 managed-node1 systemd[1]: Reloading.
Dec 13 06:28:19 managed-node1 systemd-rc-local-generator[54423]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 06:28:19 managed-node1 systemd[1]: Listening on Device-mapper event daemon FIFOs.
β–‘β–‘ Subject: A start job for unit dm-event.socket has finished successfully
β–‘β–‘ Defined-By: systemd
β–‘β–‘ Support: https://access.redhat.com/support
β–‘β–‘
β–‘β–‘ A start job for unit dm-event.socket has finished successfully.
β–‘β–‘
β–‘β–‘ The job identifier is 1934.
Dec 13 06:28:19 managed-node1 systemd[1]: Reloading.
Dec 13 06:28:19 managed-node1 systemd-rc-local-generator[54457]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 06:28:19 managed-node1 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
β–‘β–‘ Subject: A start job for unit lvm2-monitor.service has begun execution
β–‘β–‘ Defined-By: systemd
β–‘β–‘ Support: https://access.redhat.com/support
β–‘β–‘
β–‘β–‘ A start job for unit lvm2-monitor.service has begun execution.
β–‘β–‘
β–‘β–‘ The job identifier is 1938.
Dec 13 06:28:19 managed-node1 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
β–‘β–‘ Subject: A start job for unit lvm2-monitor.service has finished successfully
β–‘β–‘ Defined-By: systemd
β–‘β–‘ Support: https://access.redhat.com/support
β–‘β–‘
β–‘β–‘ A start job for unit lvm2-monitor.service has finished successfully.
β–‘β–‘
β–‘β–‘ The job identifier is 1938.
Dec 13 06:28:19 managed-node1 systemd[1]: Reloading.
Dec 13 06:28:20 managed-node1 systemd-rc-local-generator[54490]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 06:28:20 managed-node1 systemd[1]: Listening on LVM2 poll daemon socket.
β–‘β–‘ Subject: A start job for unit lvm2-lvmpolld.socket has finished successfully
β–‘β–‘ Defined-By: systemd
β–‘β–‘ Support: https://access.redhat.com/support
β–‘β–‘
β–‘β–‘ A start job for unit lvm2-lvmpolld.socket has finished successfully.
β–‘β–‘
β–‘β–‘ The job identifier is 1944.
Dec 13 06:28:20 managed-node1 dbus-broker-launch[599]: Noticed file-system modification, trigger reload.
β–‘β–‘ Subject: A configuration directory was written to
β–‘β–‘ Defined-By: dbus-broker
β–‘β–‘ Support: https://groups.google.com/forum/#!forum/bus1-devel
β–‘β–‘
β–‘β–‘ A write was detected to one of the directories containing D-Bus configuration
β–‘β–‘ files, triggering a configuration reload.
β–‘β–‘
β–‘β–‘ This functionality exists for backwards compatibility to pick up changes to
β–‘β–‘ D-Bus configuration without an explicit reload request. Typically when
β–‘β–‘ installing or removing third-party software causes D-Bus configuration files
β–‘β–‘ to be added or removed.
β–‘β–‘
β–‘β–‘ It is worth noting that this may cause partial configuration to be loaded in
β–‘β–‘ case dispatching this notification races with the writing of the configuration
β–‘β–‘ files. However, a future notification will then cause the configuration to be
β–‘β–‘ reloaded again.
Dec 13 06:28:20 managed-node1 dbus-broker-launch[599]: Noticed file-system modification, trigger reload.
β–‘β–‘ Subject: A configuration directory was written to
β–‘β–‘ Defined-By: dbus-broker
β–‘β–‘ Support: https://groups.google.com/forum/#!forum/bus1-devel
β–‘β–‘
β–‘β–‘ A write was detected to one of the directories containing D-Bus configuration
β–‘β–‘ files, triggering a configuration reload.
β–‘β–‘
β–‘β–‘ This functionality exists for backwards compatibility to pick up changes to
β–‘β–‘ D-Bus configuration without an explicit reload request. Typically when
β–‘β–‘ installing or removing third-party software causes D-Bus configuration files
β–‘β–‘ to be added or removed.
β–‘β–‘
β–‘β–‘ It is worth noting that this may cause partial configuration to be loaded in
β–‘β–‘ case dispatching this notification races with the writing of the configuration
β–‘β–‘ files. However, a future notification will then cause the configuration to be
β–‘β–‘ reloaded again.
Dec 13 06:28:20 managed-node1 groupadd[54506]: group added to /etc/group: name=clevis, GID=995
Dec 13 06:28:20 managed-node1 groupadd[54506]: group added to /etc/gshadow: name=clevis
Dec 13 06:28:20 managed-node1 groupadd[54506]: new group: name=clevis, GID=995
Dec 13 06:28:20 managed-node1 useradd[54513]: new user: name=clevis, UID=996, GID=995, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 13 06:28:20 managed-node1 usermod[54523]: add 'clevis' to group 'tss'
Dec 13 06:28:20 managed-node1 usermod[54523]: add 'clevis' to shadow group 'tss'
Dec 13 06:28:21 managed-node1 dbus-broker-launch[599]: Noticed file-system modification, trigger reload.
Dec 13 06:28:21 managed-node1 dbus-broker-launch[599]: Noticed file-system modification, trigger reload. Dec 13 06:28:46 managed-node1 systemd-udevd[519]: /etc/udev/rules.d/90-nfs-readahead.rules:6 Invalid value "/usr/bin/awk -v bdi=\$kernel 'BEGIN{ret=1} {if (\$4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes" for PROGRAM (char 50: invalid substitution type), ignoring. Dec 13 06:28:46 managed-node1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. β–‘β–‘ Subject: A start job for unit run-r7698dcd412b3427d9d3e45d8734796c8.service has finished successfully β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ β–‘β–‘ A start job for unit run-r7698dcd412b3427d9d3e45d8734796c8.service has finished successfully. β–‘β–‘ β–‘β–‘ The job identifier is 1954. Dec 13 06:28:46 managed-node1 systemd[1]: Starting man-db-cache-update.service... β–‘β–‘ Subject: A start job for unit man-db-cache-update.service has begun execution β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ β–‘β–‘ A start job for unit man-db-cache-update.service has begun execution. β–‘β–‘ β–‘β–‘ The job identifier is 2024. Dec 13 06:28:46 managed-node1 systemd[1]: Reloading. Dec 13 06:28:46 managed-node1 systemd-rc-local-generator[55154]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 13 06:28:46 managed-node1 systemd[1]: Queuing reload/restart jobs for marked units… Dec 13 06:28:46 managed-node1 systemd[4342]: Created slice User Background Tasks Slice. β–‘β–‘ Subject: A start job for unit UNIT has finished successfully β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ β–‘β–‘ A start job for unit UNIT has finished successfully. β–‘β–‘ β–‘β–‘ The job identifier is 21.
Dec 13 06:28:46 managed-node1 systemd[4342]: Starting Cleanup of User's Temporary Files and Directories... β–‘β–‘ Subject: A start job for unit UNIT has begun execution β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ β–‘β–‘ A start job for unit UNIT has begun execution. β–‘β–‘ β–‘β–‘ The job identifier is 20. Dec 13 06:28:46 managed-node1 systemd[4342]: Finished Cleanup of User's Temporary Files and Directories. β–‘β–‘ Subject: A start job for unit UNIT has finished successfully β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ β–‘β–‘ A start job for unit UNIT has finished successfully. β–‘β–‘ β–‘β–‘ The job identifier is 20. Dec 13 06:28:47 managed-node1 systemd[1]: man-db-cache-update.service: Deactivated successfully. β–‘β–‘ Subject: Unit succeeded β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ β–‘β–‘ The unit man-db-cache-update.service has successfully entered the 'dead' state. Dec 13 06:28:47 managed-node1 systemd[1]: Finished man-db-cache-update.service. β–‘β–‘ Subject: A start job for unit man-db-cache-update.service has finished successfully β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ β–‘β–‘ A start job for unit man-db-cache-update.service has finished successfully. β–‘β–‘ β–‘β–‘ The job identifier is 2024. Dec 13 06:28:47 managed-node1 systemd[1]: man-db-cache-update.service: Consumed 1.351s CPU time. β–‘β–‘ Subject: Resources consumed by unit runtime β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ β–‘β–‘ The unit man-db-cache-update.service completed and consumed the indicated resources. Dec 13 06:28:47 managed-node1 systemd[1]: run-r7698dcd412b3427d9d3e45d8734796c8.service: Deactivated successfully. β–‘β–‘ Subject: Unit succeeded β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ β–‘β–‘ The unit run-r7698dcd412b3427d9d3e45d8734796c8.service has successfully entered the 'dead' state. 
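The find_unused_disk record logged above at 06:28:12 is what selected sda and sdb for the test. A minimal standalone task reproducing that call, assuming the fedora.linux_system_roles collection is installed (the task name and the unused_disks register variable are illustrative, not taken from the test source):

    - name: Locate unused disks for testing
      fedora.linux_system_roles.find_unused_disk:
        max_return: 10
        min_size: 0
        max_size: 0
        match_sector_size: false
      register: unused_disks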
Dec 13 06:28:47 managed-node1 python3.9[57108]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[{'name': 'rootvg', 'disks': ['sda', 'sdb'], 'grow_to_fill': True, 'volumes': [{'name': 'rootlv', 'size': '2G', 'mount_point': '/hpc-test1', 'state': 'present', 'cache_devices': [], 'raid_disks': [], 'thin': False, 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': None, 'fs_label': None, 'fs_type': None, 'mount_options': None, 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'raid_level': None, 'type': None, 'cached': None, 'cache_mode': None, 'cache_size': None, 'compression': None, 'deduplication': None, 'part_type': None, 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'vdo_pool_size': None}, {'name': 'usrlv', 'size': '1G', 'mount_point': '/hpc-test2', 'state': 'present', 'cache_devices': [], 'raid_disks': [], 'thin': False, 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': None, 'fs_label': None, 'fs_type': None, 'mount_options': None, 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'raid_level': None, 'type': None, 'cached': None, 'cache_mode': None, 'cache_size': None, 'compression': None, 'deduplication': None, 'part_type': None, 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'vdo_pool_size': None}], 'state': 'present', 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'encryption_clevis_pin': None, 'encryption_tang_url': None, 'encryption_tang_thumbprint': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_metadata_version': None, 'raid_chunk_size': None, 'shared': None, 'type': None}] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=True safe_mode=True diskvolume_mkfs_option_map={} Dec 13 06:28:48 managed-node1 python3.9[57271]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False 
bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:28:50 managed-node1 python3.9[57425]: ansible-service_facts Invoked Dec 13 06:28:54 managed-node1 python3.9[57670]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[{'name': 'rootvg', 'disks': ['sda', 'sdb'], 'grow_to_fill': True, 'volumes': [{'name': 'rootlv', 'size': '2G', 'mount_point': '/hpc-test1', 'state': 'present', 'cache_devices': [], 'raid_disks': [], 'thin': False, 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': None, 'fs_label': None, 'fs_type': None, 'mount_options': None, 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'raid_level': None, 'type': None, 'cached': None, 'cache_mode': None, 'cache_size': None, 'compression': None, 'deduplication': None, 'part_type': None, 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'vdo_pool_size': None}, {'name': 'usrlv', 'size': '1G', 'mount_point': '/hpc-test2', 'state': 'present', 'cache_devices': [], 'raid_disks': [], 'thin': False, 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': None, 'fs_label': None, 'fs_type': None, 'mount_options': None, 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'raid_level': None, 'type': None, 'cached': None, 'cache_mode': None, 'cache_size': None, 'compression': None, 'deduplication': None, 'part_type': None, 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'vdo_pool_size': None}], 'state': 'present', 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'encryption_clevis_pin': None, 'encryption_tang_url': None, 'encryption_tang_thumbprint': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_metadata_version': None, 'raid_chunk_size': None, 'shared': None, 'type': None}] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 
'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=True packages_only=False diskvolume_mkfs_option_map={} Dec 13 06:28:54 managed-node1 systemd-udevd[519]: /etc/udev/rules.d/90-nfs-readahead.rules:6 Invalid value "/usr/bin/awk -v bdi=\$kernel 'BEGIN{ret=1} {if (\$4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes" for PROGRAM (char 50: invalid substitution type), ignoring. Dec 13 06:28:54 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:28:54 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:28:54 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:28:54 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:28:54 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:28:54 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:28:54 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:28:55 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:28:55 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:28:55 managed-node1 kernel: WRITE_SAME sectors: 65535 exceeds max_write_same_len: 4096 Dec 13 06:28:55 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:28:57 managed-node1 python3.9[57993]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 13 06:28:57 managed-node1 python3.9[58144]: ansible-lineinfile Invoked with insertbefore=^# firstmatch=True line=# system_role:storage regexp=# system_role:storage path=/etc/fstab state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 13 06:28:58 managed-node1 python3.9[58293]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 13 06:28:58 managed-node1 systemd[1]: Reloading. Dec 13 06:28:58 managed-node1 systemd-rc-local-generator[58310]: /etc/rc.d/rc.local is not marked executable, skipping. 
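The two blivet invocations above (06:28:47 with packages_only=True, then 06:28:54 with packages_only=False) are driven by the storage role. A plausible variable set producing this pool layout, reconstructed from the logged module arguments (variable names follow the linux-system-roles storage conventions; the actual test variables are not shown in this log):

    storage_pools:
      - name: rootvg
        disks: [sda, sdb]
        grow_to_fill: true
        volumes:
          - name: rootlv
            size: 2G
            mount_point: /hpc-test1
          - name: usrlv
            size: 1G
            mount_point: /hpc-test2

The role first runs blivet in packages_only mode to resolve required packages (hence the lvm2 and kpartx dnf record), then runs it again to apply the layout and mark /etc/fstab, which matches the lineinfile record above and the mount records that follow.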
Dec 13 06:28:58 managed-node1 python3.9[58471]: ansible-mount Invoked with src=/dev/mapper/rootvg-rootlv path=/hpc-test1 fstype=xfs opts=defaults state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Dec 13 06:28:58 managed-node1 kernel: XFS (dm-1): Mounting V5 Filesystem 8eee22ce-63a3-4717-99a9-776dbc63601d Dec 13 06:28:58 managed-node1 kernel: XFS (dm-1): Ending clean mount Dec 13 06:28:59 managed-node1 python3.9[58629]: ansible-mount Invoked with src=/dev/mapper/rootvg-usrlv path=/hpc-test2 fstype=xfs opts=defaults state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Dec 13 06:28:59 managed-node1 kernel: XFS (dm-0): Mounting V5 Filesystem 5e2d61f9-1c8b-4063-a9a1-d5f1584bc1fb Dec 13 06:28:59 managed-node1 kernel: XFS (dm-0): Ending clean mount Dec 13 06:28:59 managed-node1 python3.9[58787]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 13 06:28:59 managed-node1 systemd[1]: Reloading. Dec 13 06:28:59 managed-node1 systemd-rc-local-generator[58805]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 13 06:29:00 managed-node1 python3.9[58965]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 13 06:29:00 managed-node1 python3.9[59116]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 13 06:29:01 managed-node1 python3.9[59300]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 13 06:29:02 managed-node1 python3.9[59449]: ansible-rpm_key Invoked with key=https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9 state=present validate_certs=True fingerprint=None Dec 13 06:29:02 managed-node1 python3.9[59603]: ansible-rpm_key Invoked with key=https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/D42D0685.pub state=present validate_certs=True fingerprint=None Dec 13 06:29:03 managed-node1 python3.9[59757]: ansible-rpm_key Invoked with key=https://packages.microsoft.com/keys/microsoft.asc state=present validate_certs=True fingerprint=None Dec 13 06:29:03 managed-node1 python3.9[59911]: ansible-ansible.legacy.dnf Invoked with name=['https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:29:05 managed-node1 python3.9[60065]: ansible-yum_repository Invoked with name=nvidia-cuda description=NVIDIA CUDA repository baseurl=['https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64'] gpgcheck=True reposdir=/etc/yum.repos.d state=present unsafe_writes=False bandwidth=None cost=None deltarpm_metadata_percentage=None deltarpm_percentage=None enabled=None enablegroups=None exclude=None failovermethod=None file=None gpgcakey=None gpgkey=None module_hotfixes=None http_caching=None include=None includepkgs=None ip_resolve=None 
keepalive=None keepcache=None metadata_expire=None metadata_expire_filter=None metalink=None mirrorlist=None mirrorlist_expire=None password=NOT_LOGGING_PARAMETER priority=None protect=None proxy=None proxy_password=NOT_LOGGING_PARAMETER proxy_username=None repo_gpgcheck=None retries=None s3_enabled=None skip_if_unavailable=None sslcacert=None ssl_check_cert_permissions=None sslclientcert=None sslclientkey=None sslverify=None throttle=None timeout=None ui_repoid_vars=None username=None async=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 13 06:29:05 managed-node1 python3.9[60214]: ansible-yum_repository Invoked with name=microsoft-prod description=Microsoft Production repository baseurl=['https://packages.microsoft.com/rhel/9/prod/'] gpgcheck=True reposdir=/etc/yum.repos.d state=present unsafe_writes=False bandwidth=None cost=None deltarpm_metadata_percentage=None deltarpm_percentage=None enabled=None enablegroups=None exclude=None failovermethod=None file=None gpgcakey=None gpgkey=None module_hotfixes=None http_caching=None include=None includepkgs=None ip_resolve=None keepalive=None keepcache=None metadata_expire=None metadata_expire_filter=None metalink=None mirrorlist=None mirrorlist_expire=None password=NOT_LOGGING_PARAMETER priority=None protect=None proxy=None proxy_password=NOT_LOGGING_PARAMETER proxy_username=None repo_gpgcheck=None retries=None s3_enabled=None skip_if_unavailable=None sslcacert=None ssl_check_cert_permissions=None sslclientcert=None sslclientkey=None sslverify=None throttle=None timeout=None ui_repoid_vars=None username=None async=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 13 06:29:06 managed-node1 python3.9[60363]: ansible-ansible.legacy.dnf Invoked with name=['lvm2'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:29:07 managed-node1 python3.9[60517]: ansible-ansible.legacy.command Invoked with _raw_params=lvs --noheadings --units g --nosuffix -o lv_size /dev/mapper/rootvg-rootlv _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 13 06:29:08 managed-node1 python3.9[60667]: ansible-ansible.legacy.command Invoked with _raw_params=lvs --noheadings --units g --nosuffix -o lv_size /dev/mapper/rootvg-usrlv _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 13 06:29:08 managed-node1 python3.9[60817]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'vdo', 'kmod-kvdo', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] 
exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:29:10 managed-node1 python3.9[60971]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[{'name': 'rootvg', 'grow_to_fill': True, 'volumes': [{'name': 'rootlv', 'size': '2G', 'mount_point': '/hpc-test1', 'state': 'present', 'cache_devices': [], 'raid_disks': [], 'thin': False, 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': None, 'fs_label': None, 'fs_type': None, 'mount_options': None, 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'raid_level': None, 'type': None, 'cached': None, 'cache_mode': None, 'cache_size': None, 'compression': None, 'deduplication': None, 'part_type': None, 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'vdo_pool_size': None}, {'name': 'usrlv', 'size': '2G', 'mount_point': '/hpc-test2', 'state': 'present', 'cache_devices': [], 'raid_disks': [], 'thin': False, 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': None, 'fs_label': None, 'fs_type': None, 'mount_options': None, 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'raid_level': None, 'type': None, 'cached': None, 'cache_mode': None, 'cache_size': None, 'compression': None, 'deduplication': None, 'part_type': None, 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'vdo_pool_size': None}], 'disks': [], 'state': 'present', 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'encryption_clevis_pin': None, 'encryption_tang_url': None, 'encryption_tang_thumbprint': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_metadata_version': None, 'raid_chunk_size': None, 'shared': None, 'type': None}] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 
'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=True safe_mode=True diskvolume_mkfs_option_map={} Dec 13 06:29:10 managed-node1 systemd-udevd[519]: /etc/udev/rules.d/90-nfs-readahead.rules:6 Invalid value "/usr/bin/awk -v bdi=\$kernel 'BEGIN{ret=1} {if (\$4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes" for PROGRAM (char 50: invalid substitution type), ignoring. Dec 13 06:29:11 managed-node1 python3.9[61155]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:29:12 managed-node1 python3.9[61309]: ansible-service_facts Invoked Dec 13 06:29:14 managed-node1 python3.9[61554]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[{'name': 'rootvg', 'grow_to_fill': True, 'volumes': [{'name': 'rootlv', 'size': '2G', 'mount_point': '/hpc-test1', 'state': 'present', 'cache_devices': [], 'raid_disks': [], 'thin': False, 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': None, 'fs_label': None, 'fs_type': None, 'mount_options': None, 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'raid_level': None, 'type': None, 'cached': None, 'cache_mode': None, 'cache_size': None, 'compression': None, 'deduplication': None, 'part_type': None, 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'vdo_pool_size': None}, {'name': 'usrlv', 'size': '2G', 'mount_point': '/hpc-test2', 'state': 'present', 'cache_devices': [], 'raid_disks': [], 'thin': False, 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': None, 'fs_label': None, 'fs_type': None, 'mount_options': None, 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'raid_level': None, 'type': None, 'cached': None, 'cache_mode': None, 'cache_size': None, 'compression': None, 'deduplication': None, 'part_type': None, 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'vdo_pool_size': None}], 'disks': [], 'state': 'present', 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'encryption_clevis_pin': None, 'encryption_tang_url': None, 'encryption_tang_thumbprint': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_metadata_version': None, 'raid_chunk_size': None, 'shared': None, 'type': None}] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': 
None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=True packages_only=False diskvolume_mkfs_option_map={} Dec 13 06:29:15 managed-node1 systemd-udevd[519]: /etc/udev/rules.d/90-nfs-readahead.rules:6 Invalid value "/usr/bin/awk -v bdi=\$kernel 'BEGIN{ret=1} {if (\$4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes" for PROGRAM (char 50: invalid substitution type), ignoring. Dec 13 06:29:15 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:29:16 managed-node1 python3.9[61767]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 13 06:29:16 managed-node1 python3.9[61918]: ansible-lineinfile Invoked with insertbefore=^# firstmatch=True line=# system_role:storage regexp=# system_role:storage path=/etc/fstab state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 13 06:29:17 managed-node1 python3.9[62067]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 13 06:29:17 managed-node1 systemd[1]: Reloading. Dec 13 06:29:17 managed-node1 systemd-rc-local-generator[62083]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 13 06:29:17 managed-node1 python3.9[62245]: ansible-mount Invoked with src=/dev/mapper/rootvg-rootlv path=/hpc-test1 fstype=xfs opts=defaults state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Dec 13 06:29:18 managed-node1 python3.9[62394]: ansible-mount Invoked with src=/dev/mapper/rootvg-usrlv path=/hpc-test2 fstype=xfs opts=defaults state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Dec 13 06:29:18 managed-node1 python3.9[62543]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 13 06:29:18 managed-node1 systemd[1]: Reloading. Dec 13 06:29:18 managed-node1 systemd-rc-local-generator[62560]: /etc/rc.d/rc.local is not marked executable, skipping. 
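The second pass, logged at 06:29:10 and 06:29:14, repeats the same pool definition with usrlv grown from 1G to 2G and an empty disks list, since rootvg already exists; the lvs commands at 06:29:07-06:29:08 read back the current LV sizes beforehand. Expressed as role variables, the resize step would plausibly look like this (again reconstructed from the logged arguments):

    storage_pools:
      - name: rootvg
        disks: []
        grow_to_fill: true
        volumes:
          - name: rootlv
            size: 2G
            mount_point: /hpc-test1
          - name: usrlv
            size: 2G          # grown from 1G; blivet resizes the LV and its filesystem
            mount_point: /hpc-test2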
Dec 13 06:29:19 managed-node1 python3.9[62721]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 13 06:29:19 managed-node1 python3.9[62872]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 13 06:29:20 managed-node1 python3.9[63056]: ansible-ansible.legacy.dnf Invoked with name=['kernel'] state=latest allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:29:22 managed-node1 python3.9[63359]: ansible-ansible.legacy.dnf Invoked with name=['kernel-devel-5.14.0-642.el9', 'kernel-headers-5.14.0-642.el9'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:29:24 managed-node1 python3.9[63513]: ansible-ansible.legacy.dnf Invoked with name=['kernel-devel-5.14.0-648.el9', 'kernel-headers-5.14.0-648.el9'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:29:25 managed-node1 python3.9[63667]: ansible-ansible.legacy.dnf Invoked with name=['dnf-command(versionlock)'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:29:26 managed-node1 python3.9[63821]: ansible-stat Invoked with path=/etc/dnf/plugins/versionlock.list follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 13 06:29:27 managed-node1 python3.9[63972]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 13 06:29:28 managed-node1 python3.9[64126]: ansible-ansible.legacy.command Invoked with 
_raw_params=dnf versionlock add kernel-core _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 13 06:29:29 managed-node1 python3.9[64280]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel-modules _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 13 06:29:31 managed-node1 python3.9[64434]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel-modules-extra _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 13 06:29:32 managed-node1 python3.9[64588]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel-devel _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 13 06:29:33 managed-node1 python3.9[64742]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add kernel-headers _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 13 06:29:35 managed-node1 python3.9[64896]: ansible-ansible.legacy.dnf Invoked with name=['libnccl-2.27.5-1+cuda12.9', 'libnccl-devel-2.27.5-1+cuda12.9'] state=present allow_downgrade=True allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:30:01 managed-node1 python3.9[65060]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add libnccl-2.27.5-1+cuda12.9 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 13 06:30:03 managed-node1 python3.9[65214]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock add libnccl-devel-2.27.5-1+cuda12.9 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 13 06:30:04 managed-node1 python3.9[65368]: ansible-ansible.legacy.dnf Invoked with name=['nvidia-fabric-manager'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:30:10 managed-node1 python3.9[65522]: ansible-ansible.legacy.dnf Invoked with name=['nvidia-fabric-manager'] state=present allow_downgrade=False allowerasing=False 
autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:30:17 managed-node1 python3.9[65676]: ansible-ansible.legacy.dnf Invoked with name=['nvidia-fabric-manager'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:30:23 managed-node1 python3.9[65830]: ansible-ansible.legacy.dnf Invoked with name=['nvidia-fabric-manager'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:30:25 managed-node1 python3.9[65984]: ansible-ansible.legacy.dnf Invoked with name=['python3-blivet', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'vdo', 'kmod-kvdo', 'xfsprogs', 'stratisd', 'stratis-cli', 'libblockdev'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:30:26 managed-node1 python3.9[66138]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[{'name': 'rootvg', 'disks': ['sda', 'sdb'], 'grow_to_fill': True, 'state': 'absent', 'volumes': [{'name': 'rootlv', 'size': '2G', 'mount_point': '/hpc-test1', 'state': 'present', 'cache_devices': [], 'raid_disks': [], 'thin': False, 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': None, 'fs_label': None, 'fs_type': None, 'mount_options': None, 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'raid_level': None, 'type': None, 'cached': None, 'cache_mode': None, 'cache_size': None, 'compression': None, 'deduplication': None, 'part_type': None, 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'vdo_pool_size': None}, {'name': 'usrlv', 'size': '1G', 'mount_point': '/hpc-test2', 'state': 'present', 'cache_devices': 
[], 'raid_disks': [], 'thin': False, 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': None, 'fs_label': None, 'fs_type': None, 'mount_options': None, 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'raid_level': None, 'type': None, 'cached': None, 'cache_mode': None, 'cache_size': None, 'compression': None, 'deduplication': None, 'part_type': None, 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'vdo_pool_size': None}], 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'encryption_clevis_pin': None, 'encryption_tang_url': None, 'encryption_tang_thumbprint': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_metadata_version': None, 'raid_chunk_size': None, 'shared': None, 'type': None}] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} packages_only=True uses_kmod_kvdo=True safe_mode=True diskvolume_mkfs_option_map={} Dec 13 06:30:27 managed-node1 systemd-udevd[519]: /etc/udev/rules.d/90-nfs-readahead.rules:6 Invalid value "/usr/bin/awk -v bdi=\$kernel 'BEGIN{ret=1} {if (\$4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes" for PROGRAM (char 50: invalid substitution type), ignoring. 
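The teardown invocations at 06:30:26 and 06:30:31 run blivet with the pool marked absent while the volume list is left in place, which removes the logical volumes, the volume group, and the corresponding mounts. As role variables, once more reconstructed from the logged arguments:

    storage_pools:
      - name: rootvg
        disks: [sda, sdb]
        state: absent
        volumes:
          - name: rootlv
            size: 2G
            mount_point: /hpc-test1
          - name: usrlv
            size: 1G
            mount_point: /hpc-test2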
Dec 13 06:30:28 managed-node1 python3.9[66322]: ansible-ansible.legacy.dnf Invoked with name=['kpartx'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 13 06:30:29 managed-node1 python3.9[66476]: ansible-service_facts Invoked Dec 13 06:30:31 managed-node1 python3.9[66721]: ansible-fedora.linux_system_roles.blivet Invoked with pools=[{'name': 'rootvg', 'disks': ['sda', 'sdb'], 'grow_to_fill': True, 'state': 'absent', 'volumes': [{'name': 'rootlv', 'size': '2G', 'mount_point': '/hpc-test1', 'state': 'present', 'cache_devices': [], 'raid_disks': [], 'thin': False, 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': None, 'fs_label': None, 'fs_type': None, 'mount_options': None, 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'raid_level': None, 'type': None, 'cached': None, 'cache_mode': None, 'cache_size': None, 'compression': None, 'deduplication': None, 'part_type': None, 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'vdo_pool_size': None}, {'name': 'usrlv', 'size': '1G', 'mount_point': '/hpc-test2', 'state': 'present', 'cache_devices': [], 'raid_disks': [], 'thin': False, 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'fs_create_options': None, 'fs_label': None, 'fs_type': None, 'mount_options': None, 'mount_user': None, 'mount_group': None, 'mount_mode': None, 'raid_level': None, 'type': None, 'cached': None, 'cache_mode': None, 'cache_size': None, 'compression': None, 'deduplication': None, 'part_type': None, 'raid_stripe_size': None, 'thin_pool_name': None, 'thin_pool_size': None, 'vdo_pool_size': None}], 'encryption': None, 'encryption_cipher': None, 'encryption_key': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'encryption_password': None, 'encryption_clevis_pin': None, 'encryption_tang_url': None, 'encryption_tang_thumbprint': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_metadata_version': None, 'raid_chunk_size': None, 'shared': None, 'type': None}] volumes=[] use_partitions=None disklabel_type=None pool_defaults={'state': 'present', 'type': 'lvm', 'disks': [], 'volumes': [], 'grow_to_fill': False, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'shared': False} volume_defaults={'state': 'present', 'type': 'lvm', 'size': 0, 'disks': [], 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 
'raid_chunk_size': None, 'raid_stripe_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_password': None, 'encryption_key': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'compression': None, 'deduplication': None, 'vdo_pool_size': None, 'thin': None, 'thin_pool_name': None, 'thin_pool_size': None, 'cached': False, 'cache_size': 0, 'cache_mode': None, 'cache_devices': []} safe_mode=True uses_kmod_kvdo=True packages_only=False diskvolume_mkfs_option_map={} Dec 13 06:30:32 managed-node1 systemd-udevd[519]: /etc/udev/rules.d/90-nfs-readahead.rules:6 Invalid value "/usr/bin/awk -v bdi=\$kernel 'BEGIN{ret=1} {if (\$4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes" for PROGRAM (char 50: invalid substitution type), ignoring. Dec 13 06:30:32 managed-node1 systemd[1]: hpc\x2dtest1.mount: Deactivated successfully. β–‘β–‘ Subject: Unit succeeded β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ β–‘β–‘ The unit hpc\x2dtest1.mount has successfully entered the 'dead' state. Dec 13 06:30:32 managed-node1 kernel: XFS (dm-1): Unmounting Filesystem 8eee22ce-63a3-4717-99a9-776dbc63601d Dec 13 06:30:32 managed-node1 kernel: XFS (dm-0): Unmounting Filesystem 5e2d61f9-1c8b-4063-a9a1-d5f1584bc1fb Dec 13 06:30:32 managed-node1 systemd[1]: hpc\x2dtest2.mount: Deactivated successfully. β–‘β–‘ Subject: Unit succeeded β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ β–‘β–‘ The unit hpc\x2dtest2.mount has successfully entered the 'dead' state. Dec 13 06:30:32 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:30:33 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:30:33 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:30:33 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:30:33 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:30:33 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:30:33 managed-node1 kernel: MODE SENSE: unimplemented page/subpage: 0x0a/0x05 Dec 13 06:30:34 managed-node1 python3.9[67001]: ansible-stat Invoked with path=/etc/fstab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 13 06:30:34 managed-node1 python3.9[67152]: ansible-lineinfile Invoked with insertbefore=^# firstmatch=True line=# system_role:storage regexp=# system_role:storage path=/etc/fstab state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 13 06:30:34 managed-node1 python3.9[67301]: ansible-mount Invoked with src=/dev/mapper/rootvg-usrlv path=/hpc-test2 fstype=xfs state=absent boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None opts=None Dec 13 06:30:35 managed-node1 python3.9[67450]: ansible-mount Invoked with src=/dev/mapper/rootvg-rootlv path=/hpc-test1 fstype=xfs state=absent boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None opts=None Dec 13 06:30:35 managed-node1 python3.9[67599]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 13 06:30:35 managed-node1 systemd[1]: Reloading. 
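As part of the teardown, the role also unmounts the filesystems and strips their fstab entries; the ansible-mount records at 06:30:34-06:30:35 correspond to tasks along these lines (presumably the ansible.posix.mount module, for which state: absent both unmounts the path and removes the /etc/fstab entry):

    - name: Remove the test mounts
      ansible.posix.mount:
        src: "/dev/mapper/rootvg-{{ item.lv }}"
        path: "{{ item.path }}"
        fstype: xfs
        state: absent
      loop:
        - { lv: usrlv, path: /hpc-test2 }
        - { lv: rootlv, path: /hpc-test1 }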
Dec 13 06:30:35 managed-node1 systemd-rc-local-generator[67616]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 13 06:30:36 managed-node1 python3.9[67777]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 13 06:30:36 managed-node1 systemd[1]: Reloading. Dec 13 06:30:36 managed-node1 systemd-rc-local-generator[67794]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 13 06:30:37 managed-node1 python3.9[67955]: ansible-stat Invoked with path=/etc/crypttab follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 13 06:30:37 managed-node1 python3.9[68106]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 13 06:30:38 managed-node1 python3.9[68286]: ansible-stat Invoked with path=/etc/dnf/plugins/versionlock.list follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 13 06:30:38 managed-node1 python3.9[68437]: ansible-ansible.legacy.command Invoked with _raw_params=dnf versionlock clear _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 13 06:30:40 managed-node1 sshd-session[68467]: Accepted publickey for root from 10.31.14.156 port 60292 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Dec 13 06:30:40 managed-node1 systemd-logind[608]: New session 16 of user root. β–‘β–‘ Subject: A new session 16 has been created for user root β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ Documentation: sd-login(3) β–‘β–‘ β–‘β–‘ A new session with the ID 16 has been created for the user root. β–‘β–‘ β–‘β–‘ The leading process of the session is 68467. Dec 13 06:30:40 managed-node1 systemd[1]: Started Session 16 of User root. β–‘β–‘ Subject: A start job for unit session-16.scope has finished successfully β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ β–‘β–‘ A start job for unit session-16.scope has finished successfully. β–‘β–‘ β–‘β–‘ The job identifier is 2410. Dec 13 06:30:40 managed-node1 sshd-session[68467]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0) Dec 13 06:30:40 managed-node1 sshd-session[68470]: Received disconnect from 10.31.14.156 port 60292:11: disconnected by user Dec 13 06:30:40 managed-node1 sshd-session[68470]: Disconnected from user root 10.31.14.156 port 60292 Dec 13 06:30:40 managed-node1 sshd-session[68467]: pam_unix(sshd:session): session closed for user root Dec 13 06:30:40 managed-node1 systemd-logind[608]: Session 16 logged out. Waiting for processes to exit. Dec 13 06:30:40 managed-node1 systemd[1]: session-16.scope: Deactivated successfully. β–‘β–‘ Subject: Unit succeeded β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ β–‘β–‘ The unit session-16.scope has successfully entered the 'dead' state. Dec 13 06:30:40 managed-node1 systemd-logind[608]: Removed session 16. β–‘β–‘ Subject: Session 16 has been terminated β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ Documentation: sd-login(3) β–‘β–‘ β–‘β–‘ A session with the ID 16 has been terminated. 
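The kernel pinning seen earlier (06:29:25 onward) and the cleanup at 06:30:38 bracket the NVIDIA package installs: the dnf versionlock plugin is installed, each kernel package is locked at its current version, and all locks are cleared when the test finishes. A condensed equivalent of the logged commands (task names are illustrative):

    - name: Ensure the versionlock plugin is available
      ansible.builtin.dnf:
        name: dnf-command(versionlock)
        state: present

    - name: Lock the kernel packages at their current version
      ansible.builtin.command: dnf versionlock add {{ item }}
      loop:
        - kernel
        - kernel-core
        - kernel-modules
        - kernel-modules-extra
        - kernel-devel
        - kernel-headers

    - name: Clear all version locks during cleanup
      ansible.builtin.command: dnf versionlock clear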
Dec 13 06:30:40 managed-node1 sshd-session[68495]: Accepted publickey for root from 10.31.14.156 port 60296 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Dec 13 06:30:40 managed-node1 systemd-logind[608]: New session 17 of user root. β–‘β–‘ Subject: A new session 17 has been created for user root β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ Documentation: sd-login(3) β–‘β–‘ β–‘β–‘ A new session with the ID 17 has been created for the user root. β–‘β–‘ β–‘β–‘ The leading process of the session is 68495. Dec 13 06:30:40 managed-node1 systemd[1]: Started Session 17 of User root. β–‘β–‘ Subject: A start job for unit session-17.scope has finished successfully β–‘β–‘ Defined-By: systemd β–‘β–‘ Support: https://access.redhat.com/support β–‘β–‘ β–‘β–‘ A start job for unit session-17.scope has finished successfully. β–‘β–‘ β–‘β–‘ The job identifier is 2484. Dec 13 06:30:40 managed-node1 sshd-session[68495]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)