[WARNING]: Skipping plugin (/tmp/collections-Z5w/ansible_collections/ansible/posix/plugins/callback/cgroup_perf_recap.py), cannot load: cannot import name 'json' from 'ansible.parsing.ajson' (/usr/local/lib/python3.13/site-packages/ansible/parsing/ajson.py)
[WARNING]: Deprecation warnings can be disabled by setting `deprecation_warnings=False` in ansible.cfg.
[DEPRECATION WARNING]: The 'ansible.posix.profile_tasks' callback plugin implements the following deprecated method(s): playbook_on_stats. This feature will be removed from the callback plugin API in ansible-core version 2.23. Implement the `v2_*` equivalent callback method(s) instead.
ansible-playbook [core 2.19.0b7]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.13/site-packages/ansible
  ansible collection location = /tmp/collections-Z5w
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.13.3 (main, Apr 22 2025, 00:00:00) [GCC 15.0.1 20250418 (Red Hat 15.0.1-0)] (/usr/bin/python3.13)
  jinja version = 3.1.6
  pyyaml version = 6.0.2 (with libyaml v0.2.5)
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
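The second warning above names the switch for suppressing these notices. A minimal sketch of the corresponding ansible.cfg entry (this run used "No config file found; using defaults", so adding a config file would also change that line; the `[defaults]` section is where ansible-core reads this key):

```ini
# ansible.cfg -- silences the [DEPRECATION WARNING] lines shown above
[defaults]
deprecation_warnings = False
```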
PLAYBOOK: tests_basic.yml ******************************************************
1 plays in /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml

PLAY [Basic snapshot test] *****************************************************

TASK [Gathering Facts] *********************************************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:2
Saturday 28 June 2025 18:18:42 -0400 (0:00:00.046) 0:00:00.046 *********
[WARNING]: Host 'managed-node1' is using the discovered Python interpreter at '/usr/bin/python3.13', but future installation of another Python interpreter could cause a different interpreter to be discovered. See https://docs.ansible.com/ansible-core/devel/reference_appendices/interpreter_discovery.html for more information.
ok: [managed-node1]

TASK [Setup] *******************************************************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:38
Saturday 28 June 2025 18:18:43 -0400 (0:00:01.447) 0:00:01.494 *********
included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tasks/setup.yml for managed-node1

TASK [Check if system is ostree] ***********************************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tasks/setup.yml:10
Saturday 28 June 2025 18:18:43 -0400 (0:00:00.034) 0:00:01.528 *********
ok: [managed-node1] => { "changed": false, "stat": { "exists": false } }

TASK [Set mount parent] ********************************************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tasks/setup.yml:15
Saturday 28 June 2025 18:18:44 -0400 (0:00:00.699) 0:00:02.228 *********
ok: [managed-node1] => { "ansible_facts": { "test_mnt_parent": "/mnt" }, "changed": false }

TASK [Run the storage role
install base packages] ******************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tasks/setup.yml:19
Saturday 28 June 2025 18:18:44 -0400 (0:00:00.048) 0:00:02.277 *********
included: fedora.linux_system_roles.storage for managed-node1

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2
Saturday 28 June 2025 18:18:44 -0400 (0:00:00.039) 0:00:02.316 *********
included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed-node1

TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
Saturday 28 June 2025 18:18:44 -0400 (0:00:00.022) 0:00:02.339 *********
skipping: [managed-node1] => { "changed": false, "false_condition": "__storage_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
Saturday 28 June 2025 18:18:44 -0400 (0:00:00.033) 0:00:02.372 *********
skipping: [managed-node1] => (item=RedHat.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "RedHat.yml", "skip_reason": "Conditional result was False" }
ok: [managed-node1] => (item=Fedora.yml) => { "ansible_facts": { "_storage_copr_support_packages": [ "dnf-plugins-core" ], "blivet_package_list": [ "python3-blivet", "libblockdev-crypto", "libblockdev-dm", "libblockdev-fs", "libblockdev-lvm", "libblockdev-mdraid", "libblockdev-swap", "stratisd", "stratis-cli", "{{ 'libblockdev-s390' if
ansible_architecture == 's390x' else 'libblockdev' }}", "vdo" ] }, "ansible_included_var_files": [ "/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/vars/Fedora.yml" ], "ansible_loop_var": "item", "changed": false, "item": "Fedora.yml" }
skipping: [managed-node1] => (item=Fedora_42.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "Fedora_42.yml", "skip_reason": "Conditional result was False" }
skipping: [managed-node1] => (item=Fedora_42.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "Fedora_42.yml", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.storage : Check if system is ostree] ***********
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
Saturday 28 June 2025 18:18:44 -0400 (0:00:00.048) 0:00:02.421 *********
ok: [managed-node1] => { "changed": false, "stat": { "exists": false } }

TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30
Saturday 28 June 2025 18:18:45 -0400 (0:00:00.474) 0:00:02.895 *********
ok: [managed-node1] => { "ansible_facts": { "__storage_is_ostree": false }, "changed": false }

TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:5
Saturday 28 June 2025 18:18:45 -0400 (0:00:00.026) 0:00:02.922 *********
ok: [managed-node1] => { "ansible_facts": { "_storage_pools_list": [] }, "changed": false }

TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:9
Saturday 28 June 2025 18:18:45 -0400 (0:00:00.017) 0:00:02.939 *********
ok: [managed-node1] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false }

TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
Saturday 28 June 2025 18:18:45 -0400 (0:00:00.017) 0:00:02.957 *********
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed-node1

TASK [fedora.linux_system_roles.storage : Make sure blivet is available] *******
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
Saturday 28 June 2025 18:18:45 -0400 (0:00:00.072) 0:00:03.030 *********
changed: [managed-node1] => { "changed": true, "rc": 0, "results": [ "Installed: stratisd-3.8.0-2.fc42.x86_64", "Installed: stratis-cli-3.8.0-1.fc42.noarch", "Installed: vdo-8.3.0.73-1.fc42.x86_64", "Installed: python3-dbus-client-gen-0.5.1-9.fc42.noarch", "Installed: python3-dbus-python-client-gen-0.8.3-8.fc42.noarch", "Installed: python3-justbytes-0.15.2-8.fc42.noarch", "Installed: python3-packaging-24.2-3.fc42.noarch", "Installed: python3-psutil-6.1.1-2.fc42.x86_64", "Installed: python3-into-dbus-python-0.8.2-8.fc42.noarch", "Installed: python3-justbases-0.15.2-10.fc42.noarch", "Installed: python3-dbus-signature-pyparsing-0.4.1-10.fc42.noarch", "Installed: python3-blivet-1:3.12.1-3.fc42.noarch", "Installed: blivet-data-1:3.12.1-3.fc42.noarch", "Installed: lsof-4.98.0-7.fc42.x86_64", "Installed: python3-bytesize-2.11-100.fc42.x86_64", "Installed: python3-libmount-2.40.4-7.fc42.x86_64", "Installed: python3-pyparted-1:3.13.0-8.fc42.x86_64", "Installed: libblockdev-dm-3.3.1-2.fc42.x86_64", "Installed: 
python3-blockdev-3.3.1-2.fc42.x86_64", "Installed: libblockdev-lvm-3.3.1-2.fc42.x86_64", "Installed: clevis-luks-21-10.fc42.x86_64", "Installed: cryptsetup-2.7.5-2.fc42.x86_64", "Installed: clevis-21-10.fc42.x86_64", "Installed: luksmeta-9-24.fc42.x86_64", "Installed: clevis-pin-tpm2-0.5.3-9.fc42.x86_64", "Installed: jose-14-4.fc42.x86_64", "Installed: libjose-14-4.fc42.x86_64", "Installed: tpm2-tools-5.7-3.fc42.x86_64", "Installed: libluksmeta-9-24.fc42.x86_64", "Installed: tpm2-tss-fapi-4.1.3-6.fc42.x86_64", "Installed: libblockdev-btrfs-3.3.1-2.fc42.x86_64", "Installed: libblockdev-mpath-3.3.1-2.fc42.x86_64", "Installed: device-mapper-multipath-0.10.0-5.fc42.x86_64", "Installed: device-mapper-multipath-libs-0.10.0-5.fc42.x86_64", "Installed: libblockdev-utils-3.3.1-2.fc42.x86_64", "Installed: libblockdev-swap-3.3.1-2.fc42.x86_64", "Installed: libblockdev-smart-3.3.1-2.fc42.x86_64", "Installed: libblockdev-part-3.3.1-2.fc42.x86_64", "Installed: libblockdev-nvme-3.3.1-2.fc42.x86_64", "Installed: libblockdev-mdraid-3.3.1-2.fc42.x86_64", "Installed: libblockdev-loop-3.3.1-2.fc42.x86_64", "Installed: libblockdev-fs-3.3.1-2.fc42.x86_64", "Installed: libblockdev-crypto-3.3.1-2.fc42.x86_64", "Installed: libblockdev-3.3.1-2.fc42.x86_64", "Removed: libblockdev-3.3.1-1.fc42.x86_64", "Removed: libblockdev-crypto-3.3.1-1.fc42.x86_64", "Removed: libblockdev-fs-3.3.1-1.fc42.x86_64", "Removed: libblockdev-loop-3.3.1-1.fc42.x86_64", "Removed: libblockdev-mdraid-3.3.1-1.fc42.x86_64", "Removed: libblockdev-nvme-3.3.1-1.fc42.x86_64", "Removed: libblockdev-part-3.3.1-1.fc42.x86_64", "Removed: libblockdev-smart-3.3.1-1.fc42.x86_64", "Removed: libblockdev-swap-3.3.1-1.fc42.x86_64", "Removed: libblockdev-utils-3.3.1-1.fc42.x86_64" ] }
lsrpackages: libblockdev libblockdev-crypto libblockdev-dm libblockdev-fs libblockdev-lvm libblockdev-mdraid libblockdev-swap python3-blivet stratis-cli stratisd vdo

TASK [fedora.linux_system_roles.storage : Show storage_pools] ******************
task
path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:9
Saturday 28 June 2025 18:18:50 -0400 (0:00:04.868) 0:00:07.898 *********
ok: [managed-node1] => { "storage_pools | d([])": [] }

TASK [fedora.linux_system_roles.storage : Show storage_volumes] ****************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14
Saturday 28 June 2025 18:18:50 -0400 (0:00:00.034) 0:00:07.932 *********
ok: [managed-node1] => { "storage_volumes | d([])": [] }

TASK [fedora.linux_system_roles.storage : Get required packages] ***************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19
Saturday 28 June 2025 18:18:50 -0400 (0:00:00.036) 0:00:07.969 *********
[WARNING]: Module invocation had junk after the JSON data: :0: DeprecationWarning: builtin type swigvarlink has no __module__ attribute
ok: [managed-node1] => { "actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] }

TASK [fedora.linux_system_roles.storage : Enable copr repositories if needed] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32
Saturday 28 June 2025 18:18:51 -0400 (0:00:01.276) 0:00:09.245 *********
included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml for managed-node1

TASK [fedora.linux_system_roles.storage : Check if the COPR support packages should be installed] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:2
Saturday 28 June 2025 18:18:51 -0400 (0:00:00.030) 0:00:09.276 *********
skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" }

TASK [fedora.linux_system_roles.storage : Make sure COPR support packages are present] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:13
Saturday 28 June 2025 18:18:51 -0400 (0:00:00.032) 0:00:09.309 *********
skipping: [managed-node1] => { "changed": false, "false_condition": "install_copr | d(false) | bool", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.storage : Enable COPRs] ************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:19
Saturday 28 June 2025 18:18:51 -0400 (0:00:00.033) 0:00:09.343 *********
skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" }

TASK [fedora.linux_system_roles.storage : Make sure required packages are installed] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38
Saturday 28 June 2025 18:18:51 -0400 (0:00:00.032) 0:00:09.375 *********
ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] }
MSG: Nothing to do
lsrpackages: kpartx

TASK [fedora.linux_system_roles.storage : Get service facts] *******************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
Saturday 28 June 2025 18:18:53 -0400 (0:00:01.389) 0:00:10.764 *********
ok: [managed-node1] => { "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": 
"auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blivet.service": { "name": "blivet.service", "source": "systemd", "state": "inactive", "status": "static" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "bluetooth.service": { "name": "bluetooth.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": 
"systemd", "state": "unknown", "status": "static" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.bluez.service": { "name": "dbus-org.bluez.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.portable1.service": { "name": "dbus-org.freedesktop.portable1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.resolve1.service": { "name": "dbus-org.freedesktop.resolve1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": 
"dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dnf5-makecache.service": { "name": "dnf5-makecache.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dnf5-offline-transaction-cleanup.service": { "name": "dnf5-offline-transaction-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf5-offline-transaction.service": { "name": "dnf5-offline-transaction.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { 
"name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "fcoe.service": { "name": "fcoe.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "fips-crypto-policy-overlay.service": { "name": "fips-crypto-policy-overlay.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "fwupd-refresh.service": { "name": "fwupd-refresh.service", "source": "systemd", "state": "inactive", "status": "static" }, "fwupd.service": { "name": "fwupd.service", "source": "systemd", "state": "inactive", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", 
"source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "iscsi-shutdown.service": { "name": "iscsi-shutdown.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsi.service": { "name": "iscsi.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsid.service": { "name": "iscsid.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm-devices-import.service": { "name": "lvm-devices-import.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "lvm2-activation-early.service": { "name": "lvm2-activation-early.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "mdadm-grow-continue@.service": { "name": "mdadm-grow-continue@.service", "source": "systemd", "state": "unknown", "status": "static" 
}, "mdadm-last-resort@.service": { "name": "mdadm-last-resort@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdcheck_continue.service": { "name": "mdcheck_continue.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdcheck_start.service": { "name": "mdcheck_start.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmon@.service": { "name": "mdmon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdmonitor-oneshot.service": { "name": "mdmonitor-oneshot.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmonitor.service": { "name": "mdmonitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_multipath.service": { "name": "modprobe@dm_multipath.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "multipathd.service": { "name": "multipathd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, 
"nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "passim.service": { "name": "passim.service", "source": "systemd", "state": "inactive", "status": "static" }, "pcscd.service": { "name": "pcscd.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "plymouth-halt.service": { "name": "plymouth-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-kexec.service": { "name": "plymouth-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-poweroff.service": { "name": "plymouth-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": 
"stopped", "status": "static" }, "plymouth-quit.service": { "name": "plymouth-quit.service", "source": "systemd", "state": "stopped", "status": "static" }, "plymouth-read-write.service": { "name": "plymouth-read-write.service", "source": "systemd", "state": "stopped", "status": "static" }, "plymouth-reboot.service": { "name": "plymouth-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "static" }, "plymouth-switch-root-initramfs.service": { "name": "plymouth-switch-root-initramfs.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-switch-root.service": { "name": "plymouth-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "raid-check.service": { "name": "raid-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "rbdmap.service": { "name": "rbdmap.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": 
{ "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", 
"state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-unix-local@.service": { "name": "sshd-unix-local@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd-vsock@.service": { "name": "sshd-vsock@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "stratis-fstab-setup@.service": { "name": "stratis-fstab-setup@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratisd-min-postinitrd.service": { "name": "stratisd-min-postinitrd.service", "source": "systemd", "state": "inactive", "status": "static" }, "stratisd.service": { "name": "stratisd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", 
"source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-plymouth.service": { "name": "systemd-ask-password-plymouth.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-bsod.service": { "name": "systemd-bsod.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": 
"enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-homed-activate.service": { "name": "systemd-homed-activate.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-homed-firstboot.service": { "name": "systemd-homed-firstboot.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-homed.service": { "name": "systemd-homed.service", "source": "systemd", "state": "inactive", "status": 
"disabled" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": 
"systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-persistent-storage.service": { "name": "systemd-networkd-persistent-storage.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-networkd-wait-online@.service": { "name": "systemd-networkd-wait-online@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "systemd-networkd.service": { "name": "systemd-networkd.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, 
"systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-portabled.service": { "name": "systemd-portabled.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": 
"systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-resolved.service": { "name": "systemd-resolved.service", "source": "systemd", "state": "running", "status": "enabled" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-storagetm.service": { "name": "systemd-storagetm.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-time-wait-sync.service": { "name": "systemd-time-wait-sync.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-timedated.service": { 
"name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": 
"systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-userdbd.service": { "name": "systemd-userdbd.service", "source": "systemd", "state": "running", "status": "indirect" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-zram-setup@.service": { "name": "systemd-zram-setup@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-zram-setup@zram0.service": { "name": "systemd-zram-setup@zram0.service", "source": "systemd", "state": "stopped", "status": "active" }, "target.service": { "name": "target.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "targetclid.service": { "name": "targetclid.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "udisks2.service": { "name": "udisks2.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "unbound-anchor.service": { "name": "unbound-anchor.service", "source": "systemd", "state": "stopped", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" } } }, "changed": false } TASK [fedora.linux_system_roles.storage : Set storage_cryptsetup_services] ***** task path: 
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58 Saturday 28 June 2025 18:18:56 -0400 (0:00:03.149) 0:00:13.913 ********* ok: [managed-node1] => { "ansible_facts": { "storage_cryptsetup_services": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Mask the systemd cryptsetup services] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:64 Saturday 28 June 2025 18:18:56 -0400 (0:00:00.269) 0:00:14.183 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70 Saturday 28 June 2025 18:18:56 -0400 (0:00:00.024) 0:00:14.207 ********* ok: [managed-node1] => { "actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } TASK [fedora.linux_system_roles.storage : Workaround for udev issue on some platforms] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:85 Saturday 28 June 2025 18:18:57 -0400 (0:00:00.734) 0:00:14.942 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "blivet_output is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Check if /etc/fstab is present] ****** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92 Saturday 28 June 2025 18:18:57 -0400 (0:00:00.063) 0:00:15.005 ********* ok: [managed-node1] => { "changed": false, "stat": { "atime": 1751148856.6415944, "attr_flags": "e", "attributes": [ "extents" ], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": 
"c6a75908a9d4976e1b01922f732e09b9af082981", "ctime": 1750750281.4928355, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 14, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1750750281.4928355, "nlink": 1, "path": "/etc/fstab", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 1344, "uid": 0, "version": "211217384", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Add fingerprint to /etc/fstab if present] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:97 Saturday 28 June 2025 18:18:57 -0400 (0:00:00.525) 0:00:15.531 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "blivet_output is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Unmask the systemd cryptsetup services] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:115 Saturday 28 June 2025 18:18:57 -0400 (0:00:00.035) 0:00:15.566 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Show blivet_output] ****************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:121 Saturday 28 June 2025 18:18:57 -0400 (0:00:00.019) 0:00:15.585 ********* ok: [managed-node1] => { "blivet_output": { "actions": [], "changed": false, "crypts": [], "failed": false, "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [], "warnings": [ "Module invocation had junk after the JSON data: :0: 
DeprecationWarning: builtin type swigvarlink has no __module__ attribute" ] } } TASK [fedora.linux_system_roles.storage : Set the list of pools for test verification] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:130 Saturday 28 June 2025 18:18:57 -0400 (0:00:00.031) 0:00:15.617 ********* ok: [managed-node1] => { "ansible_facts": { "_storage_pools_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Set the list of volumes for test verification] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:134 Saturday 28 June 2025 18:18:57 -0400 (0:00:00.024) 0:00:15.641 ********* ok: [managed-node1] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Remove obsolete mounts] ************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:150 Saturday 28 June 2025 18:18:58 -0400 (0:00:00.022) 0:00:15.664 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:161 Saturday 28 June 2025 18:18:58 -0400 (0:00:00.034) 0:00:15.699 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "blivet_output['mounts'] | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Set up new/current mounts] *********** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:166 Saturday 28 June 2025 18:18:58 -0400 (0:00:00.036) 0:00:15.736 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" 
} TASK [fedora.linux_system_roles.storage : Manage mount ownership/permissions] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:177 Saturday 28 June 2025 18:18:58 -0400 (0:00:00.035) 0:00:15.771 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:189 Saturday 28 June 2025 18:18:58 -0400 (0:00:00.035) 0:00:15.807 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "blivet_output['mounts'] | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197 Saturday 28 June 2025 18:18:58 -0400 (0:00:00.034) 0:00:15.842 ********* ok: [managed-node1] => { "changed": false, "stat": { "atime": 1750749154.890496, "attr_flags": "e", "attributes": [ "extents" ], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1750749597.598, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 15, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0600", "mtime": 1750749154.890496, "nlink": 1, "path": "/etc/crypttab", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 0, "uid": 0, "version": "4287645168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : 
Manage /etc/crypttab to account for changes we just made] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:202 Saturday 28 June 2025 18:18:58 -0400 (0:00:00.536) 0:00:16.378 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Update facts] ************************ task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224 Saturday 28 June 2025 18:18:58 -0400 (0:00:00.059) 0:00:16.438 ********* ok: [managed-node1] TASK [Get unused disks] ******************************************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tasks/setup.yml:25 Saturday 28 June 2025 18:19:00 -0400 (0:00:02.086) 0:00:18.525 ********* included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml for managed-node1 TASK [Check if system is ostree] *********************************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:5 Saturday 28 June 2025 18:19:00 -0400 (0:00:00.058) 0:00:18.583 ********* ok: [managed-node1] => { "changed": false, "stat": { "exists": false } } TASK [Set flag to indicate system is ostree] *********************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:10 Saturday 28 June 2025 18:19:01 -0400 (0:00:00.514) 0:00:19.097 ********* ok: [managed-node1] => { "ansible_facts": { "__snapshot_is_ostree": false }, "changed": false } TASK [Ensure test packages] **************************************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:14 Saturday 28 June 2025 18:19:01 -0400 (0:00:00.031) 
0:00:19.129 ********* ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: util-linux-core TASK [Find unused disks in the system] ***************************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:23 Saturday 28 June 2025 18:19:02 -0400 (0:00:01.409) 0:00:20.538 ********* ok: [managed-node1] => { "changed": false, "disks": [ "sda", "sdb", "sdc", "sdd", "sde", "sdf", "sdg", "sdh", "sdi", "sdj" ], "info": [ "Line: NAME=\"/dev/sda\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdb\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdc\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdd\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sde\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdf\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdg\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdh\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdi\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdj\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdk\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdl\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/xvda\" TYPE=\"disk\" SIZE=\"268435456000\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line type [part] is not disk: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/xvda2\" TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"ext4\" 
LOG-SEC=\"512\"", "Line type [part] is not disk: NAME=\"/dev/xvda2\" TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"ext4\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/zram0\" TYPE=\"disk\" SIZE=\"3894411264\" FSTYPE=\"swap\" LOG-SEC=\"4096\"", "filename [xvda2] is a partition", "filename [xvda1] is a partition", "Disk [/dev/xvda] attrs [{'type': 'disk', 'size': '268435456000', 'fstype': '', 'ssize': '512'}] has partitions", "Disk [/dev/zram0] attrs [{'type': 'disk', 'size': '3894411264', 'fstype': 'swap', 'ssize': '4096'}] has fstype" ] } TASK [Set unused_disks if necessary] ******************************************* task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:31 Saturday 28 June 2025 18:19:04 -0400 (0:00:01.807) 0:00:22.345 ********* ok: [managed-node1] => { "ansible_facts": { "unused_disks": [ "sda", "sdb", "sdc", "sdd", "sde", "sdf", "sdg", "sdh", "sdi", "sdj" ] }, "changed": false } TASK [Print unused disks] ****************************************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:36 Saturday 28 June 2025 18:19:04 -0400 (0:00:00.026) 0:00:22.372 ********* ok: [managed-node1] => { "unused_disks": [ "sda", "sdb", "sdc", "sdd", "sde", "sdf", "sdg", "sdh", "sdi", "sdj" ] } TASK [Print info from find_unused_disk] **************************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:44 Saturday 28 June 2025 18:19:04 -0400 (0:00:00.026) 0:00:22.398 ********* skipping: [managed-node1] => { "false_condition": "unused_disks | d([]) | length < disks_needed | d(1)" } TASK [Show disk information] *************************************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:49 Saturday 28 June 2025 18:19:04 -0400 (0:00:00.038) 0:00:22.436 ********* 
skipping: [managed-node1] => { "changed": false, "false_condition": "unused_disks | d([]) | length < disks_needed | d(1)", "skip_reason": "Conditional result was False" } TASK [Exit playbook when there's not enough unused disks in the system] ******** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:58 Saturday 28 June 2025 18:19:04 -0400 (0:00:00.052) 0:00:22.489 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "unused_disks | d([]) | length < disks_needed | d(1)", "skip_reason": "Conditional result was False" } TASK [Create LVM logical volumes under volume groups] ************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tasks/setup.yml:31 Saturday 28 June 2025 18:19:04 -0400 (0:00:00.052) 0:00:22.541 ********* included: fedora.linux_system_roles.storage for managed-node1 TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2 Saturday 28 June 2025 18:19:04 -0400 (0:00:00.047) 0:00:22.588 ********* included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed-node1 TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2 Saturday 28 June 2025 18:19:04 -0400 (0:00:00.033) 0:00:22.622 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "__storage_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] *** task path: 
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7 Saturday 28 June 2025 18:19:05 -0400 (0:00:00.072) 0:00:22.694 ********* skipping: [managed-node1] => (item=RedHat.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "RedHat.yml", "skip_reason": "Conditional result was False" } ok: [managed-node1] => (item=Fedora.yml) => { "ansible_facts": { "_storage_copr_support_packages": [ "dnf-plugins-core" ], "blivet_package_list": [ "python3-blivet", "libblockdev-crypto", "libblockdev-dm", "libblockdev-fs", "libblockdev-lvm", "libblockdev-mdraid", "libblockdev-swap", "stratisd", "stratis-cli", "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}", "vdo" ] }, "ansible_included_var_files": [ "/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/vars/Fedora.yml" ], "ansible_loop_var": "item", "changed": false, "item": "Fedora.yml" } skipping: [managed-node1] => (item=Fedora_42.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "Fedora_42.yml", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item=Fedora_42.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "Fedora_42.yml", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Check if system is ostree] *********** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25 Saturday 28 June 2025 18:19:05 -0400 (0:00:00.084) 0:00:22.778 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "not __storage_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] *** task path: 
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30 Saturday 28 June 2025 18:19:05 -0400 (0:00:00.036) 0:00:22.815 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "not __storage_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:5 Saturday 28 June 2025 18:19:05 -0400 (0:00:00.032) 0:00:22.848 ********* ok: [managed-node1] => { "ansible_facts": { "_storage_pools_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:9 Saturday 28 June 2025 18:19:05 -0400 (0:00:00.036) 0:00:22.884 ********* ok: [managed-node1] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13 Saturday 28 June 2025 18:19:05 -0400 (0:00:00.040) 0:00:22.925 ********* redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed-node1 TASK [fedora.linux_system_roles.storage : Make sure blivet is available] ******* task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2 Saturday 28 June 2025 18:19:05 -0400 (0:00:00.097) 0:00:23.023 ********* ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: 
libblockdev libblockdev-crypto libblockdev-dm libblockdev-fs libblockdev-lvm libblockdev-mdraid libblockdev-swap python3-blivet stratis-cli stratisd vdo TASK [fedora.linux_system_roles.storage : Show storage_pools] ****************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:9 Saturday 28 June 2025 18:19:06 -0400 (0:00:01.455) 0:00:24.479 ********* ok: [managed-node1] => { "storage_pools | d([])": [ { "disks": [ "sda", "sdb", "sdc" ], "name": "test_vg1", "volumes": [ { "name": "lv1", "size": "15%" }, { "name": "lv2", "size": "50%" } ] }, { "disks": [ "sdd", "sde", "sdf" ], "name": "test_vg2", "volumes": [ { "name": "lv3", "size": "10%" }, { "name": "lv4", "size": "20%" } ] }, { "disks": [ "sdg", "sdh", "sdi", "sdj" ], "name": "test_vg3", "volumes": [ { "name": "lv5", "size": "30%" }, { "name": "lv6", "size": "25%" }, { "name": "lv7", "size": "10%" }, { "name": "lv8", "size": "10%" } ] } ] } TASK [fedora.linux_system_roles.storage : Show storage_volumes] **************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14 Saturday 28 June 2025 18:19:06 -0400 (0:00:00.076) 0:00:24.555 ********* ok: [managed-node1] => { "storage_volumes | d([])": [] } TASK [fedora.linux_system_roles.storage : Get required packages] *************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19 Saturday 28 June 2025 18:19:06 -0400 (0:00:00.073) 0:00:24.629 ********* ok: [managed-node1] => { "actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": [ "lvm2" ], "pools": [], "volumes": [] } TASK [fedora.linux_system_roles.storage : Enable copr repositories if needed] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32 Saturday 28 June 2025 18:19:08 -0400 (0:00:01.949) 0:00:26.579 ********* 
included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml for managed-node1 TASK [fedora.linux_system_roles.storage : Check if the COPR support packages should be installed] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:2 Saturday 28 June 2025 18:19:08 -0400 (0:00:00.034) 0:00:26.613 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Make sure COPR support packages are present] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:13 Saturday 28 June 2025 18:19:08 -0400 (0:00:00.030) 0:00:26.643 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "install_copr | d(false) | bool", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Enable COPRs] ************************ task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:19 Saturday 28 June 2025 18:19:09 -0400 (0:00:00.032) 0:00:26.676 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Make sure required packages are installed] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38 Saturday 28 June 2025 18:19:09 -0400 (0:00:00.033) 0:00:26.709 ********* ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: kpartx lvm2 TASK [fedora.linux_system_roles.storage : Get service facts] ******************* task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52 Saturday 28 June 2025 18:19:10 -0400 (0:00:01.345) 0:00:28.054 ********* ok: [managed-node1] => { 
"ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "audit-rules.service": { "name": "audit-rules.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blivet.service": { "name": "blivet.service", "source": "systemd", "state": "inactive", "status": "static" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "bluetooth.service": { "name": "bluetooth.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", 
"status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.bluez.service": { "name": "dbus-org.bluez.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.portable1.service": { "name": "dbus-org.freedesktop.portable1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.resolve1.service": { "name": "dbus-org.freedesktop.resolve1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, 
"dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dnf5-makecache.service": { "name": "dnf5-makecache.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dnf5-offline-transaction-cleanup.service": { "name": "dnf5-offline-transaction-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf5-offline-transaction.service": { "name": "dnf5-offline-transaction.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", 
"state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "fcoe.service": { "name": "fcoe.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "fips-crypto-policy-overlay.service": { "name": "fips-crypto-policy-overlay.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "fwupd-refresh.service": { "name": "fwupd-refresh.service", "source": "systemd", "state": "inactive", "status": "static" }, "fwupd.service": { "name": "fwupd.service", "source": "systemd", "state": "inactive", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", 
"status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "iscsi-shutdown.service": { "name": "iscsi-shutdown.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsi.service": { "name": "iscsi.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsid.service": { "name": "iscsid.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm-devices-import.service": { "name": "lvm-devices-import.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "lvm2-activation-early.service": { "name": "lvm2-activation-early.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", 
"source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "mdadm-grow-continue@.service": { "name": "mdadm-grow-continue@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdadm-last-resort@.service": { "name": "mdadm-last-resort@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdcheck_continue.service": { "name": "mdcheck_continue.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdcheck_start.service": { "name": "mdcheck_start.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmon@.service": { "name": "mdmon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdmonitor-oneshot.service": { "name": "mdmonitor-oneshot.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmonitor.service": { "name": "mdmonitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_multipath.service": { "name": "modprobe@dm_multipath.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, 
"modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "multipathd.service": { "name": "multipathd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "passim.service": { "name": "passim.service", "source": "systemd", "state": "inactive", "status": 
"static" }, "pcscd.service": { "name": "pcscd.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "plymouth-halt.service": { "name": "plymouth-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-kexec.service": { "name": "plymouth-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-poweroff.service": { "name": "plymouth-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "static" }, "plymouth-quit.service": { "name": "plymouth-quit.service", "source": "systemd", "state": "stopped", "status": "static" }, "plymouth-read-write.service": { "name": "plymouth-read-write.service", "source": "systemd", "state": "stopped", "status": "static" }, "plymouth-reboot.service": { "name": "plymouth-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "static" }, "plymouth-switch-root-initramfs.service": { "name": "plymouth-switch-root-initramfs.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-switch-root.service": { "name": "plymouth-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "running", "status": "static" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "raid-check.service": { "name": "raid-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "rbdmap.service": { "name": "rbdmap.service", "source": "systemd", "state": "stopped", "status": 
"not-found" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, 
"ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-unix-local@.service": { "name": "sshd-unix-local@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd-vsock@.service": { "name": "sshd-vsock@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": 
"sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "stratis-fstab-setup@.service": { "name": "stratis-fstab-setup@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratisd-min-postinitrd.service": { "name": "stratisd-min-postinitrd.service", "source": "systemd", "state": "inactive", "status": "static" }, "stratisd.service": { "name": "stratisd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-plymouth.service": { "name": "systemd-ask-password-plymouth.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, 
"systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-bsod.service": { "name": "systemd-bsod.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { 
"name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-homed-activate.service": { "name": "systemd-homed-activate.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-homed-firstboot.service": { "name": "systemd-homed-firstboot.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-homed.service": { "name": "systemd-homed.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": 
"static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-persistent-storage.service": { "name": "systemd-networkd-persistent-storage.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-networkd-wait-online@.service": { "name": "systemd-networkd-wait-online@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "systemd-networkd.service": { "name": "systemd-networkd.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, 
"systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": "systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-portabled.service": { "name": "systemd-portabled.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": 
"inactive", "status": "disabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-resolved.service": { "name": "systemd-resolved.service", "source": "systemd", "state": "running", "status": "enabled" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-storagetm.service": { "name": "systemd-storagetm.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": 
"unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-time-wait-sync.service": { "name": "systemd-time-wait-sync.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", 
"status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-userdbd.service": { "name": "systemd-userdbd.service", "source": "systemd", "state": "running", "status": "indirect" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-zram-setup@.service": { "name": "systemd-zram-setup@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-zram-setup@zram0.service": { "name": "systemd-zram-setup@zram0.service", "source": "systemd", "state": "stopped", "status": "active" }, "target.service": { "name": "target.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "targetclid.service": { "name": "targetclid.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "udisks2.service": { "name": "udisks2.service", "source": "systemd", "state": "running", "status": "enabled" }, "unbound-anchor.service": { "name": "unbound-anchor.service", "source": "systemd", "state": "stopped", "status": "static" 
}, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" } } }, "changed": false } TASK [fedora.linux_system_roles.storage : Set storage_cryptsetup_services] ***** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58 Saturday 28 June 2025 18:19:13 -0400 (0:00:02.728) 0:00:30.783 ********* ok: [managed-node1] => { "ansible_facts": { "storage_cryptsetup_services": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Mask the systemd cryptsetup services] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:64 Saturday 28 June 2025 18:19:13 -0400 (0:00:00.151) 0:00:30.934 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70 Saturday 28 June 2025 18:19:13 -0400 (0:00:00.020) 0:00:30.955 ********* changed: [managed-node1] => { "actions": [ { "action": "create format", "device": "/dev/sdj", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sdi", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sdh", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sdg", "fs_type": "lvmpv" }, { "action": "create device", "device": "/dev/test_vg3", "fs_type": null }, { "action": "create device", "device": 
"/dev/mapper/test_vg3-lv8", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/test_vg3-lv8", "fs_type": "xfs" }, { "action": "create device", "device": "/dev/mapper/test_vg3-lv7", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/test_vg3-lv7", "fs_type": "xfs" }, { "action": "create device", "device": "/dev/mapper/test_vg3-lv6", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/test_vg3-lv6", "fs_type": "xfs" }, { "action": "create device", "device": "/dev/mapper/test_vg3-lv5", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/test_vg3-lv5", "fs_type": "xfs" }, { "action": "create format", "device": "/dev/sdf", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sde", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sdd", "fs_type": "lvmpv" }, { "action": "create device", "device": "/dev/test_vg2", "fs_type": null }, { "action": "create device", "device": "/dev/mapper/test_vg2-lv4", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/test_vg2-lv4", "fs_type": "xfs" }, { "action": "create device", "device": "/dev/mapper/test_vg2-lv3", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/test_vg2-lv3", "fs_type": "xfs" }, { "action": "create format", "device": "/dev/sdc", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sdb", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sda", "fs_type": "lvmpv" }, { "action": "create device", "device": "/dev/test_vg1", "fs_type": null }, { "action": "create device", "device": "/dev/mapper/test_vg1-lv2", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/test_vg1-lv2", "fs_type": "xfs" }, { "action": "create device", "device": "/dev/mapper/test_vg1-lv1", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/test_vg1-lv1", "fs_type": "xfs" } ], "changed": true, "crypts": [], "leaves": [ "/dev/sdk", 
"/dev/sdl", "/dev/xvda1", "/dev/xvda2", "/dev/zram0", "/dev/mapper/test_vg1-lv1", "/dev/mapper/test_vg1-lv2", "/dev/mapper/test_vg2-lv3", "/dev/mapper/test_vg2-lv4", "/dev/mapper/test_vg3-lv5", "/dev/mapper/test_vg3-lv6", "/dev/mapper/test_vg3-lv7", "/dev/mapper/test_vg3-lv8" ], "mounts": [], "packages": [ "e2fsprogs", "lvm2", "xfsprogs" ], "pools": [ { "disks": [ "sda", "sdb", "sdc" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": false, "name": "test_vg1", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/test_vg1-lv1", "_kernel_device": "/dev/dm-7", "_mount_id": "UUID=d16d04d4-8c39-458d-91ab-62a71063488e", "_raw_device": "/dev/mapper/test_vg1-lv1", "_raw_kernel_device": "/dev/dm-7", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv1", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "15%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": 
"/dev/mapper/test_vg1-lv2", "_kernel_device": "/dev/dm-6", "_mount_id": "UUID=991d22ac-80b0-450e-8c69-466187ba5696", "_raw_device": "/dev/mapper/test_vg1-lv2", "_raw_kernel_device": "/dev/dm-6", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv2", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "50%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] }, { "disks": [ "sdd", "sde", "sdf" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": false, "name": "test_vg2", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/test_vg2-lv3", "_kernel_device": "/dev/dm-5", "_mount_id": "UUID=13c0d833-a00e-49fa-a58c-aa045851ccb6", "_raw_device": "/dev/mapper/test_vg2-lv3", "_raw_kernel_device": "/dev/dm-5", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, 
"encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv3", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "10%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/test_vg2-lv4", "_kernel_device": "/dev/dm-4", "_mount_id": "UUID=8ff069e1-541f-476d-a94a-129e7396e539", "_raw_device": "/dev/mapper/test_vg2-lv4", "_raw_kernel_device": "/dev/dm-4", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv4", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "20%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] }, { "disks": [ "sdg", "sdh", "sdi", "sdj" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, 
"encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": false, "name": "test_vg3", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/test_vg3-lv5", "_kernel_device": "/dev/dm-3", "_mount_id": "UUID=eb47ddaa-1d93-495c-b8f9-7231835c82c5", "_raw_device": "/dev/mapper/test_vg3-lv5", "_raw_kernel_device": "/dev/dm-3", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv5", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "30%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/test_vg3-lv6", "_kernel_device": "/dev/dm-2", "_mount_id": "UUID=84001af2-01d9-4734-b51a-79efe7f59395", "_raw_device": "/dev/mapper/test_vg3-lv6", "_raw_kernel_device": "/dev/dm-2", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, 
"fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv6", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "25%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/test_vg3-lv7", "_kernel_device": "/dev/dm-1", "_mount_id": "UUID=596601cb-c0d2-421d-af31-da3ca0a3650c", "_raw_device": "/dev/mapper/test_vg3-lv7", "_raw_kernel_device": "/dev/dm-1", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv7", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "10%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/test_vg3-lv8", "_kernel_device": "/dev/dm-0", "_mount_id": "UUID=77628583-14fb-431d-8722-115eadb2c621", "_raw_device": "/dev/mapper/test_vg3-lv8", "_raw_kernel_device": "/dev/dm-0", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": 
null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv8", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "10%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] } ], "volumes": [] } TASK [fedora.linux_system_roles.storage : Workaround for udev issue on some platforms] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:85 Saturday 28 June 2025 18:19:25 -0400 (0:00:11.866) 0:00:42.821 ********* ok: [managed-node1] => { "changed": false, "cmd": [ "udevadm", "trigger", "--subsystem-match=block" ], "delta": "0:00:00.016262", "end": "2025-06-28 18:19:25.921970", "rc": 0, "start": "2025-06-28 18:19:25.905708" } TASK [fedora.linux_system_roles.storage : Check if /etc/fstab is present] ****** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92 Saturday 28 June 2025 18:19:26 -0400 (0:00:00.950) 0:00:43.772 ********* ok: [managed-node1] => { "changed": false, "stat": { "atime": 1751148856.6415944, "attr_flags": "e", "attributes": [ "extents" ], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "c6a75908a9d4976e1b01922f732e09b9af082981", "ctime": 1750750281.4928355, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 14, "isblk": false, "ischr": 
false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1750750281.4928355, "nlink": 1, "path": "/etc/fstab", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 1344, "uid": 0, "version": "211217384", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Add fingerprint to /etc/fstab if present] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:97 Saturday 28 June 2025 18:19:26 -0400 (0:00:00.560) 0:00:44.332 ********* changed: [managed-node1] => { "backup": "", "changed": true } MSG: line added TASK [fedora.linux_system_roles.storage : Unmask the systemd cryptsetup services] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:115 Saturday 28 June 2025 18:19:27 -0400 (0:00:00.832) 0:00:45.164 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Show blivet_output] ****************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:121 Saturday 28 June 2025 18:19:27 -0400 (0:00:00.017) 0:00:45.182 ********* ok: [managed-node1] => { "blivet_output": { "actions": [ { "action": "create format", "device": "/dev/sdj", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sdi", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sdh", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sdg", "fs_type": "lvmpv" }, { "action": "create device", "device": "/dev/test_vg3", "fs_type": null }, { "action": "create device", "device": "/dev/mapper/test_vg3-lv8", "fs_type": null }, { "action": "create format", 
"device": "/dev/mapper/test_vg3-lv8", "fs_type": "xfs" }, { "action": "create device", "device": "/dev/mapper/test_vg3-lv7", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/test_vg3-lv7", "fs_type": "xfs" }, { "action": "create device", "device": "/dev/mapper/test_vg3-lv6", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/test_vg3-lv6", "fs_type": "xfs" }, { "action": "create device", "device": "/dev/mapper/test_vg3-lv5", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/test_vg3-lv5", "fs_type": "xfs" }, { "action": "create format", "device": "/dev/sdf", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sde", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sdd", "fs_type": "lvmpv" }, { "action": "create device", "device": "/dev/test_vg2", "fs_type": null }, { "action": "create device", "device": "/dev/mapper/test_vg2-lv4", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/test_vg2-lv4", "fs_type": "xfs" }, { "action": "create device", "device": "/dev/mapper/test_vg2-lv3", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/test_vg2-lv3", "fs_type": "xfs" }, { "action": "create format", "device": "/dev/sdc", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sdb", "fs_type": "lvmpv" }, { "action": "create format", "device": "/dev/sda", "fs_type": "lvmpv" }, { "action": "create device", "device": "/dev/test_vg1", "fs_type": null }, { "action": "create device", "device": "/dev/mapper/test_vg1-lv2", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/test_vg1-lv2", "fs_type": "xfs" }, { "action": "create device", "device": "/dev/mapper/test_vg1-lv1", "fs_type": null }, { "action": "create format", "device": "/dev/mapper/test_vg1-lv1", "fs_type": "xfs" } ], "changed": true, "crypts": [], "failed": false, "leaves": [ "/dev/sdk", "/dev/sdl", "/dev/xvda1", "/dev/xvda2", "/dev/zram0", 
"/dev/mapper/test_vg1-lv1", "/dev/mapper/test_vg1-lv2", "/dev/mapper/test_vg2-lv3", "/dev/mapper/test_vg2-lv4", "/dev/mapper/test_vg3-lv5", "/dev/mapper/test_vg3-lv6", "/dev/mapper/test_vg3-lv7", "/dev/mapper/test_vg3-lv8" ], "mounts": [], "packages": [ "e2fsprogs", "lvm2", "xfsprogs" ], "pools": [ { "disks": [ "sda", "sdb", "sdc" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": false, "name": "test_vg1", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/test_vg1-lv1", "_kernel_device": "/dev/dm-7", "_mount_id": "UUID=d16d04d4-8c39-458d-91ab-62a71063488e", "_raw_device": "/dev/mapper/test_vg1-lv1", "_raw_kernel_device": "/dev/dm-7", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv1", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "15%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/test_vg1-lv2", "_kernel_device": 
"/dev/dm-6", "_mount_id": "UUID=991d22ac-80b0-450e-8c69-466187ba5696", "_raw_device": "/dev/mapper/test_vg1-lv2", "_raw_kernel_device": "/dev/dm-6", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv2", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "50%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] }, { "disks": [ "sdd", "sde", "sdf" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": false, "name": "test_vg2", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/test_vg2-lv3", "_kernel_device": "/dev/dm-5", "_mount_id": "UUID=13c0d833-a00e-49fa-a58c-aa045851ccb6", "_raw_device": "/dev/mapper/test_vg2-lv3", "_raw_kernel_device": "/dev/dm-5", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, 
"encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv3", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "10%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/test_vg2-lv4", "_kernel_device": "/dev/dm-4", "_mount_id": "UUID=8ff069e1-541f-476d-a94a-129e7396e539", "_raw_device": "/dev/mapper/test_vg2-lv4", "_raw_kernel_device": "/dev/dm-4", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv4", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "20%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] }, { "disks": [ "sdg", "sdh", "sdi", "sdj" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, 
"encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": false, "name": "test_vg3", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/test_vg3-lv5", "_kernel_device": "/dev/dm-3", "_mount_id": "UUID=eb47ddaa-1d93-495c-b8f9-7231835c82c5", "_raw_device": "/dev/mapper/test_vg3-lv5", "_raw_kernel_device": "/dev/dm-3", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv5", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "30%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/test_vg3-lv6", "_kernel_device": "/dev/dm-2", "_mount_id": "UUID=84001af2-01d9-4734-b51a-79efe7f59395", "_raw_device": "/dev/mapper/test_vg3-lv6", "_raw_kernel_device": "/dev/dm-2", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": 
true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv6", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "25%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/test_vg3-lv7", "_kernel_device": "/dev/dm-1", "_mount_id": "UUID=596601cb-c0d2-421d-af31-da3ca0a3650c", "_raw_device": "/dev/mapper/test_vg3-lv7", "_raw_kernel_device": "/dev/dm-1", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv7", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "10%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/test_vg3-lv8", "_kernel_device": "/dev/dm-0", "_mount_id": "UUID=77628583-14fb-431d-8722-115eadb2c621", "_raw_device": "/dev/mapper/test_vg3-lv8", "_raw_kernel_device": "/dev/dm-0", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, 
"encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv8", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "10%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] } ], "volumes": [], "warnings": [ "Module invocation had junk after the JSON data: :0: DeprecationWarning: builtin type swigvarlink has no __module__ attribute" ] } } TASK [fedora.linux_system_roles.storage : Set the list of pools for test verification] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:130 Saturday 28 June 2025 18:19:27 -0400 (0:00:00.033) 0:00:45.215 ********* ok: [managed-node1] => { "ansible_facts": { "_storage_pools_list": [ { "disks": [ "sda", "sdb", "sdc" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": false, "name": "test_vg1", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/test_vg1-lv1", "_kernel_device": "/dev/dm-7", "_mount_id": "UUID=d16d04d4-8c39-458d-91ab-62a71063488e", "_raw_device": "/dev/mapper/test_vg1-lv1", "_raw_kernel_device": "/dev/dm-7", 
"cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv1", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "15%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/test_vg1-lv2", "_kernel_device": "/dev/dm-6", "_mount_id": "UUID=991d22ac-80b0-450e-8c69-466187ba5696", "_raw_device": "/dev/mapper/test_vg1-lv2", "_raw_kernel_device": "/dev/dm-6", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv2", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "50%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] }, { 
"disks": [ "sdd", "sde", "sdf" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": false, "name": "test_vg2", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/test_vg2-lv3", "_kernel_device": "/dev/dm-5", "_mount_id": "UUID=13c0d833-a00e-49fa-a58c-aa045851ccb6", "_raw_device": "/dev/mapper/test_vg2-lv3", "_raw_kernel_device": "/dev/dm-5", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv3", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "10%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/test_vg2-lv4", "_kernel_device": "/dev/dm-4", "_mount_id": "UUID=8ff069e1-541f-476d-a94a-129e7396e539", "_raw_device": "/dev/mapper/test_vg2-lv4", "_raw_kernel_device": "/dev/dm-4", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, 
"encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv4", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "20%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] }, { "disks": [ "sdg", "sdh", "sdi", "sdj" ], "encryption": false, "encryption_cipher": null, "encryption_clevis_pin": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "encryption_tang_thumbprint": null, "encryption_tang_url": null, "grow_to_fill": false, "name": "test_vg3", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "shared": false, "state": "present", "type": "lvm", "volumes": [ { "_device": "/dev/mapper/test_vg3-lv5", "_kernel_device": "/dev/dm-3", "_mount_id": "UUID=eb47ddaa-1d93-495c-b8f9-7231835c82c5", "_raw_device": "/dev/mapper/test_vg3-lv5", "_raw_kernel_device": "/dev/dm-3", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", 
"mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv5", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "30%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/test_vg3-lv6", "_kernel_device": "/dev/dm-2", "_mount_id": "UUID=84001af2-01d9-4734-b51a-79efe7f59395", "_raw_device": "/dev/mapper/test_vg3-lv6", "_raw_kernel_device": "/dev/dm-2", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv6", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "25%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/test_vg3-lv7", "_kernel_device": "/dev/dm-1", "_mount_id": "UUID=596601cb-c0d2-421d-af31-da3ca0a3650c", "_raw_device": "/dev/mapper/test_vg3-lv7", "_raw_kernel_device": "/dev/dm-1", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, 
"fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv7", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "10%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null }, { "_device": "/dev/mapper/test_vg3-lv8", "_kernel_device": "/dev/dm-0", "_mount_id": "UUID=77628583-14fb-431d-8722-115eadb2c621", "_raw_device": "/dev/mapper/test_vg3-lv8", "_raw_kernel_device": "/dev/dm-0", "cache_devices": [], "cache_mode": null, "cache_size": 0, "cached": false, "compression": null, "deduplication": null, "disks": [], "encryption": false, "encryption_cipher": null, "encryption_key": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_password": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "xfs", "mount_check": 0, "mount_device_identifier": "uuid", "mount_group": null, "mount_mode": null, "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "mount_user": null, "name": "lv8", "raid_chunk_size": null, "raid_device_count": null, "raid_disks": [], "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "raid_stripe_size": null, "size": "10%", "state": "present", "thin": false, "thin_pool_name": null, "thin_pool_size": null, "type": "lvm", "vdo_pool_size": null } ] } ] }, "changed": false } TASK [fedora.linux_system_roles.storage : Set the list of volumes for test verification] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:134 Saturday 28 June 2025 18:19:27 -0400 (0:00:00.031) 0:00:45.247 
********* ok: [managed-node1] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Remove obsolete mounts] ************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:150 Saturday 28 June 2025 18:19:27 -0400 (0:00:00.023) 0:00:45.271 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:161 Saturday 28 June 2025 18:19:27 -0400 (0:00:00.030) 0:00:45.301 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "blivet_output['mounts'] | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Set up new/current mounts] *********** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:166 Saturday 28 June 2025 18:19:27 -0400 (0:00:00.032) 0:00:45.333 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage mount ownership/permissions] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:177 Saturday 28 June 2025 18:19:27 -0400 (0:00:00.030) 0:00:45.364 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:189 Saturday 28 June 2025 18:19:27 -0400 (0:00:00.030) 0:00:45.394 ********* skipping: [managed-node1] => { "changed": false, "false_condition": 
"blivet_output['mounts'] | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197 Saturday 28 June 2025 18:19:27 -0400 (0:00:00.031) 0:00:45.426 ********* ok: [managed-node1] => { "changed": false, "stat": { "atime": 1751149138.617773, "attr_flags": "e", "attributes": [ "extents" ], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1750749597.598, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 15, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0600", "mtime": 1750749154.890496, "nlink": 1, "path": "/etc/crypttab", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 0, "uid": 0, "version": "4287645168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Manage /etc/crypttab to account for changes we just made] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:202 Saturday 28 June 2025 18:19:28 -0400 (0:00:00.482) 0:00:45.908 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Update facts] ************************ task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224 Saturday 28 June 2025 18:19:28 -0400 (0:00:00.018) 0:00:45.927 ********* ok: [managed-node1] TASK [Run the snapshot role to create snapshot LVs] **************************** 
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:41 Saturday 28 June 2025 18:19:29 -0400 (0:00:01.023) 0:00:46.951 ********* included: fedora.linux_system_roles.snapshot for managed-node1 TASK [fedora.linux_system_roles.snapshot : Set platform/version specific variables] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:3 Saturday 28 June 2025 18:19:29 -0400 (0:00:00.082) 0:00:47.033 ********* included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml for managed-node1 TASK [fedora.linux_system_roles.snapshot : Ensure ansible_facts used by role] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:2 Saturday 28 June 2025 18:19:29 -0400 (0:00:00.087) 0:00:47.121 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "__snapshot_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.snapshot : Check if system is ostree] ********** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:10 Saturday 28 June 2025 18:19:29 -0400 (0:00:00.052) 0:00:47.174 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "not __snapshot_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.snapshot : Set flag to indicate system is ostree] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:15 Saturday 28 June 2025 18:19:29 -0400 (0:00:00.028) 0:00:47.202 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "not __snapshot_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK 
[fedora.linux_system_roles.snapshot : Set platform/version specific variables] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:19 Saturday 28 June 2025 18:19:29 -0400 (0:00:00.033) 0:00:47.236 ********* skipping: [managed-node1] => (item=RedHat.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "RedHat.yml", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item=Fedora.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "Fedora.yml", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item=Fedora_42.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "Fedora_42.yml", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => (item=Fedora_42.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "Fedora_42.yml", "skip_reason": "Conditional result was False" } skipping: [managed-node1] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.snapshot : Ensure required packages are installed] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:6 Saturday 28 June 2025 18:19:29 -0400 (0:00:00.058) 0:00:47.294 ********* changed: [managed-node1] => { "changed": true, "rc": 0, "results": [ "Installed: snapm-0.4.3-2.fc42.noarch", "Installed: python3-snapm-0.4.3-2.fc42.noarch", "Installed: boom-boot-1.6.6-2.fc42.noarch", "Installed: boom-boot-conf-1.6.6-2.fc42.noarch", "Installed: python3-boom-1.6.6-2.fc42.noarch" ] } lsrpackages: boom-boot lvm2 snapm util-linux-core TASK [fedora.linux_system_roles.snapshot : Run snapshot module snapshot] ******* task path: 
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:14 Saturday 28 June 2025 18:19:31 -0400 (0:00:01.912) 0:00:49.207 ********* changed: [managed-node1] => { "changed": true, "errors": "", "message": "", "return_code": 0 } TASK [fedora.linux_system_roles.snapshot : Print out response] ***************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:40 Saturday 28 June 2025 18:19:35 -0400 (0:00:04.412) 0:00:53.619 ********* ok: [managed-node1] => { "snapshot_cmd": { "changed": true, "errors": "", "failed": false, "message": "", "msg": "", "return_code": 0 } } TASK [fedora.linux_system_roles.snapshot : Set result] ************************* task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:45 Saturday 28 June 2025 18:19:35 -0400 (0:00:00.031) 0:00:53.651 ********* ok: [managed-node1] => { "ansible_facts": { "snapshot_cmd": { "changed": true, "errors": "", "failed": false, "message": "", "msg": "", "return_code": 0 } }, "changed": false } TASK [fedora.linux_system_roles.snapshot : Set snapshot_facts to the JSON results] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:49 Saturday 28 June 2025 18:19:36 -0400 (0:00:00.034) 0:00:53.686 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "snapshot_lvm_action == \"list\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.snapshot : Show errors] ************************ task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:54 Saturday 28 June 2025 18:19:36 -0400 (0:00:00.022) 0:00:53.708 ********* skipping: [managed-node1] => { "false_condition": "snapshot_cmd[\"return_code\"] != 0" } TASK [Assert changes for creation] ********************************************* task path: 
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:50 Saturday 28 June 2025 18:19:36 -0400 (0:00:00.037) 0:00:53.745 ********* ok: [managed-node1] => { "changed": false } MSG: All assertions passed TASK [Verify the snapshot LVs are created] ************************************* task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:54 Saturday 28 June 2025 18:19:36 -0400 (0:00:00.043) 0:00:53.788 ********* included: fedora.linux_system_roles.snapshot for managed-node1 TASK [fedora.linux_system_roles.snapshot : Set platform/version specific variables] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:3 Saturday 28 June 2025 18:19:36 -0400 (0:00:00.052) 0:00:53.840 ********* included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml for managed-node1 TASK [fedora.linux_system_roles.snapshot : Ensure ansible_facts used by role] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:2 Saturday 28 June 2025 18:19:36 -0400 (0:00:00.032) 0:00:53.873 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "__snapshot_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.snapshot : Check if system is ostree] ********** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:10 Saturday 28 June 2025 18:19:36 -0400 (0:00:00.051) 0:00:53.925 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "not __snapshot_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.snapshot : Set flag to indicate system is ostree] *** task path: 
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:15
Saturday 28 June 2025  18:19:36 -0400 (0:00:00.042)       0:00:53.967 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __snapshot_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:19
Saturday 28 June 2025  18:19:36 -0400 (0:00:00.043)       0:00:54.010 *********
skipping: [managed-node1] => (item=RedHat.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora_42.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora_42.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => {
    "changed": false
}
MSG: All items skipped

TASK [fedora.linux_system_roles.snapshot : Ensure required packages are installed] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:6
Saturday 28 June 2025  18:19:36 -0400 (0:00:00.090)       0:00:54.101 *********
ok: [managed-node1] => {
    "changed": false,
    "rc": 0,
    "results": []
}
MSG: Nothing to do
lsrpackages: boom-boot lvm2 snapm util-linux-core

TASK [fedora.linux_system_roles.snapshot : Run snapshot module check] **********
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:14
Saturday 28 June 2025  18:19:37 -0400 (0:00:01.438)       0:00:55.540 *********
ok: [managed-node1] => {
    "changed": false,
    "errors": "",
    "message": "",
    "return_code": 0
}

TASK [fedora.linux_system_roles.snapshot : Print out response] *****************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:40
Saturday 28 June 2025  18:19:39 -0400 (0:00:01.885)       0:00:57.426 *********
ok: [managed-node1] => {
    "snapshot_cmd": {
        "changed": false,
        "errors": "",
        "failed": false,
        "message": "",
        "msg": "",
        "return_code": 0
    }
}

TASK [fedora.linux_system_roles.snapshot : Set result] *************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:45
Saturday 28 June 2025  18:19:39 -0400 (0:00:00.041)       0:00:57.467 *********
ok: [managed-node1] => {
    "ansible_facts": {
        "snapshot_cmd": {
            "changed": false,
            "errors": "",
            "failed": false,
            "message": "",
            "msg": "",
            "return_code": 0
        }
    },
    "changed": false
}

TASK [fedora.linux_system_roles.snapshot : Set snapshot_facts to the JSON results] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:49
Saturday 28 June 2025  18:19:39 -0400 (0:00:00.039)       0:00:57.506 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "snapshot_lvm_action == \"list\"",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Show errors] ************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:54
Saturday 28 June 2025  18:19:39 -0400 (0:00:00.034)       0:00:57.541 *********
skipping: [managed-node1] => {
    "false_condition": "snapshot_cmd[\"return_code\"] != 0"
}

TASK [Run the snapshot role again to check idempotence] ************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:63
Saturday 28 June 2025  18:19:39 -0400 (0:00:00.041)       0:00:57.583 *********
included: fedora.linux_system_roles.snapshot for managed-node1

TASK [fedora.linux_system_roles.snapshot : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:3
Saturday 28 June 2025  18:19:40 -0400 (0:00:00.121)       0:00:57.705 *********
included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml for managed-node1

TASK [fedora.linux_system_roles.snapshot : Ensure ansible_facts used by role] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:2
Saturday 28 June 2025  18:19:40 -0400 (0:00:00.105)       0:00:57.810 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "__snapshot_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Check if system is ostree] **********
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:10
Saturday 28 June 2025  18:19:40 -0400 (0:00:00.104)       0:00:57.914 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __snapshot_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Set flag to indicate system is ostree] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:15
Saturday 28 June 2025  18:19:40 -0400 (0:00:00.086)       0:00:58.001 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __snapshot_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:19
Saturday 28 June 2025  18:19:40 -0400 (0:00:00.105)       0:00:58.106 *********
skipping: [managed-node1] => (item=RedHat.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora_42.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora_42.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => {
    "changed": false
}
MSG: All items skipped

TASK [fedora.linux_system_roles.snapshot : Ensure required packages are installed] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:6
Saturday 28 June 2025  18:19:40 -0400 (0:00:00.134)       0:00:58.241 *********
ok: [managed-node1] => {
    "changed": false,
    "rc": 0,
    "results": []
}
MSG: Nothing to do
lsrpackages: boom-boot lvm2 snapm util-linux-core

TASK [fedora.linux_system_roles.snapshot : Run snapshot module snapshot] *******
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:14
Saturday 28 June 2025  18:19:42 -0400 (0:00:01.551)       0:00:59.792 *********
ok: [managed-node1] => {
    "changed": false,
    "errors": "",
    "message": "",
    "return_code": 0
}

TASK [fedora.linux_system_roles.snapshot : Print out response] *****************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:40
Saturday 28 June 2025  18:19:44 -0400 (0:00:01.992)       0:01:01.785 *********
ok: [managed-node1] => {
    "snapshot_cmd": {
        "changed": false,
        "errors": "",
        "failed": false,
        "message": "",
        "msg": "",
        "return_code": 0
    }
}

TASK [fedora.linux_system_roles.snapshot : Set result] *************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:45
Saturday 28 June 2025  18:19:44 -0400 (0:00:00.044)       0:01:01.830 *********
ok: [managed-node1] => {
    "ansible_facts": {
        "snapshot_cmd": {
            "changed": false,
            "errors": "",
            "failed": false,
            "message": "",
            "msg": "",
            "return_code": 0
        }
    },
    "changed": false
}

TASK [fedora.linux_system_roles.snapshot : Set snapshot_facts to the JSON results] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:49
Saturday 28 June 2025  18:19:44 -0400 (0:00:00.049)       0:01:01.880 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "snapshot_lvm_action == \"list\"",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Show errors] ************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:54
Saturday 28 June 2025  18:19:44 -0400 (0:00:00.035)       0:01:01.915 *********
skipping: [managed-node1] => {
    "false_condition": "snapshot_cmd[\"return_code\"] != 0"
}

TASK [Assert no changes for creation] ******************************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:72
Saturday 28 June 2025  18:19:44 -0400 (0:00:00.038)       0:01:01.954 *********
ok: [managed-node1] => {
    "changed": false
}
MSG: All assertions passed

TASK [Verify again to check idempotence] ***************************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:76
Saturday 28 June 2025  18:19:44 -0400 (0:00:00.036)       0:01:01.991 *********
included: fedora.linux_system_roles.snapshot for managed-node1

TASK [fedora.linux_system_roles.snapshot : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:3
Saturday 28 June 2025  18:19:44 -0400 (0:00:00.060)       0:01:02.051 *********
included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml for managed-node1

TASK [fedora.linux_system_roles.snapshot : Ensure ansible_facts used by role] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:2
Saturday 28 June 2025  18:19:44 -0400 (0:00:00.045)       0:01:02.097 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "__snapshot_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Check if system is ostree] **********
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:10
Saturday 28 June 2025  18:19:44 -0400 (0:00:00.053)       0:01:02.150 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __snapshot_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Set flag to indicate system is ostree] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:15
Saturday 28 June 2025  18:19:44 -0400 (0:00:00.034)       0:01:02.184 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __snapshot_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:19
Saturday 28 June 2025  18:19:44 -0400 (0:00:00.041)       0:01:02.226 *********
skipping: [managed-node1] => (item=RedHat.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora_42.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora_42.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => {
    "changed": false
}
MSG: All items skipped

TASK [fedora.linux_system_roles.snapshot : Ensure required packages are installed] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:6
Saturday 28 June 2025  18:19:44 -0400 (0:00:00.073)       0:01:02.300 *********
ok: [managed-node1] => {
    "changed": false,
    "rc": 0,
    "results": []
}
MSG: Nothing to do
lsrpackages: boom-boot lvm2 snapm util-linux-core

TASK [fedora.linux_system_roles.snapshot : Run snapshot module check] **********
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:14
Saturday 28 June 2025  18:19:46 -0400 (0:00:01.398)       0:01:03.698 *********
ok: [managed-node1] => {
    "changed": false,
    "errors": "",
    "message": "",
    "return_code": 0
}

TASK [fedora.linux_system_roles.snapshot : Print out response] *****************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:40
Saturday 28 June 2025  18:19:48 -0400 (0:00:02.020)       0:01:05.719 *********
ok: [managed-node1] => {
    "snapshot_cmd": {
        "changed": false,
        "errors": "",
        "failed": false,
        "message": "",
        "msg": "",
        "return_code": 0
    }
}

TASK [fedora.linux_system_roles.snapshot : Set result] *************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:45
Saturday 28 June 2025  18:19:48 -0400 (0:00:00.025)       0:01:05.745 *********
ok: [managed-node1] => {
    "ansible_facts": {
        "snapshot_cmd": {
            "changed": false,
            "errors": "",
            "failed": false,
            "message": "",
            "msg": "",
            "return_code": 0
        }
    },
    "changed": false
}

TASK [fedora.linux_system_roles.snapshot : Set snapshot_facts to the JSON results] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:49
Saturday 28 June 2025  18:19:48 -0400 (0:00:00.034)       0:01:05.780 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "snapshot_lvm_action == \"list\"",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Show errors] ************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:54
Saturday 28 June 2025  18:19:48 -0400 (0:00:00.033)       0:01:05.814 *********
skipping: [managed-node1] => {
    "false_condition": "snapshot_cmd[\"return_code\"] != 0"
}

TASK [Assert no changes for verify] ********************************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:85
Saturday 28 June 2025  18:19:48 -0400 (0:00:00.040)       0:01:05.854 *********
ok: [managed-node1] => {
    "changed": false
}
MSG: All assertions passed

TASK [Run the snapshot role remove the snapshot LVs] ***************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:89
Saturday 28 June 2025  18:19:48 -0400 (0:00:00.050)       0:01:05.905 *********
included: fedora.linux_system_roles.snapshot for managed-node1

TASK [fedora.linux_system_roles.snapshot : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:3
Saturday 28 June 2025  18:19:48 -0400 (0:00:00.111)       0:01:06.016 *********
included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml for managed-node1

TASK [fedora.linux_system_roles.snapshot : Ensure ansible_facts used by role] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:2
Saturday 28 June 2025  18:19:48 -0400 (0:00:00.082)       0:01:06.099 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "__snapshot_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Check if system is ostree] **********
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:10
Saturday 28 June 2025  18:19:48 -0400 (0:00:00.099)       0:01:06.199 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __snapshot_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Set flag to indicate system is ostree] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:15
Saturday 28 June 2025  18:19:48 -0400 (0:00:00.064)       0:01:06.263 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __snapshot_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:19
Saturday 28 June 2025  18:19:48 -0400 (0:00:00.060)       0:01:06.324 *********
skipping: [managed-node1] => (item=RedHat.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora_42.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora_42.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => {
    "changed": false
}
MSG: All items skipped

TASK [fedora.linux_system_roles.snapshot : Ensure required packages are installed] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:6
Saturday 28 June 2025  18:19:48 -0400 (0:00:00.106)       0:01:06.430 *********
ok: [managed-node1] => {
    "changed": false,
    "rc": 0,
    "results": []
}
MSG: Nothing to do
lsrpackages: boom-boot lvm2 snapm util-linux-core

TASK [fedora.linux_system_roles.snapshot : Run snapshot module remove] *********
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:14
Saturday 28 June 2025  18:19:50 -0400 (0:00:01.534)       0:01:07.965 *********
changed: [managed-node1] => {
    "changed": true,
    "errors": "",
    "message": "",
    "return_code": 0
}

TASK [fedora.linux_system_roles.snapshot : Print out response] *****************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:40
Saturday 28 June 2025  18:19:52 -0400 (0:00:02.372)       0:01:10.337 *********
ok: [managed-node1] => {
    "snapshot_cmd": {
        "changed": true,
        "errors": "",
        "failed": false,
        "message": "",
        "msg": "",
        "return_code": 0
    }
}

TASK [fedora.linux_system_roles.snapshot : Set result] *************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:45
Saturday 28 June 2025  18:19:52 -0400 (0:00:00.025)       0:01:10.363 *********
ok: [managed-node1] => {
    "ansible_facts": {
        "snapshot_cmd": {
            "changed": true,
            "errors": "",
            "failed": false,
            "message": "",
            "msg": "",
            "return_code": 0
        }
    },
    "changed": false
}

TASK [fedora.linux_system_roles.snapshot : Set snapshot_facts to the JSON results] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:49
Saturday 28 June 2025  18:19:52 -0400 (0:00:00.023)       0:01:10.387 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "snapshot_lvm_action == \"list\"",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Show errors] ************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:54
Saturday 28 June 2025  18:19:52 -0400 (0:00:00.017)       0:01:10.404 *********
skipping: [managed-node1] => {
    "false_condition": "snapshot_cmd[\"return_code\"] != 0"
}

TASK [Assert changes for removal] **********************************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:96
Saturday 28 June 2025  18:19:52 -0400 (0:00:00.022)       0:01:10.426 *********
ok: [managed-node1] => {
    "changed": false
}
MSG: All assertions passed

TASK [Use the snapshot_lvm_verify option to make sure remove is done] **********
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:100
Saturday 28 June 2025  18:19:52 -0400 (0:00:00.023)       0:01:10.450 *********
included: fedora.linux_system_roles.snapshot for managed-node1

TASK [fedora.linux_system_roles.snapshot : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:3
Saturday 28 June 2025  18:19:52 -0400 (0:00:00.037)       0:01:10.487 *********
included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml for managed-node1

TASK [fedora.linux_system_roles.snapshot : Ensure ansible_facts used by role] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:2
Saturday 28 June 2025  18:19:52 -0400 (0:00:00.027)       0:01:10.514 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "__snapshot_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Check if system is ostree] **********
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:10
Saturday 28 June 2025  18:19:52 -0400 (0:00:00.046)       0:01:10.561 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __snapshot_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Set flag to indicate system is ostree] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:15
Saturday 28 June 2025  18:19:52 -0400 (0:00:00.043)       0:01:10.604 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __snapshot_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:19
Saturday 28 June 2025  18:19:52 -0400 (0:00:00.049)       0:01:10.653 *********
skipping: [managed-node1] => (item=RedHat.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora_42.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora_42.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => {
    "changed": false
}
MSG: All items skipped

TASK [fedora.linux_system_roles.snapshot : Ensure required packages are installed] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:6
Saturday 28 June 2025  18:19:53 -0400 (0:00:00.068)       0:01:10.722 *********
ok: [managed-node1] => {
    "changed": false,
    "rc": 0,
    "results": []
}
MSG: Nothing to do
lsrpackages: boom-boot lvm2 snapm util-linux-core

TASK [fedora.linux_system_roles.snapshot : Run snapshot module remove] *********
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:14
Saturday 28 June 2025  18:19:54 -0400 (0:00:01.415)       0:01:12.137 *********
ok: [managed-node1] => {
    "changed": false,
    "errors": "",
    "message": "",
    "return_code": 0
}

TASK [fedora.linux_system_roles.snapshot : Print out response] *****************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:40
Saturday 28 June 2025  18:19:55 -0400 (0:00:01.190)       0:01:13.327 *********
ok: [managed-node1] => {
    "snapshot_cmd": {
        "changed": false,
        "errors": "",
        "failed": false,
        "message": "",
        "msg": "",
        "return_code": 0
    }
}

TASK [fedora.linux_system_roles.snapshot : Set result] *************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:45
Saturday 28 June 2025  18:19:55 -0400 (0:00:00.046)       0:01:13.374 *********
ok: [managed-node1] => {
    "ansible_facts": {
        "snapshot_cmd": {
            "changed": false,
            "errors": "",
            "failed": false,
            "message": "",
            "msg": "",
            "return_code": 0
        }
    },
    "changed": false
}

TASK [fedora.linux_system_roles.snapshot : Set snapshot_facts to the JSON results] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:49
Saturday 28 June 2025  18:19:55 -0400 (0:00:00.024)       0:01:13.398 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "snapshot_lvm_action == \"list\"",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Show errors] ************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:54
Saturday 28 June 2025  18:19:55 -0400 (0:00:00.017)       0:01:13.416 *********
skipping: [managed-node1] => {
    "false_condition": "snapshot_cmd[\"return_code\"] != 0"
}

TASK [Remove again to check idempotence] ***************************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:108
Saturday 28 June 2025  18:19:55 -0400 (0:00:00.021)       0:01:13.438 *********
included: fedora.linux_system_roles.snapshot for managed-node1

TASK [fedora.linux_system_roles.snapshot : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:3
Saturday 28 June 2025  18:19:55 -0400 (0:00:00.037)       0:01:13.475 *********
included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml for managed-node1

TASK [fedora.linux_system_roles.snapshot : Ensure ansible_facts used by role] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:2
Saturday 28 June 2025  18:19:55 -0400 (0:00:00.027)       0:01:13.503 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "__snapshot_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Check if system is ostree] **********
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:10
Saturday 28 June 2025  18:19:55 -0400 (0:00:00.038)       0:01:13.541 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __snapshot_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Set flag to indicate system is ostree] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:15
Saturday 28 June 2025  18:19:55 -0400 (0:00:00.033)       0:01:13.574 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __snapshot_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:19
Saturday 28 June 2025  18:19:55 -0400 (0:00:00.037)       0:01:13.612 *********
skipping: [managed-node1] => (item=RedHat.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora_42.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora_42.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => {
    "changed": false
}
MSG: All items skipped

TASK [fedora.linux_system_roles.snapshot : Ensure required packages are installed] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:6
Saturday 28 June 2025  18:19:56 -0400 (0:00:00.072)       0:01:13.684 *********
ok: [managed-node1] => {
    "changed": false,
    "rc": 0,
    "results": []
}
MSG: Nothing to do
lsrpackages: boom-boot lvm2 snapm util-linux-core

TASK [fedora.linux_system_roles.snapshot : Run snapshot module remove] *********
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:14
Saturday 28 June 2025  18:19:57 -0400 (0:00:01.469)       0:01:15.154 *********
ok: [managed-node1] => {
    "changed": false,
    "errors": "",
    "message": "",
    "return_code": 0
}

TASK [fedora.linux_system_roles.snapshot : Print out response] *****************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:40
Saturday 28 June 2025  18:19:58 -0400 (0:00:01.140)       0:01:16.294 *********
ok: [managed-node1] => {
    "snapshot_cmd": {
        "changed": false,
        "errors": "",
        "failed": false,
        "message": "",
        "msg": "",
        "return_code": 0
    }
}

TASK [fedora.linux_system_roles.snapshot : Set result] *************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:45
Saturday 28 June 2025  18:19:58 -0400 (0:00:00.043)       0:01:16.338 *********
ok: [managed-node1] => {
    "ansible_facts": {
        "snapshot_cmd": {
            "changed": false,
            "errors": "",
            "failed": false,
            "message": "",
            "msg": "",
            "return_code": 0
        }
    },
    "changed": false
}

TASK [fedora.linux_system_roles.snapshot : Set snapshot_facts to the JSON results] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:49
Saturday 28 June 2025  18:19:58 -0400 (0:00:00.040)       0:01:16.379 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "snapshot_lvm_action == \"list\"",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Show errors] ************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:54
Saturday 28 June 2025  18:19:58 -0400 (0:00:00.020)       0:01:16.399 *********
skipping: [managed-node1] => {
    "false_condition": "snapshot_cmd[\"return_code\"] != 0"
}

TASK [Assert no changes for remove] ********************************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:115
Saturday 28 June 2025  18:19:58 -0400 (0:00:00.037)       0:01:16.437 *********
ok: [managed-node1] => {
    "changed": false
}
MSG: All assertions passed

TASK [Verify remove again to check idempotence] ********************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:119
Saturday 28 June 2025  18:19:58 -0400 (0:00:00.046)       0:01:16.483 *********
included: fedora.linux_system_roles.snapshot for managed-node1

TASK [fedora.linux_system_roles.snapshot : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:3
Saturday 28 June 2025  18:19:58 -0400 (0:00:00.086)       0:01:16.569 *********
included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml for managed-node1

TASK [fedora.linux_system_roles.snapshot : Ensure ansible_facts used by role] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:2
Saturday 28 June 2025  18:19:58 -0400 (0:00:00.077)       0:01:16.647 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "__snapshot_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Check if system is ostree] **********
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:10
Saturday 28 June 2025  18:19:59 -0400 (0:00:00.069)       0:01:16.716 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __snapshot_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Set flag to indicate system is ostree] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:15
Saturday 28 June 2025  18:19:59 -0400 (0:00:00.049)       0:01:16.766 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __snapshot_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/set_vars.yml:19
Saturday 28 June 2025  18:19:59 -0400 (0:00:00.042)       0:01:16.809 *********
skipping: [managed-node1] => (item=RedHat.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora_42.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora_42.yml)  => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => {
    "changed": false
}
MSG: All items skipped

TASK [fedora.linux_system_roles.snapshot : Ensure required packages are installed] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:6
Saturday 28 June 2025  18:19:59 -0400 (0:00:00.082)       0:01:16.892 *********
ok: [managed-node1] => {
    "changed": false,
    "rc": 0,
    "results": []
}
MSG: Nothing to do
lsrpackages: boom-boot lvm2 snapm util-linux-core

TASK [fedora.linux_system_roles.snapshot : Run snapshot module remove] *********
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:14
Saturday 28 June 2025  18:20:00 -0400 (0:00:01.522)       0:01:18.414 *********
ok: [managed-node1] => {
    "changed": false,
    "errors": "",
    "message": "",
    "return_code": 0
}

TASK [fedora.linux_system_roles.snapshot : Print out response] *****************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:40
Saturday 28 June 2025  18:20:02 -0400 (0:00:01.273)       0:01:19.688 *********
ok: [managed-node1] => {
    "snapshot_cmd": {
        "changed": false,
        "errors": "",
        "failed": false,
        "message": "",
        "msg": "",
        "return_code": 0
    }
}

TASK [fedora.linux_system_roles.snapshot : Set result] *************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:45
Saturday 28 June 2025  18:20:02 -0400 (0:00:00.046)       0:01:19.734 *********
ok: [managed-node1] => {
    "ansible_facts": {
        "snapshot_cmd": {
            "changed": false,
            "errors": "",
            "failed": false,
            "message": "",
            "msg": "",
            "return_code": 0
        }
    },
    "changed": false
}

TASK [fedora.linux_system_roles.snapshot : Set snapshot_facts to the JSON results] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:49
Saturday 28 June 2025  18:20:02 -0400 (0:00:00.043)       0:01:19.778 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "snapshot_lvm_action == \"list\"",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.snapshot : Show errors] ************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:54
Saturday 28 June 2025  18:20:02 -0400 (0:00:00.032)       0:01:19.810 *********
skipping: [managed-node1] => {
    "false_condition": "snapshot_cmd[\"return_code\"] != 0"
}

TASK [Assert no changes for remove verify] *************************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:127
Saturday 28 June 2025  18:20:02 -0400 (0:00:00.038)       0:01:19.849 *********
ok: [managed-node1] => {
    "changed": false
}
MSG: All assertions passed

TASK [Cleanup] *****************************************************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tests_basic.yml:132
Saturday 28 June 2025  18:20:02 -0400 (0:00:00.030)       0:01:19.879 *********
included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tasks/cleanup.yml for managed-node1
TASK [Remove storage volumes] **************************************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tasks/cleanup.yml:7
Saturday 28 June 2025  18:20:02 -0400 (0:00:00.035)       0:01:19.915 *********
included: fedora.linux_system_roles.storage for managed-node1

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2
Saturday 28 June 2025  18:20:02 -0400 (0:00:00.039)       0:01:19.954 *********
included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed-node1

TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
Saturday 28 June 2025  18:20:02 -0400 (0:00:00.028)       0:01:19.983 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "__storage_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
Saturday 28 June 2025  18:20:02 -0400 (0:00:00.114)       0:01:20.097 *********
skipping: [managed-node1] => (item=RedHat.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
ok: [managed-node1] => (item=Fedora.yml) => {
    "ansible_facts": {
        "_storage_copr_support_packages": [
            "dnf-plugins-core"
        ],
        "blivet_package_list": [
            "python3-blivet",
            "libblockdev-crypto",
            "libblockdev-dm",
            "libblockdev-fs",
            "libblockdev-lvm",
            "libblockdev-mdraid",
            "libblockdev-swap",
            "stratisd",
            "stratis-cli",
            "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}",
            "vdo"
        ]
    },
    "ansible_included_var_files": [
        "/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/vars/Fedora.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "Fedora.yml"
}
skipping: [managed-node1] => (item=Fedora_42.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=Fedora_42.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "Fedora_42.yml",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Check if system is ostree] ***********
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
Saturday 28 June 2025  18:20:02 -0400 (0:00:00.086)       0:01:20.184 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __storage_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30
Saturday 28 June 2025  18:20:02 -0400 (0:00:00.040)       0:01:20.225 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "not __storage_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:5
Saturday 28 June 2025  18:20:02 -0400 (0:00:00.039)       0:01:20.264 *********
ok: [managed-node1] => {
    "ansible_facts": {
        "_storage_pools_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:9
Saturday 28 June 2025  18:20:02 -0400 (0:00:00.034)       0:01:20.299 *********
ok: [managed-node1] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
Saturday 28 June 2025  18:20:02 -0400 (0:00:00.035)       0:01:20.335 *********
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
redirecting (type: modules) ansible.builtin.mount to ansible.posix.mount
included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed-node1

TASK [fedora.linux_system_roles.storage : Make sure blivet is available] *******
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
Saturday 28 June 2025  18:20:02 -0400 (0:00:00.072)       0:01:20.408 *********
ok: [managed-node1] => {
    "changed": false,
    "rc": 0,
    "results": []
}
MSG:

Nothing to do
lsrpackages: libblockdev libblockdev-crypto libblockdev-dm libblockdev-fs libblockdev-lvm libblockdev-mdraid libblockdev-swap python3-blivet stratis-cli stratisd vdo

TASK [fedora.linux_system_roles.storage : Show storage_pools] ******************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:9
Saturday 28 June 2025  18:20:04 -0400 (0:00:01.423)       0:01:21.831 *********
[WARNING]: Encountered 2 template errors.
error 1 - access to attribute 'append' of 'list' object is unsafe.
error 2 - template potentially truncated
Origin: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tasks/cleanup.yml:13:20

11 storage_udevadm_trigger: true # helps with get_unused_disks, cleanup
12 storage_safe_mode: false
13 storage_pools: |
                   ^ column 20

ok: [managed-node1] => {
    "storage_pools | d([])": "<< error 1 - access to attribute 'append' of 'list' object is unsafe. >><< error 2 - template potentially truncated >>\n"
}

TASK [fedora.linux_system_roles.storage : Show storage_volumes] ****************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14
Saturday 28 June 2025  18:20:04 -0400 (0:00:00.082)       0:01:21.913 *********
ok: [managed-node1] => {
    "storage_volumes | d([])": []
}

TASK [fedora.linux_system_roles.storage : Get required packages] ***************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19
Saturday 28 June 2025  18:20:04 -0400 (0:00:00.073)       0:01:21.987 *********
ok: [managed-node1] => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "leaves": [],
    "mounts": [],
    "packages": [],
    "pools": [],
    "volumes": []
}

TASK [fedora.linux_system_roles.storage : Enable copr repositories if needed] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32
Saturday 28 June 2025  18:20:05 -0400 (0:00:00.793)       0:01:22.781 *********
included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml for managed-node1

TASK [fedora.linux_system_roles.storage : Check if the COPR support packages should be installed] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:2
Saturday 28 June 2025  18:20:05 -0400 (0:00:00.271)       0:01:23.053 *********
skipping: [managed-node1] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Make sure COPR support packages are present] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:13
Saturday 28 June 2025  18:20:05 -0400 (0:00:00.128)       0:01:23.181 *********
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "install_copr | d(false) | bool",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Enable COPRs] ************************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:19
Saturday 28 June 2025  18:20:05 -0400 (0:00:00.145)       0:01:23.326 *********
skipping: [managed-node1] => {
    "changed": false,
    "skipped_reason": "No items in the list"
}

TASK [fedora.linux_system_roles.storage : Make sure required packages are installed] ***
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38
Saturday 28 June 2025  18:20:05 -0400 (0:00:00.081)       0:01:23.408 *********
ok: [managed-node1] => {
    "changed": false,
    "rc": 0,
    "results": []
}
MSG:

Nothing to do
lsrpackages: kpartx

TASK [fedora.linux_system_roles.storage : Get service facts] *******************
task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
Saturday 28 June 2025  18:20:07 -0400 (0:00:01.492)       0:01:24.901 *********
ok: [managed-node1] => {
    "ansible_facts": { "services": { "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" }, "audit-rules.service": { "name": "audit-rules.service", "source":
"systemd", "state": "stopped", "status": "enabled" }, "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" }, "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" }, "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "blivet.service": { "name": "blivet.service", "source": "systemd", "state": "inactive", "status": "static" }, "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "bluetooth.service": { "name": "bluetooth.service", "source": "systemd", "state": "inactive", "status": "enabled" }, "capsule@.service": { "name": "capsule@.service", "source": "systemd", "state": "unknown", "status": "static" }, "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd-restricted.service": { "name": "chronyd-restricted.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" }, "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init-hotplugd.service": { "name": "cloud-init-hotplugd.service", "source": "systemd", "state": "inactive", "status": "static" }, "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" }, 
"container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" }, "dbus-broker.service": { "name": "dbus-broker.service", "source": "systemd", "state": "running", "status": "enabled" }, "dbus-org.bluez.service": { "name": "dbus-org.bluez.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.portable1.service": { "name": "dbus-org.freedesktop.portable1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus-org.freedesktop.resolve1.service": { "name": "dbus-org.freedesktop.resolve1.service", "source": "systemd", "state": "active", "status": "alias" }, "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "active", "status": "alias" }, "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd.service": { "name": "dhcpcd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dhcpcd@.service": { "name": "dhcpcd@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "display-manager.service": { "name": "display-manager.service", "source": "systemd", 
"state": "stopped", "status": "not-found" }, "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "running", "status": "static" }, "dnf-makecache.service": { "name": "dnf-makecache.service", "source": "systemd", "state": "stopped", "status": "static" }, "dnf-system-upgrade-cleanup.service": { "name": "dnf-system-upgrade-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf-system-upgrade.service": { "name": "dnf-system-upgrade.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dnf5-makecache.service": { "name": "dnf5-makecache.service", "source": "systemd", "state": "inactive", "status": "alias" }, "dnf5-offline-transaction-cleanup.service": { "name": "dnf5-offline-transaction-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "dnf5-offline-transaction.service": { "name": "dnf5-offline-transaction.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" }, "dracut-shutdown-onfailure.service": { "name": "dracut-shutdown-onfailure.service", "source": "systemd", 
"state": "stopped", "status": "static" }, "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" }, "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" }, "fcoe.service": { "name": "fcoe.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "fips-crypto-policy-overlay.service": { "name": "fips-crypto-policy-overlay.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fsidd.service": { "name": "fsidd.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "stopped", "status": "static" }, "fwupd-refresh.service": { "name": "fwupd-refresh.service", "source": "systemd", "state": "inactive", "status": "static" }, "fwupd.service": { "name": "fwupd.service", "source": "systemd", "state": "inactive", "status": "static" }, "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" }, "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "active" }, "grub-boot-indeterminate.service": { "name": "grub-boot-indeterminate.service", "source": "systemd", "state": "inactive", "status": "static" }, "grub2-systemd-integration.service": { "name": "grub2-systemd-integration.service", "source": "systemd", "state": "inactive", "status": "static" }, "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" }, "hv_kvp_daemon.service": { "name": "hv_kvp_daemon.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" }, 
"initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" }, "iscsi-shutdown.service": { "name": "iscsi-shutdown.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsi.service": { "name": "iscsi.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "iscsid.service": { "name": "iscsid.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" }, "ldconfig.service": { "name": "ldconfig.service", "source": "systemd", "state": "stopped", "status": "static" }, "logrotate.service": { "name": "logrotate.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm-devices-import.service": { "name": "lvm-devices-import.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "lvm2-activation-early.service": { "name": "lvm2-activation-early.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" }, "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "man-db-cache-update.service": { "name": "man-db-cache-update.service", "source": "systemd", "state": "inactive", "status": "static" }, "man-db-restart-cache-update.service": { "name": "man-db-restart-cache-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "mdadm-grow-continue@.service": { "name": 
"mdadm-grow-continue@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdadm-last-resort@.service": { "name": "mdadm-last-resort@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdcheck_continue.service": { "name": "mdcheck_continue.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdcheck_start.service": { "name": "mdcheck_start.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmon@.service": { "name": "mdmon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "mdmonitor-oneshot.service": { "name": "mdmonitor-oneshot.service", "source": "systemd", "state": "inactive", "status": "static" }, "mdmonitor.service": { "name": "mdmonitor.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "modprobe@.service": { "name": "modprobe@.service", "source": "systemd", "state": "unknown", "status": "static" }, "modprobe@configfs.service": { "name": "modprobe@configfs.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_mod.service": { "name": "modprobe@dm_mod.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@dm_multipath.service": { "name": "modprobe@dm_multipath.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@drm.service": { "name": "modprobe@drm.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@fuse.service": { "name": "modprobe@fuse.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "modprobe@loop.service": { "name": "modprobe@loop.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "multipathd.service": { "name": "multipathd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "nfs-blkmap.service": { "name": 
"nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" }, "nfsdcld.service": { "name": "nfsdcld.service", "source": "systemd", "state": "stopped", "status": "static" }, "nftables.service": { "name": "nftables.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nis-domainname.service": { "name": "nis-domainname.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "nm-priv-helper.service": { "name": "nm-priv-helper.service", "source": "systemd", "state": "inactive", "status": "static" }, "ntpd.service": { "name": "ntpd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ntpdate.service": { "name": "ntpdate.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "pam_namespace.service": { "name": "pam_namespace.service", "source": "systemd", "state": "inactive", "status": "static" }, "passim.service": { "name": "passim.service", "source": "systemd", "state": "inactive", "status": "static" }, "pcscd.service": { "name": "pcscd.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "plymouth-halt.service": { "name": "plymouth-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-kexec.service": { "name": "plymouth-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-poweroff.service": { "name": "plymouth-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, 
"plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "static" }, "plymouth-quit.service": { "name": "plymouth-quit.service", "source": "systemd", "state": "stopped", "status": "static" }, "plymouth-read-write.service": { "name": "plymouth-read-write.service", "source": "systemd", "state": "stopped", "status": "static" }, "plymouth-reboot.service": { "name": "plymouth-reboot.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "static" }, "plymouth-switch-root-initramfs.service": { "name": "plymouth-switch-root-initramfs.service", "source": "systemd", "state": "inactive", "status": "static" }, "plymouth-switch-root.service": { "name": "plymouth-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "running", "status": "static" }, "quotaon-root.service": { "name": "quotaon-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "quotaon@.service": { "name": "quotaon@.service", "source": "systemd", "state": "unknown", "status": "static" }, "raid-check.service": { "name": "raid-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "rbdmap.service": { "name": "rbdmap.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" }, "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" }, "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" }, "rpc-gssd.service": { "name": 
"rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" }, "rpc-svcgssd.service": { "name": "rpc-svcgssd.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "rpmdb-rebuild.service": { "name": "rpmdb-rebuild.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "selinux-autorelabel-mark.service": { "name": "selinux-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "selinux-autorelabel.service": { "name": "selinux-autorelabel.service", "source": "systemd", "state": "inactive", "status": "static" }, "selinux-check-proper-disable.service": { "name": "selinux-check-proper-disable.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "active" }, "sntp.service": { "name": "sntp.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "ssh-host-keys-migration.service": { "name": "ssh-host-keys-migration.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "sshd-keygen@.service": { "name": "sshd-keygen@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "sshd-keygen@ecdsa.service": { "name": "sshd-keygen@ecdsa.service", "source": "systemd", "state": "stopped", 
"status": "inactive" }, "sshd-keygen@ed25519.service": { "name": "sshd-keygen@ed25519.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-keygen@rsa.service": { "name": "sshd-keygen@rsa.service", "source": "systemd", "state": "stopped", "status": "inactive" }, "sshd-unix-local@.service": { "name": "sshd-unix-local@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd-vsock@.service": { "name": "sshd-vsock@.service", "source": "systemd", "state": "unknown", "status": "alias" }, "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" }, "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "indirect" }, "sssd-autofs.service": { "name": "sssd-autofs.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-kcm.service": { "name": "sssd-kcm.service", "source": "systemd", "state": "stopped", "status": "indirect" }, "sssd-nss.service": { "name": "sssd-nss.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pac.service": { "name": "sssd-pac.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-pam.service": { "name": "sssd-pam.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-ssh.service": { "name": "sssd-ssh.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd-sudo.service": { "name": "sssd-sudo.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "sssd.service": { "name": "sssd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "stratis-fstab-setup@.service": { "name": "stratis-fstab-setup@.service", "source": "systemd", "state": "unknown", "status": "static" }, "stratisd-min-postinitrd.service": { "name": "stratisd-min-postinitrd.service", "source": "systemd", "state": "inactive", "status": "static" }, "stratisd.service": { "name": 
"stratisd.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "syslog.service": { "name": "syslog.service", "source": "systemd", "state": "stopped", "status": "not-found" }, "system-update-cleanup.service": { "name": "system-update-cleanup.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-plymouth.service": { "name": "systemd-ask-password-plymouth.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-battery-check.service": { "name": "systemd-battery-check.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-bless-boot.service": { "name": "systemd-bless-boot.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-boot-check-no-failures.service": { "name": "systemd-boot-check-no-failures.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-boot-random-seed.service": { "name": "systemd-boot-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-boot-update.service": { "name": "systemd-boot-update.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-bootctl@.service": { "name": "systemd-bootctl@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-bsod.service": { "name": "systemd-bsod.service", "source": "systemd", "state": "stopped", 
"status": "static" }, "systemd-confext.service": { "name": "systemd-confext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-coredump@.service": { "name": "systemd-coredump@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-creds@.service": { "name": "systemd-creds@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-exit.service": { "name": "systemd-exit.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-growfs-root.service": { "name": "systemd-growfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-growfs@.service": { "name": "systemd-growfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-hibernate-clear.service": { "name": "systemd-hibernate-clear.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate-resume.service": { "name": "systemd-hibernate-resume.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-homed-activate.service": { "name": "systemd-homed-activate.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-homed-firstboot.service": { "name": "systemd-homed-firstboot.service", "source": "systemd", "state": "inactive", 
"status": "disabled" }, "systemd-homed.service": { "name": "systemd-homed.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-journald-sync@.service": { "name": "systemd-journald-sync@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-journald@.service": { "name": "systemd-journald@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", 
"state": "stopped", "status": "static" }, "systemd-network-generator.service": { "name": "systemd-network-generator.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-networkd-persistent-storage.service": { "name": "systemd-networkd-persistent-storage.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-networkd-wait-online.service": { "name": "systemd-networkd-wait-online.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-networkd-wait-online@.service": { "name": "systemd-networkd-wait-online@.service", "source": "systemd", "state": "unknown", "status": "disabled" }, "systemd-networkd.service": { "name": "systemd-networkd.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-oomd.service": { "name": "systemd-oomd.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-pcrextend@.service": { "name": "systemd-pcrextend@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrfs-root.service": { "name": "systemd-pcrfs-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pcrfs@.service": { "name": "systemd-pcrfs@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrlock-file-system.service": { "name": "systemd-pcrlock-file-system.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-code.service": { "name": "systemd-pcrlock-firmware-code.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-firmware-config.service": { "name": "systemd-pcrlock-firmware-config.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-machine-id.service": { "name": "systemd-pcrlock-machine-id.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-make-policy.service": { "name": 
"systemd-pcrlock-make-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-authority.service": { "name": "systemd-pcrlock-secureboot-authority.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock-secureboot-policy.service": { "name": "systemd-pcrlock-secureboot-policy.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-pcrlock@.service": { "name": "systemd-pcrlock@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-pcrmachine.service": { "name": "systemd-pcrmachine.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-initrd.service": { "name": "systemd-pcrphase-initrd.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase-sysinit.service": { "name": "systemd-pcrphase-sysinit.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-pcrphase.service": { "name": "systemd-pcrphase.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-portabled.service": { "name": "systemd-portabled.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-pstore.service": { "name": "systemd-pstore.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-quotacheck-root.service": { "name": "systemd-quotacheck-root.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-quotacheck@.service": { "name": "systemd-quotacheck@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-reboot.service": { "name": "systemd-reboot.service", "source": 
"systemd", "state": "inactive", "status": "static" }, "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "enabled-runtime" }, "systemd-repart.service": { "name": "systemd-repart.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-resolved.service": { "name": "systemd-resolved.service", "source": "systemd", "state": "running", "status": "enabled" }, "systemd-rfkill.service": { "name": "systemd-rfkill.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-soft-reboot.service": { "name": "systemd-soft-reboot.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-storagetm.service": { "name": "systemd-storagetm.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend-then-hibernate.service": { "name": "systemd-suspend-then-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-sysext.service": { "name": "systemd-sysext.service", "source": "systemd", "state": "stopped", "status": "enabled" }, "systemd-sysext@.service": { "name": "systemd-sysext@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-sysupdate-reboot.service": { "name": "systemd-sysupdate-reboot.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysupdate.service": { "name": "systemd-sysupdate.service", "source": "systemd", "state": "inactive", "status": "indirect" }, "systemd-sysusers.service": { "name": "systemd-sysusers.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-time-wait-sync.service": { "name": "systemd-time-wait-sync.service", 
"source": "systemd", "state": "inactive", "status": "disabled" }, "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-timesyncd.service": { "name": "systemd-timesyncd.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev-early.service": { "name": "systemd-tmpfiles-setup-dev-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup-early.service": { "name": "systemd-tpm2-setup-early.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-tpm2-setup.service": { "name": "systemd-tpm2-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-load-credentials.service": { "name": "systemd-udev-load-credentials.service", "source": "systemd", "state": "stopped", "status": "disabled" }, "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" }, "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", 
"state": "stopped", "status": "static" }, "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-userdbd.service": { "name": "systemd-userdbd.service", "source": "systemd", "state": "running", "status": "indirect" }, "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" }, "systemd-volatile-root.service": { "name": "systemd-volatile-root.service", "source": "systemd", "state": "inactive", "status": "static" }, "systemd-zram-setup@.service": { "name": "systemd-zram-setup@.service", "source": "systemd", "state": "unknown", "status": "static" }, "systemd-zram-setup@zram0.service": { "name": "systemd-zram-setup@zram0.service", "source": "systemd", "state": "stopped", "status": "active" }, "target.service": { "name": "target.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "targetclid.service": { "name": "targetclid.service", "source": "systemd", "state": "inactive", "status": "disabled" }, "udisks2.service": { "name": "udisks2.service", "source": "systemd", "state": "running", "status": "enabled" }, "unbound-anchor.service": { "name": "unbound-anchor.service", "source": "systemd", "state": "stopped", "status": "static" }, "user-runtime-dir@.service": { "name": "user-runtime-dir@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user-runtime-dir@0.service": { "name": "user-runtime-dir@0.service", "source": "systemd", "state": "stopped", "status": "active" }, "user@.service": { "name": "user@.service", "source": "systemd", "state": "unknown", "status": "static" }, "user@0.service": { "name": "user@0.service", "source": "systemd", "state": "running", "status": "active" } } }, "changed": false } TASK 
[fedora.linux_system_roles.storage : Set storage_cryptsetup_services] ***** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58 Saturday 28 June 2025 18:20:10 -0400 (0:00:02.929) 0:01:27.830 ********* ok: [managed-node1] => { "ansible_facts": { "storage_cryptsetup_services": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Mask the systemd cryptsetup services] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:64 Saturday 28 June 2025 18:20:10 -0400 (0:00:00.173) 0:01:28.004 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70 Saturday 28 June 2025 18:20:10 -0400 (0:00:00.029) 0:01:28.034 ********* ok: [managed-node1] => { "actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [] } TASK [fedora.linux_system_roles.storage : Workaround for udev issue on some platforms] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:85 Saturday 28 June 2025 18:20:11 -0400 (0:00:00.717) 0:01:28.751 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "blivet_output is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Check if /etc/fstab is present] ****** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92 Saturday 28 June 2025 18:20:11 -0400 (0:00:00.035) 0:01:28.787 ********* ok: [managed-node1] => { "changed": false, "stat": { "atime": 1751149167.4386027, "attr_flags": "e", "attributes": [ "extents" ], 
"block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "b217da31b2f64136c01e111f73bfdd354d292400", "ctime": 1751149167.4375753, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 393219, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1751149167.4380517, "nlink": 1, "path": "/etc/fstab", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 1366, "uid": 0, "version": "1619388706", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [fedora.linux_system_roles.storage : Add fingerprint to /etc/fstab if present] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:97 Saturday 28 June 2025 18:20:11 -0400 (0:00:00.523) 0:01:29.310 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "blivet_output is changed", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Unmask the systemd cryptsetup services] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:115 Saturday 28 June 2025 18:20:11 -0400 (0:00:00.026) 0:01:29.336 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Show blivet_output] ****************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:121 Saturday 28 June 2025 18:20:11 -0400 (0:00:00.017) 0:01:29.354 ********* ok: [managed-node1] => { "blivet_output": { "actions": [], "changed": false, "crypts": [], "failed": false, "leaves": [], "mounts": [], "packages": [], "pools": [], "volumes": [], "warnings": 
[ "Module invocation had junk after the JSON data: :0: DeprecationWarning: builtin type swigvarlink has no __module__ attribute" ] } } TASK [fedora.linux_system_roles.storage : Set the list of pools for test verification] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:130 Saturday 28 June 2025 18:20:11 -0400 (0:00:00.025) 0:01:29.379 ********* ok: [managed-node1] => { "ansible_facts": { "_storage_pools_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Set the list of volumes for test verification] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:134 Saturday 28 June 2025 18:20:11 -0400 (0:00:00.024) 0:01:29.404 ********* ok: [managed-node1] => { "ansible_facts": { "_storage_volumes_list": [] }, "changed": false } TASK [fedora.linux_system_roles.storage : Remove obsolete mounts] ************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:150 Saturday 28 June 2025 18:20:11 -0400 (0:00:00.024) 0:01:29.428 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:161 Saturday 28 June 2025 18:20:11 -0400 (0:00:00.033) 0:01:29.462 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "blivet_output['mounts'] | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Set up new/current mounts] *********** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:166 Saturday 28 June 2025 18:20:11 -0400 (0:00:00.048) 0:01:29.510 ********* skipping: [managed-node1] => { 
"changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Manage mount ownership/permissions] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:177 Saturday 28 June 2025 18:20:11 -0400 (0:00:00.067) 0:01:29.578 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:189 Saturday 28 June 2025 18:20:12 -0400 (0:00:00.108) 0:01:29.686 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "blivet_output['mounts'] | length > 0", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197 Saturday 28 June 2025 18:20:12 -0400 (0:00:00.120) 0:01:29.807 ********* ok: [managed-node1] => { "changed": false, "stat": { "atime": 1751149138.617773, "attr_flags": "e", "attributes": [ "extents" ], "block_size": 4096, "blocks": 0, "charset": "binary", "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "ctime": 1750749597.598, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 15, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "inode/x-empty", "mode": "0600", "mtime": 1750749154.890496, "nlink": 1, "path": "/etc/crypttab", "pw_name": "root", "readable": true, "rgrp": false, "roth": false, "rusr": true, "size": 0, "uid": 0, "version": "4287645168", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, 
"xusr": false } } TASK [fedora.linux_system_roles.storage : Manage /etc/crypttab to account for changes we just made] *** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:202 Saturday 28 June 2025 18:20:12 -0400 (0:00:00.609) 0:01:30.416 ********* skipping: [managed-node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [fedora.linux_system_roles.storage : Update facts] ************************ task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224 Saturday 28 June 2025 18:20:12 -0400 (0:00:00.033) 0:01:30.450 ********* ok: [managed-node1] TASK [Save unused_disk_return before verify] *********************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tasks/cleanup.yml:30 Saturday 28 June 2025 18:20:13 -0400 (0:00:01.015) 0:01:31.466 ********* ok: [managed-node1] => { "ansible_facts": { "unused_disks_before": [ "sda", "sdb", "sdc", "sdd", "sde", "sdf", "sdg", "sdh", "sdi", "sdj" ] }, "changed": false } TASK [Verify that pools/volumes used in test are removed] ********************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tasks/cleanup.yml:34 Saturday 28 June 2025 18:20:13 -0400 (0:00:00.024) 0:01:31.490 ********* included: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml for managed-node1 TASK [Check if system is ostree] *********************************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:5 Saturday 28 June 2025 18:20:13 -0400 (0:00:00.032) 0:01:31.522 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "not __snapshot_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [Set flag to indicate system is ostree] 
*********************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:10 Saturday 28 June 2025 18:20:13 -0400 (0:00:00.022) 0:01:31.545 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "not __snapshot_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [Ensure test packages] **************************************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:14 Saturday 28 June 2025 18:20:13 -0400 (0:00:00.022) 0:01:31.568 ********* ok: [managed-node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: util-linux-core TASK [Find unused disks in the system] ***************************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:23 Saturday 28 June 2025 18:20:15 -0400 (0:00:01.339) 0:01:32.907 ********* ok: [managed-node1] => { "changed": false, "disks": [ "sdk", "sdl" ], "info": [ "Line: NAME=\"/dev/sda\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/mapper/test_vg1-lv2\" TYPE=\"lvm\" SIZE=\"4827643904\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [lvm] is not disk: NAME=\"/dev/mapper/test_vg1-lv2\" TYPE=\"lvm\" SIZE=\"4827643904\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdb\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/mapper/test_vg1-lv2\" TYPE=\"lvm\" SIZE=\"4827643904\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [lvm] is not disk: NAME=\"/dev/mapper/test_vg1-lv2\" TYPE=\"lvm\" SIZE=\"4827643904\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/mapper/test_vg1-lv1\" TYPE=\"lvm\" SIZE=\"1451229184\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [lvm] is not disk: NAME=\"/dev/mapper/test_vg1-lv1\" TYPE=\"lvm\" SIZE=\"1451229184\" 
FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdc\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdd\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/mapper/test_vg2-lv4\" TYPE=\"lvm\" SIZE=\"1933574144\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [lvm] is not disk: NAME=\"/dev/mapper/test_vg2-lv4\" TYPE=\"lvm\" SIZE=\"1933574144\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/mapper/test_vg2-lv3\" TYPE=\"lvm\" SIZE=\"968884224\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [lvm] is not disk: NAME=\"/dev/mapper/test_vg2-lv3\" TYPE=\"lvm\" SIZE=\"968884224\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sde\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdf\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdg\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/mapper/test_vg3-lv8\" TYPE=\"lvm\" SIZE=\"1287651328\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [lvm] is not disk: NAME=\"/dev/mapper/test_vg3-lv8\" TYPE=\"lvm\" SIZE=\"1287651328\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/mapper/test_vg3-lv7\" TYPE=\"lvm\" SIZE=\"1287651328\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [lvm] is not disk: NAME=\"/dev/mapper/test_vg3-lv7\" TYPE=\"lvm\" SIZE=\"1287651328\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/mapper/test_vg3-lv6\" TYPE=\"lvm\" SIZE=\"3217031168\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [lvm] is not disk: NAME=\"/dev/mapper/test_vg3-lv6\" TYPE=\"lvm\" SIZE=\"3217031168\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdh\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/mapper/test_vg3-lv6\" TYPE=\"lvm\" SIZE=\"3217031168\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [lvm] is not disk: NAME=\"/dev/mapper/test_vg3-lv6\" TYPE=\"lvm\" 
SIZE=\"3217031168\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdi\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/mapper/test_vg3-lv5\" TYPE=\"lvm\" SIZE=\"3862953984\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [lvm] is not disk: NAME=\"/dev/mapper/test_vg3-lv5\" TYPE=\"lvm\" SIZE=\"3862953984\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdj\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/mapper/test_vg3-lv5\" TYPE=\"lvm\" SIZE=\"3862953984\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line type [lvm] is not disk: NAME=\"/dev/mapper/test_vg3-lv5\" TYPE=\"lvm\" SIZE=\"3862953984\" FSTYPE=\"xfs\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdk\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/sdl\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/xvda\" TYPE=\"disk\" SIZE=\"268435456000\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line type [part] is not disk: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/xvda2\" TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"ext4\" LOG-SEC=\"512\"", "Line type [part] is not disk: NAME=\"/dev/xvda2\" TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"ext4\" LOG-SEC=\"512\"", "Line: NAME=\"/dev/zram0\" TYPE=\"disk\" SIZE=\"3894411264\" FSTYPE=\"swap\" LOG-SEC=\"4096\"", "Disk [/dev/sda] attrs [{'type': 'disk', 'size': '3221225472', 'fstype': 'LVM2_member', 'ssize': '512'}] has fstype", "Disk [/dev/sdb] attrs [{'type': 'disk', 'size': '3221225472', 'fstype': 'LVM2_member', 'ssize': '512'}] has fstype", "Disk [/dev/sdc] attrs [{'type': 'disk', 'size': '3221225472', 'fstype': 'LVM2_member', 'ssize': '512'}] has fstype", "Disk [/dev/sdd] attrs [{'type': 'disk', 'size': '3221225472', 'fstype': 'LVM2_member', 'ssize': '512'}] has fstype", "Disk [/dev/sde] 
attrs [{'type': 'disk', 'size': '3221225472', 'fstype': 'LVM2_member', 'ssize': '512'}] has fstype", "Disk [/dev/sdf] attrs [{'type': 'disk', 'size': '3221225472', 'fstype': 'LVM2_member', 'ssize': '512'}] has fstype", "Disk [/dev/sdg] attrs [{'type': 'disk', 'size': '3221225472', 'fstype': 'LVM2_member', 'ssize': '512'}] has fstype", "Disk [/dev/sdh] attrs [{'type': 'disk', 'size': '3221225472', 'fstype': 'LVM2_member', 'ssize': '512'}] has fstype", "Disk [/dev/sdi] attrs [{'type': 'disk', 'size': '3221225472', 'fstype': 'LVM2_member', 'ssize': '512'}] has fstype", "Disk [/dev/sdj] attrs [{'type': 'disk', 'size': '3221225472', 'fstype': 'LVM2_member', 'ssize': '512'}] has fstype", "filename [xvda2] is a partition", "filename [xvda1] is a partition", "Disk [/dev/xvda] attrs [{'type': 'disk', 'size': '268435456000', 'fstype': '', 'ssize': '512'}] has partitions", "Disk [/dev/zram0] attrs [{'type': 'disk', 'size': '3894411264', 'fstype': 'swap', 'ssize': '4096'}] has fstype" ] } TASK [Set unused_disks if necessary] ******************************************* task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:31 Saturday 28 June 2025 18:20:15 -0400 (0:00:00.559) 0:01:33.467 ********* ok: [managed-node1] => { "ansible_facts": { "unused_disks": [ "sdk", "sdl" ] }, "changed": false } TASK [Print unused disks] ****************************************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:36 Saturday 28 June 2025 18:20:15 -0400 (0:00:00.044) 0:01:33.511 ********* ok: [managed-node1] => { "unused_disks": [ "sdk", "sdl" ] } TASK [Print info from find_unused_disk] **************************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:44 Saturday 28 June 2025 18:20:15 -0400 (0:00:00.038) 0:01:33.550 ********* skipping: [managed-node1] => { 
"false_condition": "unused_disks | d([]) | length < disks_needed | d(1)" } TASK [Show disk information] *************************************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:49 Saturday 28 June 2025 18:20:15 -0400 (0:00:00.051) 0:01:33.602 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "unused_disks | d([]) | length < disks_needed | d(1)", "skip_reason": "Conditional result was False" } TASK [Exit playbook when there's not enough unused disks in the system] ******** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:58 Saturday 28 June 2025 18:20:16 -0400 (0:00:00.063) 0:01:33.665 ********* skipping: [managed-node1] => { "changed": false, "false_condition": "unused_disks | d([]) | length < disks_needed | d(1)", "skip_reason": "Conditional result was False" } TASK [Debug why list of unused disks has changed] ****************************** task path: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tasks/cleanup.yml:40 Saturday 28 June 2025 18:20:16 -0400 (0:00:00.073) 0:01:33.739 ********* [ERROR]: Task failed: Action failed. Origin: /tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tasks/cleanup.yml:40:3 38 min_return: "{{ test_disk_count }}" 39 40 - name: Debug why list of unused disks has changed ^ column 3 fatal: [managed-node1]: FAILED! 
=> { "changed": false, "cmd": "set -euxo pipefail\nexec 1>&2\nmount -f -l\ndf -H\nlvs --all\npvs --all\nvgs --all\ncat /tmp/snapshot_role.log || :\ncat /etc/lvm/devices/system.devices || :\nfor dev in $(lsblk -l -p -o NAME); do\n if [ \"$dev\" = NAME ]; then continue; fi\n echo blkid info with cache\n blkid \"$dev\" || :\n echo blkid info without cache\n blkid -p \"$dev\" || :\ndone\n# garbage collect\nblkid -g || :\necho lsblk after garbage collect\nlsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE\n# flush cache\nblkid -s none || :\necho lsblk after cache flush\nlsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE\ncat /tmp/blivet.log || :\n", "delta": "0:00:00.275215", "end": "2025-06-28 18:20:16.871243", "failed_when_result": true, "rc": 0, "start": "2025-06-28 18:20:16.596028" } STDERR: + exec + mount -f -l /dev/xvda2 on / type ext4 (rw,relatime,seclabel) devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=1882264k,nr_inodes=470566,mode=755,inode64) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel,inode64) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000) sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot) none on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime,seclabel) bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700) configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,size=760632k,nr_inodes=819200,mode=755,inode64) selinuxfs on /sys/fs/selinux type selinuxfs (rw,nosuid,noexec,relatime) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=37,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=4693) tracefs on 
/sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime,seclabel) debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime,seclabel) hugetlbfs on /dev/hugepages type hugetlbfs (rw,nosuid,nodev,relatime,seclabel,pagesize=2M) mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime,seclabel) tmpfs on /run/credentials/systemd-journald.service type tmpfs (ro,nosuid,nodev,noexec,relatime,nosymfollow,seclabel,size=1024k,nr_inodes=1024,mode=700,inode64,noswap) fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime) tmpfs on /tmp type tmpfs (rw,nosuid,nodev,seclabel,nr_inodes=1048576,inode64) tmpfs on /run/credentials/systemd-resolved.service type tmpfs (ro,nosuid,nodev,noexec,relatime,nosymfollow,seclabel,size=1024k,nr_inodes=1024,mode=700,inode64,noswap) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) tmpfs on /run/credentials/getty@tty1.service type tmpfs (ro,nosuid,nodev,noexec,relatime,nosymfollow,seclabel,size=1024k,nr_inodes=1024,mode=700,inode64,noswap) tmpfs on /run/credentials/serial-getty@ttyS0.service type tmpfs (ro,nosuid,nodev,noexec,relatime,nosymfollow,seclabel,size=1024k,nr_inodes=1024,mode=700,inode64,noswap) tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=380312k,nr_inodes=95078,mode=700,inode64) + df -H Filesystem Size Used Avail Use% Mounted on /dev/xvda2 265G 2.8G 251G 2% / devtmpfs 2.0G 0 2.0G 0% /dev tmpfs 2.0G 0 2.0G 0% /dev/shm tmpfs 779M 783k 779M 1% /run tmpfs 1.1M 0 1.1M 0% /run/credentials/systemd-journald.service tmpfs 2.0G 2.7M 2.0G 1% /tmp tmpfs 1.1M 0 1.1M 0% /run/credentials/systemd-resolved.service tmpfs 1.1M 0 1.1M 0% /run/credentials/getty@tty1.service tmpfs 1.1M 0 1.1M 0% /run/credentials/serial-getty@ttyS0.service tmpfs 390M 4.1k 390M 1% /run/user/0 + lvs --all LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert lv1 test_vg1 -wi-a----- 1.35g lv2 test_vg1 -wi-a----- <4.50g lv3 test_vg2 -wi-a----- 924.00m lv4 test_vg2 
-wi-a----- 1.80g lv5 test_vg3 -wi-a----- <3.60g lv6 test_vg3 -wi-a----- <3.00g lv7 test_vg3 -wi-a----- <1.20g lv8 test_vg3 -wi-a----- <1.20g + pvs --all PV VG Fmt Attr PSize PFree /dev/sda test_vg1 lvm2 a-- 2.99g 0 /dev/sdb test_vg1 lvm2 a-- 2.99g 140.00m /dev/sdc test_vg1 lvm2 a-- 2.99g 2.99g /dev/sdd test_vg2 lvm2 a-- 2.99g 296.00m /dev/sde test_vg2 lvm2 a-- 2.99g 2.99g /dev/sdf test_vg2 lvm2 a-- 2.99g 2.99g /dev/sdg test_vg3 lvm2 a-- 2.99g 604.00m /dev/sdh test_vg3 lvm2 a-- 2.99g 0 /dev/sdi test_vg3 lvm2 a-- 2.99g 0 /dev/sdj test_vg3 lvm2 a-- 2.99g <2.39g + vgs --all VG #PV #LV #SN Attr VSize VFree test_vg1 3 2 0 wz--n- <8.98g <3.13g test_vg2 3 2 0 wz--n- <8.98g 6.27g test_vg3 4 4 0 wz--n- <11.97g <2.98g + cat /tmp/snapshot_role.log 2025-06-28 18:19:32,265 INFO snapshot-role/MainThread: run_module() 2025-06-28 18:19:32,268 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': False, 'snapshot_lvm_action': 'snapshot', 'snapshot_lvm_all_vgs': True, 'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '15', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 'snapshot_lvm_verify_only': False, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'} 2025-06-28 18:19:32,268 INFO snapshot-role/MainThread: get_json_from_args: BEGIN 2025-06-28 18:19:32,298 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid': 'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '6736052224', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 
'-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}] 2025-06-28 18:19:32,350 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': '15'} 2025-06-28 18:19:32,403 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': '15'} 2025-06-28 18:19:32,403 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg3', 'vg_uuid': 'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': '12851347456', 'vg_free': '3196059648', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': '8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 
'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': '/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}] 2025-06-28 18:19:32,456 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': '15'} 2025-06-28 18:19:32,509 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': '15'} 2025-06-28 18:19:32,564 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': '15'} 2025-06-28 18:19:32,619 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': '15'} 2025-06-28 18:19:32,619 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '3359637504', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 'lv_size': '4827643904', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}] 2025-06-28 18:19:32,674 INFO 
snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': '15'} 2025-06-28 18:19:32,729 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': '15'} 2025-06-28 18:19:32,729 INFO snapshot-role/MainThread: validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': '15'}]} 2025-06-28 18:19:32,729 INFO snapshot-role/MainThread: mgr_snapshot_cmd: snapset1 2025-06-28 18:19:32,730 INFO snapshot-role/MainThread: verify snapsset : snapset1 2025-06-28 18:19:33,176 INFO snapshot-role/MainThread: snapsset ok: snapset1 2025-06-28 18:19:35,872 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 'errors': '', 'changed': True} 2025-06-28 18:19:35,873 INFO snapshot-role/MainThread: result: {'changed': True, 'return_code': 0, 'message': '', 'errors': '', 'msg': ''} 2025-06-28 18:19:38,430 INFO snapshot-role/MainThread: run_module() 2025-06-28 18:19:38,433 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': False, 'snapshot_lvm_action': 
'check', 'snapshot_lvm_all_vgs': True, 'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 'snapshot_lvm_verify_only': True, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'} 2025-06-28 18:19:38,433 INFO snapshot-role/MainThread: get_json_from_args: BEGIN 2025-06-28 18:19:38,470 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid': 'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '5662310400', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '968884224', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': '1W7efB-S01h-s24U-T43G-WUtx-dA3v-QRf3nY', 'lv_name': 'lv3-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv3', 'origin_size': '968884224', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'jxi2kf-Vt6s-vojQ-wS3k-65qH-Uin5-JEfIIY', 'lv_name': 'lv4-snapset_snapset1_1751149174_', 
'lv_full_name': 'test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv4', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}] 2025-06-28 18:19:38,517 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''} 2025-06-28 18:19:38,549 INFO snapshot-role/MainThread: get_json_from_args: lv lv3-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:38,598 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''} 2025-06-28 18:19:38,625 INFO snapshot-role/MainThread: get_json_from_args: lv lv4-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:38,625 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg3', 'vg_uuid': 'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': '12851347456', 'vg_free': '1002438656', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'wdqgo2-uU5H-wlvg-aoYF-y2Qt-4fDN-pOAyBj', 'lv_name': 'lv5-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_size': '583008256', 'origin': 'lv5', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 
'8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'TvUg1o-wzGo-dQFn-JyNr-Ak2n-f08U-XoAZ7A', 'lv_name': 'lv6-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv6', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'clujZs-C4oH-5Lg0-88kw-Ofm5-Balm-rBrqPw', 'lv_name': 'lv7-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv7', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': '/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'Cek7N7-hMOC-epqr-JdpY-aDBW-qRMX-cDnbP9', 'lv_name': 'lv8-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_path': 
'/dev/test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv8', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}] 2025-06-28 18:19:38,684 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''} 2025-06-28 18:19:38,711 INFO snapshot-role/MainThread: get_json_from_args: lv lv5-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:38,768 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''} 2025-06-28 18:19:38,794 INFO snapshot-role/MainThread: get_json_from_args: lv lv6-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:38,847 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''} 2025-06-28 18:19:38,877 INFO snapshot-role/MainThread: get_json_from_args: lv lv7-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:38,924 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''} 2025-06-28 18:19:38,950 INFO snapshot-role/MainThread: get_json_from_args: lv lv8-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:38,950 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '2097152000', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '1451229184', 
'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'CQ7FFq-VkRL-SUQj-hYSb-Woin-YHi2-cFvW5x', 'lv_name': 'lv1-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv1', 'origin_size': '1451229184', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 'lv_size': '4827643904', 'origin': '', 'origin_size': '4827643904', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'aXPoo7-LiNs-8W62-S2aO-QMwN-ln1F-WzAPCu', 'lv_name': 'lv2-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_size': '725614592', 'origin': 'lv2', 'origin_size': '4827643904', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}] 2025-06-28 18:19:39,009 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''} 2025-06-28 18:19:39,031 INFO snapshot-role/MainThread: get_json_from_args: lv lv1-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:39,084 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''} 2025-06-28 18:19:39,108 INFO snapshot-role/MainThread: get_json_from_args: lv lv2-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:39,109 INFO snapshot-role/MainThread: 
validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}, {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}]} 2025-06-28 18:19:39,109 INFO snapshot-role/MainThread: check_cmd: snapset1 2025-06-28 18:19:39,235 INFO snapshot-role/MainThread: mgr_check_verify_lvs_set: snapset1 2025-06-28 18:19:39,235 INFO snapshot-role/MainThread: verify snapsset : snapset1 2025-06-28 18:19:39,674 INFO snapshot-role/MainThread: snapsset ok: snapset1 2025-06-28 18:19:39,674 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 'errors': '', 'changed': False} 2025-06-28 18:19:39,674 INFO snapshot-role/MainThread: result: {'changed': False, 'return_code': 0, 'message': '', 'errors': '', 'msg': ''} 2025-06-28 18:19:42,784 INFO snapshot-role/MainThread: run_module() 2025-06-28 18:19:42,787 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': False, 'snapshot_lvm_action': 'snapshot', 'snapshot_lvm_all_vgs': True, 'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '15', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 
'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 'snapshot_lvm_verify_only': False, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'} 2025-06-28 18:19:42,787 INFO snapshot-role/MainThread: get_json_from_args: BEGIN 2025-06-28 18:19:42,824 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid': 'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '5662310400', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '968884224', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': '1W7efB-S01h-s24U-T43G-WUtx-dA3v-QRf3nY', 'lv_name': 'lv3-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv3', 'origin_size': '968884224', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'jxi2kf-Vt6s-vojQ-wS3k-65qH-Uin5-JEfIIY', 'lv_name': 'lv4-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv4', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}] 2025-06-28 
18:19:42,878 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': '15'} 2025-06-28 18:19:42,904 INFO snapshot-role/MainThread: get_json_from_args: lv lv3-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:42,957 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': '15'} 2025-06-28 18:19:42,982 INFO snapshot-role/MainThread: get_json_from_args: lv lv4-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:42,982 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg3', 'vg_uuid': 'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': '12851347456', 'vg_free': '1002438656', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'wdqgo2-uU5H-wlvg-aoYF-y2Qt-4fDN-pOAyBj', 'lv_name': 'lv5-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_size': '583008256', 'origin': 'lv5', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': '8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 
'TvUg1o-wzGo-dQFn-JyNr-Ak2n-f08U-XoAZ7A', 'lv_name': 'lv6-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv6', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'clujZs-C4oH-5Lg0-88kw-Ofm5-Balm-rBrqPw', 'lv_name': 'lv7-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv7', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': '/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'Cek7N7-hMOC-epqr-JdpY-aDBW-qRMX-cDnbP9', 'lv_name': 'lv8-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv8', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}] 2025-06-28 18:19:43,034 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': 
('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': '15'} 2025-06-28 18:19:43,064 INFO snapshot-role/MainThread: get_json_from_args: lv lv5-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:43,120 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': '15'} 2025-06-28 18:19:43,151 INFO snapshot-role/MainThread: get_json_from_args: lv lv6-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:43,206 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': '15'} 2025-06-28 18:19:43,237 INFO snapshot-role/MainThread: get_json_from_args: lv lv7-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:43,293 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': '15'} 2025-06-28 18:19:43,323 INFO snapshot-role/MainThread: get_json_from_args: lv lv8-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:43,323 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '2097152000', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '1451229184', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'CQ7FFq-VkRL-SUQj-hYSb-Woin-YHi2-cFvW5x', 'lv_name': 'lv1-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_path': 
'/dev/test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv1', 'origin_size': '1451229184', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 'lv_size': '4827643904', 'origin': '', 'origin_size': '4827643904', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'aXPoo7-LiNs-8W62-S2aO-QMwN-ln1F-WzAPCu', 'lv_name': 'lv2-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_size': '725614592', 'origin': 'lv2', 'origin_size': '4827643904', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}] 2025-06-28 18:19:43,379 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': '15'} 2025-06-28 18:19:43,404 INFO snapshot-role/MainThread: get_json_from_args: lv lv1-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:43,457 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': '15'} 2025-06-28 18:19:43,484 INFO snapshot-role/MainThread: get_json_from_args: lv lv2-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:43,484 INFO snapshot-role/MainThread: validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': '15'}, {'name': 
('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': '15'}]} 2025-06-28 18:19:43,484 INFO snapshot-role/MainThread: mgr_snapshot_cmd: snapset1 2025-06-28 18:19:43,485 INFO snapshot-role/MainThread: verify snapset : snapset1 2025-06-28 18:19:43,911 INFO snapshot-role/MainThread: snapset ok: snapset1 2025-06-28 18:19:44,035 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 'errors': '', 'changed': False} 2025-06-28 18:19:44,035 INFO snapshot-role/MainThread: result: {'changed': False, 'return_code': 0, 'message': '', 'errors': '', 'msg': ''} 2025-06-28 18:19:46,681 INFO snapshot-role/MainThread: run_module() 2025-06-28 18:19:46,683 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': False, 'snapshot_lvm_action': 'check', 'snapshot_lvm_all_vgs': True, 'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 'snapshot_lvm_verify_only': True, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'} 2025-06-28 18:19:46,683 INFO snapshot-role/MainThread: get_json_from_args: BEGIN 2025-06-28 18:19:46,721 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid':
'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '5662310400', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '968884224', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': '1W7efB-S01h-s24U-T43G-WUtx-dA3v-QRf3nY', 'lv_name': 'lv3-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv3', 'origin_size': '968884224', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'jxi2kf-Vt6s-vojQ-wS3k-65qH-Uin5-JEfIIY', 'lv_name': 'lv4-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv4', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}] 2025-06-28 18:19:46,771 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''} 2025-06-28 18:19:46,802 INFO snapshot-role/MainThread: get_json_from_args: lv lv3-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:46,857 INFO 
snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''} 2025-06-28 18:19:46,883 INFO snapshot-role/MainThread: get_json_from_args: lv lv4-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:46,883 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg3', 'vg_uuid': 'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': '12851347456', 'vg_free': '1002438656', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'wdqgo2-uU5H-wlvg-aoYF-y2Qt-4fDN-pOAyBj', 'lv_name': 'lv5-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_size': '583008256', 'origin': 'lv5', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': '8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'TvUg1o-wzGo-dQFn-JyNr-Ak2n-f08U-XoAZ7A', 'lv_name': 'lv6-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv6', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 
'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'clujZs-C4oH-5Lg0-88kw-Ofm5-Balm-rBrqPw', 'lv_name': 'lv7-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv7', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': '/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'Cek7N7-hMOC-epqr-JdpY-aDBW-qRMX-cDnbP9', 'lv_name': 'lv8-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv8', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}] 2025-06-28 18:19:46,939 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''} 2025-06-28 18:19:46,966 INFO snapshot-role/MainThread: get_json_from_args: lv lv5-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:47,024 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv6',), 
'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''} 2025-06-28 18:19:47,050 INFO snapshot-role/MainThread: get_json_from_args: lv lv6-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:47,103 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''} 2025-06-28 18:19:47,135 INFO snapshot-role/MainThread: get_json_from_args: lv lv7-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:47,189 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''} 2025-06-28 18:19:47,221 INFO snapshot-role/MainThread: get_json_from_args: lv lv8-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:47,222 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '2097152000', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '1451229184', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'CQ7FFq-VkRL-SUQj-hYSb-Woin-YHi2-cFvW5x', 'lv_name': 'lv1-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv1', 'origin_size': '1451229184', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 'lv_size': '4827643904', 
'origin': '', 'origin_size': '4827643904', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'aXPoo7-LiNs-8W62-S2aO-QMwN-ln1F-WzAPCu', 'lv_name': 'lv2-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_size': '725614592', 'origin': 'lv2', 'origin_size': '4827643904', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}] 2025-06-28 18:19:47,283 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''} 2025-06-28 18:19:47,314 INFO snapshot-role/MainThread: get_json_from_args: lv lv1-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:47,371 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''} 2025-06-28 18:19:47,397 INFO snapshot-role/MainThread: get_json_from_args: lv lv2-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:47,397 INFO snapshot-role/MainThread: validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}, {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}, {'name': ('snapshot : 
test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}]} 2025-06-28 18:19:47,397 INFO snapshot-role/MainThread: check_cmd: snapset1 2025-06-28 18:19:47,520 INFO snapshot-role/MainThread: mgr_check_verify_lvs_set: snapset1 2025-06-28 18:19:47,520 INFO snapshot-role/MainThread: verify snapset : snapset1 2025-06-28 18:19:47,979 INFO snapshot-role/MainThread: snapset ok: snapset1 2025-06-28 18:19:47,979 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 'errors': '', 'changed': False} 2025-06-28 18:19:47,979 INFO snapshot-role/MainThread: result: {'changed': False, 'return_code': 0, 'message': '', 'errors': '', 'msg': ''} 2025-06-28 18:19:50,906 INFO snapshot-role/MainThread: run_module() 2025-06-28 18:19:50,909 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': False, 'snapshot_lvm_action': 'remove', 'snapshot_lvm_all_vgs': False, 'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 'snapshot_lvm_verify_only': False, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'} 2025-06-28 18:19:50,909 INFO snapshot-role/MainThread: get_json_from_args: BEGIN 2025-06-28 18:19:50,938 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid': 'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '5662310400', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '968884224', 'pool_lv': '',
'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': '1W7efB-S01h-s24U-T43G-WUtx-dA3v-QRf3nY', 'lv_name': 'lv3-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv3', 'origin_size': '968884224', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'jxi2kf-Vt6s-vojQ-wS3k-65qH-Uin5-JEfIIY', 'lv_name': 'lv4-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv4', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}] 2025-06-28 18:19:50,991 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''} 2025-06-28 18:19:51,022 INFO snapshot-role/MainThread: get_json_from_args: lv lv3-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:51,078 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''} 2025-06-28 18:19:51,102 INFO snapshot-role/MainThread: get_json_from_args: lv lv4-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:51,102 INFO snapshot-role/MainThread: get_json_from_args: 
vg {'vg_name': 'test_vg3', 'vg_uuid': 'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': '12851347456', 'vg_free': '1002438656', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'wdqgo2-uU5H-wlvg-aoYF-y2Qt-4fDN-pOAyBj', 'lv_name': 'lv5-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_size': '583008256', 'origin': 'lv5', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': '8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'TvUg1o-wzGo-dQFn-JyNr-Ak2n-f08U-XoAZ7A', 'lv_name': 'lv6-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv6', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, 
{'lv_uuid': 'clujZs-C4oH-5Lg0-88kw-Ofm5-Balm-rBrqPw', 'lv_name': 'lv7-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv7', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': '/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'Cek7N7-hMOC-epqr-JdpY-aDBW-qRMX-cDnbP9', 'lv_name': 'lv8-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv8', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}] 2025-06-28 18:19:51,154 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''} 2025-06-28 18:19:51,181 INFO snapshot-role/MainThread: get_json_from_args: lv lv5-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:51,231 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''} 2025-06-28 18:19:51,262 INFO snapshot-role/MainThread: get_json_from_args: lv lv6-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:51,318 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 
'percent_space_required': ''} 2025-06-28 18:19:51,349 INFO snapshot-role/MainThread: get_json_from_args: lv lv7-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:51,412 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''} 2025-06-28 18:19:51,437 INFO snapshot-role/MainThread: get_json_from_args: lv lv8-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:51,437 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '2097152000', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '1451229184', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'CQ7FFq-VkRL-SUQj-hYSb-Woin-YHi2-cFvW5x', 'lv_name': 'lv1-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv1', 'origin_size': '1451229184', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 'lv_size': '4827643904', 'origin': '', 'origin_size': '4827643904', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'aXPoo7-LiNs-8W62-S2aO-QMwN-ln1F-WzAPCu', 'lv_name': 'lv2-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_path': 
'/dev/test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_size': '725614592', 'origin': 'lv2', 'origin_size': '4827643904', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}] 2025-06-28 18:19:51,487 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''} 2025-06-28 18:19:51,513 INFO snapshot-role/MainThread: get_json_from_args: lv lv1-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:51,562 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''} 2025-06-28 18:19:51,589 INFO snapshot-role/MainThread: get_json_from_args: lv lv2-snapset_snapset1_1751149174_ is a snapshot - skipping 2025-06-28 18:19:51,589 INFO snapshot-role/MainThread: validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}, {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}]} 2025-06-28 18:19:51,589 INFO snapshot-role/MainThread: remove_cmd: snapset1 2025-06-28 18:19:52,584 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 
'errors': '', 'changed': True} 2025-06-28 18:19:52,584 INFO snapshot-role/MainThread: result: {'changed': True, 'return_code': 0, 'message': '', 'errors': '', 'msg': ''} 2025-06-28 18:19:54,952 INFO snapshot-role/MainThread: run_module() 2025-06-28 18:19:54,955 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': False, 'snapshot_lvm_action': 'remove', 'snapshot_lvm_all_vgs': False, 'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 'snapshot_lvm_verify_only': True, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'} 2025-06-28 18:19:54,955 INFO snapshot-role/MainThread: get_json_from_args: BEGIN 2025-06-28 18:19:54,987 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid': 'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '6736052224', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}] 2025-06-28 18:19:55,039 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': 
''} 2025-06-28 18:19:55,092 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''} 2025-06-28 18:19:55,092 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg3', 'vg_uuid': 'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': '12851347456', 'vg_free': '3196059648', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': '8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': '/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}] 2025-06-28 18:19:55,148 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''} 2025-06-28 18:19:55,203 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': 
('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''} 2025-06-28 18:19:55,259 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''} 2025-06-28 18:19:55,321 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''} 2025-06-28 18:19:55,322 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '3359637504', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 'lv_size': '4827643904', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}] 2025-06-28 18:19:55,379 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''} 2025-06-28 18:19:55,439 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''} 2025-06-28 18:19:55,440 INFO snapshot-role/MainThread: validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}, {'name': ('snapshot : test_vg2/lv4',), 
'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}]} 2025-06-28 18:19:55,440 INFO snapshot-role/MainThread: remove_cmd: snapset1 2025-06-28 18:19:55,567 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 'errors': '', 'changed': False} 2025-06-28 18:19:55,567 INFO snapshot-role/MainThread: result: {'changed': False, 'return_code': 0, 'message': '', 'errors': '', 'msg': ''} 2025-06-28 18:19:57,961 INFO snapshot-role/MainThread: run_module() 2025-06-28 18:19:57,963 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': False, 'snapshot_lvm_action': 'remove', 'snapshot_lvm_all_vgs': False, 'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 'snapshot_lvm_verify_only': False, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'} 2025-06-28 18:19:57,963 INFO snapshot-role/MainThread: get_json_from_args: BEGIN 2025-06-28 18:19:57,992 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid': 'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '6736052224', 'vg_extent_size': 
'4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}] 2025-06-28 18:19:58,052 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''} 2025-06-28 18:19:58,110 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''} 2025-06-28 18:19:58,110 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg3', 'vg_uuid': 'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': '12851347456', 'vg_free': '3196059648', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': '8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 
'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': '/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}] 2025-06-28 18:19:58,163 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''} 2025-06-28 18:19:58,211 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''} 2025-06-28 18:19:58,268 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''} 2025-06-28 18:19:58,320 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''} 2025-06-28 18:19:58,320 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '3359637504', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 
'lv_size': '4827643904', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}] 2025-06-28 18:19:58,375 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''} 2025-06-28 18:19:58,429 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''} 2025-06-28 18:19:58,429 INFO snapshot-role/MainThread: validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}, {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}]} 2025-06-28 18:19:58,429 INFO snapshot-role/MainThread: remove_cmd: snapset1 2025-06-28 18:19:58,549 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 'errors': '', 'changed': False} 2025-06-28 18:19:58,549 INFO snapshot-role/MainThread: result: {'changed': False, 'return_code': 0, 'message': '', 'errors': '', 'msg': ''} 2025-06-28 18:20:01,323 INFO snapshot-role/MainThread: run_module() 2025-06-28 18:20:01,326 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': 
False, 'snapshot_lvm_action': 'remove', 'snapshot_lvm_all_vgs': False, 'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 'snapshot_lvm_verify_only': True, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'} 2025-06-28 18:20:01,327 INFO snapshot-role/MainThread: get_json_from_args: BEGIN 2025-06-28 18:20:01,358 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid': 'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '6736052224', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}] 2025-06-28 18:20:01,415 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''} 2025-06-28 18:20:01,471 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''} 2025-06-28 18:20:01,471 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg3', 'vg_uuid': 
'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': '12851347456', 'vg_free': '3196059648', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': '8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': '/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}] 2025-06-28 18:20:01,536 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''} 2025-06-28 18:20:01,596 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''} 2025-06-28 18:20:01,649 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''} 2025-06-28 18:20:01,706 INFO 
snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''} 2025-06-28 18:20:01,707 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '3359637504', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 'lv_size': '4827643904', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}] 2025-06-28 18:20:01,764 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''} 2025-06-28 18:20:01,822 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''} 2025-06-28 18:20:01,823 INFO snapshot-role/MainThread: validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}, {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 
'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}]} 2025-06-28 18:20:01,823 INFO snapshot-role/MainThread: remove_cmd: snapset1 2025-06-28 18:20:01,942 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 'errors': '', 'changed': False} 2025-06-28 18:20:01,942 INFO snapshot-role/MainThread: result: {'changed': False, 'return_code': 0, 'message': '', 'errors': '', 'msg': ''} + cat /etc/lvm/devices/system.devices # LVM uses devices listed in this file. # Created by LVM command vgcreate pid 8426 at Sat Jun 28 18:19:23 2025 # HASH=3642833769 PRODUCT_UUID=ec2257d3-6085-ae99-95b3-ebfab5fe05d2 VERSION=1.1.23 IDTYPE=sys_wwid IDNAME=naa.6001405a9efc0e1911c4201a970a6f85 DEVNAME=/dev/sdg PVID=fGZQyexmygUivBN2OeyOC9UQbrOSL6Yg IDTYPE=sys_wwid IDNAME=naa.60014058181fbe60fbb48f6bf65e97b7 DEVNAME=/dev/sdh PVID=4xDnwooI2hV7TQpKHN0kasYYMi7EXCjq IDTYPE=sys_wwid IDNAME=naa.600140536dcfebe092746238bb16a3fa DEVNAME=/dev/sdi PVID=s2n1Tr6dKzyCoZ1UN6yOuMAF23fu2ozK IDTYPE=sys_wwid IDNAME=naa.600140580c834ee801b48198b71671c5 DEVNAME=/dev/sdj PVID=WrvtiyQHAQm4D0TKnuJDRehcWFGRqc2u IDTYPE=sys_wwid IDNAME=naa.6001405acd2ba9b1a974f55a9704061c DEVNAME=/dev/sdd PVID=g4qDrcspQRjs5FniDaADHDx4Xf8P9sOS IDTYPE=sys_wwid IDNAME=naa.6001405378e6ca643c443e0b9c840399 DEVNAME=/dev/sde PVID=gDmTuzMWP5wqBf9PmL2JdYTsUYgCGQFw IDTYPE=sys_wwid IDNAME=naa.6001405f858954a0e784149995a198ad DEVNAME=/dev/sdf PVID=cvneN6LIbklm4KHpVz00roxXzwTXdY8B IDTYPE=sys_wwid IDNAME=naa.60014058847ce6dd73d4f01931e495d9 DEVNAME=/dev/sda PVID=PRn8hB5TjCWyYHYuFPLMZFk3jUqH1ryT IDTYPE=sys_wwid IDNAME=naa.6001405c48b47cd2cda4408882faf8c6 DEVNAME=/dev/sdb PVID=wYtbK0nntGubftNog8syUu4xOsdENa3q IDTYPE=sys_wwid 
IDNAME=naa.6001405582e0de585294686b36ae1d1e DEVNAME=/dev/sdc PVID=93Kz1c1GE6bJpqEiUO0JUc6pm1Y5YKad ++ lsblk -l -p -o NAME + for dev in $(lsblk -l -p -o NAME) + '[' NAME = NAME ']' + continue + for dev in $(lsblk -l -p -o NAME) + '[' /dev/sda = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/sda /dev/sda: UUID="PRn8hB-5TjC-WyYH-YuFP-LMZF-k3jU-qH1ryT" TYPE="LVM2_member" + echo blkid info without cache blkid info without cache + blkid -p /dev/sda /dev/sda: UUID="PRn8hB-5TjC-WyYH-YuFP-LMZF-k3jU-qH1ryT" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/sdb = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/sdb /dev/sdb: UUID="wYtbK0-nntG-ubft-Nog8-syUu-4xOs-dENa3q" TYPE="LVM2_member" + echo blkid info without cache blkid info without cache + blkid -p /dev/sdb /dev/sdb: UUID="wYtbK0-nntG-ubft-Nog8-syUu-4xOs-dENa3q" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/sdc = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/sdc /dev/sdc: UUID="93Kz1c-1GE6-bJpq-EiUO-0JUc-6pm1-Y5YKad" TYPE="LVM2_member" + echo blkid info without cache blkid info without cache + blkid -p /dev/sdc /dev/sdc: UUID="93Kz1c-1GE6-bJpq-EiUO-0JUc-6pm1-Y5YKad" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/sdd = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/sdd /dev/sdd: UUID="g4qDrc-spQR-js5F-niDa-ADHD-x4Xf-8P9sOS" TYPE="LVM2_member" + echo blkid info without cache blkid info without cache + blkid -p /dev/sdd /dev/sdd: UUID="g4qDrc-spQR-js5F-niDa-ADHD-x4Xf-8P9sOS" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/sde = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/sde /dev/sde: UUID="gDmTuz-MWP5-wqBf-9PmL-2JdY-TsUY-gCGQFw" TYPE="LVM2_member" + echo blkid info without cache blkid info 
without cache + blkid -p /dev/sde /dev/sde: UUID="gDmTuz-MWP5-wqBf-9PmL-2JdY-TsUY-gCGQFw" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/sdf = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/sdf /dev/sdf: UUID="cvneN6-LIbk-lm4K-HpVz-00ro-xXzw-TXdY8B" TYPE="LVM2_member" + echo blkid info without cache blkid info without cache + blkid -p /dev/sdf /dev/sdf: UUID="cvneN6-LIbk-lm4K-HpVz-00ro-xXzw-TXdY8B" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/sdg = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/sdg /dev/sdg: UUID="fGZQye-xmyg-UivB-N2Oe-yOC9-UQbr-OSL6Yg" TYPE="LVM2_member" + echo blkid info without cache blkid info without cache + blkid -p /dev/sdg /dev/sdg: UUID="fGZQye-xmyg-UivB-N2Oe-yOC9-UQbr-OSL6Yg" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/sdh = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/sdh /dev/sdh: UUID="4xDnwo-oI2h-V7TQ-pKHN-0kas-YYMi-7EXCjq" TYPE="LVM2_member" + echo blkid info without cache blkid info without cache + blkid -p /dev/sdh /dev/sdh: UUID="4xDnwo-oI2h-V7TQ-pKHN-0kas-YYMi-7EXCjq" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/sdi = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/sdi /dev/sdi: UUID="s2n1Tr-6dKz-yCoZ-1UN6-yOuM-AF23-fu2ozK" TYPE="LVM2_member" + echo blkid info without cache blkid info without cache + blkid -p /dev/sdi /dev/sdi: UUID="s2n1Tr-6dKz-yCoZ-1UN6-yOuM-AF23-fu2ozK" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/sdj = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/sdj /dev/sdj: UUID="Wrvtiy-QHAQ-m4D0-TKnu-JDRe-hcWF-GRqc2u" TYPE="LVM2_member" + echo blkid info without cache blkid info without cache + blkid -p /dev/sdj /dev/sdj: 
UUID="Wrvtiy-QHAQ-m4D0-TKnu-JDRe-hcWF-GRqc2u" VERSION="LVM2 001" TYPE="LVM2_member" USAGE="raid" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/sdk = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/sdk + : + echo blkid info without cache blkid info without cache + blkid -p /dev/sdk + : + for dev in $(lsblk -l -p -o NAME) + '[' /dev/sdl = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/sdl + : + echo blkid info without cache blkid info without cache + blkid -p /dev/sdl + : + for dev in $(lsblk -l -p -o NAME) + '[' /dev/xvda = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/xvda /dev/xvda: PTUUID="91c3c0f1-4957-4f21-b15a-28e9016b79c2" PTTYPE="gpt" + echo blkid info without cache blkid info without cache + blkid -p /dev/xvda /dev/xvda: PTUUID="91c3c0f1-4957-4f21-b15a-28e9016b79c2" PTTYPE="gpt" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/xvda1 = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/xvda1 /dev/xvda1: PARTUUID="fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda" + echo blkid info without cache blkid info without cache + blkid -p /dev/xvda1 /dev/xvda1: PART_ENTRY_SCHEME="gpt" PART_ENTRY_UUID="fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda" PART_ENTRY_TYPE="21686148-6449-6e6f-744e-656564454649" PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048" PART_ENTRY_SIZE="2048" PART_ENTRY_DISK="202:0" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/xvda2 = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/xvda2 /dev/xvda2: UUID="8959a9f3-59d4-4eb7-8e53-e856bbc805e9" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="782cc2d2-7936-4e3f-9cb4-9758a83f53fa" + echo blkid info without cache blkid info without cache + blkid -p /dev/xvda2 /dev/xvda2: UUID="8959a9f3-59d4-4eb7-8e53-e856bbc805e9" VERSION="1.0" FSBLOCKSIZE="4096" BLOCK_SIZE="4096" FSLASTBLOCK="65535483" FSSIZE="268433338368" TYPE="ext4" USAGE="filesystem" PART_ENTRY_SCHEME="gpt" PART_ENTRY_UUID="782cc2d2-7936-4e3f-9cb4-9758a83f53fa" 
PART_ENTRY_TYPE="0fc63daf-8483-4772-8e79-3d69d8477de4" PART_ENTRY_NUMBER="2" PART_ENTRY_OFFSET="4096" PART_ENTRY_SIZE="524283871" PART_ENTRY_DISK="202:0" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/zram0 = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/zram0 /dev/zram0: LABEL="zram0" UUID="9e4b39b6-8d8e-46c1-8981-c482cb670ee6" TYPE="swap" + echo blkid info without cache blkid info without cache + blkid -p /dev/zram0 /dev/zram0: ENDIANNESS="LITTLE" FSBLOCKSIZE="4096" FSSIZE="3894407168" FSLASTBLOCK="950784" LABEL="zram0" UUID="9e4b39b6-8d8e-46c1-8981-c482cb670ee6" VERSION="1" TYPE="swap" USAGE="other" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/mapper/test_vg3-lv8 = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/mapper/test_vg3-lv8 /dev/mapper/test_vg3-lv8: UUID="77628583-14fb-431d-8722-115eadb2c621" BLOCK_SIZE="512" TYPE="xfs" + echo blkid info without cache blkid info without cache + blkid -p /dev/mapper/test_vg3-lv8 /dev/mapper/test_vg3-lv8: UUID="77628583-14fb-431d-8722-115eadb2c621" FSSIZE="1220542464" FSLASTBLOCK="314368" FSBLOCKSIZE="4096" BLOCK_SIZE="512" TYPE="xfs" USAGE="filesystem" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/mapper/test_vg3-lv7 = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/mapper/test_vg3-lv7 /dev/mapper/test_vg3-lv7: UUID="596601cb-c0d2-421d-af31-da3ca0a3650c" BLOCK_SIZE="512" TYPE="xfs" + echo blkid info without cache blkid info without cache + blkid -p /dev/mapper/test_vg3-lv7 /dev/mapper/test_vg3-lv7: UUID="596601cb-c0d2-421d-af31-da3ca0a3650c" FSSIZE="1220542464" FSLASTBLOCK="314368" FSBLOCKSIZE="4096" BLOCK_SIZE="512" TYPE="xfs" USAGE="filesystem" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/mapper/test_vg3-lv6 = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/mapper/test_vg3-lv6 /dev/mapper/test_vg3-lv6: UUID="84001af2-01d9-4734-b51a-79efe7f59395" BLOCK_SIZE="512" TYPE="xfs" + echo blkid info without cache blkid info 
without cache + blkid -p /dev/mapper/test_vg3-lv6 /dev/mapper/test_vg3-lv6: UUID="84001af2-01d9-4734-b51a-79efe7f59395" FSSIZE="3149922304" FSLASTBLOCK="785408" FSBLOCKSIZE="4096" BLOCK_SIZE="512" TYPE="xfs" USAGE="filesystem" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/mapper/test_vg3-lv5 = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/mapper/test_vg3-lv5 /dev/mapper/test_vg3-lv5: UUID="eb47ddaa-1d93-495c-b8f9-7231835c82c5" BLOCK_SIZE="512" TYPE="xfs" + echo blkid info without cache blkid info without cache + blkid -p /dev/mapper/test_vg3-lv5 /dev/mapper/test_vg3-lv5: UUID="eb47ddaa-1d93-495c-b8f9-7231835c82c5" FSSIZE="3795845120" FSLASTBLOCK="943104" FSBLOCKSIZE="4096" BLOCK_SIZE="512" TYPE="xfs" USAGE="filesystem" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/mapper/test_vg2-lv4 = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/mapper/test_vg2-lv4 /dev/mapper/test_vg2-lv4: UUID="8ff069e1-541f-476d-a94a-129e7396e539" BLOCK_SIZE="512" TYPE="xfs" + echo blkid info without cache blkid info without cache + blkid -p /dev/mapper/test_vg2-lv4 /dev/mapper/test_vg2-lv4: UUID="8ff069e1-541f-476d-a94a-129e7396e539" FSSIZE="1866465280" FSLASTBLOCK="472064" FSBLOCKSIZE="4096" BLOCK_SIZE="512" TYPE="xfs" USAGE="filesystem" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/mapper/test_vg2-lv3 = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/mapper/test_vg2-lv3 /dev/mapper/test_vg2-lv3: UUID="13c0d833-a00e-49fa-a58c-aa045851ccb6" BLOCK_SIZE="512" TYPE="xfs" + echo blkid info without cache blkid info without cache + blkid -p /dev/mapper/test_vg2-lv3 /dev/mapper/test_vg2-lv3: UUID="13c0d833-a00e-49fa-a58c-aa045851ccb6" FSSIZE="901775360" FSLASTBLOCK="236544" FSBLOCKSIZE="4096" BLOCK_SIZE="512" TYPE="xfs" USAGE="filesystem" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/mapper/test_vg1-lv2 = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/mapper/test_vg1-lv2 
/dev/mapper/test_vg1-lv2: UUID="991d22ac-80b0-450e-8c69-466187ba5696" BLOCK_SIZE="512" TYPE="xfs" + echo blkid info without cache blkid info without cache + blkid -p /dev/mapper/test_vg1-lv2 /dev/mapper/test_vg1-lv2: UUID="991d22ac-80b0-450e-8c69-466187ba5696" FSSIZE="4760535040" FSLASTBLOCK="1178624" FSBLOCKSIZE="4096" BLOCK_SIZE="512" TYPE="xfs" USAGE="filesystem" + for dev in $(lsblk -l -p -o NAME) + '[' /dev/mapper/test_vg1-lv1 = NAME ']' + echo blkid info with cache blkid info with cache + blkid /dev/mapper/test_vg1-lv1 /dev/mapper/test_vg1-lv1: UUID="d16d04d4-8c39-458d-91ab-62a71063488e" BLOCK_SIZE="512" TYPE="xfs" + echo blkid info without cache blkid info without cache + blkid -p /dev/mapper/test_vg1-lv1 /dev/mapper/test_vg1-lv1: UUID="d16d04d4-8c39-458d-91ab-62a71063488e" FSSIZE="1384120320" FSLASTBLOCK="354304" FSBLOCKSIZE="4096" BLOCK_SIZE="512" TYPE="xfs" USAGE="filesystem" + blkid -g + echo lsblk after garbage collect lsblk after garbage collect + lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE NAME="/dev/sda" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/mapper/test_vg1-lv2" TYPE="lvm" SIZE="4827643904" FSTYPE="xfs" NAME="/dev/sdb" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/mapper/test_vg1-lv2" TYPE="lvm" SIZE="4827643904" FSTYPE="xfs" NAME="/dev/mapper/test_vg1-lv1" TYPE="lvm" SIZE="1451229184" FSTYPE="xfs" NAME="/dev/sdc" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/sdd" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/mapper/test_vg2-lv4" TYPE="lvm" SIZE="1933574144" FSTYPE="xfs" NAME="/dev/mapper/test_vg2-lv3" TYPE="lvm" SIZE="968884224" FSTYPE="xfs" NAME="/dev/sde" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/sdf" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/sdg" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/mapper/test_vg3-lv8" TYPE="lvm" SIZE="1287651328" FSTYPE="xfs" NAME="/dev/mapper/test_vg3-lv7" TYPE="lvm" SIZE="1287651328" 
FSTYPE="xfs" NAME="/dev/mapper/test_vg3-lv6" TYPE="lvm" SIZE="3217031168" FSTYPE="xfs" NAME="/dev/sdh" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/mapper/test_vg3-lv6" TYPE="lvm" SIZE="3217031168" FSTYPE="xfs" NAME="/dev/sdi" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/mapper/test_vg3-lv5" TYPE="lvm" SIZE="3862953984" FSTYPE="xfs" NAME="/dev/sdj" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/mapper/test_vg3-lv5" TYPE="lvm" SIZE="3862953984" FSTYPE="xfs" NAME="/dev/sdk" TYPE="disk" SIZE="3221225472" FSTYPE="" NAME="/dev/sdl" TYPE="disk" SIZE="3221225472" FSTYPE="" NAME="/dev/xvda" TYPE="disk" SIZE="268435456000" FSTYPE="" NAME="/dev/xvda1" TYPE="part" SIZE="1048576" FSTYPE="" NAME="/dev/xvda2" TYPE="part" SIZE="268433341952" FSTYPE="ext4" NAME="/dev/zram0" TYPE="disk" SIZE="3894411264" FSTYPE="swap" + blkid -s none + echo lsblk after cache flush lsblk after cache flush + lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE NAME="/dev/sda" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/mapper/test_vg1-lv2" TYPE="lvm" SIZE="4827643904" FSTYPE="xfs" NAME="/dev/sdb" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/mapper/test_vg1-lv2" TYPE="lvm" SIZE="4827643904" FSTYPE="xfs" NAME="/dev/mapper/test_vg1-lv1" TYPE="lvm" SIZE="1451229184" FSTYPE="xfs" NAME="/dev/sdc" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/sdd" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/mapper/test_vg2-lv4" TYPE="lvm" SIZE="1933574144" FSTYPE="xfs" NAME="/dev/mapper/test_vg2-lv3" TYPE="lvm" SIZE="968884224" FSTYPE="xfs" NAME="/dev/sde" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/sdf" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/sdg" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/mapper/test_vg3-lv8" TYPE="lvm" SIZE="1287651328" FSTYPE="xfs" NAME="/dev/mapper/test_vg3-lv7" TYPE="lvm" SIZE="1287651328" FSTYPE="xfs" 
NAME="/dev/mapper/test_vg3-lv6" TYPE="lvm" SIZE="3217031168" FSTYPE="xfs" NAME="/dev/sdh" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/mapper/test_vg3-lv6" TYPE="lvm" SIZE="3217031168" FSTYPE="xfs" NAME="/dev/sdi" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/mapper/test_vg3-lv5" TYPE="lvm" SIZE="3862953984" FSTYPE="xfs" NAME="/dev/sdj" TYPE="disk" SIZE="3221225472" FSTYPE="LVM2_member" NAME="/dev/mapper/test_vg3-lv5" TYPE="lvm" SIZE="3862953984" FSTYPE="xfs" NAME="/dev/sdk" TYPE="disk" SIZE="3221225472" FSTYPE="" NAME="/dev/sdl" TYPE="disk" SIZE="3221225472" FSTYPE="" NAME="/dev/xvda" TYPE="disk" SIZE="268435456000" FSTYPE="" NAME="/dev/xvda1" TYPE="part" SIZE="1048576" FSTYPE="" NAME="/dev/xvda2" TYPE="part" SIZE="268433341952" FSTYPE="ext4" NAME="/dev/zram0" TYPE="disk" SIZE="3894411264" FSTYPE="swap" + cat /tmp/blivet.log 2025-06-28 18:18:51,479 INFO blivet/MainThread: sys.argv = ['/tmp/ansible_fedora.linux_system_roles.blivet_payload_87gzpafp/ansible_fedora.linux_system_roles.blivet_payload.zip/ansible_collections/fedora/linux_system_roles/plugins/modules/blivet.py'] 2025-06-28 18:18:57,178 INFO blivet/MainThread: sys.argv = ['/tmp/ansible_fedora.linux_system_roles.blivet_payload_sbvzybbq/ansible_fedora.linux_system_roles.blivet_payload.zip/ansible_collections/fedora/linux_system_roles/plugins/modules/blivet.py'] 2025-06-28 18:19:07,697 INFO blivet/MainThread: sys.argv = ['/tmp/ansible_fedora.linux_system_roles.blivet_payload_q_oih1un/ansible_fedora.linux_system_roles.blivet_payload.zip/ansible_collections/fedora/linux_system_roles/plugins/modules/blivet.py'] 2025-06-28 18:19:07,714 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 
2025-06-28 18:19:07,728 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ; 2025-06-28 18:19:07,729 DEBUG blivet/MainThread: get_format('ext4') returning Ext4FS instance with object id 0 2025-06-28 18:19:07,732 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ; 2025-06-28 18:19:07,732 DEBUG blivet/MainThread: trying to set new default fstype to 'ext4' 2025-06-28 18:19:07,735 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ; 2025-06-28 18:19:07,735 DEBUG blivet/MainThread: get_format('ext4') returning Ext4FS instance with object id 1 2025-06-28 18:19:07,738 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ; 2025-06-28 18:19:07,738 INFO blivet/MainThread: Fstab file '' does not exist, setting fstab read path to None 2025-06-28 18:19:07,738 INFO program/MainThread: Running... lsblk --bytes -a -o NAME,SIZE,OWNER,GROUP,MODE,FSTYPE,LABEL,UUID,PARTUUID,MOUNTPOINT 2025-06-28 18:19:07,762 INFO program/MainThread: stdout: 2025-06-28 18:19:07,762 INFO program/MainThread: NAME SIZE OWNER GROUP MODE FSTYPE LABEL UUID PARTUUID MOUNTPOINT 2025-06-28 18:19:07,762 INFO program/MainThread: sda 3221225472 root disk brw-rw---- 2025-06-28 18:19:07,762 INFO program/MainThread: sdb 3221225472 root disk brw-rw---- 2025-06-28 18:19:07,762 INFO program/MainThread: sdc 3221225472 root disk brw-rw---- 2025-06-28 18:19:07,762 INFO program/MainThread: sdd 3221225472 root disk brw-rw---- 2025-06-28 18:19:07,762 INFO program/MainThread: sde 3221225472 root disk brw-rw---- 2025-06-28 18:19:07,762 INFO program/MainThread: sdf 3221225472 root disk brw-rw---- 2025-06-28 18:19:07,762 INFO program/MainThread: sdg 3221225472 root disk brw-rw---- 2025-06-28 18:19:07,762 INFO program/MainThread: sdh 3221225472 root disk brw-rw---- 2025-06-28 18:19:07,762 INFO program/MainThread: sdi 3221225472 root disk brw-rw---- 2025-06-28 18:19:07,762 INFO program/MainThread: sdj 3221225472 root disk brw-rw---- 2025-06-28 18:19:07,762 INFO program/MainThread: sdk 3221225472 root 
disk brw-rw---- 2025-06-28 18:19:07,762 INFO program/MainThread: sdl 3221225472 root disk brw-rw---- 2025-06-28 18:19:07,762 INFO program/MainThread: xvda 268435456000 root disk brw-rw---- 2025-06-28 18:19:07,762 INFO program/MainThread: |-xvda1 1048576 root disk brw-rw---- fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda 2025-06-28 18:19:07,762 INFO program/MainThread: `-xvda2 268433341952 root disk brw-rw---- ext4 8959a9f3-59d4-4eb7-8e53-e856bbc805e9 782cc2d2-7936-4e3f-9cb4-9758a83f53fa / 2025-06-28 18:19:07,763 INFO program/MainThread: zram0 3894411264 root disk brw-rw---- swap zram0 9e4b39b6-8d8e-46c1-8981-c482cb670ee6 [SWAP] 2025-06-28 18:19:07,763 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:07,763 DEBUG blivet/MainThread: lsblk output: NAME SIZE OWNER GROUP MODE FSTYPE LABEL UUID PARTUUID MOUNTPOINT sda 3221225472 root disk brw-rw---- sdb 3221225472 root disk brw-rw---- sdc 3221225472 root disk brw-rw---- sdd 3221225472 root disk brw-rw---- sde 3221225472 root disk brw-rw---- sdf 3221225472 root disk brw-rw---- sdg 3221225472 root disk brw-rw---- sdh 3221225472 root disk brw-rw---- sdi 3221225472 root disk brw-rw---- sdj 3221225472 root disk brw-rw---- sdk 3221225472 root disk brw-rw---- sdl 3221225472 root disk brw-rw---- xvda 268435456000 root disk brw-rw---- |-xvda1 1048576 root disk brw-rw---- fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda `-xvda2 268433341952 root disk brw-rw---- ext4 8959a9f3-59d4-4eb7-8e53-e856bbc805e9 782cc2d2-7936-4e3f-9cb4-9758a83f53fa / zram0 3894411264 root disk brw-rw---- swap zram0 9e4b39b6-8d8e-46c1-8981-c482cb670ee6 [SWAP] 2025-06-28 18:19:07,763 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:07,763 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:07,763 INFO blivet/MainThread: resetting Blivet (version 3.12.1) instance 2025-06-28 18:19:07,763 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:07,763 INFO blivet/MainThread: 
DeviceTree.populate: ignored_disks is [] ; exclusive_disks is [] 2025-06-28 18:19:07,764 WARNING blivet/MainThread: Failed to call the update_volume_info method: libstoragemgmt functionality not available 2025-06-28 18:19:07,764 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:07,774 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:07,785 INFO blivet/MainThread: devices to scan: ['sda', 'sdb', 'sdk', 'sdl', 'sdc', 'sdd', 'sde', 'sdf', 'sdg', 'sdh', 'sdi', 'sdj', 'xvda', 'xvda1', 'xvda2', 'zram0'] 2025-06-28 18:19:07,789 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sda ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/scsi-360014058847ce6dd73d4f01931e495d9 ' '/dev/disk/by-id/wwn-0x60014058847ce6dd73d4f01931e495d9 ' '/dev/disk/by-diskseq/3', 'DEVNAME': '/dev/sda', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda', 'DEVTYPE': 'disk', 'DISKSEQ': '3', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk0', 'ID_MODEL_ENC': 'disk0\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': '8847ce6d-d73d-4f01-931e-495d93c2876b', 'ID_SERIAL': '360014058847ce6dd73d4f01931e495d9', 'ID_SERIAL_SHORT': '60014058847ce6dd73d4f01931e495d9', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x60014058847ce6dd', 'ID_WWN_VENDOR_EXTENSION': '0x73d4f01931e495d9', 'ID_WWN_WITH_EXTENSION': '0x60014058847ce6dd73d4f01931e495d9', 'MAJOR': '8', 'MINOR': '0', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sda', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193406043'} ; 2025-06-28 18:19:07,789 INFO blivet/MainThread: scanning sda (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda)... 
2025-06-28 18:19:07,790 INFO program/MainThread: Running [3] lvm lvs --noheadings --nosuffix --nameprefixes --unquoted --units=b -a -o vg_name,lv_name,lv_uuid,lv_size,lv_attr,segtype,origin,pool_lv,data_lv,metadata_lv,role,move_pv,data_percent,metadata_percent,copy_percent,lv_tags --config=log {level=7 file=/tmp/lvm.log syslog=0} ...
2025-06-28 18:19:07,824 INFO program/MainThread: stdout[3]:
2025-06-28 18:19:07,824 INFO program/MainThread: stderr[3]:
2025-06-28 18:19:07,824 INFO program/MainThread: ...done [3] (exit code: 0)
2025-06-28 18:19:07,829 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sda ; incomplete: False ; hidden: False ;
2025-06-28 18:19:07,832 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:07,832 INFO program/MainThread: Running [4] mdadm --version ...
2025-06-28 18:19:07,841 INFO program/MainThread: stdout[4]:
2025-06-28 18:19:07,841 INFO program/MainThread: stderr[4]: mdadm - v4.3 - 2024-02-15
2025-06-28 18:19:07,841 INFO program/MainThread: ...done [4] (exit code: 0)
2025-06-28 18:19:07,842 INFO program/MainThread: Running [5] dmsetup --version ...
2025-06-28 18:19:07,847 INFO program/MainThread: stdout[5]: Library version: 1.02.204 (2025-01-14)
Driver version: 4.49.0
2025-06-28 18:19:07,847 INFO program/MainThread: stderr[5]:
2025-06-28 18:19:07,847 INFO program/MainThread: ...done [5] (exit code: 0)
2025-06-28 18:19:07,998 INFO blivet/MainThread: failed to get initiator name from iscsi firmware: UDisks iSCSI functionality not available
2025-06-28 18:19:07,999 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/udev.py:1087: DeprecationWarning: Will be removed in 1.0. Access properties with Device.properties.
while device: 2025-06-28 18:19:08,005 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sda ; 2025-06-28 18:19:08,006 INFO blivet/MainThread: sda is a disk 2025-06-28 18:19:08,006 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:08,006 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 3 2025-06-28 18:19:08,006 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 4 2025-06-28 18:19:08,010 DEBUG blivet/MainThread: DiskDevice._set_format: sda ; type: None ; current: None ; 2025-06-28 18:19:08,014 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sda ; status: True ; 2025-06-28 18:19:08,014 DEBUG blivet/MainThread: sda sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda 2025-06-28 18:19:08,018 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sda ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda ; 2025-06-28 18:19:08,018 DEBUG blivet/MainThread: updated sda size to 3 GiB (3 GiB) 2025-06-28 18:19:08,018 INFO blivet/MainThread: added disk sda (id 2) to device tree 2025-06-28 18:19:08,018 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa3b8c0) -- name = sda status = True id = 2 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 0 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda target size = 3 GiB path = /dev/sda format args = [] original_format = None removable = False wwn = 60014058847ce6dd73d4f01931e495d9 2025-06-28 
18:19:08,022 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sda ; 2025-06-28 18:19:08,022 DEBUG blivet/MainThread: no type or existing type for sda, bailing 2025-06-28 18:19:08,026 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdb ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/scsi-36001405c48b47cd2cda4408882faf8c6 ' '/dev/disk/by-id/wwn-0x6001405c48b47cd2cda4408882faf8c6 ' '/dev/disk/by-diskseq/4', 'DEVNAME': '/dev/sdb', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb', 'DEVTYPE': 'disk', 'DISKSEQ': '4', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk1', 'ID_MODEL_ENC': 'disk1\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': 'c48b47cd-2cda-4408-882f-af8c68d83c74', 'ID_SERIAL': '36001405c48b47cd2cda4408882faf8c6', 'ID_SERIAL_SHORT': '6001405c48b47cd2cda4408882faf8c6', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x6001405c48b47cd2', 'ID_WWN_VENDOR_EXTENSION': '0xcda4408882faf8c6', 'ID_WWN_WITH_EXTENSION': '0x6001405c48b47cd2cda4408882faf8c6', 'MAJOR': '8', 'MINOR': '16', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdb', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193440060'} ; 2025-06-28 18:19:08,026 INFO blivet/MainThread: scanning sdb (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb)... 
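[Editor's note] The `lvm lvs --noheadings --nosuffix --nameprefixes --unquoted` invocation logged above emits one line per LV as space-separated `LVM2_KEY=value` pairs. As an illustrative sketch (not blivet's actual parser; it assumes no values contain embedded spaces, which `--unquoted` tags or names could break), such a line can be turned into a dict like this:

```python
def parse_lvs_line(line: str) -> dict:
    """Parse one `lvs --noheadings --nameprefixes --unquoted` output line
    into a dict, stripping the LVM2_ prefix and lowercasing each key."""
    info = {}
    for pair in line.split():
        key, _, value = pair.partition("=")
        info[key.removeprefix("LVM2_").lower()] = value
    return info

# Hypothetical sample line; the test run above had no LVs yet (empty stdout[3]).
sample = "LVM2_VG_NAME=test_vg1 LVM2_LV_NAME=lv1 LVM2_LV_SIZE=3221225472 LVM2_SEGTYPE=linear"
print(parse_lvs_line(sample))
```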
2025-06-28 18:19:08,029 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdb ; incomplete: False ; hidden: False ; 2025-06-28 18:19:08,032 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:08,037 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdb ; 2025-06-28 18:19:08,037 INFO blivet/MainThread: sdb is a disk 2025-06-28 18:19:08,037 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 8 2025-06-28 18:19:08,038 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 9 2025-06-28 18:19:08,041 DEBUG blivet/MainThread: DiskDevice._set_format: sdb ; type: None ; current: None ; 2025-06-28 18:19:08,044 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdb ; status: True ; 2025-06-28 18:19:08,044 DEBUG blivet/MainThread: sdb sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb 2025-06-28 18:19:08,048 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdb ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb ; 2025-06-28 18:19:08,048 DEBUG blivet/MainThread: updated sdb size to 3 GiB (3 GiB) 2025-06-28 18:19:08,048 INFO blivet/MainThread: added disk sdb (id 7) to device tree 2025-06-28 18:19:08,048 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa96710) -- name = sdb status = True id = 7 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 16 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb target size = 3 GiB path = /dev/sdb format args = [] original_format = None removable = False wwn = 6001405c48b47cd2cda4408882faf8c6 2025-06-28 18:19:08,052 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdb ; 2025-06-28 18:19:08,052 DEBUG blivet/MainThread: no type or existing type for sdb, bailing 
2025-06-28 18:19:08,055 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdk ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-diskseq/13 ' '/dev/disk/by-id/wwn-0x600140532b7553a2ede45408ac592f3f ' '/dev/disk/by-id/scsi-3600140532b7553a2ede45408ac592f3f', 'DEVNAME': '/dev/sdk', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk', 'DEVTYPE': 'disk', 'DISKSEQ': '13', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk10', 'ID_MODEL_ENC': 'disk10\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': '32b7553a-2ede-4540-8ac5-92f3f51fb680', 'ID_SERIAL': '3600140532b7553a2ede45408ac592f3f', 'ID_SERIAL_SHORT': '600140532b7553a2ede45408ac592f3f', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x600140532b7553a2', 'ID_WWN_VENDOR_EXTENSION': '0xede45408ac592f3f', 'ID_WWN_WITH_EXTENSION': '0x600140532b7553a2ede45408ac592f3f', 'MAJOR': '8', 'MINOR': '160', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdk', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '194002206'} ; 2025-06-28 18:19:08,055 INFO blivet/MainThread: scanning sdk (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk)... 
2025-06-28 18:19:08,058 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdk ; incomplete: False ; hidden: False ; 2025-06-28 18:19:08,062 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:08,066 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdk ; 2025-06-28 18:19:08,067 INFO blivet/MainThread: sdk is a disk 2025-06-28 18:19:08,067 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 13 2025-06-28 18:19:08,067 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 14 2025-06-28 18:19:08,070 DEBUG blivet/MainThread: DiskDevice._set_format: sdk ; type: None ; current: None ; 2025-06-28 18:19:08,073 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdk ; status: True ; 2025-06-28 18:19:08,073 DEBUG blivet/MainThread: sdk sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk 2025-06-28 18:19:08,078 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdk ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk ; 2025-06-28 18:19:08,078 DEBUG blivet/MainThread: updated sdk size to 3 GiB (3 GiB) 2025-06-28 18:19:08,078 INFO blivet/MainThread: added disk sdk (id 12) to device tree 2025-06-28 18:19:08,078 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa96350) -- name = sdk status = True id = 12 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 160 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk target size = 3 GiB path = /dev/sdk format args = [] original_format = None removable = False wwn = 600140532b7553a2ede45408ac592f3f 2025-06-28 18:19:08,081 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdk ; 2025-06-28 18:19:08,081 DEBUG blivet/MainThread: no type or existing type for sdk, 
bailing 2025-06-28 18:19:08,085 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdl ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/scsi-36001405d599e4585dd1440490d5145a0 ' '/dev/disk/by-id/wwn-0x6001405d599e4585dd1440490d5145a0 ' '/dev/disk/by-diskseq/14', 'DEVNAME': '/dev/sdl', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl', 'DEVTYPE': 'disk', 'DISKSEQ': '14', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk11', 'ID_MODEL_ENC': 'disk11\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': 'd599e458-5dd1-4404-90d5-145a01a2b253', 'ID_SERIAL': '36001405d599e4585dd1440490d5145a0', 'ID_SERIAL_SHORT': '6001405d599e4585dd1440490d5145a0', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x6001405d599e4585', 'ID_WWN_VENDOR_EXTENSION': '0xdd1440490d5145a0', 'ID_WWN_WITH_EXTENSION': '0x6001405d599e4585dd1440490d5145a0', 'MAJOR': '8', 'MINOR': '176', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdl', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '194051087'} ; 2025-06-28 18:19:08,085 INFO blivet/MainThread: scanning sdl (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl)... 
2025-06-28 18:19:08,088 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdl ; incomplete: False ; hidden: False ; 2025-06-28 18:19:08,091 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:08,096 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdl ; 2025-06-28 18:19:08,096 INFO blivet/MainThread: sdl is a disk 2025-06-28 18:19:08,096 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 18 2025-06-28 18:19:08,096 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 19 2025-06-28 18:19:08,099 DEBUG blivet/MainThread: DiskDevice._set_format: sdl ; type: None ; current: None ; 2025-06-28 18:19:08,103 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdl ; status: True ; 2025-06-28 18:19:08,103 DEBUG blivet/MainThread: sdl sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl 2025-06-28 18:19:08,107 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdl ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl ; 2025-06-28 18:19:08,107 DEBUG blivet/MainThread: updated sdl size to 3 GiB (3 GiB) 2025-06-28 18:19:08,107 INFO blivet/MainThread: added disk sdl (id 17) to device tree 2025-06-28 18:19:08,107 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa95950) -- name = sdl status = True id = 17 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 176 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl target size = 3 GiB path = /dev/sdl format args = [] original_format = None removable = False wwn = 6001405d599e4585dd1440490d5145a0 2025-06-28 18:19:08,110 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdl ; 2025-06-28 18:19:08,110 DEBUG blivet/MainThread: no type or existing type for sdl, 
bailing 2025-06-28 18:19:08,113 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdc ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/scsi-36001405582e0de585294686b36ae1d1e ' '/dev/disk/by-diskseq/5 ' '/dev/disk/by-id/wwn-0x6001405582e0de585294686b36ae1d1e', 'DEVNAME': '/dev/sdc', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc', 'DEVTYPE': 'disk', 'DISKSEQ': '5', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk2', 'ID_MODEL_ENC': 'disk2\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': '582e0de5-8529-4686-b36a-e1d1e173bd3f', 'ID_SERIAL': '36001405582e0de585294686b36ae1d1e', 'ID_SERIAL_SHORT': '6001405582e0de585294686b36ae1d1e', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x6001405582e0de58', 'ID_WWN_VENDOR_EXTENSION': '0x5294686b36ae1d1e', 'ID_WWN_WITH_EXTENSION': '0x6001405582e0de585294686b36ae1d1e', 'MAJOR': '8', 'MINOR': '32', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdc', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193503847'} ; 2025-06-28 18:19:08,114 INFO blivet/MainThread: scanning sdc (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc)... 
2025-06-28 18:19:08,117 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdc ; incomplete: False ; hidden: False ; 2025-06-28 18:19:08,120 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:08,125 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdc ; 2025-06-28 18:19:08,125 INFO blivet/MainThread: sdc is a disk 2025-06-28 18:19:08,125 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 23 2025-06-28 18:19:08,125 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 24 2025-06-28 18:19:08,128 DEBUG blivet/MainThread: DiskDevice._set_format: sdc ; type: None ; current: None ; 2025-06-28 18:19:08,132 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdc ; status: True ; 2025-06-28 18:19:08,132 DEBUG blivet/MainThread: sdc sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc 2025-06-28 18:19:08,135 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdc ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc ; 2025-06-28 18:19:08,136 DEBUG blivet/MainThread: updated sdc size to 3 GiB (3 GiB) 2025-06-28 18:19:08,136 INFO blivet/MainThread: added disk sdc (id 22) to device tree 2025-06-28 18:19:08,136 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa960d0) -- name = sdc status = True id = 22 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 32 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc target size = 3 GiB path = /dev/sdc format args = [] original_format = None removable = False wwn = 6001405582e0de585294686b36ae1d1e 2025-06-28 18:19:08,139 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdc ; 2025-06-28 18:19:08,139 DEBUG blivet/MainThread: no type or existing type for sdc, bailing 
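[Editor's note] The udev property dumps above carry both `ID_MODEL` and `ID_MODEL_ENC`: the `*_ENC` variants escape unsafe bytes as `\xNN` sequences (here `\x20` for the spaces padding the SCSI model field). A minimal sketch of decoding them, assuming ASCII content as in this log (`unicode_escape` maps non-ASCII bytes via Latin-1, so it is not safe for arbitrary UTF-8):

```python
import codecs

def decode_udev_enc(value: str) -> str:
    """Decode a udev *_ENC property, turning literal \\xNN escapes
    (e.g. '\\x20') back into the characters they encode."""
    return codecs.decode(value, "unicode_escape")

# 'disk2' padded with escaped spaces, as in the ID_MODEL_ENC values above
model = decode_udev_enc("disk2\\x20\\x20\\x20").rstrip()
```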
2025-06-28 18:19:08,143 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdd ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/scsi-36001405acd2ba9b1a974f55a9704061c ' '/dev/disk/by-id/wwn-0x6001405acd2ba9b1a974f55a9704061c ' '/dev/disk/by-diskseq/6', 'DEVNAME': '/dev/sdd', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd', 'DEVTYPE': 'disk', 'DISKSEQ': '6', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk3', 'ID_MODEL_ENC': 'disk3\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': 'acd2ba9b-1a97-4f55-a970-4061c67c3912', 'ID_SERIAL': '36001405acd2ba9b1a974f55a9704061c', 'ID_SERIAL_SHORT': '6001405acd2ba9b1a974f55a9704061c', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x6001405acd2ba9b1', 'ID_WWN_VENDOR_EXTENSION': '0xa974f55a9704061c', 'ID_WWN_WITH_EXTENSION': '0x6001405acd2ba9b1a974f55a9704061c', 'MAJOR': '8', 'MINOR': '48', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdd', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193576323'} ; 2025-06-28 18:19:08,143 INFO blivet/MainThread: scanning sdd (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd)... 
2025-06-28 18:19:08,146 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdd ; incomplete: False ; hidden: False ; 2025-06-28 18:19:08,149 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:08,154 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdd ; 2025-06-28 18:19:08,154 INFO blivet/MainThread: sdd is a disk 2025-06-28 18:19:08,154 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 28 2025-06-28 18:19:08,154 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 29 2025-06-28 18:19:08,158 DEBUG blivet/MainThread: DiskDevice._set_format: sdd ; type: None ; current: None ; 2025-06-28 18:19:08,161 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdd ; status: True ; 2025-06-28 18:19:08,161 DEBUG blivet/MainThread: sdd sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd 2025-06-28 18:19:08,164 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdd ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd ; 2025-06-28 18:19:08,164 DEBUG blivet/MainThread: updated sdd size to 3 GiB (3 GiB) 2025-06-28 18:19:08,165 INFO blivet/MainThread: added disk sdd (id 27) to device tree 2025-06-28 18:19:08,165 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa95f90) -- name = sdd status = True id = 27 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 48 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd target size = 3 GiB path = /dev/sdd format args = [] original_format = None removable = False wwn = 6001405acd2ba9b1a974f55a9704061c 2025-06-28 18:19:08,168 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdd ; 2025-06-28 18:19:08,169 DEBUG blivet/MainThread: no type or existing type for sdd, bailing 
2025-06-28 18:19:08,172 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sde ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/scsi-36001405378e6ca643c443e0b9c840399 ' '/dev/disk/by-id/wwn-0x6001405378e6ca643c443e0b9c840399 ' '/dev/disk/by-diskseq/7', 'DEVNAME': '/dev/sde', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde', 'DEVTYPE': 'disk', 'DISKSEQ': '7', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk4', 'ID_MODEL_ENC': 'disk4\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': '378e6ca6-43c4-43e0-b9c8-4039999596e3', 'ID_SERIAL': '36001405378e6ca643c443e0b9c840399', 'ID_SERIAL_SHORT': '6001405378e6ca643c443e0b9c840399', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x6001405378e6ca64', 'ID_WWN_VENDOR_EXTENSION': '0x3c443e0b9c840399', 'ID_WWN_WITH_EXTENSION': '0x6001405378e6ca643c443e0b9c840399', 'MAJOR': '8', 'MINOR': '64', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sde', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193646170'} ; 2025-06-28 18:19:08,172 INFO blivet/MainThread: scanning sde (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde)... 
2025-06-28 18:19:08,175 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sde ; incomplete: False ; hidden: False ; 2025-06-28 18:19:08,178 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:08,183 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sde ; 2025-06-28 18:19:08,183 INFO blivet/MainThread: sde is a disk 2025-06-28 18:19:08,183 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 33 2025-06-28 18:19:08,183 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 34 2025-06-28 18:19:08,186 DEBUG blivet/MainThread: DiskDevice._set_format: sde ; type: None ; current: None ; 2025-06-28 18:19:08,190 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sde ; status: True ; 2025-06-28 18:19:08,190 DEBUG blivet/MainThread: sde sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde 2025-06-28 18:19:08,193 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sde ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde ; 2025-06-28 18:19:08,193 DEBUG blivet/MainThread: updated sde size to 3 GiB (3 GiB) 2025-06-28 18:19:08,193 INFO blivet/MainThread: added disk sde (id 32) to device tree 2025-06-28 18:19:08,194 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa95e50) -- name = sde status = True id = 32 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 64 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde target size = 3 GiB path = /dev/sde format args = [] original_format = None removable = False wwn = 6001405378e6ca643c443e0b9c840399 2025-06-28 18:19:08,197 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sde ; 2025-06-28 18:19:08,197 DEBUG blivet/MainThread: no type or existing type for sde, bailing 
2025-06-28 18:19:08,200 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdf ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-diskseq/8 ' '/dev/disk/by-id/scsi-36001405f858954a0e784149995a198ad ' '/dev/disk/by-id/wwn-0x6001405f858954a0e784149995a198ad', 'DEVNAME': '/dev/sdf', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf', 'DEVTYPE': 'disk', 'DISKSEQ': '8', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk5', 'ID_MODEL_ENC': 'disk5\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': 'f858954a-0e78-4149-995a-198adb87ba16', 'ID_SERIAL': '36001405f858954a0e784149995a198ad', 'ID_SERIAL_SHORT': '6001405f858954a0e784149995a198ad', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x6001405f858954a0', 'ID_WWN_VENDOR_EXTENSION': '0xe784149995a198ad', 'ID_WWN_WITH_EXTENSION': '0x6001405f858954a0e784149995a198ad', 'MAJOR': '8', 'MINOR': '80', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdf', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193708083'} ; 2025-06-28 18:19:08,201 INFO blivet/MainThread: scanning sdf (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf)... 
2025-06-28 18:19:08,204 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdf ; incomplete: False ; hidden: False ; 2025-06-28 18:19:08,207 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:08,212 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdf ; 2025-06-28 18:19:08,212 INFO blivet/MainThread: sdf is a disk 2025-06-28 18:19:08,212 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 38 2025-06-28 18:19:08,213 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 39 2025-06-28 18:19:08,216 DEBUG blivet/MainThread: DiskDevice._set_format: sdf ; type: None ; current: None ; 2025-06-28 18:19:08,219 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdf ; status: True ; 2025-06-28 18:19:08,219 DEBUG blivet/MainThread: sdf sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf 2025-06-28 18:19:08,222 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdf ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf ; 2025-06-28 18:19:08,223 DEBUG blivet/MainThread: updated sdf size to 3 GiB (3 GiB) 2025-06-28 18:19:08,223 INFO blivet/MainThread: added disk sdf (id 37) to device tree 2025-06-28 18:19:08,223 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa95d10) -- name = sdf status = True id = 37 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 80 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf target size = 3 GiB path = /dev/sdf format args = [] original_format = None removable = False wwn = 6001405f858954a0e784149995a198ad 2025-06-28 18:19:08,227 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdf ; 2025-06-28 18:19:08,227 DEBUG blivet/MainThread: no type or existing type for sdf, bailing 
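[Editor's note] The lsblk output captured earlier in this log reports sizes in bytes (`lsblk -b` style) and marks partition rows with tree glyphs (`|-xvda1`, `` `-xvda2 ``). A rough sketch of pulling the device name, nesting depth, and byte size out of one such row (illustrative only, not how blivet consumes lsblk; it ignores the trailing FSTYPE/UUID columns):

```python
def parse_lsblk_row(row: str):
    """Parse a byte-unit lsblk row like '`-xvda2 268433341952 root disk ...':
    strip leading tree glyphs to get the name, counting them as depth."""
    fields = row.split()
    name, depth = fields[0], 0
    while name[:2] in ("|-", "`-"):
        name = name[2:]
        depth += 1
    return name, depth, int(fields[1])

name, depth, size_bytes = parse_lsblk_row("`-xvda2 268433341952 root disk brw-rw----")
```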
2025-06-28 18:19:08,230 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdg ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/wwn-0x6001405a9efc0e1911c4201a970a6f85 ' '/dev/disk/by-diskseq/9 ' '/dev/disk/by-id/scsi-36001405a9efc0e1911c4201a970a6f85', 'DEVNAME': '/dev/sdg', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg', 'DEVTYPE': 'disk', 'DISKSEQ': '9', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk6', 'ID_MODEL_ENC': 'disk6\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': 'a9efc0e1-911c-4201-a970-a6f858e624d6', 'ID_SERIAL': '36001405a9efc0e1911c4201a970a6f85', 'ID_SERIAL_SHORT': '6001405a9efc0e1911c4201a970a6f85', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x6001405a9efc0e19', 'ID_WWN_VENDOR_EXTENSION': '0x11c4201a970a6f85', 'ID_WWN_WITH_EXTENSION': '0x6001405a9efc0e1911c4201a970a6f85', 'MAJOR': '8', 'MINOR': '96', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdg', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193767499'} ; 2025-06-28 18:19:08,230 INFO blivet/MainThread: scanning sdg (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg)... 
2025-06-28 18:19:08,233 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdg ; incomplete: False ; hidden: False ; 2025-06-28 18:19:08,236 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:08,241 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdg ; 2025-06-28 18:19:08,241 INFO blivet/MainThread: sdg is a disk 2025-06-28 18:19:08,241 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 43 2025-06-28 18:19:08,242 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 44 2025-06-28 18:19:08,245 DEBUG blivet/MainThread: DiskDevice._set_format: sdg ; type: None ; current: None ; 2025-06-28 18:19:08,248 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdg ; status: True ; 2025-06-28 18:19:08,248 DEBUG blivet/MainThread: sdg sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg 2025-06-28 18:19:08,251 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdg ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg ; 2025-06-28 18:19:08,252 DEBUG blivet/MainThread: updated sdg size to 3 GiB (3 GiB) 2025-06-28 18:19:08,252 INFO blivet/MainThread: added disk sdg (id 42) to device tree 2025-06-28 18:19:08,252 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa95bd0) -- name = sdg status = True id = 42 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 96 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg target size = 3 GiB path = /dev/sdg format args = [] original_format = None removable = False wwn = 6001405a9efc0e1911c4201a970a6f85 2025-06-28 18:19:08,256 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdg ; 2025-06-28 18:19:08,256 DEBUG blivet/MainThread: no type or existing type for sdg, bailing 
2025-06-28 18:19:08,259 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdh ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/scsi-360014058181fbe60fbb48f6bf65e97b7 ' '/dev/disk/by-id/wwn-0x60014058181fbe60fbb48f6bf65e97b7 ' '/dev/disk/by-diskseq/10', 'DEVNAME': '/dev/sdh', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh', 'DEVTYPE': 'disk', 'DISKSEQ': '10', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk7', 'ID_MODEL_ENC': 'disk7\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': '8181fbe6-0fbb-48f6-bf65-e97b753d1f2b', 'ID_SERIAL': '360014058181fbe60fbb48f6bf65e97b7', 'ID_SERIAL_SHORT': '60014058181fbe60fbb48f6bf65e97b7', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x60014058181fbe60', 'ID_WWN_VENDOR_EXTENSION': '0xfbb48f6bf65e97b7', 'ID_WWN_WITH_EXTENSION': '0x60014058181fbe60fbb48f6bf65e97b7', 'MAJOR': '8', 'MINOR': '112', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdh', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193801085'} ; 2025-06-28 18:19:08,259 INFO blivet/MainThread: scanning sdh (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh)... 
2025-06-28 18:19:08,262 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdh ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,265 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:08,270 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdh ;
2025-06-28 18:19:08,270 INFO blivet/MainThread: sdh is a disk
2025-06-28 18:19:08,270 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 48
2025-06-28 18:19:08,270 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 49
2025-06-28 18:19:08,274 DEBUG blivet/MainThread: DiskDevice._set_format: sdh ; type: None ; current: None ;
2025-06-28 18:19:08,278 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdh ; status: True ;
2025-06-28 18:19:08,278 DEBUG blivet/MainThread: sdh sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh
2025-06-28 18:19:08,281 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdh ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh ;
2025-06-28 18:19:08,282 DEBUG blivet/MainThread: updated sdh size to 3 GiB (3 GiB)
2025-06-28 18:19:08,282 INFO blivet/MainThread: added disk sdh (id 47) to device tree
2025-06-28 18:19:08,282 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa95a90) -- name = sdh status = True id = 47 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 112 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh target size = 3 GiB path = /dev/sdh format args = [] original_format = None removable = False wwn = 60014058181fbe60fbb48f6bf65e97b7
2025-06-28 18:19:08,285 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdh ;
2025-06-28 18:19:08,285 DEBUG blivet/MainThread: no type or existing type for sdh, bailing
2025-06-28 18:19:08,288 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdi ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-diskseq/11 ' '/dev/disk/by-id/scsi-3600140536dcfebe092746238bb16a3fa ' '/dev/disk/by-id/wwn-0x600140536dcfebe092746238bb16a3fa', 'DEVNAME': '/dev/sdi', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi', 'DEVTYPE': 'disk', 'DISKSEQ': '11', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk8', 'ID_MODEL_ENC': 'disk8\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': '36dcfebe-0927-4623-8bb1-6a3fa2891f0c', 'ID_SERIAL': '3600140536dcfebe092746238bb16a3fa', 'ID_SERIAL_SHORT': '600140536dcfebe092746238bb16a3fa', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x600140536dcfebe0', 'ID_WWN_VENDOR_EXTENSION': '0x92746238bb16a3fa', 'ID_WWN_WITH_EXTENSION': '0x600140536dcfebe092746238bb16a3fa', 'MAJOR': '8', 'MINOR': '128', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdi', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193876095'} ;
2025-06-28 18:19:08,288 INFO blivet/MainThread: scanning sdi (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi)...
2025-06-28 18:19:08,292 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdi ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,295 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:08,299 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdi ;
2025-06-28 18:19:08,300 INFO blivet/MainThread: sdi is a disk
2025-06-28 18:19:08,300 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 53
2025-06-28 18:19:08,300 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 54
2025-06-28 18:19:08,303 DEBUG blivet/MainThread: DiskDevice._set_format: sdi ; type: None ; current: None ;
2025-06-28 18:19:08,306 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdi ; status: True ;
2025-06-28 18:19:08,307 DEBUG blivet/MainThread: sdi sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi
2025-06-28 18:19:08,310 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdi ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi ;
2025-06-28 18:19:08,311 DEBUG blivet/MainThread: updated sdi size to 3 GiB (3 GiB)
2025-06-28 18:19:08,311 INFO blivet/MainThread: added disk sdi (id 52) to device tree
2025-06-28 18:19:08,311 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa97b10) -- name = sdi status = True id = 52 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 128 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi target size = 3 GiB path = /dev/sdi format args = [] original_format = None removable = False wwn = 600140536dcfebe092746238bb16a3fa
2025-06-28 18:19:08,314 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdi ;
2025-06-28 18:19:08,314 DEBUG blivet/MainThread: no type or existing type for sdi, bailing
2025-06-28 18:19:08,317 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdj ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/wwn-0x600140580c834ee801b48198b71671c5 ' '/dev/disk/by-id/scsi-3600140580c834ee801b48198b71671c5 ' '/dev/disk/by-diskseq/12', 'DEVNAME': '/dev/sdj', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj', 'DEVTYPE': 'disk', 'DISKSEQ': '12', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk9', 'ID_MODEL_ENC': 'disk9\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': '80c834ee-801b-4819-8b71-671c5aee19bd', 'ID_SERIAL': '3600140580c834ee801b48198b71671c5', 'ID_SERIAL_SHORT': '600140580c834ee801b48198b71671c5', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x600140580c834ee8', 'ID_WWN_VENDOR_EXTENSION': '0x01b48198b71671c5', 'ID_WWN_WITH_EXTENSION': '0x600140580c834ee801b48198b71671c5', 'MAJOR': '8', 'MINOR': '144', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdj', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193941063'} ;
2025-06-28 18:19:08,317 INFO blivet/MainThread: scanning sdj (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj)...
2025-06-28 18:19:08,321 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdj ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,324 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:08,328 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdj ;
2025-06-28 18:19:08,329 INFO blivet/MainThread: sdj is a disk
2025-06-28 18:19:08,329 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 58
2025-06-28 18:19:08,329 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 59
2025-06-28 18:19:08,333 DEBUG blivet/MainThread: DiskDevice._set_format: sdj ; type: None ; current: None ;
2025-06-28 18:19:08,336 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdj ; status: True ;
2025-06-28 18:19:08,336 DEBUG blivet/MainThread: sdj sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj
2025-06-28 18:19:08,340 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdj ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj ;
2025-06-28 18:19:08,340 DEBUG blivet/MainThread: updated sdj size to 3 GiB (3 GiB)
2025-06-28 18:19:08,340 INFO blivet/MainThread: added disk sdj (id 57) to device tree
2025-06-28 18:19:08,340 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa97c50) -- name = sdj status = True id = 57 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 144 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj target size = 3 GiB path = /dev/sdj format args = [] original_format = None removable = False wwn = 600140580c834ee801b48198b71671c5
2025-06-28 18:19:08,344 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdj ;
2025-06-28 18:19:08,344 DEBUG blivet/MainThread: no type or existing type for sdj, bailing
2025-06-28 18:19:08,347 DEBUG blivet/MainThread: DeviceTree.handle_device: name: xvda ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-diskseq/1', 'DEVNAME': '/dev/xvda', 'DEVPATH': '/devices/vbd-768/block/xvda', 'DEVTYPE': 'disk', 'DISKSEQ': '1', 'ID_PART_TABLE_TYPE': 'gpt', 'ID_PART_TABLE_UUID': '91c3c0f1-4957-4f21-b15a-28e9016b79c2', 'MAJOR': '202', 'MINOR': '0', 'SUBSYSTEM': 'block', 'SYS_NAME': 'xvda', 'SYS_PATH': '/sys/devices/vbd-768/block/xvda', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '8070613'} ;
2025-06-28 18:19:08,347 INFO blivet/MainThread: scanning xvda (/sys/devices/vbd-768/block/xvda)...
2025-06-28 18:19:08,351 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,354 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:08,357 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: xvda ;
2025-06-28 18:19:08,357 WARNING blivet/MainThread: device/vendor is not a valid attribute
2025-06-28 18:19:08,358 WARNING blivet/MainThread: device/model is not a valid attribute
2025-06-28 18:19:08,358 INFO blivet/MainThread: xvda is a disk
2025-06-28 18:19:08,358 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 63
2025-06-28 18:19:08,358 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 64
2025-06-28 18:19:08,361 DEBUG blivet/MainThread: DiskDevice._set_format: xvda ; type: None ; current: None ;
2025-06-28 18:19:08,364 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: xvda ; status: True ;
2025-06-28 18:19:08,364 DEBUG blivet/MainThread: xvda sysfs_path set to /sys/devices/vbd-768/block/xvda
2025-06-28 18:19:08,368 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/xvda ; sysfs_path: /sys/devices/vbd-768/block/xvda ;
2025-06-28 18:19:08,369 DEBUG blivet/MainThread: updated xvda size to 250 GiB (250 GiB)
2025-06-28 18:19:08,369 INFO blivet/MainThread: added disk xvda (id 62) to device tree
2025-06-28 18:19:08,369 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa97d90) -- name = xvda status = True id = 62 children = [] parents = [] uuid = None size = 250 GiB format = existing None major = 202 minor = 0 exists = True protected = False sysfs path = /sys/devices/vbd-768/block/xvda target size = 250 GiB path = /dev/xvda format args = [] original_format = None removable = False wwn = None
2025-06-28 18:19:08,372 DEBUG blivet/MainThread: DeviceTree.handle_format: name: xvda ;
2025-06-28 18:19:08,376 DEBUG blivet/MainThread: EFIFS.supported: supported: True ;
2025-06-28 18:19:08,376 DEBUG blivet/MainThread: get_format('efi') returning EFIFS instance with object id 66
2025-06-28 18:19:08,379 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;
2025-06-28 18:19:08,380 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 67
2025-06-28 18:19:08,384 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;
2025-06-28 18:19:08,384 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 68
2025-06-28 18:19:08,387 DEBUG blivet/MainThread: DiskLabelFormatPopulator.run: device: xvda ; label_type: gpt ;
2025-06-28 18:19:08,390 DEBUG blivet/MainThread: DiskDevice.setup: xvda ; orig: False ; status: True ; controllable: True ;
2025-06-28 18:19:08,394 DEBUG blivet/MainThread: DiskLabel.__init__: uuid: 91c3c0f1-4957-4f21-b15a-28e9016b79c2 ; label: None ; device: /dev/xvda ; serial: None ; exists: True ;
2025-06-28 18:19:08,407 DEBUG blivet/MainThread: Set pmbr_boot on parted.Disk instance -- type: gpt primaryPartitionCount: 2 lastPartitionNumber: 2 maxPrimaryPartitionCount: 128 partitions: [, ] device: PedDisk: <_ped.Disk object at 0x7f2d01d06780>
2025-06-28 18:19:08,469 DEBUG blivet/MainThread: get_format('disklabel') returning DiskLabel instance with object id 69
2025-06-28 18:19:08,473 DEBUG blivet/MainThread: DiskDevice._set_format: xvda ; type: disklabel ; current: None ;
2025-06-28 18:19:08,473 INFO blivet/MainThread: got format: existing gpt disklabel
2025-06-28 18:19:08,473 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return
2025-06-28 18:19:08,476 DEBUG blivet/MainThread: DeviceTree.handle_device: name: xvda1 ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-partuuid/fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda ' '/dev/disk/by-diskseq/1-part1', 'DEVNAME': '/dev/xvda1', 'DEVPATH': '/devices/vbd-768/block/xvda/xvda1', 'DEVTYPE': 'partition', 'DISKSEQ': '1', 'ID_PART_ENTRY_DISK': '202:0', 'ID_PART_ENTRY_NUMBER': '1', 'ID_PART_ENTRY_OFFSET': '2048', 'ID_PART_ENTRY_SCHEME': 'gpt', 'ID_PART_ENTRY_SIZE': '2048', 'ID_PART_ENTRY_TYPE': '21686148-6449-6e6f-744e-656564454649', 'ID_PART_ENTRY_UUID': 'fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda', 'ID_PART_TABLE_TYPE': 'gpt', 'ID_PART_TABLE_UUID': '91c3c0f1-4957-4f21-b15a-28e9016b79c2', 'MAJOR': '202', 'MINOR': '1', 'PARTN': '1', 'PARTUUID': 'fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda', 'SUBSYSTEM': 'block', 'SYS_NAME': 'xvda1', 'SYS_PATH': '/sys/devices/vbd-768/block/xvda/xvda1', 'TAGS': ':systemd:', 'UDISKS_IGNORE': '1', 'USEC_INITIALIZED': '8070648'} ;
2025-06-28 18:19:08,476 INFO blivet/MainThread: scanning xvda1 (/sys/devices/vbd-768/block/xvda/xvda1)...
2025-06-28 18:19:08,476 WARNING blivet/MainThread: hidden is not a valid attribute
2025-06-28 18:19:08,479 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda1 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,483 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:08,486 DEBUG blivet/MainThread: PartitionDevicePopulator.run: name: xvda1 ;
2025-06-28 18:19:08,489 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda1 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,492 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:08,492 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:08,505 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:08,519 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,522 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 250 GiB disk xvda (62) with existing gpt disklabel
2025-06-28 18:19:08,523 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return
2025-06-28 18:19:08,523 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 72
2025-06-28 18:19:08,527 DEBUG blivet/MainThread: DiskDevice.add_child: name: xvda ; child: xvda1 ; kids: 0 ;
2025-06-28 18:19:08,527 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 73
2025-06-28 18:19:08,530 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda1 ; type: None ; current: None ;
2025-06-28 18:19:08,534 DEBUG blivet/MainThread: PartitionDevice.update_sysfs_path: xvda1 ; status: True ;
2025-06-28 18:19:08,535 DEBUG blivet/MainThread: xvda1 sysfs_path set to /sys/devices/vbd-768/block/xvda/xvda1
2025-06-28 18:19:08,538 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda1 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda1 ;
2025-06-28 18:19:08,538 DEBUG blivet/MainThread: updated xvda1 size to 1024 KiB (1024 KiB)
2025-06-28 18:19:08,539 DEBUG blivet/MainThread: looking up parted Partition: /dev/xvda1
2025-06-28 18:19:08,542 DEBUG blivet/MainThread: PartitionDevice.probe: xvda1 ; exists: True ;
2025-06-28 18:19:08,545 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda1 ; flag: 1 ;
2025-06-28 18:19:08,549 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda1 ; flag: 10 ;
2025-06-28 18:19:08,552 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda1 ; flag: 12 ;
2025-06-28 18:19:08,552 DEBUG blivet/MainThread: get_format('biosboot') returning BIOSBoot instance with object id 75
2025-06-28 18:19:08,557 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda1 ; type: biosboot ; current: None ;
2025-06-28 18:19:08,560 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda1 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda1 ;
2025-06-28 18:19:08,560 DEBUG blivet/MainThread: updated xvda1 size to 1024 KiB (1024 KiB)
2025-06-28 18:19:08,560 INFO blivet/MainThread: added partition xvda1 (id 71) to device tree
2025-06-28 18:19:08,561 INFO blivet/MainThread: got device: PartitionDevice instance (0x7f2cffa39e80) -- name = xvda1 status = True id = 71 children = [] parents = ['existing 250 GiB disk xvda (62) with existing gpt disklabel'] uuid = fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda size = 1024 KiB format = existing biosboot major = 202 minor = 1 exists = True protected = False sysfs path = /sys/devices/vbd-768/block/xvda/xvda1 target size = 1024 KiB path = /dev/xvda1 format args = [] original_format = None grow = None max size = 0 B bootable = None part type = 0 primary = None start sector = None end sector = None parted_partition = parted.Partition instance -- disk: fileSystem: None number: 1 path: /dev/xvda1 type: 0 name: active: True busy: False geometry: PedPartition: <_ped.Partition object at 0x7f2cfdd7a660> disk = existing 250 GiB disk xvda (62) with existing gpt disklabel start = 2048 end = 4095 length = 2048 flags = bios_grub type_uuid = 21686148-6449-6e6f-744e-656564454649
2025-06-28 18:19:08,564 DEBUG blivet/MainThread: DeviceTree.handle_format: name: xvda1 ;
2025-06-28 18:19:08,565 DEBUG blivet/MainThread: no type or existing type for xvda1, bailing
2025-06-28 18:19:08,568 DEBUG blivet/MainThread: DeviceTree.handle_device: name: xvda2 ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-partuuid/782cc2d2-7936-4e3f-9cb4-9758a83f53fa ' '/dev/disk/by-uuid/8959a9f3-59d4-4eb7-8e53-e856bbc805e9 ' '/dev/disk/by-diskseq/1-part2', 'DEVNAME': '/dev/xvda2', 'DEVPATH': '/devices/vbd-768/block/xvda/xvda2', 'DEVTYPE': 'partition', 'DISKSEQ': '1', 'ID_FS_BLOCKSIZE': '4096', 'ID_FS_LASTBLOCK': '1572096', 'ID_FS_SIZE': '6439305216', 'ID_FS_TYPE': 'ext4', 'ID_FS_USAGE': 'filesystem', 'ID_FS_UUID': '8959a9f3-59d4-4eb7-8e53-e856bbc805e9', 'ID_FS_UUID_ENC': '8959a9f3-59d4-4eb7-8e53-e856bbc805e9', 'ID_FS_VERSION': '1.0', 'ID_PART_ENTRY_DISK': '202:0', 'ID_PART_ENTRY_NUMBER': '2', 'ID_PART_ENTRY_OFFSET': '4096', 'ID_PART_ENTRY_SCHEME': 'gpt', 'ID_PART_ENTRY_SIZE': '524283871', 'ID_PART_ENTRY_TYPE': '0fc63daf-8483-4772-8e79-3d69d8477de4', 'ID_PART_ENTRY_UUID': '782cc2d2-7936-4e3f-9cb4-9758a83f53fa', 'ID_PART_TABLE_TYPE': 'gpt', 'ID_PART_TABLE_UUID': '91c3c0f1-4957-4f21-b15a-28e9016b79c2', 'MAJOR': '202', 'MINOR': '2', 'PARTN': '2', 'PARTUUID': '782cc2d2-7936-4e3f-9cb4-9758a83f53fa', 'SUBSYSTEM': 'block', 'SYS_NAME': 'xvda2', 'SYS_PATH': '/sys/devices/vbd-768/block/xvda/xvda2', 'TAGS': ':systemd:', 'UDISKS_AUTO': '0', 'USEC_INITIALIZED': '8070794'} ;
2025-06-28 18:19:08,568 INFO blivet/MainThread: scanning xvda2 (/sys/devices/vbd-768/block/xvda/xvda2)...
2025-06-28 18:19:08,568 WARNING blivet/MainThread: hidden is not a valid attribute
2025-06-28 18:19:08,571 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda2 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,574 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:08,577 DEBUG blivet/MainThread: PartitionDevicePopulator.run: name: xvda2 ;
2025-06-28 18:19:08,580 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda2 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,583 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:08,583 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:08,596 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:08,611 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,614 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 250 GiB disk xvda (62) with existing gpt disklabel
2025-06-28 18:19:08,614 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return
2025-06-28 18:19:08,614 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 78
2025-06-28 18:19:08,618 DEBUG blivet/MainThread: DiskDevice.add_child: name: xvda ; child: xvda2 ; kids: 1 ;
2025-06-28 18:19:08,618 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 79
2025-06-28 18:19:08,621 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda2 ; type: None ; current: None ;
2025-06-28 18:19:08,625 DEBUG blivet/MainThread: PartitionDevice.update_sysfs_path: xvda2 ; status: True ;
2025-06-28 18:19:08,626 DEBUG blivet/MainThread: xvda2 sysfs_path set to /sys/devices/vbd-768/block/xvda/xvda2
2025-06-28 18:19:08,629 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda2 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda2 ;
2025-06-28 18:19:08,629 DEBUG blivet/MainThread: updated xvda2 size to 250 GiB (250 GiB)
2025-06-28 18:19:08,629 DEBUG blivet/MainThread: looking up parted Partition: /dev/xvda2
2025-06-28 18:19:08,633 DEBUG blivet/MainThread: PartitionDevice.probe: xvda2 ; exists: True ;
2025-06-28 18:19:08,637 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda2 ; flag: 1 ;
2025-06-28 18:19:08,640 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda2 ; flag: 10 ;
2025-06-28 18:19:08,646 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda2 ; flag: 12 ;
2025-06-28 18:19:08,649 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda2 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda2 ;
2025-06-28 18:19:08,650 DEBUG blivet/MainThread: updated xvda2 size to 250 GiB (250 GiB)
2025-06-28 18:19:08,650 INFO blivet/MainThread: added partition xvda2 (id 77) to device tree
2025-06-28 18:19:08,650 INFO blivet/MainThread: got device: PartitionDevice instance (0x7f2cfdd58910) -- name = xvda2 status = True id = 77 children = [] parents = ['existing 250 GiB disk xvda (62) with existing gpt disklabel'] uuid = 782cc2d2-7936-4e3f-9cb4-9758a83f53fa size = 250 GiB format = existing None major = 202 minor = 2 exists = True protected = False sysfs path = /sys/devices/vbd-768/block/xvda/xvda2 target size = 250 GiB path = /dev/xvda2 format args = [] original_format = None grow = None max size = 0 B bootable = None part type = 0 primary = None start sector = None end sector = None parted_partition = parted.Partition instance -- disk: fileSystem: number: 2 path: /dev/xvda2 type: 0 name: active: True busy: True geometry: PedPartition: <_ped.Partition object at 0x7f2cfdd7b0b0> disk = existing 250 GiB disk xvda (62) with existing gpt disklabel start = 4096 end = 524287966 length = 524283871 flags = type_uuid = 0fc63daf-8483-4772-8e79-3d69d8477de4
2025-06-28 18:19:08,653 DEBUG blivet/MainThread: DeviceTree.handle_format: name: xvda2 ;
2025-06-28 18:19:08,657 DEBUG blivet/MainThread: EFIFS.supported: supported: True ;
2025-06-28 18:19:08,657 DEBUG blivet/MainThread: get_format('efi') returning EFIFS instance with object id 81
2025-06-28 18:19:08,661 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;
2025-06-28 18:19:08,661 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 82
2025-06-28 18:19:08,665 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;
2025-06-28 18:19:08,665 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 83
2025-06-28 18:19:08,666 WARNING blivet/MainThread: Stratis DBus service is not running
2025-06-28 18:19:08,666 INFO blivet/MainThread: type detected on 'xvda2' is 'ext4'
2025-06-28 18:19:08,670 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ;
2025-06-28 18:19:08,670 DEBUG blivet/MainThread: get_format('ext4') returning Ext4FS instance with object id 84
2025-06-28 18:19:08,673 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda2 ; type: ext4 ; current: None ;
2025-06-28 18:19:08,673 INFO blivet/MainThread: got format: existing ext4 filesystem
2025-06-28 18:19:08,676 DEBUG blivet/MainThread: DeviceTree.handle_device: name: zram0 ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-label/zram0 ' '/dev/disk/by-uuid/9e4b39b6-8d8e-46c1-8981-c482cb670ee6 ' '/dev/disk/by-diskseq/2', 'DEVNAME': '/dev/zram0', 'DEVPATH': '/devices/virtual/block/zram0', 'DEVTYPE': 'disk', 'DISKSEQ': '2', 'ID_FS_BLOCKSIZE': '4096', 'ID_FS_LABEL': 'zram0', 'ID_FS_LABEL_ENC': 'zram0', 'ID_FS_LASTBLOCK': '950784', 'ID_FS_SIZE': '3894407168', 'ID_FS_TYPE': 'swap', 'ID_FS_USAGE': 'other', 'ID_FS_UUID': '9e4b39b6-8d8e-46c1-8981-c482cb670ee6', 'ID_FS_UUID_ENC': '9e4b39b6-8d8e-46c1-8981-c482cb670ee6', 'ID_FS_VERSION': '1', 'MAJOR': '251', 'MINOR': '0', 'SUBSYSTEM': 'block', 'SYS_NAME': 'zram0', 'SYS_PATH': '/sys/devices/virtual/block/zram0', 'TAGS': ':systemd:', 'UDISKS_IGNORE': '1', 'USEC_INITIALIZED': '8070909'} ;
2025-06-28 18:19:08,677 INFO blivet/MainThread: scanning zram0 (/sys/devices/virtual/block/zram0)...
2025-06-28 18:19:08,680 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: zram0 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,683 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:08,684 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/udev.py:1087: DeprecationWarning: Will be removed in 1.0. Access properties with Device.properties. while device:
2025-06-28 18:19:08,687 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: zram0 ;
2025-06-28 18:19:08,687 WARNING blivet/MainThread: device/vendor is not a valid attribute
2025-06-28 18:19:08,687 WARNING blivet/MainThread: device/model is not a valid attribute
2025-06-28 18:19:08,687 INFO blivet/MainThread: zram0 is a disk
2025-06-28 18:19:08,688 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 87
2025-06-28 18:19:08,688 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 88
2025-06-28 18:19:08,691 DEBUG blivet/MainThread: DiskDevice._set_format: zram0 ; type: None ; current: None ;
2025-06-28 18:19:08,694 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: zram0 ; status: True ;
2025-06-28 18:19:08,694 DEBUG blivet/MainThread: zram0 sysfs_path set to /sys/devices/virtual/block/zram0
2025-06-28 18:19:08,698 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/zram0 ; sysfs_path: /sys/devices/virtual/block/zram0 ;
2025-06-28 18:19:08,698 DEBUG blivet/MainThread: updated zram0 size to 3.63 GiB (3.63 GiB)
2025-06-28 18:19:08,699 INFO blivet/MainThread: added disk zram0 (id 86) to device tree
2025-06-28 18:19:08,699 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cfdd5ac10) -- name = zram0 status = True id = 86 children = [] parents = [] uuid = None size = 3.63 GiB format = existing None major = 251 minor = 0 exists = True protected = False sysfs path = /sys/devices/virtual/block/zram0 target size = 3.63 GiB path = /dev/zram0 format args = [] original_format = None removable = False wwn = None
2025-06-28 18:19:08,702 DEBUG blivet/MainThread: DeviceTree.handle_format: name: zram0 ;
2025-06-28 18:19:08,706 DEBUG blivet/MainThread: EFIFS.supported: supported: True ;
2025-06-28 18:19:08,706 DEBUG blivet/MainThread: get_format('efi') returning EFIFS instance with object id 90
2025-06-28 18:19:08,709 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;
2025-06-28 18:19:08,709 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 91
2025-06-28 18:19:08,713 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;
2025-06-28 18:19:08,713 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 92
2025-06-28 18:19:08,713 INFO blivet/MainThread: type detected on 'zram0' is 'swap'
2025-06-28 18:19:08,716 DEBUG blivet/MainThread: SwapSpace.__init__: uuid: 9e4b39b6-8d8e-46c1-8981-c482cb670ee6 ; label: zram0 ; device: /dev/zram0 ; serial: None ; exists: True ;
2025-06-28 18:19:08,716 DEBUG blivet/MainThread: get_format('swap') returning SwapSpace instance with object id 93
2025-06-28 18:19:08,720 DEBUG blivet/MainThread: DiskDevice._set_format: zram0 ; type: swap ; current: None ;
2025-06-28 18:19:08,720 INFO blivet/MainThread: got format: existing swap
2025-06-28 18:19:08,720 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:08,732 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:08,743 INFO blivet/MainThread: edd: MBR signature on xvda is zero. new disk image?
2025-06-28 18:19:08,744 INFO blivet/MainThread: edd: collected mbr signatures: {}
2025-06-28 18:19:08,744 DEBUG blivet/MainThread: resolved 'UUID=8959a9f3-59d4-4eb7-8e53-e856bbc805e9' to 'xvda2' (partition)
2025-06-28 18:19:08,748 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,752 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:08,755 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,758 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None
2025-06-28 18:19:08,758 DEBUG blivet/MainThread: failed to resolve '/dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat'
2025-06-28 18:19:08,761 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: nest.test.redhat.com:/mnt/qa ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,764 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:08,767 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/nest.test.redhat.com:/mnt/qa ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,770 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None
2025-06-28 18:19:08,770 DEBUG blivet/MainThread: failed to resolve '/dev/nest.test.redhat.com:/mnt/qa'
2025-06-28 18:19:08,773 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: vtap-eng01.storage.rdu2.redhat.com:/vol/engarchive ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,776 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:08,779 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/vtap-eng01.storage.rdu2.redhat.com:/vol/engarchive ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,782 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None
2025-06-28 18:19:08,782 DEBUG blivet/MainThread: failed to resolve '/dev/vtap-eng01.storage.rdu2.redhat.com:/vol/engarchive'
2025-06-28 18:19:08,785 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: nest.test.redhat.com:/mnt/tpsdist ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,787 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:08,792 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/nest.test.redhat.com:/mnt/tpsdist ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,795 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None
2025-06-28 18:19:08,795 DEBUG blivet/MainThread: failed to resolve '/dev/nest.test.redhat.com:/mnt/tpsdist'
2025-06-28 18:19:08,798 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat/brewroot ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,801 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:08,804 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat/brewroot ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,807 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None
2025-06-28 18:19:08,807 DEBUG blivet/MainThread: failed to resolve '/dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat/brewroot'
2025-06-28 18:19:08,810 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_devops_brew_scratch_nfs_sm/scratch ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,813 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:08,816 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_devops_brew_scratch_nfs_sm/scratch ; incomplete: False ; hidden: False ;
2025-06-28 18:19:08,819 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None
2025-06-28 18:19:08,819 DEBUG blivet/MainThread: failed to resolve '/dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_devops_brew_scratch_nfs_sm/scratch'
2025-06-28 18:19:08,819 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return
2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 95
2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 96
2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 97
2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 98
2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 99
2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 100
2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 101
2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning
DeviceFormat instance with object id 102 2025-06-28 18:19:13,885 INFO blivet/MainThread: sys.argv = ['/tmp/ansible_fedora.linux_system_roles.blivet_payload_v99ku3ej/ansible_fedora.linux_system_roles.blivet_payload.zip/ansible_collections/fedora/linux_system_roles/plugins/modules/blivet.py'] 2025-06-28 18:19:13,901 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:13,915 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ; 2025-06-28 18:19:13,916 DEBUG blivet/MainThread: get_format('ext4') returning Ext4FS instance with object id 0 2025-06-28 18:19:13,918 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ; 2025-06-28 18:19:13,919 DEBUG blivet/MainThread: trying to set new default fstype to 'ext4' 2025-06-28 18:19:13,922 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ; 2025-06-28 18:19:13,922 DEBUG blivet/MainThread: get_format('ext4') returning Ext4FS instance with object id 1 2025-06-28 18:19:13,924 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ; 2025-06-28 18:19:13,925 INFO blivet/MainThread: Fstab file '' does not exist, setting fstab read path to None 2025-06-28 18:19:13,925 INFO program/MainThread: Running... 
lsblk --bytes -a -o NAME,SIZE,OWNER,GROUP,MODE,FSTYPE,LABEL,UUID,PARTUUID,MOUNTPOINT 2025-06-28 18:19:13,950 INFO program/MainThread: stdout: 2025-06-28 18:19:13,950 INFO program/MainThread: NAME SIZE OWNER GROUP MODE FSTYPE LABEL UUID PARTUUID MOUNTPOINT 2025-06-28 18:19:13,950 INFO program/MainThread: sda 3221225472 root disk brw-rw---- 2025-06-28 18:19:13,950 INFO program/MainThread: sdb 3221225472 root disk brw-rw---- 2025-06-28 18:19:13,950 INFO program/MainThread: sdc 3221225472 root disk brw-rw---- 2025-06-28 18:19:13,950 INFO program/MainThread: sdd 3221225472 root disk brw-rw---- 2025-06-28 18:19:13,950 INFO program/MainThread: sde 3221225472 root disk brw-rw---- 2025-06-28 18:19:13,950 INFO program/MainThread: sdf 3221225472 root disk brw-rw---- 2025-06-28 18:19:13,950 INFO program/MainThread: sdg 3221225472 root disk brw-rw---- 2025-06-28 18:19:13,950 INFO program/MainThread: sdh 3221225472 root disk brw-rw---- 2025-06-28 18:19:13,950 INFO program/MainThread: sdi 3221225472 root disk brw-rw---- 2025-06-28 18:19:13,950 INFO program/MainThread: sdj 3221225472 root disk brw-rw---- 2025-06-28 18:19:13,950 INFO program/MainThread: sdk 3221225472 root disk brw-rw---- 2025-06-28 18:19:13,951 INFO program/MainThread: sdl 3221225472 root disk brw-rw---- 2025-06-28 18:19:13,951 INFO program/MainThread: xvda 268435456000 root disk brw-rw---- 2025-06-28 18:19:13,951 INFO program/MainThread: |-xvda1 1048576 root disk brw-rw---- fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda 2025-06-28 18:19:13,951 INFO program/MainThread: `-xvda2 268433341952 root disk brw-rw---- ext4 8959a9f3-59d4-4eb7-8e53-e856bbc805e9 782cc2d2-7936-4e3f-9cb4-9758a83f53fa / 2025-06-28 18:19:13,951 INFO program/MainThread: zram0 3894411264 root disk brw-rw---- swap zram0 9e4b39b6-8d8e-46c1-8981-c482cb670ee6 [SWAP] 2025-06-28 18:19:13,951 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:13,951 DEBUG blivet/MainThread: lsblk output: NAME SIZE OWNER GROUP MODE FSTYPE LABEL UUID PARTUUID MOUNTPOINT sda 
3221225472 root disk brw-rw---- sdb 3221225472 root disk brw-rw---- sdc 3221225472 root disk brw-rw---- sdd 3221225472 root disk brw-rw---- sde 3221225472 root disk brw-rw---- sdf 3221225472 root disk brw-rw---- sdg 3221225472 root disk brw-rw---- sdh 3221225472 root disk brw-rw---- sdi 3221225472 root disk brw-rw---- sdj 3221225472 root disk brw-rw---- sdk 3221225472 root disk brw-rw---- sdl 3221225472 root disk brw-rw---- xvda 268435456000 root disk brw-rw---- |-xvda1 1048576 root disk brw-rw---- fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda `-xvda2 268433341952 root disk brw-rw---- ext4 8959a9f3-59d4-4eb7-8e53-e856bbc805e9 782cc2d2-7936-4e3f-9cb4-9758a83f53fa / zram0 3894411264 root disk brw-rw---- swap zram0 9e4b39b6-8d8e-46c1-8981-c482cb670ee6 [SWAP] 2025-06-28 18:19:13,951 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:13,951 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:13,951 INFO blivet/MainThread: resetting Blivet (version 3.12.1) instance 2025-06-28 18:19:13,951 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:13,951 INFO blivet/MainThread: DeviceTree.populate: ignored_disks is [] ; exclusive_disks is [] 2025-06-28 18:19:13,952 WARNING blivet/MainThread: Failed to call the update_volume_info method: libstoragemgmt functionality not available 2025-06-28 18:19:13,952 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:13,961 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:13,973 INFO blivet/MainThread: devices to scan: ['sda', 'sdb', 'sdk', 'sdl', 'sdc', 'sdd', 'sde', 'sdf', 'sdg', 'sdh', 'sdi', 'sdj', 'xvda', 'xvda1', 'xvda2', 'zram0'] 2025-06-28 18:19:13,977 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sda ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/scsi-360014058847ce6dd73d4f01931e495d9 ' '/dev/disk/by-id/wwn-0x60014058847ce6dd73d4f01931e495d9 ' '/dev/disk/by-diskseq/3', 'DEVNAME': '/dev/sda', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda', 'DEVTYPE': 'disk', 'DISKSEQ': '3', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk0', 'ID_MODEL_ENC': 'disk0\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': '8847ce6d-d73d-4f01-931e-495d93c2876b', 'ID_SERIAL': '360014058847ce6dd73d4f01931e495d9', 'ID_SERIAL_SHORT': '60014058847ce6dd73d4f01931e495d9', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x60014058847ce6dd', 'ID_WWN_VENDOR_EXTENSION': '0x73d4f01931e495d9', 'ID_WWN_WITH_EXTENSION': '0x60014058847ce6dd73d4f01931e495d9', 'MAJOR': '8', 'MINOR': '0', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sda', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193406043'} ; 2025-06-28 18:19:13,977 INFO blivet/MainThread: scanning sda (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda)... 2025-06-28 18:19:13,977 INFO program/MainThread: Running [3] lvm lvs --noheadings --nosuffix --nameprefixes --unquoted --units=b -a -o vg_name,lv_name,lv_uuid,lv_size,lv_attr,segtype,origin,pool_lv,data_lv,metadata_lv,role,move_pv,data_percent,metadata_percent,copy_percent,lv_tags --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 
2025-06-28 18:19:14,003 INFO program/MainThread: stdout[3]: 2025-06-28 18:19:14,003 INFO program/MainThread: stderr[3]: 2025-06-28 18:19:14,003 INFO program/MainThread: ...done [3] (exit code: 0) 2025-06-28 18:19:14,007 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sda ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,010 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,011 INFO program/MainThread: Running [4] mdadm --version ... 2025-06-28 18:19:14,013 INFO program/MainThread: stdout[4]: 2025-06-28 18:19:14,013 INFO program/MainThread: stderr[4]: mdadm - v4.3 - 2024-02-15 2025-06-28 18:19:14,013 INFO program/MainThread: ...done [4] (exit code: 0) 2025-06-28 18:19:14,013 INFO program/MainThread: Running [5] dmsetup --version ... 2025-06-28 18:19:14,016 INFO program/MainThread: stdout[5]: Library version: 1.02.204 (2025-01-14) Driver version: 4.49.0 2025-06-28 18:19:14,017 INFO program/MainThread: stderr[5]: 2025-06-28 18:19:14,017 INFO program/MainThread: ...done [5] (exit code: 0) 2025-06-28 18:19:14,023 INFO blivet/MainThread: failed to get initiator name from iscsi firmware: UDisks iSCSI functionality not available 2025-06-28 18:19:14,025 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/udev.py:1087: DeprecationWarning: Will be removed in 1.0. Access properties with Device.properties. 
while device: 2025-06-28 18:19:14,030 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sda ; 2025-06-28 18:19:14,031 INFO blivet/MainThread: sda is a disk 2025-06-28 18:19:14,031 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:14,031 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 3 2025-06-28 18:19:14,031 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 4 2025-06-28 18:19:14,035 DEBUG blivet/MainThread: DiskDevice._set_format: sda ; type: None ; current: None ; 2025-06-28 18:19:14,039 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sda ; status: True ; 2025-06-28 18:19:14,039 DEBUG blivet/MainThread: sda sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda 2025-06-28 18:19:14,042 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sda ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda ; 2025-06-28 18:19:14,042 DEBUG blivet/MainThread: updated sda size to 3 GiB (3 GiB) 2025-06-28 18:19:14,043 INFO blivet/MainThread: added disk sda (id 2) to device tree 2025-06-28 18:19:14,043 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40678c0) -- name = sda status = True id = 2 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 0 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda target size = 3 GiB path = /dev/sda format args = [] original_format = None removable = False wwn = 60014058847ce6dd73d4f01931e495d9 2025-06-28 
18:19:14,047 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sda ; 2025-06-28 18:19:14,047 DEBUG blivet/MainThread: no type or existing type for sda, bailing 2025-06-28 18:19:14,050 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdb ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/wwn-0x6001405c48b47cd2cda4408882faf8c6 ' '/dev/disk/by-diskseq/4 ' '/dev/disk/by-id/scsi-36001405c48b47cd2cda4408882faf8c6', 'DEVNAME': '/dev/sdb', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb', 'DEVTYPE': 'disk', 'DISKSEQ': '4', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk1', 'ID_MODEL_ENC': 'disk1\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': 'c48b47cd-2cda-4408-882f-af8c68d83c74', 'ID_SERIAL': '36001405c48b47cd2cda4408882faf8c6', 'ID_SERIAL_SHORT': '6001405c48b47cd2cda4408882faf8c6', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x6001405c48b47cd2', 'ID_WWN_VENDOR_EXTENSION': '0xcda4408882faf8c6', 'ID_WWN_WITH_EXTENSION': '0x6001405c48b47cd2cda4408882faf8c6', 'MAJOR': '8', 'MINOR': '16', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdb', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193440060'} ; 2025-06-28 18:19:14,050 INFO blivet/MainThread: scanning sdb (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb)... 
2025-06-28 18:19:14,053 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdb ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,056 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,061 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdb ; 2025-06-28 18:19:14,061 INFO blivet/MainThread: sdb is a disk 2025-06-28 18:19:14,062 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 8 2025-06-28 18:19:14,062 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 9 2025-06-28 18:19:14,065 DEBUG blivet/MainThread: DiskDevice._set_format: sdb ; type: None ; current: None ; 2025-06-28 18:19:14,068 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdb ; status: True ; 2025-06-28 18:19:14,068 DEBUG blivet/MainThread: sdb sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb 2025-06-28 18:19:14,072 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdb ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb ; 2025-06-28 18:19:14,072 DEBUG blivet/MainThread: updated sdb size to 3 GiB (3 GiB) 2025-06-28 18:19:14,072 INFO blivet/MainThread: added disk sdb (id 7) to device tree 2025-06-28 18:19:14,072 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c2710) -- name = sdb status = True id = 7 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 16 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb target size = 3 GiB path = /dev/sdb format args = [] original_format = None removable = False wwn = 6001405c48b47cd2cda4408882faf8c6 2025-06-28 18:19:14,076 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdb ; 2025-06-28 18:19:14,076 DEBUG blivet/MainThread: no type or existing type for sdb, bailing 
2025-06-28 18:19:14,079 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdk ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/scsi-3600140532b7553a2ede45408ac592f3f ' '/dev/disk/by-diskseq/13 ' '/dev/disk/by-id/wwn-0x600140532b7553a2ede45408ac592f3f', 'DEVNAME': '/dev/sdk', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk', 'DEVTYPE': 'disk', 'DISKSEQ': '13', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk10', 'ID_MODEL_ENC': 'disk10\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': '32b7553a-2ede-4540-8ac5-92f3f51fb680', 'ID_SERIAL': '3600140532b7553a2ede45408ac592f3f', 'ID_SERIAL_SHORT': '600140532b7553a2ede45408ac592f3f', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x600140532b7553a2', 'ID_WWN_VENDOR_EXTENSION': '0xede45408ac592f3f', 'ID_WWN_WITH_EXTENSION': '0x600140532b7553a2ede45408ac592f3f', 'MAJOR': '8', 'MINOR': '160', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdk', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '194002206'} ; 2025-06-28 18:19:14,079 INFO blivet/MainThread: scanning sdk (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk)... 
2025-06-28 18:19:14,082 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdk ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,085 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,090 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdk ; 2025-06-28 18:19:14,090 INFO blivet/MainThread: sdk is a disk 2025-06-28 18:19:14,090 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 13 2025-06-28 18:19:14,091 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 14 2025-06-28 18:19:14,094 DEBUG blivet/MainThread: DiskDevice._set_format: sdk ; type: None ; current: None ; 2025-06-28 18:19:14,097 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdk ; status: True ; 2025-06-28 18:19:14,097 DEBUG blivet/MainThread: sdk sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk 2025-06-28 18:19:14,101 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdk ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk ; 2025-06-28 18:19:14,101 DEBUG blivet/MainThread: updated sdk size to 3 GiB (3 GiB) 2025-06-28 18:19:14,101 INFO blivet/MainThread: added disk sdk (id 12) to device tree 2025-06-28 18:19:14,101 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c2350) -- name = sdk status = True id = 12 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 160 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk target size = 3 GiB path = /dev/sdk format args = [] original_format = None removable = False wwn = 600140532b7553a2ede45408ac592f3f 2025-06-28 18:19:14,105 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdk ; 2025-06-28 18:19:14,105 DEBUG blivet/MainThread: no type or existing type for sdk, 
bailing 2025-06-28 18:19:14,108 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdl ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/scsi-36001405d599e4585dd1440490d5145a0 ' '/dev/disk/by-diskseq/14 ' '/dev/disk/by-id/wwn-0x6001405d599e4585dd1440490d5145a0', 'DEVNAME': '/dev/sdl', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl', 'DEVTYPE': 'disk', 'DISKSEQ': '14', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk11', 'ID_MODEL_ENC': 'disk11\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': 'd599e458-5dd1-4404-90d5-145a01a2b253', 'ID_SERIAL': '36001405d599e4585dd1440490d5145a0', 'ID_SERIAL_SHORT': '6001405d599e4585dd1440490d5145a0', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x6001405d599e4585', 'ID_WWN_VENDOR_EXTENSION': '0xdd1440490d5145a0', 'ID_WWN_WITH_EXTENSION': '0x6001405d599e4585dd1440490d5145a0', 'MAJOR': '8', 'MINOR': '176', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdl', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '194051087'} ; 2025-06-28 18:19:14,108 INFO blivet/MainThread: scanning sdl (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl)... 
2025-06-28 18:19:14,111 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdl ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,114 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,119 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdl ; 2025-06-28 18:19:14,119 INFO blivet/MainThread: sdl is a disk 2025-06-28 18:19:14,119 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 18 2025-06-28 18:19:14,119 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 19 2025-06-28 18:19:14,123 DEBUG blivet/MainThread: DiskDevice._set_format: sdl ; type: None ; current: None ; 2025-06-28 18:19:14,126 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdl ; status: True ; 2025-06-28 18:19:14,126 DEBUG blivet/MainThread: sdl sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl 2025-06-28 18:19:14,130 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdl ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl ; 2025-06-28 18:19:14,130 DEBUG blivet/MainThread: updated sdl size to 3 GiB (3 GiB) 2025-06-28 18:19:14,130 INFO blivet/MainThread: added disk sdl (id 17) to device tree 2025-06-28 18:19:14,130 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c1950) -- name = sdl status = True id = 17 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 176 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl target size = 3 GiB path = /dev/sdl format args = [] original_format = None removable = False wwn = 6001405d599e4585dd1440490d5145a0 2025-06-28 18:19:14,133 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdl ; 2025-06-28 18:19:14,133 DEBUG blivet/MainThread: no type or existing type for sdl, 
bailing 2025-06-28 18:19:14,137 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdc ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/wwn-0x6001405582e0de585294686b36ae1d1e ' '/dev/disk/by-diskseq/5 ' '/dev/disk/by-id/scsi-36001405582e0de585294686b36ae1d1e', 'DEVNAME': '/dev/sdc', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc', 'DEVTYPE': 'disk', 'DISKSEQ': '5', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk2', 'ID_MODEL_ENC': 'disk2\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': '582e0de5-8529-4686-b36a-e1d1e173bd3f', 'ID_SERIAL': '36001405582e0de585294686b36ae1d1e', 'ID_SERIAL_SHORT': '6001405582e0de585294686b36ae1d1e', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x6001405582e0de58', 'ID_WWN_VENDOR_EXTENSION': '0x5294686b36ae1d1e', 'ID_WWN_WITH_EXTENSION': '0x6001405582e0de585294686b36ae1d1e', 'MAJOR': '8', 'MINOR': '32', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdc', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193503847'} ; 2025-06-28 18:19:14,137 INFO blivet/MainThread: scanning sdc (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc)... 
2025-06-28 18:19:14,140 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdc ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,143 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,148 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdc ; 2025-06-28 18:19:14,148 INFO blivet/MainThread: sdc is a disk 2025-06-28 18:19:14,148 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 23 2025-06-28 18:19:14,148 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 24 2025-06-28 18:19:14,152 DEBUG blivet/MainThread: DiskDevice._set_format: sdc ; type: None ; current: None ; 2025-06-28 18:19:14,155 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdc ; status: True ; 2025-06-28 18:19:14,155 DEBUG blivet/MainThread: sdc sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc 2025-06-28 18:19:14,158 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdc ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc ; 2025-06-28 18:19:14,159 DEBUG blivet/MainThread: updated sdc size to 3 GiB (3 GiB) 2025-06-28 18:19:14,159 INFO blivet/MainThread: added disk sdc (id 22) to device tree 2025-06-28 18:19:14,159 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c20d0) -- name = sdc status = True id = 22 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 32 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc target size = 3 GiB path = /dev/sdc format args = [] original_format = None removable = False wwn = 6001405582e0de585294686b36ae1d1e 2025-06-28 18:19:14,162 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdc ; 2025-06-28 18:19:14,162 DEBUG blivet/MainThread: no type or existing type for sdc, bailing 
2025-06-28 18:19:14,166 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdd ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-diskseq/6 ' '/dev/disk/by-id/scsi-36001405acd2ba9b1a974f55a9704061c ' '/dev/disk/by-id/wwn-0x6001405acd2ba9b1a974f55a9704061c', 'DEVNAME': '/dev/sdd', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd', 'DEVTYPE': 'disk', 'DISKSEQ': '6', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk3', 'ID_MODEL_ENC': 'disk3\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': 'acd2ba9b-1a97-4f55-a970-4061c67c3912', 'ID_SERIAL': '36001405acd2ba9b1a974f55a9704061c', 'ID_SERIAL_SHORT': '6001405acd2ba9b1a974f55a9704061c', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x6001405acd2ba9b1', 'ID_WWN_VENDOR_EXTENSION': '0xa974f55a9704061c', 'ID_WWN_WITH_EXTENSION': '0x6001405acd2ba9b1a974f55a9704061c', 'MAJOR': '8', 'MINOR': '48', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdd', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193576323'} ; 2025-06-28 18:19:14,166 INFO blivet/MainThread: scanning sdd (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd)... 
2025-06-28 18:19:14,169 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdd ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,172 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,177 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdd ; 2025-06-28 18:19:14,177 INFO blivet/MainThread: sdd is a disk 2025-06-28 18:19:14,177 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 28 2025-06-28 18:19:14,177 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 29 2025-06-28 18:19:14,181 DEBUG blivet/MainThread: DiskDevice._set_format: sdd ; type: None ; current: None ; 2025-06-28 18:19:14,184 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdd ; status: True ; 2025-06-28 18:19:14,184 DEBUG blivet/MainThread: sdd sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd 2025-06-28 18:19:14,188 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdd ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd ; 2025-06-28 18:19:14,188 DEBUG blivet/MainThread: updated sdd size to 3 GiB (3 GiB) 2025-06-28 18:19:14,188 INFO blivet/MainThread: added disk sdd (id 27) to device tree 2025-06-28 18:19:14,188 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c1f90) -- name = sdd status = True id = 27 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 48 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd target size = 3 GiB path = /dev/sdd format args = [] original_format = None removable = False wwn = 6001405acd2ba9b1a974f55a9704061c 2025-06-28 18:19:14,191 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdd ; 2025-06-28 18:19:14,191 DEBUG blivet/MainThread: no type or existing type for sdd, bailing 
2025-06-28 18:19:14,194 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sde ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/scsi-36001405378e6ca643c443e0b9c840399 ' '/dev/disk/by-diskseq/7 ' '/dev/disk/by-id/wwn-0x6001405378e6ca643c443e0b9c840399', 'DEVNAME': '/dev/sde', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde', 'DEVTYPE': 'disk', 'DISKSEQ': '7', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk4', 'ID_MODEL_ENC': 'disk4\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': '378e6ca6-43c4-43e0-b9c8-4039999596e3', 'ID_SERIAL': '36001405378e6ca643c443e0b9c840399', 'ID_SERIAL_SHORT': '6001405378e6ca643c443e0b9c840399', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x6001405378e6ca64', 'ID_WWN_VENDOR_EXTENSION': '0x3c443e0b9c840399', 'ID_WWN_WITH_EXTENSION': '0x6001405378e6ca643c443e0b9c840399', 'MAJOR': '8', 'MINOR': '64', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sde', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193646170'} ; 2025-06-28 18:19:14,195 INFO blivet/MainThread: scanning sde (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde)... 
2025-06-28 18:19:14,198 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sde ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,201 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,206 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sde ; 2025-06-28 18:19:14,206 INFO blivet/MainThread: sde is a disk 2025-06-28 18:19:14,206 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 33 2025-06-28 18:19:14,206 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 34 2025-06-28 18:19:14,209 DEBUG blivet/MainThread: DiskDevice._set_format: sde ; type: None ; current: None ; 2025-06-28 18:19:14,213 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sde ; status: True ; 2025-06-28 18:19:14,213 DEBUG blivet/MainThread: sde sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde 2025-06-28 18:19:14,216 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sde ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde ; 2025-06-28 18:19:14,216 DEBUG blivet/MainThread: updated sde size to 3 GiB (3 GiB) 2025-06-28 18:19:14,217 INFO blivet/MainThread: added disk sde (id 32) to device tree 2025-06-28 18:19:14,217 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c1e50) -- name = sde status = True id = 32 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 64 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde target size = 3 GiB path = /dev/sde format args = [] original_format = None removable = False wwn = 6001405378e6ca643c443e0b9c840399 2025-06-28 18:19:14,220 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sde ; 2025-06-28 18:19:14,220 DEBUG blivet/MainThread: no type or existing type for sde, bailing 
2025-06-28 18:19:14,223 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdf ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/wwn-0x6001405f858954a0e784149995a198ad ' '/dev/disk/by-diskseq/8 ' '/dev/disk/by-id/scsi-36001405f858954a0e784149995a198ad', 'DEVNAME': '/dev/sdf', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf', 'DEVTYPE': 'disk', 'DISKSEQ': '8', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk5', 'ID_MODEL_ENC': 'disk5\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': 'f858954a-0e78-4149-995a-198adb87ba16', 'ID_SERIAL': '36001405f858954a0e784149995a198ad', 'ID_SERIAL_SHORT': '6001405f858954a0e784149995a198ad', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x6001405f858954a0', 'ID_WWN_VENDOR_EXTENSION': '0xe784149995a198ad', 'ID_WWN_WITH_EXTENSION': '0x6001405f858954a0e784149995a198ad', 'MAJOR': '8', 'MINOR': '80', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdf', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193708083'} ; 2025-06-28 18:19:14,223 INFO blivet/MainThread: scanning sdf (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf)... 
2025-06-28 18:19:14,227 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdf ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,230 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,235 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdf ; 2025-06-28 18:19:14,235 INFO blivet/MainThread: sdf is a disk 2025-06-28 18:19:14,235 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 38 2025-06-28 18:19:14,235 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 39 2025-06-28 18:19:14,238 DEBUG blivet/MainThread: DiskDevice._set_format: sdf ; type: None ; current: None ; 2025-06-28 18:19:14,242 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdf ; status: True ; 2025-06-28 18:19:14,242 DEBUG blivet/MainThread: sdf sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf 2025-06-28 18:19:14,245 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdf ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf ; 2025-06-28 18:19:14,245 DEBUG blivet/MainThread: updated sdf size to 3 GiB (3 GiB) 2025-06-28 18:19:14,246 INFO blivet/MainThread: added disk sdf (id 37) to device tree 2025-06-28 18:19:14,246 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c1d10) -- name = sdf status = True id = 37 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 80 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf target size = 3 GiB path = /dev/sdf format args = [] original_format = None removable = False wwn = 6001405f858954a0e784149995a198ad 2025-06-28 18:19:14,249 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdf ; 2025-06-28 18:19:14,249 DEBUG blivet/MainThread: no type or existing type for sdf, bailing 
2025-06-28 18:19:14,253 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdg ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-diskseq/9 ' '/dev/disk/by-id/scsi-36001405a9efc0e1911c4201a970a6f85 ' '/dev/disk/by-id/wwn-0x6001405a9efc0e1911c4201a970a6f85', 'DEVNAME': '/dev/sdg', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg', 'DEVTYPE': 'disk', 'DISKSEQ': '9', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk6', 'ID_MODEL_ENC': 'disk6\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': 'a9efc0e1-911c-4201-a970-a6f858e624d6', 'ID_SERIAL': '36001405a9efc0e1911c4201a970a6f85', 'ID_SERIAL_SHORT': '6001405a9efc0e1911c4201a970a6f85', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x6001405a9efc0e19', 'ID_WWN_VENDOR_EXTENSION': '0x11c4201a970a6f85', 'ID_WWN_WITH_EXTENSION': '0x6001405a9efc0e1911c4201a970a6f85', 'MAJOR': '8', 'MINOR': '96', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdg', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193767499'} ; 2025-06-28 18:19:14,253 INFO blivet/MainThread: scanning sdg (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg)... 
2025-06-28 18:19:14,256 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdg ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,259 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,264 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdg ; 2025-06-28 18:19:14,264 INFO blivet/MainThread: sdg is a disk 2025-06-28 18:19:14,264 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 43 2025-06-28 18:19:14,264 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 44 2025-06-28 18:19:14,267 DEBUG blivet/MainThread: DiskDevice._set_format: sdg ; type: None ; current: None ; 2025-06-28 18:19:14,271 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdg ; status: True ; 2025-06-28 18:19:14,271 DEBUG blivet/MainThread: sdg sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg 2025-06-28 18:19:14,275 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdg ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg ; 2025-06-28 18:19:14,275 DEBUG blivet/MainThread: updated sdg size to 3 GiB (3 GiB) 2025-06-28 18:19:14,275 INFO blivet/MainThread: added disk sdg (id 42) to device tree 2025-06-28 18:19:14,275 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c1bd0) -- name = sdg status = True id = 42 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 96 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg target size = 3 GiB path = /dev/sdg format args = [] original_format = None removable = False wwn = 6001405a9efc0e1911c4201a970a6f85 2025-06-28 18:19:14,278 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdg ; 2025-06-28 18:19:14,278 DEBUG blivet/MainThread: no type or existing type for sdg, bailing 
2025-06-28 18:19:14,281 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdh ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/scsi-360014058181fbe60fbb48f6bf65e97b7 ' '/dev/disk/by-id/wwn-0x60014058181fbe60fbb48f6bf65e97b7 ' '/dev/disk/by-diskseq/10', 'DEVNAME': '/dev/sdh', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh', 'DEVTYPE': 'disk', 'DISKSEQ': '10', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk7', 'ID_MODEL_ENC': 'disk7\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': '8181fbe6-0fbb-48f6-bf65-e97b753d1f2b', 'ID_SERIAL': '360014058181fbe60fbb48f6bf65e97b7', 'ID_SERIAL_SHORT': '60014058181fbe60fbb48f6bf65e97b7', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x60014058181fbe60', 'ID_WWN_VENDOR_EXTENSION': '0xfbb48f6bf65e97b7', 'ID_WWN_WITH_EXTENSION': '0x60014058181fbe60fbb48f6bf65e97b7', 'MAJOR': '8', 'MINOR': '112', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdh', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193801085'} ; 2025-06-28 18:19:14,282 INFO blivet/MainThread: scanning sdh (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh)... 
2025-06-28 18:19:14,285 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdh ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,288 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,293 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdh ; 2025-06-28 18:19:14,293 INFO blivet/MainThread: sdh is a disk 2025-06-28 18:19:14,293 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 48 2025-06-28 18:19:14,293 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 49 2025-06-28 18:19:14,297 DEBUG blivet/MainThread: DiskDevice._set_format: sdh ; type: None ; current: None ; 2025-06-28 18:19:14,300 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdh ; status: True ; 2025-06-28 18:19:14,300 DEBUG blivet/MainThread: sdh sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh 2025-06-28 18:19:14,304 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdh ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh ; 2025-06-28 18:19:14,304 DEBUG blivet/MainThread: updated sdh size to 3 GiB (3 GiB) 2025-06-28 18:19:14,304 INFO blivet/MainThread: added disk sdh (id 47) to device tree 2025-06-28 18:19:14,304 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c1a90) -- name = sdh status = True id = 47 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 112 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh target size = 3 GiB path = /dev/sdh format args = [] original_format = None removable = False wwn = 60014058181fbe60fbb48f6bf65e97b7 2025-06-28 18:19:14,307 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdh ; 2025-06-28 18:19:14,308 DEBUG blivet/MainThread: no type or existing type for sdh, 
bailing 2025-06-28 18:19:14,311 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdi ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/scsi-3600140536dcfebe092746238bb16a3fa ' '/dev/disk/by-diskseq/11 ' '/dev/disk/by-id/wwn-0x600140536dcfebe092746238bb16a3fa', 'DEVNAME': '/dev/sdi', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi', 'DEVTYPE': 'disk', 'DISKSEQ': '11', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk8', 'ID_MODEL_ENC': 'disk8\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': '36dcfebe-0927-4623-8bb1-6a3fa2891f0c', 'ID_SERIAL': '3600140536dcfebe092746238bb16a3fa', 'ID_SERIAL_SHORT': '600140536dcfebe092746238bb16a3fa', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x600140536dcfebe0', 'ID_WWN_VENDOR_EXTENSION': '0x92746238bb16a3fa', 'ID_WWN_WITH_EXTENSION': '0x600140536dcfebe092746238bb16a3fa', 'MAJOR': '8', 'MINOR': '128', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdi', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193876095'} ; 2025-06-28 18:19:14,311 INFO blivet/MainThread: scanning sdi (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi)... 
2025-06-28 18:19:14,314 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdi ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,317 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,322 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdi ; 2025-06-28 18:19:14,322 INFO blivet/MainThread: sdi is a disk 2025-06-28 18:19:14,322 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 53 2025-06-28 18:19:14,322 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 54 2025-06-28 18:19:14,325 DEBUG blivet/MainThread: DiskDevice._set_format: sdi ; type: None ; current: None ; 2025-06-28 18:19:14,329 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdi ; status: True ; 2025-06-28 18:19:14,329 DEBUG blivet/MainThread: sdi sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi 2025-06-28 18:19:14,333 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdi ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi ; 2025-06-28 18:19:14,333 DEBUG blivet/MainThread: updated sdi size to 3 GiB (3 GiB) 2025-06-28 18:19:14,333 INFO blivet/MainThread: added disk sdi (id 52) to device tree 2025-06-28 18:19:14,333 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c3b10) -- name = sdi status = True id = 52 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 128 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi target size = 3 GiB path = /dev/sdi format args = [] original_format = None removable = False wwn = 600140536dcfebe092746238bb16a3fa 2025-06-28 18:19:14,336 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdi ; 2025-06-28 18:19:14,336 DEBUG blivet/MainThread: no type or existing type for sdi, 
bailing 2025-06-28 18:19:14,339 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdj ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-id/scsi-3600140580c834ee801b48198b71671c5 ' '/dev/disk/by-diskseq/12 ' '/dev/disk/by-id/wwn-0x600140580c834ee801b48198b71671c5', 'DEVNAME': '/dev/sdj', 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj', 'DEVTYPE': 'disk', 'DISKSEQ': '12', 'ID_BUS': 'scsi', 'ID_MODEL': 'disk9', 'ID_MODEL_ENC': 'disk9\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20\\x20', 'ID_REVISION': '4.0', 'ID_SCSI': '1', 'ID_SCSI_SERIAL': '80c834ee-801b-4819-8b71-671c5aee19bd', 'ID_SERIAL': '3600140580c834ee801b48198b71671c5', 'ID_SERIAL_SHORT': '600140580c834ee801b48198b71671c5', 'ID_TARGET_PORT': '0', 'ID_TYPE': 'disk', 'ID_VENDOR': 'LIO-ORG', 'ID_VENDOR_ENC': 'LIO-ORG\\x20', 'ID_WWN': '0x600140580c834ee8', 'ID_WWN_VENDOR_EXTENSION': '0x01b48198b71671c5', 'ID_WWN_WITH_EXTENSION': '0x600140580c834ee801b48198b71671c5', 'MAJOR': '8', 'MINOR': '144', 'SUBSYSTEM': 'block', 'SYS_NAME': 'sdj', 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '193941063'} ; 2025-06-28 18:19:14,340 INFO blivet/MainThread: scanning sdj (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj)... 
2025-06-28 18:19:14,343 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdj ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,346 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,351 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdj ; 2025-06-28 18:19:14,351 INFO blivet/MainThread: sdj is a disk 2025-06-28 18:19:14,351 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 58 2025-06-28 18:19:14,351 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 59 2025-06-28 18:19:14,354 DEBUG blivet/MainThread: DiskDevice._set_format: sdj ; type: None ; current: None ; 2025-06-28 18:19:14,358 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdj ; status: True ; 2025-06-28 18:19:14,358 DEBUG blivet/MainThread: sdj sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj 2025-06-28 18:19:14,362 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdj ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj ; 2025-06-28 18:19:14,362 DEBUG blivet/MainThread: updated sdj size to 3 GiB (3 GiB) 2025-06-28 18:19:14,362 INFO blivet/MainThread: added disk sdj (id 57) to device tree 2025-06-28 18:19:14,362 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c3c50) -- name = sdj status = True id = 57 children = [] parents = [] uuid = None size = 3 GiB format = existing None major = 8 minor = 144 exists = True protected = False sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj target size = 3 GiB path = /dev/sdj format args = [] original_format = None removable = False wwn = 600140580c834ee801b48198b71671c5 2025-06-28 18:19:14,366 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdj ; 2025-06-28 18:19:14,366 DEBUG blivet/MainThread: no type or existing type for sdj, 
bailing 2025-06-28 18:19:14,369 DEBUG blivet/MainThread: DeviceTree.handle_device: name: xvda ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-diskseq/1', 'DEVNAME': '/dev/xvda', 'DEVPATH': '/devices/vbd-768/block/xvda', 'DEVTYPE': 'disk', 'DISKSEQ': '1', 'ID_PART_TABLE_TYPE': 'gpt', 'ID_PART_TABLE_UUID': '91c3c0f1-4957-4f21-b15a-28e9016b79c2', 'MAJOR': '202', 'MINOR': '0', 'SUBSYSTEM': 'block', 'SYS_NAME': 'xvda', 'SYS_PATH': '/sys/devices/vbd-768/block/xvda', 'TAGS': ':systemd:', 'USEC_INITIALIZED': '8070613'} ; 2025-06-28 18:19:14,369 INFO blivet/MainThread: scanning xvda (/sys/devices/vbd-768/block/xvda)... 2025-06-28 18:19:14,372 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,375 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,379 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: xvda ; 2025-06-28 18:19:14,379 WARNING blivet/MainThread: device/vendor is not a valid attribute 2025-06-28 18:19:14,379 WARNING blivet/MainThread: device/model is not a valid attribute 2025-06-28 18:19:14,379 INFO blivet/MainThread: xvda is a disk 2025-06-28 18:19:14,379 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 63 2025-06-28 18:19:14,379 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 64 2025-06-28 18:19:14,383 DEBUG blivet/MainThread: DiskDevice._set_format: xvda ; type: None ; current: None ; 2025-06-28 18:19:14,386 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: xvda ; status: True ; 2025-06-28 18:19:14,386 DEBUG blivet/MainThread: xvda sysfs_path set to /sys/devices/vbd-768/block/xvda 2025-06-28 18:19:14,389 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/xvda ; sysfs_path: /sys/devices/vbd-768/block/xvda ; 2025-06-28 18:19:14,389 DEBUG blivet/MainThread: updated xvda size to 250 GiB (250 GiB) 2025-06-28 
18:19:14,390 INFO blivet/MainThread: added disk xvda (id 62) to device tree 2025-06-28 18:19:14,390 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c3d90) -- name = xvda status = True id = 62 children = [] parents = [] uuid = None size = 250 GiB format = existing None major = 202 minor = 0 exists = True protected = False sysfs path = /sys/devices/vbd-768/block/xvda target size = 250 GiB path = /dev/xvda format args = [] original_format = None removable = False wwn = None 2025-06-28 18:19:14,393 DEBUG blivet/MainThread: DeviceTree.handle_format: name: xvda ; 2025-06-28 18:19:14,397 DEBUG blivet/MainThread: EFIFS.supported: supported: True ; 2025-06-28 18:19:14,397 DEBUG blivet/MainThread: get_format('efi') returning EFIFS instance with object id 66 2025-06-28 18:19:14,401 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ; 2025-06-28 18:19:14,401 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 67 2025-06-28 18:19:14,404 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ; 2025-06-28 18:19:14,404 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 68 2025-06-28 18:19:14,408 DEBUG blivet/MainThread: DiskLabelFormatPopulator.run: device: xvda ; label_type: gpt ; 2025-06-28 18:19:14,411 DEBUG blivet/MainThread: DiskDevice.setup: xvda ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:14,414 DEBUG blivet/MainThread: DiskLabel.__init__: uuid: 91c3c0f1-4957-4f21-b15a-28e9016b79c2 ; label: None ; device: /dev/xvda ; serial: None ; exists: True ; 2025-06-28 18:19:14,428 DEBUG blivet/MainThread: Set pmbr_boot on parted.Disk instance -- type: gpt primaryPartitionCount: 2 lastPartitionNumber: 2 maxPrimaryPartitionCount: 128 partitions: [, ] device: PedDisk: <_ped.Disk object at 0x7fbad2382040> 2025-06-28 18:19:14,487 DEBUG blivet/MainThread: get_format('disklabel') returning DiskLabel instance with object id 69 2025-06-28 18:19:14,491 DEBUG 
blivet/MainThread: DiskDevice._set_format: xvda ; type: disklabel ; current: None ; 2025-06-28 18:19:14,491 INFO blivet/MainThread: got format: existing gpt disklabel 2025-06-28 18:19:14,491 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:14,494 DEBUG blivet/MainThread: DeviceTree.handle_device: name: xvda1 ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-partuuid/fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda ' '/dev/disk/by-diskseq/1-part1', 'DEVNAME': '/dev/xvda1', 'DEVPATH': '/devices/vbd-768/block/xvda/xvda1', 'DEVTYPE': 'partition', 'DISKSEQ': '1', 'ID_PART_ENTRY_DISK': '202:0', 'ID_PART_ENTRY_NUMBER': '1', 'ID_PART_ENTRY_OFFSET': '2048', 'ID_PART_ENTRY_SCHEME': 'gpt', 'ID_PART_ENTRY_SIZE': '2048', 'ID_PART_ENTRY_TYPE': '21686148-6449-6e6f-744e-656564454649', 'ID_PART_ENTRY_UUID': 'fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda', 'ID_PART_TABLE_TYPE': 'gpt', 'ID_PART_TABLE_UUID': '91c3c0f1-4957-4f21-b15a-28e9016b79c2', 'MAJOR': '202', 'MINOR': '1', 'PARTN': '1', 'PARTUUID': 'fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda', 'SUBSYSTEM': 'block', 'SYS_NAME': 'xvda1', 'SYS_PATH': '/sys/devices/vbd-768/block/xvda/xvda1', 'TAGS': ':systemd:', 'UDISKS_IGNORE': '1', 'USEC_INITIALIZED': '8070648'} ; 2025-06-28 18:19:14,495 INFO blivet/MainThread: scanning xvda1 (/sys/devices/vbd-768/block/xvda/xvda1)... 
2025-06-28 18:19:14,495 WARNING blivet/MainThread: hidden is not a valid attribute 2025-06-28 18:19:14,498 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda1 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,501 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,504 DEBUG blivet/MainThread: PartitionDevicePopulator.run: name: xvda1 ; 2025-06-28 18:19:14,507 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda1 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,510 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,510 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:14,523 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:14,537 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,540 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 250 GiB disk xvda (62) with existing gpt disklabel 2025-06-28 18:19:14,541 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:14,541 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 72 2025-06-28 18:19:14,545 DEBUG blivet/MainThread: DiskDevice.add_child: name: xvda ; child: xvda1 ; kids: 0 ; 2025-06-28 18:19:14,545 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 73 2025-06-28 18:19:14,548 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda1 ; type: None ; current: None ; 2025-06-28 18:19:14,552 DEBUG blivet/MainThread: PartitionDevice.update_sysfs_path: xvda1 ; status: True ; 2025-06-28 
18:19:14,552 DEBUG blivet/MainThread: xvda1 sysfs_path set to /sys/devices/vbd-768/block/xvda/xvda1 2025-06-28 18:19:14,556 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda1 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda1 ; 2025-06-28 18:19:14,556 DEBUG blivet/MainThread: updated xvda1 size to 1024 KiB (1024 KiB) 2025-06-28 18:19:14,556 DEBUG blivet/MainThread: looking up parted Partition: /dev/xvda1 2025-06-28 18:19:14,560 DEBUG blivet/MainThread: PartitionDevice.probe: xvda1 ; exists: True ; 2025-06-28 18:19:14,563 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda1 ; flag: 1 ; 2025-06-28 18:19:14,567 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda1 ; flag: 10 ; 2025-06-28 18:19:14,570 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda1 ; flag: 12 ; 2025-06-28 18:19:14,570 DEBUG blivet/MainThread: get_format('biosboot') returning BIOSBoot instance with object id 75 2025-06-28 18:19:14,574 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda1 ; type: biosboot ; current: None ; 2025-06-28 18:19:14,577 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda1 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda1 ; 2025-06-28 18:19:14,578 DEBUG blivet/MainThread: updated xvda1 size to 1024 KiB (1024 KiB) 2025-06-28 18:19:14,578 INFO blivet/MainThread: added partition xvda1 (id 71) to device tree 2025-06-28 18:19:14,578 INFO blivet/MainThread: got device: PartitionDevice instance (0x7fbad4065e80) -- name = xvda1 status = True id = 71 children = [] parents = ['existing 250 GiB disk xvda (62) with existing gpt disklabel'] uuid = fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda size = 1024 KiB format = existing biosboot major = 202 minor = 1 exists = True protected = False sysfs path = /sys/devices/vbd-768/block/xvda/xvda1 target size = 1024 KiB path = /dev/xvda1 format args = [] original_format = None grow = None max size = 0 B bootable = None part 
type = 0 primary = None start sector = None end sector = None parted_partition = parted.Partition instance -- disk: fileSystem: None number: 1 path: /dev/xvda1 type: 0 name: active: True busy: False geometry: PedPartition: <_ped.Partition object at 0x7fbad23865c0> disk = existing 250 GiB disk xvda (62) with existing gpt disklabel start = 2048 end = 4095 length = 2048 flags = bios_grub type_uuid = 21686148-6449-6e6f-744e-656564454649 2025-06-28 18:19:14,582 DEBUG blivet/MainThread: DeviceTree.handle_format: name: xvda1 ; 2025-06-28 18:19:14,582 DEBUG blivet/MainThread: no type or existing type for xvda1, bailing 2025-06-28 18:19:14,586 DEBUG blivet/MainThread: DeviceTree.handle_device: name: xvda2 ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-diskseq/1-part2 ' '/dev/disk/by-uuid/8959a9f3-59d4-4eb7-8e53-e856bbc805e9 ' '/dev/disk/by-partuuid/782cc2d2-7936-4e3f-9cb4-9758a83f53fa', 'DEVNAME': '/dev/xvda2', 'DEVPATH': '/devices/vbd-768/block/xvda/xvda2', 'DEVTYPE': 'partition', 'DISKSEQ': '1', 'ID_FS_BLOCKSIZE': '4096', 'ID_FS_LASTBLOCK': '65535483', 'ID_FS_SIZE': '268433338368', 'ID_FS_TYPE': 'ext4', 'ID_FS_USAGE': 'filesystem', 'ID_FS_UUID': '8959a9f3-59d4-4eb7-8e53-e856bbc805e9', 'ID_FS_UUID_ENC': '8959a9f3-59d4-4eb7-8e53-e856bbc805e9', 'ID_FS_VERSION': '1.0', 'ID_PART_ENTRY_DISK': '202:0', 'ID_PART_ENTRY_NUMBER': '2', 'ID_PART_ENTRY_OFFSET': '4096', 'ID_PART_ENTRY_SCHEME': 'gpt', 'ID_PART_ENTRY_SIZE': '524283871', 'ID_PART_ENTRY_TYPE': '0fc63daf-8483-4772-8e79-3d69d8477de4', 'ID_PART_ENTRY_UUID': '782cc2d2-7936-4e3f-9cb4-9758a83f53fa', 'ID_PART_TABLE_TYPE': 'gpt', 'ID_PART_TABLE_UUID': '91c3c0f1-4957-4f21-b15a-28e9016b79c2', 'MAJOR': '202', 'MINOR': '2', 'PARTN': '2', 'PARTUUID': '782cc2d2-7936-4e3f-9cb4-9758a83f53fa', 'SUBSYSTEM': 'block', 'SYS_NAME': 'xvda2', 'SYS_PATH': '/sys/devices/vbd-768/block/xvda/xvda2', 'TAGS': ':systemd:', 'UDISKS_AUTO': '0', 'USEC_INITIALIZED': '8070794'} ; 2025-06-28 18:19:14,586 INFO blivet/MainThread: scanning xvda2 
(/sys/devices/vbd-768/block/xvda/xvda2)... 2025-06-28 18:19:14,586 WARNING blivet/MainThread: hidden is not a valid attribute 2025-06-28 18:19:14,589 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda2 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,592 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,596 DEBUG blivet/MainThread: PartitionDevicePopulator.run: name: xvda2 ; 2025-06-28 18:19:14,599 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda2 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,602 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,602 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:14,614 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:14,628 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,632 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 250 GiB disk xvda (62) with existing gpt disklabel 2025-06-28 18:19:14,632 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:14,632 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 78 2025-06-28 18:19:14,636 DEBUG blivet/MainThread: DiskDevice.add_child: name: xvda ; child: xvda2 ; kids: 1 ; 2025-06-28 18:19:14,636 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 79 2025-06-28 18:19:14,639 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda2 ; type: None ; current: None ; 2025-06-28 18:19:14,642 DEBUG blivet/MainThread: 
PartitionDevice.update_sysfs_path: xvda2 ; status: True ; 2025-06-28 18:19:14,642 DEBUG blivet/MainThread: xvda2 sysfs_path set to /sys/devices/vbd-768/block/xvda/xvda2 2025-06-28 18:19:14,646 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda2 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda2 ; 2025-06-28 18:19:14,647 DEBUG blivet/MainThread: updated xvda2 size to 250 GiB (250 GiB) 2025-06-28 18:19:14,647 DEBUG blivet/MainThread: looking up parted Partition: /dev/xvda2 2025-06-28 18:19:14,650 DEBUG blivet/MainThread: PartitionDevice.probe: xvda2 ; exists: True ; 2025-06-28 18:19:14,653 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda2 ; flag: 1 ; 2025-06-28 18:19:14,657 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda2 ; flag: 10 ; 2025-06-28 18:19:14,663 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda2 ; flag: 12 ; 2025-06-28 18:19:14,666 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda2 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda2 ; 2025-06-28 18:19:14,666 DEBUG blivet/MainThread: updated xvda2 size to 250 GiB (250 GiB) 2025-06-28 18:19:14,666 INFO blivet/MainThread: added partition xvda2 (id 77) to device tree 2025-06-28 18:19:14,666 INFO blivet/MainThread: got device: PartitionDevice instance (0x7fbad2360910) -- name = xvda2 status = True id = 77 children = [] parents = ['existing 250 GiB disk xvda (62) with existing gpt disklabel'] uuid = 782cc2d2-7936-4e3f-9cb4-9758a83f53fa size = 250 GiB format = existing None major = 202 minor = 2 exists = True protected = False sysfs path = /sys/devices/vbd-768/block/xvda/xvda2 target size = 250 GiB path = /dev/xvda2 format args = [] original_format = None grow = None max size = 0 B bootable = None part type = 0 primary = None start sector = None end sector = None parted_partition = parted.Partition instance -- disk: fileSystem: number: 2 path: /dev/xvda2 type: 0 name: active: 
True busy: True geometry: PedPartition: <_ped.Partition object at 0x7fbad2387010> disk = existing 250 GiB disk xvda (62) with existing gpt disklabel start = 4096 end = 524287966 length = 524283871 flags = type_uuid = 0fc63daf-8483-4772-8e79-3d69d8477de4 2025-06-28 18:19:14,670 DEBUG blivet/MainThread: DeviceTree.handle_format: name: xvda2 ; 2025-06-28 18:19:14,673 DEBUG blivet/MainThread: EFIFS.supported: supported: True ; 2025-06-28 18:19:14,673 DEBUG blivet/MainThread: get_format('efi') returning EFIFS instance with object id 81 2025-06-28 18:19:14,677 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ; 2025-06-28 18:19:14,677 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 82 2025-06-28 18:19:14,681 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ; 2025-06-28 18:19:14,681 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 83 2025-06-28 18:19:14,682 WARNING blivet/MainThread: Stratis DBus service is not running 2025-06-28 18:19:14,682 INFO blivet/MainThread: type detected on 'xvda2' is 'ext4' 2025-06-28 18:19:14,686 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ; 2025-06-28 18:19:14,686 DEBUG blivet/MainThread: get_format('ext4') returning Ext4FS instance with object id 84 2025-06-28 18:19:14,689 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda2 ; type: ext4 ; current: None ; 2025-06-28 18:19:14,689 INFO blivet/MainThread: got format: existing ext4 filesystem 2025-06-28 18:19:14,693 DEBUG blivet/MainThread: DeviceTree.handle_device: name: zram0 ; info: {'CURRENT_TAGS': ':systemd:', 'DEVLINKS': '/dev/disk/by-uuid/9e4b39b6-8d8e-46c1-8981-c482cb670ee6 ' '/dev/disk/by-label/zram0 /dev/disk/by-diskseq/2', 'DEVNAME': '/dev/zram0', 'DEVPATH': '/devices/virtual/block/zram0', 'DEVTYPE': 'disk', 'DISKSEQ': '2', 'ID_FS_BLOCKSIZE': '4096', 'ID_FS_LABEL': 'zram0', 'ID_FS_LABEL_ENC': 'zram0', 'ID_FS_LASTBLOCK': '950784', 'ID_FS_SIZE': '3894407168', 
'ID_FS_TYPE': 'swap', 'ID_FS_USAGE': 'other', 'ID_FS_UUID': '9e4b39b6-8d8e-46c1-8981-c482cb670ee6', 'ID_FS_UUID_ENC': '9e4b39b6-8d8e-46c1-8981-c482cb670ee6', 'ID_FS_VERSION': '1', 'MAJOR': '251', 'MINOR': '0', 'SUBSYSTEM': 'block', 'SYS_NAME': 'zram0', 'SYS_PATH': '/sys/devices/virtual/block/zram0', 'TAGS': ':systemd:', 'UDISKS_IGNORE': '1', 'USEC_INITIALIZED': '8070909'} ; 2025-06-28 18:19:14,693 INFO blivet/MainThread: scanning zram0 (/sys/devices/virtual/block/zram0)... 2025-06-28 18:19:14,696 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: zram0 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,699 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,699 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/udev.py:1087: DeprecationWarning: Will be removed in 1.0. Access properties with Device.properties. while device: 2025-06-28 18:19:14,703 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: zram0 ; 2025-06-28 18:19:14,703 WARNING blivet/MainThread: device/vendor is not a valid attribute 2025-06-28 18:19:14,703 WARNING blivet/MainThread: device/model is not a valid attribute 2025-06-28 18:19:14,703 INFO blivet/MainThread: zram0 is a disk 2025-06-28 18:19:14,703 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 87 2025-06-28 18:19:14,703 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 88 2025-06-28 18:19:14,707 DEBUG blivet/MainThread: DiskDevice._set_format: zram0 ; type: None ; current: None ; 2025-06-28 18:19:14,710 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: zram0 ; status: True ; 2025-06-28 18:19:14,710 DEBUG blivet/MainThread: zram0 sysfs_path set to /sys/devices/virtual/block/zram0 2025-06-28 18:19:14,713 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/zram0 ; sysfs_path: /sys/devices/virtual/block/zram0 ; 2025-06-28 18:19:14,714 DEBUG 
blivet/MainThread: updated zram0 size to 3.63 GiB (3.63 GiB) 2025-06-28 18:19:14,714 INFO blivet/MainThread: added disk zram0 (id 86) to device tree 2025-06-28 18:19:14,714 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad2362c10) -- name = zram0 status = True id = 86 children = [] parents = [] uuid = None size = 3.63 GiB format = existing None major = 251 minor = 0 exists = True protected = False sysfs path = /sys/devices/virtual/block/zram0 target size = 3.63 GiB path = /dev/zram0 format args = [] original_format = None removable = False wwn = None 2025-06-28 18:19:14,717 DEBUG blivet/MainThread: DeviceTree.handle_format: name: zram0 ; 2025-06-28 18:19:14,721 DEBUG blivet/MainThread: EFIFS.supported: supported: True ; 2025-06-28 18:19:14,721 DEBUG blivet/MainThread: get_format('efi') returning EFIFS instance with object id 90 2025-06-28 18:19:14,725 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ; 2025-06-28 18:19:14,725 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 91 2025-06-28 18:19:14,728 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ; 2025-06-28 18:19:14,728 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 92 2025-06-28 18:19:14,728 INFO blivet/MainThread: type detected on 'zram0' is 'swap' 2025-06-28 18:19:14,732 DEBUG blivet/MainThread: SwapSpace.__init__: uuid: 9e4b39b6-8d8e-46c1-8981-c482cb670ee6 ; label: zram0 ; device: /dev/zram0 ; serial: None ; exists: True ; 2025-06-28 18:19:14,732 DEBUG blivet/MainThread: get_format('swap') returning SwapSpace instance with object id 93 2025-06-28 18:19:14,735 DEBUG blivet/MainThread: DiskDevice._set_format: zram0 ; type: swap ; current: None ; 2025-06-28 18:19:14,735 INFO blivet/MainThread: got format: existing swap 2025-06-28 18:19:14,736 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:14,749 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:14,760 INFO blivet/MainThread: edd: MBR signature on xvda is zero. new disk image? 2025-06-28 18:19:14,760 INFO blivet/MainThread: edd: collected mbr signatures: {} 2025-06-28 18:19:14,760 DEBUG blivet/MainThread: resolved 'UUID=8959a9f3-59d4-4eb7-8e53-e856bbc805e9' to 'xvda2' (partition) 2025-06-28 18:19:14,764 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,767 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,770 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,773 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None 2025-06-28 18:19:14,773 DEBUG blivet/MainThread: failed to resolve '/dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat' 2025-06-28 18:19:14,776 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: nest.test.redhat.com:/mnt/qa ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,779 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,783 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/nest.test.redhat.com:/mnt/qa ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,786 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None 2025-06-28 18:19:14,786 DEBUG blivet/MainThread: failed to resolve '/dev/nest.test.redhat.com:/mnt/qa' 2025-06-28 18:19:14,789 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: vtap-eng01.storage.rdu2.redhat.com:/vol/engarchive ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,791 
DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,794 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/vtap-eng01.storage.rdu2.redhat.com:/vol/engarchive ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,797 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None 2025-06-28 18:19:14,797 DEBUG blivet/MainThread: failed to resolve '/dev/vtap-eng01.storage.rdu2.redhat.com:/vol/engarchive' 2025-06-28 18:19:14,800 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: nest.test.redhat.com:/mnt/tpsdist ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,804 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,807 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/nest.test.redhat.com:/mnt/tpsdist ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,810 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None 2025-06-28 18:19:14,810 DEBUG blivet/MainThread: failed to resolve '/dev/nest.test.redhat.com:/mnt/tpsdist' 2025-06-28 18:19:14,813 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat/brewroot ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,816 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,819 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat/brewroot ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,822 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None 2025-06-28 18:19:14,822 DEBUG blivet/MainThread: failed to resolve '/dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat/brewroot' 2025-06-28 18:19:14,825 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: 
ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_devops_brew_scratch_nfs_sm/scratch ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,828 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,831 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_devops_brew_scratch_nfs_sm/scratch ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,834 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None 2025-06-28 18:19:14,834 DEBUG blivet/MainThread: failed to resolve '/dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_devops_brew_scratch_nfs_sm/scratch' 2025-06-28 18:19:14,837 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg1 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,840 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,843 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg1 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,846 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None 2025-06-28 18:19:14,846 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg1' 2025-06-28 18:19:14,849 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sda ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,852 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sda (2) 2025-06-28 18:19:14,852 DEBUG blivet/MainThread: resolved 'sda' to 'sda' (disk) 2025-06-28 18:19:14,855 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdb ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,858 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sdb (7) 2025-06-28 18:19:14,858 DEBUG blivet/MainThread: resolved 'sdb' to 'sdb' (disk) 2025-06-28 18:19:14,861 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdc ; 
incomplete: False ; hidden: False ; 2025-06-28 18:19:14,864 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sdc (22) 2025-06-28 18:19:14,864 DEBUG blivet/MainThread: resolved 'sdc' to 'sdc' (disk) 2025-06-28 18:19:14,864 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:14,867 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__: 2025-06-28 18:19:14,867 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 95 2025-06-28 18:19:14,868 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 98 2025-06-28 18:19:14,872 DEBUG blivet/MainThread: DiskDevice._set_format: sda ; type: None ; current: None ; 2025-06-28 18:19:14,872 INFO blivet/MainThread: registered action: [96] destroy format None on disk sda (id 2) 2025-06-28 18:19:14,877 DEBUG blivet/MainThread: DiskDevice._set_format: sda ; type: lvmpv ; current: None ; 2025-06-28 18:19:14,877 INFO blivet/MainThread: registered action: [97] create format lvmpv on disk sda (id 2) 2025-06-28 18:19:14,882 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__: 2025-06-28 18:19:14,882 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 99 2025-06-28 18:19:14,882 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 102 2025-06-28 18:19:14,885 DEBUG blivet/MainThread: DiskDevice._set_format: sdb ; type: None ; current: None ; 2025-06-28 18:19:14,885 INFO blivet/MainThread: registered action: [100] destroy format None on disk sdb (id 7) 2025-06-28 18:19:14,888 DEBUG blivet/MainThread: DiskDevice._set_format: sdb ; type: lvmpv ; 
current: None ; 2025-06-28 18:19:14,888 INFO blivet/MainThread: registered action: [101] create format lvmpv on disk sdb (id 7) 2025-06-28 18:19:14,891 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__: 2025-06-28 18:19:14,891 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 103 2025-06-28 18:19:14,892 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 106 2025-06-28 18:19:14,895 DEBUG blivet/MainThread: DiskDevice._set_format: sdc ; type: None ; current: None ; 2025-06-28 18:19:14,895 INFO blivet/MainThread: registered action: [104] destroy format None on disk sdc (id 22) 2025-06-28 18:19:14,898 DEBUG blivet/MainThread: DiskDevice._set_format: sdc ; type: lvmpv ; current: None ; 2025-06-28 18:19:14,898 INFO blivet/MainThread: registered action: [105] create format lvmpv on disk sdc (id 22) 2025-06-28 18:19:14,898 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 108 2025-06-28 18:19:14,903 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg1 ; parent: sda ; 2025-06-28 18:19:14,906 DEBUG blivet/MainThread: DiskDevice.add_child: name: sda ; child: test_vg1 ; kids: 0 ; 2025-06-28 18:19:14,910 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg1 ; parent: sdb ; 2025-06-28 18:19:14,913 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdb ; child: test_vg1 ; kids: 0 ; 2025-06-28 18:19:14,917 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg1 ; parent: sdc ; 2025-06-28 18:19:14,921 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdc ; child: test_vg1 ; kids: 0 ; 2025-06-28 18:19:14,921 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 109 2025-06-28 18:19:14,924 DEBUG blivet/MainThread: LVMVolumeGroupDevice._set_format: test_vg1 ; type: None ; current: None ; 2025-06-28 18:19:14,925 INFO blivet/MainThread: added lvmvg test_vg1 (id 107) to device 
tree 2025-06-28 18:19:14,925 INFO blivet/MainThread: registered action: [111] create device lvmvg test_vg1 (id 107) 2025-06-28 18:19:14,928 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg1-lv1 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,931 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,935 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg1-lv1 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,938 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None 2025-06-28 18:19:14,939 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg1-lv1' 2025-06-28 18:19:14,939 DEBUG blivet/MainThread: test_vg1 size is 8.99 GiB 2025-06-28 18:19:14,939 DEBUG blivet/MainThread: vg test_vg1 has 8.99 GiB free 2025-06-28 18:19:14,939 DEBUG blivet.ansible/MainThread: size: 1.35 GiB ; -565.028901734104 2025-06-28 18:19:14,939 DEBUG blivet/MainThread: test_vg1 size is 8.99 GiB 2025-06-28 18:19:14,940 DEBUG blivet/MainThread: vg test_vg1 has 8.99 GiB free 2025-06-28 18:19:14,943 DEBUG blivet/MainThread: XFS.supported: supported: True ; 2025-06-28 18:19:14,943 INFO program/MainThread: [libmkod] custom logging function 0x7fbad4c89570 registered 2025-06-28 18:19:14,943 INFO program/MainThread: [libmkod] context 0x562602e737f0 released 2025-06-28 18:19:14,943 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 112 2025-06-28 18:19:14,946 DEBUG blivet/MainThread: XFS.supported: supported: True ; 2025-06-28 18:19:14,947 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 114 2025-06-28 18:19:14,950 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg1 ; child: lv1 ; kids: 0 ; 2025-06-28 18:19:14,955 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg1-lv1 ; type: xfs ; current: None ; 2025-06-28 18:19:14,955 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat 
instance with object id 116 2025-06-28 18:19:14,959 DEBUG blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg1 ; child: lv1 ; kids: 1 ; 2025-06-28 18:19:14,962 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg1 ; child: lv1 ; kids: 0 ; 2025-06-28 18:19:14,965 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg1-lv1 ; type: xfs ; current: None ; 2025-06-28 18:19:14,969 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg1-lv1 ; sysfs_path: ; 2025-06-28 18:19:14,969 DEBUG blivet/MainThread: test_vg1 size is 8.99 GiB 2025-06-28 18:19:14,970 DEBUG blivet/MainThread: vg test_vg1 has 8.99 GiB free 2025-06-28 18:19:14,970 DEBUG blivet/MainThread: Adding test_vg1-lv1/1.35 GiB to test_vg1 2025-06-28 18:19:14,970 INFO blivet/MainThread: added lvmlv test_vg1-lv1 (id 113) to device tree 2025-06-28 18:19:14,970 INFO blivet/MainThread: registered action: [118] create device lvmlv test_vg1-lv1 (id 113) 2025-06-28 18:19:14,970 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 120 2025-06-28 18:19:14,974 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg1-lv1 ; type: xfs ; current: xfs ; 2025-06-28 18:19:14,974 INFO blivet/MainThread: registered action: [119] create format xfs filesystem on lvmlv test_vg1-lv1 (id 113) 2025-06-28 18:19:14,978 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg1-lv2 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,981 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:14,983 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg1-lv2 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:14,987 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None 2025-06-28 18:19:14,987 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg1-lv2' 2025-06-28 18:19:14,987 DEBUG blivet/MainThread: test_vg1 
size is 8.99 GiB 2025-06-28 18:19:14,987 DEBUG blivet/MainThread: vg test_vg1 has 7.64 GiB free 2025-06-28 18:19:14,987 DEBUG blivet.ansible/MainThread: size: 4.5 GiB ; -69.85230234578627 2025-06-28 18:19:14,988 DEBUG blivet/MainThread: test_vg1 size is 8.99 GiB 2025-06-28 18:19:14,988 DEBUG blivet/MainThread: vg test_vg1 has 7.64 GiB free 2025-06-28 18:19:14,992 DEBUG blivet/MainThread: XFS.supported: supported: True ; 2025-06-28 18:19:14,992 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 121 2025-06-28 18:19:14,995 DEBUG blivet/MainThread: XFS.supported: supported: True ; 2025-06-28 18:19:14,995 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 123 2025-06-28 18:19:14,999 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg1 ; child: lv2 ; kids: 1 ; 2025-06-28 18:19:15,002 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg1-lv2 ; type: xfs ; current: None ; 2025-06-28 18:19:15,002 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 125 2025-06-28 18:19:15,007 DEBUG blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg1 ; child: lv2 ; kids: 2 ; 2025-06-28 18:19:15,010 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg1 ; child: lv2 ; kids: 1 ; 2025-06-28 18:19:15,014 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg1-lv2 ; type: xfs ; current: None ; 2025-06-28 18:19:15,018 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg1-lv2 ; sysfs_path: ; 2025-06-28 18:19:15,018 DEBUG blivet/MainThread: test_vg1 size is 8.99 GiB 2025-06-28 18:19:15,018 DEBUG blivet/MainThread: vg test_vg1 has 7.64 GiB free 2025-06-28 18:19:15,018 DEBUG blivet/MainThread: Adding test_vg1-lv2/4.5 GiB to test_vg1 2025-06-28 18:19:15,018 INFO blivet/MainThread: added lvmlv test_vg1-lv2 (id 122) to device tree 2025-06-28 18:19:15,018 INFO 
blivet/MainThread: registered action: [127] create device lvmlv test_vg1-lv2 (id 122) 2025-06-28 18:19:15,018 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 129 2025-06-28 18:19:15,022 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg1-lv2 ; type: xfs ; current: xfs ; 2025-06-28 18:19:15,022 INFO blivet/MainThread: registered action: [128] create format xfs filesystem on lvmlv test_vg1-lv2 (id 122) 2025-06-28 18:19:15,025 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg2 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:15,029 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:15,031 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg2 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:15,034 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None 2025-06-28 18:19:15,034 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg2' 2025-06-28 18:19:15,037 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdd ; incomplete: False ; hidden: False ; 2025-06-28 18:19:15,040 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sdd (27) 2025-06-28 18:19:15,040 DEBUG blivet/MainThread: resolved 'sdd' to 'sdd' (disk) 2025-06-28 18:19:15,044 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sde ; incomplete: False ; hidden: False ; 2025-06-28 18:19:15,047 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sde (32) 2025-06-28 18:19:15,047 DEBUG blivet/MainThread: resolved 'sde' to 'sde' (disk) 2025-06-28 18:19:15,051 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdf ; incomplete: False ; hidden: False ; 2025-06-28 18:19:15,053 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sdf (37) 2025-06-28 18:19:15,054 DEBUG blivet/MainThread: resolved 'sdf' to 'sdf' (disk) 2025-06-28 
18:19:15,056 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__: 2025-06-28 18:19:15,057 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 130 2025-06-28 18:19:15,057 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 133 2025-06-28 18:19:15,060 DEBUG blivet/MainThread: DiskDevice._set_format: sdd ; type: None ; current: None ; 2025-06-28 18:19:15,060 INFO blivet/MainThread: registered action: [131] destroy format None on disk sdd (id 27) 2025-06-28 18:19:15,063 DEBUG blivet/MainThread: DiskDevice._set_format: sdd ; type: lvmpv ; current: None ; 2025-06-28 18:19:15,063 INFO blivet/MainThread: registered action: [132] create format lvmpv on disk sdd (id 27) 2025-06-28 18:19:15,066 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__: 2025-06-28 18:19:15,066 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 134 2025-06-28 18:19:15,066 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 137 2025-06-28 18:19:15,070 DEBUG blivet/MainThread: DiskDevice._set_format: sde ; type: None ; current: None ; 2025-06-28 18:19:15,070 INFO blivet/MainThread: registered action: [135] destroy format None on disk sde (id 32) 2025-06-28 18:19:15,073 DEBUG blivet/MainThread: DiskDevice._set_format: sde ; type: lvmpv ; current: None ; 2025-06-28 18:19:15,073 INFO blivet/MainThread: registered action: [136] create format lvmpv on disk sde (id 32) 2025-06-28 18:19:15,076 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__: 2025-06-28 18:19:15,076 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 138 2025-06-28 18:19:15,076 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 141 2025-06-28 18:19:15,079 DEBUG blivet/MainThread: DiskDevice._set_format: sdf ; type: None ; current: None ; 2025-06-28 18:19:15,079 INFO blivet/MainThread: 
registered action: [139] destroy format None on disk sdf (id 37) 2025-06-28 18:19:15,083 DEBUG blivet/MainThread: DiskDevice._set_format: sdf ; type: lvmpv ; current: None ; 2025-06-28 18:19:15,083 INFO blivet/MainThread: registered action: [140] create format lvmpv on disk sdf (id 37) 2025-06-28 18:19:15,083 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 143 2025-06-28 18:19:15,087 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg2 ; parent: sdd ; 2025-06-28 18:19:15,090 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdd ; child: test_vg2 ; kids: 0 ; 2025-06-28 18:19:15,094 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg2 ; parent: sde ; 2025-06-28 18:19:15,097 DEBUG blivet/MainThread: DiskDevice.add_child: name: sde ; child: test_vg2 ; kids: 0 ; 2025-06-28 18:19:15,100 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg2 ; parent: sdf ; 2025-06-28 18:19:15,105 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdf ; child: test_vg2 ; kids: 0 ; 2025-06-28 18:19:15,105 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 144 2025-06-28 18:19:15,108 DEBUG blivet/MainThread: LVMVolumeGroupDevice._set_format: test_vg2 ; type: None ; current: None ; 2025-06-28 18:19:15,109 INFO blivet/MainThread: added lvmvg test_vg2 (id 142) to device tree 2025-06-28 18:19:15,109 INFO blivet/MainThread: registered action: [146] create device lvmvg test_vg2 (id 142) 2025-06-28 18:19:15,112 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg2-lv3 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:15,115 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None 2025-06-28 18:19:15,119 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg2-lv3 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:15,122 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None 2025-06-28 18:19:15,122 
DEBUG blivet/MainThread: failed to resolve '/dev/test_vg2-lv3' 2025-06-28 18:19:15,122 DEBUG blivet/MainThread: test_vg2 size is 8.99 GiB 2025-06-28 18:19:15,123 DEBUG blivet/MainThread: vg test_vg2 has 8.99 GiB free 2025-06-28 18:19:15,123 DEBUG blivet.ansible/MainThread: size: 924 MiB ; -896.1038961038961 2025-06-28 18:19:15,123 DEBUG blivet/MainThread: test_vg2 size is 8.99 GiB 2025-06-28 18:19:15,123 DEBUG blivet/MainThread: vg test_vg2 has 8.99 GiB free 2025-06-28 18:19:15,126 DEBUG blivet/MainThread: XFS.supported: supported: True ; 2025-06-28 18:19:15,126 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 147 2025-06-28 18:19:15,129 DEBUG blivet/MainThread: XFS.supported: supported: True ; 2025-06-28 18:19:15,129 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 149 2025-06-28 18:19:15,133 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg2 ; child: lv3 ; kids: 0 ; 2025-06-28 18:19:15,136 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg2-lv3 ; type: xfs ; current: None ; 2025-06-28 18:19:15,137 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 151 2025-06-28 18:19:15,140 DEBUG blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg2 ; child: lv3 ; kids: 1 ; 2025-06-28 18:19:15,144 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg2 ; child: lv3 ; kids: 0 ; 2025-06-28 18:19:15,147 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg2-lv3 ; type: xfs ; current: None ; 2025-06-28 18:19:15,151 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg2-lv3 ; sysfs_path: ; 2025-06-28 18:19:15,151 DEBUG blivet/MainThread: test_vg2 size is 8.99 GiB 2025-06-28 18:19:15,152 DEBUG blivet/MainThread: vg test_vg2 has 8.99 GiB free 2025-06-28 18:19:15,152 DEBUG blivet/MainThread: Adding test_vg2-lv3/924 MiB to test_vg2 
2025-06-28 18:19:15,152 INFO blivet/MainThread: added lvmlv test_vg2-lv3 (id 148) to device tree
2025-06-28 18:19:15,152 INFO blivet/MainThread: registered action: [153] create device lvmlv test_vg2-lv3 (id 148)
2025-06-28 18:19:15,152 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 155
2025-06-28 18:19:15,156 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg2-lv3 ; type: xfs ; current: xfs ;
2025-06-28 18:19:15,156 INFO blivet/MainThread: registered action: [154] create format xfs filesystem on lvmlv test_vg2-lv3 (id 148)
2025-06-28 18:19:15,159 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg2-lv4 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,162 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:15,165 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg2-lv4 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,168 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None
2025-06-28 18:19:15,168 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg2-lv4'
2025-06-28 18:19:15,169 DEBUG blivet/MainThread: test_vg2 size is 8.99 GiB
2025-06-28 18:19:15,169 DEBUG blivet/MainThread: vg test_vg2 has 8.09 GiB free
2025-06-28 18:19:15,169 DEBUG blivet.ansible/MainThread: size: 1.8 GiB ; -349.0238611713666
2025-06-28 18:19:15,169 DEBUG blivet/MainThread: test_vg2 size is 8.99 GiB
2025-06-28 18:19:15,170 DEBUG blivet/MainThread: vg test_vg2 has 8.09 GiB free
2025-06-28 18:19:15,173 DEBUG blivet/MainThread: XFS.supported: supported: True ;
2025-06-28 18:19:15,173 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 156
2025-06-28 18:19:15,176 DEBUG blivet/MainThread: XFS.supported: supported: True ;
2025-06-28 18:19:15,177 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 158
2025-06-28 18:19:15,180 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg2 ; child: lv4 ; kids: 1 ;
2025-06-28 18:19:15,184 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg2-lv4 ; type: xfs ; current: None ;
2025-06-28 18:19:15,184 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 160
2025-06-28 18:19:15,188 DEBUG blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg2 ; child: lv4 ; kids: 2 ;
2025-06-28 18:19:15,191 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg2 ; child: lv4 ; kids: 1 ;
2025-06-28 18:19:15,195 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg2-lv4 ; type: xfs ; current: None ;
2025-06-28 18:19:15,198 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg2-lv4 ; sysfs_path: ;
2025-06-28 18:19:15,199 DEBUG blivet/MainThread: test_vg2 size is 8.99 GiB
2025-06-28 18:19:15,199 DEBUG blivet/MainThread: vg test_vg2 has 8.09 GiB free
2025-06-28 18:19:15,199 DEBUG blivet/MainThread: Adding test_vg2-lv4/1.8 GiB to test_vg2
2025-06-28 18:19:15,199 INFO blivet/MainThread: added lvmlv test_vg2-lv4 (id 157) to device tree
2025-06-28 18:19:15,199 INFO blivet/MainThread: registered action: [162] create device lvmlv test_vg2-lv4 (id 157)
2025-06-28 18:19:15,199 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 164
2025-06-28 18:19:15,203 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg2-lv4 ; type: xfs ; current: xfs ;
2025-06-28 18:19:15,204 INFO blivet/MainThread: registered action: [163] create format xfs filesystem on lvmlv test_vg2-lv4 (id 157)
2025-06-28 18:19:15,206 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg3 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,210 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:15,213 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg3 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,216 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None
2025-06-28 18:19:15,216 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg3'
2025-06-28 18:19:15,219 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdg ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,222 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sdg (42)
2025-06-28 18:19:15,222 DEBUG blivet/MainThread: resolved 'sdg' to 'sdg' (disk)
2025-06-28 18:19:15,225 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdh ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,228 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sdh (47)
2025-06-28 18:19:15,228 DEBUG blivet/MainThread: resolved 'sdh' to 'sdh' (disk)
2025-06-28 18:19:15,231 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdi ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,234 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sdi (52)
2025-06-28 18:19:15,234 DEBUG blivet/MainThread: resolved 'sdi' to 'sdi' (disk)
2025-06-28 18:19:15,237 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdj ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,240 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sdj (57)
2025-06-28 18:19:15,240 DEBUG blivet/MainThread: resolved 'sdj' to 'sdj' (disk)
2025-06-28 18:19:15,243 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__:
2025-06-28 18:19:15,243 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 165
2025-06-28 18:19:15,243 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 168
2025-06-28 18:19:15,246 DEBUG blivet/MainThread: DiskDevice._set_format: sdg ; type: None ; current: None ;
2025-06-28 18:19:15,246 INFO blivet/MainThread: registered action: [166] destroy format None on disk sdg (id 42)
2025-06-28 18:19:15,249 DEBUG blivet/MainThread: DiskDevice._set_format: sdg ; type: lvmpv ; current: None ;
2025-06-28 18:19:15,250 INFO blivet/MainThread: registered action: [167] create format lvmpv on disk sdg (id 42)
2025-06-28 18:19:15,252 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__:
2025-06-28 18:19:15,252 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 169
2025-06-28 18:19:15,253 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 172
2025-06-28 18:19:15,255 DEBUG blivet/MainThread: DiskDevice._set_format: sdh ; type: None ; current: None ;
2025-06-28 18:19:15,256 INFO blivet/MainThread: registered action: [170] destroy format None on disk sdh (id 47)
2025-06-28 18:19:15,259 DEBUG blivet/MainThread: DiskDevice._set_format: sdh ; type: lvmpv ; current: None ;
2025-06-28 18:19:15,259 INFO blivet/MainThread: registered action: [171] create format lvmpv on disk sdh (id 47)
2025-06-28 18:19:15,262 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__:
2025-06-28 18:19:15,262 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 173
2025-06-28 18:19:15,262 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 176
2025-06-28 18:19:15,265 DEBUG blivet/MainThread: DiskDevice._set_format: sdi ; type: None ; current: None ;
2025-06-28 18:19:15,265 INFO blivet/MainThread: registered action: [174] destroy format None on disk sdi (id 52)
2025-06-28 18:19:15,268 DEBUG blivet/MainThread: DiskDevice._set_format: sdi ; type: lvmpv ; current: None ;
2025-06-28 18:19:15,268 INFO blivet/MainThread: registered action: [175] create format lvmpv on disk sdi (id 52)
2025-06-28 18:19:15,271 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__:
2025-06-28 18:19:15,271 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 177
2025-06-28 18:19:15,271 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 180
2025-06-28 18:19:15,275 DEBUG blivet/MainThread: DiskDevice._set_format: sdj ; type: None ; current: None ;
2025-06-28 18:19:15,275 INFO blivet/MainThread: registered action: [178] destroy format None on disk sdj (id 57)
2025-06-28 18:19:15,278 DEBUG blivet/MainThread: DiskDevice._set_format: sdj ; type: lvmpv ; current: None ;
2025-06-28 18:19:15,278 INFO blivet/MainThread: registered action: [179] create format lvmpv on disk sdj (id 57)
2025-06-28 18:19:15,279 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 182
2025-06-28 18:19:15,282 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg3 ; parent: sdg ;
2025-06-28 18:19:15,286 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdg ; child: test_vg3 ; kids: 0 ;
2025-06-28 18:19:15,289 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg3 ; parent: sdh ;
2025-06-28 18:19:15,293 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdh ; child: test_vg3 ; kids: 0 ;
2025-06-28 18:19:15,296 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg3 ; parent: sdi ;
2025-06-28 18:19:15,299 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdi ; child: test_vg3 ; kids: 0 ;
2025-06-28 18:19:15,303 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg3 ; parent: sdj ;
2025-06-28 18:19:15,307 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdj ; child: test_vg3 ; kids: 0 ;
2025-06-28 18:19:15,307 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 183
2025-06-28 18:19:15,310 DEBUG blivet/MainThread: LVMVolumeGroupDevice._set_format: test_vg3 ; type: None ; current: None ;
2025-06-28 18:19:15,310 INFO blivet/MainThread: added lvmvg test_vg3 (id 181) to device tree
2025-06-28 18:19:15,310 INFO blivet/MainThread: registered action: [185] create device lvmvg test_vg3 (id 181)
2025-06-28 18:19:15,314 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg3-lv5 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,317 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:15,320 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg3-lv5 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,324 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None
2025-06-28 18:19:15,324 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg3-lv5'
2025-06-28 18:19:15,324 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB
2025-06-28 18:19:15,324 DEBUG blivet/MainThread: vg test_vg3 has 11.98 GiB free
2025-06-28 18:19:15,324 DEBUG blivet.ansible/MainThread: size: 3.6 GiB ; -233.11617806731815
2025-06-28 18:19:15,325 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB
2025-06-28 18:19:15,325 DEBUG blivet/MainThread: vg test_vg3 has 11.98 GiB free
2025-06-28 18:19:15,329 DEBUG blivet/MainThread: XFS.supported: supported: True ;
2025-06-28 18:19:15,329 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 186
2025-06-28 18:19:15,332 DEBUG blivet/MainThread: XFS.supported: supported: True ;
2025-06-28 18:19:15,332 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 188
2025-06-28 18:19:15,336 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv5 ; kids: 0 ;
2025-06-28 18:19:15,339 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv5 ; type: xfs ; current: None ;
2025-06-28 18:19:15,339 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 190
2025-06-28 18:19:15,343 DEBUG blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg3 ; child: lv5 ; kids: 1 ;
2025-06-28 18:19:15,347 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv5 ; kids: 0 ;
2025-06-28 18:19:15,350 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv5 ; type: xfs ; current: None ;
2025-06-28 18:19:15,354 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg3-lv5 ; sysfs_path: ;
2025-06-28 18:19:15,354 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB
2025-06-28 18:19:15,354 DEBUG blivet/MainThread: vg test_vg3 has 11.98 GiB free
2025-06-28 18:19:15,354 DEBUG blivet/MainThread: Adding test_vg3-lv5/3.6 GiB to test_vg3
2025-06-28 18:19:15,354 INFO blivet/MainThread: added lvmlv test_vg3-lv5 (id 187) to device tree
2025-06-28 18:19:15,354 INFO blivet/MainThread: registered action: [192] create device lvmlv test_vg3-lv5 (id 187)
2025-06-28 18:19:15,355 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 194
2025-06-28 18:19:15,359 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv5 ; type: xfs ; current: xfs ;
2025-06-28 18:19:15,359 INFO blivet/MainThread: registered action: [193] create format xfs filesystem on lvmlv test_vg3-lv5 (id 187)
2025-06-28 18:19:15,362 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg3-lv6 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,365 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:15,368 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg3-lv6 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,371 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None
2025-06-28 18:19:15,371 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg3-lv6'
2025-06-28 18:19:15,372 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB
2025-06-28 18:19:15,372 DEBUG blivet/MainThread: vg test_vg3 has 8.39 GiB free
2025-06-28 18:19:15,372 DEBUG blivet.ansible/MainThread: size: 3 GiB ; -179.9217731421121
2025-06-28 18:19:15,372 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB
2025-06-28 18:19:15,373 DEBUG blivet/MainThread: vg test_vg3 has 8.39 GiB free
2025-06-28 18:19:15,376 DEBUG blivet/MainThread: XFS.supported: supported: True ;
2025-06-28 18:19:15,376 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 195
2025-06-28 18:19:15,379 DEBUG blivet/MainThread: XFS.supported: supported: True ;
2025-06-28 18:19:15,379 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 197
2025-06-28 18:19:15,384 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv6 ; kids: 1 ;
2025-06-28 18:19:15,387 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv6 ; type: xfs ; current: None ;
2025-06-28 18:19:15,388 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 199
2025-06-28 18:19:15,391 DEBUG blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg3 ; child: lv6 ; kids: 2 ;
2025-06-28 18:19:15,394 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv6 ; kids: 1 ;
2025-06-28 18:19:15,399 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv6 ; type: xfs ; current: None ;
2025-06-28 18:19:15,402 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg3-lv6 ; sysfs_path: ;
2025-06-28 18:19:15,402 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB
2025-06-28 18:19:15,403 DEBUG blivet/MainThread: vg test_vg3 has 8.39 GiB free
2025-06-28 18:19:15,403 DEBUG blivet/MainThread: Adding test_vg3-lv6/3 GiB to test_vg3
2025-06-28 18:19:15,403 INFO blivet/MainThread: added lvmlv test_vg3-lv6 (id 196) to device tree
2025-06-28 18:19:15,403 INFO blivet/MainThread: registered action: [201] create device lvmlv test_vg3-lv6 (id 196)
2025-06-28 18:19:15,403 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 203
2025-06-28 18:19:15,407 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv6 ; type: xfs ; current: xfs ;
2025-06-28 18:19:15,407 INFO blivet/MainThread: registered action: [202] create format xfs filesystem on lvmlv test_vg3-lv6 (id 196)
2025-06-28 18:19:15,410 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg3-lv7 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,413 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:15,416 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg3-lv7 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,420 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None
2025-06-28 18:19:15,420 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg3-lv7'
2025-06-28 18:19:15,420 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB
2025-06-28 18:19:15,421 DEBUG blivet/MainThread: vg test_vg3 has 5.39 GiB free
2025-06-28 18:19:15,421 DEBUG blivet.ansible/MainThread: size: 1.2 GiB ; -349.5114006514658
2025-06-28 18:19:15,421 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB
2025-06-28 18:19:15,421 DEBUG blivet/MainThread: vg test_vg3 has 5.39 GiB free
2025-06-28 18:19:15,424 DEBUG blivet/MainThread: XFS.supported: supported: True ;
2025-06-28 18:19:15,425 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 204
2025-06-28 18:19:15,427 DEBUG blivet/MainThread: XFS.supported: supported: True ;
2025-06-28 18:19:15,428 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 206
2025-06-28 18:19:15,431 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv7 ; kids: 2 ;
2025-06-28 18:19:15,436 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv7 ; type: xfs ; current: None ;
2025-06-28 18:19:15,436 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 208
2025-06-28 18:19:15,440 DEBUG blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg3 ; child: lv7 ; kids: 3 ;
2025-06-28 18:19:15,443 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv7 ; kids: 2 ;
2025-06-28 18:19:15,446 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv7 ; type: xfs ; current: None ;
2025-06-28 18:19:15,450 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg3-lv7 ; sysfs_path: ;
2025-06-28 18:19:15,450 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB
2025-06-28 18:19:15,451 DEBUG blivet/MainThread: vg test_vg3 has 5.39 GiB free
2025-06-28 18:19:15,451 DEBUG blivet/MainThread: Adding test_vg3-lv7/1.2 GiB to test_vg3
2025-06-28 18:19:15,451 INFO blivet/MainThread: added lvmlv test_vg3-lv7 (id 205) to device tree
2025-06-28 18:19:15,451 INFO blivet/MainThread: registered action: [210] create device lvmlv test_vg3-lv7 (id 205)
2025-06-28 18:19:15,451 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 212
2025-06-28 18:19:15,455 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv7 ; type: xfs ; current: xfs ;
2025-06-28 18:19:15,455 INFO blivet/MainThread: registered action: [211] create format xfs filesystem on lvmlv test_vg3-lv7 (id 205)
2025-06-28 18:19:15,459 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg3-lv8 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,462 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2025-06-28 18:19:15,465 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg3-lv8 ; incomplete: False ; hidden: False ;
2025-06-28 18:19:15,468 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None
2025-06-28 18:19:15,468 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg3-lv8'
2025-06-28 18:19:15,468 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB
2025-06-28 18:19:15,469 DEBUG blivet/MainThread: vg test_vg3 has 4.19 GiB free
2025-06-28 18:19:15,469 DEBUG blivet.ansible/MainThread: size: 1.2 GiB ; -249.5114006514658
2025-06-28 18:19:15,469 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB
2025-06-28 18:19:15,469 DEBUG blivet/MainThread: vg test_vg3 has 4.19 GiB free
2025-06-28 18:19:15,473 DEBUG blivet/MainThread: XFS.supported: supported: True ;
2025-06-28 18:19:15,473 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 213
2025-06-28 18:19:15,476 DEBUG blivet/MainThread: XFS.supported: supported: True ;
2025-06-28 18:19:15,476 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 215
2025-06-28 18:19:15,480 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv8 ; kids: 3 ;
2025-06-28 18:19:15,483 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv8 ; type: xfs ; current: None ;
2025-06-28 18:19:15,483 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 217
2025-06-28 18:19:15,488 DEBUG blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg3 ; child: lv8 ; kids: 4 ;
2025-06-28 18:19:15,491 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv8 ; kids: 3 ;
2025-06-28 18:19:15,495 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv8 ; type: xfs ; current: None ;
2025-06-28 18:19:15,499 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg3-lv8 ; sysfs_path: ;
2025-06-28 18:19:15,499 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB
2025-06-28 18:19:15,500 DEBUG blivet/MainThread: vg test_vg3 has 4.19 GiB free
2025-06-28 18:19:15,500 DEBUG blivet/MainThread: Adding test_vg3-lv8/1.2 GiB to test_vg3
2025-06-28 18:19:15,500 INFO blivet/MainThread: added lvmlv test_vg3-lv8 (id 214) to device tree
2025-06-28 18:19:15,500 INFO blivet/MainThread: registered action: [219] create device lvmlv test_vg3-lv8 (id 214)
2025-06-28 18:19:15,500 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 221
2025-06-28 18:19:15,504 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv8 ; type: xfs ; current: xfs ;
2025-06-28 18:19:15,504 INFO blivet/MainThread: registered action: [220] create format xfs filesystem on lvmlv test_vg3-lv8 (id 214)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [96] destroy format None on disk sda (id 2)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [97] create format lvmpv on disk sda (id 2)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [100] destroy format None on disk sdb (id 7)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [101] create format lvmpv on disk sdb (id 7)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [104] destroy format None on disk sdc (id 22)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [105] create format lvmpv on disk sdc (id 22)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [111] create device lvmvg test_vg1 (id 107)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [118] create device lvmlv test_vg1-lv1 (id 113)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [119] create format xfs filesystem on lvmlv test_vg1-lv1 (id 113)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [127] create device lvmlv test_vg1-lv2 (id 122)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [128] create format xfs filesystem on lvmlv test_vg1-lv2 (id 122)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [131] destroy format None on disk sdd (id 27)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [132] create format lvmpv on disk sdd (id 27)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [135] destroy format None on disk sde (id 32)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [136] create format lvmpv on disk sde (id 32)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [139] destroy format None on disk sdf (id 37)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [140] create format lvmpv on disk sdf (id 37)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [146] create device lvmvg test_vg2 (id 142)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [153] create device lvmlv test_vg2-lv3 (id 148)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [154] create format xfs filesystem on lvmlv test_vg2-lv3 (id 148)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [162] create device lvmlv test_vg2-lv4 (id 157)
2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [163] create format xfs filesystem on lvmlv test_vg2-lv4 (id 157)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [166] destroy format None on disk sdg (id 42)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [167] create format lvmpv on disk sdg (id 42)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [170] destroy format None on disk sdh (id 47)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [171] create format lvmpv on disk sdh (id 47)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [174] destroy format None on disk sdi (id 52)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [175] create format lvmpv on disk sdi (id 52)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [178] destroy format None on disk sdj (id 57)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [179] create format lvmpv on disk sdj (id 57)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [185] create device lvmvg test_vg3 (id 181)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [192] create device lvmlv test_vg3-lv5 (id 187)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [193] create format xfs filesystem on lvmlv test_vg3-lv5 (id 187)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [201] create device lvmlv test_vg3-lv6 (id 196)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [202] create format xfs filesystem on lvmlv test_vg3-lv6 (id 196)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [210] create device lvmlv test_vg3-lv7 (id 205)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [211] create format xfs filesystem on lvmlv test_vg3-lv7 (id 205)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [219] create device lvmlv test_vg3-lv8 (id 214)
2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [220] create format xfs filesystem on lvmlv test_vg3-lv8 (id 214)
2025-06-28 18:19:15,506 INFO blivet/MainThread: pruning action queue...
2025-06-28 18:19:15,507 INFO blivet/MainThread: resetting parted disks...
2025-06-28 18:19:15,511 DEBUG blivet/MainThread: DiskLabel.reset_parted_disk: device: /dev/xvda ;
2025-06-28 18:19:15,513 DEBUG blivet/MainThread: DiskLabel.reset_parted_disk: device: /dev/xvda ;
2025-06-28 18:19:15,517 DEBUG blivet/MainThread: PartitionDevice.pre_commit_fixup: xvda1 ;
2025-06-28 18:19:15,517 DEBUG blivet/MainThread: sector-based lookup found partition xvda1
2025-06-28 18:19:15,520 DEBUG blivet/MainThread: PartitionDevice._set_parted_partition: xvda1 ;
2025-06-28 18:19:15,520 DEBUG blivet/MainThread: device xvda1 new parted_partition parted.Partition instance -- disk: fileSystem: None number: 1 path: /dev/xvda1 type: 0 name: active: True busy: False geometry: PedPartition: <_ped.Partition object at 0x7fbad22336a0>
2025-06-28 18:19:15,524 DEBUG blivet/MainThread: PartitionDevice.pre_commit_fixup: xvda2 ;
2025-06-28 18:19:15,524 DEBUG blivet/MainThread: sector-based lookup found partition xvda2
2025-06-28 18:19:15,527 DEBUG blivet/MainThread: PartitionDevice._set_parted_partition: xvda2 ;
2025-06-28 18:19:15,527 DEBUG blivet/MainThread: device xvda2 new parted_partition parted.Partition instance -- disk: fileSystem: number: 2 path: /dev/xvda2 type: 0 name: active: True busy: True geometry: PedPartition: <_ped.Partition object at 0x7fbad2232de0>
2025-06-28 18:19:15,527 INFO blivet/MainThread: sorting actions...
2025-06-28 18:19:15,549 DEBUG blivet/MainThread: action: [178] destroy format None on disk sdj (id 57)
2025-06-28 18:19:15,549 DEBUG blivet/MainThread: action: [174] destroy format None on disk sdi (id 52)
2025-06-28 18:19:15,549 DEBUG blivet/MainThread: action: [170] destroy format None on disk sdh (id 47)
2025-06-28 18:19:15,549 DEBUG blivet/MainThread: action: [166] destroy format None on disk sdg (id 42)
2025-06-28 18:19:15,549 DEBUG blivet/MainThread: action: [139] destroy format None on disk sdf (id 37)
2025-06-28 18:19:15,550 DEBUG blivet/MainThread: action: [135] destroy format None on disk sde (id 32)
2025-06-28 18:19:15,550 DEBUG blivet/MainThread: action: [131] destroy format None on disk sdd (id 27)
2025-06-28 18:19:15,550 DEBUG blivet/MainThread: action: [104] destroy format None on disk sdc (id 22)
2025-06-28 18:19:15,550 DEBUG blivet/MainThread: action: [100] destroy format None on disk sdb (id 7)
2025-06-28 18:19:15,551 DEBUG blivet/MainThread: action: [96] destroy format None on disk sda (id 2)
2025-06-28 18:19:15,551 DEBUG blivet/MainThread: action: [179] create format lvmpv on disk sdj (id 57)
2025-06-28 18:19:15,551 DEBUG blivet/MainThread: action: [175] create format lvmpv on disk sdi (id 52)
2025-06-28 18:19:15,551 DEBUG blivet/MainThread: action: [171] create format lvmpv on disk sdh (id 47)
2025-06-28 18:19:15,551 DEBUG blivet/MainThread: action: [167] create format lvmpv on disk sdg (id 42)
2025-06-28 18:19:15,552 DEBUG blivet/MainThread: action: [185] create device lvmvg test_vg3 (id 181)
2025-06-28 18:19:15,552 DEBUG blivet/MainThread: action: [219] create device lvmlv test_vg3-lv8 (id 214)
2025-06-28 18:19:15,552 DEBUG blivet/MainThread: action: [220] create format xfs filesystem on lvmlv test_vg3-lv8 (id 214)
2025-06-28 18:19:15,552 DEBUG blivet/MainThread: action: [210] create device lvmlv test_vg3-lv7 (id 205)
2025-06-28 18:19:15,553 DEBUG blivet/MainThread: action: [211] create format xfs filesystem on lvmlv test_vg3-lv7 (id 205)
2025-06-28 18:19:15,553 DEBUG blivet/MainThread: action: [201] create device lvmlv test_vg3-lv6 (id 196)
2025-06-28 18:19:15,553 DEBUG blivet/MainThread: action: [202] create format xfs filesystem on lvmlv test_vg3-lv6 (id 196)
2025-06-28 18:19:15,553 DEBUG blivet/MainThread: action: [192] create device lvmlv test_vg3-lv5 (id 187)
2025-06-28 18:19:15,554 DEBUG blivet/MainThread: action: [193] create format xfs filesystem on lvmlv test_vg3-lv5 (id 187)
2025-06-28 18:19:15,554 DEBUG blivet/MainThread: action: [140] create format lvmpv on disk sdf (id 37)
2025-06-28 18:19:15,554 DEBUG blivet/MainThread: action: [136] create format lvmpv on disk sde (id 32)
2025-06-28 18:19:15,554 DEBUG blivet/MainThread: action: [132] create format lvmpv on disk sdd (id 27)
2025-06-28 18:19:15,555 DEBUG blivet/MainThread: action: [146] create device lvmvg test_vg2 (id 142)
2025-06-28 18:19:15,555 DEBUG blivet/MainThread: action: [162] create device lvmlv test_vg2-lv4 (id 157)
2025-06-28 18:19:15,555 DEBUG blivet/MainThread: action: [163] create format xfs filesystem on lvmlv test_vg2-lv4 (id 157)
2025-06-28 18:19:15,555 DEBUG blivet/MainThread: action: [153] create device lvmlv test_vg2-lv3 (id 148)
2025-06-28 18:19:15,556 DEBUG blivet/MainThread: action: [154] create format xfs filesystem on lvmlv test_vg2-lv3 (id 148)
2025-06-28 18:19:15,556 DEBUG blivet/MainThread: action: [105] create format lvmpv on disk sdc (id 22)
2025-06-28 18:19:15,556 DEBUG blivet/MainThread: action: [101] create format lvmpv on disk sdb (id 7)
2025-06-28 18:19:15,556 DEBUG blivet/MainThread: action: [97] create format lvmpv on disk sda (id 2)
2025-06-28 18:19:15,557 DEBUG blivet/MainThread: action: [111] create device lvmvg test_vg1 (id 107)
2025-06-28 18:19:15,557 DEBUG blivet/MainThread: action: [127] create device lvmlv test_vg1-lv2 (id 122)
2025-06-28 18:19:15,557 DEBUG blivet/MainThread: action: [128] create format xfs filesystem on lvmlv test_vg1-lv2 (id 122)
2025-06-28 18:19:15,557 DEBUG blivet/MainThread: action: [118] create device lvmlv test_vg1-lv1 (id 113)
2025-06-28 18:19:15,557 DEBUG blivet/MainThread: action: [119] create format xfs filesystem on lvmlv test_vg1-lv1 (id 113)
2025-06-28 18:19:15,558 INFO blivet/MainThread: executing action: [178] destroy format None on disk sdj (id 57)
2025-06-28 18:19:15,561 DEBUG blivet/MainThread: DiskDevice.setup: sdj ; orig: True ; status: True ; controllable: True ;
2025-06-28 18:19:15,564 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdj ; type: None ; status: False ;
2025-06-28 18:19:15,568 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,593 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,594 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,606 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,606 INFO blivet/MainThread: executing action: [174] destroy format None on disk sdi (id 52)
2025-06-28 18:19:15,610 DEBUG blivet/MainThread: DiskDevice.setup: sdi ; orig: True ; status: True ; controllable: True ;
2025-06-28 18:19:15,613 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdi ; type: None ; status: False ;
2025-06-28 18:19:15,616 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,639 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,640 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,651 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,651 INFO blivet/MainThread: executing action: [170] destroy format None on disk sdh (id 47)
2025-06-28 18:19:15,655 DEBUG blivet/MainThread: DiskDevice.setup: sdh ; orig: True ; status: True ; controllable: True ;
2025-06-28 18:19:15,658 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdh ; type: None ; status: False ;
2025-06-28 18:19:15,661 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,687 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,688 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,699 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,699 INFO blivet/MainThread: executing action: [166] destroy format None on disk sdg (id 42)
2025-06-28 18:19:15,704 DEBUG blivet/MainThread: DiskDevice.setup: sdg ; orig: True ; status: True ; controllable: True ;
2025-06-28 18:19:15,707 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdg ; type: None ; status: False ;
2025-06-28 18:19:15,709 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,740 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,740 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,751 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,751 INFO blivet/MainThread: executing action: [139] destroy format None on disk sdf (id 37)
2025-06-28 18:19:15,755 DEBUG blivet/MainThread: DiskDevice.setup: sdf ; orig: True ; status: True ; controllable: True ;
2025-06-28 18:19:15,758 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdf ; type: None ; status: False ;
2025-06-28 18:19:15,761 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,783 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,783 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,796 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,796 INFO blivet/MainThread: executing action: [135] destroy format None on disk sde (id 32)
2025-06-28 18:19:15,800 DEBUG blivet/MainThread: DiskDevice.setup: sde ; orig: True ; status: True ; controllable: True ;
2025-06-28 18:19:15,803 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sde ; type: None ; status: False ;
2025-06-28 18:19:15,806 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,828 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,828 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,839 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,840 INFO blivet/MainThread: executing action: [131] destroy format None on disk sdd (id 27)
2025-06-28 18:19:15,844 DEBUG blivet/MainThread: DiskDevice.setup: sdd ; orig: True ; status: True ; controllable: True ;
2025-06-28 18:19:15,848 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdd ; type: None ; status: False ;
2025-06-28 18:19:15,851 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,872 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,872 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,883 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,884 INFO blivet/MainThread: executing action: [104] destroy format None on disk sdc (id 22)
2025-06-28 18:19:15,888 DEBUG blivet/MainThread: DiskDevice.setup: sdc ; orig: True ; status: True ; controllable: True ;
2025-06-28 18:19:15,891 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdc ; type: None ; status: False ;
2025-06-28 18:19:15,894 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,918 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,919 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,929 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,930 INFO blivet/MainThread: executing action: [100] destroy format None on disk sdb (id 7)
2025-06-28 18:19:15,934 DEBUG blivet/MainThread: DiskDevice.setup: sdb ; orig: True ; status: True ; controllable: True ;
2025-06-28 18:19:15,937 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdb ; type: None ; status: False ;
2025-06-28 18:19:15,940 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,964 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,965 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:15,975 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:15,976 INFO blivet/MainThread: executing action: [96] destroy format None on disk sda (id 2)
2025-06-28 18:19:15,980 DEBUG blivet/MainThread: DiskDevice.setup: sda ; orig: True ; status: True ; controllable: True ;
2025-06-28 18:19:15,983 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sda ; type: None ; status: False ;
2025-06-28 18:19:15,986 INFO program/MainThread: Running... udevadm settle --timeout=300
2025-06-28 18:19:16,011 DEBUG program/MainThread: Return code: 0
2025-06-28 18:19:16,012 INFO program/MainThread: Running...
udevadm settle --timeout=300 2025-06-28 18:19:16,024 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,024 INFO blivet/MainThread: executing action: [179] create format lvmpv on disk sdj (id 57) 2025-06-28 18:19:16,028 DEBUG blivet/MainThread: DiskDevice.setup: sdj ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,031 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdj ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,035 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdj ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,035 DEBUG blivet/MainThread: lvm filter: device /dev/sdj added to the list of allowed devices 2025-06-28 18:19:16,035 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:16,035 INFO program/MainThread: Running [6] lvm pvcreate /dev/sdj --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ... 2025-06-28 18:19:16,076 INFO program/MainThread: stdout[6]: Physical volume "/dev/sdj" successfully created. Creating devices file /etc/lvm/devices/system.devices 2025-06-28 18:19:16,077 INFO program/MainThread: stderr[6]: 2025-06-28 18:19:16,077 INFO program/MainThread: ...done [6] (exit code: 0) 2025-06-28 18:19:16,077 INFO program/MainThread: Running [7] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:16,088 INFO program/MainThread: stdout[7]: use_devicesfile=1 2025-06-28 18:19:16,088 INFO program/MainThread: stderr[7]: 2025-06-28 18:19:16,088 INFO program/MainThread: ...done [7] (exit code: 0) 2025-06-28 18:19:16,088 INFO program/MainThread: Running [8] lvmdevices --adddev /dev/sdj ... 
2025-06-28 18:19:16,116 INFO program/MainThread: stdout[8]: 2025-06-28 18:19:16,117 INFO program/MainThread: stderr[8]: 2025-06-28 18:19:16,117 INFO program/MainThread: ...done [8] (exit code: 0) 2025-06-28 18:19:16,117 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdj 2025-06-28 18:19:16,117 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:16,127 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,132 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdj ; status: True ; 2025-06-28 18:19:16,132 DEBUG blivet/MainThread: sdj sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj 2025-06-28 18:19:16,133 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:16,133 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdj 2025-06-28 18:19:16,141 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,141 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:16,156 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,157 INFO blivet/MainThread: executing action: [175] create format lvmpv on disk sdi (id 52) 2025-06-28 18:19:16,161 DEBUG blivet/MainThread: DiskDevice.setup: sdi ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,164 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdi ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,167 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdi ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,168 DEBUG blivet/MainThread: lvm filter: device /dev/sdi added to the list of allowed devices 2025-06-28 18:19:16,168 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:16,168 INFO program/MainThread: Running [9] lvm pvcreate /dev/sdi --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ... 2025-06-28 18:19:16,203 INFO program/MainThread: stdout[9]: Physical volume "/dev/sdi" successfully created. 2025-06-28 18:19:16,203 INFO program/MainThread: stderr[9]: 2025-06-28 18:19:16,204 INFO program/MainThread: ...done [9] (exit code: 0) 2025-06-28 18:19:16,204 INFO program/MainThread: Running [10] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:16,215 INFO program/MainThread: stdout[10]: use_devicesfile=1 2025-06-28 18:19:16,215 INFO program/MainThread: stderr[10]: 2025-06-28 18:19:16,215 INFO program/MainThread: ...done [10] (exit code: 0) 2025-06-28 18:19:16,215 INFO program/MainThread: Running [11] lvmdevices --adddev /dev/sdi ... 
2025-06-28 18:19:16,245 INFO program/MainThread: stdout[11]: 2025-06-28 18:19:16,245 INFO program/MainThread: stderr[11]: 2025-06-28 18:19:16,245 INFO program/MainThread: ...done [11] (exit code: 0) 2025-06-28 18:19:16,245 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdi 2025-06-28 18:19:16,246 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:16,256 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,261 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdi ; status: True ; 2025-06-28 18:19:16,261 DEBUG blivet/MainThread: sdi sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi 2025-06-28 18:19:16,262 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:16,262 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdi 2025-06-28 18:19:16,270 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,270 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:16,285 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,286 INFO blivet/MainThread: executing action: [171] create format lvmpv on disk sdh (id 47) 2025-06-28 18:19:16,290 DEBUG blivet/MainThread: DiskDevice.setup: sdh ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,293 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdh ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,297 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdh ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,297 DEBUG blivet/MainThread: lvm filter: device /dev/sdh added to the list of allowed devices 2025-06-28 18:19:16,298 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:16,298 INFO program/MainThread: Running [12] lvm pvcreate /dev/sdh --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ... 2025-06-28 18:19:16,333 INFO program/MainThread: stdout[12]: Physical volume "/dev/sdh" successfully created. 2025-06-28 18:19:16,334 INFO program/MainThread: stderr[12]: 2025-06-28 18:19:16,334 INFO program/MainThread: ...done [12] (exit code: 0) 2025-06-28 18:19:16,334 INFO program/MainThread: Running [13] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:16,345 INFO program/MainThread: stdout[13]: use_devicesfile=1 2025-06-28 18:19:16,346 INFO program/MainThread: stderr[13]: 2025-06-28 18:19:16,346 INFO program/MainThread: ...done [13] (exit code: 0) 2025-06-28 18:19:16,346 INFO program/MainThread: Running [14] lvmdevices --adddev /dev/sdh ... 
2025-06-28 18:19:16,375 INFO program/MainThread: stdout[14]: 2025-06-28 18:19:16,375 INFO program/MainThread: stderr[14]: 2025-06-28 18:19:16,375 INFO program/MainThread: ...done [14] (exit code: 0) 2025-06-28 18:19:16,375 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdh 2025-06-28 18:19:16,376 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:16,386 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,391 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdh ; status: True ; 2025-06-28 18:19:16,391 DEBUG blivet/MainThread: sdh sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh 2025-06-28 18:19:16,392 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:16,392 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdh 2025-06-28 18:19:16,400 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,400 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:16,416 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,417 INFO blivet/MainThread: executing action: [167] create format lvmpv on disk sdg (id 42) 2025-06-28 18:19:16,421 DEBUG blivet/MainThread: DiskDevice.setup: sdg ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,424 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdg ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,427 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdg ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,427 DEBUG blivet/MainThread: lvm filter: device /dev/sdg added to the list of allowed devices 2025-06-28 18:19:16,427 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:16,427 INFO program/MainThread: Running [15] lvm pvcreate /dev/sdg --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ... 2025-06-28 18:19:16,463 INFO program/MainThread: stdout[15]: Physical volume "/dev/sdg" successfully created. 2025-06-28 18:19:16,463 INFO program/MainThread: stderr[15]: 2025-06-28 18:19:16,463 INFO program/MainThread: ...done [15] (exit code: 0) 2025-06-28 18:19:16,463 INFO program/MainThread: Running [16] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:16,474 INFO program/MainThread: stdout[16]: use_devicesfile=1 2025-06-28 18:19:16,474 INFO program/MainThread: stderr[16]: 2025-06-28 18:19:16,474 INFO program/MainThread: ...done [16] (exit code: 0) 2025-06-28 18:19:16,474 INFO program/MainThread: Running [17] lvmdevices --adddev /dev/sdg ... 
2025-06-28 18:19:16,510 INFO program/MainThread: stdout[17]: 2025-06-28 18:19:16,510 INFO program/MainThread: stderr[17]: 2025-06-28 18:19:16,510 INFO program/MainThread: ...done [17] (exit code: 0) 2025-06-28 18:19:16,510 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdg 2025-06-28 18:19:16,511 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:16,522 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,527 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdg ; status: True ; 2025-06-28 18:19:16,527 DEBUG blivet/MainThread: sdg sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg 2025-06-28 18:19:16,528 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:16,529 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdg 2025-06-28 18:19:16,537 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,537 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:16,553 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,554 INFO blivet/MainThread: executing action: [185] create device lvmvg test_vg3 (id 181) 2025-06-28 18:19:16,558 DEBUG blivet/MainThread: LVMVolumeGroupDevice.create: test_vg3 ; status: False ; 2025-06-28 18:19:16,561 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg3 ; orig: False ; 2025-06-28 18:19:16,564 DEBUG blivet/MainThread: DiskDevice.setup: sdg ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,567 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdg ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,570 DEBUG blivet/MainThread: DiskDevice.setup: sdh ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,574 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdh ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,577 DEBUG blivet/MainThread: DiskDevice.setup: sdi ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,580 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdi ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,583 DEBUG blivet/MainThread: DiskDevice.setup: sdj ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,586 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdj ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,589 DEBUG blivet/MainThread: LVMVolumeGroupDevice._create: test_vg3 ; status: False ; 2025-06-28 18:19:16,589 INFO program/MainThread: Running [18] lvm vgcreate -s 4096K test_vg3 /dev/sdg /dev/sdh /dev/sdi /dev/sdj --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 
2025-06-28 18:19:16,655 INFO program/MainThread: stdout[18]: Volume group "test_vg3" successfully created 2025-06-28 18:19:16,655 INFO program/MainThread: stderr[18]: 2025-06-28 18:19:16,655 INFO program/MainThread: ...done [18] (exit code: 0) 2025-06-28 18:19:16,664 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg3 ; orig: False ; status: False ; controllable: True ; 2025-06-28 18:19:16,670 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg3 ; orig: False ; 2025-06-28 18:19:16,677 DEBUG blivet/MainThread: DiskDevice.setup: sdg ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,681 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdg ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,684 DEBUG blivet/MainThread: DiskDevice.setup: sdh ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,688 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdh ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,691 DEBUG blivet/MainThread: DiskDevice.setup: sdi ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,694 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdi ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,698 DEBUG blivet/MainThread: DiskDevice.setup: sdj ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,702 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdj ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,702 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:16,767 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,771 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg3 ; status: False ; 2025-06-28 18:19:16,775 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg3 ; status: False ; 2025-06-28 18:19:16,775 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:16,787 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,788 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=test_vg3 2025-06-28 18:19:16,796 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,796 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:16,806 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,807 INFO blivet/MainThread: executing action: [219] create device lvmlv test_vg3-lv8 (id 214) 2025-06-28 18:19:16,811 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg3-lv8 ; status: False ; 2025-06-28 18:19:16,814 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg3-lv8 ; orig: False ; 2025-06-28 18:19:16,817 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg3 ; orig: False ; status: False ; controllable: True ; 2025-06-28 18:19:16,821 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg3 ; orig: False ; 2025-06-28 18:19:16,824 DEBUG blivet/MainThread: DiskDevice.setup: sdg ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,828 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdg ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,831 DEBUG blivet/MainThread: DiskDevice.setup: sdh ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,835 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdh ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,838 DEBUG blivet/MainThread: DiskDevice.setup: sdi ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,842 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdi ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,845 DEBUG blivet/MainThread: DiskDevice.setup: sdj ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,848 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: 
/dev/sdj ; type: lvmpv ; status: False ; 2025-06-28 18:19:16,848 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:16,857 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:16,862 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg3 ; status: False ; 2025-06-28 18:19:16,862 INFO program/MainThread: Running [19] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:16,894 INFO program/MainThread: stdout[19]: LVM2_VG_NAME=test_vg3 LVM2_VG_UUID=ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM LVM2_VG_SIZE=12851347456 LVM2_VG_FREE=12851347456 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=3064 LVM2_VG_FREE_COUNT=3064 LVM2_PV_COUNT=4 LVM2_VG_EXPORTED= LVM2_VG_TAGS= 2025-06-28 18:19:16,894 INFO program/MainThread: stderr[19]: 2025-06-28 18:19:16,894 INFO program/MainThread: ...done [19] (exit code: 0) 2025-06-28 18:19:16,899 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg3-lv8 ; status: False ; 2025-06-28 18:19:16,899 INFO program/MainThread: Running [20] lvm lvcreate -n lv8 -L 1257472K -y --type linear test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:16,956 INFO program/MainThread: stdout[20]: Logical volume "lv8" created. 2025-06-28 18:19:16,957 INFO program/MainThread: stderr[20]: 2025-06-28 18:19:16,957 INFO program/MainThread: ...done [20] (exit code: 0) 2025-06-28 18:19:16,961 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv8 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:16,964 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv8 ; status: True ; 2025-06-28 18:19:16,965 DEBUG blivet/MainThread: test_vg3-lv8 sysfs_path set to /sys/devices/virtual/block/dm-0 2025-06-28 18:19:16,965 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:17,013 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:17,017 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg3-lv8 ; sysfs_path: /sys/devices/virtual/block/dm-0 ; 2025-06-28 18:19:17,018 DEBUG blivet/MainThread: updated test_vg3-lv8 size to 1.2 GiB (1.2 GiB) 2025-06-28 18:19:17,018 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-0 2025-06-28 18:19:17,026 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:17,027 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:17,042 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:17,043 INFO blivet/MainThread: executing action: [220] create format xfs filesystem on lvmlv test_vg3-lv8 (id 214) 2025-06-28 18:19:17,047 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv8 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:17,050 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg3-lv8 ; type: xfs ; status: False ; 2025-06-28 18:19:17,054 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg3-lv8 ; mountpoint: ; 2025-06-28 18:19:17,054 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. All arguments passed will be ignored. bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None, 2025-06-28 18:19:17,054 INFO program/MainThread: Running [21] mkfs.xfs /dev/mapper/test_vg3-lv8 -f ... 
2025-06-28 18:19:17,184 INFO program/MainThread: stdout[21]: meta-data=/dev/mapper/test_vg3-lv8 isize=512 agcount=4, agsize=78592 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=1 = reflink=1 bigtime=1 inobtcount=1 nrext64=1 = exchange=0 data = bsize=4096 blocks=314368, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0 log =internal log bsize=4096 blocks=16384, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 2025-06-28 18:19:17,184 INFO program/MainThread: stderr[21]: 2025-06-28 18:19:17,184 INFO program/MainThread: ...done [21] (exit code: 0) 2025-06-28 18:19:17,185 INFO program/MainThread: Running [22] xfs_admin -L -- /dev/mapper/test_vg3-lv8 ... 2025-06-28 18:19:17,211 INFO program/MainThread: stdout[22]: writing all SBs new label = "" 2025-06-28 18:19:17,211 INFO program/MainThread: stderr[22]: 2025-06-28 18:19:17,211 INFO program/MainThread: ...done [22] (exit code: 0) 2025-06-28 18:19:17,211 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:17,230 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:17,235 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv8 ; status: True ; 2025-06-28 18:19:17,236 DEBUG blivet/MainThread: test_vg3-lv8 sysfs_path set to /sys/devices/virtual/block/dm-0 2025-06-28 18:19:17,236 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:17,237 INFO program/MainThread: Running... 
udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-0 2025-06-28 18:19:17,244 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:17,245 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:17,259 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:17,260 INFO blivet/MainThread: executing action: [210] create device lvmlv test_vg3-lv7 (id 205) 2025-06-28 18:19:17,264 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg3-lv7 ; status: False ; 2025-06-28 18:19:17,267 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg3-lv7 ; orig: False ; 2025-06-28 18:19:17,270 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg3 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:17,270 INFO program/MainThread: Running [23] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:17,299 INFO program/MainThread: stdout[23]: LVM2_VG_NAME=test_vg3 LVM2_VG_UUID=ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM LVM2_VG_SIZE=12851347456 LVM2_VG_FREE=11563696128 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=3064 LVM2_VG_FREE_COUNT=2757 LVM2_PV_COUNT=4 LVM2_VG_EXPORTED= LVM2_VG_TAGS= 2025-06-28 18:19:17,299 INFO program/MainThread: stderr[23]: 2025-06-28 18:19:17,299 INFO program/MainThread: ...done [23] (exit code: 0) 2025-06-28 18:19:17,305 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg3-lv7 ; status: False ; 2025-06-28 18:19:17,305 INFO program/MainThread: Running [24] lvm lvcreate -n lv7 -L 1257472K -y --type linear test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:17,356 INFO program/MainThread: stdout[24]: Logical volume "lv7" created. 
2025-06-28 18:19:17,356 INFO program/MainThread: stderr[24]: 2025-06-28 18:19:17,356 INFO program/MainThread: ...done [24] (exit code: 0) 2025-06-28 18:19:17,362 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv7 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:17,365 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv7 ; status: True ; 2025-06-28 18:19:17,366 DEBUG blivet/MainThread: test_vg3-lv7 sysfs_path set to /sys/devices/virtual/block/dm-1 2025-06-28 18:19:17,366 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:17,407 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:17,411 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg3-lv7 ; sysfs_path: /sys/devices/virtual/block/dm-1 ; 2025-06-28 18:19:17,412 DEBUG blivet/MainThread: updated test_vg3-lv7 size to 1.2 GiB (1.2 GiB) 2025-06-28 18:19:17,412 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-1 2025-06-28 18:19:17,421 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:17,421 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:17,439 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:17,440 INFO blivet/MainThread: executing action: [211] create format xfs filesystem on lvmlv test_vg3-lv7 (id 205) 2025-06-28 18:19:17,444 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv7 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:17,447 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg3-lv7 ; type: xfs ; status: False ; 2025-06-28 18:19:17,451 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg3-lv7 ; mountpoint: ; 2025-06-28 18:19:17,451 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. All arguments passed will be ignored. bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None, 2025-06-28 18:19:17,451 INFO program/MainThread: Running [25] mkfs.xfs /dev/mapper/test_vg3-lv7 -f ... 2025-06-28 18:19:18,278 INFO program/MainThread: stdout[25]: meta-data=/dev/mapper/test_vg3-lv7 isize=512 agcount=4, agsize=78592 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=1 = reflink=1 bigtime=1 inobtcount=1 nrext64=1 = exchange=0 data = bsize=4096 blocks=314368, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0 log =internal log bsize=4096 blocks=16384, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 2025-06-28 18:19:18,278 INFO program/MainThread: stderr[25]: 2025-06-28 18:19:18,278 INFO program/MainThread: ...done [25] (exit code: 0) 2025-06-28 18:19:18,279 INFO program/MainThread: Running [26] xfs_admin -L -- /dev/mapper/test_vg3-lv7 ... 
2025-06-28 18:19:18,294 INFO program/MainThread: stdout[26]: writing all SBs new label = "" 2025-06-28 18:19:18,294 INFO program/MainThread: stderr[26]: 2025-06-28 18:19:18,294 INFO program/MainThread: ...done [26] (exit code: 0) 2025-06-28 18:19:18,294 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:18,311 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:18,316 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv7 ; status: True ; 2025-06-28 18:19:18,317 DEBUG blivet/MainThread: test_vg3-lv7 sysfs_path set to /sys/devices/virtual/block/dm-1 2025-06-28 18:19:18,317 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:18,318 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-1 2025-06-28 18:19:18,325 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:18,326 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:18,338 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:18,339 INFO blivet/MainThread: executing action: [201] create device lvmlv test_vg3-lv6 (id 196) 2025-06-28 18:19:18,343 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg3-lv6 ; status: False ; 2025-06-28 18:19:18,346 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg3-lv6 ; orig: False ; 2025-06-28 18:19:18,349 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg3 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:18,350 INFO program/MainThread: Running [27] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:18,381 INFO program/MainThread: stdout[27]: LVM2_VG_NAME=test_vg3 LVM2_VG_UUID=ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM LVM2_VG_SIZE=12851347456 LVM2_VG_FREE=10276044800 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=3064 LVM2_VG_FREE_COUNT=2450 LVM2_PV_COUNT=4 LVM2_VG_EXPORTED= LVM2_VG_TAGS= 2025-06-28 18:19:18,381 INFO program/MainThread: stderr[27]: 2025-06-28 18:19:18,381 INFO program/MainThread: ...done [27] (exit code: 0) 2025-06-28 18:19:18,385 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg3-lv6 ; status: False ; 2025-06-28 18:19:18,386 INFO program/MainThread: Running [28] lvm lvcreate -n lv6 -L 3141632K -y --type linear test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:18,431 INFO program/MainThread: stdout[28]: Logical volume "lv6" created. 
2025-06-28 18:19:18,431 INFO program/MainThread: stderr[28]: 2025-06-28 18:19:18,431 INFO program/MainThread: ...done [28] (exit code: 0) 2025-06-28 18:19:18,464 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv6 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:18,476 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv6 ; status: True ; 2025-06-28 18:19:18,477 DEBUG blivet/MainThread: test_vg3-lv6 sysfs_path set to /sys/devices/virtual/block/dm-2 2025-06-28 18:19:18,477 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:18,491 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:18,497 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg3-lv6 ; sysfs_path: /sys/devices/virtual/block/dm-2 ; 2025-06-28 18:19:18,497 DEBUG blivet/MainThread: updated test_vg3-lv6 size to 3 GiB (3 GiB) 2025-06-28 18:19:18,497 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-2 2025-06-28 18:19:18,505 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:18,505 INFO program/MainThread: Running... 
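The `lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b` invocations above emit machine-parseable `KEY=value` pairs. A minimal sketch of reading one of those lines (the `parse_vgs` helper is hypothetical, not blivet's actual parser; the values are copied from run [27] above):

```python
# Sketch: parse the --nameprefixes --unquoted output of `lvm vgs`.
# parse_vgs is a hypothetical helper; the input line is copied verbatim
# from run [27] in the log.
def parse_vgs(line):
    return dict(kv.split("=", 1) for kv in line.split())

line = ("LVM2_VG_NAME=test_vg3 LVM2_VG_UUID=ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM "
        "LVM2_VG_SIZE=12851347456 LVM2_VG_FREE=10276044800 LVM2_VG_EXTENT_SIZE=4194304 "
        "LVM2_VG_EXTENT_COUNT=3064 LVM2_VG_FREE_COUNT=2450 LVM2_PV_COUNT=4 "
        "LVM2_VG_EXPORTED= LVM2_VG_TAGS=")
vg = parse_vgs(line)

# Size and free space are whole multiples of the 4 MiB extent size:
assert int(vg["LVM2_VG_SIZE"]) == int(vg["LVM2_VG_EXTENT_SIZE"]) * int(vg["LVM2_VG_EXTENT_COUNT"])
assert int(vg["LVM2_VG_FREE"]) == int(vg["LVM2_VG_EXTENT_SIZE"]) * int(vg["LVM2_VG_FREE_COUNT"])
```

The internal consistency of the reported numbers (12851347456 = 4194304 × 3064, 10276044800 = 4194304 × 2450) is why blivet can track free space purely from this one query.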
udevadm settle --timeout=300 2025-06-28 18:19:18,517 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:18,517 INFO blivet/MainThread: executing action: [202] create format xfs filesystem on lvmlv test_vg3-lv6 (id 196) 2025-06-28 18:19:18,522 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv6 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:18,525 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg3-lv6 ; type: xfs ; status: False ; 2025-06-28 18:19:18,528 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg3-lv6 ; mountpoint: ; 2025-06-28 18:19:18,528 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. All arguments passed will be ignored. bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None, 2025-06-28 18:19:18,529 INFO program/MainThread: Running [29] mkfs.xfs /dev/mapper/test_vg3-lv6 -f ... 2025-06-28 18:19:19,365 INFO program/MainThread: stdout[29]: meta-data=/dev/mapper/test_vg3-lv6 isize=512 agcount=4, agsize=196352 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=1 = reflink=1 bigtime=1 inobtcount=1 nrext64=1 = exchange=0 data = bsize=4096 blocks=785408, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0 log =internal log bsize=4096 blocks=16384, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 2025-06-28 18:19:19,365 INFO program/MainThread: stderr[29]: 2025-06-28 18:19:19,365 INFO program/MainThread: ...done [29] (exit code: 0) 2025-06-28 18:19:19,365 INFO program/MainThread: Running [30] xfs_admin -L -- /dev/mapper/test_vg3-lv6 ... 
2025-06-28 18:19:19,380 INFO program/MainThread: stdout[30]: writing all SBs new label = "" 2025-06-28 18:19:19,380 INFO program/MainThread: stderr[30]: 2025-06-28 18:19:19,380 INFO program/MainThread: ...done [30] (exit code: 0) 2025-06-28 18:19:19,380 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:19,399 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:19,404 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv6 ; status: True ; 2025-06-28 18:19:19,405 DEBUG blivet/MainThread: test_vg3-lv6 sysfs_path set to /sys/devices/virtual/block/dm-2 2025-06-28 18:19:19,405 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:19,406 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-2 2025-06-28 18:19:19,415 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:19,415 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:19,428 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:19,429 INFO blivet/MainThread: executing action: [192] create device lvmlv test_vg3-lv5 (id 187) 2025-06-28 18:19:19,433 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg3-lv5 ; status: False ; 2025-06-28 18:19:19,436 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg3-lv5 ; orig: False ; 2025-06-28 18:19:19,440 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg3 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:19,440 INFO program/MainThread: Running [31] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:19,467 INFO program/MainThread: stdout[31]: LVM2_VG_NAME=test_vg3 LVM2_VG_UUID=ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM LVM2_VG_SIZE=12851347456 LVM2_VG_FREE=7059013632 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=3064 LVM2_VG_FREE_COUNT=1683 LVM2_PV_COUNT=4 LVM2_VG_EXPORTED= LVM2_VG_TAGS= 2025-06-28 18:19:19,467 INFO program/MainThread: stderr[31]: 2025-06-28 18:19:19,467 INFO program/MainThread: ...done [31] (exit code: 0) 2025-06-28 18:19:19,471 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg3-lv5 ; status: False ; 2025-06-28 18:19:19,471 INFO program/MainThread: Running [32] lvm lvcreate -n lv5 -L 3772416K -y --type linear test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:19,518 INFO program/MainThread: stdout[32]: Logical volume "lv5" created. 
2025-06-28 18:19:19,519 INFO program/MainThread: stderr[32]: 2025-06-28 18:19:19,519 INFO program/MainThread: ...done [32] (exit code: 0) 2025-06-28 18:19:19,540 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv5 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:19,554 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv5 ; status: True ; 2025-06-28 18:19:19,555 DEBUG blivet/MainThread: test_vg3-lv5 sysfs_path set to /sys/devices/virtual/block/dm-3 2025-06-28 18:19:19,555 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:19,567 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:19,572 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg3-lv5 ; sysfs_path: /sys/devices/virtual/block/dm-3 ; 2025-06-28 18:19:19,572 DEBUG blivet/MainThread: updated test_vg3-lv5 size to 3.6 GiB (3.6 GiB) 2025-06-28 18:19:19,573 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-3 2025-06-28 18:19:19,581 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:19,581 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:19,596 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:19,597 INFO blivet/MainThread: executing action: [193] create format xfs filesystem on lvmlv test_vg3-lv5 (id 187) 2025-06-28 18:19:19,601 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv5 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:19,604 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg3-lv5 ; type: xfs ; status: False ; 2025-06-28 18:19:19,608 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg3-lv5 ; mountpoint: ; 2025-06-28 18:19:19,608 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. All arguments passed will be ignored. bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None, 2025-06-28 18:19:19,608 INFO program/MainThread: Running [33] mkfs.xfs /dev/mapper/test_vg3-lv5 -f ... 2025-06-28 18:19:20,451 INFO program/MainThread: stdout[33]: meta-data=/dev/mapper/test_vg3-lv5 isize=512 agcount=4, agsize=235776 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=1 = reflink=1 bigtime=1 inobtcount=1 nrext64=1 = exchange=0 data = bsize=4096 blocks=943104, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0 log =internal log bsize=4096 blocks=16384, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 2025-06-28 18:19:20,452 INFO program/MainThread: stderr[33]: 2025-06-28 18:19:20,452 INFO program/MainThread: ...done [33] (exit code: 0) 2025-06-28 18:19:20,452 INFO program/MainThread: Running [34] xfs_admin -L -- /dev/mapper/test_vg3-lv5 ... 
2025-06-28 18:19:20,467 INFO program/MainThread: stdout[34]: writing all SBs new label = "" 2025-06-28 18:19:20,468 INFO program/MainThread: stderr[34]: 2025-06-28 18:19:20,468 INFO program/MainThread: ...done [34] (exit code: 0) 2025-06-28 18:19:20,468 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:20,487 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:20,492 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv5 ; status: True ; 2025-06-28 18:19:20,492 DEBUG blivet/MainThread: test_vg3-lv5 sysfs_path set to /sys/devices/virtual/block/dm-3 2025-06-28 18:19:20,493 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:20,493 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-3 2025-06-28 18:19:20,501 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:20,501 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:20,515 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:20,516 INFO blivet/MainThread: executing action: [140] create format lvmpv on disk sdf (id 37) 2025-06-28 18:19:20,520 DEBUG blivet/MainThread: DiskDevice.setup: sdf ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:20,523 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdf ; type: lvmpv ; status: False ; 2025-06-28 18:19:20,527 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdf ; type: lvmpv ; status: False ; 2025-06-28 18:19:20,527 DEBUG blivet/MainThread: lvm filter: device /dev/sdf added to the list of allowed devices 2025-06-28 18:19:20,527 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:20,527 INFO program/MainThread: Running [35] lvm pvcreate /dev/sdf --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ... 2025-06-28 18:19:20,559 INFO program/MainThread: stdout[35]: Physical volume "/dev/sdf" successfully created. 2025-06-28 18:19:20,560 INFO program/MainThread: stderr[35]: 2025-06-28 18:19:20,560 INFO program/MainThread: ...done [35] (exit code: 0) 2025-06-28 18:19:20,560 INFO program/MainThread: Running [36] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:20,571 INFO program/MainThread: stdout[36]: use_devicesfile=1 2025-06-28 18:19:20,571 INFO program/MainThread: stderr[36]: 2025-06-28 18:19:20,571 INFO program/MainThread: ...done [36] (exit code: 0) 2025-06-28 18:19:20,571 INFO program/MainThread: Running [37] lvmdevices --adddev /dev/sdf ... 
2025-06-28 18:19:20,597 INFO program/MainThread: stdout[37]: 2025-06-28 18:19:20,597 INFO program/MainThread: stderr[37]: 2025-06-28 18:19:20,597 INFO program/MainThread: ...done [37] (exit code: 0) 2025-06-28 18:19:20,597 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdf 2025-06-28 18:19:20,598 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:20,608 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:20,613 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdf ; status: True ; 2025-06-28 18:19:20,613 DEBUG blivet/MainThread: sdf sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf 2025-06-28 18:19:20,614 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:20,614 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdf 2025-06-28 18:19:20,622 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:20,623 INFO program/MainThread: Running... 
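Every `lvcreate -L` size in this log is already a whole number of 4 MiB extents; LVM would otherwise round the request up to the next extent boundary. A small sketch of that rounding, cross-checked against the sizes and free-extent counts recorded above (`round_to_extents` is an illustrative helper, not LVM code):

```python
# 4 MiB extents, as reported by LVM2_VG_EXTENT_SIZE=4194304 bytes in the log.
EXTENT_KIB = 4096

def round_to_extents(size_kib, extent_kib=EXTENT_KIB):
    """Round a requested size up to a whole number of extents (illustrative)."""
    extents = -(-size_kib // extent_kib)  # ceiling division
    return extents * extent_kib

# The -L arguments used in runs [28], [32], [46], [50] are extent-aligned:
for size_kib in (3141632, 3772416, 1888256, 946176):
    assert round_to_extents(size_kib) == size_kib

# lv6's 3141632 KiB is 767 extents, which matches the drop in
# LVM2_VG_FREE_COUNT between runs [27] (2450 free) and [31] (1683 free).
assert 3141632 // EXTENT_KIB == 2450 - 1683
```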
udevadm settle --timeout=300 2025-06-28 18:19:20,637 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:20,638 INFO blivet/MainThread: executing action: [136] create format lvmpv on disk sde (id 32) 2025-06-28 18:19:20,642 DEBUG blivet/MainThread: DiskDevice.setup: sde ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:20,645 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sde ; type: lvmpv ; status: False ; 2025-06-28 18:19:20,648 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sde ; type: lvmpv ; status: False ; 2025-06-28 18:19:20,649 DEBUG blivet/MainThread: lvm filter: device /dev/sde added to the list of allowed devices 2025-06-28 18:19:20,649 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:20,649 INFO program/MainThread: Running [38] lvm pvcreate /dev/sde --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ... 2025-06-28 18:19:20,679 INFO program/MainThread: stdout[38]: Physical volume "/dev/sde" successfully created. 2025-06-28 18:19:20,680 INFO program/MainThread: stderr[38]: 2025-06-28 18:19:20,680 INFO program/MainThread: ...done [38] (exit code: 0) 2025-06-28 18:19:20,680 INFO program/MainThread: Running [39] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:20,690 INFO program/MainThread: stdout[39]: use_devicesfile=1 2025-06-28 18:19:20,690 INFO program/MainThread: stderr[39]: 2025-06-28 18:19:20,690 INFO program/MainThread: ...done [39] (exit code: 0) 2025-06-28 18:19:20,690 INFO program/MainThread: Running [40] lvmdevices --adddev /dev/sde ... 
2025-06-28 18:19:20,716 INFO program/MainThread: stdout[40]: 2025-06-28 18:19:20,716 INFO program/MainThread: stderr[40]: 2025-06-28 18:19:20,716 INFO program/MainThread: ...done [40] (exit code: 0) 2025-06-28 18:19:20,716 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sde 2025-06-28 18:19:20,717 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:20,727 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:20,732 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sde ; status: True ; 2025-06-28 18:19:20,732 DEBUG blivet/MainThread: sde sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde 2025-06-28 18:19:20,733 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:20,733 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sde 2025-06-28 18:19:20,742 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:20,742 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:20,755 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:20,755 INFO blivet/MainThread: executing action: [132] create format lvmpv on disk sdd (id 27) 2025-06-28 18:19:20,760 DEBUG blivet/MainThread: DiskDevice.setup: sdd ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:20,764 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdd ; type: lvmpv ; status: False ; 2025-06-28 18:19:20,767 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdd ; type: lvmpv ; status: False ; 2025-06-28 18:19:20,767 DEBUG blivet/MainThread: lvm filter: device /dev/sdd added to the list of allowed devices 2025-06-28 18:19:20,767 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:20,768 INFO program/MainThread: Running [41] lvm pvcreate /dev/sdd --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ... 2025-06-28 18:19:20,803 INFO program/MainThread: stdout[41]: Physical volume "/dev/sdd" successfully created. 2025-06-28 18:19:20,803 INFO program/MainThread: stderr[41]: 2025-06-28 18:19:20,804 INFO program/MainThread: ...done [41] (exit code: 0) 2025-06-28 18:19:20,804 INFO program/MainThread: Running [42] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:20,814 INFO program/MainThread: stdout[42]: use_devicesfile=1 2025-06-28 18:19:20,815 INFO program/MainThread: stderr[42]: 2025-06-28 18:19:20,815 INFO program/MainThread: ...done [42] (exit code: 0) 2025-06-28 18:19:20,815 INFO program/MainThread: Running [43] lvmdevices --adddev /dev/sdd ... 
2025-06-28 18:19:20,846 INFO program/MainThread: stdout[43]: 2025-06-28 18:19:20,846 INFO program/MainThread: stderr[43]: 2025-06-28 18:19:20,846 INFO program/MainThread: ...done [43] (exit code: 0) 2025-06-28 18:19:20,846 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdd 2025-06-28 18:19:20,847 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:20,854 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:20,859 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdd ; status: True ; 2025-06-28 18:19:20,859 DEBUG blivet/MainThread: sdd sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd 2025-06-28 18:19:20,859 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:20,860 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdd 2025-06-28 18:19:20,867 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:20,868 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:20,880 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:20,881 INFO blivet/MainThread: executing action: [146] create device lvmvg test_vg2 (id 142) 2025-06-28 18:19:20,885 DEBUG blivet/MainThread: LVMVolumeGroupDevice.create: test_vg2 ; status: False ; 2025-06-28 18:19:20,888 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg2 ; orig: False ; 2025-06-28 18:19:20,892 DEBUG blivet/MainThread: DiskDevice.setup: sdd ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:20,895 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdd ; type: lvmpv ; status: False ; 2025-06-28 18:19:20,898 DEBUG blivet/MainThread: DiskDevice.setup: sde ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:20,901 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sde ; type: lvmpv ; status: False ; 2025-06-28 18:19:20,904 DEBUG blivet/MainThread: DiskDevice.setup: sdf ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:20,907 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdf ; type: lvmpv ; status: False ; 2025-06-28 18:19:20,911 DEBUG blivet/MainThread: LVMVolumeGroupDevice._create: test_vg2 ; status: False ; 2025-06-28 18:19:20,911 INFO program/MainThread: Running [44] lvm vgcreate -s 4096K test_vg2 /dev/sdd /dev/sde /dev/sdf --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 
2025-06-28 18:19:20,977 INFO program/MainThread: stdout[44]: Volume group "test_vg2" successfully created 2025-06-28 18:19:20,977 INFO program/MainThread: stderr[44]: 2025-06-28 18:19:20,977 INFO program/MainThread: ...done [44] (exit code: 0) 2025-06-28 18:19:20,991 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg2 ; orig: False ; status: False ; controllable: True ; 2025-06-28 18:19:21,000 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg2 ; orig: False ; 2025-06-28 18:19:21,009 DEBUG blivet/MainThread: DiskDevice.setup: sdd ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:21,025 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdd ; type: lvmpv ; status: False ; 2025-06-28 18:19:21,038 DEBUG blivet/MainThread: DiskDevice.setup: sde ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:21,047 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sde ; type: lvmpv ; status: False ; 2025-06-28 18:19:21,051 DEBUG blivet/MainThread: DiskDevice.setup: sdf ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:21,055 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdf ; type: lvmpv ; status: False ; 2025-06-28 18:19:21,055 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:21,068 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:21,072 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg2 ; status: False ; 2025-06-28 18:19:21,075 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg2 ; status: False ; 2025-06-28 18:19:21,076 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:21,088 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:21,088 INFO program/MainThread: Running... 
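The freshly created `test_vg2` (run [45], further down) and the earlier `test_vg3` (run [27]) report geometries that cross-check cleanly: both use 4 MiB extents, and the extent count divides evenly by the PV count, suggesting each test disk contributes the same 766 usable extents. A quick consistency check on the values copied from the log:

```python
# Values copied from vgs runs [45] (test_vg2) and [27] (test_vg3) in the log.
extent_size = 4194304  # bytes, LVM2_VG_EXTENT_SIZE in both VGs

vgs = {
    "test_vg2": {"size": 9638510592,  "extents": 2298, "pvs": 3},
    "test_vg3": {"size": 12851347456, "extents": 3064, "pvs": 4},
}
for name, vg in vgs.items():
    # VG size is exactly the extent grid
    assert vg["size"] == extent_size * vg["extents"]
    # extents split evenly across PVs
    assert vg["extents"] % vg["pvs"] == 0

# Both VGs get 766 extents (~2.99 GiB) per PV, consistent with identical
# test disks minus LVM metadata overhead.
assert 2298 // 3 == 3064 // 4 == 766
```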
udevadm trigger --action=change --subsystem-match=block --sysname-match=test_vg2 2025-06-28 18:19:21,095 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:21,096 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:21,106 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:21,107 INFO blivet/MainThread: executing action: [162] create device lvmlv test_vg2-lv4 (id 157) 2025-06-28 18:19:21,111 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg2-lv4 ; status: False ; 2025-06-28 18:19:21,114 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg2-lv4 ; orig: False ; 2025-06-28 18:19:21,118 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg2 ; orig: False ; status: False ; controllable: True ; 2025-06-28 18:19:21,121 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg2 ; orig: False ; 2025-06-28 18:19:21,124 DEBUG blivet/MainThread: DiskDevice.setup: sdd ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:21,128 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdd ; type: lvmpv ; status: False ; 2025-06-28 18:19:21,131 DEBUG blivet/MainThread: DiskDevice.setup: sde ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:21,135 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sde ; type: lvmpv ; status: False ; 2025-06-28 18:19:21,138 DEBUG blivet/MainThread: DiskDevice.setup: sdf ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:21,142 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdf ; type: lvmpv ; status: False ; 2025-06-28 18:19:21,142 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:21,152 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:21,157 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg2 ; status: False ; 2025-06-28 18:19:21,157 INFO program/MainThread: Running [45] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg2 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:21,185 INFO program/MainThread: stdout[45]: LVM2_VG_NAME=test_vg2 LVM2_VG_UUID=lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV LVM2_VG_SIZE=9638510592 LVM2_VG_FREE=9638510592 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=2298 LVM2_VG_FREE_COUNT=2298 LVM2_PV_COUNT=3 LVM2_VG_EXPORTED= LVM2_VG_TAGS= 2025-06-28 18:19:21,185 INFO program/MainThread: stderr[45]: 2025-06-28 18:19:21,185 INFO program/MainThread: ...done [45] (exit code: 0) 2025-06-28 18:19:21,190 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg2-lv4 ; status: False ; 2025-06-28 18:19:21,190 INFO program/MainThread: Running [46] lvm lvcreate -n lv4 -L 1888256K -y --type linear test_vg2 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:21,237 INFO program/MainThread: stdout[46]: Logical volume "lv4" created. 2025-06-28 18:19:21,237 INFO program/MainThread: stderr[46]: 2025-06-28 18:19:21,237 INFO program/MainThread: ...done [46] (exit code: 0) 2025-06-28 18:19:21,250 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg2-lv4 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:21,257 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg2-lv4 ; status: True ; 2025-06-28 18:19:21,257 DEBUG blivet/MainThread: test_vg2-lv4 sysfs_path set to /sys/devices/virtual/block/dm-4 2025-06-28 18:19:21,257 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:21,281 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:21,286 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg2-lv4 ; sysfs_path: /sys/devices/virtual/block/dm-4 ; 2025-06-28 18:19:21,287 DEBUG blivet/MainThread: updated test_vg2-lv4 size to 1.8 GiB (1.8 GiB) 2025-06-28 18:19:21,287 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-4 2025-06-28 18:19:21,295 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:21,296 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:21,309 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:21,310 INFO blivet/MainThread: executing action: [163] create format xfs filesystem on lvmlv test_vg2-lv4 (id 157) 2025-06-28 18:19:21,314 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg2-lv4 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:21,317 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg2-lv4 ; type: xfs ; status: False ; 2025-06-28 18:19:21,321 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg2-lv4 ; mountpoint: ; 2025-06-28 18:19:21,321 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. All arguments passed will be ignored. bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None, 2025-06-28 18:19:21,321 INFO program/MainThread: Running [47] mkfs.xfs /dev/mapper/test_vg2-lv4 -f ... 
2025-06-28 18:19:21,551 INFO program/MainThread: stdout[47]: meta-data=/dev/mapper/test_vg2-lv4 isize=512 agcount=4, agsize=118016 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=1 = reflink=1 bigtime=1 inobtcount=1 nrext64=1 = exchange=0 data = bsize=4096 blocks=472064, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0 log =internal log bsize=4096 blocks=16384, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 2025-06-28 18:19:21,551 INFO program/MainThread: stderr[47]: 2025-06-28 18:19:21,551 INFO program/MainThread: ...done [47] (exit code: 0) 2025-06-28 18:19:21,552 INFO program/MainThread: Running [48] xfs_admin -L -- /dev/mapper/test_vg2-lv4 ... 2025-06-28 18:19:21,566 INFO program/MainThread: stdout[48]: writing all SBs new label = "" 2025-06-28 18:19:21,567 INFO program/MainThread: stderr[48]: 2025-06-28 18:19:21,567 INFO program/MainThread: ...done [48] (exit code: 0) 2025-06-28 18:19:21,567 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:21,586 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:21,592 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg2-lv4 ; status: True ; 2025-06-28 18:19:21,592 DEBUG blivet/MainThread: test_vg2-lv4 sysfs_path set to /sys/devices/virtual/block/dm-4 2025-06-28 18:19:21,592 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:21,593 INFO program/MainThread: Running... 
udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-4 2025-06-28 18:19:21,601 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:21,601 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:21,614 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:21,615 INFO blivet/MainThread: executing action: [153] create device lvmlv test_vg2-lv3 (id 148) 2025-06-28 18:19:21,619 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg2-lv3 ; status: False ; 2025-06-28 18:19:21,623 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg2-lv3 ; orig: False ; 2025-06-28 18:19:21,626 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg2 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:21,626 INFO program/MainThread: Running [49] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg2 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:21,653 INFO program/MainThread: stdout[49]: LVM2_VG_NAME=test_vg2 LVM2_VG_UUID=lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV LVM2_VG_SIZE=9638510592 LVM2_VG_FREE=7704936448 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=2298 LVM2_VG_FREE_COUNT=1837 LVM2_PV_COUNT=3 LVM2_VG_EXPORTED= LVM2_VG_TAGS= 2025-06-28 18:19:21,653 INFO program/MainThread: stderr[49]: 2025-06-28 18:19:21,653 INFO program/MainThread: ...done [49] (exit code: 0) 2025-06-28 18:19:21,657 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg2-lv3 ; status: False ; 2025-06-28 18:19:21,657 INFO program/MainThread: Running [50] lvm lvcreate -n lv3 -L 946176K -y --type linear test_vg2 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:21,709 INFO program/MainThread: stdout[50]: Logical volume "lv3" created. 
2025-06-28 18:19:21,709 INFO program/MainThread: stderr[50]: 2025-06-28 18:19:21,709 INFO program/MainThread: ...done [50] (exit code: 0) 2025-06-28 18:19:21,714 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg2-lv3 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:21,719 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg2-lv3 ; status: True ; 2025-06-28 18:19:21,719 DEBUG blivet/MainThread: test_vg2-lv3 sysfs_path set to /sys/devices/virtual/block/dm-5 2025-06-28 18:19:21,719 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:21,748 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:21,753 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg2-lv3 ; sysfs_path: /sys/devices/virtual/block/dm-5 ; 2025-06-28 18:19:21,753 DEBUG blivet/MainThread: updated test_vg2-lv3 size to 924 MiB (924 MiB) 2025-06-28 18:19:21,754 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-5 2025-06-28 18:19:21,762 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:21,763 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:21,778 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:21,779 INFO blivet/MainThread: executing action: [154] create format xfs filesystem on lvmlv test_vg2-lv3 (id 148) 2025-06-28 18:19:21,783 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg2-lv3 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:21,787 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg2-lv3 ; type: xfs ; status: False ; 2025-06-28 18:19:21,790 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg2-lv3 ; mountpoint: ; 2025-06-28 18:19:21,790 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. All arguments passed will be ignored. bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None, 2025-06-28 18:19:21,791 INFO program/MainThread: Running [51] mkfs.xfs /dev/mapper/test_vg2-lv3 -f ... 2025-06-28 18:19:22,635 INFO program/MainThread: stdout[51]: meta-data=/dev/mapper/test_vg2-lv3 isize=512 agcount=4, agsize=59136 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=1 = reflink=1 bigtime=1 inobtcount=1 nrext64=1 = exchange=0 data = bsize=4096 blocks=236544, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0 log =internal log bsize=4096 blocks=16384, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 2025-06-28 18:19:22,636 INFO program/MainThread: stderr[51]: 2025-06-28 18:19:22,636 INFO program/MainThread: ...done [51] (exit code: 0) 2025-06-28 18:19:22,636 INFO program/MainThread: Running [52] xfs_admin -L -- /dev/mapper/test_vg2-lv3 ... 
2025-06-28 18:19:22,656 INFO program/MainThread: stdout[52]: writing all SBs new label = "" 2025-06-28 18:19:22,656 INFO program/MainThread: stderr[52]: 2025-06-28 18:19:22,656 INFO program/MainThread: ...done [52] (exit code: 0) 2025-06-28 18:19:22,656 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:22,671 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:22,676 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg2-lv3 ; status: True ; 2025-06-28 18:19:22,676 DEBUG blivet/MainThread: test_vg2-lv3 sysfs_path set to /sys/devices/virtual/block/dm-5 2025-06-28 18:19:22,677 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:22,677 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-5 2025-06-28 18:19:22,685 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:22,685 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:22,701 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:22,702 INFO blivet/MainThread: executing action: [105] create format lvmpv on disk sdc (id 22) 2025-06-28 18:19:22,706 DEBUG blivet/MainThread: DiskDevice.setup: sdc ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:22,709 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdc ; type: lvmpv ; status: False ; 2025-06-28 18:19:22,714 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdc ; type: lvmpv ; status: False ; 2025-06-28 18:19:22,714 DEBUG blivet/MainThread: lvm filter: device /dev/sdc added to the list of allowed devices 2025-06-28 18:19:22,714 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:22,714 INFO program/MainThread: Running [53] lvm pvcreate /dev/sdc --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ... 2025-06-28 18:19:22,747 INFO program/MainThread: stdout[53]: Physical volume "/dev/sdc" successfully created. 2025-06-28 18:19:22,747 INFO program/MainThread: stderr[53]: 2025-06-28 18:19:22,747 INFO program/MainThread: ...done [53] (exit code: 0) 2025-06-28 18:19:22,747 INFO program/MainThread: Running [54] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:22,758 INFO program/MainThread: stdout[54]: use_devicesfile=1 2025-06-28 18:19:22,758 INFO program/MainThread: stderr[54]: 2025-06-28 18:19:22,758 INFO program/MainThread: ...done [54] (exit code: 0) 2025-06-28 18:19:22,758 INFO program/MainThread: Running [55] lvmdevices --adddev /dev/sdc ... 
2025-06-28 18:19:22,786 INFO program/MainThread: stdout[55]: 2025-06-28 18:19:22,786 INFO program/MainThread: stderr[55]: 2025-06-28 18:19:22,786 INFO program/MainThread: ...done [55] (exit code: 0) 2025-06-28 18:19:22,786 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdc 2025-06-28 18:19:22,787 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:22,798 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:22,803 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdc ; status: True ; 2025-06-28 18:19:22,803 DEBUG blivet/MainThread: sdc sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc 2025-06-28 18:19:22,804 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:22,804 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdc 2025-06-28 18:19:22,813 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:22,813 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:22,829 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:22,830 INFO blivet/MainThread: executing action: [101] create format lvmpv on disk sdb (id 7) 2025-06-28 18:19:22,834 DEBUG blivet/MainThread: DiskDevice.setup: sdb ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:22,837 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdb ; type: lvmpv ; status: False ; 2025-06-28 18:19:22,840 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdb ; type: lvmpv ; status: False ; 2025-06-28 18:19:22,841 DEBUG blivet/MainThread: lvm filter: device /dev/sdb added to the list of allowed devices 2025-06-28 18:19:22,841 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:22,841 INFO program/MainThread: Running [56] lvm pvcreate /dev/sdb --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ... 2025-06-28 18:19:22,879 INFO program/MainThread: stdout[56]: Physical volume "/dev/sdb" successfully created. 2025-06-28 18:19:22,880 INFO program/MainThread: stderr[56]: 2025-06-28 18:19:22,880 INFO program/MainThread: ...done [56] (exit code: 0) 2025-06-28 18:19:22,880 INFO program/MainThread: Running [57] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:22,891 INFO program/MainThread: stdout[57]: use_devicesfile=1 2025-06-28 18:19:22,891 INFO program/MainThread: stderr[57]: 2025-06-28 18:19:22,891 INFO program/MainThread: ...done [57] (exit code: 0) 2025-06-28 18:19:22,891 INFO program/MainThread: Running [58] lvmdevices --adddev /dev/sdb ... 
2025-06-28 18:19:22,923 INFO program/MainThread: stdout[58]: 2025-06-28 18:19:22,923 INFO program/MainThread: stderr[58]: 2025-06-28 18:19:22,923 INFO program/MainThread: ...done [58] (exit code: 0) 2025-06-28 18:19:22,923 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdb 2025-06-28 18:19:22,924 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:22,935 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:22,940 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdb ; status: True ; 2025-06-28 18:19:22,940 DEBUG blivet/MainThread: sdb sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb 2025-06-28 18:19:22,941 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:22,941 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdb 2025-06-28 18:19:22,950 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:22,950 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:22,964 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:22,965 INFO blivet/MainThread: executing action: [97] create format lvmpv on disk sda (id 2) 2025-06-28 18:19:22,969 DEBUG blivet/MainThread: DiskDevice.setup: sda ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:22,973 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sda ; type: lvmpv ; status: False ; 2025-06-28 18:19:22,976 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sda ; type: lvmpv ; status: False ; 2025-06-28 18:19:22,976 DEBUG blivet/MainThread: lvm filter: device /dev/sda added to the list of allowed devices 2025-06-28 18:19:22,976 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list 2025-06-28 18:19:22,976 INFO program/MainThread: Running [59] lvm pvcreate /dev/sda --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ... 2025-06-28 18:19:23,009 INFO program/MainThread: stdout[59]: Physical volume "/dev/sda" successfully created. 2025-06-28 18:19:23,010 INFO program/MainThread: stderr[59]: 2025-06-28 18:19:23,010 INFO program/MainThread: ...done [59] (exit code: 0) 2025-06-28 18:19:23,010 INFO program/MainThread: Running [60] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:23,021 INFO program/MainThread: stdout[60]: use_devicesfile=1 2025-06-28 18:19:23,021 INFO program/MainThread: stderr[60]: 2025-06-28 18:19:23,021 INFO program/MainThread: ...done [60] (exit code: 0) 2025-06-28 18:19:23,021 INFO program/MainThread: Running [61] lvmdevices --adddev /dev/sda ... 
2025-06-28 18:19:23,051 INFO program/MainThread: stdout[61]: 2025-06-28 18:19:23,051 INFO program/MainThread: stderr[61]: 2025-06-28 18:19:23,051 INFO program/MainThread: ...done [61] (exit code: 0) 2025-06-28 18:19:23,051 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sda 2025-06-28 18:19:23,051 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:23,062 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,069 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sda ; status: True ; 2025-06-28 18:19:23,069 DEBUG blivet/MainThread: sda sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda 2025-06-28 18:19:23,071 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:23,071 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sda 2025-06-28 18:19:23,079 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,079 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:23,092 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,092 INFO blivet/MainThread: executing action: [111] create device lvmvg test_vg1 (id 107) 2025-06-28 18:19:23,096 DEBUG blivet/MainThread: LVMVolumeGroupDevice.create: test_vg1 ; status: False ; 2025-06-28 18:19:23,099 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg1 ; orig: False ; 2025-06-28 18:19:23,102 DEBUG blivet/MainThread: DiskDevice.setup: sda ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:23,105 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sda ; type: lvmpv ; status: False ; 2025-06-28 18:19:23,108 DEBUG blivet/MainThread: DiskDevice.setup: sdb ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:23,111 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdb ; type: lvmpv ; status: False ; 2025-06-28 18:19:23,114 DEBUG blivet/MainThread: DiskDevice.setup: sdc ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:23,117 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdc ; type: lvmpv ; status: False ; 2025-06-28 18:19:23,121 DEBUG blivet/MainThread: LVMVolumeGroupDevice._create: test_vg1 ; status: False ; 2025-06-28 18:19:23,121 INFO program/MainThread: Running [62] lvm vgcreate -s 4096K test_vg1 /dev/sda /dev/sdb /dev/sdc --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 
2025-06-28 18:19:23,192 INFO program/MainThread: stdout[62]: Volume group "test_vg1" successfully created 2025-06-28 18:19:23,192 INFO program/MainThread: stderr[62]: 2025-06-28 18:19:23,192 INFO program/MainThread: ...done [62] (exit code: 0) 2025-06-28 18:19:23,206 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg1 ; orig: False ; status: False ; controllable: True ; 2025-06-28 18:19:23,215 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg1 ; orig: False ; 2025-06-28 18:19:23,231 DEBUG blivet/MainThread: DiskDevice.setup: sda ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:23,238 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sda ; type: lvmpv ; status: False ; 2025-06-28 18:19:23,251 DEBUG blivet/MainThread: DiskDevice.setup: sdb ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:23,257 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdb ; type: lvmpv ; status: False ; 2025-06-28 18:19:23,262 DEBUG blivet/MainThread: DiskDevice.setup: sdc ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:23,265 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdc ; type: lvmpv ; status: False ; 2025-06-28 18:19:23,265 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:23,277 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,281 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg1 ; status: False ; 2025-06-28 18:19:23,284 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg1 ; status: False ; 2025-06-28 18:19:23,284 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:23,296 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,296 INFO program/MainThread: Running... 
udevadm trigger --action=change --subsystem-match=block --sysname-match=test_vg1 2025-06-28 18:19:23,303 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,304 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:23,313 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,314 INFO blivet/MainThread: executing action: [127] create device lvmlv test_vg1-lv2 (id 122) 2025-06-28 18:19:23,318 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg1-lv2 ; status: False ; 2025-06-28 18:19:23,321 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg1-lv2 ; orig: False ; 2025-06-28 18:19:23,325 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg1 ; orig: False ; status: False ; controllable: True ; 2025-06-28 18:19:23,328 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg1 ; orig: False ; 2025-06-28 18:19:23,332 DEBUG blivet/MainThread: DiskDevice.setup: sda ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:23,335 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sda ; type: lvmpv ; status: False ; 2025-06-28 18:19:23,338 DEBUG blivet/MainThread: DiskDevice.setup: sdb ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:23,342 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdb ; type: lvmpv ; status: False ; 2025-06-28 18:19:23,346 DEBUG blivet/MainThread: DiskDevice.setup: sdc ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:23,349 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdc ; type: lvmpv ; status: False ; 2025-06-28 18:19:23,349 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:23,361 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,365 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg1 ; status: False ; 2025-06-28 18:19:23,366 INFO program/MainThread: Running [63] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg1 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:23,395 INFO program/MainThread: stdout[63]: LVM2_VG_NAME=test_vg1 LVM2_VG_UUID=JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg LVM2_VG_SIZE=9638510592 LVM2_VG_FREE=9638510592 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=2298 LVM2_VG_FREE_COUNT=2298 LVM2_PV_COUNT=3 LVM2_VG_EXPORTED= LVM2_VG_TAGS= 2025-06-28 18:19:23,395 INFO program/MainThread: stderr[63]: 2025-06-28 18:19:23,395 INFO program/MainThread: ...done [63] (exit code: 0) 2025-06-28 18:19:23,400 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg1-lv2 ; status: False ; 2025-06-28 18:19:23,400 INFO program/MainThread: Running [64] lvm lvcreate -n lv2 -L 4714496K -y --type linear test_vg1 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:23,461 INFO program/MainThread: stdout[64]: Logical volume "lv2" created. 2025-06-28 18:19:23,461 INFO program/MainThread: stderr[64]: 2025-06-28 18:19:23,461 INFO program/MainThread: ...done [64] (exit code: 0) 2025-06-28 18:19:23,472 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg1-lv2 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:23,479 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg1-lv2 ; status: True ; 2025-06-28 18:19:23,479 DEBUG blivet/MainThread: test_vg1-lv2 sysfs_path set to /sys/devices/virtual/block/dm-6 2025-06-28 18:19:23,479 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:23,493 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,498 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg1-lv2 ; sysfs_path: /sys/devices/virtual/block/dm-6 ; 2025-06-28 18:19:23,498 DEBUG blivet/MainThread: updated test_vg1-lv2 size to 4.5 GiB (4.5 GiB) 2025-06-28 18:19:23,498 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-6 2025-06-28 18:19:23,506 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,506 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:23,522 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,523 INFO blivet/MainThread: executing action: [128] create format xfs filesystem on lvmlv test_vg1-lv2 (id 122) 2025-06-28 18:19:23,527 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg1-lv2 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:23,530 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg1-lv2 ; type: xfs ; status: False ; 2025-06-28 18:19:23,534 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg1-lv2 ; mountpoint: ; 2025-06-28 18:19:23,534 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. All arguments passed will be ignored. bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None, 2025-06-28 18:19:23,534 INFO program/MainThread: Running [65] mkfs.xfs /dev/mapper/test_vg1-lv2 -f ... 
2025-06-28 18:19:23,772 INFO program/MainThread: stdout[65]: meta-data=/dev/mapper/test_vg1-lv2 isize=512 agcount=4, agsize=294656 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=1 = reflink=1 bigtime=1 inobtcount=1 nrext64=1 = exchange=0 data = bsize=4096 blocks=1178624, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0 log =internal log bsize=4096 blocks=16384, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 2025-06-28 18:19:23,772 INFO program/MainThread: stderr[65]: 2025-06-28 18:19:23,772 INFO program/MainThread: ...done [65] (exit code: 0) 2025-06-28 18:19:23,772 INFO program/MainThread: Running [66] xfs_admin -L -- /dev/mapper/test_vg1-lv2 ... 2025-06-28 18:19:23,790 INFO program/MainThread: stdout[66]: writing all SBs new label = "" 2025-06-28 18:19:23,790 INFO program/MainThread: stderr[66]: 2025-06-28 18:19:23,790 INFO program/MainThread: ...done [66] (exit code: 0) 2025-06-28 18:19:23,790 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:23,809 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,814 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg1-lv2 ; status: True ; 2025-06-28 18:19:23,814 DEBUG blivet/MainThread: test_vg1-lv2 sysfs_path set to /sys/devices/virtual/block/dm-6 2025-06-28 18:19:23,815 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:23,815 INFO program/MainThread: Running... 
udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-6 2025-06-28 18:19:23,823 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,823 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:23,839 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,840 INFO blivet/MainThread: executing action: [118] create device lvmlv test_vg1-lv1 (id 113) 2025-06-28 18:19:23,844 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg1-lv1 ; status: False ; 2025-06-28 18:19:23,847 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg1-lv1 ; orig: False ; 2025-06-28 18:19:23,851 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg1 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:23,851 INFO program/MainThread: Running [67] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg1 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:23,890 INFO program/MainThread: stdout[67]: LVM2_VG_NAME=test_vg1 LVM2_VG_UUID=JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg LVM2_VG_SIZE=9638510592 LVM2_VG_FREE=4810866688 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=2298 LVM2_VG_FREE_COUNT=1147 LVM2_PV_COUNT=3 LVM2_VG_EXPORTED= LVM2_VG_TAGS= 2025-06-28 18:19:23,890 INFO program/MainThread: stderr[67]: 2025-06-28 18:19:23,890 INFO program/MainThread: ...done [67] (exit code: 0) 2025-06-28 18:19:23,894 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg1-lv1 ; status: False ; 2025-06-28 18:19:23,894 INFO program/MainThread: Running [68] lvm lvcreate -n lv1 -L 1417216K -y --type linear test_vg1 --config=log {level=7 file=/tmp/lvm.log syslog=0} ... 2025-06-28 18:19:23,943 INFO program/MainThread: stdout[68]: Logical volume "lv1" created. 
2025-06-28 18:19:23,943 INFO program/MainThread: stderr[68]: 2025-06-28 18:19:23,943 INFO program/MainThread: ...done [68] (exit code: 0) 2025-06-28 18:19:23,961 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg1-lv1 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:23,969 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg1-lv1 ; status: True ; 2025-06-28 18:19:23,969 DEBUG blivet/MainThread: test_vg1-lv1 sysfs_path set to /sys/devices/virtual/block/dm-7 2025-06-28 18:19:23,969 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:23,982 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,988 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg1-lv1 ; sysfs_path: /sys/devices/virtual/block/dm-7 ; 2025-06-28 18:19:23,989 DEBUG blivet/MainThread: updated test_vg1-lv1 size to 1.35 GiB (1.35 GiB) 2025-06-28 18:19:23,989 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-7 2025-06-28 18:19:23,997 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:23,997 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:24,011 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:24,012 INFO blivet/MainThread: executing action: [119] create format xfs filesystem on lvmlv test_vg1-lv1 (id 113) 2025-06-28 18:19:24,016 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg1-lv1 ; orig: False ; status: True ; controllable: True ; 2025-06-28 18:19:24,019 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg1-lv1 ; type: xfs ; status: False ; 2025-06-28 18:19:24,023 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg1-lv1 ; mountpoint: ; 2025-06-28 18:19:24,023 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. All arguments passed will be ignored. bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None, 2025-06-28 18:19:24,023 INFO program/MainThread: Running [69] mkfs.xfs /dev/mapper/test_vg1-lv1 -f ... 2025-06-28 18:19:24,855 INFO program/MainThread: stdout[69]: meta-data=/dev/mapper/test_vg1-lv1 isize=512 agcount=4, agsize=88576 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=1, sparse=1, rmapbt=1 = reflink=1 bigtime=1 inobtcount=1 nrext64=1 = exchange=0 data = bsize=4096 blocks=354304, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0 log =internal log bsize=4096 blocks=16384, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 2025-06-28 18:19:24,855 INFO program/MainThread: stderr[69]: 2025-06-28 18:19:24,855 INFO program/MainThread: ...done [69] (exit code: 0) 2025-06-28 18:19:24,855 INFO program/MainThread: Running [70] xfs_admin -L -- /dev/mapper/test_vg1-lv1 ... 
2025-06-28 18:19:24,871 INFO program/MainThread: stdout[70]: writing all SBs new label = "" 2025-06-28 18:19:24,871 INFO program/MainThread: stderr[70]: 2025-06-28 18:19:24,871 INFO program/MainThread: ...done [70] (exit code: 0) 2025-06-28 18:19:24,871 INFO program/MainThread: Running... udevadm settle --timeout=300 2025-06-28 18:19:24,891 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:24,895 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg1-lv1 ; status: True ; 2025-06-28 18:19:24,896 DEBUG blivet/MainThread: test_vg1-lv1 sysfs_path set to /sys/devices/virtual/block/dm-7 2025-06-28 18:19:24,896 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:24,897 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-7 2025-06-28 18:19:24,904 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:24,905 INFO program/MainThread: Running... 
udevadm settle --timeout=300 2025-06-28 18:19:24,922 DEBUG program/MainThread: Return code: 0 2025-06-28 18:19:24,923 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return 2025-06-28 18:19:24,928 DEBUG blivet/MainThread: PartitionDevice._set_parted_partition: xvda1 ; 2025-06-28 18:19:24,928 DEBUG blivet/MainThread: device xvda1 new parted_partition parted.Partition instance -- disk: fileSystem: None number: 1 path: /dev/xvda1 type: 0 name: active: True busy: False geometry: PedPartition: <_ped.Partition object at 0x7fbad22453a0> 2025-06-28 18:19:24,931 DEBUG blivet/MainThread: PartitionDevice._set_parted_partition: xvda2 ; 2025-06-28 18:19:24,931 DEBUG blivet/MainThread: device xvda2 new parted_partition parted.Partition instance -- disk: fileSystem: number: 2 path: /dev/xvda2 type: 0 name: active: True busy: True geometry: PedPartition: <_ped.Partition object at 0x7fbad2273b00> 2025-06-28 18:19:24,935 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg1-lv1 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:24,938 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 1.35 GiB lvmlv test_vg1-lv1 (113) with existing xfs filesystem 2025-06-28 18:19:24,938 DEBUG blivet/MainThread: resolved '/dev/mapper/test_vg1-lv1' to 'test_vg1-lv1' (lvmlv) 2025-06-28 18:19:24,942 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg1-lv2 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:24,946 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 4.5 GiB lvmlv test_vg1-lv2 (122) with existing xfs filesystem 2025-06-28 18:19:24,946 DEBUG blivet/MainThread: resolved '/dev/mapper/test_vg1-lv2' to 
'test_vg1-lv2' (lvmlv) 2025-06-28 18:19:24,949 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg2-lv3 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:24,952 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 924 MiB lvmlv test_vg2-lv3 (148) with existing xfs filesystem 2025-06-28 18:19:24,952 DEBUG blivet/MainThread: resolved '/dev/mapper/test_vg2-lv3' to 'test_vg2-lv3' (lvmlv) 2025-06-28 18:19:24,955 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg2-lv4 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:24,959 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 1.8 GiB lvmlv test_vg2-lv4 (157) with existing xfs filesystem 2025-06-28 18:19:24,959 DEBUG blivet/MainThread: resolved '/dev/mapper/test_vg2-lv4' to 'test_vg2-lv4' (lvmlv) 2025-06-28 18:19:24,962 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg3-lv5 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:24,965 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 3.6 GiB lvmlv test_vg3-lv5 (187) with existing xfs filesystem 2025-06-28 18:19:24,965 DEBUG blivet/MainThread: resolved '/dev/mapper/test_vg3-lv5' to 'test_vg3-lv5' (lvmlv) 2025-06-28 18:19:24,969 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg3-lv6 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:24,972 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 3 GiB lvmlv test_vg3-lv6 (196) with existing xfs filesystem 2025-06-28 18:19:24,972 DEBUG blivet/MainThread: resolved '/dev/mapper/test_vg3-lv6' to 'test_vg3-lv6' (lvmlv) 2025-06-28 18:19:24,976 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg3-lv7 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:24,979 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 1.2 GiB lvmlv test_vg3-lv7 (205) with existing xfs 
filesystem 2025-06-28 18:19:24,979 DEBUG blivet/MainThread: resolved '/dev/mapper/test_vg3-lv7' to 'test_vg3-lv7' (lvmlv) 2025-06-28 18:19:24,982 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg3-lv8 ; incomplete: False ; hidden: False ; 2025-06-28 18:19:24,985 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 1.2 GiB lvmlv test_vg3-lv8 (214) with existing xfs filesystem 2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved '/dev/mapper/test_vg3-lv8' to 'test_vg3-lv8' (lvmlv) 2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved 'UUID=d16d04d4-8c39-458d-91ab-62a71063488e' to 'test_vg1-lv1' (lvmlv) 2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved 'UUID=991d22ac-80b0-450e-8c69-466187ba5696' to 'test_vg1-lv2' (lvmlv) 2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved 'UUID=13c0d833-a00e-49fa-a58c-aa045851ccb6' to 'test_vg2-lv3' (lvmlv) 2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved 'UUID=8ff069e1-541f-476d-a94a-129e7396e539' to 'test_vg2-lv4' (lvmlv) 2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved 'UUID=eb47ddaa-1d93-495c-b8f9-7231835c82c5' to 'test_vg3-lv5' (lvmlv) 2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved 'UUID=84001af2-01d9-4734-b51a-79efe7f59395' to 'test_vg3-lv6' (lvmlv) 2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved 'UUID=596601cb-c0d2-421d-af31-da3ca0a3650c' to 'test_vg3-lv7' (lvmlv) 2025-06-28 18:19:24,987 DEBUG blivet/MainThread: resolved 'UUID=77628583-14fb-431d-8722-115eadb2c621' to 'test_vg3-lv8' (lvmlv) 2025-06-28 18:20:04,961 INFO blivet/MainThread: sys.argv = ['/tmp/ansible_fedora.linux_system_roles.blivet_payload_h6q1vr90/ansible_fedora.linux_system_roles.blivet_payload.zip/ansible_collections/fedora/linux_system_roles/plugins/modules/blivet.py'] 2025-06-28 18:20:10,997 INFO blivet/MainThread: sys.argv = 
['/tmp/ansible_fedora.linux_system_roles.blivet_payload_t3_0a_er/ansible_fedora.linux_system_roles.blivet_payload.zip/ansible_collections/fedora/linux_system_roles/plugins/modules/blivet.py'] PLAY RECAP ********************************************************************* managed-node1 : ok=139 changed=6 unreachable=0 failed=1 skipped=100 rescued=0 ignored=0 SYSTEM ROLES ERRORS BEGIN v1 [ { "ansible_version": "2.19.0b7", "delta": "0:00:00.275215", "end_time": "2025-06-28 18:20:16.871243", "host": "managed-node1", "message": "", "rc": 0, "start_time": "2025-06-28 18:20:16.596028", "stderr": "+ exec\n+ mount -f -l\n/dev/xvda2 on / type ext4 (rw,relatime,seclabel)\ndevtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=1882264k,nr_inodes=470566,mode=755,inode64)\ntmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel,inode64)\ndevpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)\nsysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)\nsecurityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)\ncgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot)\nnone on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime,seclabel)\nbpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)\nconfigfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)\nproc on /proc type proc (rw,nosuid,nodev,noexec,relatime)\ntmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,size=760632k,nr_inodes=819200,mode=755,inode64)\nselinuxfs on /sys/fs/selinux type selinuxfs (rw,nosuid,noexec,relatime)\nsystemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=37,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=4693)\ntracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime,seclabel)\ndebugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime,seclabel)\nhugetlbfs on /dev/hugepages 
type hugetlbfs (rw,nosuid,nodev,relatime,seclabel,pagesize=2M)\nmqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime,seclabel)\ntmpfs on /run/credentials/systemd-journald.service type tmpfs (ro,nosuid,nodev,noexec,relatime,nosymfollow,seclabel,size=1024k,nr_inodes=1024,mode=700,inode64,noswap)\nfusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)\ntmpfs on /tmp type tmpfs (rw,nosuid,nodev,seclabel,nr_inodes=1048576,inode64)\ntmpfs on /run/credentials/systemd-resolved.service type tmpfs (ro,nosuid,nodev,noexec,relatime,nosymfollow,seclabel,size=1024k,nr_inodes=1024,mode=700,inode64,noswap)\nsunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)\ntmpfs on /run/credentials/getty@tty1.service type tmpfs (ro,nosuid,nodev,noexec,relatime,nosymfollow,seclabel,size=1024k,nr_inodes=1024,mode=700,inode64,noswap)\ntmpfs on /run/credentials/serial-getty@ttyS0.service type tmpfs (ro,nosuid,nodev,noexec,relatime,nosymfollow,seclabel,size=1024k,nr_inodes=1024,mode=700,inode64,noswap)\ntmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=380312k,nr_inodes=95078,mode=700,inode64)\n+ df -H\nFilesystem Size Used Avail Use% Mounted on\n/dev/xvda2 265G 2.8G 251G 2% /\ndevtmpfs 2.0G 0 2.0G 0% /dev\ntmpfs 2.0G 0 2.0G 0% /dev/shm\ntmpfs 779M 783k 779M 1% /run\ntmpfs 1.1M 0 1.1M 0% /run/credentials/systemd-journald.service\ntmpfs 2.0G 2.7M 2.0G 1% /tmp\ntmpfs 1.1M 0 1.1M 0% /run/credentials/systemd-resolved.service\ntmpfs 1.1M 0 1.1M 0% /run/credentials/getty@tty1.service\ntmpfs 1.1M 0 1.1M 0% /run/credentials/serial-getty@ttyS0.service\ntmpfs 390M 4.1k 390M 1% /run/user/0\n+ lvs --all\n LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert\n lv1 test_vg1 -wi-a----- 1.35g \n lv2 test_vg1 -wi-a----- <4.50g \n lv3 test_vg2 -wi-a----- 924.00m \n lv4 test_vg2 -wi-a----- 1.80g \n lv5 test_vg3 -wi-a----- <3.60g \n lv6 test_vg3 -wi-a----- <3.00g \n lv7 test_vg3 -wi-a----- <1.20g \n lv8 test_vg3 -wi-a----- <1.20g 
\n+ pvs --all\n PV VG Fmt Attr PSize PFree \n /dev/sda test_vg1 lvm2 a-- 2.99g 0 \n /dev/sdb test_vg1 lvm2 a-- 2.99g 140.00m\n /dev/sdc test_vg1 lvm2 a-- 2.99g 2.99g\n /dev/sdd test_vg2 lvm2 a-- 2.99g 296.00m\n /dev/sde test_vg2 lvm2 a-- 2.99g 2.99g\n /dev/sdf test_vg2 lvm2 a-- 2.99g 2.99g\n /dev/sdg test_vg3 lvm2 a-- 2.99g 604.00m\n /dev/sdh test_vg3 lvm2 a-- 2.99g 0 \n /dev/sdi test_vg3 lvm2 a-- 2.99g 0 \n /dev/sdj test_vg3 lvm2 a-- 2.99g <2.39g\n+ vgs --all\n VG #PV #LV #SN Attr VSize VFree \n test_vg1 3 2 0 wz--n- <8.98g <3.13g\n test_vg2 3 2 0 wz--n- <8.98g 6.27g\n test_vg3 4 4 0 wz--n- <11.97g <2.98g\n+ cat /tmp/snapshot_role.log\n2025-06-28 18:19:32,265 INFO snapshot-role/MainThread: run_module()\n2025-06-28 18:19:32,268 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': False, 'snapshot_lvm_action': 'snapshot', 'snapshot_lvm_all_vgs': True, 'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '15', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 'snapshot_lvm_verify_only': False, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'}\n2025-06-28 18:19:32,268 INFO snapshot-role/MainThread: get_json_from_args: BEGIN\n2025-06-28 18:19:32,298 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid': 'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '6736052224', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 
'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}]\n2025-06-28 18:19:32,350 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': '15'}\n2025-06-28 18:19:32,403 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': '15'}\n2025-06-28 18:19:32,403 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg3', 'vg_uuid': 'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': '12851347456', 'vg_free': '3196059648', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': '8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': 
'/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}]\n2025-06-28 18:19:32,456 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': '15'}\n2025-06-28 18:19:32,509 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': '15'}\n2025-06-28 18:19:32,564 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': '15'}\n2025-06-28 18:19:32,619 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': '15'}\n2025-06-28 18:19:32,619 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '3359637504', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 'lv_size': '4827643904', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}]\n2025-06-28 18:19:32,674 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 
'lv': 'lv1', 'percent_space_required': '15'}\n2025-06-28 18:19:32,729 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': '15'}\n2025-06-28 18:19:32,729 INFO snapshot-role/MainThread: validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': '15'}]}\n2025-06-28 18:19:32,729 INFO snapshot-role/MainThread: mgr_snapshot_cmd: snapset1\n2025-06-28 18:19:32,730 INFO snapshot-role/MainThread: verify snapsset : snapset1\n2025-06-28 18:19:33,176 INFO snapshot-role/MainThread: snapsset ok: snapset1\n2025-06-28 18:19:35,872 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 'errors': '', 'changed': True}\n2025-06-28 18:19:35,873 INFO snapshot-role/MainThread: result: {'changed': True, 'return_code': 0, 'message': '', 'errors': '', 'msg': ''}\n2025-06-28 18:19:38,430 INFO snapshot-role/MainThread: run_module()\n2025-06-28 18:19:38,433 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': False, 'snapshot_lvm_action': 'check', 'snapshot_lvm_all_vgs': True, 'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 
'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 'snapshot_lvm_verify_only': True, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'}\n2025-06-28 18:19:38,433 INFO snapshot-role/MainThread: get_json_from_args: BEGIN\n2025-06-28 18:19:38,470 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid': 'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '5662310400', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '968884224', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': '1W7efB-S01h-s24U-T43G-WUtx-dA3v-QRf3nY', 'lv_name': 'lv3-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv3', 'origin_size': '968884224', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'jxi2kf-Vt6s-vojQ-wS3k-65qH-Uin5-JEfIIY', 'lv_name': 'lv4-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_path': 
'/dev/test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv4', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}]\n2025-06-28 18:19:38,517 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}\n2025-06-28 18:19:38,549 INFO snapshot-role/MainThread: get_json_from_args: lv lv3-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:38,598 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}\n2025-06-28 18:19:38,625 INFO snapshot-role/MainThread: get_json_from_args: lv lv4-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:38,625 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg3', 'vg_uuid': 'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': '12851347456', 'vg_free': '1002438656', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'wdqgo2-uU5H-wlvg-aoYF-y2Qt-4fDN-pOAyBj', 'lv_name': 'lv5-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_size': '583008256', 'origin': 'lv5', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': '8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 
'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'TvUg1o-wzGo-dQFn-JyNr-Ak2n-f08U-XoAZ7A', 'lv_name': 'lv6-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv6', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'clujZs-C4oH-5Lg0-88kw-Ofm5-Balm-rBrqPw', 'lv_name': 'lv7-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv7', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': '/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'Cek7N7-hMOC-epqr-JdpY-aDBW-qRMX-cDnbP9', 'lv_name': 'lv8-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv8', 'origin_size': 
'1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}]\n2025-06-28 18:19:38,684 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}\n2025-06-28 18:19:38,711 INFO snapshot-role/MainThread: get_json_from_args: lv lv5-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:38,768 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}\n2025-06-28 18:19:38,794 INFO snapshot-role/MainThread: get_json_from_args: lv lv6-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:38,847 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}\n2025-06-28 18:19:38,877 INFO snapshot-role/MainThread: get_json_from_args: lv lv7-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:38,924 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}\n2025-06-28 18:19:38,950 INFO snapshot-role/MainThread: get_json_from_args: lv lv8-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:38,950 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '2097152000', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '1451229184', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '', 
'metadata_percent': ''}, {'lv_uuid': 'CQ7FFq-VkRL-SUQj-hYSb-Woin-YHi2-cFvW5x', 'lv_name': 'lv1-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv1', 'origin_size': '1451229184', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 'lv_size': '4827643904', 'origin': '', 'origin_size': '4827643904', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'aXPoo7-LiNs-8W62-S2aO-QMwN-ln1F-WzAPCu', 'lv_name': 'lv2-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_size': '725614592', 'origin': 'lv2', 'origin_size': '4827643904', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}]\n2025-06-28 18:19:39,009 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}\n2025-06-28 18:19:39,031 INFO snapshot-role/MainThread: get_json_from_args: lv lv1-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:39,084 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}\n2025-06-28 18:19:39,108 INFO snapshot-role/MainThread: get_json_from_args: lv lv2-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:39,109 INFO snapshot-role/MainThread: validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : 
test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}, {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}]}\n2025-06-28 18:19:39,109 INFO snapshot-role/MainThread: check_cmd: snapset1\n2025-06-28 18:19:39,235 INFO snapshot-role/MainThread: mgr_check_verify_lvs_set: snapset1\n2025-06-28 18:19:39,235 INFO snapshot-role/MainThread: verify snapsset : snapset1\n2025-06-28 18:19:39,674 INFO snapshot-role/MainThread: snapsset ok: snapset1\n2025-06-28 18:19:39,674 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 'errors': '', 'changed': False}\n2025-06-28 18:19:39,674 INFO snapshot-role/MainThread: result: {'changed': False, 'return_code': 0, 'message': '', 'errors': '', 'msg': ''}\n2025-06-28 18:19:42,784 INFO snapshot-role/MainThread: run_module()\n2025-06-28 18:19:42,787 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': False, 'snapshot_lvm_action': 'snapshot', 'snapshot_lvm_all_vgs': True, 'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '15', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 
'snapshot_lvm_verify_only': False, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'}\n2025-06-28 18:19:42,787 INFO snapshot-role/MainThread: get_json_from_args: BEGIN\n2025-06-28 18:19:42,824 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid': 'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '5662310400', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '968884224', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': '1W7efB-S01h-s24U-T43G-WUtx-dA3v-QRf3nY', 'lv_name': 'lv3-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv3', 'origin_size': '968884224', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'jxi2kf-Vt6s-vojQ-wS3k-65qH-Uin5-JEfIIY', 'lv_name': 'lv4-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv4', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}]\n2025-06-28 18:19:42,878 INFO snapshot-role/MainThread: get_json_from_args: adding 
volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': '15'}\n2025-06-28 18:19:42,904 INFO snapshot-role/MainThread: get_json_from_args: lv lv3-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:42,957 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': '15'}\n2025-06-28 18:19:42,982 INFO snapshot-role/MainThread: get_json_from_args: lv lv4-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:42,982 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg3', 'vg_uuid': 'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': '12851347456', 'vg_free': '1002438656', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'wdqgo2-uU5H-wlvg-aoYF-y2Qt-4fDN-pOAyBj', 'lv_name': 'lv5-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_size': '583008256', 'origin': 'lv5', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': '8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'TvUg1o-wzGo-dQFn-JyNr-Ak2n-f08U-XoAZ7A', 'lv_name': 'lv6-snapset_snapset1_1751149174_', 
'lv_full_name': 'test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv6', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'clujZs-C4oH-5Lg0-88kw-Ofm5-Balm-rBrqPw', 'lv_name': 'lv7-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv7', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': '/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'Cek7N7-hMOC-epqr-JdpY-aDBW-qRMX-cDnbP9', 'lv_name': 'lv8-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv8', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}]\n2025-06-28 18:19:43,034 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': 
'15'}\n2025-06-28 18:19:43,064 INFO snapshot-role/MainThread: get_json_from_args: lv lv5-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:43,120 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': '15'}\n2025-06-28 18:19:43,151 INFO snapshot-role/MainThread: get_json_from_args: lv lv6-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:43,206 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': '15'}\n2025-06-28 18:19:43,237 INFO snapshot-role/MainThread: get_json_from_args: lv lv7-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:43,293 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': '15'}\n2025-06-28 18:19:43,323 INFO snapshot-role/MainThread: get_json_from_args: lv lv8-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:43,323 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '2097152000', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '1451229184', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'CQ7FFq-VkRL-SUQj-hYSb-Woin-YHi2-cFvW5x', 'lv_name': 'lv1-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv1', 'origin_size': '1451229184', 
'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 'lv_size': '4827643904', 'origin': '', 'origin_size': '4827643904', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'aXPoo7-LiNs-8W62-S2aO-QMwN-ln1F-WzAPCu', 'lv_name': 'lv2-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_size': '725614592', 'origin': 'lv2', 'origin_size': '4827643904', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}]\n2025-06-28 18:19:43,379 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': '15'}\n2025-06-28 18:19:43,404 INFO snapshot-role/MainThread: get_json_from_args: lv lv1-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:43,457 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': '15'}\n2025-06-28 18:19:43,484 INFO snapshot-role/MainThread: get_json_from_args: lv lv2-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:43,484 INFO snapshot-role/MainThread: validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': '15'}, {'name': ('snapshot : 
test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': '15'}, {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': '15'}]}\n2025-06-28 18:19:43,484 INFO snapshot-role/MainThread: mgr_snapshot_cmd: snapset1\n2025-06-28 18:19:43,485 INFO snapshot-role/MainThread: verify snapsset : snapset1\n2025-06-28 18:19:43,911 INFO snapshot-role/MainThread: snapsset ok: snapset1\n2025-06-28 18:19:44,035 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 'errors': '', 'changed': False}\n2025-06-28 18:19:44,035 INFO snapshot-role/MainThread: result: {'changed': False, 'return_code': 0, 'message': '', 'errors': '', 'msg': ''}\n2025-06-28 18:19:46,681 INFO snapshot-role/MainThread: run_module()\n2025-06-28 18:19:46,683 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': False, 'snapshot_lvm_action': 'check', 'snapshot_lvm_all_vgs': True, 'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 'snapshot_lvm_verify_only': True, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'}\n2025-06-28 18:19:46,683 INFO snapshot-role/MainThread: get_json_from_args: BEGIN\n2025-06-28 18:19:46,721 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid': 'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '5662310400', 'vg_extent_size': 
'4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '968884224', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': '1W7efB-S01h-s24U-T43G-WUtx-dA3v-QRf3nY', 'lv_name': 'lv3-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv3', 'origin_size': '968884224', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'jxi2kf-Vt6s-vojQ-wS3k-65qH-Uin5-JEfIIY', 'lv_name': 'lv4-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv4', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}]\n2025-06-28 18:19:46,771 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}\n2025-06-28 18:19:46,802 INFO snapshot-role/MainThread: get_json_from_args: lv lv3-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:46,857 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 
'percent_space_required': ''}\n2025-06-28 18:19:46,883 INFO snapshot-role/MainThread: get_json_from_args: lv lv4-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:46,883 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg3', 'vg_uuid': 'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': '12851347456', 'vg_free': '1002438656', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'wdqgo2-uU5H-wlvg-aoYF-y2Qt-4fDN-pOAyBj', 'lv_name': 'lv5-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_size': '583008256', 'origin': 'lv5', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': '8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'TvUg1o-wzGo-dQFn-JyNr-Ak2n-f08U-XoAZ7A', 'lv_name': 'lv6-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv6', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 
'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'clujZs-C4oH-5Lg0-88kw-Ofm5-Balm-rBrqPw', 'lv_name': 'lv7-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv7', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': '/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'Cek7N7-hMOC-epqr-JdpY-aDBW-qRMX-cDnbP9', 'lv_name': 'lv8-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv8', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}]\n2025-06-28 18:19:46,939 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}\n2025-06-28 18:19:46,966 INFO snapshot-role/MainThread: get_json_from_args: lv lv5-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:47,024 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}\n2025-06-28 18:19:47,050 INFO snapshot-role/MainThread: get_json_from_args: 
lv lv6-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:47,103 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}\n2025-06-28 18:19:47,135 INFO snapshot-role/MainThread: get_json_from_args: lv lv7-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:47,189 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}\n2025-06-28 18:19:47,221 INFO snapshot-role/MainThread: get_json_from_args: lv lv8-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:47,222 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '2097152000', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '1451229184', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'CQ7FFq-VkRL-SUQj-hYSb-Woin-YHi2-cFvW5x', 'lv_name': 'lv1-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv1', 'origin_size': '1451229184', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 'lv_size': '4827643904', 'origin': '', 'origin_size': '4827643904', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 
'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'aXPoo7-LiNs-8W62-S2aO-QMwN-ln1F-WzAPCu', 'lv_name': 'lv2-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_size': '725614592', 'origin': 'lv2', 'origin_size': '4827643904', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}]\n2025-06-28 18:19:47,283 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}\n2025-06-28 18:19:47,314 INFO snapshot-role/MainThread: get_json_from_args: lv lv1-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:47,371 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}\n2025-06-28 18:19:47,397 INFO snapshot-role/MainThread: get_json_from_args: lv lv2-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:47,397 INFO snapshot-role/MainThread: validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}, {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv2',), 
'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}]}\n2025-06-28 18:19:47,397 INFO snapshot-role/MainThread: check_cmd: snapset1\n2025-06-28 18:19:47,520 INFO snapshot-role/MainThread: mgr_check_verify_lvs_set: snapset1\n2025-06-28 18:19:47,520 INFO snapshot-role/MainThread: verify snapsset : snapset1\n2025-06-28 18:19:47,979 INFO snapshot-role/MainThread: snapsset ok: snapset1\n2025-06-28 18:19:47,979 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 'errors': '', 'changed': False}\n2025-06-28 18:19:47,979 INFO snapshot-role/MainThread: result: {'changed': False, 'return_code': 0, 'message': '', 'errors': '', 'msg': ''}\n2025-06-28 18:19:50,906 INFO snapshot-role/MainThread: run_module()\n2025-06-28 18:19:50,909 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': False, 'snapshot_lvm_action': 'remove', 'snapshot_lvm_all_vgs': False, 'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 'snapshot_lvm_verify_only': False, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'}\n2025-06-28 18:19:50,909 INFO snapshot-role/MainThread: get_json_from_args: BEGIN\n2025-06-28 18:19:50,938 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid': 'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '5662310400', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '968884224', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, 
{'lv_uuid': '1W7efB-S01h-s24U-T43G-WUtx-dA3v-QRf3nY', 'lv_name': 'lv3-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg2/lv3-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv3', 'origin_size': '968884224', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'jxi2kf-Vt6s-vojQ-wS3k-65qH-Uin5-JEfIIY', 'lv_name': 'lv4-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg2/lv4-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv4', 'origin_size': '1933574144', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg2', 'data_percent': '0.00', 'metadata_percent': ''}]\n2025-06-28 18:19:50,991 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}\n2025-06-28 18:19:51,022 INFO snapshot-role/MainThread: get_json_from_args: lv lv3-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:51,078 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}\n2025-06-28 18:19:51,102 INFO snapshot-role/MainThread: get_json_from_args: lv lv4-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:51,102 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg3', 'vg_uuid': 'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': 
'12851347456', 'vg_free': '1002438656', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'wdqgo2-uU5H-wlvg-aoYF-y2Qt-4fDN-pOAyBj', 'lv_name': 'lv5-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv5-snapset_snapset1_1751149174_', 'lv_size': '583008256', 'origin': 'lv5', 'origin_size': '3862953984', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': '8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'TvUg1o-wzGo-dQFn-JyNr-Ak2n-f08U-XoAZ7A', 'lv_name': 'lv6-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv6-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv6', 'origin_size': '3217031168', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'clujZs-C4oH-5Lg0-88kw-Ofm5-Balm-rBrqPw', 'lv_name': 'lv7-snapset_snapset1_1751149174_', 
'lv_full_name': 'test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv7-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv7', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': '/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'Cek7N7-hMOC-epqr-JdpY-aDBW-qRMX-cDnbP9', 'lv_name': 'lv8-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg3/lv8-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv8', 'origin_size': '1287651328', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg3', 'data_percent': '0.00', 'metadata_percent': ''}]\n2025-06-28 18:19:51,154 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}\n2025-06-28 18:19:51,181 INFO snapshot-role/MainThread: get_json_from_args: lv lv5-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:51,231 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}\n2025-06-28 18:19:51,262 INFO snapshot-role/MainThread: get_json_from_args: lv lv6-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:51,318 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}\n2025-06-28 18:19:51,349 INFO snapshot-role/MainThread: get_json_from_args: lv 
lv7-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:51,412 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}\n2025-06-28 18:19:51,437 INFO snapshot-role/MainThread: get_json_from_args: lv lv8-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:51,437 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '2097152000', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '1451229184', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'CQ7FFq-VkRL-SUQj-hYSb-Woin-YHi2-cFvW5x', 'lv_name': 'lv1-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg1/lv1-snapset_snapset1_1751149174_', 'lv_size': '536870912', 'origin': 'lv1', 'origin_size': '1451229184', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 'lv_size': '4827643904', 'origin': '', 'origin_size': '4827643904', 'pool_lv': '', 'lv_tags': '', 'lv_attr': 'owi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'aXPoo7-LiNs-8W62-S2aO-QMwN-ln1F-WzAPCu', 'lv_name': 'lv2-snapset_snapset1_1751149174_', 'lv_full_name': 'test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_path': '/dev/test_vg1/lv2-snapset_snapset1_1751149174_', 'lv_size': '725614592', 'origin': 'lv2', 'origin_size': '4827643904', 
'pool_lv': '', 'lv_tags': '', 'lv_attr': 'swi-a-s---', 'vg_name': 'test_vg1', 'data_percent': '0.00', 'metadata_percent': ''}]\n2025-06-28 18:19:51,487 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}\n2025-06-28 18:19:51,513 INFO snapshot-role/MainThread: get_json_from_args: lv lv1-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:51,562 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}\n2025-06-28 18:19:51,589 INFO snapshot-role/MainThread: get_json_from_args: lv lv2-snapset_snapset1_1751149174_ is a snapshot - skipping\n2025-06-28 18:19:51,589 INFO snapshot-role/MainThread: validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}, {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}]}\n2025-06-28 18:19:51,589 INFO snapshot-role/MainThread: remove_cmd: snapset1\n2025-06-28 18:19:52,584 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 'errors': '', 'changed': True}\n2025-06-28 18:19:52,584 INFO snapshot-role/MainThread: result: {'changed': True, 
'return_code': 0, 'message': '', 'errors': '', 'msg': ''}\n2025-06-28 18:19:54,952 INFO snapshot-role/MainThread: run_module()\n2025-06-28 18:19:54,955 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': False, 'snapshot_lvm_action': 'remove', 'snapshot_lvm_all_vgs': False, 'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 'snapshot_lvm_verify_only': True, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'}\n2025-06-28 18:19:54,955 INFO snapshot-role/MainThread: get_json_from_args: BEGIN\n2025-06-28 18:19:54,987 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid': 'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '6736052224', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}]\n2025-06-28 18:19:55,039 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}\n2025-06-28 18:19:55,092 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': 
('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}\n2025-06-28 18:19:55,092 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg3', 'vg_uuid': 'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': '12851347456', 'vg_free': '3196059648', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': '8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': '/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}]\n2025-06-28 18:19:55,148 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}\n2025-06-28 18:19:55,203 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}\n2025-06-28 
18:19:55,259 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}\n2025-06-28 18:19:55,321 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}\n2025-06-28 18:19:55,322 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '3359637504', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 'lv_size': '4827643904', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}]\n2025-06-28 18:19:55,379 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}\n2025-06-28 18:19:55,439 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}\n2025-06-28 18:19:55,440 INFO snapshot-role/MainThread: validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}, {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv5',), 
'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}]}\n2025-06-28 18:19:55,440 INFO snapshot-role/MainThread: remove_cmd: snapset1\n2025-06-28 18:19:55,567 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 'errors': '', 'changed': False}\n2025-06-28 18:19:55,567 INFO snapshot-role/MainThread: result: {'changed': False, 'return_code': 0, 'message': '', 'errors': '', 'msg': ''}\n2025-06-28 18:19:57,961 INFO snapshot-role/MainThread: run_module()\n2025-06-28 18:19:57,963 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': False, 'snapshot_lvm_action': 'remove', 'snapshot_lvm_all_vgs': False, 'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 'snapshot_lvm_verify_only': False, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'}\n2025-06-28 18:19:57,963 INFO snapshot-role/MainThread: get_json_from_args: BEGIN\n2025-06-28 18:19:57,992 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid': 'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '6736052224', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 
'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}]\n2025-06-28 18:19:58,052 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}\n2025-06-28 18:19:58,110 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}\n2025-06-28 18:19:58,110 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg3', 'vg_uuid': 'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': '12851347456', 'vg_free': '3196059648', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': '8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 
'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': '/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}]\n2025-06-28 18:19:58,163 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}\n2025-06-28 18:19:58,211 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}\n2025-06-28 18:19:58,268 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}\n2025-06-28 18:19:58,320 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}\n2025-06-28 18:19:58,320 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '3359637504', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 'lv_size': '4827643904', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': 
'', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}]\n2025-06-28 18:19:58,375 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}\n2025-06-28 18:19:58,429 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}\n2025-06-28 18:19:58,429 INFO snapshot-role/MainThread: validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}, {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}]}\n2025-06-28 18:19:58,429 INFO snapshot-role/MainThread: remove_cmd: snapset1\n2025-06-28 18:19:58,549 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 'errors': '', 'changed': False}\n2025-06-28 18:19:58,549 INFO snapshot-role/MainThread: result: {'changed': False, 'return_code': 0, 'message': '', 'errors': '', 'msg': ''}\n2025-06-28 18:20:01,323 INFO snapshot-role/MainThread: run_module()\n2025-06-28 18:20:01,326 INFO snapshot-role/MainThread: module params: {'ansible_check_mode': False, 'snapshot_lvm_action': 'remove', 'snapshot_lvm_all_vgs': False, 
'snapshot_lvm_fstype': '', 'snapshot_lvm_lv': '', 'snapshot_lvm_mount_options': '', 'snapshot_lvm_mount_origin': False, 'snapshot_lvm_mountpoint': '', 'snapshot_lvm_mountpoint_create': False, 'snapshot_lvm_percent_space_required': '', 'snapshot_lvm_set': {'volumes': [], 'name': None}, 'snapshot_lvm_snapset_name': 'snapset1', 'snapshot_lvm_unmount_all': False, 'snapshot_lvm_verify_only': True, 'snapshot_lvm_vg': '', 'snapshot_lvm_vg_include': '^test_'}\n2025-06-28 18:20:01,327 INFO snapshot-role/MainThread: get_json_from_args: BEGIN\n2025-06-28 18:20:01,358 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg2', 'vg_uuid': 'lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV', 'vg_size': '9638510592', 'vg_free': '6736052224', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': '7UpgGf-4TgH-1P8F-PtN5-MJ9z-Eir0-LFMe0A', 'lv_name': 'lv3', 'lv_full_name': 'test_vg2/lv3', 'lv_path': '/dev/test_vg2/lv3', 'lv_size': '968884224', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'ey9jyE-TATQ-M9kq-qJku-dDL0-7X2I-RVGu8w', 'lv_name': 'lv4', 'lv_full_name': 'test_vg2/lv4', 'lv_path': '/dev/test_vg2/lv4', 'lv_size': '1933574144', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg2', 'data_percent': '', 'metadata_percent': ''}]\n2025-06-28 18:20:01,415 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}\n2025-06-28 18:20:01,471 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}\n2025-06-28 18:20:01,471 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg3', 'vg_uuid': 'ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM', 'vg_size': '12851347456', 'vg_free': 
'3196059648', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'pSPvNC-PrrD-hXMq-n2c6-kQct-v7Kt-pm2uGR', 'lv_name': 'lv5', 'lv_full_name': 'test_vg3/lv5', 'lv_path': '/dev/test_vg3/lv5', 'lv_size': '3862953984', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': '8PxrGq-uw0D-KrCt-KcKa-UBKe-2n7z-j7nxHx', 'lv_name': 'lv6', 'lv_full_name': 'test_vg3/lv6', 'lv_path': '/dev/test_vg3/lv6', 'lv_size': '3217031168', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'O5OlZy-xxqx-i2Qs-nTlB-kF5J-yEaI-tnVaTK', 'lv_name': 'lv7', 'lv_full_name': 'test_vg3/lv7', 'lv_path': '/dev/test_vg3/lv7', 'lv_size': '1287651328', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'aecfd8-wohG-OYQM-OIn4-0Lzc-cVy3-2geqAx', 'lv_name': 'lv8', 'lv_full_name': 'test_vg3/lv8', 'lv_path': '/dev/test_vg3/lv8', 'lv_size': '1287651328', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg3', 'data_percent': '', 'metadata_percent': ''}]\n2025-06-28 18:20:01,536 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}\n2025-06-28 18:20:01,596 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}\n2025-06-28 18:20:01,649 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}\n2025-06-28 18:20:01,706 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot 
: test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}\n2025-06-28 18:20:01,707 INFO snapshot-role/MainThread: get_json_from_args: vg {'vg_name': 'test_vg1', 'vg_uuid': 'JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg', 'vg_size': '9638510592', 'vg_free': '3359637504', 'vg_extent_size': '4194304'} lv_list [{'lv_uuid': 'aan576-XsID-bAeb-JtfF-DpII-2K7h-pGHI0S', 'lv_name': 'lv1', 'lv_full_name': 'test_vg1/lv1', 'lv_path': '/dev/test_vg1/lv1', 'lv_size': '1451229184', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}, {'lv_uuid': 'CwWxDt-Iv75-Ww9d-Kq0L-bsde-oz5q-X1vMch', 'lv_name': 'lv2', 'lv_full_name': 'test_vg1/lv2', 'lv_path': '/dev/test_vg1/lv2', 'lv_size': '4827643904', 'origin': '', 'origin_size': '', 'pool_lv': '', 'lv_tags': '', 'lv_attr': '-wi-a-----', 'vg_name': 'test_vg1', 'data_percent': '', 'metadata_percent': ''}]\n2025-06-28 18:20:01,764 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}\n2025-06-28 18:20:01,822 INFO snapshot-role/MainThread: get_json_from_args: adding volume {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}\n2025-06-28 18:20:01,823 INFO snapshot-role/MainThread: validate_snapset_args: END snapset_dict is {'name': 'snapset1', 'volumes': [{'name': ('snapshot : test_vg2/lv3',), 'vg': 'test_vg2', 'lv': 'lv3', 'percent_space_required': ''}, {'name': ('snapshot : test_vg2/lv4',), 'vg': 'test_vg2', 'lv': 'lv4', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv5',), 'vg': 'test_vg3', 'lv': 'lv5', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv6',), 'vg': 'test_vg3', 'lv': 'lv6', 'percent_space_required': ''}, {'name': ('snapshot : test_vg3/lv7',), 'vg': 'test_vg3', 'lv': 'lv7', 'percent_space_required': ''}, {'name': ('snapshot : 
test_vg3/lv8',), 'vg': 'test_vg3', 'lv': 'lv8', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv1',), 'vg': 'test_vg1', 'lv': 'lv1', 'percent_space_required': ''}, {'name': ('snapshot : test_vg1/lv2',), 'vg': 'test_vg1', 'lv': 'lv2', 'percent_space_required': ''}]}\n2025-06-28 18:20:01,823 INFO snapshot-role/MainThread: remove_cmd: snapset1\n2025-06-28 18:20:01,942 INFO snapshot-role/MainThread: cmd_result: {'return_code': 0, 'errors': '', 'changed': False}\n2025-06-28 18:20:01,942 INFO snapshot-role/MainThread: result: {'changed': False, 'return_code': 0, 'message': '', 'errors': '', 'msg': ''}\n+ cat /etc/lvm/devices/system.devices\n# LVM uses devices listed in this file.\n# Created by LVM command vgcreate pid 8426 at Sat Jun 28 18:19:23 2025\n# HASH=3642833769\nPRODUCT_UUID=ec2257d3-6085-ae99-95b3-ebfab5fe05d2\nVERSION=1.1.23\nIDTYPE=sys_wwid IDNAME=naa.6001405a9efc0e1911c4201a970a6f85 DEVNAME=/dev/sdg PVID=fGZQyexmygUivBN2OeyOC9UQbrOSL6Yg\nIDTYPE=sys_wwid IDNAME=naa.60014058181fbe60fbb48f6bf65e97b7 DEVNAME=/dev/sdh PVID=4xDnwooI2hV7TQpKHN0kasYYMi7EXCjq\nIDTYPE=sys_wwid IDNAME=naa.600140536dcfebe092746238bb16a3fa DEVNAME=/dev/sdi PVID=s2n1Tr6dKzyCoZ1UN6yOuMAF23fu2ozK\nIDTYPE=sys_wwid IDNAME=naa.600140580c834ee801b48198b71671c5 DEVNAME=/dev/sdj PVID=WrvtiyQHAQm4D0TKnuJDRehcWFGRqc2u\nIDTYPE=sys_wwid IDNAME=naa.6001405acd2ba9b1a974f55a9704061c DEVNAME=/dev/sdd PVID=g4qDrcspQRjs5FniDaADHDx4Xf8P9sOS\nIDTYPE=sys_wwid IDNAME=naa.6001405378e6ca643c443e0b9c840399 DEVNAME=/dev/sde PVID=gDmTuzMWP5wqBf9PmL2JdYTsUYgCGQFw\nIDTYPE=sys_wwid IDNAME=naa.6001405f858954a0e784149995a198ad DEVNAME=/dev/sdf PVID=cvneN6LIbklm4KHpVz00roxXzwTXdY8B\nIDTYPE=sys_wwid IDNAME=naa.60014058847ce6dd73d4f01931e495d9 DEVNAME=/dev/sda PVID=PRn8hB5TjCWyYHYuFPLMZFk3jUqH1ryT\nIDTYPE=sys_wwid IDNAME=naa.6001405c48b47cd2cda4408882faf8c6 DEVNAME=/dev/sdb PVID=wYtbK0nntGubftNog8syUu4xOsdENa3q\nIDTYPE=sys_wwid IDNAME=naa.6001405582e0de585294686b36ae1d1e DEVNAME=/dev/sdc 
PVID=93Kz1c1GE6bJpqEiUO0JUc6pm1Y5YKad\n++ lsblk -l -p -o NAME\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' NAME = NAME ']'\n+ continue\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/sda = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/sda\n/dev/sda: UUID=\"PRn8hB-5TjC-WyYH-YuFP-LMZF-k3jU-qH1ryT\" TYPE=\"LVM2_member\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/sda\n/dev/sda: UUID=\"PRn8hB-5TjC-WyYH-YuFP-LMZF-k3jU-qH1ryT\" VERSION=\"LVM2 001\" TYPE=\"LVM2_member\" USAGE=\"raid\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/sdb = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/sdb\n/dev/sdb: UUID=\"wYtbK0-nntG-ubft-Nog8-syUu-4xOs-dENa3q\" TYPE=\"LVM2_member\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/sdb\n/dev/sdb: UUID=\"wYtbK0-nntG-ubft-Nog8-syUu-4xOs-dENa3q\" VERSION=\"LVM2 001\" TYPE=\"LVM2_member\" USAGE=\"raid\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/sdc = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/sdc\n/dev/sdc: UUID=\"93Kz1c-1GE6-bJpq-EiUO-0JUc-6pm1-Y5YKad\" TYPE=\"LVM2_member\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/sdc\n/dev/sdc: UUID=\"93Kz1c-1GE6-bJpq-EiUO-0JUc-6pm1-Y5YKad\" VERSION=\"LVM2 001\" TYPE=\"LVM2_member\" USAGE=\"raid\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/sdd = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/sdd\n/dev/sdd: UUID=\"g4qDrc-spQR-js5F-niDa-ADHD-x4Xf-8P9sOS\" TYPE=\"LVM2_member\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/sdd\n/dev/sdd: UUID=\"g4qDrc-spQR-js5F-niDa-ADHD-x4Xf-8P9sOS\" VERSION=\"LVM2 001\" TYPE=\"LVM2_member\" USAGE=\"raid\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/sde = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/sde\n/dev/sde: UUID=\"gDmTuz-MWP5-wqBf-9PmL-2JdY-TsUY-gCGQFw\" TYPE=\"LVM2_member\"\n+ 
echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/sde\n/dev/sde: UUID=\"gDmTuz-MWP5-wqBf-9PmL-2JdY-TsUY-gCGQFw\" VERSION=\"LVM2 001\" TYPE=\"LVM2_member\" USAGE=\"raid\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/sdf = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/sdf\n/dev/sdf: UUID=\"cvneN6-LIbk-lm4K-HpVz-00ro-xXzw-TXdY8B\" TYPE=\"LVM2_member\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/sdf\n/dev/sdf: UUID=\"cvneN6-LIbk-lm4K-HpVz-00ro-xXzw-TXdY8B\" VERSION=\"LVM2 001\" TYPE=\"LVM2_member\" USAGE=\"raid\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/sdg = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/sdg\n/dev/sdg: UUID=\"fGZQye-xmyg-UivB-N2Oe-yOC9-UQbr-OSL6Yg\" TYPE=\"LVM2_member\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/sdg\n/dev/sdg: UUID=\"fGZQye-xmyg-UivB-N2Oe-yOC9-UQbr-OSL6Yg\" VERSION=\"LVM2 001\" TYPE=\"LVM2_member\" USAGE=\"raid\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/sdh = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/sdh\n/dev/sdh: UUID=\"4xDnwo-oI2h-V7TQ-pKHN-0kas-YYMi-7EXCjq\" TYPE=\"LVM2_member\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/sdh\n/dev/sdh: UUID=\"4xDnwo-oI2h-V7TQ-pKHN-0kas-YYMi-7EXCjq\" VERSION=\"LVM2 001\" TYPE=\"LVM2_member\" USAGE=\"raid\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/sdi = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/sdi\n/dev/sdi: UUID=\"s2n1Tr-6dKz-yCoZ-1UN6-yOuM-AF23-fu2ozK\" TYPE=\"LVM2_member\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/sdi\n/dev/sdi: UUID=\"s2n1Tr-6dKz-yCoZ-1UN6-yOuM-AF23-fu2ozK\" VERSION=\"LVM2 001\" TYPE=\"LVM2_member\" USAGE=\"raid\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/sdj = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/sdj\n/dev/sdj: 
UUID=\"Wrvtiy-QHAQ-m4D0-TKnu-JDRe-hcWF-GRqc2u\" TYPE=\"LVM2_member\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/sdj\n/dev/sdj: UUID=\"Wrvtiy-QHAQ-m4D0-TKnu-JDRe-hcWF-GRqc2u\" VERSION=\"LVM2 001\" TYPE=\"LVM2_member\" USAGE=\"raid\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/sdk = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/sdk\n+ :\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/sdk\n+ :\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/sdl = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/sdl\n+ :\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/sdl\n+ :\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/xvda = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/xvda\n/dev/xvda: PTUUID=\"91c3c0f1-4957-4f21-b15a-28e9016b79c2\" PTTYPE=\"gpt\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/xvda\n/dev/xvda: PTUUID=\"91c3c0f1-4957-4f21-b15a-28e9016b79c2\" PTTYPE=\"gpt\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/xvda1 = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/xvda1\n/dev/xvda1: PARTUUID=\"fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/xvda1\n/dev/xvda1: PART_ENTRY_SCHEME=\"gpt\" PART_ENTRY_UUID=\"fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda\" PART_ENTRY_TYPE=\"21686148-6449-6e6f-744e-656564454649\" PART_ENTRY_NUMBER=\"1\" PART_ENTRY_OFFSET=\"2048\" PART_ENTRY_SIZE=\"2048\" PART_ENTRY_DISK=\"202:0\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/xvda2 = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/xvda2\n/dev/xvda2: UUID=\"8959a9f3-59d4-4eb7-8e53-e856bbc805e9\" BLOCK_SIZE=\"4096\" TYPE=\"ext4\" PARTUUID=\"782cc2d2-7936-4e3f-9cb4-9758a83f53fa\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/xvda2\n/dev/xvda2: 
UUID=\"8959a9f3-59d4-4eb7-8e53-e856bbc805e9\" VERSION=\"1.0\" FSBLOCKSIZE=\"4096\" BLOCK_SIZE=\"4096\" FSLASTBLOCK=\"65535483\" FSSIZE=\"268433338368\" TYPE=\"ext4\" USAGE=\"filesystem\" PART_ENTRY_SCHEME=\"gpt\" PART_ENTRY_UUID=\"782cc2d2-7936-4e3f-9cb4-9758a83f53fa\" PART_ENTRY_TYPE=\"0fc63daf-8483-4772-8e79-3d69d8477de4\" PART_ENTRY_NUMBER=\"2\" PART_ENTRY_OFFSET=\"4096\" PART_ENTRY_SIZE=\"524283871\" PART_ENTRY_DISK=\"202:0\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/zram0 = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/zram0\n/dev/zram0: LABEL=\"zram0\" UUID=\"9e4b39b6-8d8e-46c1-8981-c482cb670ee6\" TYPE=\"swap\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/zram0\n/dev/zram0: ENDIANNESS=\"LITTLE\" FSBLOCKSIZE=\"4096\" FSSIZE=\"3894407168\" FSLASTBLOCK=\"950784\" LABEL=\"zram0\" UUID=\"9e4b39b6-8d8e-46c1-8981-c482cb670ee6\" VERSION=\"1\" TYPE=\"swap\" USAGE=\"other\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/mapper/test_vg3-lv8 = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/mapper/test_vg3-lv8\n/dev/mapper/test_vg3-lv8: UUID=\"77628583-14fb-431d-8722-115eadb2c621\" BLOCK_SIZE=\"512\" TYPE=\"xfs\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/mapper/test_vg3-lv8\n/dev/mapper/test_vg3-lv8: UUID=\"77628583-14fb-431d-8722-115eadb2c621\" FSSIZE=\"1220542464\" FSLASTBLOCK=\"314368\" FSBLOCKSIZE=\"4096\" BLOCK_SIZE=\"512\" TYPE=\"xfs\" USAGE=\"filesystem\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/mapper/test_vg3-lv7 = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/mapper/test_vg3-lv7\n/dev/mapper/test_vg3-lv7: UUID=\"596601cb-c0d2-421d-af31-da3ca0a3650c\" BLOCK_SIZE=\"512\" TYPE=\"xfs\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/mapper/test_vg3-lv7\n/dev/mapper/test_vg3-lv7: UUID=\"596601cb-c0d2-421d-af31-da3ca0a3650c\" FSSIZE=\"1220542464\" FSLASTBLOCK=\"314368\" 
FSBLOCKSIZE=\"4096\" BLOCK_SIZE=\"512\" TYPE=\"xfs\" USAGE=\"filesystem\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/mapper/test_vg3-lv6 = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/mapper/test_vg3-lv6\n/dev/mapper/test_vg3-lv6: UUID=\"84001af2-01d9-4734-b51a-79efe7f59395\" BLOCK_SIZE=\"512\" TYPE=\"xfs\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/mapper/test_vg3-lv6\n/dev/mapper/test_vg3-lv6: UUID=\"84001af2-01d9-4734-b51a-79efe7f59395\" FSSIZE=\"3149922304\" FSLASTBLOCK=\"785408\" FSBLOCKSIZE=\"4096\" BLOCK_SIZE=\"512\" TYPE=\"xfs\" USAGE=\"filesystem\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/mapper/test_vg3-lv5 = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/mapper/test_vg3-lv5\n/dev/mapper/test_vg3-lv5: UUID=\"eb47ddaa-1d93-495c-b8f9-7231835c82c5\" BLOCK_SIZE=\"512\" TYPE=\"xfs\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/mapper/test_vg3-lv5\n/dev/mapper/test_vg3-lv5: UUID=\"eb47ddaa-1d93-495c-b8f9-7231835c82c5\" FSSIZE=\"3795845120\" FSLASTBLOCK=\"943104\" FSBLOCKSIZE=\"4096\" BLOCK_SIZE=\"512\" TYPE=\"xfs\" USAGE=\"filesystem\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/mapper/test_vg2-lv4 = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/mapper/test_vg2-lv4\n/dev/mapper/test_vg2-lv4: UUID=\"8ff069e1-541f-476d-a94a-129e7396e539\" BLOCK_SIZE=\"512\" TYPE=\"xfs\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/mapper/test_vg2-lv4\n/dev/mapper/test_vg2-lv4: UUID=\"8ff069e1-541f-476d-a94a-129e7396e539\" FSSIZE=\"1866465280\" FSLASTBLOCK=\"472064\" FSBLOCKSIZE=\"4096\" BLOCK_SIZE=\"512\" TYPE=\"xfs\" USAGE=\"filesystem\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/mapper/test_vg2-lv3 = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/mapper/test_vg2-lv3\n/dev/mapper/test_vg2-lv3: UUID=\"13c0d833-a00e-49fa-a58c-aa045851ccb6\" 
BLOCK_SIZE=\"512\" TYPE=\"xfs\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/mapper/test_vg2-lv3\n/dev/mapper/test_vg2-lv3: UUID=\"13c0d833-a00e-49fa-a58c-aa045851ccb6\" FSSIZE=\"901775360\" FSLASTBLOCK=\"236544\" FSBLOCKSIZE=\"4096\" BLOCK_SIZE=\"512\" TYPE=\"xfs\" USAGE=\"filesystem\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/mapper/test_vg1-lv2 = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/mapper/test_vg1-lv2\n/dev/mapper/test_vg1-lv2: UUID=\"991d22ac-80b0-450e-8c69-466187ba5696\" BLOCK_SIZE=\"512\" TYPE=\"xfs\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/mapper/test_vg1-lv2\n/dev/mapper/test_vg1-lv2: UUID=\"991d22ac-80b0-450e-8c69-466187ba5696\" FSSIZE=\"4760535040\" FSLASTBLOCK=\"1178624\" FSBLOCKSIZE=\"4096\" BLOCK_SIZE=\"512\" TYPE=\"xfs\" USAGE=\"filesystem\"\n+ for dev in $(lsblk -l -p -o NAME)\n+ '[' /dev/mapper/test_vg1-lv1 = NAME ']'\n+ echo blkid info with cache\nblkid info with cache\n+ blkid /dev/mapper/test_vg1-lv1\n/dev/mapper/test_vg1-lv1: UUID=\"d16d04d4-8c39-458d-91ab-62a71063488e\" BLOCK_SIZE=\"512\" TYPE=\"xfs\"\n+ echo blkid info without cache\nblkid info without cache\n+ blkid -p /dev/mapper/test_vg1-lv1\n/dev/mapper/test_vg1-lv1: UUID=\"d16d04d4-8c39-458d-91ab-62a71063488e\" FSSIZE=\"1384120320\" FSLASTBLOCK=\"354304\" FSBLOCKSIZE=\"4096\" BLOCK_SIZE=\"512\" TYPE=\"xfs\" USAGE=\"filesystem\"\n+ blkid -g\n+ echo lsblk after garbage collect\nlsblk after garbage collect\n+ lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE\nNAME=\"/dev/sda\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/mapper/test_vg1-lv2\" TYPE=\"lvm\" SIZE=\"4827643904\" FSTYPE=\"xfs\"\nNAME=\"/dev/sdb\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/mapper/test_vg1-lv2\" TYPE=\"lvm\" SIZE=\"4827643904\" FSTYPE=\"xfs\"\nNAME=\"/dev/mapper/test_vg1-lv1\" TYPE=\"lvm\" SIZE=\"1451229184\" FSTYPE=\"xfs\"\nNAME=\"/dev/sdc\" 
TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/sdd\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/mapper/test_vg2-lv4\" TYPE=\"lvm\" SIZE=\"1933574144\" FSTYPE=\"xfs\"\nNAME=\"/dev/mapper/test_vg2-lv3\" TYPE=\"lvm\" SIZE=\"968884224\" FSTYPE=\"xfs\"\nNAME=\"/dev/sde\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/sdf\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/sdg\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/mapper/test_vg3-lv8\" TYPE=\"lvm\" SIZE=\"1287651328\" FSTYPE=\"xfs\"\nNAME=\"/dev/mapper/test_vg3-lv7\" TYPE=\"lvm\" SIZE=\"1287651328\" FSTYPE=\"xfs\"\nNAME=\"/dev/mapper/test_vg3-lv6\" TYPE=\"lvm\" SIZE=\"3217031168\" FSTYPE=\"xfs\"\nNAME=\"/dev/sdh\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/mapper/test_vg3-lv6\" TYPE=\"lvm\" SIZE=\"3217031168\" FSTYPE=\"xfs\"\nNAME=\"/dev/sdi\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/mapper/test_vg3-lv5\" TYPE=\"lvm\" SIZE=\"3862953984\" FSTYPE=\"xfs\"\nNAME=\"/dev/sdj\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/mapper/test_vg3-lv5\" TYPE=\"lvm\" SIZE=\"3862953984\" FSTYPE=\"xfs\"\nNAME=\"/dev/sdk\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\"\nNAME=\"/dev/sdl\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\"\nNAME=\"/dev/xvda\" TYPE=\"disk\" SIZE=\"268435456000\" FSTYPE=\"\"\nNAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\"\nNAME=\"/dev/xvda2\" TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"ext4\"\nNAME=\"/dev/zram0\" TYPE=\"disk\" SIZE=\"3894411264\" FSTYPE=\"swap\"\n+ blkid -s none\n+ echo lsblk after cache flush\nlsblk after cache flush\n+ lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE\nNAME=\"/dev/sda\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/mapper/test_vg1-lv2\" TYPE=\"lvm\" SIZE=\"4827643904\" FSTYPE=\"xfs\"\nNAME=\"/dev/sdb\" TYPE=\"disk\" SIZE=\"3221225472\" 
FSTYPE=\"LVM2_member\"\nNAME=\"/dev/mapper/test_vg1-lv2\" TYPE=\"lvm\" SIZE=\"4827643904\" FSTYPE=\"xfs\"\nNAME=\"/dev/mapper/test_vg1-lv1\" TYPE=\"lvm\" SIZE=\"1451229184\" FSTYPE=\"xfs\"\nNAME=\"/dev/sdc\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/sdd\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/mapper/test_vg2-lv4\" TYPE=\"lvm\" SIZE=\"1933574144\" FSTYPE=\"xfs\"\nNAME=\"/dev/mapper/test_vg2-lv3\" TYPE=\"lvm\" SIZE=\"968884224\" FSTYPE=\"xfs\"\nNAME=\"/dev/sde\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/sdf\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/sdg\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/mapper/test_vg3-lv8\" TYPE=\"lvm\" SIZE=\"1287651328\" FSTYPE=\"xfs\"\nNAME=\"/dev/mapper/test_vg3-lv7\" TYPE=\"lvm\" SIZE=\"1287651328\" FSTYPE=\"xfs\"\nNAME=\"/dev/mapper/test_vg3-lv6\" TYPE=\"lvm\" SIZE=\"3217031168\" FSTYPE=\"xfs\"\nNAME=\"/dev/sdh\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/mapper/test_vg3-lv6\" TYPE=\"lvm\" SIZE=\"3217031168\" FSTYPE=\"xfs\"\nNAME=\"/dev/sdi\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/mapper/test_vg3-lv5\" TYPE=\"lvm\" SIZE=\"3862953984\" FSTYPE=\"xfs\"\nNAME=\"/dev/sdj\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"LVM2_member\"\nNAME=\"/dev/mapper/test_vg3-lv5\" TYPE=\"lvm\" SIZE=\"3862953984\" FSTYPE=\"xfs\"\nNAME=\"/dev/sdk\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\"\nNAME=\"/dev/sdl\" TYPE=\"disk\" SIZE=\"3221225472\" FSTYPE=\"\"\nNAME=\"/dev/xvda\" TYPE=\"disk\" SIZE=\"268435456000\" FSTYPE=\"\"\nNAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"1048576\" FSTYPE=\"\"\nNAME=\"/dev/xvda2\" TYPE=\"part\" SIZE=\"268433341952\" FSTYPE=\"ext4\"\nNAME=\"/dev/zram0\" TYPE=\"disk\" SIZE=\"3894411264\" FSTYPE=\"swap\"\n+ cat /tmp/blivet.log\n2025-06-28 18:18:51,479 INFO blivet/MainThread: sys.argv = 
['/tmp/ansible_fedora.linux_system_roles.blivet_payload_87gzpafp/ansible_fedora.linux_system_roles.blivet_payload.zip/ansible_collections/fedora/linux_system_roles/plugins/modules/blivet.py']\n2025-06-28 18:18:57,178 INFO blivet/MainThread: sys.argv = ['/tmp/ansible_fedora.linux_system_roles.blivet_payload_sbvzybbq/ansible_fedora.linux_system_roles.blivet_payload.zip/ansible_collections/fedora/linux_system_roles/plugins/modules/blivet.py']\n2025-06-28 18:19:07,697 INFO blivet/MainThread: sys.argv = ['/tmp/ansible_fedora.linux_system_roles.blivet_payload_q_oih1un/ansible_fedora.linux_system_roles.blivet_payload.zip/ansible_collections/fedora/linux_system_roles/plugins/modules/blivet.py']\n2025-06-28 18:19:07,714 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:07,728 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ;\n2025-06-28 18:19:07,729 DEBUG blivet/MainThread: get_format('ext4') returning Ext4FS instance with object id 0\n2025-06-28 18:19:07,732 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ;\n2025-06-28 18:19:07,732 DEBUG blivet/MainThread: trying to set new default fstype to 'ext4'\n2025-06-28 18:19:07,735 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ;\n2025-06-28 18:19:07,735 DEBUG blivet/MainThread: get_format('ext4') returning Ext4FS instance with object id 1\n2025-06-28 18:19:07,738 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ;\n2025-06-28 18:19:07,738 INFO blivet/MainThread: Fstab file '' does not exist, setting fstab read path to None\n2025-06-28 18:19:07,738 INFO program/MainThread: Running... 
lsblk --bytes -a -o NAME,SIZE,OWNER,GROUP,MODE,FSTYPE,LABEL,UUID,PARTUUID,MOUNTPOINT\n2025-06-28 18:19:07,762 INFO program/MainThread: stdout:\n2025-06-28 18:19:07,762 INFO program/MainThread: NAME SIZE OWNER GROUP MODE FSTYPE LABEL UUID PARTUUID MOUNTPOINT\n2025-06-28 18:19:07,762 INFO program/MainThread: sda 3221225472 root disk brw-rw---- \n2025-06-28 18:19:07,762 INFO program/MainThread: sdb 3221225472 root disk brw-rw---- \n2025-06-28 18:19:07,762 INFO program/MainThread: sdc 3221225472 root disk brw-rw---- \n2025-06-28 18:19:07,762 INFO program/MainThread: sdd 3221225472 root disk brw-rw---- \n2025-06-28 18:19:07,762 INFO program/MainThread: sde 3221225472 root disk brw-rw---- \n2025-06-28 18:19:07,762 INFO program/MainThread: sdf 3221225472 root disk brw-rw---- \n2025-06-28 18:19:07,762 INFO program/MainThread: sdg 3221225472 root disk brw-rw---- \n2025-06-28 18:19:07,762 INFO program/MainThread: sdh 3221225472 root disk brw-rw---- \n2025-06-28 18:19:07,762 INFO program/MainThread: sdi 3221225472 root disk brw-rw---- \n2025-06-28 18:19:07,762 INFO program/MainThread: sdj 3221225472 root disk brw-rw---- \n2025-06-28 18:19:07,762 INFO program/MainThread: sdk 3221225472 root disk brw-rw---- \n2025-06-28 18:19:07,762 INFO program/MainThread: sdl 3221225472 root disk brw-rw---- \n2025-06-28 18:19:07,762 INFO program/MainThread: xvda 268435456000 root disk brw-rw---- \n2025-06-28 18:19:07,762 INFO program/MainThread: |-xvda1 1048576 root disk brw-rw---- fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda \n2025-06-28 18:19:07,762 INFO program/MainThread: `-xvda2 268433341952 root disk brw-rw---- ext4 8959a9f3-59d4-4eb7-8e53-e856bbc805e9 782cc2d2-7936-4e3f-9cb4-9758a83f53fa /\n2025-06-28 18:19:07,763 INFO program/MainThread: zram0 3894411264 root disk brw-rw---- swap zram0 9e4b39b6-8d8e-46c1-8981-c482cb670ee6 [SWAP]\n2025-06-28 18:19:07,763 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:07,763 DEBUG blivet/MainThread: lsblk output:\nNAME SIZE OWNER GROUP MODE FSTYPE 
LABEL UUID PARTUUID MOUNTPOINT\nsda 3221225472 root disk brw-rw---- \nsdb 3221225472 root disk brw-rw---- \nsdc 3221225472 root disk brw-rw---- \nsdd 3221225472 root disk brw-rw---- \nsde 3221225472 root disk brw-rw---- \nsdf 3221225472 root disk brw-rw---- \nsdg 3221225472 root disk brw-rw---- \nsdh 3221225472 root disk brw-rw---- \nsdi 3221225472 root disk brw-rw---- \nsdj 3221225472 root disk brw-rw---- \nsdk 3221225472 root disk brw-rw---- \nsdl 3221225472 root disk brw-rw---- \nxvda 268435456000 root disk brw-rw---- \n|-xvda1 1048576 root disk brw-rw---- fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda \n`-xvda2 268433341952 root disk brw-rw---- ext4 8959a9f3-59d4-4eb7-8e53-e856bbc805e9 782cc2d2-7936-4e3f-9cb4-9758a83f53fa /\nzram0 3894411264 root disk brw-rw---- swap zram0 9e4b39b6-8d8e-46c1-8981-c482cb670ee6 [SWAP]\n\n2025-06-28 18:19:07,763 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:07,763 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:07,763 INFO blivet/MainThread: resetting Blivet (version 3.12.1) instance \n2025-06-28 18:19:07,763 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:07,763 INFO blivet/MainThread: DeviceTree.populate: ignored_disks is [] ; exclusive_disks is []\n2025-06-28 18:19:07,764 WARNING blivet/MainThread: Failed to call the update_volume_info method: libstoragemgmt functionality not available\n2025-06-28 18:19:07,764 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:07,774 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:07,785 INFO blivet/MainThread: devices to scan: ['sda', 'sdb', 'sdk', 'sdl', 'sdc', 'sdd', 'sde', 'sdf', 'sdg', 'sdh', 'sdi', 'sdj', 'xvda', 'xvda1', 'xvda2', 'zram0']\n2025-06-28 18:19:07,789 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sda ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/scsi-360014058847ce6dd73d4f01931e495d9 '\n '/dev/disk/by-id/wwn-0x60014058847ce6dd73d4f01931e495d9 '\n '/dev/disk/by-diskseq/3',\n 'DEVNAME': '/dev/sda',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '3',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk0',\n 'ID_MODEL_ENC': 'disk0\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': '8847ce6d-d73d-4f01-931e-495d93c2876b',\n 'ID_SERIAL': '360014058847ce6dd73d4f01931e495d9',\n 'ID_SERIAL_SHORT': '60014058847ce6dd73d4f01931e495d9',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x60014058847ce6dd',\n 'ID_WWN_VENDOR_EXTENSION': '0x73d4f01931e495d9',\n 'ID_WWN_WITH_EXTENSION': '0x60014058847ce6dd73d4f01931e495d9',\n 'MAJOR': '8',\n 'MINOR': '0',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sda',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193406043'} ;\n2025-06-28 18:19:07,789 INFO blivet/MainThread: scanning sda (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda)...\n2025-06-28 18:19:07,790 INFO program/MainThread: Running [3] lvm lvs --noheadings --nosuffix --nameprefixes --unquoted --units=b -a -o vg_name,lv_name,lv_uuid,lv_size,lv_attr,segtype,origin,pool_lv,data_lv,metadata_lv,role,move_pv,data_percent,metadata_percent,copy_percent,lv_tags 
--config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:07,824 INFO program/MainThread: stdout[3]: \n2025-06-28 18:19:07,824 INFO program/MainThread: stderr[3]: \n2025-06-28 18:19:07,824 INFO program/MainThread: ...done [3] (exit code: 0)\n2025-06-28 18:19:07,829 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sda ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:07,832 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:07,832 INFO program/MainThread: Running [4] mdadm --version ...\n2025-06-28 18:19:07,841 INFO program/MainThread: stdout[4]: \n2025-06-28 18:19:07,841 INFO program/MainThread: stderr[4]: mdadm - v4.3 - 2024-02-15\n\n2025-06-28 18:19:07,841 INFO program/MainThread: ...done [4] (exit code: 0)\n2025-06-28 18:19:07,842 INFO program/MainThread: Running [5] dmsetup --version ...\n2025-06-28 18:19:07,847 INFO program/MainThread: stdout[5]: Library version: 1.02.204 (2025-01-14)\nDriver version: 4.49.0\n\n2025-06-28 18:19:07,847 INFO program/MainThread: stderr[5]: \n2025-06-28 18:19:07,847 INFO program/MainThread: ...done [5] (exit code: 0)\n2025-06-28 18:19:07,998 INFO blivet/MainThread: failed to get initiator name from iscsi firmware: UDisks iSCSI functionality not available\n2025-06-28 18:19:07,999 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/udev.py:1087: DeprecationWarning: Will be removed in 1.0. 
Access properties with Device.properties.\n while device:\n\n2025-06-28 18:19:08,005 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sda ;\n2025-06-28 18:19:08,006 INFO blivet/MainThread: sda is a disk\n2025-06-28 18:19:08,006 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:08,006 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 3\n2025-06-28 18:19:08,006 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 4\n2025-06-28 18:19:08,010 DEBUG blivet/MainThread: DiskDevice._set_format: sda ; type: None ; current: None ;\n2025-06-28 18:19:08,014 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sda ; status: True ;\n2025-06-28 18:19:08,014 DEBUG blivet/MainThread: sda sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda\n2025-06-28 18:19:08,018 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sda ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda ;\n2025-06-28 18:19:08,018 DEBUG blivet/MainThread: updated sda size to 3 GiB (3 GiB)\n2025-06-28 18:19:08,018 INFO blivet/MainThread: added disk sda (id 2) to device tree\n2025-06-28 18:19:08,018 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa3b8c0) --\n name = sda status = True id = 2\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 0 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda\n target size = 3 GiB path = /dev/sda\n format args = [] original_format = 
None removable = False wwn = 60014058847ce6dd73d4f01931e495d9\n2025-06-28 18:19:08,022 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sda ;\n2025-06-28 18:19:08,022 DEBUG blivet/MainThread: no type or existing type for sda, bailing\n2025-06-28 18:19:08,026 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdb ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/scsi-36001405c48b47cd2cda4408882faf8c6 '\n '/dev/disk/by-id/wwn-0x6001405c48b47cd2cda4408882faf8c6 '\n '/dev/disk/by-diskseq/4',\n 'DEVNAME': '/dev/sdb',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '4',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk1',\n 'ID_MODEL_ENC': 'disk1\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': 'c48b47cd-2cda-4408-882f-af8c68d83c74',\n 'ID_SERIAL': '36001405c48b47cd2cda4408882faf8c6',\n 'ID_SERIAL_SHORT': '6001405c48b47cd2cda4408882faf8c6',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x6001405c48b47cd2',\n 'ID_WWN_VENDOR_EXTENSION': '0xcda4408882faf8c6',\n 'ID_WWN_WITH_EXTENSION': '0x6001405c48b47cd2cda4408882faf8c6',\n 'MAJOR': '8',\n 'MINOR': '16',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdb',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193440060'} ;\n2025-06-28 18:19:08,026 INFO blivet/MainThread: scanning sdb (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb)...\n2025-06-28 18:19:08,029 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdb ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,032 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,037 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdb ;\n2025-06-28 
18:19:08,037 INFO blivet/MainThread: sdb is a disk\n2025-06-28 18:19:08,037 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 8\n2025-06-28 18:19:08,038 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 9\n2025-06-28 18:19:08,041 DEBUG blivet/MainThread: DiskDevice._set_format: sdb ; type: None ; current: None ;\n2025-06-28 18:19:08,044 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdb ; status: True ;\n2025-06-28 18:19:08,044 DEBUG blivet/MainThread: sdb sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb\n2025-06-28 18:19:08,048 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdb ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb ;\n2025-06-28 18:19:08,048 DEBUG blivet/MainThread: updated sdb size to 3 GiB (3 GiB)\n2025-06-28 18:19:08,048 INFO blivet/MainThread: added disk sdb (id 7) to device tree\n2025-06-28 18:19:08,048 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa96710) --\n name = sdb status = True id = 7\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 16 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb\n target size = 3 GiB path = /dev/sdb\n format args = [] original_format = None removable = False wwn = 6001405c48b47cd2cda4408882faf8c6\n2025-06-28 18:19:08,052 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdb ;\n2025-06-28 18:19:08,052 DEBUG blivet/MainThread: no type or existing type for sdb, bailing\n2025-06-28 18:19:08,055 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdk ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-diskseq/13 '\n '/dev/disk/by-id/wwn-0x600140532b7553a2ede45408ac592f3f '\n '/dev/disk/by-id/scsi-3600140532b7553a2ede45408ac592f3f',\n 
'DEVNAME': '/dev/sdk',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '13',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk10',\n 'ID_MODEL_ENC': 'disk10\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': '32b7553a-2ede-4540-8ac5-92f3f51fb680',\n 'ID_SERIAL': '3600140532b7553a2ede45408ac592f3f',\n 'ID_SERIAL_SHORT': '600140532b7553a2ede45408ac592f3f',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x600140532b7553a2',\n 'ID_WWN_VENDOR_EXTENSION': '0xede45408ac592f3f',\n 'ID_WWN_WITH_EXTENSION': '0x600140532b7553a2ede45408ac592f3f',\n 'MAJOR': '8',\n 'MINOR': '160',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdk',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '194002206'} ;\n2025-06-28 18:19:08,055 INFO blivet/MainThread: scanning sdk (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk)...\n2025-06-28 18:19:08,058 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdk ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,062 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,066 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdk ;\n2025-06-28 18:19:08,067 INFO blivet/MainThread: sdk is a disk\n2025-06-28 18:19:08,067 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 13\n2025-06-28 18:19:08,067 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 14\n2025-06-28 18:19:08,070 DEBUG blivet/MainThread: DiskDevice._set_format: sdk ; type: None ; current: None ;\n2025-06-28 18:19:08,073 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdk ; status: True ;\n2025-06-28 18:19:08,073 
DEBUG blivet/MainThread: sdk sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk\n2025-06-28 18:19:08,078 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdk ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk ;\n2025-06-28 18:19:08,078 DEBUG blivet/MainThread: updated sdk size to 3 GiB (3 GiB)\n2025-06-28 18:19:08,078 INFO blivet/MainThread: added disk sdk (id 12) to device tree\n2025-06-28 18:19:08,078 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa96350) --\n name = sdk status = True id = 12\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 160 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk\n target size = 3 GiB path = /dev/sdk\n format args = [] original_format = None removable = False wwn = 600140532b7553a2ede45408ac592f3f\n2025-06-28 18:19:08,081 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdk ;\n2025-06-28 18:19:08,081 DEBUG blivet/MainThread: no type or existing type for sdk, bailing\n2025-06-28 18:19:08,085 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdl ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/scsi-36001405d599e4585dd1440490d5145a0 '\n '/dev/disk/by-id/wwn-0x6001405d599e4585dd1440490d5145a0 '\n '/dev/disk/by-diskseq/14',\n 'DEVNAME': '/dev/sdl',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '14',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk11',\n 'ID_MODEL_ENC': 'disk11\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': 'd599e458-5dd1-4404-90d5-145a01a2b253',\n 'ID_SERIAL': '36001405d599e4585dd1440490d5145a0',\n 'ID_SERIAL_SHORT': '6001405d599e4585dd1440490d5145a0',\n 
'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x6001405d599e4585',\n 'ID_WWN_VENDOR_EXTENSION': '0xdd1440490d5145a0',\n 'ID_WWN_WITH_EXTENSION': '0x6001405d599e4585dd1440490d5145a0',\n 'MAJOR': '8',\n 'MINOR': '176',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdl',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '194051087'} ;\n2025-06-28 18:19:08,085 INFO blivet/MainThread: scanning sdl (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl)...\n2025-06-28 18:19:08,088 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdl ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,091 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,096 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdl ;\n2025-06-28 18:19:08,096 INFO blivet/MainThread: sdl is a disk\n2025-06-28 18:19:08,096 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 18\n2025-06-28 18:19:08,096 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 19\n2025-06-28 18:19:08,099 DEBUG blivet/MainThread: DiskDevice._set_format: sdl ; type: None ; current: None ;\n2025-06-28 18:19:08,103 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdl ; status: True ;\n2025-06-28 18:19:08,103 DEBUG blivet/MainThread: sdl sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl\n2025-06-28 18:19:08,107 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdl ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl ;\n2025-06-28 18:19:08,107 DEBUG blivet/MainThread: updated sdl size to 3 GiB (3 GiB)\n2025-06-28 18:19:08,107 INFO blivet/MainThread: added disk sdl (id 17) to device 
tree\n2025-06-28 18:19:08,107 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa95950) --\n name = sdl status = True id = 17\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 176 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl\n target size = 3 GiB path = /dev/sdl\n format args = [] original_format = None removable = False wwn = 6001405d599e4585dd1440490d5145a0\n2025-06-28 18:19:08,110 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdl ;\n2025-06-28 18:19:08,110 DEBUG blivet/MainThread: no type or existing type for sdl, bailing\n2025-06-28 18:19:08,113 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdc ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/scsi-36001405582e0de585294686b36ae1d1e '\n '/dev/disk/by-diskseq/5 '\n '/dev/disk/by-id/wwn-0x6001405582e0de585294686b36ae1d1e',\n 'DEVNAME': '/dev/sdc',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '5',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk2',\n 'ID_MODEL_ENC': 'disk2\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': '582e0de5-8529-4686-b36a-e1d1e173bd3f',\n 'ID_SERIAL': '36001405582e0de585294686b36ae1d1e',\n 'ID_SERIAL_SHORT': '6001405582e0de585294686b36ae1d1e',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x6001405582e0de58',\n 'ID_WWN_VENDOR_EXTENSION': '0x5294686b36ae1d1e',\n 'ID_WWN_WITH_EXTENSION': '0x6001405582e0de585294686b36ae1d1e',\n 'MAJOR': '8',\n 'MINOR': '32',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdc',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193503847'} ;\n2025-06-28 
18:19:08,114 INFO blivet/MainThread: scanning sdc (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc)...\n2025-06-28 18:19:08,117 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdc ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,120 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,125 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdc ;\n2025-06-28 18:19:08,125 INFO blivet/MainThread: sdc is a disk\n2025-06-28 18:19:08,125 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 23\n2025-06-28 18:19:08,125 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 24\n2025-06-28 18:19:08,128 DEBUG blivet/MainThread: DiskDevice._set_format: sdc ; type: None ; current: None ;\n2025-06-28 18:19:08,132 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdc ; status: True ;\n2025-06-28 18:19:08,132 DEBUG blivet/MainThread: sdc sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc\n2025-06-28 18:19:08,135 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdc ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc ;\n2025-06-28 18:19:08,136 DEBUG blivet/MainThread: updated sdc size to 3 GiB (3 GiB)\n2025-06-28 18:19:08,136 INFO blivet/MainThread: added disk sdc (id 22) to device tree\n2025-06-28 18:19:08,136 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa960d0) --\n name = sdc status = True id = 22\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 32 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc\n target size = 3 GiB path = /dev/sdc\n format args = [] original_format = None removable = False wwn = 6001405582e0de585294686b36ae1d1e\n2025-06-28 
18:19:08,139 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdc ;\n2025-06-28 18:19:08,139 DEBUG blivet/MainThread: no type or existing type for sdc, bailing\n2025-06-28 18:19:08,143 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdd ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/scsi-36001405acd2ba9b1a974f55a9704061c '\n '/dev/disk/by-id/wwn-0x6001405acd2ba9b1a974f55a9704061c '\n '/dev/disk/by-diskseq/6',\n 'DEVNAME': '/dev/sdd',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '6',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk3',\n 'ID_MODEL_ENC': 'disk3\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': 'acd2ba9b-1a97-4f55-a970-4061c67c3912',\n 'ID_SERIAL': '36001405acd2ba9b1a974f55a9704061c',\n 'ID_SERIAL_SHORT': '6001405acd2ba9b1a974f55a9704061c',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x6001405acd2ba9b1',\n 'ID_WWN_VENDOR_EXTENSION': '0xa974f55a9704061c',\n 'ID_WWN_WITH_EXTENSION': '0x6001405acd2ba9b1a974f55a9704061c',\n 'MAJOR': '8',\n 'MINOR': '48',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdd',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193576323'} ;\n2025-06-28 18:19:08,143 INFO blivet/MainThread: scanning sdd (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd)...\n2025-06-28 18:19:08,146 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdd ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,149 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,154 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdd ;\n2025-06-28 18:19:08,154 INFO blivet/MainThread: sdd is a disk\n2025-06-28 
18:19:08,154 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 28\n2025-06-28 18:19:08,154 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 29\n2025-06-28 18:19:08,158 DEBUG blivet/MainThread: DiskDevice._set_format: sdd ; type: None ; current: None ;\n2025-06-28 18:19:08,161 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdd ; status: True ;\n2025-06-28 18:19:08,161 DEBUG blivet/MainThread: sdd sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd\n2025-06-28 18:19:08,164 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdd ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd ;\n2025-06-28 18:19:08,164 DEBUG blivet/MainThread: updated sdd size to 3 GiB (3 GiB)\n2025-06-28 18:19:08,165 INFO blivet/MainThread: added disk sdd (id 27) to device tree\n2025-06-28 18:19:08,165 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa95f90) --\n name = sdd status = True id = 27\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 48 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd\n target size = 3 GiB path = /dev/sdd\n format args = [] original_format = None removable = False wwn = 6001405acd2ba9b1a974f55a9704061c\n2025-06-28 18:19:08,168 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdd ;\n2025-06-28 18:19:08,169 DEBUG blivet/MainThread: no type or existing type for sdd, bailing\n2025-06-28 18:19:08,172 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sde ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/scsi-36001405378e6ca643c443e0b9c840399 '\n '/dev/disk/by-id/wwn-0x6001405378e6ca643c443e0b9c840399 '\n '/dev/disk/by-diskseq/7',\n 'DEVNAME': '/dev/sde',\n 'DEVPATH': 
'/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '7',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk4',\n 'ID_MODEL_ENC': 'disk4\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': '378e6ca6-43c4-43e0-b9c8-4039999596e3',\n 'ID_SERIAL': '36001405378e6ca643c443e0b9c840399',\n 'ID_SERIAL_SHORT': '6001405378e6ca643c443e0b9c840399',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x6001405378e6ca64',\n 'ID_WWN_VENDOR_EXTENSION': '0x3c443e0b9c840399',\n 'ID_WWN_WITH_EXTENSION': '0x6001405378e6ca643c443e0b9c840399',\n 'MAJOR': '8',\n 'MINOR': '64',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sde',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193646170'} ;\n2025-06-28 18:19:08,172 INFO blivet/MainThread: scanning sde (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde)...\n2025-06-28 18:19:08,175 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sde ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,178 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,183 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sde ;\n2025-06-28 18:19:08,183 INFO blivet/MainThread: sde is a disk\n2025-06-28 18:19:08,183 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 33\n2025-06-28 18:19:08,183 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 34\n2025-06-28 18:19:08,186 DEBUG blivet/MainThread: DiskDevice._set_format: sde ; type: None ; current: None ;\n2025-06-28 18:19:08,190 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sde ; status: True ;\n2025-06-28 18:19:08,190 DEBUG blivet/MainThread: sde sysfs_path 
set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde\n2025-06-28 18:19:08,193 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sde ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde ;\n2025-06-28 18:19:08,193 DEBUG blivet/MainThread: updated sde size to 3 GiB (3 GiB)\n2025-06-28 18:19:08,193 INFO blivet/MainThread: added disk sde (id 32) to device tree\n2025-06-28 18:19:08,194 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa95e50) --\n name = sde status = True id = 32\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 64 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde\n target size = 3 GiB path = /dev/sde\n format args = [] original_format = None removable = False wwn = 6001405378e6ca643c443e0b9c840399\n2025-06-28 18:19:08,197 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sde ;\n2025-06-28 18:19:08,197 DEBUG blivet/MainThread: no type or existing type for sde, bailing\n2025-06-28 18:19:08,200 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdf ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-diskseq/8 '\n '/dev/disk/by-id/scsi-36001405f858954a0e784149995a198ad '\n '/dev/disk/by-id/wwn-0x6001405f858954a0e784149995a198ad',\n 'DEVNAME': '/dev/sdf',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '8',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk5',\n 'ID_MODEL_ENC': 'disk5\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': 'f858954a-0e78-4149-995a-198adb87ba16',\n 'ID_SERIAL': '36001405f858954a0e784149995a198ad',\n 'ID_SERIAL_SHORT': '6001405f858954a0e784149995a198ad',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 
'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x6001405f858954a0',\n 'ID_WWN_VENDOR_EXTENSION': '0xe784149995a198ad',\n 'ID_WWN_WITH_EXTENSION': '0x6001405f858954a0e784149995a198ad',\n 'MAJOR': '8',\n 'MINOR': '80',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdf',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193708083'} ;\n2025-06-28 18:19:08,201 INFO blivet/MainThread: scanning sdf (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf)...\n2025-06-28 18:19:08,204 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdf ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,207 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,212 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdf ;\n2025-06-28 18:19:08,212 INFO blivet/MainThread: sdf is a disk\n2025-06-28 18:19:08,212 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 38\n2025-06-28 18:19:08,213 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 39\n2025-06-28 18:19:08,216 DEBUG blivet/MainThread: DiskDevice._set_format: sdf ; type: None ; current: None ;\n2025-06-28 18:19:08,219 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdf ; status: True ;\n2025-06-28 18:19:08,219 DEBUG blivet/MainThread: sdf sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf\n2025-06-28 18:19:08,222 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdf ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf ;\n2025-06-28 18:19:08,223 DEBUG blivet/MainThread: updated sdf size to 3 GiB (3 GiB)\n2025-06-28 18:19:08,223 INFO blivet/MainThread: added disk sdf (id 37) to device tree\n2025-06-28 18:19:08,223 INFO blivet/MainThread: got device: 
DiskDevice instance (0x7f2cffa95d10) --\n name = sdf status = True id = 37\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 80 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf\n target size = 3 GiB path = /dev/sdf\n format args = [] original_format = None removable = False wwn = 6001405f858954a0e784149995a198ad\n2025-06-28 18:19:08,227 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdf ;\n2025-06-28 18:19:08,227 DEBUG blivet/MainThread: no type or existing type for sdf, bailing\n2025-06-28 18:19:08,230 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdg ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/wwn-0x6001405a9efc0e1911c4201a970a6f85 '\n '/dev/disk/by-diskseq/9 '\n '/dev/disk/by-id/scsi-36001405a9efc0e1911c4201a970a6f85',\n 'DEVNAME': '/dev/sdg',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '9',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk6',\n 'ID_MODEL_ENC': 'disk6\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': 'a9efc0e1-911c-4201-a970-a6f858e624d6',\n 'ID_SERIAL': '36001405a9efc0e1911c4201a970a6f85',\n 'ID_SERIAL_SHORT': '6001405a9efc0e1911c4201a970a6f85',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x6001405a9efc0e19',\n 'ID_WWN_VENDOR_EXTENSION': '0x11c4201a970a6f85',\n 'ID_WWN_WITH_EXTENSION': '0x6001405a9efc0e1911c4201a970a6f85',\n 'MAJOR': '8',\n 'MINOR': '96',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdg',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193767499'} ;\n2025-06-28 18:19:08,230 INFO blivet/MainThread: scanning sdg 
(/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg)...\n2025-06-28 18:19:08,233 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdg ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,236 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,241 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdg ;\n2025-06-28 18:19:08,241 INFO blivet/MainThread: sdg is a disk\n2025-06-28 18:19:08,241 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 43\n2025-06-28 18:19:08,242 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 44\n2025-06-28 18:19:08,245 DEBUG blivet/MainThread: DiskDevice._set_format: sdg ; type: None ; current: None ;\n2025-06-28 18:19:08,248 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdg ; status: True ;\n2025-06-28 18:19:08,248 DEBUG blivet/MainThread: sdg sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg\n2025-06-28 18:19:08,251 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdg ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg ;\n2025-06-28 18:19:08,252 DEBUG blivet/MainThread: updated sdg size to 3 GiB (3 GiB)\n2025-06-28 18:19:08,252 INFO blivet/MainThread: added disk sdg (id 42) to device tree\n2025-06-28 18:19:08,252 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa95bd0) --\n name = sdg status = True id = 42\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 96 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg\n target size = 3 GiB path = /dev/sdg\n format args = [] original_format = None removable = False wwn = 6001405a9efc0e1911c4201a970a6f85\n2025-06-28 18:19:08,256 DEBUG blivet/MainThread: 
DeviceTree.handle_format: name: sdg ;\n2025-06-28 18:19:08,256 DEBUG blivet/MainThread: no type or existing type for sdg, bailing\n2025-06-28 18:19:08,259 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdh ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/scsi-360014058181fbe60fbb48f6bf65e97b7 '\n '/dev/disk/by-id/wwn-0x60014058181fbe60fbb48f6bf65e97b7 '\n '/dev/disk/by-diskseq/10',\n 'DEVNAME': '/dev/sdh',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '10',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk7',\n 'ID_MODEL_ENC': 'disk7\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': '8181fbe6-0fbb-48f6-bf65-e97b753d1f2b',\n 'ID_SERIAL': '360014058181fbe60fbb48f6bf65e97b7',\n 'ID_SERIAL_SHORT': '60014058181fbe60fbb48f6bf65e97b7',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x60014058181fbe60',\n 'ID_WWN_VENDOR_EXTENSION': '0xfbb48f6bf65e97b7',\n 'ID_WWN_WITH_EXTENSION': '0x60014058181fbe60fbb48f6bf65e97b7',\n 'MAJOR': '8',\n 'MINOR': '112',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdh',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193801085'} ;\n2025-06-28 18:19:08,259 INFO blivet/MainThread: scanning sdh (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh)...\n2025-06-28 18:19:08,262 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdh ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,265 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,270 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdh ;\n2025-06-28 18:19:08,270 INFO blivet/MainThread: sdh is a disk\n2025-06-28 18:19:08,270 DEBUG blivet/MainThread: 
get_format('None') returning DeviceFormat instance with object id 48\n2025-06-28 18:19:08,270 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 49\n2025-06-28 18:19:08,274 DEBUG blivet/MainThread: DiskDevice._set_format: sdh ; type: None ; current: None ;\n2025-06-28 18:19:08,278 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdh ; status: True ;\n2025-06-28 18:19:08,278 DEBUG blivet/MainThread: sdh sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh\n2025-06-28 18:19:08,281 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdh ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh ;\n2025-06-28 18:19:08,282 DEBUG blivet/MainThread: updated sdh size to 3 GiB (3 GiB)\n2025-06-28 18:19:08,282 INFO blivet/MainThread: added disk sdh (id 47) to device tree\n2025-06-28 18:19:08,282 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa95a90) --\n name = sdh status = True id = 47\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 112 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh\n target size = 3 GiB path = /dev/sdh\n format args = [] original_format = None removable = False wwn = 60014058181fbe60fbb48f6bf65e97b7\n2025-06-28 18:19:08,285 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdh ;\n2025-06-28 18:19:08,285 DEBUG blivet/MainThread: no type or existing type for sdh, bailing\n2025-06-28 18:19:08,288 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdi ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-diskseq/11 '\n '/dev/disk/by-id/scsi-3600140536dcfebe092746238bb16a3fa '\n '/dev/disk/by-id/wwn-0x600140536dcfebe092746238bb16a3fa',\n 'DEVNAME': '/dev/sdi',\n 'DEVPATH': 
'/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '11',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk8',\n 'ID_MODEL_ENC': 'disk8\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': '36dcfebe-0927-4623-8bb1-6a3fa2891f0c',\n 'ID_SERIAL': '3600140536dcfebe092746238bb16a3fa',\n 'ID_SERIAL_SHORT': '600140536dcfebe092746238bb16a3fa',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x600140536dcfebe0',\n 'ID_WWN_VENDOR_EXTENSION': '0x92746238bb16a3fa',\n 'ID_WWN_WITH_EXTENSION': '0x600140536dcfebe092746238bb16a3fa',\n 'MAJOR': '8',\n 'MINOR': '128',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdi',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193876095'} ;\n2025-06-28 18:19:08,288 INFO blivet/MainThread: scanning sdi (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi)...\n2025-06-28 18:19:08,292 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdi ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,295 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,299 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdi ;\n2025-06-28 18:19:08,300 INFO blivet/MainThread: sdi is a disk\n2025-06-28 18:19:08,300 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 53\n2025-06-28 18:19:08,300 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 54\n2025-06-28 18:19:08,303 DEBUG blivet/MainThread: DiskDevice._set_format: sdi ; type: None ; current: None ;\n2025-06-28 18:19:08,306 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdi ; status: True ;\n2025-06-28 18:19:08,307 DEBUG blivet/MainThread: sdi 
sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi\n2025-06-28 18:19:08,310 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdi ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi ;\n2025-06-28 18:19:08,311 DEBUG blivet/MainThread: updated sdi size to 3 GiB (3 GiB)\n2025-06-28 18:19:08,311 INFO blivet/MainThread: added disk sdi (id 52) to device tree\n2025-06-28 18:19:08,311 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa97b10) --\n name = sdi status = True id = 52\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 128 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi\n target size = 3 GiB path = /dev/sdi\n format args = [] original_format = None removable = False wwn = 600140536dcfebe092746238bb16a3fa\n2025-06-28 18:19:08,314 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdi ;\n2025-06-28 18:19:08,314 DEBUG blivet/MainThread: no type or existing type for sdi, bailing\n2025-06-28 18:19:08,317 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdj ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/wwn-0x600140580c834ee801b48198b71671c5 '\n '/dev/disk/by-id/scsi-3600140580c834ee801b48198b71671c5 '\n '/dev/disk/by-diskseq/12',\n 'DEVNAME': '/dev/sdj',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '12',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk9',\n 'ID_MODEL_ENC': 'disk9\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': '80c834ee-801b-4819-8b71-671c5aee19bd',\n 'ID_SERIAL': '3600140580c834ee801b48198b71671c5',\n 'ID_SERIAL_SHORT': '600140580c834ee801b48198b71671c5',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 
'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x600140580c834ee8',\n 'ID_WWN_VENDOR_EXTENSION': '0x01b48198b71671c5',\n 'ID_WWN_WITH_EXTENSION': '0x600140580c834ee801b48198b71671c5',\n 'MAJOR': '8',\n 'MINOR': '144',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdj',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193941063'} ;\n2025-06-28 18:19:08,317 INFO blivet/MainThread: scanning sdj (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj)...\n2025-06-28 18:19:08,321 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdj ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,324 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,328 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdj ;\n2025-06-28 18:19:08,329 INFO blivet/MainThread: sdj is a disk\n2025-06-28 18:19:08,329 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 58\n2025-06-28 18:19:08,329 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 59\n2025-06-28 18:19:08,333 DEBUG blivet/MainThread: DiskDevice._set_format: sdj ; type: None ; current: None ;\n2025-06-28 18:19:08,336 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdj ; status: True ;\n2025-06-28 18:19:08,336 DEBUG blivet/MainThread: sdj sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj\n2025-06-28 18:19:08,340 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdj ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj ;\n2025-06-28 18:19:08,340 DEBUG blivet/MainThread: updated sdj size to 3 GiB (3 GiB)\n2025-06-28 18:19:08,340 INFO blivet/MainThread: added disk sdj (id 57) to device tree\n2025-06-28 18:19:08,340 INFO blivet/MainThread: 
got device: DiskDevice instance (0x7f2cffa97c50) --\n name = sdj status = True id = 57\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 144 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj\n target size = 3 GiB path = /dev/sdj\n format args = [] original_format = None removable = False wwn = 600140580c834ee801b48198b71671c5\n2025-06-28 18:19:08,344 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdj ;\n2025-06-28 18:19:08,344 DEBUG blivet/MainThread: no type or existing type for sdj, bailing\n2025-06-28 18:19:08,347 DEBUG blivet/MainThread: DeviceTree.handle_device: name: xvda ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-diskseq/1',\n 'DEVNAME': '/dev/xvda',\n 'DEVPATH': '/devices/vbd-768/block/xvda',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '1',\n 'ID_PART_TABLE_TYPE': 'gpt',\n 'ID_PART_TABLE_UUID': '91c3c0f1-4957-4f21-b15a-28e9016b79c2',\n 'MAJOR': '202',\n 'MINOR': '0',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'xvda',\n 'SYS_PATH': '/sys/devices/vbd-768/block/xvda',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '8070613'} ;\n2025-06-28 18:19:08,347 INFO blivet/MainThread: scanning xvda (/sys/devices/vbd-768/block/xvda)...\n2025-06-28 18:19:08,351 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,354 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,357 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: xvda ;\n2025-06-28 18:19:08,357 WARNING blivet/MainThread: device/vendor is not a valid attribute\n2025-06-28 18:19:08,358 WARNING blivet/MainThread: device/model is not a valid attribute\n2025-06-28 18:19:08,358 INFO blivet/MainThread: xvda is a disk\n2025-06-28 18:19:08,358 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 63\n2025-06-28 18:19:08,358 DEBUG 
blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 64\n2025-06-28 18:19:08,361 DEBUG blivet/MainThread: DiskDevice._set_format: xvda ; type: None ; current: None ;\n2025-06-28 18:19:08,364 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: xvda ; status: True ;\n2025-06-28 18:19:08,364 DEBUG blivet/MainThread: xvda sysfs_path set to /sys/devices/vbd-768/block/xvda\n2025-06-28 18:19:08,368 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/xvda ; sysfs_path: /sys/devices/vbd-768/block/xvda ;\n2025-06-28 18:19:08,369 DEBUG blivet/MainThread: updated xvda size to 250 GiB (250 GiB)\n2025-06-28 18:19:08,369 INFO blivet/MainThread: added disk xvda (id 62) to device tree\n2025-06-28 18:19:08,369 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cffa97d90) --\n name = xvda status = True id = 62\n children = []\n parents = []\n uuid = None size = 250 GiB\n format = existing None\n major = 202 minor = 0 exists = True protected = False\n sysfs path = /sys/devices/vbd-768/block/xvda\n target size = 250 GiB path = /dev/xvda\n format args = [] original_format = None removable = False wwn = None\n2025-06-28 18:19:08,372 DEBUG blivet/MainThread: DeviceTree.handle_format: name: xvda ;\n2025-06-28 18:19:08,376 DEBUG blivet/MainThread: EFIFS.supported: supported: True ;\n2025-06-28 18:19:08,376 DEBUG blivet/MainThread: get_format('efi') returning EFIFS instance with object id 66\n2025-06-28 18:19:08,379 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;\n2025-06-28 18:19:08,380 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 67\n2025-06-28 18:19:08,384 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;\n2025-06-28 18:19:08,384 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 68\n2025-06-28 18:19:08,387 DEBUG blivet/MainThread: DiskLabelFormatPopulator.run: device: xvda ; label_type: gpt ;\n2025-06-28 
18:19:08,390 DEBUG blivet/MainThread: DiskDevice.setup: xvda ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:08,394 DEBUG blivet/MainThread: DiskLabel.__init__: uuid: 91c3c0f1-4957-4f21-b15a-28e9016b79c2 ; label: None ; device: /dev/xvda ; serial: None ; exists: True ;\n2025-06-28 18:19:08,407 DEBUG blivet/MainThread: Set pmbr_boot on parted.Disk instance --\n type: gpt primaryPartitionCount: 2\n lastPartitionNumber: 2 maxPrimaryPartitionCount: 128\n partitions: [, ]\n device: \n PedDisk: <_ped.Disk object at 0x7f2d01d06780>\n2025-06-28 18:19:08,469 DEBUG blivet/MainThread: get_format('disklabel') returning DiskLabel instance with object id 69\n2025-06-28 18:19:08,473 DEBUG blivet/MainThread: DiskDevice._set_format: xvda ; type: disklabel ; current: None ;\n2025-06-28 18:19:08,473 INFO blivet/MainThread: got format: existing gpt disklabel\n2025-06-28 18:19:08,473 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:08,476 DEBUG blivet/MainThread: DeviceTree.handle_device: name: xvda1 ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-partuuid/fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda '\n '/dev/disk/by-diskseq/1-part1',\n 'DEVNAME': '/dev/xvda1',\n 'DEVPATH': '/devices/vbd-768/block/xvda/xvda1',\n 'DEVTYPE': 'partition',\n 'DISKSEQ': '1',\n 'ID_PART_ENTRY_DISK': '202:0',\n 'ID_PART_ENTRY_NUMBER': '1',\n 'ID_PART_ENTRY_OFFSET': '2048',\n 'ID_PART_ENTRY_SCHEME': 'gpt',\n 'ID_PART_ENTRY_SIZE': '2048',\n 'ID_PART_ENTRY_TYPE': '21686148-6449-6e6f-744e-656564454649',\n 'ID_PART_ENTRY_UUID': 'fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda',\n 'ID_PART_TABLE_TYPE': 'gpt',\n 'ID_PART_TABLE_UUID': '91c3c0f1-4957-4f21-b15a-28e9016b79c2',\n 'MAJOR': 
'202',\n 'MINOR': '1',\n 'PARTN': '1',\n 'PARTUUID': 'fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'xvda1',\n 'SYS_PATH': '/sys/devices/vbd-768/block/xvda/xvda1',\n 'TAGS': ':systemd:',\n 'UDISKS_IGNORE': '1',\n 'USEC_INITIALIZED': '8070648'} ;\n2025-06-28 18:19:08,476 INFO blivet/MainThread: scanning xvda1 (/sys/devices/vbd-768/block/xvda/xvda1)...\n2025-06-28 18:19:08,476 WARNING blivet/MainThread: hidden is not a valid attribute\n2025-06-28 18:19:08,479 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda1 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,483 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,486 DEBUG blivet/MainThread: PartitionDevicePopulator.run: name: xvda1 ;\n2025-06-28 18:19:08,489 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda1 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,492 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,492 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:08,505 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:08,519 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,522 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 250 GiB disk xvda (62) with existing gpt disklabel\n2025-06-28 18:19:08,523 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:08,523 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 72\n2025-06-28 18:19:08,527 DEBUG blivet/MainThread: DiskDevice.add_child: name: xvda ; child: xvda1 ; kids: 0 ;\n2025-06-28 18:19:08,527 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 73\n2025-06-28 18:19:08,530 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda1 ; type: None ; current: None ;\n2025-06-28 18:19:08,534 DEBUG blivet/MainThread: PartitionDevice.update_sysfs_path: xvda1 ; status: True ;\n2025-06-28 18:19:08,535 DEBUG blivet/MainThread: xvda1 sysfs_path set to /sys/devices/vbd-768/block/xvda/xvda1\n2025-06-28 18:19:08,538 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda1 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda1 ;\n2025-06-28 18:19:08,538 DEBUG blivet/MainThread: updated xvda1 size to 1024 KiB (1024 KiB)\n2025-06-28 18:19:08,539 DEBUG blivet/MainThread: looking up parted Partition: /dev/xvda1\n2025-06-28 18:19:08,542 DEBUG blivet/MainThread: PartitionDevice.probe: xvda1 ; exists: True ;\n2025-06-28 18:19:08,545 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda1 ; flag: 1 ;\n2025-06-28 
18:19:08,549 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda1 ; flag: 10 ;\n2025-06-28 18:19:08,552 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda1 ; flag: 12 ;\n2025-06-28 18:19:08,552 DEBUG blivet/MainThread: get_format('biosboot') returning BIOSBoot instance with object id 75\n2025-06-28 18:19:08,557 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda1 ; type: biosboot ; current: None ;\n2025-06-28 18:19:08,560 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda1 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda1 ;\n2025-06-28 18:19:08,560 DEBUG blivet/MainThread: updated xvda1 size to 1024 KiB (1024 KiB)\n2025-06-28 18:19:08,560 INFO blivet/MainThread: added partition xvda1 (id 71) to device tree\n2025-06-28 18:19:08,561 INFO blivet/MainThread: got device: PartitionDevice instance (0x7f2cffa39e80) --\n name = xvda1 status = True id = 71\n children = []\n parents = ['existing 250 GiB disk xvda (62) with existing gpt disklabel']\n uuid = fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda size = 1024 KiB\n format = existing biosboot\n major = 202 minor = 1 exists = True protected = False\n sysfs path = /sys/devices/vbd-768/block/xvda/xvda1\n target size = 1024 KiB path = /dev/xvda1\n format args = [] original_format = None grow = None max size = 0 B bootable = None\n part type = 0 primary = None start sector = None end sector = None\n parted_partition = parted.Partition instance --\n disk: fileSystem: None\n number: 1 path: /dev/xvda1 type: 0\n name: active: True busy: False\n geometry: PedPartition: <_ped.Partition object at 0x7f2cfdd7a660>\n disk = existing 250 GiB disk xvda (62) with existing gpt disklabel\n start = 2048 end = 4095 length = 2048\n flags = bios_grub type_uuid = 21686148-6449-6e6f-744e-656564454649\n2025-06-28 18:19:08,564 DEBUG blivet/MainThread: DeviceTree.handle_format: name: xvda1 ;\n2025-06-28 18:19:08,565 DEBUG blivet/MainThread: no type or existing type for xvda1, 
bailing\n2025-06-28 18:19:08,568 DEBUG blivet/MainThread: DeviceTree.handle_device: name: xvda2 ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-partuuid/782cc2d2-7936-4e3f-9cb4-9758a83f53fa '\n '/dev/disk/by-uuid/8959a9f3-59d4-4eb7-8e53-e856bbc805e9 '\n '/dev/disk/by-diskseq/1-part2',\n 'DEVNAME': '/dev/xvda2',\n 'DEVPATH': '/devices/vbd-768/block/xvda/xvda2',\n 'DEVTYPE': 'partition',\n 'DISKSEQ': '1',\n 'ID_FS_BLOCKSIZE': '4096',\n 'ID_FS_LASTBLOCK': '1572096',\n 'ID_FS_SIZE': '6439305216',\n 'ID_FS_TYPE': 'ext4',\n 'ID_FS_USAGE': 'filesystem',\n 'ID_FS_UUID': '8959a9f3-59d4-4eb7-8e53-e856bbc805e9',\n 'ID_FS_UUID_ENC': '8959a9f3-59d4-4eb7-8e53-e856bbc805e9',\n 'ID_FS_VERSION': '1.0',\n 'ID_PART_ENTRY_DISK': '202:0',\n 'ID_PART_ENTRY_NUMBER': '2',\n 'ID_PART_ENTRY_OFFSET': '4096',\n 'ID_PART_ENTRY_SCHEME': 'gpt',\n 'ID_PART_ENTRY_SIZE': '524283871',\n 'ID_PART_ENTRY_TYPE': '0fc63daf-8483-4772-8e79-3d69d8477de4',\n 'ID_PART_ENTRY_UUID': '782cc2d2-7936-4e3f-9cb4-9758a83f53fa',\n 'ID_PART_TABLE_TYPE': 'gpt',\n 'ID_PART_TABLE_UUID': '91c3c0f1-4957-4f21-b15a-28e9016b79c2',\n 'MAJOR': '202',\n 'MINOR': '2',\n 'PARTN': '2',\n 'PARTUUID': '782cc2d2-7936-4e3f-9cb4-9758a83f53fa',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'xvda2',\n 'SYS_PATH': '/sys/devices/vbd-768/block/xvda/xvda2',\n 'TAGS': ':systemd:',\n 'UDISKS_AUTO': '0',\n 'USEC_INITIALIZED': '8070794'} ;\n2025-06-28 18:19:08,568 INFO blivet/MainThread: scanning xvda2 (/sys/devices/vbd-768/block/xvda/xvda2)...\n2025-06-28 18:19:08,568 WARNING blivet/MainThread: hidden is not a valid attribute\n2025-06-28 18:19:08,571 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda2 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,574 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,577 DEBUG blivet/MainThread: PartitionDevicePopulator.run: name: xvda2 ;\n2025-06-28 18:19:08,580 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda2 ; 
incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,583 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,583 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:08,596 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:08,611 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,614 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 250 GiB disk xvda (62) with existing gpt disklabel\n2025-06-28 18:19:08,614 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:08,614 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 78\n2025-06-28 18:19:08,618 DEBUG blivet/MainThread: DiskDevice.add_child: name: xvda ; child: xvda2 ; kids: 1 ;\n2025-06-28 18:19:08,618 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 79\n2025-06-28 18:19:08,621 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda2 ; type: None ; current: None ;\n2025-06-28 18:19:08,625 DEBUG blivet/MainThread: PartitionDevice.update_sysfs_path: xvda2 ; status: True ;\n2025-06-28 18:19:08,626 DEBUG blivet/MainThread: xvda2 sysfs_path set to /sys/devices/vbd-768/block/xvda/xvda2\n2025-06-28 18:19:08,629 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda2 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda2 ;\n2025-06-28 18:19:08,629 DEBUG blivet/MainThread: updated xvda2 size to 250 GiB (250 GiB)\n2025-06-28 18:19:08,629 DEBUG blivet/MainThread: looking up parted Partition: /dev/xvda2\n2025-06-28 18:19:08,633 
DEBUG blivet/MainThread: PartitionDevice.probe: xvda2 ; exists: True ;\n2025-06-28 18:19:08,637 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda2 ; flag: 1 ;\n2025-06-28 18:19:08,640 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda2 ; flag: 10 ;\n2025-06-28 18:19:08,646 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda2 ; flag: 12 ;\n2025-06-28 18:19:08,649 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda2 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda2 ;\n2025-06-28 18:19:08,650 DEBUG blivet/MainThread: updated xvda2 size to 250 GiB (250 GiB)\n2025-06-28 18:19:08,650 INFO blivet/MainThread: added partition xvda2 (id 77) to device tree\n2025-06-28 18:19:08,650 INFO blivet/MainThread: got device: PartitionDevice instance (0x7f2cfdd58910) --\n name = xvda2 status = True id = 77\n children = []\n parents = ['existing 250 GiB disk xvda (62) with existing gpt disklabel']\n uuid = 782cc2d2-7936-4e3f-9cb4-9758a83f53fa size = 250 GiB\n format = existing None\n major = 202 minor = 2 exists = True protected = False\n sysfs path = /sys/devices/vbd-768/block/xvda/xvda2\n target size = 250 GiB path = /dev/xvda2\n format args = [] original_format = None grow = None max size = 0 B bootable = None\n part type = 0 primary = None start sector = None end sector = None\n parted_partition = parted.Partition instance --\n disk: fileSystem: \n number: 2 path: /dev/xvda2 type: 0\n name: active: True busy: True\n geometry: PedPartition: <_ped.Partition object at 0x7f2cfdd7b0b0>\n disk = existing 250 GiB disk xvda (62) with existing gpt disklabel\n start = 4096 end = 524287966 length = 524283871\n flags = type_uuid = 0fc63daf-8483-4772-8e79-3d69d8477de4\n2025-06-28 18:19:08,653 DEBUG blivet/MainThread: DeviceTree.handle_format: name: xvda2 ;\n2025-06-28 18:19:08,657 DEBUG blivet/MainThread: EFIFS.supported: supported: True ;\n2025-06-28 18:19:08,657 DEBUG blivet/MainThread: get_format('efi') 
returning EFIFS instance with object id 81\n2025-06-28 18:19:08,661 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;\n2025-06-28 18:19:08,661 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 82\n2025-06-28 18:19:08,665 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;\n2025-06-28 18:19:08,665 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 83\n2025-06-28 18:19:08,666 WARNING blivet/MainThread: Stratis DBus service is not running\n2025-06-28 18:19:08,666 INFO blivet/MainThread: type detected on 'xvda2' is 'ext4'\n2025-06-28 18:19:08,670 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ;\n2025-06-28 18:19:08,670 DEBUG blivet/MainThread: get_format('ext4') returning Ext4FS instance with object id 84\n2025-06-28 18:19:08,673 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda2 ; type: ext4 ; current: None ;\n2025-06-28 18:19:08,673 INFO blivet/MainThread: got format: existing ext4 filesystem\n2025-06-28 18:19:08,676 DEBUG blivet/MainThread: DeviceTree.handle_device: name: zram0 ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-label/zram0 '\n '/dev/disk/by-uuid/9e4b39b6-8d8e-46c1-8981-c482cb670ee6 '\n '/dev/disk/by-diskseq/2',\n 'DEVNAME': '/dev/zram0',\n 'DEVPATH': '/devices/virtual/block/zram0',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '2',\n 'ID_FS_BLOCKSIZE': '4096',\n 'ID_FS_LABEL': 'zram0',\n 'ID_FS_LABEL_ENC': 'zram0',\n 'ID_FS_LASTBLOCK': '950784',\n 'ID_FS_SIZE': '3894407168',\n 'ID_FS_TYPE': 'swap',\n 'ID_FS_USAGE': 'other',\n 'ID_FS_UUID': '9e4b39b6-8d8e-46c1-8981-c482cb670ee6',\n 'ID_FS_UUID_ENC': '9e4b39b6-8d8e-46c1-8981-c482cb670ee6',\n 'ID_FS_VERSION': '1',\n 'MAJOR': '251',\n 'MINOR': '0',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'zram0',\n 'SYS_PATH': '/sys/devices/virtual/block/zram0',\n 'TAGS': ':systemd:',\n 'UDISKS_IGNORE': '1',\n 'USEC_INITIALIZED': '8070909'} ;\n2025-06-28 18:19:08,677 INFO blivet/MainThread: scanning 
zram0 (/sys/devices/virtual/block/zram0)...\n2025-06-28 18:19:08,680 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: zram0 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,683 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,684 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/udev.py:1087: DeprecationWarning: Will be removed in 1.0. Access properties with Device.properties.\n while device:\n\n2025-06-28 18:19:08,687 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: zram0 ;\n2025-06-28 18:19:08,687 WARNING blivet/MainThread: device/vendor is not a valid attribute\n2025-06-28 18:19:08,687 WARNING blivet/MainThread: device/model is not a valid attribute\n2025-06-28 18:19:08,687 INFO blivet/MainThread: zram0 is a disk\n2025-06-28 18:19:08,688 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 87\n2025-06-28 18:19:08,688 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 88\n2025-06-28 18:19:08,691 DEBUG blivet/MainThread: DiskDevice._set_format: zram0 ; type: None ; current: None ;\n2025-06-28 18:19:08,694 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: zram0 ; status: True ;\n2025-06-28 18:19:08,694 DEBUG blivet/MainThread: zram0 sysfs_path set to /sys/devices/virtual/block/zram0\n2025-06-28 18:19:08,698 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/zram0 ; sysfs_path: /sys/devices/virtual/block/zram0 ;\n2025-06-28 18:19:08,698 DEBUG blivet/MainThread: updated zram0 size to 3.63 GiB (3.63 GiB)\n2025-06-28 18:19:08,699 INFO blivet/MainThread: added disk zram0 (id 86) to device tree\n2025-06-28 18:19:08,699 INFO blivet/MainThread: got device: DiskDevice instance (0x7f2cfdd5ac10) --\n name = zram0 status = True id = 86\n children = []\n parents = []\n uuid = None size = 3.63 GiB\n format = existing None\n major = 251 minor = 0 exists = True protected = 
False\n sysfs path = /sys/devices/virtual/block/zram0\n target size = 3.63 GiB path = /dev/zram0\n format args = [] original_format = None removable = False wwn = None\n2025-06-28 18:19:08,702 DEBUG blivet/MainThread: DeviceTree.handle_format: name: zram0 ;\n2025-06-28 18:19:08,706 DEBUG blivet/MainThread: EFIFS.supported: supported: True ;\n2025-06-28 18:19:08,706 DEBUG blivet/MainThread: get_format('efi') returning EFIFS instance with object id 90\n2025-06-28 18:19:08,709 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;\n2025-06-28 18:19:08,709 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 91\n2025-06-28 18:19:08,713 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;\n2025-06-28 18:19:08,713 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 92\n2025-06-28 18:19:08,713 INFO blivet/MainThread: type detected on 'zram0' is 'swap'\n2025-06-28 18:19:08,716 DEBUG blivet/MainThread: SwapSpace.__init__: uuid: 9e4b39b6-8d8e-46c1-8981-c482cb670ee6 ; label: zram0 ; device: /dev/zram0 ; serial: None ; exists: True ;\n2025-06-28 18:19:08,716 DEBUG blivet/MainThread: get_format('swap') returning SwapSpace instance with object id 93\n2025-06-28 18:19:08,720 DEBUG blivet/MainThread: DiskDevice._set_format: zram0 ; type: swap ; current: None ;\n2025-06-28 18:19:08,720 INFO blivet/MainThread: got format: existing swap\n2025-06-28 18:19:08,720 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:08,732 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:08,743 INFO blivet/MainThread: edd: MBR signature on xvda is zero. 
new disk image?\n2025-06-28 18:19:08,744 INFO blivet/MainThread: edd: collected mbr signatures: {}\n2025-06-28 18:19:08,744 DEBUG blivet/MainThread: resolved 'UUID=8959a9f3-59d4-4eb7-8e53-e856bbc805e9' to 'xvda2' (partition)\n2025-06-28 18:19:08,748 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,752 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,755 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,758 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:08,758 DEBUG blivet/MainThread: failed to resolve '/dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat'\n2025-06-28 18:19:08,761 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: nest.test.redhat.com:/mnt/qa ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,764 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,767 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/nest.test.redhat.com:/mnt/qa ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,770 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:08,770 DEBUG blivet/MainThread: failed to resolve '/dev/nest.test.redhat.com:/mnt/qa'\n2025-06-28 18:19:08,773 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: vtap-eng01.storage.rdu2.redhat.com:/vol/engarchive ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,776 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,779 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: 
/dev/vtap-eng01.storage.rdu2.redhat.com:/vol/engarchive ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,782 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:08,782 DEBUG blivet/MainThread: failed to resolve '/dev/vtap-eng01.storage.rdu2.redhat.com:/vol/engarchive'\n2025-06-28 18:19:08,785 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: nest.test.redhat.com:/mnt/tpsdist ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,787 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,792 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/nest.test.redhat.com:/mnt/tpsdist ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,795 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:08,795 DEBUG blivet/MainThread: failed to resolve '/dev/nest.test.redhat.com:/mnt/tpsdist'\n2025-06-28 18:19:08,798 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat/brewroot ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,801 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,804 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat/brewroot ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,807 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:08,807 DEBUG blivet/MainThread: failed to resolve '/dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat/brewroot'\n2025-06-28 18:19:08,810 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_devops_brew_scratch_nfs_sm/scratch ; incomplete: False ; hidden: False ;\n2025-06-28 
18:19:08,813 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:08,816 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_devops_brew_scratch_nfs_sm/scratch ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:08,819 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:08,819 DEBUG blivet/MainThread: failed to resolve '/dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_devops_brew_scratch_nfs_sm/scratch'\n2025-06-28 18:19:08,819 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 95\n2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 96\n2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 97\n2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 98\n2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 99\n2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 100\n2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 101\n2025-06-28 18:19:08,819 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 102\n2025-06-28 18:19:13,885 INFO blivet/MainThread: sys.argv = 
['/tmp/ansible_fedora.linux_system_roles.blivet_payload_v99ku3ej/ansible_fedora.linux_system_roles.blivet_payload.zip/ansible_collections/fedora/linux_system_roles/plugins/modules/blivet.py']\n2025-06-28 18:19:13,901 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:13,915 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ;\n2025-06-28 18:19:13,916 DEBUG blivet/MainThread: get_format('ext4') returning Ext4FS instance with object id 0\n2025-06-28 18:19:13,918 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ;\n2025-06-28 18:19:13,919 DEBUG blivet/MainThread: trying to set new default fstype to 'ext4'\n2025-06-28 18:19:13,922 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ;\n2025-06-28 18:19:13,922 DEBUG blivet/MainThread: get_format('ext4') returning Ext4FS instance with object id 1\n2025-06-28 18:19:13,924 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ;\n2025-06-28 18:19:13,925 INFO blivet/MainThread: Fstab file '' does not exist, setting fstab read path to None\n2025-06-28 18:19:13,925 INFO program/MainThread: Running... 
lsblk --bytes -a -o NAME,SIZE,OWNER,GROUP,MODE,FSTYPE,LABEL,UUID,PARTUUID,MOUNTPOINT\n2025-06-28 18:19:13,950 INFO program/MainThread: stdout:\n2025-06-28 18:19:13,950 INFO program/MainThread: NAME SIZE OWNER GROUP MODE FSTYPE LABEL UUID PARTUUID MOUNTPOINT\n2025-06-28 18:19:13,950 INFO program/MainThread: sda 3221225472 root disk brw-rw---- \n2025-06-28 18:19:13,950 INFO program/MainThread: sdb 3221225472 root disk brw-rw---- \n2025-06-28 18:19:13,950 INFO program/MainThread: sdc 3221225472 root disk brw-rw---- \n2025-06-28 18:19:13,950 INFO program/MainThread: sdd 3221225472 root disk brw-rw---- \n2025-06-28 18:19:13,950 INFO program/MainThread: sde 3221225472 root disk brw-rw---- \n2025-06-28 18:19:13,950 INFO program/MainThread: sdf 3221225472 root disk brw-rw---- \n2025-06-28 18:19:13,950 INFO program/MainThread: sdg 3221225472 root disk brw-rw---- \n2025-06-28 18:19:13,950 INFO program/MainThread: sdh 3221225472 root disk brw-rw---- \n2025-06-28 18:19:13,950 INFO program/MainThread: sdi 3221225472 root disk brw-rw---- \n2025-06-28 18:19:13,950 INFO program/MainThread: sdj 3221225472 root disk brw-rw---- \n2025-06-28 18:19:13,950 INFO program/MainThread: sdk 3221225472 root disk brw-rw---- \n2025-06-28 18:19:13,951 INFO program/MainThread: sdl 3221225472 root disk brw-rw---- \n2025-06-28 18:19:13,951 INFO program/MainThread: xvda 268435456000 root disk brw-rw---- \n2025-06-28 18:19:13,951 INFO program/MainThread: |-xvda1 1048576 root disk brw-rw---- fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda \n2025-06-28 18:19:13,951 INFO program/MainThread: `-xvda2 268433341952 root disk brw-rw---- ext4 8959a9f3-59d4-4eb7-8e53-e856bbc805e9 782cc2d2-7936-4e3f-9cb4-9758a83f53fa /\n2025-06-28 18:19:13,951 INFO program/MainThread: zram0 3894411264 root disk brw-rw---- swap zram0 9e4b39b6-8d8e-46c1-8981-c482cb670ee6 [SWAP]\n2025-06-28 18:19:13,951 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:13,951 DEBUG blivet/MainThread: lsblk output:\nNAME SIZE OWNER GROUP MODE FSTYPE 
LABEL UUID PARTUUID MOUNTPOINT\nsda 3221225472 root disk brw-rw---- \nsdb 3221225472 root disk brw-rw---- \nsdc 3221225472 root disk brw-rw---- \nsdd 3221225472 root disk brw-rw---- \nsde 3221225472 root disk brw-rw---- \nsdf 3221225472 root disk brw-rw---- \nsdg 3221225472 root disk brw-rw---- \nsdh 3221225472 root disk brw-rw---- \nsdi 3221225472 root disk brw-rw---- \nsdj 3221225472 root disk brw-rw---- \nsdk 3221225472 root disk brw-rw---- \nsdl 3221225472 root disk brw-rw---- \nxvda 268435456000 root disk brw-rw---- \n|-xvda1 1048576 root disk brw-rw---- fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda \n`-xvda2 268433341952 root disk brw-rw---- ext4 8959a9f3-59d4-4eb7-8e53-e856bbc805e9 782cc2d2-7936-4e3f-9cb4-9758a83f53fa /\nzram0 3894411264 root disk brw-rw---- swap zram0 9e4b39b6-8d8e-46c1-8981-c482cb670ee6 [SWAP]\n\n2025-06-28 18:19:13,951 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:13,951 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:13,951 INFO blivet/MainThread: resetting Blivet (version 3.12.1) instance \n2025-06-28 18:19:13,951 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:13,951 INFO blivet/MainThread: DeviceTree.populate: ignored_disks is [] ; exclusive_disks is []\n2025-06-28 18:19:13,952 WARNING blivet/MainThread: Failed to call the update_volume_info method: libstoragemgmt functionality not available\n2025-06-28 18:19:13,952 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:13,961 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:13,973 INFO blivet/MainThread: devices to scan: ['sda', 'sdb', 'sdk', 'sdl', 'sdc', 'sdd', 'sde', 'sdf', 'sdg', 'sdh', 'sdi', 'sdj', 'xvda', 'xvda1', 'xvda2', 'zram0']\n2025-06-28 18:19:13,977 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sda ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/scsi-360014058847ce6dd73d4f01931e495d9 '\n '/dev/disk/by-id/wwn-0x60014058847ce6dd73d4f01931e495d9 '\n '/dev/disk/by-diskseq/3',\n 'DEVNAME': '/dev/sda',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '3',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk0',\n 'ID_MODEL_ENC': 'disk0\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': '8847ce6d-d73d-4f01-931e-495d93c2876b',\n 'ID_SERIAL': '360014058847ce6dd73d4f01931e495d9',\n 'ID_SERIAL_SHORT': '60014058847ce6dd73d4f01931e495d9',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x60014058847ce6dd',\n 'ID_WWN_VENDOR_EXTENSION': '0x73d4f01931e495d9',\n 'ID_WWN_WITH_EXTENSION': '0x60014058847ce6dd73d4f01931e495d9',\n 'MAJOR': '8',\n 'MINOR': '0',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sda',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193406043'} ;\n2025-06-28 18:19:13,977 INFO blivet/MainThread: scanning sda (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda)...\n2025-06-28 18:19:13,977 INFO program/MainThread: Running [3] lvm lvs --noheadings --nosuffix --nameprefixes --unquoted --units=b -a -o vg_name,lv_name,lv_uuid,lv_size,lv_attr,segtype,origin,pool_lv,data_lv,metadata_lv,role,move_pv,data_percent,metadata_percent,copy_percent,lv_tags 
--config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:14,003 INFO program/MainThread: stdout[3]: \n2025-06-28 18:19:14,003 INFO program/MainThread: stderr[3]: \n2025-06-28 18:19:14,003 INFO program/MainThread: ...done [3] (exit code: 0)\n2025-06-28 18:19:14,007 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sda ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,010 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,011 INFO program/MainThread: Running [4] mdadm --version ...\n2025-06-28 18:19:14,013 INFO program/MainThread: stdout[4]: \n2025-06-28 18:19:14,013 INFO program/MainThread: stderr[4]: mdadm - v4.3 - 2024-02-15\n\n2025-06-28 18:19:14,013 INFO program/MainThread: ...done [4] (exit code: 0)\n2025-06-28 18:19:14,013 INFO program/MainThread: Running [5] dmsetup --version ...\n2025-06-28 18:19:14,016 INFO program/MainThread: stdout[5]: Library version: 1.02.204 (2025-01-14)\nDriver version: 4.49.0\n\n2025-06-28 18:19:14,017 INFO program/MainThread: stderr[5]: \n2025-06-28 18:19:14,017 INFO program/MainThread: ...done [5] (exit code: 0)\n2025-06-28 18:19:14,023 INFO blivet/MainThread: failed to get initiator name from iscsi firmware: UDisks iSCSI functionality not available\n2025-06-28 18:19:14,025 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/udev.py:1087: DeprecationWarning: Will be removed in 1.0. 
Access properties with Device.properties.\n while device:\n\n2025-06-28 18:19:14,030 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sda ;\n2025-06-28 18:19:14,031 INFO blivet/MainThread: sda is a disk\n2025-06-28 18:19:14,031 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:14,031 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 3\n2025-06-28 18:19:14,031 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 4\n2025-06-28 18:19:14,035 DEBUG blivet/MainThread: DiskDevice._set_format: sda ; type: None ; current: None ;\n2025-06-28 18:19:14,039 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sda ; status: True ;\n2025-06-28 18:19:14,039 DEBUG blivet/MainThread: sda sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda\n2025-06-28 18:19:14,042 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sda ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda ;\n2025-06-28 18:19:14,042 DEBUG blivet/MainThread: updated sda size to 3 GiB (3 GiB)\n2025-06-28 18:19:14,043 INFO blivet/MainThread: added disk sda (id 2) to device tree\n2025-06-28 18:19:14,043 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40678c0) --\n name = sda status = True id = 2\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 0 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda\n target size = 3 GiB path = /dev/sda\n format args = [] original_format = 
None removable = False wwn = 60014058847ce6dd73d4f01931e495d9\n2025-06-28 18:19:14,047 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sda ;\n2025-06-28 18:19:14,047 DEBUG blivet/MainThread: no type or existing type for sda, bailing\n2025-06-28 18:19:14,050 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdb ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/wwn-0x6001405c48b47cd2cda4408882faf8c6 '\n '/dev/disk/by-diskseq/4 '\n '/dev/disk/by-id/scsi-36001405c48b47cd2cda4408882faf8c6',\n 'DEVNAME': '/dev/sdb',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '4',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk1',\n 'ID_MODEL_ENC': 'disk1\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': 'c48b47cd-2cda-4408-882f-af8c68d83c74',\n 'ID_SERIAL': '36001405c48b47cd2cda4408882faf8c6',\n 'ID_SERIAL_SHORT': '6001405c48b47cd2cda4408882faf8c6',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x6001405c48b47cd2',\n 'ID_WWN_VENDOR_EXTENSION': '0xcda4408882faf8c6',\n 'ID_WWN_WITH_EXTENSION': '0x6001405c48b47cd2cda4408882faf8c6',\n 'MAJOR': '8',\n 'MINOR': '16',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdb',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193440060'} ;\n2025-06-28 18:19:14,050 INFO blivet/MainThread: scanning sdb (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb)...\n2025-06-28 18:19:14,053 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdb ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,056 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,061 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdb ;\n2025-06-28 
18:19:14,061 INFO blivet/MainThread: sdb is a disk\n2025-06-28 18:19:14,062 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 8\n2025-06-28 18:19:14,062 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 9\n2025-06-28 18:19:14,065 DEBUG blivet/MainThread: DiskDevice._set_format: sdb ; type: None ; current: None ;\n2025-06-28 18:19:14,068 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdb ; status: True ;\n2025-06-28 18:19:14,068 DEBUG blivet/MainThread: sdb sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb\n2025-06-28 18:19:14,072 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdb ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb ;\n2025-06-28 18:19:14,072 DEBUG blivet/MainThread: updated sdb size to 3 GiB (3 GiB)\n2025-06-28 18:19:14,072 INFO blivet/MainThread: added disk sdb (id 7) to device tree\n2025-06-28 18:19:14,072 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c2710) --\n name = sdb status = True id = 7\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 16 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb\n target size = 3 GiB path = /dev/sdb\n format args = [] original_format = None removable = False wwn = 6001405c48b47cd2cda4408882faf8c6\n2025-06-28 18:19:14,076 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdb ;\n2025-06-28 18:19:14,076 DEBUG blivet/MainThread: no type or existing type for sdb, bailing\n2025-06-28 18:19:14,079 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdk ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/scsi-3600140532b7553a2ede45408ac592f3f '\n '/dev/disk/by-diskseq/13 '\n '/dev/disk/by-id/wwn-0x600140532b7553a2ede45408ac592f3f',\n 
'DEVNAME': '/dev/sdk',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '13',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk10',\n 'ID_MODEL_ENC': 'disk10\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': '32b7553a-2ede-4540-8ac5-92f3f51fb680',\n 'ID_SERIAL': '3600140532b7553a2ede45408ac592f3f',\n 'ID_SERIAL_SHORT': '600140532b7553a2ede45408ac592f3f',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x600140532b7553a2',\n 'ID_WWN_VENDOR_EXTENSION': '0xede45408ac592f3f',\n 'ID_WWN_WITH_EXTENSION': '0x600140532b7553a2ede45408ac592f3f',\n 'MAJOR': '8',\n 'MINOR': '160',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdk',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '194002206'} ;\n2025-06-28 18:19:14,079 INFO blivet/MainThread: scanning sdk (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk)...\n2025-06-28 18:19:14,082 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdk ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,085 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,090 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdk ;\n2025-06-28 18:19:14,090 INFO blivet/MainThread: sdk is a disk\n2025-06-28 18:19:14,090 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 13\n2025-06-28 18:19:14,091 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 14\n2025-06-28 18:19:14,094 DEBUG blivet/MainThread: DiskDevice._set_format: sdk ; type: None ; current: None ;\n2025-06-28 18:19:14,097 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdk ; status: True ;\n2025-06-28 18:19:14,097 
DEBUG blivet/MainThread: sdk sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk\n2025-06-28 18:19:14,101 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdk ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk ;\n2025-06-28 18:19:14,101 DEBUG blivet/MainThread: updated sdk size to 3 GiB (3 GiB)\n2025-06-28 18:19:14,101 INFO blivet/MainThread: added disk sdk (id 12) to device tree\n2025-06-28 18:19:14,101 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c2350) --\n name = sdk status = True id = 12\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 160 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:10/block/sdk\n target size = 3 GiB path = /dev/sdk\n format args = [] original_format = None removable = False wwn = 600140532b7553a2ede45408ac592f3f\n2025-06-28 18:19:14,105 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdk ;\n2025-06-28 18:19:14,105 DEBUG blivet/MainThread: no type or existing type for sdk, bailing\n2025-06-28 18:19:14,108 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdl ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/scsi-36001405d599e4585dd1440490d5145a0 '\n '/dev/disk/by-diskseq/14 '\n '/dev/disk/by-id/wwn-0x6001405d599e4585dd1440490d5145a0',\n 'DEVNAME': '/dev/sdl',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '14',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk11',\n 'ID_MODEL_ENC': 'disk11\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': 'd599e458-5dd1-4404-90d5-145a01a2b253',\n 'ID_SERIAL': '36001405d599e4585dd1440490d5145a0',\n 'ID_SERIAL_SHORT': '6001405d599e4585dd1440490d5145a0',\n 
'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x6001405d599e4585',\n 'ID_WWN_VENDOR_EXTENSION': '0xdd1440490d5145a0',\n 'ID_WWN_WITH_EXTENSION': '0x6001405d599e4585dd1440490d5145a0',\n 'MAJOR': '8',\n 'MINOR': '176',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdl',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '194051087'} ;\n2025-06-28 18:19:14,108 INFO blivet/MainThread: scanning sdl (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl)...\n2025-06-28 18:19:14,111 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdl ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,114 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,119 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdl ;\n2025-06-28 18:19:14,119 INFO blivet/MainThread: sdl is a disk\n2025-06-28 18:19:14,119 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 18\n2025-06-28 18:19:14,119 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 19\n2025-06-28 18:19:14,123 DEBUG blivet/MainThread: DiskDevice._set_format: sdl ; type: None ; current: None ;\n2025-06-28 18:19:14,126 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdl ; status: True ;\n2025-06-28 18:19:14,126 DEBUG blivet/MainThread: sdl sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl\n2025-06-28 18:19:14,130 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdl ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl ;\n2025-06-28 18:19:14,130 DEBUG blivet/MainThread: updated sdl size to 3 GiB (3 GiB)\n2025-06-28 18:19:14,130 INFO blivet/MainThread: added disk sdl (id 17) to device 
tree\n2025-06-28 18:19:14,130 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c1950) --\n name = sdl status = True id = 17\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 176 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:11/block/sdl\n target size = 3 GiB path = /dev/sdl\n format args = [] original_format = None removable = False wwn = 6001405d599e4585dd1440490d5145a0\n2025-06-28 18:19:14,133 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdl ;\n2025-06-28 18:19:14,133 DEBUG blivet/MainThread: no type or existing type for sdl, bailing\n2025-06-28 18:19:14,137 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdc ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/wwn-0x6001405582e0de585294686b36ae1d1e '\n '/dev/disk/by-diskseq/5 '\n '/dev/disk/by-id/scsi-36001405582e0de585294686b36ae1d1e',\n 'DEVNAME': '/dev/sdc',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '5',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk2',\n 'ID_MODEL_ENC': 'disk2\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': '582e0de5-8529-4686-b36a-e1d1e173bd3f',\n 'ID_SERIAL': '36001405582e0de585294686b36ae1d1e',\n 'ID_SERIAL_SHORT': '6001405582e0de585294686b36ae1d1e',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x6001405582e0de58',\n 'ID_WWN_VENDOR_EXTENSION': '0x5294686b36ae1d1e',\n 'ID_WWN_WITH_EXTENSION': '0x6001405582e0de585294686b36ae1d1e',\n 'MAJOR': '8',\n 'MINOR': '32',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdc',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193503847'} ;\n2025-06-28 
18:19:14,137 INFO blivet/MainThread: scanning sdc (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc)...\n2025-06-28 18:19:14,140 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdc ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,143 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,148 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdc ;\n2025-06-28 18:19:14,148 INFO blivet/MainThread: sdc is a disk\n2025-06-28 18:19:14,148 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 23\n2025-06-28 18:19:14,148 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 24\n2025-06-28 18:19:14,152 DEBUG blivet/MainThread: DiskDevice._set_format: sdc ; type: None ; current: None ;\n2025-06-28 18:19:14,155 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdc ; status: True ;\n2025-06-28 18:19:14,155 DEBUG blivet/MainThread: sdc sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc\n2025-06-28 18:19:14,158 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdc ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc ;\n2025-06-28 18:19:14,159 DEBUG blivet/MainThread: updated sdc size to 3 GiB (3 GiB)\n2025-06-28 18:19:14,159 INFO blivet/MainThread: added disk sdc (id 22) to device tree\n2025-06-28 18:19:14,159 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c20d0) --\n name = sdc status = True id = 22\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 32 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc\n target size = 3 GiB path = /dev/sdc\n format args = [] original_format = None removable = False wwn = 6001405582e0de585294686b36ae1d1e\n2025-06-28 
18:19:14,162 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdc ;\n2025-06-28 18:19:14,162 DEBUG blivet/MainThread: no type or existing type for sdc, bailing\n2025-06-28 18:19:14,166 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdd ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-diskseq/6 '\n '/dev/disk/by-id/scsi-36001405acd2ba9b1a974f55a9704061c '\n '/dev/disk/by-id/wwn-0x6001405acd2ba9b1a974f55a9704061c',\n 'DEVNAME': '/dev/sdd',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '6',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk3',\n 'ID_MODEL_ENC': 'disk3\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': 'acd2ba9b-1a97-4f55-a970-4061c67c3912',\n 'ID_SERIAL': '36001405acd2ba9b1a974f55a9704061c',\n 'ID_SERIAL_SHORT': '6001405acd2ba9b1a974f55a9704061c',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x6001405acd2ba9b1',\n 'ID_WWN_VENDOR_EXTENSION': '0xa974f55a9704061c',\n 'ID_WWN_WITH_EXTENSION': '0x6001405acd2ba9b1a974f55a9704061c',\n 'MAJOR': '8',\n 'MINOR': '48',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdd',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193576323'} ;\n2025-06-28 18:19:14,166 INFO blivet/MainThread: scanning sdd (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd)...\n2025-06-28 18:19:14,169 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdd ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,172 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,177 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdd ;\n2025-06-28 18:19:14,177 INFO blivet/MainThread: sdd is a disk\n2025-06-28 
18:19:14,177 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 28\n2025-06-28 18:19:14,177 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 29\n2025-06-28 18:19:14,181 DEBUG blivet/MainThread: DiskDevice._set_format: sdd ; type: None ; current: None ;\n2025-06-28 18:19:14,184 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdd ; status: True ;\n2025-06-28 18:19:14,184 DEBUG blivet/MainThread: sdd sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd\n2025-06-28 18:19:14,188 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdd ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd ;\n2025-06-28 18:19:14,188 DEBUG blivet/MainThread: updated sdd size to 3 GiB (3 GiB)\n2025-06-28 18:19:14,188 INFO blivet/MainThread: added disk sdd (id 27) to device tree\n2025-06-28 18:19:14,188 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c1f90) --\n name = sdd status = True id = 27\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 48 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd\n target size = 3 GiB path = /dev/sdd\n format args = [] original_format = None removable = False wwn = 6001405acd2ba9b1a974f55a9704061c\n2025-06-28 18:19:14,191 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdd ;\n2025-06-28 18:19:14,191 DEBUG blivet/MainThread: no type or existing type for sdd, bailing\n2025-06-28 18:19:14,194 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sde ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/scsi-36001405378e6ca643c443e0b9c840399 '\n '/dev/disk/by-diskseq/7 '\n '/dev/disk/by-id/wwn-0x6001405378e6ca643c443e0b9c840399',\n 'DEVNAME': '/dev/sde',\n 'DEVPATH': 
'/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '7',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk4',\n 'ID_MODEL_ENC': 'disk4\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': '378e6ca6-43c4-43e0-b9c8-4039999596e3',\n 'ID_SERIAL': '36001405378e6ca643c443e0b9c840399',\n 'ID_SERIAL_SHORT': '6001405378e6ca643c443e0b9c840399',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x6001405378e6ca64',\n 'ID_WWN_VENDOR_EXTENSION': '0x3c443e0b9c840399',\n 'ID_WWN_WITH_EXTENSION': '0x6001405378e6ca643c443e0b9c840399',\n 'MAJOR': '8',\n 'MINOR': '64',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sde',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193646170'} ;\n2025-06-28 18:19:14,195 INFO blivet/MainThread: scanning sde (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde)...\n2025-06-28 18:19:14,198 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sde ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,201 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,206 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sde ;\n2025-06-28 18:19:14,206 INFO blivet/MainThread: sde is a disk\n2025-06-28 18:19:14,206 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 33\n2025-06-28 18:19:14,206 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 34\n2025-06-28 18:19:14,209 DEBUG blivet/MainThread: DiskDevice._set_format: sde ; type: None ; current: None ;\n2025-06-28 18:19:14,213 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sde ; status: True ;\n2025-06-28 18:19:14,213 DEBUG blivet/MainThread: sde sysfs_path 
set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde\n2025-06-28 18:19:14,216 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sde ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde ;\n2025-06-28 18:19:14,216 DEBUG blivet/MainThread: updated sde size to 3 GiB (3 GiB)\n2025-06-28 18:19:14,217 INFO blivet/MainThread: added disk sde (id 32) to device tree\n2025-06-28 18:19:14,217 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c1e50) --\n name = sde status = True id = 32\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 64 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde\n target size = 3 GiB path = /dev/sde\n format args = [] original_format = None removable = False wwn = 6001405378e6ca643c443e0b9c840399\n2025-06-28 18:19:14,220 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sde ;\n2025-06-28 18:19:14,220 DEBUG blivet/MainThread: no type or existing type for sde, bailing\n2025-06-28 18:19:14,223 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdf ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/wwn-0x6001405f858954a0e784149995a198ad '\n '/dev/disk/by-diskseq/8 '\n '/dev/disk/by-id/scsi-36001405f858954a0e784149995a198ad',\n 'DEVNAME': '/dev/sdf',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '8',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk5',\n 'ID_MODEL_ENC': 'disk5\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': 'f858954a-0e78-4149-995a-198adb87ba16',\n 'ID_SERIAL': '36001405f858954a0e784149995a198ad',\n 'ID_SERIAL_SHORT': '6001405f858954a0e784149995a198ad',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 
'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x6001405f858954a0',\n 'ID_WWN_VENDOR_EXTENSION': '0xe784149995a198ad',\n 'ID_WWN_WITH_EXTENSION': '0x6001405f858954a0e784149995a198ad',\n 'MAJOR': '8',\n 'MINOR': '80',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdf',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193708083'} ;\n2025-06-28 18:19:14,223 INFO blivet/MainThread: scanning sdf (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf)...\n2025-06-28 18:19:14,227 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdf ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,230 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,235 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdf ;\n2025-06-28 18:19:14,235 INFO blivet/MainThread: sdf is a disk\n2025-06-28 18:19:14,235 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 38\n2025-06-28 18:19:14,235 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 39\n2025-06-28 18:19:14,238 DEBUG blivet/MainThread: DiskDevice._set_format: sdf ; type: None ; current: None ;\n2025-06-28 18:19:14,242 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdf ; status: True ;\n2025-06-28 18:19:14,242 DEBUG blivet/MainThread: sdf sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf\n2025-06-28 18:19:14,245 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdf ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf ;\n2025-06-28 18:19:14,245 DEBUG blivet/MainThread: updated sdf size to 3 GiB (3 GiB)\n2025-06-28 18:19:14,246 INFO blivet/MainThread: added disk sdf (id 37) to device tree\n2025-06-28 18:19:14,246 INFO blivet/MainThread: got device: 
DiskDevice instance (0x7fbad40c1d10) --\n name = sdf status = True id = 37\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 80 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf\n target size = 3 GiB path = /dev/sdf\n format args = [] original_format = None removable = False wwn = 6001405f858954a0e784149995a198ad\n2025-06-28 18:19:14,249 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdf ;\n2025-06-28 18:19:14,249 DEBUG blivet/MainThread: no type or existing type for sdf, bailing\n2025-06-28 18:19:14,253 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdg ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-diskseq/9 '\n '/dev/disk/by-id/scsi-36001405a9efc0e1911c4201a970a6f85 '\n '/dev/disk/by-id/wwn-0x6001405a9efc0e1911c4201a970a6f85',\n 'DEVNAME': '/dev/sdg',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '9',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk6',\n 'ID_MODEL_ENC': 'disk6\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': 'a9efc0e1-911c-4201-a970-a6f858e624d6',\n 'ID_SERIAL': '36001405a9efc0e1911c4201a970a6f85',\n 'ID_SERIAL_SHORT': '6001405a9efc0e1911c4201a970a6f85',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x6001405a9efc0e19',\n 'ID_WWN_VENDOR_EXTENSION': '0x11c4201a970a6f85',\n 'ID_WWN_WITH_EXTENSION': '0x6001405a9efc0e1911c4201a970a6f85',\n 'MAJOR': '8',\n 'MINOR': '96',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdg',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193767499'} ;\n2025-06-28 18:19:14,253 INFO blivet/MainThread: scanning sdg 
(/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg)...\n2025-06-28 18:19:14,256 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdg ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,259 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,264 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdg ;\n2025-06-28 18:19:14,264 INFO blivet/MainThread: sdg is a disk\n2025-06-28 18:19:14,264 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 43\n2025-06-28 18:19:14,264 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 44\n2025-06-28 18:19:14,267 DEBUG blivet/MainThread: DiskDevice._set_format: sdg ; type: None ; current: None ;\n2025-06-28 18:19:14,271 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdg ; status: True ;\n2025-06-28 18:19:14,271 DEBUG blivet/MainThread: sdg sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg\n2025-06-28 18:19:14,275 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdg ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg ;\n2025-06-28 18:19:14,275 DEBUG blivet/MainThread: updated sdg size to 3 GiB (3 GiB)\n2025-06-28 18:19:14,275 INFO blivet/MainThread: added disk sdg (id 42) to device tree\n2025-06-28 18:19:14,275 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c1bd0) --\n name = sdg status = True id = 42\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 96 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg\n target size = 3 GiB path = /dev/sdg\n format args = [] original_format = None removable = False wwn = 6001405a9efc0e1911c4201a970a6f85\n2025-06-28 18:19:14,278 DEBUG blivet/MainThread: 
DeviceTree.handle_format: name: sdg ;\n2025-06-28 18:19:14,278 DEBUG blivet/MainThread: no type or existing type for sdg, bailing\n2025-06-28 18:19:14,281 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdh ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/scsi-360014058181fbe60fbb48f6bf65e97b7 '\n '/dev/disk/by-id/wwn-0x60014058181fbe60fbb48f6bf65e97b7 '\n '/dev/disk/by-diskseq/10',\n 'DEVNAME': '/dev/sdh',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '10',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk7',\n 'ID_MODEL_ENC': 'disk7\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': '8181fbe6-0fbb-48f6-bf65-e97b753d1f2b',\n 'ID_SERIAL': '360014058181fbe60fbb48f6bf65e97b7',\n 'ID_SERIAL_SHORT': '60014058181fbe60fbb48f6bf65e97b7',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x60014058181fbe60',\n 'ID_WWN_VENDOR_EXTENSION': '0xfbb48f6bf65e97b7',\n 'ID_WWN_WITH_EXTENSION': '0x60014058181fbe60fbb48f6bf65e97b7',\n 'MAJOR': '8',\n 'MINOR': '112',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdh',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193801085'} ;\n2025-06-28 18:19:14,282 INFO blivet/MainThread: scanning sdh (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh)...\n2025-06-28 18:19:14,285 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdh ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,288 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,293 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdh ;\n2025-06-28 18:19:14,293 INFO blivet/MainThread: sdh is a disk\n2025-06-28 18:19:14,293 DEBUG blivet/MainThread: 
get_format('None') returning DeviceFormat instance with object id 48\n2025-06-28 18:19:14,293 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 49\n2025-06-28 18:19:14,297 DEBUG blivet/MainThread: DiskDevice._set_format: sdh ; type: None ; current: None ;\n2025-06-28 18:19:14,300 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdh ; status: True ;\n2025-06-28 18:19:14,300 DEBUG blivet/MainThread: sdh sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh\n2025-06-28 18:19:14,304 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdh ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh ;\n2025-06-28 18:19:14,304 DEBUG blivet/MainThread: updated sdh size to 3 GiB (3 GiB)\n2025-06-28 18:19:14,304 INFO blivet/MainThread: added disk sdh (id 47) to device tree\n2025-06-28 18:19:14,304 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c1a90) --\n name = sdh status = True id = 47\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 112 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh\n target size = 3 GiB path = /dev/sdh\n format args = [] original_format = None removable = False wwn = 60014058181fbe60fbb48f6bf65e97b7\n2025-06-28 18:19:14,307 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdh ;\n2025-06-28 18:19:14,308 DEBUG blivet/MainThread: no type or existing type for sdh, bailing\n2025-06-28 18:19:14,311 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdi ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/scsi-3600140536dcfebe092746238bb16a3fa '\n '/dev/disk/by-diskseq/11 '\n '/dev/disk/by-id/wwn-0x600140536dcfebe092746238bb16a3fa',\n 'DEVNAME': '/dev/sdi',\n 'DEVPATH': 
'/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '11',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk8',\n 'ID_MODEL_ENC': 'disk8\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': '36dcfebe-0927-4623-8bb1-6a3fa2891f0c',\n 'ID_SERIAL': '3600140536dcfebe092746238bb16a3fa',\n 'ID_SERIAL_SHORT': '600140536dcfebe092746238bb16a3fa',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x600140536dcfebe0',\n 'ID_WWN_VENDOR_EXTENSION': '0x92746238bb16a3fa',\n 'ID_WWN_WITH_EXTENSION': '0x600140536dcfebe092746238bb16a3fa',\n 'MAJOR': '8',\n 'MINOR': '128',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdi',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193876095'} ;\n2025-06-28 18:19:14,311 INFO blivet/MainThread: scanning sdi (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi)...\n2025-06-28 18:19:14,314 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdi ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,317 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,322 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdi ;\n2025-06-28 18:19:14,322 INFO blivet/MainThread: sdi is a disk\n2025-06-28 18:19:14,322 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 53\n2025-06-28 18:19:14,322 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 54\n2025-06-28 18:19:14,325 DEBUG blivet/MainThread: DiskDevice._set_format: sdi ; type: None ; current: None ;\n2025-06-28 18:19:14,329 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdi ; status: True ;\n2025-06-28 18:19:14,329 DEBUG blivet/MainThread: sdi 
sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi\n2025-06-28 18:19:14,333 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdi ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi ;\n2025-06-28 18:19:14,333 DEBUG blivet/MainThread: updated sdi size to 3 GiB (3 GiB)\n2025-06-28 18:19:14,333 INFO blivet/MainThread: added disk sdi (id 52) to device tree\n2025-06-28 18:19:14,333 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c3b10) --\n name = sdi status = True id = 52\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 128 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi\n target size = 3 GiB path = /dev/sdi\n format args = [] original_format = None removable = False wwn = 600140536dcfebe092746238bb16a3fa\n2025-06-28 18:19:14,336 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdi ;\n2025-06-28 18:19:14,336 DEBUG blivet/MainThread: no type or existing type for sdi, bailing\n2025-06-28 18:19:14,339 DEBUG blivet/MainThread: DeviceTree.handle_device: name: sdj ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-id/scsi-3600140580c834ee801b48198b71671c5 '\n '/dev/disk/by-diskseq/12 '\n '/dev/disk/by-id/wwn-0x600140580c834ee801b48198b71671c5',\n 'DEVNAME': '/dev/sdj',\n 'DEVPATH': '/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '12',\n 'ID_BUS': 'scsi',\n 'ID_MODEL': 'disk9',\n 'ID_MODEL_ENC': 'disk9\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20\\\\x20',\n 'ID_REVISION': '4.0',\n 'ID_SCSI': '1',\n 'ID_SCSI_SERIAL': '80c834ee-801b-4819-8b71-671c5aee19bd',\n 'ID_SERIAL': '3600140580c834ee801b48198b71671c5',\n 'ID_SERIAL_SHORT': '600140580c834ee801b48198b71671c5',\n 'ID_TARGET_PORT': '0',\n 'ID_TYPE': 
'disk',\n 'ID_VENDOR': 'LIO-ORG',\n 'ID_VENDOR_ENC': 'LIO-ORG\\\\x20',\n 'ID_WWN': '0x600140580c834ee8',\n 'ID_WWN_VENDOR_EXTENSION': '0x01b48198b71671c5',\n 'ID_WWN_WITH_EXTENSION': '0x600140580c834ee801b48198b71671c5',\n 'MAJOR': '8',\n 'MINOR': '144',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'sdj',\n 'SYS_PATH': '/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '193941063'} ;\n2025-06-28 18:19:14,340 INFO blivet/MainThread: scanning sdj (/sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj)...\n2025-06-28 18:19:14,343 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdj ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,346 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,351 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: sdj ;\n2025-06-28 18:19:14,351 INFO blivet/MainThread: sdj is a disk\n2025-06-28 18:19:14,351 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 58\n2025-06-28 18:19:14,351 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 59\n2025-06-28 18:19:14,354 DEBUG blivet/MainThread: DiskDevice._set_format: sdj ; type: None ; current: None ;\n2025-06-28 18:19:14,358 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdj ; status: True ;\n2025-06-28 18:19:14,358 DEBUG blivet/MainThread: sdj sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj\n2025-06-28 18:19:14,362 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/sdj ; sysfs_path: /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj ;\n2025-06-28 18:19:14,362 DEBUG blivet/MainThread: updated sdj size to 3 GiB (3 GiB)\n2025-06-28 18:19:14,362 INFO blivet/MainThread: added disk sdj (id 57) to device tree\n2025-06-28 18:19:14,362 INFO blivet/MainThread: 
got device: DiskDevice instance (0x7fbad40c3c50) --\n name = sdj status = True id = 57\n children = []\n parents = []\n uuid = None size = 3 GiB\n format = existing None\n major = 8 minor = 144 exists = True protected = False\n sysfs path = /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj\n target size = 3 GiB path = /dev/sdj\n format args = [] original_format = None removable = False wwn = 600140580c834ee801b48198b71671c5\n2025-06-28 18:19:14,366 DEBUG blivet/MainThread: DeviceTree.handle_format: name: sdj ;\n2025-06-28 18:19:14,366 DEBUG blivet/MainThread: no type or existing type for sdj, bailing\n2025-06-28 18:19:14,369 DEBUG blivet/MainThread: DeviceTree.handle_device: name: xvda ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-diskseq/1',\n 'DEVNAME': '/dev/xvda',\n 'DEVPATH': '/devices/vbd-768/block/xvda',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '1',\n 'ID_PART_TABLE_TYPE': 'gpt',\n 'ID_PART_TABLE_UUID': '91c3c0f1-4957-4f21-b15a-28e9016b79c2',\n 'MAJOR': '202',\n 'MINOR': '0',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'xvda',\n 'SYS_PATH': '/sys/devices/vbd-768/block/xvda',\n 'TAGS': ':systemd:',\n 'USEC_INITIALIZED': '8070613'} ;\n2025-06-28 18:19:14,369 INFO blivet/MainThread: scanning xvda (/sys/devices/vbd-768/block/xvda)...\n2025-06-28 18:19:14,372 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,375 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,379 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: xvda ;\n2025-06-28 18:19:14,379 WARNING blivet/MainThread: device/vendor is not a valid attribute\n2025-06-28 18:19:14,379 WARNING blivet/MainThread: device/model is not a valid attribute\n2025-06-28 18:19:14,379 INFO blivet/MainThread: xvda is a disk\n2025-06-28 18:19:14,379 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 63\n2025-06-28 18:19:14,379 DEBUG 
blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 64\n2025-06-28 18:19:14,383 DEBUG blivet/MainThread: DiskDevice._set_format: xvda ; type: None ; current: None ;\n2025-06-28 18:19:14,386 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: xvda ; status: True ;\n2025-06-28 18:19:14,386 DEBUG blivet/MainThread: xvda sysfs_path set to /sys/devices/vbd-768/block/xvda\n2025-06-28 18:19:14,389 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/xvda ; sysfs_path: /sys/devices/vbd-768/block/xvda ;\n2025-06-28 18:19:14,389 DEBUG blivet/MainThread: updated xvda size to 250 GiB (250 GiB)\n2025-06-28 18:19:14,390 INFO blivet/MainThread: added disk xvda (id 62) to device tree\n2025-06-28 18:19:14,390 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad40c3d90) --\n name = xvda status = True id = 62\n children = []\n parents = []\n uuid = None size = 250 GiB\n format = existing None\n major = 202 minor = 0 exists = True protected = False\n sysfs path = /sys/devices/vbd-768/block/xvda\n target size = 250 GiB path = /dev/xvda\n format args = [] original_format = None removable = False wwn = None\n2025-06-28 18:19:14,393 DEBUG blivet/MainThread: DeviceTree.handle_format: name: xvda ;\n2025-06-28 18:19:14,397 DEBUG blivet/MainThread: EFIFS.supported: supported: True ;\n2025-06-28 18:19:14,397 DEBUG blivet/MainThread: get_format('efi') returning EFIFS instance with object id 66\n2025-06-28 18:19:14,401 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;\n2025-06-28 18:19:14,401 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 67\n2025-06-28 18:19:14,404 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;\n2025-06-28 18:19:14,404 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 68\n2025-06-28 18:19:14,408 DEBUG blivet/MainThread: DiskLabelFormatPopulator.run: device: xvda ; label_type: gpt ;\n2025-06-28 
18:19:14,411 DEBUG blivet/MainThread: DiskDevice.setup: xvda ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:14,414 DEBUG blivet/MainThread: DiskLabel.__init__: uuid: 91c3c0f1-4957-4f21-b15a-28e9016b79c2 ; label: None ; device: /dev/xvda ; serial: None ; exists: True ;\n2025-06-28 18:19:14,428 DEBUG blivet/MainThread: Set pmbr_boot on parted.Disk instance --\n type: gpt primaryPartitionCount: 2\n lastPartitionNumber: 2 maxPrimaryPartitionCount: 128\n partitions: [, ]\n device: \n PedDisk: <_ped.Disk object at 0x7fbad2382040>\n2025-06-28 18:19:14,487 DEBUG blivet/MainThread: get_format('disklabel') returning DiskLabel instance with object id 69\n2025-06-28 18:19:14,491 DEBUG blivet/MainThread: DiskDevice._set_format: xvda ; type: disklabel ; current: None ;\n2025-06-28 18:19:14,491 INFO blivet/MainThread: got format: existing gpt disklabel\n2025-06-28 18:19:14,491 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:14,494 DEBUG blivet/MainThread: DeviceTree.handle_device: name: xvda1 ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-partuuid/fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda '\n '/dev/disk/by-diskseq/1-part1',\n 'DEVNAME': '/dev/xvda1',\n 'DEVPATH': '/devices/vbd-768/block/xvda/xvda1',\n 'DEVTYPE': 'partition',\n 'DISKSEQ': '1',\n 'ID_PART_ENTRY_DISK': '202:0',\n 'ID_PART_ENTRY_NUMBER': '1',\n 'ID_PART_ENTRY_OFFSET': '2048',\n 'ID_PART_ENTRY_SCHEME': 'gpt',\n 'ID_PART_ENTRY_SIZE': '2048',\n 'ID_PART_ENTRY_TYPE': '21686148-6449-6e6f-744e-656564454649',\n 'ID_PART_ENTRY_UUID': 'fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda',\n 'ID_PART_TABLE_TYPE': 'gpt',\n 'ID_PART_TABLE_UUID': '91c3c0f1-4957-4f21-b15a-28e9016b79c2',\n 'MAJOR': 
'202',\n 'MINOR': '1',\n 'PARTN': '1',\n 'PARTUUID': 'fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'xvda1',\n 'SYS_PATH': '/sys/devices/vbd-768/block/xvda/xvda1',\n 'TAGS': ':systemd:',\n 'UDISKS_IGNORE': '1',\n 'USEC_INITIALIZED': '8070648'} ;\n2025-06-28 18:19:14,495 INFO blivet/MainThread: scanning xvda1 (/sys/devices/vbd-768/block/xvda/xvda1)...\n2025-06-28 18:19:14,495 WARNING blivet/MainThread: hidden is not a valid attribute\n2025-06-28 18:19:14,498 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda1 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,501 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,504 DEBUG blivet/MainThread: PartitionDevicePopulator.run: name: xvda1 ;\n2025-06-28 18:19:14,507 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda1 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,510 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,510 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:14,523 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:14,537 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,540 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 250 GiB disk xvda (62) with existing gpt disklabel\n2025-06-28 18:19:14,541 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:14,541 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 72\n2025-06-28 18:19:14,545 DEBUG blivet/MainThread: DiskDevice.add_child: name: xvda ; child: xvda1 ; kids: 0 ;\n2025-06-28 18:19:14,545 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 73\n2025-06-28 18:19:14,548 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda1 ; type: None ; current: None ;\n2025-06-28 18:19:14,552 DEBUG blivet/MainThread: PartitionDevice.update_sysfs_path: xvda1 ; status: True ;\n2025-06-28 18:19:14,552 DEBUG blivet/MainThread: xvda1 sysfs_path set to /sys/devices/vbd-768/block/xvda/xvda1\n2025-06-28 18:19:14,556 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda1 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda1 ;\n2025-06-28 18:19:14,556 DEBUG blivet/MainThread: updated xvda1 size to 1024 KiB (1024 KiB)\n2025-06-28 18:19:14,556 DEBUG blivet/MainThread: looking up parted Partition: /dev/xvda1\n2025-06-28 18:19:14,560 DEBUG blivet/MainThread: PartitionDevice.probe: xvda1 ; exists: True ;\n2025-06-28 18:19:14,563 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda1 ; flag: 1 ;\n2025-06-28 
18:19:14,567 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda1 ; flag: 10 ;\n2025-06-28 18:19:14,570 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda1 ; flag: 12 ;\n2025-06-28 18:19:14,570 DEBUG blivet/MainThread: get_format('biosboot') returning BIOSBoot instance with object id 75\n2025-06-28 18:19:14,574 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda1 ; type: biosboot ; current: None ;\n2025-06-28 18:19:14,577 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda1 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda1 ;\n2025-06-28 18:19:14,578 DEBUG blivet/MainThread: updated xvda1 size to 1024 KiB (1024 KiB)\n2025-06-28 18:19:14,578 INFO blivet/MainThread: added partition xvda1 (id 71) to device tree\n2025-06-28 18:19:14,578 INFO blivet/MainThread: got device: PartitionDevice instance (0x7fbad4065e80) --\n name = xvda1 status = True id = 71\n children = []\n parents = ['existing 250 GiB disk xvda (62) with existing gpt disklabel']\n uuid = fac66fc8-84f4-4d4d-ab19-71e5cf3f4dda size = 1024 KiB\n format = existing biosboot\n major = 202 minor = 1 exists = True protected = False\n sysfs path = /sys/devices/vbd-768/block/xvda/xvda1\n target size = 1024 KiB path = /dev/xvda1\n format args = [] original_format = None grow = None max size = 0 B bootable = None\n part type = 0 primary = None start sector = None end sector = None\n parted_partition = parted.Partition instance --\n disk: fileSystem: None\n number: 1 path: /dev/xvda1 type: 0\n name: active: True busy: False\n geometry: PedPartition: <_ped.Partition object at 0x7fbad23865c0>\n disk = existing 250 GiB disk xvda (62) with existing gpt disklabel\n start = 2048 end = 4095 length = 2048\n flags = bios_grub type_uuid = 21686148-6449-6e6f-744e-656564454649\n2025-06-28 18:19:14,582 DEBUG blivet/MainThread: DeviceTree.handle_format: name: xvda1 ;\n2025-06-28 18:19:14,582 DEBUG blivet/MainThread: no type or existing type for xvda1, 
bailing\n2025-06-28 18:19:14,586 DEBUG blivet/MainThread: DeviceTree.handle_device: name: xvda2 ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-diskseq/1-part2 '\n '/dev/disk/by-uuid/8959a9f3-59d4-4eb7-8e53-e856bbc805e9 '\n '/dev/disk/by-partuuid/782cc2d2-7936-4e3f-9cb4-9758a83f53fa',\n 'DEVNAME': '/dev/xvda2',\n 'DEVPATH': '/devices/vbd-768/block/xvda/xvda2',\n 'DEVTYPE': 'partition',\n 'DISKSEQ': '1',\n 'ID_FS_BLOCKSIZE': '4096',\n 'ID_FS_LASTBLOCK': '65535483',\n 'ID_FS_SIZE': '268433338368',\n 'ID_FS_TYPE': 'ext4',\n 'ID_FS_USAGE': 'filesystem',\n 'ID_FS_UUID': '8959a9f3-59d4-4eb7-8e53-e856bbc805e9',\n 'ID_FS_UUID_ENC': '8959a9f3-59d4-4eb7-8e53-e856bbc805e9',\n 'ID_FS_VERSION': '1.0',\n 'ID_PART_ENTRY_DISK': '202:0',\n 'ID_PART_ENTRY_NUMBER': '2',\n 'ID_PART_ENTRY_OFFSET': '4096',\n 'ID_PART_ENTRY_SCHEME': 'gpt',\n 'ID_PART_ENTRY_SIZE': '524283871',\n 'ID_PART_ENTRY_TYPE': '0fc63daf-8483-4772-8e79-3d69d8477de4',\n 'ID_PART_ENTRY_UUID': '782cc2d2-7936-4e3f-9cb4-9758a83f53fa',\n 'ID_PART_TABLE_TYPE': 'gpt',\n 'ID_PART_TABLE_UUID': '91c3c0f1-4957-4f21-b15a-28e9016b79c2',\n 'MAJOR': '202',\n 'MINOR': '2',\n 'PARTN': '2',\n 'PARTUUID': '782cc2d2-7936-4e3f-9cb4-9758a83f53fa',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'xvda2',\n 'SYS_PATH': '/sys/devices/vbd-768/block/xvda/xvda2',\n 'TAGS': ':systemd:',\n 'UDISKS_AUTO': '0',\n 'USEC_INITIALIZED': '8070794'} ;\n2025-06-28 18:19:14,586 INFO blivet/MainThread: scanning xvda2 (/sys/devices/vbd-768/block/xvda/xvda2)...\n2025-06-28 18:19:14,586 WARNING blivet/MainThread: hidden is not a valid attribute\n2025-06-28 18:19:14,589 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda2 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,592 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,596 DEBUG blivet/MainThread: PartitionDevicePopulator.run: name: xvda2 ;\n2025-06-28 18:19:14,599 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda2 ; 
incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,602 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,602 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:14,614 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:14,628 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: xvda ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,632 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 250 GiB disk xvda (62) with existing gpt disklabel\n2025-06-28 18:19:14,632 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:14,632 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 78\n2025-06-28 18:19:14,636 DEBUG blivet/MainThread: DiskDevice.add_child: name: xvda ; child: xvda2 ; kids: 1 ;\n2025-06-28 18:19:14,636 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 79\n2025-06-28 18:19:14,639 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda2 ; type: None ; current: None ;\n2025-06-28 18:19:14,642 DEBUG blivet/MainThread: PartitionDevice.update_sysfs_path: xvda2 ; status: True ;\n2025-06-28 18:19:14,642 DEBUG blivet/MainThread: xvda2 sysfs_path set to /sys/devices/vbd-768/block/xvda/xvda2\n2025-06-28 18:19:14,646 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda2 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda2 ;\n2025-06-28 18:19:14,647 DEBUG blivet/MainThread: updated xvda2 size to 250 GiB (250 GiB)\n2025-06-28 18:19:14,647 DEBUG blivet/MainThread: looking up parted Partition: /dev/xvda2\n2025-06-28 18:19:14,650 
DEBUG blivet/MainThread: PartitionDevice.probe: xvda2 ; exists: True ;\n2025-06-28 18:19:14,653 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda2 ; flag: 1 ;\n2025-06-28 18:19:14,657 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda2 ; flag: 10 ;\n2025-06-28 18:19:14,663 DEBUG blivet/MainThread: PartitionDevice.get_flag: path: /dev/xvda2 ; flag: 12 ;\n2025-06-28 18:19:14,666 DEBUG blivet/MainThread: PartitionDevice.read_current_size: exists: True ; path: /dev/xvda2 ; sysfs_path: /sys/devices/vbd-768/block/xvda/xvda2 ;\n2025-06-28 18:19:14,666 DEBUG blivet/MainThread: updated xvda2 size to 250 GiB (250 GiB)\n2025-06-28 18:19:14,666 INFO blivet/MainThread: added partition xvda2 (id 77) to device tree\n2025-06-28 18:19:14,666 INFO blivet/MainThread: got device: PartitionDevice instance (0x7fbad2360910) --\n name = xvda2 status = True id = 77\n children = []\n parents = ['existing 250 GiB disk xvda (62) with existing gpt disklabel']\n uuid = 782cc2d2-7936-4e3f-9cb4-9758a83f53fa size = 250 GiB\n format = existing None\n major = 202 minor = 2 exists = True protected = False\n sysfs path = /sys/devices/vbd-768/block/xvda/xvda2\n target size = 250 GiB path = /dev/xvda2\n format args = [] original_format = None grow = None max size = 0 B bootable = None\n part type = 0 primary = None start sector = None end sector = None\n parted_partition = parted.Partition instance --\n disk: fileSystem: \n number: 2 path: /dev/xvda2 type: 0\n name: active: True busy: True\n geometry: PedPartition: <_ped.Partition object at 0x7fbad2387010>\n disk = existing 250 GiB disk xvda (62) with existing gpt disklabel\n start = 4096 end = 524287966 length = 524283871\n flags = type_uuid = 0fc63daf-8483-4772-8e79-3d69d8477de4\n2025-06-28 18:19:14,670 DEBUG blivet/MainThread: DeviceTree.handle_format: name: xvda2 ;\n2025-06-28 18:19:14,673 DEBUG blivet/MainThread: EFIFS.supported: supported: True ;\n2025-06-28 18:19:14,673 DEBUG blivet/MainThread: get_format('efi') 
returning EFIFS instance with object id 81\n2025-06-28 18:19:14,677 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;\n2025-06-28 18:19:14,677 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 82\n2025-06-28 18:19:14,681 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;\n2025-06-28 18:19:14,681 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 83\n2025-06-28 18:19:14,682 WARNING blivet/MainThread: Stratis DBus service is not running\n2025-06-28 18:19:14,682 INFO blivet/MainThread: type detected on 'xvda2' is 'ext4'\n2025-06-28 18:19:14,686 DEBUG blivet/MainThread: Ext4FS.supported: supported: True ;\n2025-06-28 18:19:14,686 DEBUG blivet/MainThread: get_format('ext4') returning Ext4FS instance with object id 84\n2025-06-28 18:19:14,689 DEBUG blivet/MainThread: PartitionDevice._set_format: xvda2 ; type: ext4 ; current: None ;\n2025-06-28 18:19:14,689 INFO blivet/MainThread: got format: existing ext4 filesystem\n2025-06-28 18:19:14,693 DEBUG blivet/MainThread: DeviceTree.handle_device: name: zram0 ; info: {'CURRENT_TAGS': ':systemd:',\n 'DEVLINKS': '/dev/disk/by-uuid/9e4b39b6-8d8e-46c1-8981-c482cb670ee6 '\n '/dev/disk/by-label/zram0 /dev/disk/by-diskseq/2',\n 'DEVNAME': '/dev/zram0',\n 'DEVPATH': '/devices/virtual/block/zram0',\n 'DEVTYPE': 'disk',\n 'DISKSEQ': '2',\n 'ID_FS_BLOCKSIZE': '4096',\n 'ID_FS_LABEL': 'zram0',\n 'ID_FS_LABEL_ENC': 'zram0',\n 'ID_FS_LASTBLOCK': '950784',\n 'ID_FS_SIZE': '3894407168',\n 'ID_FS_TYPE': 'swap',\n 'ID_FS_USAGE': 'other',\n 'ID_FS_UUID': '9e4b39b6-8d8e-46c1-8981-c482cb670ee6',\n 'ID_FS_UUID_ENC': '9e4b39b6-8d8e-46c1-8981-c482cb670ee6',\n 'ID_FS_VERSION': '1',\n 'MAJOR': '251',\n 'MINOR': '0',\n 'SUBSYSTEM': 'block',\n 'SYS_NAME': 'zram0',\n 'SYS_PATH': '/sys/devices/virtual/block/zram0',\n 'TAGS': ':systemd:',\n 'UDISKS_IGNORE': '1',\n 'USEC_INITIALIZED': '8070909'} ;\n2025-06-28 18:19:14,693 INFO blivet/MainThread: scanning 
zram0 (/sys/devices/virtual/block/zram0)...\n2025-06-28 18:19:14,696 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: zram0 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,699 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,699 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/udev.py:1087: DeprecationWarning: Will be removed in 1.0. Access properties with Device.properties.\n while device:\n\n2025-06-28 18:19:14,703 DEBUG blivet/MainThread: DiskDevicePopulator.run: name: zram0 ;\n2025-06-28 18:19:14,703 WARNING blivet/MainThread: device/vendor is not a valid attribute\n2025-06-28 18:19:14,703 WARNING blivet/MainThread: device/model is not a valid attribute\n2025-06-28 18:19:14,703 INFO blivet/MainThread: zram0 is a disk\n2025-06-28 18:19:14,703 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 87\n2025-06-28 18:19:14,703 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 88\n2025-06-28 18:19:14,707 DEBUG blivet/MainThread: DiskDevice._set_format: zram0 ; type: None ; current: None ;\n2025-06-28 18:19:14,710 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: zram0 ; status: True ;\n2025-06-28 18:19:14,710 DEBUG blivet/MainThread: zram0 sysfs_path set to /sys/devices/virtual/block/zram0\n2025-06-28 18:19:14,713 DEBUG blivet/MainThread: DiskDevice.read_current_size: exists: True ; path: /dev/zram0 ; sysfs_path: /sys/devices/virtual/block/zram0 ;\n2025-06-28 18:19:14,714 DEBUG blivet/MainThread: updated zram0 size to 3.63 GiB (3.63 GiB)\n2025-06-28 18:19:14,714 INFO blivet/MainThread: added disk zram0 (id 86) to device tree\n2025-06-28 18:19:14,714 INFO blivet/MainThread: got device: DiskDevice instance (0x7fbad2362c10) --\n name = zram0 status = True id = 86\n children = []\n parents = []\n uuid = None size = 3.63 GiB\n format = existing None\n major = 251 minor = 0 exists = True protected = 
False\n sysfs path = /sys/devices/virtual/block/zram0\n target size = 3.63 GiB path = /dev/zram0\n format args = [] original_format = None removable = False wwn = None\n2025-06-28 18:19:14,717 DEBUG blivet/MainThread: DeviceTree.handle_format: name: zram0 ;\n2025-06-28 18:19:14,721 DEBUG blivet/MainThread: EFIFS.supported: supported: True ;\n2025-06-28 18:19:14,721 DEBUG blivet/MainThread: get_format('efi') returning EFIFS instance with object id 90\n2025-06-28 18:19:14,725 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;\n2025-06-28 18:19:14,725 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 91\n2025-06-28 18:19:14,728 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;\n2025-06-28 18:19:14,728 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 92\n2025-06-28 18:19:14,728 INFO blivet/MainThread: type detected on 'zram0' is 'swap'\n2025-06-28 18:19:14,732 DEBUG blivet/MainThread: SwapSpace.__init__: uuid: 9e4b39b6-8d8e-46c1-8981-c482cb670ee6 ; label: zram0 ; device: /dev/zram0 ; serial: None ; exists: True ;\n2025-06-28 18:19:14,732 DEBUG blivet/MainThread: get_format('swap') returning SwapSpace instance with object id 93\n2025-06-28 18:19:14,735 DEBUG blivet/MainThread: DiskDevice._set_format: zram0 ; type: swap ; current: None ;\n2025-06-28 18:19:14,735 INFO blivet/MainThread: got format: existing swap\n2025-06-28 18:19:14,736 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:14,749 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:14,760 INFO blivet/MainThread: edd: MBR signature on xvda is zero. 
new disk image?\n2025-06-28 18:19:14,760 INFO blivet/MainThread: edd: collected mbr signatures: {}\n2025-06-28 18:19:14,760 DEBUG blivet/MainThread: resolved 'UUID=8959a9f3-59d4-4eb7-8e53-e856bbc805e9' to 'xvda2' (partition)\n2025-06-28 18:19:14,764 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,767 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,770 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,773 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:14,773 DEBUG blivet/MainThread: failed to resolve '/dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat'\n2025-06-28 18:19:14,776 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: nest.test.redhat.com:/mnt/qa ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,779 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,783 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/nest.test.redhat.com:/mnt/qa ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,786 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:14,786 DEBUG blivet/MainThread: failed to resolve '/dev/nest.test.redhat.com:/mnt/qa'\n2025-06-28 18:19:14,789 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: vtap-eng01.storage.rdu2.redhat.com:/vol/engarchive ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,791 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,794 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: 
/dev/vtap-eng01.storage.rdu2.redhat.com:/vol/engarchive ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,797 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:14,797 DEBUG blivet/MainThread: failed to resolve '/dev/vtap-eng01.storage.rdu2.redhat.com:/vol/engarchive'\n2025-06-28 18:19:14,800 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: nest.test.redhat.com:/mnt/tpsdist ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,804 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,807 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/nest.test.redhat.com:/mnt/tpsdist ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,810 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:14,810 DEBUG blivet/MainThread: failed to resolve '/dev/nest.test.redhat.com:/mnt/tpsdist'\n2025-06-28 18:19:14,813 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat/brewroot ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,816 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,819 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat/brewroot ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,822 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:14,822 DEBUG blivet/MainThread: failed to resolve '/dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_engineering_sm/devarchive/redhat/brewroot'\n2025-06-28 18:19:14,825 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_devops_brew_scratch_nfs_sm/scratch ; incomplete: False ; hidden: False ;\n2025-06-28 
18:19:14,828 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,831 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_devops_brew_scratch_nfs_sm/scratch ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,834 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:14,834 DEBUG blivet/MainThread: failed to resolve '/dev/ntap-rdu2-c01-eng01-nfs01b.storage.rdu2.redhat.com:/bos_eng01_devops_brew_scratch_nfs_sm/scratch'\n2025-06-28 18:19:14,837 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg1 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,840 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,843 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg1 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,846 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:14,846 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg1'\n2025-06-28 18:19:14,849 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sda ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,852 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sda (2)\n2025-06-28 18:19:14,852 DEBUG blivet/MainThread: resolved 'sda' to 'sda' (disk)\n2025-06-28 18:19:14,855 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdb ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,858 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sdb (7)\n2025-06-28 18:19:14,858 DEBUG blivet/MainThread: resolved 'sdb' to 'sdb' (disk)\n2025-06-28 18:19:14,861 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdc ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,864 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned 
existing 3 GiB disk sdc (22)\n2025-06-28 18:19:14,864 DEBUG blivet/MainThread: resolved 'sdc' to 'sdc' (disk)\n2025-06-28 18:19:14,864 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:14,867 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__:\n2025-06-28 18:19:14,867 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 95\n2025-06-28 18:19:14,868 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 98\n2025-06-28 18:19:14,872 DEBUG blivet/MainThread: DiskDevice._set_format: sda ; type: None ; current: None ;\n2025-06-28 18:19:14,872 INFO blivet/MainThread: registered action: [96] destroy format None on disk sda (id 2)\n2025-06-28 18:19:14,877 DEBUG blivet/MainThread: DiskDevice._set_format: sda ; type: lvmpv ; current: None ;\n2025-06-28 18:19:14,877 INFO blivet/MainThread: registered action: [97] create format lvmpv on disk sda (id 2)\n2025-06-28 18:19:14,882 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__:\n2025-06-28 18:19:14,882 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 99\n2025-06-28 18:19:14,882 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 102\n2025-06-28 18:19:14,885 DEBUG blivet/MainThread: DiskDevice._set_format: sdb ; type: None ; current: None ;\n2025-06-28 18:19:14,885 INFO blivet/MainThread: registered action: [100] destroy format None on disk sdb (id 7)\n2025-06-28 18:19:14,888 DEBUG blivet/MainThread: DiskDevice._set_format: sdb ; type: lvmpv ; current: None ;\n2025-06-28 18:19:14,888 INFO blivet/MainThread: registered action: [101] create format lvmpv on 
disk sdb (id 7)\n2025-06-28 18:19:14,891 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__:\n2025-06-28 18:19:14,891 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 103\n2025-06-28 18:19:14,892 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 106\n2025-06-28 18:19:14,895 DEBUG blivet/MainThread: DiskDevice._set_format: sdc ; type: None ; current: None ;\n2025-06-28 18:19:14,895 INFO blivet/MainThread: registered action: [104] destroy format None on disk sdc (id 22)\n2025-06-28 18:19:14,898 DEBUG blivet/MainThread: DiskDevice._set_format: sdc ; type: lvmpv ; current: None ;\n2025-06-28 18:19:14,898 INFO blivet/MainThread: registered action: [105] create format lvmpv on disk sdc (id 22)\n2025-06-28 18:19:14,898 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 108\n2025-06-28 18:19:14,903 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg1 ; parent: sda ;\n2025-06-28 18:19:14,906 DEBUG blivet/MainThread: DiskDevice.add_child: name: sda ; child: test_vg1 ; kids: 0 ;\n2025-06-28 18:19:14,910 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg1 ; parent: sdb ;\n2025-06-28 18:19:14,913 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdb ; child: test_vg1 ; kids: 0 ;\n2025-06-28 18:19:14,917 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg1 ; parent: sdc ;\n2025-06-28 18:19:14,921 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdc ; child: test_vg1 ; kids: 0 ;\n2025-06-28 18:19:14,921 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 109\n2025-06-28 18:19:14,924 DEBUG blivet/MainThread: LVMVolumeGroupDevice._set_format: test_vg1 ; type: None ; current: None ;\n2025-06-28 18:19:14,925 INFO blivet/MainThread: added lvmvg test_vg1 (id 107) to device tree\n2025-06-28 18:19:14,925 INFO blivet/MainThread: registered action: [111] create device 
lvmvg test_vg1 (id 107)\n2025-06-28 18:19:14,928 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg1-lv1 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,931 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,935 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg1-lv1 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,938 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:14,939 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg1-lv1'\n2025-06-28 18:19:14,939 DEBUG blivet/MainThread: test_vg1 size is 8.99 GiB\n2025-06-28 18:19:14,939 DEBUG blivet/MainThread: vg test_vg1 has 8.99 GiB free\n2025-06-28 18:19:14,939 DEBUG blivet.ansible/MainThread: size: 1.35 GiB ; -565.028901734104\n2025-06-28 18:19:14,939 DEBUG blivet/MainThread: test_vg1 size is 8.99 GiB\n2025-06-28 18:19:14,940 DEBUG blivet/MainThread: vg test_vg1 has 8.99 GiB free\n2025-06-28 18:19:14,943 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:14,943 INFO program/MainThread: [libmkod] custom logging function 0x7fbad4c89570 registered\n\n2025-06-28 18:19:14,943 INFO program/MainThread: [libmkod] context 0x562602e737f0 released\n\n2025-06-28 18:19:14,943 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 112\n2025-06-28 18:19:14,946 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:14,947 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 114\n2025-06-28 18:19:14,950 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg1 ; child: lv1 ; kids: 0 ;\n2025-06-28 18:19:14,955 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg1-lv1 ; type: xfs ; current: None ;\n2025-06-28 18:19:14,955 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 116\n2025-06-28 18:19:14,959 DEBUG 
blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg1 ; child: lv1 ; kids: 1 ;\n2025-06-28 18:19:14,962 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg1 ; child: lv1 ; kids: 0 ;\n2025-06-28 18:19:14,965 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg1-lv1 ; type: xfs ; current: None ;\n2025-06-28 18:19:14,969 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg1-lv1 ; sysfs_path: ;\n2025-06-28 18:19:14,969 DEBUG blivet/MainThread: test_vg1 size is 8.99 GiB\n2025-06-28 18:19:14,970 DEBUG blivet/MainThread: vg test_vg1 has 8.99 GiB free\n2025-06-28 18:19:14,970 DEBUG blivet/MainThread: Adding test_vg1-lv1/1.35 GiB to test_vg1\n2025-06-28 18:19:14,970 INFO blivet/MainThread: added lvmlv test_vg1-lv1 (id 113) to device tree\n2025-06-28 18:19:14,970 INFO blivet/MainThread: registered action: [118] create device lvmlv test_vg1-lv1 (id 113)\n2025-06-28 18:19:14,970 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 120\n2025-06-28 18:19:14,974 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg1-lv1 ; type: xfs ; current: xfs ;\n2025-06-28 18:19:14,974 INFO blivet/MainThread: registered action: [119] create format xfs filesystem on lvmlv test_vg1-lv1 (id 113)\n2025-06-28 18:19:14,978 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg1-lv2 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,981 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:14,983 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg1-lv2 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:14,987 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:14,987 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg1-lv2'\n2025-06-28 18:19:14,987 DEBUG blivet/MainThread: test_vg1 size is 8.99 GiB\n2025-06-28 
18:19:14,987 DEBUG blivet/MainThread: vg test_vg1 has 7.64 GiB free\n2025-06-28 18:19:14,987 DEBUG blivet.ansible/MainThread: size: 4.5 GiB ; -69.85230234578627\n2025-06-28 18:19:14,988 DEBUG blivet/MainThread: test_vg1 size is 8.99 GiB\n2025-06-28 18:19:14,988 DEBUG blivet/MainThread: vg test_vg1 has 7.64 GiB free\n2025-06-28 18:19:14,992 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:14,992 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 121\n2025-06-28 18:19:14,995 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:14,995 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 123\n2025-06-28 18:19:14,999 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg1 ; child: lv2 ; kids: 1 ;\n2025-06-28 18:19:15,002 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg1-lv2 ; type: xfs ; current: None ;\n2025-06-28 18:19:15,002 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 125\n2025-06-28 18:19:15,007 DEBUG blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg1 ; child: lv2 ; kids: 2 ;\n2025-06-28 18:19:15,010 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg1 ; child: lv2 ; kids: 1 ;\n2025-06-28 18:19:15,014 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg1-lv2 ; type: xfs ; current: None ;\n2025-06-28 18:19:15,018 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg1-lv2 ; sysfs_path: ;\n2025-06-28 18:19:15,018 DEBUG blivet/MainThread: test_vg1 size is 8.99 GiB\n2025-06-28 18:19:15,018 DEBUG blivet/MainThread: vg test_vg1 has 7.64 GiB free\n2025-06-28 18:19:15,018 DEBUG blivet/MainThread: Adding test_vg1-lv2/4.5 GiB to test_vg1\n2025-06-28 18:19:15,018 INFO blivet/MainThread: added lvmlv test_vg1-lv2 (id 122) to device tree\n2025-06-28 18:19:15,018 INFO 
blivet/MainThread: registered action: [127] create device lvmlv test_vg1-lv2 (id 122)\n2025-06-28 18:19:15,018 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 129\n2025-06-28 18:19:15,022 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg1-lv2 ; type: xfs ; current: xfs ;\n2025-06-28 18:19:15,022 INFO blivet/MainThread: registered action: [128] create format xfs filesystem on lvmlv test_vg1-lv2 (id 122)\n2025-06-28 18:19:15,025 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg2 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,029 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:15,031 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg2 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,034 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:15,034 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg2'\n2025-06-28 18:19:15,037 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdd ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,040 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sdd (27)\n2025-06-28 18:19:15,040 DEBUG blivet/MainThread: resolved 'sdd' to 'sdd' (disk)\n2025-06-28 18:19:15,044 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sde ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,047 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sde (32)\n2025-06-28 18:19:15,047 DEBUG blivet/MainThread: resolved 'sde' to 'sde' (disk)\n2025-06-28 18:19:15,051 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdf ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,053 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sdf (37)\n2025-06-28 18:19:15,054 DEBUG blivet/MainThread: resolved 'sdf' to 'sdf' 
(disk)\n2025-06-28 18:19:15,056 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__:\n2025-06-28 18:19:15,057 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 130\n2025-06-28 18:19:15,057 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 133\n2025-06-28 18:19:15,060 DEBUG blivet/MainThread: DiskDevice._set_format: sdd ; type: None ; current: None ;\n2025-06-28 18:19:15,060 INFO blivet/MainThread: registered action: [131] destroy format None on disk sdd (id 27)\n2025-06-28 18:19:15,063 DEBUG blivet/MainThread: DiskDevice._set_format: sdd ; type: lvmpv ; current: None ;\n2025-06-28 18:19:15,063 INFO blivet/MainThread: registered action: [132] create format lvmpv on disk sdd (id 27)\n2025-06-28 18:19:15,066 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__:\n2025-06-28 18:19:15,066 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 134\n2025-06-28 18:19:15,066 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 137\n2025-06-28 18:19:15,070 DEBUG blivet/MainThread: DiskDevice._set_format: sde ; type: None ; current: None ;\n2025-06-28 18:19:15,070 INFO blivet/MainThread: registered action: [135] destroy format None on disk sde (id 32)\n2025-06-28 18:19:15,073 DEBUG blivet/MainThread: DiskDevice._set_format: sde ; type: lvmpv ; current: None ;\n2025-06-28 18:19:15,073 INFO blivet/MainThread: registered action: [136] create format lvmpv on disk sde (id 32)\n2025-06-28 18:19:15,076 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__:\n2025-06-28 18:19:15,076 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 138\n2025-06-28 18:19:15,076 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 141\n2025-06-28 18:19:15,079 DEBUG blivet/MainThread: DiskDevice._set_format: sdf ; type: None ; current: None ;\n2025-06-28 
18:19:15,079 INFO blivet/MainThread: registered action: [139] destroy format None on disk sdf (id 37)\n2025-06-28 18:19:15,083 DEBUG blivet/MainThread: DiskDevice._set_format: sdf ; type: lvmpv ; current: None ;\n2025-06-28 18:19:15,083 INFO blivet/MainThread: registered action: [140] create format lvmpv on disk sdf (id 37)\n2025-06-28 18:19:15,083 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 143\n2025-06-28 18:19:15,087 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg2 ; parent: sdd ;\n2025-06-28 18:19:15,090 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdd ; child: test_vg2 ; kids: 0 ;\n2025-06-28 18:19:15,094 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg2 ; parent: sde ;\n2025-06-28 18:19:15,097 DEBUG blivet/MainThread: DiskDevice.add_child: name: sde ; child: test_vg2 ; kids: 0 ;\n2025-06-28 18:19:15,100 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg2 ; parent: sdf ;\n2025-06-28 18:19:15,105 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdf ; child: test_vg2 ; kids: 0 ;\n2025-06-28 18:19:15,105 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 144\n2025-06-28 18:19:15,108 DEBUG blivet/MainThread: LVMVolumeGroupDevice._set_format: test_vg2 ; type: None ; current: None ;\n2025-06-28 18:19:15,109 INFO blivet/MainThread: added lvmvg test_vg2 (id 142) to device tree\n2025-06-28 18:19:15,109 INFO blivet/MainThread: registered action: [146] create device lvmvg test_vg2 (id 142)\n2025-06-28 18:19:15,112 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg2-lv3 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,115 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:15,119 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg2-lv3 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,122 DEBUG blivet/MainThread: 
DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:15,122 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg2-lv3'\n2025-06-28 18:19:15,122 DEBUG blivet/MainThread: test_vg2 size is 8.99 GiB\n2025-06-28 18:19:15,123 DEBUG blivet/MainThread: vg test_vg2 has 8.99 GiB free\n2025-06-28 18:19:15,123 DEBUG blivet.ansible/MainThread: size: 924 MiB ; -896.1038961038961\n2025-06-28 18:19:15,123 DEBUG blivet/MainThread: test_vg2 size is 8.99 GiB\n2025-06-28 18:19:15,123 DEBUG blivet/MainThread: vg test_vg2 has 8.99 GiB free\n2025-06-28 18:19:15,126 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:15,126 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 147\n2025-06-28 18:19:15,129 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:15,129 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 149\n2025-06-28 18:19:15,133 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg2 ; child: lv3 ; kids: 0 ;\n2025-06-28 18:19:15,136 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg2-lv3 ; type: xfs ; current: None ;\n2025-06-28 18:19:15,137 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 151\n2025-06-28 18:19:15,140 DEBUG blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg2 ; child: lv3 ; kids: 1 ;\n2025-06-28 18:19:15,144 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg2 ; child: lv3 ; kids: 0 ;\n2025-06-28 18:19:15,147 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg2-lv3 ; type: xfs ; current: None ;\n2025-06-28 18:19:15,151 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg2-lv3 ; sysfs_path: ;\n2025-06-28 18:19:15,151 DEBUG blivet/MainThread: test_vg2 size is 8.99 GiB\n2025-06-28 18:19:15,152 DEBUG blivet/MainThread: vg test_vg2 has 8.99 GiB 
free\n2025-06-28 18:19:15,152 DEBUG blivet/MainThread: Adding test_vg2-lv3/924 MiB to test_vg2\n2025-06-28 18:19:15,152 INFO blivet/MainThread: added lvmlv test_vg2-lv3 (id 148) to device tree\n2025-06-28 18:19:15,152 INFO blivet/MainThread: registered action: [153] create device lvmlv test_vg2-lv3 (id 148)\n2025-06-28 18:19:15,152 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 155\n2025-06-28 18:19:15,156 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg2-lv3 ; type: xfs ; current: xfs ;\n2025-06-28 18:19:15,156 INFO blivet/MainThread: registered action: [154] create format xfs filesystem on lvmlv test_vg2-lv3 (id 148)\n2025-06-28 18:19:15,159 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg2-lv4 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,162 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:15,165 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg2-lv4 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,168 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:15,168 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg2-lv4'\n2025-06-28 18:19:15,169 DEBUG blivet/MainThread: test_vg2 size is 8.99 GiB\n2025-06-28 18:19:15,169 DEBUG blivet/MainThread: vg test_vg2 has 8.09 GiB free\n2025-06-28 18:19:15,169 DEBUG blivet.ansible/MainThread: size: 1.8 GiB ; -349.0238611713666\n2025-06-28 18:19:15,169 DEBUG blivet/MainThread: test_vg2 size is 8.99 GiB\n2025-06-28 18:19:15,170 DEBUG blivet/MainThread: vg test_vg2 has 8.09 GiB free\n2025-06-28 18:19:15,173 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:15,173 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 156\n2025-06-28 18:19:15,176 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:15,177 DEBUG blivet/MainThread: get_format('None') 
returning DeviceFormat instance with object id 158\n2025-06-28 18:19:15,180 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg2 ; child: lv4 ; kids: 1 ;\n2025-06-28 18:19:15,184 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg2-lv4 ; type: xfs ; current: None ;\n2025-06-28 18:19:15,184 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 160\n2025-06-28 18:19:15,188 DEBUG blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg2 ; child: lv4 ; kids: 2 ;\n2025-06-28 18:19:15,191 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg2 ; child: lv4 ; kids: 1 ;\n2025-06-28 18:19:15,195 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg2-lv4 ; type: xfs ; current: None ;\n2025-06-28 18:19:15,198 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg2-lv4 ; sysfs_path: ;\n2025-06-28 18:19:15,199 DEBUG blivet/MainThread: test_vg2 size is 8.99 GiB\n2025-06-28 18:19:15,199 DEBUG blivet/MainThread: vg test_vg2 has 8.09 GiB free\n2025-06-28 18:19:15,199 DEBUG blivet/MainThread: Adding test_vg2-lv4/1.8 GiB to test_vg2\n2025-06-28 18:19:15,199 INFO blivet/MainThread: added lvmlv test_vg2-lv4 (id 157) to device tree\n2025-06-28 18:19:15,199 INFO blivet/MainThread: registered action: [162] create device lvmlv test_vg2-lv4 (id 157)\n2025-06-28 18:19:15,199 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 164\n2025-06-28 18:19:15,203 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg2-lv4 ; type: xfs ; current: xfs ;\n2025-06-28 18:19:15,204 INFO blivet/MainThread: registered action: [163] create format xfs filesystem on lvmlv test_vg2-lv4 (id 157)\n2025-06-28 18:19:15,206 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg3 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,210 DEBUG blivet/MainThread: 
DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:15,213 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg3 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,216 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:15,216 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg3'\n2025-06-28 18:19:15,219 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdg ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,222 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sdg (42)\n2025-06-28 18:19:15,222 DEBUG blivet/MainThread: resolved 'sdg' to 'sdg' (disk)\n2025-06-28 18:19:15,225 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdh ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,228 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sdh (47)\n2025-06-28 18:19:15,228 DEBUG blivet/MainThread: resolved 'sdh' to 'sdh' (disk)\n2025-06-28 18:19:15,231 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdi ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,234 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sdi (52)\n2025-06-28 18:19:15,234 DEBUG blivet/MainThread: resolved 'sdi' to 'sdi' (disk)\n2025-06-28 18:19:15,237 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: sdj ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,240 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 3 GiB disk sdj (57)\n2025-06-28 18:19:15,240 DEBUG blivet/MainThread: resolved 'sdj' to 'sdj' (disk)\n2025-06-28 18:19:15,243 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__:\n2025-06-28 18:19:15,243 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 165\n2025-06-28 18:19:15,243 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 
168\n2025-06-28 18:19:15,246 DEBUG blivet/MainThread: DiskDevice._set_format: sdg ; type: None ; current: None ;\n2025-06-28 18:19:15,246 INFO blivet/MainThread: registered action: [166] destroy format None on disk sdg (id 42)\n2025-06-28 18:19:15,249 DEBUG blivet/MainThread: DiskDevice._set_format: sdg ; type: lvmpv ; current: None ;\n2025-06-28 18:19:15,250 INFO blivet/MainThread: registered action: [167] create format lvmpv on disk sdg (id 42)\n2025-06-28 18:19:15,252 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__:\n2025-06-28 18:19:15,252 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 169\n2025-06-28 18:19:15,253 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 172\n2025-06-28 18:19:15,255 DEBUG blivet/MainThread: DiskDevice._set_format: sdh ; type: None ; current: None ;\n2025-06-28 18:19:15,256 INFO blivet/MainThread: registered action: [170] destroy format None on disk sdh (id 47)\n2025-06-28 18:19:15,259 DEBUG blivet/MainThread: DiskDevice._set_format: sdh ; type: lvmpv ; current: None ;\n2025-06-28 18:19:15,259 INFO blivet/MainThread: registered action: [171] create format lvmpv on disk sdh (id 47)\n2025-06-28 18:19:15,262 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__:\n2025-06-28 18:19:15,262 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 173\n2025-06-28 18:19:15,262 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 176\n2025-06-28 18:19:15,265 DEBUG blivet/MainThread: DiskDevice._set_format: sdi ; type: None ; current: None ;\n2025-06-28 18:19:15,265 INFO blivet/MainThread: registered action: [174] destroy format None on disk sdi (id 52)\n2025-06-28 18:19:15,268 DEBUG blivet/MainThread: DiskDevice._set_format: sdi ; type: lvmpv ; current: None ;\n2025-06-28 18:19:15,268 INFO blivet/MainThread: registered action: [175] create format lvmpv on disk sdi (id 
52)\n2025-06-28 18:19:15,271 DEBUG blivet/MainThread: LVMPhysicalVolume.__init__:\n2025-06-28 18:19:15,271 DEBUG blivet/MainThread: get_format('lvmpv') returning LVMPhysicalVolume instance with object id 177\n2025-06-28 18:19:15,271 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 180\n2025-06-28 18:19:15,275 DEBUG blivet/MainThread: DiskDevice._set_format: sdj ; type: None ; current: None ;\n2025-06-28 18:19:15,275 INFO blivet/MainThread: registered action: [178] destroy format None on disk sdj (id 57)\n2025-06-28 18:19:15,278 DEBUG blivet/MainThread: DiskDevice._set_format: sdj ; type: lvmpv ; current: None ;\n2025-06-28 18:19:15,278 INFO blivet/MainThread: registered action: [179] create format lvmpv on disk sdj (id 57)\n2025-06-28 18:19:15,279 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 182\n2025-06-28 18:19:15,282 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg3 ; parent: sdg ;\n2025-06-28 18:19:15,286 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdg ; child: test_vg3 ; kids: 0 ;\n2025-06-28 18:19:15,289 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg3 ; parent: sdh ;\n2025-06-28 18:19:15,293 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdh ; child: test_vg3 ; kids: 0 ;\n2025-06-28 18:19:15,296 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg3 ; parent: sdi ;\n2025-06-28 18:19:15,299 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdi ; child: test_vg3 ; kids: 0 ;\n2025-06-28 18:19:15,303 DEBUG blivet/MainThread: LVMVolumeGroupDevice._add_parent: test_vg3 ; parent: sdj ;\n2025-06-28 18:19:15,307 DEBUG blivet/MainThread: DiskDevice.add_child: name: sdj ; child: test_vg3 ; kids: 0 ;\n2025-06-28 18:19:15,307 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 183\n2025-06-28 18:19:15,310 DEBUG blivet/MainThread: LVMVolumeGroupDevice._set_format: test_vg3 ; type: 
None ; current: None ;\n2025-06-28 18:19:15,310 INFO blivet/MainThread: added lvmvg test_vg3 (id 181) to device tree\n2025-06-28 18:19:15,310 INFO blivet/MainThread: registered action: [185] create device lvmvg test_vg3 (id 181)\n2025-06-28 18:19:15,314 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg3-lv5 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,317 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:15,320 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg3-lv5 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,324 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:15,324 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg3-lv5'\n2025-06-28 18:19:15,324 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB\n2025-06-28 18:19:15,324 DEBUG blivet/MainThread: vg test_vg3 has 11.98 GiB free\n2025-06-28 18:19:15,324 DEBUG blivet.ansible/MainThread: size: 3.6 GiB ; -233.11617806731815\n2025-06-28 18:19:15,325 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB\n2025-06-28 18:19:15,325 DEBUG blivet/MainThread: vg test_vg3 has 11.98 GiB free\n2025-06-28 18:19:15,329 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:15,329 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 186\n2025-06-28 18:19:15,332 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:15,332 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 188\n2025-06-28 18:19:15,336 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv5 ; kids: 0 ;\n2025-06-28 18:19:15,339 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv5 ; type: xfs ; current: None ;\n2025-06-28 18:19:15,339 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 190\n2025-06-28 18:19:15,343 DEBUG 
blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg3 ; child: lv5 ; kids: 1 ;\n2025-06-28 18:19:15,347 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv5 ; kids: 0 ;\n2025-06-28 18:19:15,350 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv5 ; type: xfs ; current: None ;\n2025-06-28 18:19:15,354 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg3-lv5 ; sysfs_path: ;\n2025-06-28 18:19:15,354 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB\n2025-06-28 18:19:15,354 DEBUG blivet/MainThread: vg test_vg3 has 11.98 GiB free\n2025-06-28 18:19:15,354 DEBUG blivet/MainThread: Adding test_vg3-lv5/3.6 GiB to test_vg3\n2025-06-28 18:19:15,354 INFO blivet/MainThread: added lvmlv test_vg3-lv5 (id 187) to device tree\n2025-06-28 18:19:15,354 INFO blivet/MainThread: registered action: [192] create device lvmlv test_vg3-lv5 (id 187)\n2025-06-28 18:19:15,355 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 194\n2025-06-28 18:19:15,359 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv5 ; type: xfs ; current: xfs ;\n2025-06-28 18:19:15,359 INFO blivet/MainThread: registered action: [193] create format xfs filesystem on lvmlv test_vg3-lv5 (id 187)\n2025-06-28 18:19:15,362 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg3-lv6 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,365 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:15,368 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg3-lv6 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,371 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:15,371 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg3-lv6'\n2025-06-28 18:19:15,372 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB\n2025-06-28 
18:19:15,372 DEBUG blivet/MainThread: vg test_vg3 has 8.39 GiB free\n2025-06-28 18:19:15,372 DEBUG blivet.ansible/MainThread: size: 3 GiB ; -179.9217731421121\n2025-06-28 18:19:15,372 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB\n2025-06-28 18:19:15,373 DEBUG blivet/MainThread: vg test_vg3 has 8.39 GiB free\n2025-06-28 18:19:15,376 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:15,376 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 195\n2025-06-28 18:19:15,379 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:15,379 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 197\n2025-06-28 18:19:15,384 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv6 ; kids: 1 ;\n2025-06-28 18:19:15,387 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv6 ; type: xfs ; current: None ;\n2025-06-28 18:19:15,388 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 199\n2025-06-28 18:19:15,391 DEBUG blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg3 ; child: lv6 ; kids: 2 ;\n2025-06-28 18:19:15,394 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv6 ; kids: 1 ;\n2025-06-28 18:19:15,399 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv6 ; type: xfs ; current: None ;\n2025-06-28 18:19:15,402 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg3-lv6 ; sysfs_path: ;\n2025-06-28 18:19:15,402 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB\n2025-06-28 18:19:15,403 DEBUG blivet/MainThread: vg test_vg3 has 8.39 GiB free\n2025-06-28 18:19:15,403 DEBUG blivet/MainThread: Adding test_vg3-lv6/3 GiB to test_vg3\n2025-06-28 18:19:15,403 INFO blivet/MainThread: added lvmlv test_vg3-lv6 (id 196) to device tree\n2025-06-28 18:19:15,403 INFO 
blivet/MainThread: registered action: [201] create device lvmlv test_vg3-lv6 (id 196)\n2025-06-28 18:19:15,403 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 203\n2025-06-28 18:19:15,407 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv6 ; type: xfs ; current: xfs ;\n2025-06-28 18:19:15,407 INFO blivet/MainThread: registered action: [202] create format xfs filesystem on lvmlv test_vg3-lv6 (id 196)\n2025-06-28 18:19:15,410 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg3-lv7 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,413 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:15,416 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg3-lv7 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,420 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:15,420 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg3-lv7'\n2025-06-28 18:19:15,420 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB\n2025-06-28 18:19:15,421 DEBUG blivet/MainThread: vg test_vg3 has 5.39 GiB free\n2025-06-28 18:19:15,421 DEBUG blivet.ansible/MainThread: size: 1.2 GiB ; -349.5114006514658\n2025-06-28 18:19:15,421 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB\n2025-06-28 18:19:15,421 DEBUG blivet/MainThread: vg test_vg3 has 5.39 GiB free\n2025-06-28 18:19:15,424 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:15,425 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 204\n2025-06-28 18:19:15,427 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:15,428 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 206\n2025-06-28 18:19:15,431 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv7 ; kids: 2 ;\n2025-06-28 18:19:15,436 DEBUG blivet/MainThread: 
LVMLogicalVolumeDevice._set_format: test_vg3-lv7 ; type: xfs ; current: None ;\n2025-06-28 18:19:15,436 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 208\n2025-06-28 18:19:15,440 DEBUG blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg3 ; child: lv7 ; kids: 3 ;\n2025-06-28 18:19:15,443 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv7 ; kids: 2 ;\n2025-06-28 18:19:15,446 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv7 ; type: xfs ; current: None ;\n2025-06-28 18:19:15,450 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg3-lv7 ; sysfs_path: ;\n2025-06-28 18:19:15,450 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB\n2025-06-28 18:19:15,451 DEBUG blivet/MainThread: vg test_vg3 has 5.39 GiB free\n2025-06-28 18:19:15,451 DEBUG blivet/MainThread: Adding test_vg3-lv7/1.2 GiB to test_vg3\n2025-06-28 18:19:15,451 INFO blivet/MainThread: added lvmlv test_vg3-lv7 (id 205) to device tree\n2025-06-28 18:19:15,451 INFO blivet/MainThread: registered action: [210] create device lvmlv test_vg3-lv7 (id 205)\n2025-06-28 18:19:15,451 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 212\n2025-06-28 18:19:15,455 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv7 ; type: xfs ; current: xfs ;\n2025-06-28 18:19:15,455 INFO blivet/MainThread: registered action: [211] create format xfs filesystem on lvmlv test_vg3-lv7 (id 205)\n2025-06-28 18:19:15,459 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: test_vg3-lv8 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,462 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None\n2025-06-28 18:19:15,465 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/test_vg3-lv8 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:15,468 DEBUG blivet/MainThread: 
DeviceTree.get_device_by_path returned None\n2025-06-28 18:19:15,468 DEBUG blivet/MainThread: failed to resolve '/dev/test_vg3-lv8'\n2025-06-28 18:19:15,468 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB\n2025-06-28 18:19:15,469 DEBUG blivet/MainThread: vg test_vg3 has 4.19 GiB free\n2025-06-28 18:19:15,469 DEBUG blivet.ansible/MainThread: size: 1.2 GiB ; -249.5114006514658\n2025-06-28 18:19:15,469 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB\n2025-06-28 18:19:15,469 DEBUG blivet/MainThread: vg test_vg3 has 4.19 GiB free\n2025-06-28 18:19:15,473 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:15,473 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 213\n2025-06-28 18:19:15,476 DEBUG blivet/MainThread: XFS.supported: supported: True ;\n2025-06-28 18:19:15,476 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 215\n2025-06-28 18:19:15,480 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv8 ; kids: 3 ;\n2025-06-28 18:19:15,483 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv8 ; type: xfs ; current: None ;\n2025-06-28 18:19:15,483 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 217\n2025-06-28 18:19:15,488 DEBUG blivet/MainThread: LVMVolumeGroupDevice.remove_child: name: test_vg3 ; child: lv8 ; kids: 4 ;\n2025-06-28 18:19:15,491 DEBUG blivet/MainThread: LVMVolumeGroupDevice.add_child: name: test_vg3 ; child: lv8 ; kids: 3 ;\n2025-06-28 18:19:15,495 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv8 ; type: xfs ; current: None ;\n2025-06-28 18:19:15,499 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: False ; path: /dev/mapper/test_vg3-lv8 ; sysfs_path: ;\n2025-06-28 18:19:15,499 DEBUG blivet/MainThread: test_vg3 size is 11.98 GiB\n2025-06-28 18:19:15,500 DEBUG blivet/MainThread: vg test_vg3 has 4.19 GiB 
free\n2025-06-28 18:19:15,500 DEBUG blivet/MainThread: Adding test_vg3-lv8/1.2 GiB to test_vg3\n2025-06-28 18:19:15,500 INFO blivet/MainThread: added lvmlv test_vg3-lv8 (id 214) to device tree\n2025-06-28 18:19:15,500 INFO blivet/MainThread: registered action: [219] create device lvmlv test_vg3-lv8 (id 214)\n2025-06-28 18:19:15,500 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 221\n2025-06-28 18:19:15,504 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._set_format: test_vg3-lv8 ; type: xfs ; current: xfs ;\n2025-06-28 18:19:15,504 INFO blivet/MainThread: registered action: [220] create format xfs filesystem on lvmlv test_vg3-lv8 (id 214)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [96] destroy format None on disk sda (id 2)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [97] create format lvmpv on disk sda (id 2)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [100] destroy format None on disk sdb (id 7)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [101] create format lvmpv on disk sdb (id 7)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [104] destroy format None on disk sdc (id 22)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [105] create format lvmpv on disk sdc (id 22)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [111] create device lvmvg test_vg1 (id 107)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [118] create device lvmlv test_vg1-lv1 (id 113)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [119] create format xfs filesystem on lvmlv test_vg1-lv1 (id 113)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [127] create device lvmlv test_vg1-lv2 (id 122)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [128] create format xfs filesystem on lvmlv test_vg1-lv2 (id 122)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [131] destroy format None on disk sdd (id 27)\n2025-06-28 18:19:15,505 
DEBUG blivet/MainThread: action: [132] create format lvmpv on disk sdd (id 27)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [135] destroy format None on disk sde (id 32)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [136] create format lvmpv on disk sde (id 32)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [139] destroy format None on disk sdf (id 37)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [140] create format lvmpv on disk sdf (id 37)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [146] create device lvmvg test_vg2 (id 142)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [153] create device lvmlv test_vg2-lv3 (id 148)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [154] create format xfs filesystem on lvmlv test_vg2-lv3 (id 148)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [162] create device lvmlv test_vg2-lv4 (id 157)\n2025-06-28 18:19:15,505 DEBUG blivet/MainThread: action: [163] create format xfs filesystem on lvmlv test_vg2-lv4 (id 157)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [166] destroy format None on disk sdg (id 42)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [167] create format lvmpv on disk sdg (id 42)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [170] destroy format None on disk sdh (id 47)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [171] create format lvmpv on disk sdh (id 47)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [174] destroy format None on disk sdi (id 52)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [175] create format lvmpv on disk sdi (id 52)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [178] destroy format None on disk sdj (id 57)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [179] create format lvmpv on disk sdj (id 57)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [185] create device lvmvg test_vg3 (id 
181)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [192] create device lvmlv test_vg3-lv5 (id 187)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [193] create format xfs filesystem on lvmlv test_vg3-lv5 (id 187)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [201] create device lvmlv test_vg3-lv6 (id 196)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [202] create format xfs filesystem on lvmlv test_vg3-lv6 (id 196)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [210] create device lvmlv test_vg3-lv7 (id 205)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [211] create format xfs filesystem on lvmlv test_vg3-lv7 (id 205)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [219] create device lvmlv test_vg3-lv8 (id 214)\n2025-06-28 18:19:15,506 DEBUG blivet/MainThread: action: [220] create format xfs filesystem on lvmlv test_vg3-lv8 (id 214)\n2025-06-28 18:19:15,506 INFO blivet/MainThread: pruning action queue...\n2025-06-28 18:19:15,507 INFO blivet/MainThread: resetting parted disks...\n2025-06-28 18:19:15,511 DEBUG blivet/MainThread: DiskLabel.reset_parted_disk: device: /dev/xvda ;\n2025-06-28 18:19:15,513 DEBUG blivet/MainThread: DiskLabel.reset_parted_disk: device: /dev/xvda ;\n2025-06-28 18:19:15,517 DEBUG blivet/MainThread: PartitionDevice.pre_commit_fixup: xvda1 ;\n2025-06-28 18:19:15,517 DEBUG blivet/MainThread: sector-based lookup found partition xvda1\n2025-06-28 18:19:15,520 DEBUG blivet/MainThread: PartitionDevice._set_parted_partition: xvda1 ;\n2025-06-28 18:19:15,520 DEBUG blivet/MainThread: device xvda1 new parted_partition parted.Partition instance --\n disk: fileSystem: None\n number: 1 path: /dev/xvda1 type: 0\n name: active: True busy: False\n geometry: PedPartition: <_ped.Partition object at 0x7fbad22336a0>\n2025-06-28 18:19:15,524 DEBUG blivet/MainThread: PartitionDevice.pre_commit_fixup: xvda2 ;\n2025-06-28 18:19:15,524 DEBUG blivet/MainThread: sector-based lookup found 
partition xvda2\n2025-06-28 18:19:15,527 DEBUG blivet/MainThread: PartitionDevice._set_parted_partition: xvda2 ;\n2025-06-28 18:19:15,527 DEBUG blivet/MainThread: device xvda2 new parted_partition parted.Partition instance --\n disk: fileSystem: \n number: 2 path: /dev/xvda2 type: 0\n name: active: True busy: True\n geometry: PedPartition: <_ped.Partition object at 0x7fbad2232de0>\n2025-06-28 18:19:15,527 INFO blivet/MainThread: sorting actions...\n2025-06-28 18:19:15,549 DEBUG blivet/MainThread: action: [178] destroy format None on disk sdj (id 57)\n2025-06-28 18:19:15,549 DEBUG blivet/MainThread: action: [174] destroy format None on disk sdi (id 52)\n2025-06-28 18:19:15,549 DEBUG blivet/MainThread: action: [170] destroy format None on disk sdh (id 47)\n2025-06-28 18:19:15,549 DEBUG blivet/MainThread: action: [166] destroy format None on disk sdg (id 42)\n2025-06-28 18:19:15,549 DEBUG blivet/MainThread: action: [139] destroy format None on disk sdf (id 37)\n2025-06-28 18:19:15,550 DEBUG blivet/MainThread: action: [135] destroy format None on disk sde (id 32)\n2025-06-28 18:19:15,550 DEBUG blivet/MainThread: action: [131] destroy format None on disk sdd (id 27)\n2025-06-28 18:19:15,550 DEBUG blivet/MainThread: action: [104] destroy format None on disk sdc (id 22)\n2025-06-28 18:19:15,550 DEBUG blivet/MainThread: action: [100] destroy format None on disk sdb (id 7)\n2025-06-28 18:19:15,551 DEBUG blivet/MainThread: action: [96] destroy format None on disk sda (id 2)\n2025-06-28 18:19:15,551 DEBUG blivet/MainThread: action: [179] create format lvmpv on disk sdj (id 57)\n2025-06-28 18:19:15,551 DEBUG blivet/MainThread: action: [175] create format lvmpv on disk sdi (id 52)\n2025-06-28 18:19:15,551 DEBUG blivet/MainThread: action: [171] create format lvmpv on disk sdh (id 47)\n2025-06-28 18:19:15,551 DEBUG blivet/MainThread: action: [167] create format lvmpv on disk sdg (id 42)\n2025-06-28 18:19:15,552 DEBUG blivet/MainThread: action: [185] create device lvmvg test_vg3 
(id 181)\n2025-06-28 18:19:15,552 DEBUG blivet/MainThread: action: [219] create device lvmlv test_vg3-lv8 (id 214)\n2025-06-28 18:19:15,552 DEBUG blivet/MainThread: action: [220] create format xfs filesystem on lvmlv test_vg3-lv8 (id 214)\n2025-06-28 18:19:15,552 DEBUG blivet/MainThread: action: [210] create device lvmlv test_vg3-lv7 (id 205)\n2025-06-28 18:19:15,553 DEBUG blivet/MainThread: action: [211] create format xfs filesystem on lvmlv test_vg3-lv7 (id 205)\n2025-06-28 18:19:15,553 DEBUG blivet/MainThread: action: [201] create device lvmlv test_vg3-lv6 (id 196)\n2025-06-28 18:19:15,553 DEBUG blivet/MainThread: action: [202] create format xfs filesystem on lvmlv test_vg3-lv6 (id 196)\n2025-06-28 18:19:15,553 DEBUG blivet/MainThread: action: [192] create device lvmlv test_vg3-lv5 (id 187)\n2025-06-28 18:19:15,554 DEBUG blivet/MainThread: action: [193] create format xfs filesystem on lvmlv test_vg3-lv5 (id 187)\n2025-06-28 18:19:15,554 DEBUG blivet/MainThread: action: [140] create format lvmpv on disk sdf (id 37)\n2025-06-28 18:19:15,554 DEBUG blivet/MainThread: action: [136] create format lvmpv on disk sde (id 32)\n2025-06-28 18:19:15,554 DEBUG blivet/MainThread: action: [132] create format lvmpv on disk sdd (id 27)\n2025-06-28 18:19:15,555 DEBUG blivet/MainThread: action: [146] create device lvmvg test_vg2 (id 142)\n2025-06-28 18:19:15,555 DEBUG blivet/MainThread: action: [162] create device lvmlv test_vg2-lv4 (id 157)\n2025-06-28 18:19:15,555 DEBUG blivet/MainThread: action: [163] create format xfs filesystem on lvmlv test_vg2-lv4 (id 157)\n2025-06-28 18:19:15,555 DEBUG blivet/MainThread: action: [153] create device lvmlv test_vg2-lv3 (id 148)\n2025-06-28 18:19:15,556 DEBUG blivet/MainThread: action: [154] create format xfs filesystem on lvmlv test_vg2-lv3 (id 148)\n2025-06-28 18:19:15,556 DEBUG blivet/MainThread: action: [105] create format lvmpv on disk sdc (id 22)\n2025-06-28 18:19:15,556 DEBUG blivet/MainThread: action: [101] create format lvmpv on disk 
sdb (id 7)\n2025-06-28 18:19:15,556 DEBUG blivet/MainThread: action: [97] create format lvmpv on disk sda (id 2)\n2025-06-28 18:19:15,557 DEBUG blivet/MainThread: action: [111] create device lvmvg test_vg1 (id 107)\n2025-06-28 18:19:15,557 DEBUG blivet/MainThread: action: [127] create device lvmlv test_vg1-lv2 (id 122)\n2025-06-28 18:19:15,557 DEBUG blivet/MainThread: action: [128] create format xfs filesystem on lvmlv test_vg1-lv2 (id 122)\n2025-06-28 18:19:15,557 DEBUG blivet/MainThread: action: [118] create device lvmlv test_vg1-lv1 (id 113)\n2025-06-28 18:19:15,557 DEBUG blivet/MainThread: action: [119] create format xfs filesystem on lvmlv test_vg1-lv1 (id 113)\n2025-06-28 18:19:15,558 INFO blivet/MainThread: executing action: [178] destroy format None on disk sdj (id 57)\n2025-06-28 18:19:15,561 DEBUG blivet/MainThread: DiskDevice.setup: sdj ; orig: True ; status: True ; controllable: True ;\n2025-06-28 18:19:15,564 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdj ; type: None ; status: False ;\n2025-06-28 18:19:15,568 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:15,593 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,594 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:15,606 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,606 INFO blivet/MainThread: executing action: [174] destroy format None on disk sdi (id 52)\n2025-06-28 18:19:15,610 DEBUG blivet/MainThread: DiskDevice.setup: sdi ; orig: True ; status: True ; controllable: True ;\n2025-06-28 18:19:15,613 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdi ; type: None ; status: False ;\n2025-06-28 18:19:15,616 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:15,639 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,640 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:15,651 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,651 INFO blivet/MainThread: executing action: [170] destroy format None on disk sdh (id 47)\n2025-06-28 18:19:15,655 DEBUG blivet/MainThread: DiskDevice.setup: sdh ; orig: True ; status: True ; controllable: True ;\n2025-06-28 18:19:15,658 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdh ; type: None ; status: False ;\n2025-06-28 18:19:15,661 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:15,687 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,688 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:15,699 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,699 INFO blivet/MainThread: executing action: [166] destroy format None on disk sdg (id 42)\n2025-06-28 18:19:15,704 DEBUG blivet/MainThread: DiskDevice.setup: sdg ; orig: True ; status: True ; controllable: True ;\n2025-06-28 18:19:15,707 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdg ; type: None ; status: False ;\n2025-06-28 18:19:15,709 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:15,740 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,740 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:15,751 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,751 INFO blivet/MainThread: executing action: [139] destroy format None on disk sdf (id 37)\n2025-06-28 18:19:15,755 DEBUG blivet/MainThread: DiskDevice.setup: sdf ; orig: True ; status: True ; controllable: True ;\n2025-06-28 18:19:15,758 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdf ; type: None ; status: False ;\n2025-06-28 18:19:15,761 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:15,783 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,783 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:15,796 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,796 INFO blivet/MainThread: executing action: [135] destroy format None on disk sde (id 32)\n2025-06-28 18:19:15,800 DEBUG blivet/MainThread: DiskDevice.setup: sde ; orig: True ; status: True ; controllable: True ;\n2025-06-28 18:19:15,803 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sde ; type: None ; status: False ;\n2025-06-28 18:19:15,806 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:15,828 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,828 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:15,839 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,840 INFO blivet/MainThread: executing action: [131] destroy format None on disk sdd (id 27)\n2025-06-28 18:19:15,844 DEBUG blivet/MainThread: DiskDevice.setup: sdd ; orig: True ; status: True ; controllable: True ;\n2025-06-28 18:19:15,848 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdd ; type: None ; status: False ;\n2025-06-28 18:19:15,851 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:15,872 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,872 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:15,883 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,884 INFO blivet/MainThread: executing action: [104] destroy format None on disk sdc (id 22)\n2025-06-28 18:19:15,888 DEBUG blivet/MainThread: DiskDevice.setup: sdc ; orig: True ; status: True ; controllable: True ;\n2025-06-28 18:19:15,891 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdc ; type: None ; status: False ;\n2025-06-28 18:19:15,894 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:15,918 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,919 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:15,929 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,930 INFO blivet/MainThread: executing action: [100] destroy format None on disk sdb (id 7)\n2025-06-28 18:19:15,934 DEBUG blivet/MainThread: DiskDevice.setup: sdb ; orig: True ; status: True ; controllable: True ;\n2025-06-28 18:19:15,937 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sdb ; type: None ; status: False ;\n2025-06-28 18:19:15,940 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:15,964 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,965 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:15,975 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:15,976 INFO blivet/MainThread: executing action: [96] destroy format None on disk sda (id 2)\n2025-06-28 18:19:15,980 DEBUG blivet/MainThread: DiskDevice.setup: sda ; orig: True ; status: True ; controllable: True ;\n2025-06-28 18:19:15,983 DEBUG blivet/MainThread: DeviceFormat.destroy: device: /dev/sda ; type: None ; status: False ;\n2025-06-28 18:19:15,986 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:16,011 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,012 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:16,024 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,024 INFO blivet/MainThread: executing action: [179] create format lvmpv on disk sdj (id 57)\n2025-06-28 18:19:16,028 DEBUG blivet/MainThread: DiskDevice.setup: sdj ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,031 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdj ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,035 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdj ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,035 DEBUG blivet/MainThread: lvm filter: device /dev/sdj added to the list of allowed devices\n2025-06-28 18:19:16,035 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:16,035 INFO program/MainThread: Running [6] lvm pvcreate /dev/sdj --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ...\n2025-06-28 18:19:16,076 INFO program/MainThread: stdout[6]: Physical volume \"/dev/sdj\" successfully created.\n Creating devices file /etc/lvm/devices/system.devices\n\n2025-06-28 18:19:16,077 INFO program/MainThread: stderr[6]: \n2025-06-28 18:19:16,077 INFO program/MainThread: ...done [6] (exit code: 0)\n2025-06-28 18:19:16,077 INFO program/MainThread: Running [7] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:16,088 INFO program/MainThread: stdout[7]: use_devicesfile=1\n\n2025-06-28 18:19:16,088 INFO program/MainThread: stderr[7]: \n2025-06-28 18:19:16,088 INFO program/MainThread: ...done [7] (exit code: 0)\n2025-06-28 18:19:16,088 INFO program/MainThread: Running [8] lvmdevices --adddev /dev/sdj ...\n2025-06-28 18:19:16,116 INFO program/MainThread: stdout[8]: \n2025-06-28 18:19:16,117 INFO program/MainThread: 
stderr[8]: \n2025-06-28 18:19:16,117 INFO program/MainThread: ...done [8] (exit code: 0)\n2025-06-28 18:19:16,117 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdj\n2025-06-28 18:19:16,117 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:16,127 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,132 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdj ; status: True ;\n2025-06-28 18:19:16,132 DEBUG blivet/MainThread: sdj sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:9/block/sdj\n2025-06-28 18:19:16,133 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:16,133 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdj\n2025-06-28 18:19:16,141 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,141 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:16,156 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,157 INFO blivet/MainThread: executing action: [175] create format lvmpv on disk sdi (id 52)\n2025-06-28 18:19:16,161 DEBUG blivet/MainThread: DiskDevice.setup: sdi ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,164 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdi ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,167 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdi ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,168 DEBUG blivet/MainThread: lvm filter: device /dev/sdi added to the list of allowed devices\n2025-06-28 18:19:16,168 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:16,168 INFO program/MainThread: Running [9] lvm pvcreate /dev/sdi --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ...\n2025-06-28 18:19:16,203 INFO program/MainThread: stdout[9]: Physical volume \"/dev/sdi\" successfully created.\n\n2025-06-28 18:19:16,203 INFO program/MainThread: stderr[9]: \n2025-06-28 18:19:16,204 INFO program/MainThread: ...done [9] (exit code: 0)\n2025-06-28 18:19:16,204 INFO program/MainThread: Running [10] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:16,215 INFO program/MainThread: stdout[10]: use_devicesfile=1\n\n2025-06-28 18:19:16,215 INFO program/MainThread: stderr[10]: \n2025-06-28 18:19:16,215 INFO program/MainThread: ...done [10] (exit code: 0)\n2025-06-28 18:19:16,215 INFO program/MainThread: Running [11] lvmdevices --adddev /dev/sdi ...\n2025-06-28 18:19:16,245 INFO program/MainThread: stdout[11]: \n2025-06-28 18:19:16,245 INFO program/MainThread: stderr[11]: \n2025-06-28 18:19:16,245 INFO program/MainThread: ...done [11] (exit code: 0)\n2025-06-28 18:19:16,245 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdi\n2025-06-28 
18:19:16,246 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:16,256 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,261 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdi ; status: True ;\n2025-06-28 18:19:16,261 DEBUG blivet/MainThread: sdi sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:8/block/sdi\n2025-06-28 18:19:16,262 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:16,262 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdi\n2025-06-28 18:19:16,270 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,270 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:16,285 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,286 INFO blivet/MainThread: executing action: [171] create format lvmpv on disk sdh (id 47)\n2025-06-28 18:19:16,290 DEBUG blivet/MainThread: DiskDevice.setup: sdh ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,293 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdh ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,297 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdh ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,297 DEBUG blivet/MainThread: lvm filter: device /dev/sdh added to the list of allowed devices\n2025-06-28 18:19:16,298 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:16,298 INFO program/MainThread: Running [12] lvm pvcreate /dev/sdh --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ...\n2025-06-28 18:19:16,333 INFO program/MainThread: stdout[12]: Physical volume \"/dev/sdh\" successfully created.\n\n2025-06-28 18:19:16,334 INFO program/MainThread: stderr[12]: \n2025-06-28 18:19:16,334 INFO program/MainThread: ...done [12] (exit code: 0)\n2025-06-28 18:19:16,334 INFO program/MainThread: Running [13] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:16,345 INFO program/MainThread: stdout[13]: use_devicesfile=1\n\n2025-06-28 18:19:16,346 INFO program/MainThread: stderr[13]: \n2025-06-28 18:19:16,346 INFO program/MainThread: ...done [13] (exit code: 0)\n2025-06-28 18:19:16,346 INFO program/MainThread: Running [14] lvmdevices --adddev /dev/sdh ...\n2025-06-28 18:19:16,375 INFO program/MainThread: stdout[14]: \n2025-06-28 18:19:16,375 INFO program/MainThread: stderr[14]: \n2025-06-28 18:19:16,375 INFO program/MainThread: ...done [14] (exit code: 0)\n2025-06-28 18:19:16,375 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdh\n2025-06-28 
18:19:16,376 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:16,386 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,391 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdh ; status: True ;\n2025-06-28 18:19:16,391 DEBUG blivet/MainThread: sdh sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:7/block/sdh\n2025-06-28 18:19:16,392 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:16,392 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdh\n2025-06-28 18:19:16,400 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,400 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:16,416 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,417 INFO blivet/MainThread: executing action: [167] create format lvmpv on disk sdg (id 42)\n2025-06-28 18:19:16,421 DEBUG blivet/MainThread: DiskDevice.setup: sdg ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,424 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdg ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,427 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdg ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,427 DEBUG blivet/MainThread: lvm filter: device /dev/sdg added to the list of allowed devices\n2025-06-28 18:19:16,427 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:16,427 INFO program/MainThread: Running [15] lvm pvcreate /dev/sdg --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ...\n2025-06-28 18:19:16,463 INFO program/MainThread: stdout[15]: Physical volume \"/dev/sdg\" successfully created.\n\n2025-06-28 18:19:16,463 INFO program/MainThread: stderr[15]: \n2025-06-28 18:19:16,463 INFO program/MainThread: ...done [15] (exit code: 0)\n2025-06-28 18:19:16,463 INFO program/MainThread: Running [16] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:16,474 INFO program/MainThread: stdout[16]: use_devicesfile=1\n\n2025-06-28 18:19:16,474 INFO program/MainThread: stderr[16]: \n2025-06-28 18:19:16,474 INFO program/MainThread: ...done [16] (exit code: 0)\n2025-06-28 18:19:16,474 INFO program/MainThread: Running [17] lvmdevices --adddev /dev/sdg ...\n2025-06-28 18:19:16,510 INFO program/MainThread: stdout[17]: \n2025-06-28 18:19:16,510 INFO program/MainThread: stderr[17]: \n2025-06-28 18:19:16,510 INFO program/MainThread: ...done [17] (exit code: 0)\n2025-06-28 18:19:16,510 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdg\n2025-06-28 
18:19:16,511 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:16,522 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,527 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdg ; status: True ;\n2025-06-28 18:19:16,527 DEBUG blivet/MainThread: sdg sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:6/block/sdg\n2025-06-28 18:19:16,528 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:16,529 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdg\n2025-06-28 18:19:16,537 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,537 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:16,553 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,554 INFO blivet/MainThread: executing action: [185] create device lvmvg test_vg3 (id 181)\n2025-06-28 18:19:16,558 DEBUG blivet/MainThread: LVMVolumeGroupDevice.create: test_vg3 ; status: False ;\n2025-06-28 18:19:16,561 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg3 ; orig: False ;\n2025-06-28 18:19:16,564 DEBUG blivet/MainThread: DiskDevice.setup: sdg ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,567 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdg ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,570 DEBUG blivet/MainThread: DiskDevice.setup: sdh ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,574 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdh ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,577 DEBUG blivet/MainThread: DiskDevice.setup: sdi ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,580 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdi ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,583 DEBUG blivet/MainThread: DiskDevice.setup: sdj ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,586 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdj ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,589 DEBUG blivet/MainThread: LVMVolumeGroupDevice._create: test_vg3 ; status: False ;\n2025-06-28 18:19:16,589 INFO program/MainThread: Running [18] lvm vgcreate -s 4096K test_vg3 /dev/sdg /dev/sdh /dev/sdi /dev/sdj --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:16,655 INFO program/MainThread: stdout[18]: Volume group \"test_vg3\" successfully created\n\n2025-06-28 18:19:16,655 INFO program/MainThread: stderr[18]: \n2025-06-28 18:19:16,655 INFO program/MainThread: ...done [18] (exit code: 0)\n2025-06-28 18:19:16,664 DEBUG 
blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg3 ; orig: False ; status: False ; controllable: True ;\n2025-06-28 18:19:16,670 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg3 ; orig: False ;\n2025-06-28 18:19:16,677 DEBUG blivet/MainThread: DiskDevice.setup: sdg ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,681 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdg ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,684 DEBUG blivet/MainThread: DiskDevice.setup: sdh ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,688 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdh ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,691 DEBUG blivet/MainThread: DiskDevice.setup: sdi ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,694 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdi ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,698 DEBUG blivet/MainThread: DiskDevice.setup: sdj ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,702 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdj ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,702 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:16,767 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,771 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg3 ; status: False ;\n2025-06-28 18:19:16,775 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg3 ; status: False ;\n2025-06-28 18:19:16,775 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:16,787 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,788 INFO program/MainThread: Running... 
udevadm trigger --action=change --subsystem-match=block --sysname-match=test_vg3\n2025-06-28 18:19:16,796 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,796 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:16,806 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,807 INFO blivet/MainThread: executing action: [219] create device lvmlv test_vg3-lv8 (id 214)\n2025-06-28 18:19:16,811 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg3-lv8 ; status: False ;\n2025-06-28 18:19:16,814 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg3-lv8 ; orig: False ;\n2025-06-28 18:19:16,817 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg3 ; orig: False ; status: False ; controllable: True ;\n2025-06-28 18:19:16,821 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg3 ; orig: False ;\n2025-06-28 18:19:16,824 DEBUG blivet/MainThread: DiskDevice.setup: sdg ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,828 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdg ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,831 DEBUG blivet/MainThread: DiskDevice.setup: sdh ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,835 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdh ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,838 DEBUG blivet/MainThread: DiskDevice.setup: sdi ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,842 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdi ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,845 DEBUG blivet/MainThread: DiskDevice.setup: sdj ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,848 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdj ; type: lvmpv ; status: False ;\n2025-06-28 18:19:16,848 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:16,857 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:16,862 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg3 ; status: False ;\n2025-06-28 18:19:16,862 INFO program/MainThread: Running [19] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:16,894 INFO program/MainThread: stdout[19]: LVM2_VG_NAME=test_vg3 LVM2_VG_UUID=ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM LVM2_VG_SIZE=12851347456 LVM2_VG_FREE=12851347456 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=3064 LVM2_VG_FREE_COUNT=3064 LVM2_PV_COUNT=4 LVM2_VG_EXPORTED= LVM2_VG_TAGS=\n\n2025-06-28 18:19:16,894 INFO program/MainThread: stderr[19]: \n2025-06-28 18:19:16,894 INFO program/MainThread: ...done [19] (exit code: 0)\n2025-06-28 18:19:16,899 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg3-lv8 ; status: False ;\n2025-06-28 18:19:16,899 INFO program/MainThread: Running [20] lvm lvcreate -n lv8 -L 1257472K -y --type linear test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:16,956 INFO program/MainThread: stdout[20]: Logical volume \"lv8\" created.\n\n2025-06-28 18:19:16,957 INFO program/MainThread: stderr[20]: \n2025-06-28 18:19:16,957 INFO program/MainThread: ...done [20] (exit code: 0)\n2025-06-28 18:19:16,961 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv8 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:16,964 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv8 ; status: True ;\n2025-06-28 18:19:16,965 DEBUG blivet/MainThread: test_vg3-lv8 sysfs_path set to /sys/devices/virtual/block/dm-0\n2025-06-28 18:19:16,965 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:17,013 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:17,017 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg3-lv8 ; sysfs_path: /sys/devices/virtual/block/dm-0 ;\n2025-06-28 18:19:17,018 DEBUG blivet/MainThread: updated test_vg3-lv8 size to 1.2 GiB (1.2 GiB)\n2025-06-28 18:19:17,018 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-0\n2025-06-28 18:19:17,026 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:17,027 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:17,042 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:17,043 INFO blivet/MainThread: executing action: [220] create format xfs filesystem on lvmlv test_vg3-lv8 (id 214)\n2025-06-28 18:19:17,047 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv8 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:17,050 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg3-lv8 ; type: xfs ; status: False ;\n2025-06-28 18:19:17,054 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg3-lv8 ; mountpoint: ;\n2025-06-28 18:19:17,054 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. 
All arguments passed will be ignored.\n bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None,\n\n2025-06-28 18:19:17,054 INFO program/MainThread: Running [21] mkfs.xfs /dev/mapper/test_vg3-lv8 -f ...\n2025-06-28 18:19:17,184 INFO program/MainThread: stdout[21]: meta-data=/dev/mapper/test_vg3-lv8 isize=512 agcount=4, agsize=78592 blks\n = sectsz=512 attr=2, projid32bit=1\n = crc=1 finobt=1, sparse=1, rmapbt=1\n = reflink=1 bigtime=1 inobtcount=1 nrext64=1\n = exchange=0 \ndata = bsize=4096 blocks=314368, imaxpct=25\n = sunit=0 swidth=0 blks\nnaming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0\nlog =internal log bsize=4096 blocks=16384, version=2\n = sectsz=512 sunit=0 blks, lazy-count=1\nrealtime =none extsz=4096 blocks=0, rtextents=0\n\n2025-06-28 18:19:17,184 INFO program/MainThread: stderr[21]: \n2025-06-28 18:19:17,184 INFO program/MainThread: ...done [21] (exit code: 0)\n2025-06-28 18:19:17,185 INFO program/MainThread: Running [22] xfs_admin -L -- /dev/mapper/test_vg3-lv8 ...\n2025-06-28 18:19:17,211 INFO program/MainThread: stdout[22]: writing all SBs\nnew label = \"\"\n\n2025-06-28 18:19:17,211 INFO program/MainThread: stderr[22]: \n2025-06-28 18:19:17,211 INFO program/MainThread: ...done [22] (exit code: 0)\n2025-06-28 18:19:17,211 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:17,230 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:17,235 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv8 ; status: True ;\n2025-06-28 18:19:17,236 DEBUG blivet/MainThread: test_vg3-lv8 sysfs_path set to /sys/devices/virtual/block/dm-0\n2025-06-28 18:19:17,236 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:17,237 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-0\n2025-06-28 18:19:17,244 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:17,245 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:17,259 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:17,260 INFO blivet/MainThread: executing action: [210] create device lvmlv test_vg3-lv7 (id 205)\n2025-06-28 18:19:17,264 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg3-lv7 ; status: False ;\n2025-06-28 18:19:17,267 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg3-lv7 ; orig: False ;\n2025-06-28 18:19:17,270 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg3 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:17,270 INFO program/MainThread: Running [23] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:17,299 INFO program/MainThread: stdout[23]: LVM2_VG_NAME=test_vg3 LVM2_VG_UUID=ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM LVM2_VG_SIZE=12851347456 
LVM2_VG_FREE=11563696128 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=3064 LVM2_VG_FREE_COUNT=2757 LVM2_PV_COUNT=4 LVM2_VG_EXPORTED= LVM2_VG_TAGS=\n\n2025-06-28 18:19:17,299 INFO program/MainThread: stderr[23]: \n2025-06-28 18:19:17,299 INFO program/MainThread: ...done [23] (exit code: 0)\n2025-06-28 18:19:17,305 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg3-lv7 ; status: False ;\n2025-06-28 18:19:17,305 INFO program/MainThread: Running [24] lvm lvcreate -n lv7 -L 1257472K -y --type linear test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:17,356 INFO program/MainThread: stdout[24]: Logical volume \"lv7\" created.\n\n2025-06-28 18:19:17,356 INFO program/MainThread: stderr[24]: \n2025-06-28 18:19:17,356 INFO program/MainThread: ...done [24] (exit code: 0)\n2025-06-28 18:19:17,362 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv7 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:17,365 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv7 ; status: True ;\n2025-06-28 18:19:17,366 DEBUG blivet/MainThread: test_vg3-lv7 sysfs_path set to /sys/devices/virtual/block/dm-1\n2025-06-28 18:19:17,366 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:17,407 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:17,411 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg3-lv7 ; sysfs_path: /sys/devices/virtual/block/dm-1 ;\n2025-06-28 18:19:17,412 DEBUG blivet/MainThread: updated test_vg3-lv7 size to 1.2 GiB (1.2 GiB)\n2025-06-28 18:19:17,412 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-1\n2025-06-28 18:19:17,421 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:17,421 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:17,439 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:17,440 INFO blivet/MainThread: executing action: [211] create format xfs filesystem on lvmlv test_vg3-lv7 (id 205)\n2025-06-28 18:19:17,444 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv7 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:17,447 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg3-lv7 ; type: xfs ; status: False ;\n2025-06-28 18:19:17,451 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg3-lv7 ; mountpoint: ;\n2025-06-28 18:19:17,451 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. All arguments passed will be ignored.\n bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None,\n\n2025-06-28 18:19:17,451 INFO program/MainThread: Running [25] mkfs.xfs /dev/mapper/test_vg3-lv7 -f ...\n2025-06-28 18:19:18,278 INFO program/MainThread: stdout[25]: meta-data=/dev/mapper/test_vg3-lv7 isize=512 agcount=4, agsize=78592 blks\n = sectsz=512 attr=2, projid32bit=1\n = crc=1 finobt=1, sparse=1, rmapbt=1\n = reflink=1 bigtime=1 inobtcount=1 nrext64=1\n = exchange=0 \ndata = bsize=4096 blocks=314368, imaxpct=25\n = sunit=0 swidth=0 blks\nnaming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0\nlog =internal log bsize=4096 blocks=16384, version=2\n = sectsz=512 sunit=0 blks, lazy-count=1\nrealtime =none extsz=4096 blocks=0, rtextents=0\n\n2025-06-28 18:19:18,278 INFO program/MainThread: stderr[25]: \n2025-06-28 18:19:18,278 INFO program/MainThread: ...done [25] (exit code: 0)\n2025-06-28 18:19:18,279 INFO program/MainThread: Running [26] xfs_admin -L -- /dev/mapper/test_vg3-lv7 ...\n2025-06-28 18:19:18,294 INFO program/MainThread: stdout[26]: writing all SBs\nnew label = \"\"\n\n2025-06-28 18:19:18,294 INFO program/MainThread: 
stderr[26]: \n2025-06-28 18:19:18,294 INFO program/MainThread: ...done [26] (exit code: 0)\n2025-06-28 18:19:18,294 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:18,311 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:18,316 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv7 ; status: True ;\n2025-06-28 18:19:18,317 DEBUG blivet/MainThread: test_vg3-lv7 sysfs_path set to /sys/devices/virtual/block/dm-1\n2025-06-28 18:19:18,317 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:18,318 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-1\n2025-06-28 18:19:18,325 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:18,326 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:18,338 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:18,339 INFO blivet/MainThread: executing action: [201] create device lvmlv test_vg3-lv6 (id 196)\n2025-06-28 18:19:18,343 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg3-lv6 ; status: False ;\n2025-06-28 18:19:18,346 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg3-lv6 ; orig: False ;\n2025-06-28 18:19:18,349 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg3 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:18,350 INFO program/MainThread: Running [27] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:18,381 INFO program/MainThread: stdout[27]: LVM2_VG_NAME=test_vg3 LVM2_VG_UUID=ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM LVM2_VG_SIZE=12851347456 LVM2_VG_FREE=10276044800 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=3064 LVM2_VG_FREE_COUNT=2450 LVM2_PV_COUNT=4 LVM2_VG_EXPORTED= LVM2_VG_TAGS=\n\n2025-06-28 18:19:18,381 INFO program/MainThread: stderr[27]: \n2025-06-28 18:19:18,381 INFO program/MainThread: ...done [27] (exit code: 0)\n2025-06-28 18:19:18,385 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg3-lv6 ; status: False ;\n2025-06-28 18:19:18,386 INFO program/MainThread: Running [28] lvm lvcreate -n lv6 -L 3141632K -y --type linear test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:18,431 INFO program/MainThread: stdout[28]: Logical volume \"lv6\" created.\n\n2025-06-28 18:19:18,431 INFO program/MainThread: stderr[28]: \n2025-06-28 18:19:18,431 INFO program/MainThread: ...done [28] (exit code: 0)\n2025-06-28 18:19:18,464 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv6 ; orig: False ; status: True ; controllable: True 
;\n2025-06-28 18:19:18,476 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv6 ; status: True ;\n2025-06-28 18:19:18,477 DEBUG blivet/MainThread: test_vg3-lv6 sysfs_path set to /sys/devices/virtual/block/dm-2\n2025-06-28 18:19:18,477 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:18,491 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:18,497 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg3-lv6 ; sysfs_path: /sys/devices/virtual/block/dm-2 ;\n2025-06-28 18:19:18,497 DEBUG blivet/MainThread: updated test_vg3-lv6 size to 3 GiB (3 GiB)\n2025-06-28 18:19:18,497 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-2\n2025-06-28 18:19:18,505 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:18,505 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:18,517 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:18,517 INFO blivet/MainThread: executing action: [202] create format xfs filesystem on lvmlv test_vg3-lv6 (id 196)\n2025-06-28 18:19:18,522 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv6 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:18,525 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg3-lv6 ; type: xfs ; status: False ;\n2025-06-28 18:19:18,528 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg3-lv6 ; mountpoint: ;\n2025-06-28 18:19:18,528 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. 
All arguments passed will be ignored.\n bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None,\n\n2025-06-28 18:19:18,529 INFO program/MainThread: Running [29] mkfs.xfs /dev/mapper/test_vg3-lv6 -f ...\n2025-06-28 18:19:19,365 INFO program/MainThread: stdout[29]: meta-data=/dev/mapper/test_vg3-lv6 isize=512 agcount=4, agsize=196352 blks\n = sectsz=512 attr=2, projid32bit=1\n = crc=1 finobt=1, sparse=1, rmapbt=1\n = reflink=1 bigtime=1 inobtcount=1 nrext64=1\n = exchange=0 \ndata = bsize=4096 blocks=785408, imaxpct=25\n = sunit=0 swidth=0 blks\nnaming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0\nlog =internal log bsize=4096 blocks=16384, version=2\n = sectsz=512 sunit=0 blks, lazy-count=1\nrealtime =none extsz=4096 blocks=0, rtextents=0\n\n2025-06-28 18:19:19,365 INFO program/MainThread: stderr[29]: \n2025-06-28 18:19:19,365 INFO program/MainThread: ...done [29] (exit code: 0)\n2025-06-28 18:19:19,365 INFO program/MainThread: Running [30] xfs_admin -L -- /dev/mapper/test_vg3-lv6 ...\n2025-06-28 18:19:19,380 INFO program/MainThread: stdout[30]: writing all SBs\nnew label = \"\"\n\n2025-06-28 18:19:19,380 INFO program/MainThread: stderr[30]: \n2025-06-28 18:19:19,380 INFO program/MainThread: ...done [30] (exit code: 0)\n2025-06-28 18:19:19,380 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:19,399 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:19,404 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv6 ; status: True ;\n2025-06-28 18:19:19,405 DEBUG blivet/MainThread: test_vg3-lv6 sysfs_path set to /sys/devices/virtual/block/dm-2\n2025-06-28 18:19:19,405 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:19,406 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-2\n2025-06-28 18:19:19,415 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:19,415 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:19,428 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:19,429 INFO blivet/MainThread: executing action: [192] create device lvmlv test_vg3-lv5 (id 187)\n2025-06-28 18:19:19,433 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg3-lv5 ; status: False ;\n2025-06-28 18:19:19,436 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg3-lv5 ; orig: False ;\n2025-06-28 18:19:19,440 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg3 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:19,440 INFO program/MainThread: Running [31] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:19,467 INFO program/MainThread: stdout[31]: LVM2_VG_NAME=test_vg3 LVM2_VG_UUID=ZB1tkb-ps0S-5eSt-mkNq-Zi5H-8gx6-CCtNFM LVM2_VG_SIZE=12851347456 
LVM2_VG_FREE=7059013632 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=3064 LVM2_VG_FREE_COUNT=1683 LVM2_PV_COUNT=4 LVM2_VG_EXPORTED= LVM2_VG_TAGS=\n\n2025-06-28 18:19:19,467 INFO program/MainThread: stderr[31]: \n2025-06-28 18:19:19,467 INFO program/MainThread: ...done [31] (exit code: 0)\n2025-06-28 18:19:19,471 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg3-lv5 ; status: False ;\n2025-06-28 18:19:19,471 INFO program/MainThread: Running [32] lvm lvcreate -n lv5 -L 3772416K -y --type linear test_vg3 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:19,518 INFO program/MainThread: stdout[32]: Logical volume \"lv5\" created.\n\n2025-06-28 18:19:19,519 INFO program/MainThread: stderr[32]: \n2025-06-28 18:19:19,519 INFO program/MainThread: ...done [32] (exit code: 0)\n2025-06-28 18:19:19,540 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv5 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:19,554 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv5 ; status: True ;\n2025-06-28 18:19:19,555 DEBUG blivet/MainThread: test_vg3-lv5 sysfs_path set to /sys/devices/virtual/block/dm-3\n2025-06-28 18:19:19,555 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:19,567 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:19,572 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg3-lv5 ; sysfs_path: /sys/devices/virtual/block/dm-3 ;\n2025-06-28 18:19:19,572 DEBUG blivet/MainThread: updated test_vg3-lv5 size to 3.6 GiB (3.6 GiB)\n2025-06-28 18:19:19,573 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-3\n2025-06-28 18:19:19,581 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:19,581 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:19,596 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:19,597 INFO blivet/MainThread: executing action: [193] create format xfs filesystem on lvmlv test_vg3-lv5 (id 187)\n2025-06-28 18:19:19,601 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg3-lv5 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:19,604 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg3-lv5 ; type: xfs ; status: False ;\n2025-06-28 18:19:19,608 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg3-lv5 ; mountpoint: ;\n2025-06-28 18:19:19,608 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. All arguments passed will be ignored.\n bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None,\n\n2025-06-28 18:19:19,608 INFO program/MainThread: Running [33] mkfs.xfs /dev/mapper/test_vg3-lv5 -f ...\n2025-06-28 18:19:20,451 INFO program/MainThread: stdout[33]: meta-data=/dev/mapper/test_vg3-lv5 isize=512 agcount=4, agsize=235776 blks\n = sectsz=512 attr=2, projid32bit=1\n = crc=1 finobt=1, sparse=1, rmapbt=1\n = reflink=1 bigtime=1 inobtcount=1 nrext64=1\n = exchange=0 \ndata = bsize=4096 blocks=943104, imaxpct=25\n = sunit=0 swidth=0 blks\nnaming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0\nlog =internal log bsize=4096 blocks=16384, version=2\n = sectsz=512 sunit=0 blks, lazy-count=1\nrealtime =none extsz=4096 blocks=0, rtextents=0\n\n2025-06-28 18:19:20,452 INFO program/MainThread: stderr[33]: \n2025-06-28 18:19:20,452 INFO program/MainThread: ...done [33] (exit code: 0)\n2025-06-28 18:19:20,452 INFO program/MainThread: Running [34] xfs_admin -L -- /dev/mapper/test_vg3-lv5 ...\n2025-06-28 18:19:20,467 INFO program/MainThread: stdout[34]: writing all SBs\nnew label = \"\"\n\n2025-06-28 18:19:20,468 INFO program/MainThread: 
stderr[34]: \n2025-06-28 18:19:20,468 INFO program/MainThread: ...done [34] (exit code: 0)\n2025-06-28 18:19:20,468 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:20,487 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:20,492 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg3-lv5 ; status: True ;\n2025-06-28 18:19:20,492 DEBUG blivet/MainThread: test_vg3-lv5 sysfs_path set to /sys/devices/virtual/block/dm-3\n2025-06-28 18:19:20,493 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:20,493 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-3\n2025-06-28 18:19:20,501 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:20,501 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:20,515 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:20,516 INFO blivet/MainThread: executing action: [140] create format lvmpv on disk sdf (id 37)\n2025-06-28 18:19:20,520 DEBUG blivet/MainThread: DiskDevice.setup: sdf ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:20,523 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdf ; type: lvmpv ; status: False ;\n2025-06-28 18:19:20,527 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdf ; type: lvmpv ; status: False ;\n2025-06-28 18:19:20,527 DEBUG blivet/MainThread: lvm filter: device /dev/sdf added to the list of allowed devices\n2025-06-28 18:19:20,527 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:20,527 INFO program/MainThread: Running [35] lvm pvcreate /dev/sdf --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ...\n2025-06-28 18:19:20,559 INFO program/MainThread: stdout[35]: Physical volume \"/dev/sdf\" successfully created.\n\n2025-06-28 18:19:20,560 INFO program/MainThread: stderr[35]: \n2025-06-28 18:19:20,560 INFO program/MainThread: ...done [35] (exit code: 0)\n2025-06-28 18:19:20,560 INFO program/MainThread: Running [36] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:20,571 INFO program/MainThread: stdout[36]: use_devicesfile=1\n\n2025-06-28 18:19:20,571 INFO program/MainThread: stderr[36]: \n2025-06-28 18:19:20,571 INFO program/MainThread: ...done [36] (exit code: 0)\n2025-06-28 18:19:20,571 INFO program/MainThread: Running [37] lvmdevices --adddev /dev/sdf ...\n2025-06-28 18:19:20,597 INFO program/MainThread: stdout[37]: \n2025-06-28 18:19:20,597 INFO program/MainThread: stderr[37]: \n2025-06-28 18:19:20,597 INFO program/MainThread: ...done [37] (exit code: 0)\n2025-06-28 18:19:20,597 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdf\n2025-06-28 
18:19:20,598 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:20,608 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:20,613 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdf ; status: True ;\n2025-06-28 18:19:20,613 DEBUG blivet/MainThread: sdf sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:5/block/sdf\n2025-06-28 18:19:20,614 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:20,614 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdf\n2025-06-28 18:19:20,622 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:20,623 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:20,637 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:20,638 INFO blivet/MainThread: executing action: [136] create format lvmpv on disk sde (id 32)\n2025-06-28 18:19:20,642 DEBUG blivet/MainThread: DiskDevice.setup: sde ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:20,645 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sde ; type: lvmpv ; status: False ;\n2025-06-28 18:19:20,648 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sde ; type: lvmpv ; status: False ;\n2025-06-28 18:19:20,649 DEBUG blivet/MainThread: lvm filter: device /dev/sde added to the list of allowed devices\n2025-06-28 18:19:20,649 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:20,649 INFO program/MainThread: Running [38] lvm pvcreate /dev/sde --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ...\n2025-06-28 18:19:20,679 INFO program/MainThread: stdout[38]: Physical volume \"/dev/sde\" successfully created.\n\n2025-06-28 18:19:20,680 INFO program/MainThread: stderr[38]: \n2025-06-28 18:19:20,680 INFO program/MainThread: ...done [38] (exit code: 0)\n2025-06-28 18:19:20,680 INFO program/MainThread: Running [39] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:20,690 INFO program/MainThread: stdout[39]: use_devicesfile=1\n\n2025-06-28 18:19:20,690 INFO program/MainThread: stderr[39]: \n2025-06-28 18:19:20,690 INFO program/MainThread: ...done [39] (exit code: 0)\n2025-06-28 18:19:20,690 INFO program/MainThread: Running [40] lvmdevices --adddev /dev/sde ...\n2025-06-28 18:19:20,716 INFO program/MainThread: stdout[40]: \n2025-06-28 18:19:20,716 INFO program/MainThread: stderr[40]: \n2025-06-28 18:19:20,716 INFO program/MainThread: ...done [40] (exit code: 0)\n2025-06-28 18:19:20,716 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sde\n2025-06-28 
18:19:20,717 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:20,727 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:20,732 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sde ; status: True ;\n2025-06-28 18:19:20,732 DEBUG blivet/MainThread: sde sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:4/block/sde\n2025-06-28 18:19:20,733 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:20,733 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sde\n2025-06-28 18:19:20,742 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:20,742 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:20,755 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:20,755 INFO blivet/MainThread: executing action: [132] create format lvmpv on disk sdd (id 27)\n2025-06-28 18:19:20,760 DEBUG blivet/MainThread: DiskDevice.setup: sdd ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:20,764 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdd ; type: lvmpv ; status: False ;\n2025-06-28 18:19:20,767 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdd ; type: lvmpv ; status: False ;\n2025-06-28 18:19:20,767 DEBUG blivet/MainThread: lvm filter: device /dev/sdd added to the list of allowed devices\n2025-06-28 18:19:20,767 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:20,768 INFO program/MainThread: Running [41] lvm pvcreate /dev/sdd --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ...\n2025-06-28 18:19:20,803 INFO program/MainThread: stdout[41]: Physical volume \"/dev/sdd\" successfully created.\n\n2025-06-28 18:19:20,803 INFO program/MainThread: stderr[41]: \n2025-06-28 18:19:20,804 INFO program/MainThread: ...done [41] (exit code: 0)\n2025-06-28 18:19:20,804 INFO program/MainThread: Running [42] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:20,814 INFO program/MainThread: stdout[42]: use_devicesfile=1\n\n2025-06-28 18:19:20,815 INFO program/MainThread: stderr[42]: \n2025-06-28 18:19:20,815 INFO program/MainThread: ...done [42] (exit code: 0)\n2025-06-28 18:19:20,815 INFO program/MainThread: Running [43] lvmdevices --adddev /dev/sdd ...\n2025-06-28 18:19:20,846 INFO program/MainThread: stdout[43]: \n2025-06-28 18:19:20,846 INFO program/MainThread: stderr[43]: \n2025-06-28 18:19:20,846 INFO program/MainThread: ...done [43] (exit code: 0)\n2025-06-28 18:19:20,846 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdd\n2025-06-28 
18:19:20,847 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:20,854 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:20,859 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdd ; status: True ;\n2025-06-28 18:19:20,859 DEBUG blivet/MainThread: sdd sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:3/block/sdd\n2025-06-28 18:19:20,859 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:20,860 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdd\n2025-06-28 18:19:20,867 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:20,868 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:20,880 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:20,881 INFO blivet/MainThread: executing action: [146] create device lvmvg test_vg2 (id 142)\n2025-06-28 18:19:20,885 DEBUG blivet/MainThread: LVMVolumeGroupDevice.create: test_vg2 ; status: False ;\n2025-06-28 18:19:20,888 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg2 ; orig: False ;\n2025-06-28 18:19:20,892 DEBUG blivet/MainThread: DiskDevice.setup: sdd ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:20,895 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdd ; type: lvmpv ; status: False ;\n2025-06-28 18:19:20,898 DEBUG blivet/MainThread: DiskDevice.setup: sde ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:20,901 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sde ; type: lvmpv ; status: False ;\n2025-06-28 18:19:20,904 DEBUG blivet/MainThread: DiskDevice.setup: sdf ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:20,907 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdf ; type: lvmpv ; status: False ;\n2025-06-28 18:19:20,911 DEBUG blivet/MainThread: LVMVolumeGroupDevice._create: test_vg2 ; status: False ;\n2025-06-28 18:19:20,911 INFO program/MainThread: Running [44] lvm vgcreate -s 4096K test_vg2 /dev/sdd /dev/sde /dev/sdf --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:20,977 INFO program/MainThread: stdout[44]: Volume group \"test_vg2\" successfully created\n\n2025-06-28 18:19:20,977 INFO program/MainThread: stderr[44]: \n2025-06-28 18:19:20,977 INFO program/MainThread: ...done [44] (exit code: 0)\n2025-06-28 18:19:20,991 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg2 ; orig: False ; status: False ; controllable: True ;\n2025-06-28 18:19:21,000 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg2 ; orig: False ;\n2025-06-28 18:19:21,009 DEBUG 
blivet/MainThread: DiskDevice.setup: sdd ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:21,025 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdd ; type: lvmpv ; status: False ;\n2025-06-28 18:19:21,038 DEBUG blivet/MainThread: DiskDevice.setup: sde ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:21,047 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sde ; type: lvmpv ; status: False ;\n2025-06-28 18:19:21,051 DEBUG blivet/MainThread: DiskDevice.setup: sdf ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:21,055 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdf ; type: lvmpv ; status: False ;\n2025-06-28 18:19:21,055 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:21,068 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:21,072 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg2 ; status: False ;\n2025-06-28 18:19:21,075 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg2 ; status: False ;\n2025-06-28 18:19:21,076 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:21,088 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:21,088 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=test_vg2\n2025-06-28 18:19:21,095 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:21,096 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:21,106 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:21,107 INFO blivet/MainThread: executing action: [162] create device lvmlv test_vg2-lv4 (id 157)\n2025-06-28 18:19:21,111 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg2-lv4 ; status: False ;\n2025-06-28 18:19:21,114 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg2-lv4 ; orig: False ;\n2025-06-28 18:19:21,118 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg2 ; orig: False ; status: False ; controllable: True ;\n2025-06-28 18:19:21,121 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg2 ; orig: False ;\n2025-06-28 18:19:21,124 DEBUG blivet/MainThread: DiskDevice.setup: sdd ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:21,128 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdd ; type: lvmpv ; status: False ;\n2025-06-28 18:19:21,131 DEBUG blivet/MainThread: DiskDevice.setup: sde ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:21,135 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sde ; type: lvmpv ; status: False ;\n2025-06-28 18:19:21,138 DEBUG blivet/MainThread: DiskDevice.setup: sdf ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:21,142 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdf ; type: lvmpv ; status: False ;\n2025-06-28 18:19:21,142 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:21,152 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:21,157 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg2 ; status: False ;\n2025-06-28 18:19:21,157 INFO program/MainThread: Running [45] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg2 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:21,185 INFO program/MainThread: stdout[45]: LVM2_VG_NAME=test_vg2 LVM2_VG_UUID=lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV LVM2_VG_SIZE=9638510592 LVM2_VG_FREE=9638510592 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=2298 LVM2_VG_FREE_COUNT=2298 LVM2_PV_COUNT=3 LVM2_VG_EXPORTED= LVM2_VG_TAGS=\n\n2025-06-28 18:19:21,185 INFO program/MainThread: stderr[45]: \n2025-06-28 18:19:21,185 INFO program/MainThread: ...done [45] (exit code: 0)\n2025-06-28 18:19:21,190 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg2-lv4 ; status: False ;\n2025-06-28 18:19:21,190 INFO program/MainThread: Running [46] lvm lvcreate -n lv4 -L 1888256K -y --type linear test_vg2 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:21,237 INFO program/MainThread: stdout[46]: Logical volume \"lv4\" created.\n\n2025-06-28 18:19:21,237 INFO program/MainThread: stderr[46]: \n2025-06-28 18:19:21,237 INFO program/MainThread: ...done [46] (exit code: 0)\n2025-06-28 18:19:21,250 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg2-lv4 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:21,257 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg2-lv4 ; status: True ;\n2025-06-28 18:19:21,257 DEBUG blivet/MainThread: test_vg2-lv4 sysfs_path set to /sys/devices/virtual/block/dm-4\n2025-06-28 18:19:21,257 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:21,281 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:21,286 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg2-lv4 ; sysfs_path: /sys/devices/virtual/block/dm-4 ;\n2025-06-28 18:19:21,287 DEBUG blivet/MainThread: updated test_vg2-lv4 size to 1.8 GiB (1.8 GiB)\n2025-06-28 18:19:21,287 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-4\n2025-06-28 18:19:21,295 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:21,296 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:21,309 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:21,310 INFO blivet/MainThread: executing action: [163] create format xfs filesystem on lvmlv test_vg2-lv4 (id 157)\n2025-06-28 18:19:21,314 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg2-lv4 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:21,317 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg2-lv4 ; type: xfs ; status: False ;\n2025-06-28 18:19:21,321 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg2-lv4 ; mountpoint: ;\n2025-06-28 18:19:21,321 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. 
All arguments passed will be ignored.\n bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None,\n\n2025-06-28 18:19:21,321 INFO program/MainThread: Running [47] mkfs.xfs /dev/mapper/test_vg2-lv4 -f ...\n2025-06-28 18:19:21,551 INFO program/MainThread: stdout[47]: meta-data=/dev/mapper/test_vg2-lv4 isize=512 agcount=4, agsize=118016 blks\n = sectsz=512 attr=2, projid32bit=1\n = crc=1 finobt=1, sparse=1, rmapbt=1\n = reflink=1 bigtime=1 inobtcount=1 nrext64=1\n = exchange=0 \ndata = bsize=4096 blocks=472064, imaxpct=25\n = sunit=0 swidth=0 blks\nnaming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0\nlog =internal log bsize=4096 blocks=16384, version=2\n = sectsz=512 sunit=0 blks, lazy-count=1\nrealtime =none extsz=4096 blocks=0, rtextents=0\n\n2025-06-28 18:19:21,551 INFO program/MainThread: stderr[47]: \n2025-06-28 18:19:21,551 INFO program/MainThread: ...done [47] (exit code: 0)\n2025-06-28 18:19:21,552 INFO program/MainThread: Running [48] xfs_admin -L -- /dev/mapper/test_vg2-lv4 ...\n2025-06-28 18:19:21,566 INFO program/MainThread: stdout[48]: writing all SBs\nnew label = \"\"\n\n2025-06-28 18:19:21,567 INFO program/MainThread: stderr[48]: \n2025-06-28 18:19:21,567 INFO program/MainThread: ...done [48] (exit code: 0)\n2025-06-28 18:19:21,567 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:21,586 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:21,592 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg2-lv4 ; status: True ;\n2025-06-28 18:19:21,592 DEBUG blivet/MainThread: test_vg2-lv4 sysfs_path set to /sys/devices/virtual/block/dm-4\n2025-06-28 18:19:21,592 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:21,593 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-4\n2025-06-28 18:19:21,601 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:21,601 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:21,614 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:21,615 INFO blivet/MainThread: executing action: [153] create device lvmlv test_vg2-lv3 (id 148)\n2025-06-28 18:19:21,619 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg2-lv3 ; status: False ;\n2025-06-28 18:19:21,623 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg2-lv3 ; orig: False ;\n2025-06-28 18:19:21,626 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg2 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:21,626 INFO program/MainThread: Running [49] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg2 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:21,653 INFO program/MainThread: stdout[49]: LVM2_VG_NAME=test_vg2 LVM2_VG_UUID=lsdgTC-BfnH-KhYn-7kpg-p9o4-Powd-aAaNhV LVM2_VG_SIZE=9638510592 
LVM2_VG_FREE=7704936448 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=2298 LVM2_VG_FREE_COUNT=1837 LVM2_PV_COUNT=3 LVM2_VG_EXPORTED= LVM2_VG_TAGS=\n\n2025-06-28 18:19:21,653 INFO program/MainThread: stderr[49]: \n2025-06-28 18:19:21,653 INFO program/MainThread: ...done [49] (exit code: 0)\n2025-06-28 18:19:21,657 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg2-lv3 ; status: False ;\n2025-06-28 18:19:21,657 INFO program/MainThread: Running [50] lvm lvcreate -n lv3 -L 946176K -y --type linear test_vg2 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:21,709 INFO program/MainThread: stdout[50]: Logical volume \"lv3\" created.\n\n2025-06-28 18:19:21,709 INFO program/MainThread: stderr[50]: \n2025-06-28 18:19:21,709 INFO program/MainThread: ...done [50] (exit code: 0)\n2025-06-28 18:19:21,714 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg2-lv3 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:21,719 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg2-lv3 ; status: True ;\n2025-06-28 18:19:21,719 DEBUG blivet/MainThread: test_vg2-lv3 sysfs_path set to /sys/devices/virtual/block/dm-5\n2025-06-28 18:19:21,719 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:21,748 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:21,753 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg2-lv3 ; sysfs_path: /sys/devices/virtual/block/dm-5 ;\n2025-06-28 18:19:21,753 DEBUG blivet/MainThread: updated test_vg2-lv3 size to 924 MiB (924 MiB)\n2025-06-28 18:19:21,754 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-5\n2025-06-28 18:19:21,762 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:21,763 INFO program/MainThread: Running... 
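The vgs output above shows the extent accounting these lvcreate calls drive: test_vg2 starts with 2298 free extents, and creating lv4 (`-L 1888256K` against a 4194304-byte extent size) consumes exactly 461 of them, leaving the `LVM2_VG_FREE_COUNT=1837` reported before lv3 is created. A small sketch of that arithmetic, using the numbers from this log (variable names are illustrative, not blivet API):

```python
# Reproduce the extent accounting visible in the vgs output for test_vg2.
VG_SIZE = 9638510592       # LVM2_VG_SIZE, bytes
EXTENT_SIZE = 4194304      # LVM2_VG_EXTENT_SIZE, bytes (4 MiB)
LV4_SIZE_KIB = 1888256     # lvcreate -n lv4 -L 1888256K

extent_count = VG_SIZE // EXTENT_SIZE               # total extents in the VG
lv4_extents = (LV4_SIZE_KIB * 1024) // EXTENT_SIZE  # extents consumed by lv4
free_count = extent_count - lv4_extents             # LVM2_VG_FREE_COUNT after lv4
free_bytes = free_count * EXTENT_SIZE               # LVM2_VG_FREE after lv4

print(extent_count, lv4_extents, free_count, free_bytes)
```

The same arithmetic accounts for the later test_vg1 report: lv2 at 4714496K takes 1151 extents, leaving 1147 of 2298 free.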
udevadm settle --timeout=300\n2025-06-28 18:19:21,778 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:21,779 INFO blivet/MainThread: executing action: [154] create format xfs filesystem on lvmlv test_vg2-lv3 (id 148)\n2025-06-28 18:19:21,783 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg2-lv3 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:21,787 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg2-lv3 ; type: xfs ; status: False ;\n2025-06-28 18:19:21,790 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg2-lv3 ; mountpoint: ;\n2025-06-28 18:19:21,790 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. All arguments passed will be ignored.\n bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None,\n\n2025-06-28 18:19:21,791 INFO program/MainThread: Running [51] mkfs.xfs /dev/mapper/test_vg2-lv3 -f ...\n2025-06-28 18:19:22,635 INFO program/MainThread: stdout[51]: meta-data=/dev/mapper/test_vg2-lv3 isize=512 agcount=4, agsize=59136 blks\n = sectsz=512 attr=2, projid32bit=1\n = crc=1 finobt=1, sparse=1, rmapbt=1\n = reflink=1 bigtime=1 inobtcount=1 nrext64=1\n = exchange=0 \ndata = bsize=4096 blocks=236544, imaxpct=25\n = sunit=0 swidth=0 blks\nnaming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0\nlog =internal log bsize=4096 blocks=16384, version=2\n = sectsz=512 sunit=0 blks, lazy-count=1\nrealtime =none extsz=4096 blocks=0, rtextents=0\n\n2025-06-28 18:19:22,636 INFO program/MainThread: stderr[51]: \n2025-06-28 18:19:22,636 INFO program/MainThread: ...done [51] (exit code: 0)\n2025-06-28 18:19:22,636 INFO program/MainThread: Running [52] xfs_admin -L -- /dev/mapper/test_vg2-lv3 ...\n2025-06-28 18:19:22,656 INFO program/MainThread: stdout[52]: writing all SBs\nnew label = \"\"\n\n2025-06-28 18:19:22,656 INFO program/MainThread: 
stderr[52]: \n2025-06-28 18:19:22,656 INFO program/MainThread: ...done [52] (exit code: 0)\n2025-06-28 18:19:22,656 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:22,671 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:22,676 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg2-lv3 ; status: True ;\n2025-06-28 18:19:22,676 DEBUG blivet/MainThread: test_vg2-lv3 sysfs_path set to /sys/devices/virtual/block/dm-5\n2025-06-28 18:19:22,677 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:22,677 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-5\n2025-06-28 18:19:22,685 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:22,685 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:22,701 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:22,702 INFO blivet/MainThread: executing action: [105] create format lvmpv on disk sdc (id 22)\n2025-06-28 18:19:22,706 DEBUG blivet/MainThread: DiskDevice.setup: sdc ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:22,709 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdc ; type: lvmpv ; status: False ;\n2025-06-28 18:19:22,714 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdc ; type: lvmpv ; status: False ;\n2025-06-28 18:19:22,714 DEBUG blivet/MainThread: lvm filter: device /dev/sdc added to the list of allowed devices\n2025-06-28 18:19:22,714 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:22,714 INFO program/MainThread: Running [53] lvm pvcreate /dev/sdc --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ...\n2025-06-28 18:19:22,747 INFO program/MainThread: stdout[53]: Physical volume \"/dev/sdc\" successfully created.\n\n2025-06-28 18:19:22,747 INFO program/MainThread: stderr[53]: \n2025-06-28 18:19:22,747 INFO program/MainThread: ...done [53] (exit code: 0)\n2025-06-28 18:19:22,747 INFO program/MainThread: Running [54] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:22,758 INFO program/MainThread: stdout[54]: use_devicesfile=1\n\n2025-06-28 18:19:22,758 INFO program/MainThread: stderr[54]: \n2025-06-28 18:19:22,758 INFO program/MainThread: ...done [54] (exit code: 0)\n2025-06-28 18:19:22,758 INFO program/MainThread: Running [55] lvmdevices --adddev /dev/sdc ...\n2025-06-28 18:19:22,786 INFO program/MainThread: stdout[55]: \n2025-06-28 18:19:22,786 INFO program/MainThread: stderr[55]: \n2025-06-28 18:19:22,786 INFO program/MainThread: ...done [55] (exit code: 0)\n2025-06-28 18:19:22,786 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdc\n2025-06-28 
18:19:22,787 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:22,798 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:22,803 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdc ; status: True ;\n2025-06-28 18:19:22,803 DEBUG blivet/MainThread: sdc sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:2/block/sdc\n2025-06-28 18:19:22,804 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:22,804 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdc\n2025-06-28 18:19:22,813 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:22,813 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:22,829 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:22,830 INFO blivet/MainThread: executing action: [101] create format lvmpv on disk sdb (id 7)\n2025-06-28 18:19:22,834 DEBUG blivet/MainThread: DiskDevice.setup: sdb ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:22,837 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sdb ; type: lvmpv ; status: False ;\n2025-06-28 18:19:22,840 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sdb ; type: lvmpv ; status: False ;\n2025-06-28 18:19:22,841 DEBUG blivet/MainThread: lvm filter: device /dev/sdb added to the list of allowed devices\n2025-06-28 18:19:22,841 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:22,841 INFO program/MainThread: Running [56] lvm pvcreate /dev/sdb --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ...\n2025-06-28 18:19:22,879 INFO program/MainThread: stdout[56]: Physical volume \"/dev/sdb\" successfully created.\n\n2025-06-28 18:19:22,880 INFO program/MainThread: stderr[56]: \n2025-06-28 18:19:22,880 INFO program/MainThread: ...done [56] (exit code: 0)\n2025-06-28 18:19:22,880 INFO program/MainThread: Running [57] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:22,891 INFO program/MainThread: stdout[57]: use_devicesfile=1\n\n2025-06-28 18:19:22,891 INFO program/MainThread: stderr[57]: \n2025-06-28 18:19:22,891 INFO program/MainThread: ...done [57] (exit code: 0)\n2025-06-28 18:19:22,891 INFO program/MainThread: Running [58] lvmdevices --adddev /dev/sdb ...\n2025-06-28 18:19:22,923 INFO program/MainThread: stdout[58]: \n2025-06-28 18:19:22,923 INFO program/MainThread: stderr[58]: \n2025-06-28 18:19:22,923 INFO program/MainThread: ...done [58] (exit code: 0)\n2025-06-28 18:19:22,923 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sdb\n2025-06-28 
18:19:22,924 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:22,935 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:22,940 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sdb ; status: True ;\n2025-06-28 18:19:22,940 DEBUG blivet/MainThread: sdb sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:1/block/sdb\n2025-06-28 18:19:22,941 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:22,941 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sdb\n2025-06-28 18:19:22,950 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:22,950 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:22,964 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:22,965 INFO blivet/MainThread: executing action: [97] create format lvmpv on disk sda (id 2)\n2025-06-28 18:19:22,969 DEBUG blivet/MainThread: DiskDevice.setup: sda ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:22,973 DEBUG blivet/MainThread: LVMPhysicalVolume.create: device: /dev/sda ; type: lvmpv ; status: False ;\n2025-06-28 18:19:22,976 DEBUG blivet/MainThread: LVMPhysicalVolume._create: device: /dev/sda ; type: lvmpv ; status: False ;\n2025-06-28 18:19:22,976 DEBUG blivet/MainThread: lvm filter: device /dev/sda added to the list of allowed devices\n2025-06-28 18:19:22,976 DEBUG blivet/MainThread: lvm filter: clearing the lvm devices list\n2025-06-28 18:19:22,976 INFO program/MainThread: Running [59] lvm pvcreate /dev/sda --config=log {level=7 file=/tmp/lvm.log syslog=0} -y ...\n2025-06-28 18:19:23,009 INFO program/MainThread: stdout[59]: Physical volume \"/dev/sda\" successfully created.\n\n2025-06-28 18:19:23,010 INFO program/MainThread: stderr[59]: \n2025-06-28 18:19:23,010 INFO program/MainThread: ...done [59] (exit code: 0)\n2025-06-28 18:19:23,010 INFO program/MainThread: Running [60] lvm config --typeconfig full devices/use_devicesfile --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:23,021 INFO program/MainThread: stdout[60]: use_devicesfile=1\n\n2025-06-28 18:19:23,021 INFO program/MainThread: stderr[60]: \n2025-06-28 18:19:23,021 INFO program/MainThread: ...done [60] (exit code: 0)\n2025-06-28 18:19:23,021 INFO program/MainThread: Running [61] lvmdevices --adddev /dev/sda ...\n2025-06-28 18:19:23,051 INFO program/MainThread: stdout[61]: \n2025-06-28 18:19:23,051 INFO program/MainThread: stderr[61]: \n2025-06-28 18:19:23,051 INFO program/MainThread: ...done [61] (exit code: 0)\n2025-06-28 18:19:23,051 DEBUG blivet/MainThread: lvm filter: restoring the lvm devices list to /dev/sda\n2025-06-28 
18:19:23,051 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:23,062 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,069 DEBUG blivet/MainThread: DiskDevice.update_sysfs_path: sda ; status: True ;\n2025-06-28 18:19:23,069 DEBUG blivet/MainThread: sda sysfs_path set to /sys/devices/tcm_loop_0/tcm_loop_adapter_0/host2/target2:0:1/2:0:1:0/block/sda\n2025-06-28 18:19:23,071 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:23,071 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=sda\n2025-06-28 18:19:23,079 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,079 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:23,092 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,092 INFO blivet/MainThread: executing action: [111] create device lvmvg test_vg1 (id 107)\n2025-06-28 18:19:23,096 DEBUG blivet/MainThread: LVMVolumeGroupDevice.create: test_vg1 ; status: False ;\n2025-06-28 18:19:23,099 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg1 ; orig: False ;\n2025-06-28 18:19:23,102 DEBUG blivet/MainThread: DiskDevice.setup: sda ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:23,105 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sda ; type: lvmpv ; status: False ;\n2025-06-28 18:19:23,108 DEBUG blivet/MainThread: DiskDevice.setup: sdb ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:23,111 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdb ; type: lvmpv ; status: False ;\n2025-06-28 18:19:23,114 DEBUG blivet/MainThread: DiskDevice.setup: sdc ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:23,117 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdc ; type: lvmpv ; status: False ;\n2025-06-28 18:19:23,121 DEBUG blivet/MainThread: LVMVolumeGroupDevice._create: test_vg1 ; status: False ;\n2025-06-28 18:19:23,121 INFO program/MainThread: Running [62] lvm vgcreate -s 4096K test_vg1 /dev/sda /dev/sdb /dev/sdc --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:23,192 INFO program/MainThread: stdout[62]: Volume group \"test_vg1\" successfully created\n\n2025-06-28 18:19:23,192 INFO program/MainThread: stderr[62]: \n2025-06-28 18:19:23,192 INFO program/MainThread: ...done [62] (exit code: 0)\n2025-06-28 18:19:23,206 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg1 ; orig: False ; status: False ; controllable: True ;\n2025-06-28 18:19:23,215 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg1 ; orig: False ;\n2025-06-28 18:19:23,231 DEBUG 
blivet/MainThread: DiskDevice.setup: sda ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:23,238 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sda ; type: lvmpv ; status: False ;\n2025-06-28 18:19:23,251 DEBUG blivet/MainThread: DiskDevice.setup: sdb ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:23,257 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdb ; type: lvmpv ; status: False ;\n2025-06-28 18:19:23,262 DEBUG blivet/MainThread: DiskDevice.setup: sdc ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:23,265 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdc ; type: lvmpv ; status: False ;\n2025-06-28 18:19:23,265 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:23,277 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,281 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg1 ; status: False ;\n2025-06-28 18:19:23,284 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg1 ; status: False ;\n2025-06-28 18:19:23,284 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:23,296 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,296 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=test_vg1\n2025-06-28 18:19:23,303 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,304 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:23,313 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,314 INFO blivet/MainThread: executing action: [127] create device lvmlv test_vg1-lv2 (id 122)\n2025-06-28 18:19:23,318 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg1-lv2 ; status: False ;\n2025-06-28 18:19:23,321 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg1-lv2 ; orig: False ;\n2025-06-28 18:19:23,325 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg1 ; orig: False ; status: False ; controllable: True ;\n2025-06-28 18:19:23,328 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup_parents: name: test_vg1 ; orig: False ;\n2025-06-28 18:19:23,332 DEBUG blivet/MainThread: DiskDevice.setup: sda ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:23,335 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sda ; type: lvmpv ; status: False ;\n2025-06-28 18:19:23,338 DEBUG blivet/MainThread: DiskDevice.setup: sdb ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:23,342 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdb ; type: lvmpv ; status: False ;\n2025-06-28 18:19:23,346 DEBUG blivet/MainThread: DiskDevice.setup: sdc ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:23,349 DEBUG blivet/MainThread: LVMPhysicalVolume.setup: device: /dev/sdc ; type: lvmpv ; status: False ;\n2025-06-28 18:19:23,349 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:23,361 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,365 DEBUG blivet/MainThread: LVMVolumeGroupDevice.update_sysfs_path: test_vg1 ; status: False ;\n2025-06-28 18:19:23,366 INFO program/MainThread: Running [63] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg1 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:23,395 INFO program/MainThread: stdout[63]: LVM2_VG_NAME=test_vg1 LVM2_VG_UUID=JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg LVM2_VG_SIZE=9638510592 LVM2_VG_FREE=9638510592 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=2298 LVM2_VG_FREE_COUNT=2298 LVM2_PV_COUNT=3 LVM2_VG_EXPORTED= LVM2_VG_TAGS=\n\n2025-06-28 18:19:23,395 INFO program/MainThread: stderr[63]: \n2025-06-28 18:19:23,395 INFO program/MainThread: ...done [63] (exit code: 0)\n2025-06-28 18:19:23,400 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg1-lv2 ; status: False ;\n2025-06-28 18:19:23,400 INFO program/MainThread: Running [64] lvm lvcreate -n lv2 -L 4714496K -y --type linear test_vg1 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:23,461 INFO program/MainThread: stdout[64]: Logical volume \"lv2\" created.\n\n2025-06-28 18:19:23,461 INFO program/MainThread: stderr[64]: \n2025-06-28 18:19:23,461 INFO program/MainThread: ...done [64] (exit code: 0)\n2025-06-28 18:19:23,472 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg1-lv2 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:23,479 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg1-lv2 ; status: True ;\n2025-06-28 18:19:23,479 DEBUG blivet/MainThread: test_vg1-lv2 sysfs_path set to /sys/devices/virtual/block/dm-6\n2025-06-28 18:19:23,479 INFO program/MainThread: Running... 
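The `--noheadings --nosuffix --nameprefixes --unquoted` flags on these vgs calls produce the flat `LVM2_KEY=value` lines seen in the stdout captures, which blivet then parses into fields. A minimal sketch of such a parser (the function name is hypothetical; blivet's real parsing lives in its LVM support code):

```python
def parse_lvm_nameprefixed(line):
    """Parse `vgs --nameprefixes --unquoted` output, e.g.
    `LVM2_VG_NAME=test_vg1 LVM2_VG_SIZE=9638510592 ...`, into a dict."""
    result = {}
    for token in line.split():
        key, _, value = token.partition("=")
        # Strip the LVM2_ prefix and lowercase, so LVM2_VG_NAME -> vg_name.
        result[key.removeprefix("LVM2_").lower()] = value
    return result

# The vgs stdout captured in the log for test_vg1:
vgs_line = ("LVM2_VG_NAME=test_vg1 LVM2_VG_UUID=JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg "
            "LVM2_VG_SIZE=9638510592 LVM2_VG_FREE=9638510592 LVM2_VG_EXTENT_SIZE=4194304 "
            "LVM2_VG_EXTENT_COUNT=2298 LVM2_VG_FREE_COUNT=2298 LVM2_PV_COUNT=3 "
            "LVM2_VG_EXPORTED= LVM2_VG_TAGS=")
info = parse_lvm_nameprefixed(vgs_line)
```

Note that empty-valued fields such as `LVM2_VG_EXPORTED=` survive as empty strings rather than being dropped.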
udevadm settle --timeout=300\n2025-06-28 18:19:23,493 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,498 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg1-lv2 ; sysfs_path: /sys/devices/virtual/block/dm-6 ;\n2025-06-28 18:19:23,498 DEBUG blivet/MainThread: updated test_vg1-lv2 size to 4.5 GiB (4.5 GiB)\n2025-06-28 18:19:23,498 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-6\n2025-06-28 18:19:23,506 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,506 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:23,522 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,523 INFO blivet/MainThread: executing action: [128] create format xfs filesystem on lvmlv test_vg1-lv2 (id 122)\n2025-06-28 18:19:23,527 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg1-lv2 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:23,530 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg1-lv2 ; type: xfs ; status: False ;\n2025-06-28 18:19:23,534 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg1-lv2 ; mountpoint: ;\n2025-06-28 18:19:23,534 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. 
All arguments passed will be ignored.\n bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None,\n\n2025-06-28 18:19:23,534 INFO program/MainThread: Running [65] mkfs.xfs /dev/mapper/test_vg1-lv2 -f ...\n2025-06-28 18:19:23,772 INFO program/MainThread: stdout[65]: meta-data=/dev/mapper/test_vg1-lv2 isize=512 agcount=4, agsize=294656 blks\n = sectsz=512 attr=2, projid32bit=1\n = crc=1 finobt=1, sparse=1, rmapbt=1\n = reflink=1 bigtime=1 inobtcount=1 nrext64=1\n = exchange=0 \ndata = bsize=4096 blocks=1178624, imaxpct=25\n = sunit=0 swidth=0 blks\nnaming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0\nlog =internal log bsize=4096 blocks=16384, version=2\n = sectsz=512 sunit=0 blks, lazy-count=1\nrealtime =none extsz=4096 blocks=0, rtextents=0\n\n2025-06-28 18:19:23,772 INFO program/MainThread: stderr[65]: \n2025-06-28 18:19:23,772 INFO program/MainThread: ...done [65] (exit code: 0)\n2025-06-28 18:19:23,772 INFO program/MainThread: Running [66] xfs_admin -L -- /dev/mapper/test_vg1-lv2 ...\n2025-06-28 18:19:23,790 INFO program/MainThread: stdout[66]: writing all SBs\nnew label = \"\"\n\n2025-06-28 18:19:23,790 INFO program/MainThread: stderr[66]: \n2025-06-28 18:19:23,790 INFO program/MainThread: ...done [66] (exit code: 0)\n2025-06-28 18:19:23,790 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:23,809 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,814 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg1-lv2 ; status: True ;\n2025-06-28 18:19:23,814 DEBUG blivet/MainThread: test_vg1-lv2 sysfs_path set to /sys/devices/virtual/block/dm-6\n2025-06-28 18:19:23,815 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:23,815 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-6\n2025-06-28 18:19:23,823 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,823 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:23,839 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,840 INFO blivet/MainThread: executing action: [118] create device lvmlv test_vg1-lv1 (id 113)\n2025-06-28 18:19:23,844 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.create: test_vg1-lv1 ; status: False ;\n2025-06-28 18:19:23,847 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup_parents: name: test_vg1-lv1 ; orig: False ;\n2025-06-28 18:19:23,851 DEBUG blivet/MainThread: LVMVolumeGroupDevice.setup: test_vg1 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:23,851 INFO program/MainThread: Running [67] lvm vgs --noheadings --nosuffix --nameprefixes --unquoted --units=b -o name,uuid,size,free,extent_size,extent_count,free_count,pv_count,vg_exported,vg_tags test_vg1 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:23,890 INFO program/MainThread: stdout[67]: LVM2_VG_NAME=test_vg1 LVM2_VG_UUID=JCwfsr-ocKr-azZN-dwpg-MM42-zTOm-C9QPwg LVM2_VG_SIZE=9638510592 
LVM2_VG_FREE=4810866688 LVM2_VG_EXTENT_SIZE=4194304 LVM2_VG_EXTENT_COUNT=2298 LVM2_VG_FREE_COUNT=1147 LVM2_PV_COUNT=3 LVM2_VG_EXPORTED= LVM2_VG_TAGS=\n\n2025-06-28 18:19:23,890 INFO program/MainThread: stderr[67]: \n2025-06-28 18:19:23,890 INFO program/MainThread: ...done [67] (exit code: 0)\n2025-06-28 18:19:23,894 DEBUG blivet/MainThread: LVMLogicalVolumeDevice._create: test_vg1-lv1 ; status: False ;\n2025-06-28 18:19:23,894 INFO program/MainThread: Running [68] lvm lvcreate -n lv1 -L 1417216K -y --type linear test_vg1 --config=log {level=7 file=/tmp/lvm.log syslog=0} ...\n2025-06-28 18:19:23,943 INFO program/MainThread: stdout[68]: Logical volume \"lv1\" created.\n\n2025-06-28 18:19:23,943 INFO program/MainThread: stderr[68]: \n2025-06-28 18:19:23,943 INFO program/MainThread: ...done [68] (exit code: 0)\n2025-06-28 18:19:23,961 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg1-lv1 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:23,969 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg1-lv1 ; status: True ;\n2025-06-28 18:19:23,969 DEBUG blivet/MainThread: test_vg1-lv1 sysfs_path set to /sys/devices/virtual/block/dm-7\n2025-06-28 18:19:23,969 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:23,982 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,988 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.read_current_size: exists: True ; path: /dev/mapper/test_vg1-lv1 ; sysfs_path: /sys/devices/virtual/block/dm-7 ;\n2025-06-28 18:19:23,989 DEBUG blivet/MainThread: updated test_vg1-lv1 size to 1.35 GiB (1.35 GiB)\n2025-06-28 18:19:23,989 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-7\n2025-06-28 18:19:23,997 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:23,997 INFO program/MainThread: Running... 
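The "updated test_vg1-lv1 size to 1.35 GiB" lines come from blivet re-reading the device size from sysfs after lvcreate and rendering it in binary units. The rounding visible in this run can be reproduced with a short sketch (an illustration of the log's output, not blivet's actual Size class):

```python
def kib_to_human(kib):
    """Render a KiB size the way this log shows it: GiB when >= 1 GiB, else MiB."""
    mib = kib / 1024
    if mib >= 1024:
        return f"{round(mib / 1024, 2):g} GiB"
    return f"{mib:g} MiB"

# Sizes passed to lvcreate in this run, and what blivet reports afterwards:
#   1888256K -> 1.8 GiB   (lv4)
#   946176K  -> 924 MiB   (lv3)
#   4714496K -> 4.5 GiB   (lv2)
#   1417216K -> 1.35 GiB  (lv1)
for kib in (1888256, 946176, 4714496, 1417216):
    print(kib, kib_to_human(kib))
```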
udevadm settle --timeout=300\n2025-06-28 18:19:24,011 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:24,012 INFO blivet/MainThread: executing action: [119] create format xfs filesystem on lvmlv test_vg1-lv1 (id 113)\n2025-06-28 18:19:24,016 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.setup: test_vg1-lv1 ; orig: False ; status: True ; controllable: True ;\n2025-06-28 18:19:24,019 DEBUG blivet/MainThread: XFS.create: device: /dev/mapper/test_vg1-lv1 ; type: xfs ; status: False ;\n2025-06-28 18:19:24,023 DEBUG blivet/MainThread: XFS._create: type: xfs ; device: /dev/mapper/test_vg1-lv1 ; mountpoint: ;\n2025-06-28 18:19:24,023 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/tasks/fsmkfs.py:279: DeprecationWarning: Passing arguments to gi.types.Boxed.__init__() is deprecated. All arguments passed will be ignored.\n bd_options = BlockDev.FSMkfsOptions(label=self.fs.label if label else None,\n\n2025-06-28 18:19:24,023 INFO program/MainThread: Running [69] mkfs.xfs /dev/mapper/test_vg1-lv1 -f ...\n2025-06-28 18:19:24,855 INFO program/MainThread: stdout[69]: meta-data=/dev/mapper/test_vg1-lv1 isize=512 agcount=4, agsize=88576 blks\n = sectsz=512 attr=2, projid32bit=1\n = crc=1 finobt=1, sparse=1, rmapbt=1\n = reflink=1 bigtime=1 inobtcount=1 nrext64=1\n = exchange=0 \ndata = bsize=4096 blocks=354304, imaxpct=25\n = sunit=0 swidth=0 blks\nnaming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0\nlog =internal log bsize=4096 blocks=16384, version=2\n = sectsz=512 sunit=0 blks, lazy-count=1\nrealtime =none extsz=4096 blocks=0, rtextents=0\n\n2025-06-28 18:19:24,855 INFO program/MainThread: stderr[69]: \n2025-06-28 18:19:24,855 INFO program/MainThread: ...done [69] (exit code: 0)\n2025-06-28 18:19:24,855 INFO program/MainThread: Running [70] xfs_admin -L -- /dev/mapper/test_vg1-lv1 ...\n2025-06-28 18:19:24,871 INFO program/MainThread: stdout[70]: writing all SBs\nnew label = \"\"\n\n2025-06-28 18:19:24,871 INFO program/MainThread: 
stderr[70]: \n2025-06-28 18:19:24,871 INFO program/MainThread: ...done [70] (exit code: 0)\n2025-06-28 18:19:24,871 INFO program/MainThread: Running... udevadm settle --timeout=300\n2025-06-28 18:19:24,891 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:24,895 DEBUG blivet/MainThread: LVMLogicalVolumeDevice.update_sysfs_path: test_vg1-lv1 ; status: True ;\n2025-06-28 18:19:24,896 DEBUG blivet/MainThread: test_vg1-lv1 sysfs_path set to /sys/devices/virtual/block/dm-7\n2025-06-28 18:19:24,896 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:24,897 INFO program/MainThread: Running... udevadm trigger --action=change --subsystem-match=block --sysname-match=dm-7\n2025-06-28 18:19:24,904 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:24,905 INFO program/MainThread: Running... 
udevadm settle --timeout=300\n2025-06-28 18:19:24,922 DEBUG program/MainThread: Return code: 0\n2025-06-28 18:19:24,923 WARNING py.warnings/MainThread: /usr/lib/python3.13/site-packages/blivet/util.py:651: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in staticmethod() if you want to preserve the old behavior\n self.id = self._newid_gen() # pylint: disable=attribute-defined-outside-init,assignment-from-no-return\n\n2025-06-28 18:19:24,928 DEBUG blivet/MainThread: PartitionDevice._set_parted_partition: xvda1 ;\n2025-06-28 18:19:24,928 DEBUG blivet/MainThread: device xvda1 new parted_partition parted.Partition instance --\n disk: fileSystem: None\n number: 1 path: /dev/xvda1 type: 0\n name: active: True busy: False\n geometry: PedPartition: <_ped.Partition object at 0x7fbad22453a0>\n2025-06-28 18:19:24,931 DEBUG blivet/MainThread: PartitionDevice._set_parted_partition: xvda2 ;\n2025-06-28 18:19:24,931 DEBUG blivet/MainThread: device xvda2 new parted_partition parted.Partition instance --\n disk: fileSystem: \n number: 2 path: /dev/xvda2 type: 0\n name: active: True busy: True\n geometry: PedPartition: <_ped.Partition object at 0x7fbad2273b00>\n2025-06-28 18:19:24,935 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg1-lv1 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:24,938 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 1.35 GiB lvmlv test_vg1-lv1 (113) with existing xfs filesystem\n2025-06-28 18:19:24,938 DEBUG blivet/MainThread: resolved '/dev/mapper/test_vg1-lv1' to 'test_vg1-lv1' (lvmlv)\n2025-06-28 18:19:24,942 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg1-lv2 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:24,946 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 4.5 GiB lvmlv test_vg1-lv2 (122) with existing xfs filesystem\n2025-06-28 18:19:24,946 DEBUG blivet/MainThread: 
resolved '/dev/mapper/test_vg1-lv2' to 'test_vg1-lv2' (lvmlv)\n2025-06-28 18:19:24,949 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg2-lv3 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:24,952 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 924 MiB lvmlv test_vg2-lv3 (148) with existing xfs filesystem\n2025-06-28 18:19:24,952 DEBUG blivet/MainThread: resolved '/dev/mapper/test_vg2-lv3' to 'test_vg2-lv3' (lvmlv)\n2025-06-28 18:19:24,955 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg2-lv4 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:24,959 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 1.8 GiB lvmlv test_vg2-lv4 (157) with existing xfs filesystem\n2025-06-28 18:19:24,959 DEBUG blivet/MainThread: resolved '/dev/mapper/test_vg2-lv4' to 'test_vg2-lv4' (lvmlv)\n2025-06-28 18:19:24,962 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg3-lv5 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:24,965 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 3.6 GiB lvmlv test_vg3-lv5 (187) with existing xfs filesystem\n2025-06-28 18:19:24,965 DEBUG blivet/MainThread: resolved '/dev/mapper/test_vg3-lv5' to 'test_vg3-lv5' (lvmlv)\n2025-06-28 18:19:24,969 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg3-lv6 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:24,972 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 3 GiB lvmlv test_vg3-lv6 (196) with existing xfs filesystem\n2025-06-28 18:19:24,972 DEBUG blivet/MainThread: resolved '/dev/mapper/test_vg3-lv6' to 'test_vg3-lv6' (lvmlv)\n2025-06-28 18:19:24,976 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg3-lv7 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:24,979 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 1.2 
GiB lvmlv test_vg3-lv7 (205) with existing xfs filesystem\n2025-06-28 18:19:24,979 DEBUG blivet/MainThread: resolved '/dev/mapper/test_vg3-lv7' to 'test_vg3-lv7' (lvmlv)\n2025-06-28 18:19:24,982 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/test_vg3-lv8 ; incomplete: False ; hidden: False ;\n2025-06-28 18:19:24,985 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 1.2 GiB lvmlv test_vg3-lv8 (214) with existing xfs filesystem\n2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved '/dev/mapper/test_vg3-lv8' to 'test_vg3-lv8' (lvmlv)\n2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved 'UUID=d16d04d4-8c39-458d-91ab-62a71063488e' to 'test_vg1-lv1' (lvmlv)\n2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved 'UUID=991d22ac-80b0-450e-8c69-466187ba5696' to 'test_vg1-lv2' (lvmlv)\n2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved 'UUID=13c0d833-a00e-49fa-a58c-aa045851ccb6' to 'test_vg2-lv3' (lvmlv)\n2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved 'UUID=8ff069e1-541f-476d-a94a-129e7396e539' to 'test_vg2-lv4' (lvmlv)\n2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved 'UUID=eb47ddaa-1d93-495c-b8f9-7231835c82c5' to 'test_vg3-lv5' (lvmlv)\n2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved 'UUID=84001af2-01d9-4734-b51a-79efe7f59395' to 'test_vg3-lv6' (lvmlv)\n2025-06-28 18:19:24,986 DEBUG blivet/MainThread: resolved 'UUID=596601cb-c0d2-421d-af31-da3ca0a3650c' to 'test_vg3-lv7' (lvmlv)\n2025-06-28 18:19:24,987 DEBUG blivet/MainThread: resolved 'UUID=77628583-14fb-431d-8722-115eadb2c621' to 'test_vg3-lv8' (lvmlv)\n2025-06-28 18:20:04,961 INFO blivet/MainThread: sys.argv = ['/tmp/ansible_fedora.linux_system_roles.blivet_payload_h6q1vr90/ansible_fedora.linux_system_roles.blivet_payload.zip/ansible_collections/fedora/linux_system_roles/plugins/modules/blivet.py']\n2025-06-28 18:20:10,997 INFO blivet/MainThread: sys.argv = 
['/tmp/ansible_fedora.linux_system_roles.blivet_payload_t3_0a_er/ansible_fedora.linux_system_roles.blivet_payload.zip/ansible_collections/fedora/linux_system_roles/plugins/modules/blivet.py']",
        "task_name": "Debug why list of unused disks has changed",
        "task_path": "/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/tasks/cleanup.yml:40"
    }
]
SYSTEM ROLES ERRORS END v1

TASKS RECAP ********************************************************************
Saturday 28 June 2025 18:20:17 -0400 (0:00:00.924)       0:01:34.664 *********
===============================================================================
fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state -- 11.87s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70
fedora.linux_system_roles.storage : Make sure blivet is available ------- 4.87s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
fedora.linux_system_roles.snapshot : Run snapshot module snapshot ------- 4.41s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:14
fedora.linux_system_roles.storage : Get service facts ------------------- 3.15s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
fedora.linux_system_roles.storage : Get service facts ------------------- 2.93s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
fedora.linux_system_roles.storage : Get service facts ------------------- 2.73s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
fedora.linux_system_roles.snapshot : Run snapshot module remove --------- 2.37s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:14
fedora.linux_system_roles.storage : Update facts ------------------------ 2.09s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224
fedora.linux_system_roles.snapshot : Run snapshot module check ---------- 2.02s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:14
fedora.linux_system_roles.snapshot : Run snapshot module snapshot ------- 1.99s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:14
fedora.linux_system_roles.storage : Get required packages --------------- 1.95s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19
fedora.linux_system_roles.snapshot : Ensure required packages are installed --- 1.91s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:6
fedora.linux_system_roles.snapshot : Run snapshot module check ---------- 1.89s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:14
Find unused disks in the system ----------------------------------------- 1.81s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/tests/snapshot/get_unused_disk.yml:23
fedora.linux_system_roles.snapshot : Ensure required packages are installed --- 1.55s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:6
fedora.linux_system_roles.snapshot : Ensure required packages are installed --- 1.53s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:6
fedora.linux_system_roles.snapshot : Ensure required packages are installed --- 1.52s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:6
fedora.linux_system_roles.storage : Make sure required packages are installed --- 1.49s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38
fedora.linux_system_roles.snapshot : Ensure required packages are installed --- 1.47s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/snapshot/tasks/main.yml:6
fedora.linux_system_roles.storage : Make sure blivet is available ------- 1.46s
/tmp/collections-Z5w/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2