ansible-playbook 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.5 (default, Nov 14 2023, 16:14:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
Using /etc/ansible/ansible.cfg as config file
[WARNING]: running playbook inside collection fedora.linux_system_roles
Skipping callback 'actionable', as we already have a stdout callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'null', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
Skipping callback 'selective', as we already have a stdout callback.
Skipping callback 'skippy', as we already have a stdout callback.
Skipping callback 'stderr', as we already have a stdout callback.
Skipping callback 'unixy', as we already have a stdout callback.
Skipping callback 'yaml', as we already have a stdout callback.

PLAYBOOK: tests_misc.yml *******************************************************
1 plays in /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/tests_misc.yml

PLAY [Test misc features of the storage role] **********************************

TASK [Gathering Facts] *********************************************************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/tests_misc.yml:2
Tuesday 22 July 2025 08:34:53 -0400 (0:00:00.220) 0:00:00.220 **********
ok: [managed-node13]
META: ran handlers

TASK [Include the role to ensure packages are installed] ***********************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/tests_misc.yml:15
Tuesday 22 July 2025 08:34:56 -0400 (0:00:03.108) 0:00:03.329 **********

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2
Tuesday 22 July 2025 08:34:56 -0400 (0:00:00.274) 0:00:03.604 **********
included: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml for managed-node13

TASK [fedora.linux_system_roles.storage : Ensure ansible_facts used by role] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:2
Tuesday 22 July 2025 08:34:57 -0400 (0:00:00.484) 0:00:04.088 **********
skipping: [managed-node13] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Set platform/version specific variables] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:7
Tuesday 22 July 2025 08:34:57 -0400 (0:00:00.105) 0:00:04.194 **********
skipping: [managed-node13] => (item=RedHat.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node13] => (item=CentOS.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS.yml",
    "skip_reason": "Conditional result was False"
}
ok: [managed-node13] => (item=CentOS_7.yml) => {
    "ansible_facts": {
        "__storage_blivet_diskvolume_mkfs_option_map": {
            "ext2": "-F",
            "ext3": "-F",
            "ext4": "-F"
        },
        "blivet_package_list": [
            "python-enum34",
            "python-blivet3",
            "libblockdev-crypto",
            "libblockdev-dm",
            "libblockdev-lvm",
            "libblockdev-mdraid",
            "libblockdev-swap",
            "{{ 'libblockdev-s390' if ansible_architecture == 's390x' else 'libblockdev' }}"
        ]
    },
    "ansible_included_var_files": [
        "/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/vars/CentOS_7.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_7.yml"
}
skipping: [managed-node13] => (item=CentOS_7.9.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_7.9.yml",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Check if system is ostree] ***********
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
Tuesday 22 July 2025 08:34:57 -0400 (0:00:00.384) 0:00:04.578 **********
ok: [managed-node13] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}

TASK [fedora.linux_system_roles.storage : Set flag to indicate system is ostree] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:30
Tuesday 22 July 2025 08:34:59 -0400 (0:00:01.329) 0:00:05.907 **********
ok: [managed-node13] => {
    "ansible_facts": {
        "__storage_is_ostree": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Define an empty list of pools to be used in testing] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:5
Tuesday 22 July 2025 08:34:59 -0400 (0:00:00.412) 0:00:06.321 **********
ok: [managed-node13] => {
    "ansible_facts": {
        "_storage_pools_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Define an empty list of volumes to be used in testing] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:9
Tuesday 22 July 2025 08:34:59 -0400 (0:00:00.208) 0:00:06.529 **********
ok: [managed-node13] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Include the appropriate provider tasks] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
Tuesday 22 July 2025 08:34:59 -0400 (0:00:00.149) 0:00:06.678 **********
included: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml for managed-node13

TASK [fedora.linux_system_roles.storage : Make sure blivet is available] *******
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
Tuesday 22 July 2025 08:35:00 -0400 (0:00:00.601) 0:00:07.279 **********
ok: [managed-node13] => {
    "changed": false,
    "rc": 0,
    "results": [
        "python-enum34-1.0.4-1.el7.noarch providing python-enum34 is already installed",
        "1:python2-blivet3-3.1.3-3.el7.noarch providing python-blivet3 is already installed",
        "libblockdev-crypto-2.18-5.el7.x86_64 providing libblockdev-crypto is already installed",
        "libblockdev-dm-2.18-5.el7.x86_64 providing libblockdev-dm is already installed",
        "libblockdev-lvm-2.18-5.el7.x86_64 providing libblockdev-lvm is already installed",
        "libblockdev-mdraid-2.18-5.el7.x86_64 providing libblockdev-mdraid is already installed",
        "libblockdev-swap-2.18-5.el7.x86_64 providing libblockdev-swap is already installed",
        "libblockdev-2.18-5.el7.x86_64 providing libblockdev is already installed"
    ]
}

TASK [fedora.linux_system_roles.storage : Show storage_pools] ******************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:9
Tuesday 22 July 2025 08:35:06 -0400 (0:00:06.157) 0:00:13.436 **********
ok: [managed-node13] => {
    "storage_pools | d([])": []
}

TASK [fedora.linux_system_roles.storage : Show storage_volumes] ****************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14
Tuesday 22 July 2025 08:35:07 -0400 (0:00:00.417) 0:00:13.854 **********
ok: [managed-node13] => {
    "storage_volumes | d([])": []
}

TASK [fedora.linux_system_roles.storage : Get required packages] ***************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19
Tuesday 22 July 2025 08:35:07 -0400 (0:00:00.439) 0:00:14.293 **********
ok: [managed-node13] => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "leaves": [],
    "mounts": [],
    "packages": [],
    "pools": [],
    "volumes": []
}

TASK [fedora.linux_system_roles.storage : Enable copr repositories if needed] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32
Tuesday 22 July 2025 08:35:10 -0400 (0:00:03.081) 0:00:17.375 **********
included: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml for managed-node13

TASK [fedora.linux_system_roles.storage : Check if the COPR support packages should be installed] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:2
Tuesday 22 July 2025 08:35:11 -0400 (0:00:00.550) 0:00:17.925 **********

TASK [fedora.linux_system_roles.storage : Make sure COPR support packages are present] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:13
Tuesday 22 July 2025 08:35:11 -0400 (0:00:00.131) 0:00:18.057 **********
skipping: [managed-node13] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Enable COPRs] ************************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/enable_coprs.yml:19
Tuesday 22 July 2025 08:35:11 -0400 (0:00:00.253) 0:00:18.311 **********

TASK [fedora.linux_system_roles.storage : Make sure required packages are installed] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38
Tuesday 22 July 2025 08:35:11 -0400 (0:00:00.091) 0:00:18.403 **********
ok: [managed-node13] => {
    "changed": false,
    "rc": 0,
    "results": [
        "kpartx-0.4.9-136.el7_9.x86_64 providing kpartx is already installed"
    ]
}

TASK [fedora.linux_system_roles.storage : Get service facts] *******************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
Tuesday 22 July 2025 08:35:13 -0400 (0:00:02.009) 0:00:20.412 **********
ok: [managed-node13] => {
    "ansible_facts": {
        "services": {
            "NetworkManager-dispatcher.service": { "name": "NetworkManager-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" },
            "NetworkManager-wait-online.service": { "name": "NetworkManager-wait-online.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "NetworkManager.service": { "name": "NetworkManager.service", "source": "systemd", "state": "running", "status": "enabled" },
            "arp-ethers.service": { "name": "arp-ethers.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "auditd.service": { "name": "auditd.service", "source": "systemd", "state": "running", "status": "enabled" },
            "auth-rpcgss-module.service": { "name": "auth-rpcgss-module.service", "source": "systemd", "state": "stopped", "status": "static" },
            "autovt@.service": { "name": "autovt@.service", "source": "systemd", "state": "unknown", "status": "enabled" },
            "blivet.service": { "name": "blivet.service", "source": "systemd", "state": "inactive", "status": "static" },
            "blk-availability.service": { "name": "blk-availability.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "brandbot.service": { "name": "brandbot.service", "source": "systemd", "state": "inactive", "status": "static" },
            "chrony-dnssrv@.service": { "name": "chrony-dnssrv@.service", "source": "systemd", "state": "unknown", "status": "static" },
            "chrony-wait.service": { "name": "chrony-wait.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "chronyd.service": { "name": "chronyd.service", "source": "systemd", "state": "running", "status": "enabled" },
            "cloud-config.service": { "name": "cloud-config.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "cloud-final.service": { "name": "cloud-final.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "cloud-init-local.service": { "name": "cloud-init-local.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "cloud-init.service": { "name": "cloud-init.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "console-getty.service": { "name": "console-getty.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "console-shell.service": { "name": "console-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "container-getty@.service": { "name": "container-getty@.service", "source": "systemd", "state": "unknown", "status": "static" },
            "cpupower.service": { "name": "cpupower.service", "source": "systemd", "state": "stopped", "status": "disabled" },
            "crond.service": { "name": "crond.service", "source": "systemd", "state": "running", "status": "enabled" },
            "dbus-org.freedesktop.hostname1.service": { "name": "dbus-org.freedesktop.hostname1.service", "source": "systemd", "state": "inactive", "status": "static" },
            "dbus-org.freedesktop.import1.service": { "name": "dbus-org.freedesktop.import1.service", "source": "systemd", "state": "inactive", "status": "static" },
            "dbus-org.freedesktop.locale1.service": { "name": "dbus-org.freedesktop.locale1.service", "source": "systemd", "state": "inactive", "status": "static" },
            "dbus-org.freedesktop.login1.service": { "name": "dbus-org.freedesktop.login1.service", "source": "systemd", "state": "active", "status": "static" },
            "dbus-org.freedesktop.machine1.service": { "name": "dbus-org.freedesktop.machine1.service", "source": "systemd", "state": "inactive", "status": "static" },
            "dbus-org.freedesktop.nm-dispatcher.service": { "name": "dbus-org.freedesktop.nm-dispatcher.service", "source": "systemd", "state": "inactive", "status": "enabled" },
            "dbus-org.freedesktop.timedate1.service": { "name": "dbus-org.freedesktop.timedate1.service", "source": "systemd", "state": "inactive", "status": "static" },
            "dbus.service": { "name": "dbus.service", "source": "systemd", "state": "running", "status": "static" },
            "debug-shell.service": { "name": "debug-shell.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "dm-event.service": { "name": "dm-event.service", "source": "systemd", "state": "stopped", "status": "static" },
            "dmraid-activation.service": { "name": "dmraid-activation.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "dracut-cmdline.service": { "name": "dracut-cmdline.service", "source": "systemd", "state": "stopped", "status": "static" },
            "dracut-initqueue.service": { "name": "dracut-initqueue.service", "source": "systemd", "state": "stopped", "status": "static" },
            "dracut-mount.service": { "name": "dracut-mount.service", "source": "systemd", "state": "stopped", "status": "static" },
            "dracut-pre-mount.service": { "name": "dracut-pre-mount.service", "source": "systemd", "state": "stopped", "status": "static" },
            "dracut-pre-pivot.service": { "name": "dracut-pre-pivot.service", "source": "systemd", "state": "stopped", "status": "static" },
            "dracut-pre-trigger.service": { "name": "dracut-pre-trigger.service", "source": "systemd", "state": "stopped", "status": "static" },
            "dracut-pre-udev.service": { "name": "dracut-pre-udev.service", "source": "systemd", "state": "stopped", "status": "static" },
            "dracut-shutdown.service": { "name": "dracut-shutdown.service", "source": "systemd", "state": "stopped", "status": "static" },
            "ebtables.service": { "name": "ebtables.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "emergency.service": { "name": "emergency.service", "source": "systemd", "state": "stopped", "status": "static" },
            "firewalld.service": { "name": "firewalld.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "fstrim.service": { "name": "fstrim.service", "source": "systemd", "state": "inactive", "status": "static" },
            "getty@.service": { "name": "getty@.service", "source": "systemd", "state": "unknown", "status": "enabled" },
            "getty@tty1.service": { "name": "getty@tty1.service", "source": "systemd", "state": "running", "status": "unknown" },
            "gssproxy.service": { "name": "gssproxy.service", "source": "systemd", "state": "running", "status": "disabled" },
            "halt-local.service": { "name": "halt-local.service", "source": "systemd", "state": "inactive", "status": "static" },
            "initrd-cleanup.service": { "name": "initrd-cleanup.service", "source": "systemd", "state": "stopped", "status": "static" },
            "initrd-parse-etc.service": { "name": "initrd-parse-etc.service", "source": "systemd", "state": "stopped", "status": "static" },
            "initrd-switch-root.service": { "name": "initrd-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" },
            "initrd-udevadm-cleanup-db.service": { "name": "initrd-udevadm-cleanup-db.service", "source": "systemd", "state": "stopped", "status": "static" },
            "iprdump.service": { "name": "iprdump.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "iprinit.service": { "name": "iprinit.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "iprupdate.service": { "name": "iprupdate.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "irqbalance.service": { "name": "irqbalance.service", "source": "systemd", "state": "running", "status": "enabled" },
            "kdump.service": { "name": "kdump.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "kmod-static-nodes.service": { "name": "kmod-static-nodes.service", "source": "systemd", "state": "stopped", "status": "static" },
            "lvm2-lvmetad.service": { "name": "lvm2-lvmetad.service", "source": "systemd", "state": "running", "status": "static" },
            "lvm2-lvmpolld.service": { "name": "lvm2-lvmpolld.service", "source": "systemd", "state": "stopped", "status": "static" },
            "lvm2-monitor.service": { "name": "lvm2-monitor.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "lvm2-pvscan@.service": { "name": "lvm2-pvscan@.service", "source": "systemd", "state": "unknown", "status": "static" },
            "mdadm-grow-continue@.service": { "name": "mdadm-grow-continue@.service", "source": "systemd", "state": "unknown", "status": "static" },
            "mdadm-last-resort@.service": { "name": "mdadm-last-resort@.service", "source": "systemd", "state": "unknown", "status": "static" },
            "mdcheck_continue.service": { "name": "mdcheck_continue.service", "source": "systemd", "state": "inactive", "status": "static" },
            "mdcheck_start.service": { "name": "mdcheck_start.service", "source": "systemd", "state": "inactive", "status": "static" },
            "mdmon@.service": { "name": "mdmon@.service", "source": "systemd", "state": "unknown", "status": "static" },
            "mdmonitor-oneshot.service": { "name": "mdmonitor-oneshot.service", "source": "systemd", "state": "inactive", "status": "static" },
            "mdmonitor.service": { "name": "mdmonitor.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "messagebus.service": { "name": "messagebus.service", "source": "systemd", "state": "active", "status": "static" },
            "microcode.service": { "name": "microcode.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "netconsole": { "name": "netconsole", "source": "sysv", "state": "stopped", "status": "disabled" },
            "network": { "name": "network", "source": "sysv", "state": "running", "status": "enabled" },
            "network.service": { "name": "network.service", "source": "systemd", "state": "stopped", "status": "unknown" },
            "nfs-blkmap.service": { "name": "nfs-blkmap.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "nfs-config.service": { "name": "nfs-config.service", "source": "systemd", "state": "stopped", "status": "static" },
            "nfs-idmap.service": { "name": "nfs-idmap.service", "source": "systemd", "state": "inactive", "status": "static" },
            "nfs-idmapd.service": { "name": "nfs-idmapd.service", "source": "systemd", "state": "stopped", "status": "static" },
            "nfs-lock.service": { "name": "nfs-lock.service", "source": "systemd", "state": "inactive", "status": "static" },
            "nfs-mountd.service": { "name": "nfs-mountd.service", "source": "systemd", "state": "stopped", "status": "static" },
            "nfs-rquotad.service": { "name": "nfs-rquotad.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "nfs-secure.service": { "name": "nfs-secure.service", "source": "systemd", "state": "inactive", "status": "static" },
            "nfs-server.service": { "name": "nfs-server.service", "source": "systemd", "state": "stopped", "status": "disabled" },
            "nfs-utils.service": { "name": "nfs-utils.service", "source": "systemd", "state": "stopped", "status": "static" },
            "nfs.service": { "name": "nfs.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "nfslock.service": { "name": "nfslock.service", "source": "systemd", "state": "inactive", "status": "static" },
            "plymouth-halt.service": { "name": "plymouth-halt.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "plymouth-kexec.service": { "name": "plymouth-kexec.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "plymouth-poweroff.service": { "name": "plymouth-poweroff.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "plymouth-quit-wait.service": { "name": "plymouth-quit-wait.service", "source": "systemd", "state": "stopped", "status": "disabled" },
            "plymouth-quit.service": { "name": "plymouth-quit.service", "source": "systemd", "state": "stopped", "status": "disabled" },
            "plymouth-read-write.service": { "name": "plymouth-read-write.service", "source": "systemd", "state": "stopped", "status": "disabled" },
            "plymouth-reboot.service": { "name": "plymouth-reboot.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "plymouth-start.service": { "name": "plymouth-start.service", "source": "systemd", "state": "stopped", "status": "disabled" },
            "plymouth-switch-root.service": { "name": "plymouth-switch-root.service", "source": "systemd", "state": "stopped", "status": "static" },
            "polkit.service": { "name": "polkit.service", "source": "systemd", "state": "running", "status": "static" },
            "postfix.service": { "name": "postfix.service", "source": "systemd", "state": "running", "status": "enabled" },
            "qemu-guest-agent.service": { "name": "qemu-guest-agent.service", "source": "systemd", "state": "inactive", "status": "enabled" },
            "quotaon.service": { "name": "quotaon.service", "source": "systemd", "state": "inactive", "status": "static" },
            "rc-local.service": { "name": "rc-local.service", "source": "systemd", "state": "stopped", "status": "static" },
            "rdisc.service": { "name": "rdisc.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "rescue.service": { "name": "rescue.service", "source": "systemd", "state": "stopped", "status": "static" },
            "restraintd.service": { "name": "restraintd.service", "source": "systemd", "state": "running", "status": "enabled" },
            "rhel-autorelabel-mark.service": { "name": "rhel-autorelabel-mark.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "rhel-autorelabel.service": { "name": "rhel-autorelabel.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "rhel-configure.service": { "name": "rhel-configure.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "rhel-dmesg.service": { "name": "rhel-dmesg.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "rhel-domainname.service": { "name": "rhel-domainname.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "rhel-import-state.service": { "name": "rhel-import-state.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "rhel-loadmodules.service": { "name": "rhel-loadmodules.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "rhel-readonly.service": { "name": "rhel-readonly.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "rngd.service": { "name": "rngd.service", "source": "systemd", "state": "running", "status": "enabled" },
            "rpc-gssd.service": { "name": "rpc-gssd.service", "source": "systemd", "state": "stopped", "status": "static" },
            "rpc-rquotad.service": { "name": "rpc-rquotad.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "rpc-statd-notify.service": { "name": "rpc-statd-notify.service", "source": "systemd", "state": "stopped", "status": "static" },
            "rpc-statd.service": { "name": "rpc-statd.service", "source": "systemd", "state": "stopped", "status": "static" },
            "rpcbind.service": { "name": "rpcbind.service", "source": "systemd", "state": "running", "status": "enabled" },
            "rpcgssd.service": { "name": "rpcgssd.service", "source": "systemd", "state": "inactive", "status": "static" },
            "rpcidmapd.service": { "name": "rpcidmapd.service", "source": "systemd", "state": "inactive", "status": "static" },
            "rsyncd.service": { "name": "rsyncd.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "rsyncd@.service": { "name": "rsyncd@.service", "source": "systemd", "state": "unknown", "status": "static" },
            "rsyslog.service": { "name": "rsyslog.service", "source": "systemd", "state": "running", "status": "enabled" },
            "selinux-policy-migrate-local-changes@.service": { "name": "selinux-policy-migrate-local-changes@.service", "source": "systemd", "state": "unknown", "status": "static" },
            "selinux-policy-migrate-local-changes@targeted.service": { "name": "selinux-policy-migrate-local-changes@targeted.service", "source": "systemd", "state": "stopped", "status": "unknown" },
            "serial-getty@.service": { "name": "serial-getty@.service", "source": "systemd", "state": "unknown", "status": "disabled" },
            "serial-getty@ttyS0.service": { "name": "serial-getty@ttyS0.service", "source": "systemd", "state": "running", "status": "unknown" },
            "sshd-keygen.service": { "name": "sshd-keygen.service", "source": "systemd", "state": "stopped", "status": "static" },
            "sshd.service": { "name": "sshd.service", "source": "systemd", "state": "running", "status": "enabled" },
            "sshd@.service": { "name": "sshd@.service", "source": "systemd", "state": "unknown", "status": "static" },
            "systemd-ask-password-console.service": { "name": "systemd-ask-password-console.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-ask-password-plymouth.service": { "name": "systemd-ask-password-plymouth.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-ask-password-wall.service": { "name": "systemd-ask-password-wall.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-backlight@.service": { "name": "systemd-backlight@.service", "source": "systemd", "state": "unknown", "status": "static" },
            "systemd-binfmt.service": { "name": "systemd-binfmt.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-bootchart.service": { "name": "systemd-bootchart.service", "source": "systemd", "state": "inactive", "status": "disabled" },
            "systemd-firstboot.service": { "name": "systemd-firstboot.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-fsck-root.service": { "name": "systemd-fsck-root.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-fsck@.service": { "name": "systemd-fsck@.service", "source": "systemd", "state": "unknown", "status": "static" },
            "systemd-halt.service": { "name": "systemd-halt.service", "source": "systemd", "state": "inactive", "status": "static" },
            "systemd-hibernate-resume@.service": { "name": "systemd-hibernate-resume@.service", "source": "systemd", "state": "unknown", "status": "static" },
            "systemd-hibernate.service": { "name": "systemd-hibernate.service", "source": "systemd", "state": "inactive", "status": "static" },
            "systemd-hostnamed.service": { "name": "systemd-hostnamed.service", "source": "systemd", "state": "inactive", "status": "static" },
            "systemd-hwdb-update.service": { "name": "systemd-hwdb-update.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-hybrid-sleep.service": { "name": "systemd-hybrid-sleep.service", "source": "systemd", "state": "inactive", "status": "static" },
            "systemd-importd.service": { "name": "systemd-importd.service", "source": "systemd", "state": "inactive", "status": "static" },
            "systemd-initctl.service": { "name": "systemd-initctl.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-journal-catalog-update.service": { "name": "systemd-journal-catalog-update.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-journal-flush.service": { "name": "systemd-journal-flush.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-journald.service": { "name": "systemd-journald.service", "source": "systemd", "state": "running", "status": "static" },
            "systemd-kexec.service": { "name": "systemd-kexec.service", "source": "systemd", "state": "inactive", "status": "static" },
            "systemd-localed.service": { "name": "systemd-localed.service", "source": "systemd", "state": "inactive", "status": "static" },
            "systemd-logind.service": { "name": "systemd-logind.service", "source": "systemd", "state": "running", "status": "static" },
            "systemd-machine-id-commit.service": { "name": "systemd-machine-id-commit.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-machined.service": { "name": "systemd-machined.service", "source": "systemd", "state": "inactive", "status": "static" },
            "systemd-modules-load.service": { "name": "systemd-modules-load.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-nspawn@.service": { "name": "systemd-nspawn@.service", "source": "systemd", "state": "unknown", "status": "disabled" },
            "systemd-poweroff.service": { "name": "systemd-poweroff.service", "source": "systemd", "state": "inactive", "status": "static" },
            "systemd-quotacheck.service": { "name": "systemd-quotacheck.service", "source": "systemd", "state": "inactive", "status": "static" },
            "systemd-random-seed.service": { "name": "systemd-random-seed.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-readahead-collect.service": { "name": "systemd-readahead-collect.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "systemd-readahead-done.service": { "name": "systemd-readahead-done.service", "source": "systemd", "state": "stopped", "status": "indirect" },
            "systemd-readahead-drop.service": { "name": "systemd-readahead-drop.service", "source": "systemd", "state": "inactive", "status": "enabled" },
            "systemd-readahead-replay.service": { "name": "systemd-readahead-replay.service", "source": "systemd", "state": "stopped", "status": "enabled" },
            "systemd-reboot.service": { "name": "systemd-reboot.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-remount-fs.service": { "name": "systemd-remount-fs.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-rfkill@.service": { "name": "systemd-rfkill@.service", "source": "systemd", "state": "unknown", "status": "static" },
            "systemd-shutdownd.service": { "name": "systemd-shutdownd.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-suspend.service": { "name": "systemd-suspend.service", "source": "systemd", "state": "inactive", "status": "static" },
            "systemd-sysctl.service": { "name": "systemd-sysctl.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-timedated.service": { "name": "systemd-timedated.service", "source": "systemd", "state": "inactive", "status": "static" },
            "systemd-tmpfiles-clean.service": { "name": "systemd-tmpfiles-clean.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-tmpfiles-setup-dev.service": { "name": "systemd-tmpfiles-setup-dev.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-tmpfiles-setup.service": { "name": "systemd-tmpfiles-setup.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-udev-settle.service": { "name": "systemd-udev-settle.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-udev-trigger.service": { "name": "systemd-udev-trigger.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-udevd.service": { "name": "systemd-udevd.service", "source": "systemd", "state": "running", "status": "static" },
            "systemd-update-done.service": { "name": "systemd-update-done.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-update-utmp-runlevel.service": { "name": "systemd-update-utmp-runlevel.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-update-utmp.service": { "name": "systemd-update-utmp.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-user-sessions.service": { "name": "systemd-user-sessions.service", "source": "systemd", "state": "stopped", "status": "static" },
            "systemd-vconsole-setup.service": { "name": "systemd-vconsole-setup.service", "source": "systemd", "state": "stopped", "status": "static" },
            "teamd@.service": { "name": "teamd@.service", "source": "systemd", "state": "unknown", "status": "static" },
            "tuned.service": { "name": "tuned.service", "source": "systemd", "state": "running", "status": "enabled" },
            "wpa_supplicant.service": { "name": "wpa_supplicant.service", "source": "systemd", "state": "inactive", "status": "disabled" }
        }
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Set storage_cryptsetup_services] *****
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58
Tuesday 22 July 2025 08:35:16 -0400 (0:00:03.363) 0:00:23.776 **********
ok: [managed-node13] => {
    "ansible_facts": {
        "storage_cryptsetup_services": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Mask the systemd cryptsetup services] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:64
Tuesday 22 July 2025 08:35:17 -0400 (0:00:00.514) 0:00:24.290 **********

TASK [fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70
Tuesday 22 July 2025 08:35:17 -0400 (0:00:00.293) 0:00:24.584 **********
ok: [managed-node13] => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "leaves": [],
    "mounts": [],
    "packages": [],
    "pools": [],
    "volumes": []
}

TASK [fedora.linux_system_roles.storage : Workaround for udev issue on some platforms] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:85
Tuesday 22 July 2025 08:35:19 -0400 (0:00:02.097) 0:00:26.682 **********
skipping: [managed-node13] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Check if /etc/fstab is present] ******
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92
Tuesday 22 July 2025 08:35:20 -0400 (0:00:00.311) 0:00:26.993 **********
ok: [managed-node13] => {
    "changed": false,
    "stat": {
        "atime": 1753187017.5739367,
        "attr_flags": "e",
        "attributes": [
            "extents"
        ],
        "block_size": 4096,
        "blocks": 8,
        "charset": "us-ascii",
        "checksum": "4db69458c23204aa354c1fce8c724ba0713d6623",
        "ctime": 1718881114.40265,
        "dev": 51713,
        "device_type": 0,
        "executable": false,
        "exists": true,
        "gid": 0,
        "gr_name": "root",
        "inode": 131078,
        "isblk": false,
        "ischr": false,
        "isdir": false,
        "isfifo": false,
        "isgid": false,
        "islnk": false,
        "isreg": true,
        "issock": false,
        "isuid": false,
        "mimetype": "text/plain",
        "mode": "0644",
        "mtime": 1718881114.40265,
        "nlink": 1,
        "path": "/etc/fstab",
        "pw_name": "root",
        "readable": true,
        "rgrp": true,
        "roth": true,
        "rusr": true,
        "size": 1207,
        "uid": 0,
        "version": "18446744072852913878",
        "wgrp": false,
        "woth": false,
        "writeable": true,
        "wusr": true,
        "xgrp": false,
        "xoth": false,
        "xusr": false
    }
}

TASK [fedora.linux_system_roles.storage : Add fingerprint to /etc/fstab if present] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:97
Tuesday 22 July 2025 08:35:21 -0400 (0:00:01.316) 0:00:28.310 **********
skipping: [managed-node13] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Unmask the systemd cryptsetup services] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:115
Tuesday 22 July 2025 08:35:21 -0400 (0:00:00.199) 0:00:28.510 **********

TASK [fedora.linux_system_roles.storage : Show blivet_output] ******************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:121
Tuesday 22 July 2025 08:35:21 -0400 (0:00:00.159) 0:00:28.669 **********
ok: [managed-node13] => {
    "blivet_output": {
        "actions": [],
        "changed": false,
        "crypts": [],
        "failed": false,
        "leaves": [],
        "mounts": [],
        "packages": [],
        "pools": [],
        "volumes": []
    }
}

TASK [fedora.linux_system_roles.storage : Set the list of pools for test verification] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:130
Tuesday 22 July 2025 08:35:22 -0400 (0:00:00.359) 0:00:29.029 **********
ok: [managed-node13] => {
    "ansible_facts": {
        "_storage_pools_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Set the list of volumes for test verification] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:134
Tuesday 22 July 2025 08:35:22 -0400 (0:00:00.383) 0:00:29.413 **********
ok: [managed-node13] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    },
    "changed": false
}

TASK [fedora.linux_system_roles.storage : Remove obsolete mounts] **************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:150
Tuesday 22 July 2025 08:35:22 -0400 (0:00:00.360) 0:00:29.774 **********

TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:161
Tuesday 22 July 2025 08:35:23 -0400 (0:00:00.276) 0:00:30.050 **********
skipping: [managed-node13] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Set up new/current mounts] ***********
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:166
Tuesday 22 July 2025 08:35:23 -0400 (0:00:00.321) 0:00:30.372 **********

TASK [fedora.linux_system_roles.storage : Manage mount ownership/permissions] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:177
Tuesday 22 July 2025 08:35:23 -0400 (0:00:00.298) 0:00:30.670 **********

TASK [fedora.linux_system_roles.storage : Tell systemd to refresh its view of /etc/fstab] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:189
Tuesday 22 July 2025 08:35:24 -0400 (0:00:00.399) 0:00:31.069 **********
skipping: [managed-node13] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197
Tuesday 22 July 2025 08:35:24 -0400 (0:00:00.297) 0:00:31.366 **********
ok: [managed-node13] => {
    "changed": false,
    "stat": {
        "atime": 1753187381.2504056,
        "attr_flags": "e",
        "attributes": [
            "extents"
        ],
        "block_size": 4096,
        "blocks": 0,
        "charset": "binary",
        "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709",
        "ctime": 1718879272.062,
        "dev": 51713,
        "device_type": 0,
        "executable": false,
        "exists": true,
        "gid": 0,
        "gr_name": "root",
        "inode": 131079,
        "isblk": false,
        "ischr": false,
        "isdir": false,
        "isfifo": false,
        "isgid": false,
        "islnk": false,
        "isreg": true,
        "issock": false,
        "isuid": false,
        "mimetype": "inode/x-empty",
        "mode": "0600",
        "mtime": 1718879026.308,
        "nlink": 1,
        "path": "/etc/crypttab",
        "pw_name": "root",
        "readable": true,
        "rgrp": false,
        "roth": false,
        "rusr": true,
        "size": 0,
        "uid": 0,
        "version": "18446744072852913879",
        "wgrp": false,
        "woth": false,
        "writeable": true,
        "wusr": true,
        "xgrp": false,
        "xoth": false,
        "xusr": false
    }
}

TASK [fedora.linux_system_roles.storage : Manage /etc/crypttab to account for changes we just made] ***
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:202
Tuesday 22 July 2025 08:35:25 -0400 (0:00:01.277) 0:00:32.644 **********

TASK [fedora.linux_system_roles.storage : Update facts] ************************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224
Tuesday 22 July 2025 08:35:26 -0400 (0:00:00.333) 0:00:32.978 **********
ok: [managed-node13]

TASK [Mark tasks to be skipped] ************************************************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/tests_misc.yml:19
Tuesday 22 July 2025 08:35:28 -0400 (0:00:01.847) 0:00:34.826 **********
ok: [managed-node13] => {
    "ansible_facts": {
        "storage_skip_checks": [
            "blivet_available",
            "packages_installed",
            "service_facts"
        ]
    },
    "changed": false
}

TASK [Get unused disks for test] ***********************************************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/tests_misc.yml:26
Tuesday 22 July 2025 08:35:28 -0400 (0:00:00.336) 0:00:35.162 **********
included: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml for managed-node13

TASK [Ensure test packages] ****************************************************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:2
Tuesday 22 July 2025 08:35:28 -0400 (0:00:00.581) 0:00:35.743 **********
ok: [managed-node13] => {
    "changed": false,
    "rc": 0,
    "results": [
        "util-linux-2.23.2-65.el7_9.1.x86_64 providing util-linux is already installed"
    ]
}

TASK [Find unused disks in the system] *****************************************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:11
Tuesday 22 July 2025 08:35:31 -0400 (0:00:02.636) 0:00:38.380 **********
ok: [managed-node13] => {
    "changed": false,
    "disks": "Unable to find unused disk",
    "info": [
        "Line: NAME=\"/dev/xvda\" TYPE=\"disk\" SIZE=\"268435456000\" FSTYPE=\"\" LOG-SEC=\"512\"",
        "Line: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"268434390528\" FSTYPE=\"ext4\" LOG-SEC=\"512\"",
        "Line type [part] is not disk: NAME=\"/dev/xvda1\" TYPE=\"part\" SIZE=\"268434390528\" FSTYPE=\"ext4\" LOG-SEC=\"512\"",
        "filename [xvda1] is a partition",
        "Disk [/dev/xvda] attrs [{'fstype': '', 'type': 'disk', 'ssize': '512', 'size': '268435456000'}] has partitions"
    ]
}

TASK [Debug why there are no unused disks] *************************************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:20
Tuesday 22 July 2025 08:35:34 -0400 (0:00:02.574) 0:00:40.955 **********
ok: [managed-node13] => {
    "changed": false,
    "cmd": "set -x\nexec 1>&2\nlsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC\njournalctl -ex\n",
    "delta": "0:00:00.039949",
    "end": "2025-07-22 08:35:36.234666",
    "rc": 0,
    "start": "2025-07-22 08:35:36.194717"
}

STDERR:

+ exec
+ lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
NAME="/dev/xvda" TYPE="disk" SIZE="268435456000" FSTYPE="" LOG-SEC="512"
NAME="/dev/xvda1" TYPE="part" SIZE="268434390528" FSTYPE="ext4" LOG-SEC="512"
+ journalctl -ex
-- Logs begin at Tue 2025-07-22 08:23:16 EDT, end at Tue 2025-07-22 08:35:36 EDT. --
Jul 22 08:23:16 localhost.localdomain kernel: blkfront: xvda: barrier or flush: disabled; persistent grants: disabled; indirect descriptors: enabled;
Jul 22 08:23:16 localhost.localdomain kernel: xvda: xvda1
Jul 22 08:23:16 localhost.localdomain kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input2
Jul 22 08:23:16 localhost.localdomain kernel: libata version 3.00 loaded.
Jul 22 08:23:16 localhost.localdomain kernel: ata_piix 0000:00:01.1: version 2.13
Jul 22 08:23:16 localhost.localdomain kernel: scsi host0: ata_piix
Jul 22 08:23:16 localhost.localdomain kernel: scsi host1: ata_piix
Jul 22 08:23:16 localhost.localdomain kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc000 irq 14
Jul 22 08:23:16 localhost.localdomain kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc008 irq 15
Jul 22 08:23:17 localhost.localdomain systemd[1]: Found device /dev/disk/by-uuid/c7b7d6a5-fd01-4b9b-bcca-153eaff9d312.
-- Subject: Unit dev-disk-by\x2duuid-c7b7d6a5\x2dfd01\x2d4b9b\x2dbcca\x2d153eaff9d312.device has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit dev-disk-by\x2duuid-c7b7d6a5\x2dfd01\x2d4b9b\x2dbcca\x2d153eaff9d312.device has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:17 localhost.localdomain systemd[1]: Starting File System Check on /dev/disk/by-uuid/c7b7d6a5-fd01-4b9b-bcca-153eaff9d312...
-- Subject: Unit systemd-fsck-root.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit systemd-fsck-root.service has begun starting up.
Jul 22 08:23:17 localhost.localdomain systemd-fsck[255]: /dev/xvda1: clean, 63819/262144 files, 527174/1048320 blocks
Jul 22 08:23:17 localhost.localdomain systemd[1]: Started File System Check on /dev/disk/by-uuid/c7b7d6a5-fd01-4b9b-bcca-153eaff9d312.
-- Subject: Unit systemd-fsck-root.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit systemd-fsck-root.service has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:17 localhost.localdomain kernel: psmouse serio1: alps: Unknown ALPS touchpad: E7=10 00 64, EC=10 00 64
Jul 22 08:23:17 localhost.localdomain kernel: tsc: Refined TSC clocksource calibration: 2899.975 MHz
Jul 22 08:23:17 localhost.localdomain kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Jul 22 08:23:19 localhost.localdomain kernel: floppy0: no floppy controllers found
Jul 22 08:23:19 localhost.localdomain systemd[1]: Started dracut initqueue hook.
-- Subject: Unit dracut-initqueue.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit dracut-initqueue.service has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:19 localhost.localdomain systemd[1]: Reached target Remote File Systems (Pre).
-- Subject: Unit remote-fs-pre.target has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit remote-fs-pre.target has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:19 localhost.localdomain systemd[1]: Reached target Remote File Systems.
-- Subject: Unit remote-fs.target has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit remote-fs.target has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:19 localhost.localdomain systemd[1]: Mounting /sysroot...
-- Subject: Unit sysroot.mount has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit sysroot.mount has begun starting up.
Jul 22 08:23:19 localhost.localdomain kernel: EXT4-fs (xvda1): mounted filesystem with ordered data mode. Opts: (null)
Jul 22 08:23:19 localhost.localdomain systemd[1]: Mounted /sysroot.
-- Subject: Unit sysroot.mount has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit sysroot.mount has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:19 localhost.localdomain systemd[1]: Reached target Initrd Root File System.
-- Subject: Unit initrd-root-fs.target has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit initrd-root-fs.target has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:19 localhost.localdomain systemd[1]: Starting Reload Configuration from the Real Root...
-- Subject: Unit initrd-parse-etc.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit initrd-parse-etc.service has begun starting up.
Jul 22 08:23:19 localhost.localdomain systemd[1]: Reloading.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Started Reload Configuration from the Real Root.
-- Subject: Unit initrd-parse-etc.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit initrd-parse-etc.service has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Reached target Initrd File Systems.
-- Subject: Unit initrd-fs.target has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit initrd-fs.target has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Reached target Initrd Default Target.
-- Subject: Unit initrd.target has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit initrd.target has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Starting dracut pre-pivot and cleanup hook...
-- Subject: Unit dracut-pre-pivot.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit dracut-pre-pivot.service has begun starting up.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Started dracut pre-pivot and cleanup hook.
-- Subject: Unit dracut-pre-pivot.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit dracut-pre-pivot.service has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
-- Subject: Unit initrd-cleanup.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit initrd-cleanup.service has begun starting up.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Starting Plymouth switch root service...
-- Subject: Unit plymouth-switch-root.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit plymouth-switch-root.service has begun starting up.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped dracut pre-pivot and cleanup hook.
-- Subject: Unit dracut-pre-pivot.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit dracut-pre-pivot.service has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped target Initrd Default Target.
-- Subject: Unit initrd.target has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit initrd.target has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped target Remote File Systems.
-- Subject: Unit remote-fs.target has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit remote-fs.target has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped target Remote File Systems (Pre).
-- Subject: Unit remote-fs-pre.target has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit remote-fs-pre.target has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped target Basic System.
-- Subject: Unit basic.target has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit basic.target has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped target Paths.
-- Subject: Unit paths.target has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit paths.target has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped target System Initialization.
-- Subject: Unit sysinit.target has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit sysinit.target has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped target Local File Systems.
-- Subject: Unit local-fs.target has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit local-fs.target has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped Apply Kernel Variables.
-- Subject: Unit systemd-sysctl.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit systemd-sysctl.service has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped target Sockets.
-- Subject: Unit sockets.target has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit sockets.target has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped target Slices.
-- Subject: Unit slices.target has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit slices.target has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped target Timers.
-- Subject: Unit timers.target has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit timers.target has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopping udev Kernel Device Manager...
-- Subject: Unit systemd-udevd.service has begun shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit systemd-udevd.service has begun shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped dracut initqueue hook.
-- Subject: Unit dracut-initqueue.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit dracut-initqueue.service has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped udev Coldplug all Devices.
-- Subject: Unit systemd-udev-trigger.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit systemd-udev-trigger.service has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped target Swap.
-- Subject: Unit swap.target has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit swap.target has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped udev Kernel Device Manager.
-- Subject: Unit systemd-udevd.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit systemd-udevd.service has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped dracut pre-udev hook.
-- Subject: Unit dracut-pre-udev.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit dracut-pre-udev.service has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped dracut cmdline hook.
-- Subject: Unit dracut-cmdline.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit dracut-cmdline.service has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped Create Static Device Nodes in /dev.
-- Subject: Unit systemd-tmpfiles-setup-dev.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit systemd-tmpfiles-setup-dev.service has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Stopped Create list of required static device nodes for the current kernel.
-- Subject: Unit kmod-static-nodes.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kmod-static-nodes.service has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Closed udev Kernel Socket.
-- Subject: Unit systemd-udevd-kernel.socket has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit systemd-udevd-kernel.socket has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Closed udev Control Socket.
-- Subject: Unit systemd-udevd-control.socket has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit systemd-udevd-control.socket has finished shutting down.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Starting Cleanup udevd DB...
-- Subject: Unit initrd-udevadm-cleanup-db.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit initrd-udevadm-cleanup-db.service has begun starting up.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Started Cleaning Up and Shutting Down Daemons.
-- Subject: Unit initrd-cleanup.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit initrd-cleanup.service has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Started Cleanup udevd DB.
-- Subject: Unit initrd-udevadm-cleanup-db.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit initrd-udevadm-cleanup-db.service has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Reached target Switch Root.
-- Subject: Unit initrd-switch-root.target has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit initrd-switch-root.target has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Started Plymouth switch root service.
-- Subject: Unit plymouth-switch-root.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit plymouth-switch-root.service has finished starting up.
--
-- The start-up result is done.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Starting Switch Root...
-- Subject: Unit initrd-switch-root.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit initrd-switch-root.service has begun starting up.
Jul 22 08:23:20 localhost.localdomain systemd[1]: Switching root.
Jul 22 08:23:20 localhost.localdomain unknown[97]: Journal stopped
-- Subject: The journal has been stopped
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- The system journal process has shut down and closed all currently
-- active journal files.
Jul 22 08:23:27 localhost.localdomain systemd-journal[345]: Runtime journal is using 8.0M (max allowed 180.0M, trying to leave 270.0M free of 1.7G available → current limit 180.0M).
Jul 22 08:23:27 localhost.localdomain systemd-journald[97]: Received SIGTERM from PID 1 (systemd).
Jul 22 08:23:27 localhost.localdomain kernel: random: crng init done
Jul 22 08:23:27 localhost.localdomain kernel: type=1404 audit(1753187002.490:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295
Jul 22 08:23:27 localhost.localdomain kernel: SELinux: 2048 avtab hash slots, 112757 rules.
Jul 22 08:23:27 localhost.localdomain kernel: SELinux: 2048 avtab hash slots, 112757 rules.
Jul 22 08:23:27 localhost.localdomain kernel: SELinux: 8 users, 14 roles, 5049 types, 316 bools, 1 sens, 1024 cats
Jul 22 08:23:27 localhost.localdomain kernel: SELinux: 130 classes, 112757 rules
Jul 22 08:23:27 localhost.localdomain kernel: SELinux: Completing initialization.
Jul 22 08:23:27 localhost.localdomain kernel: SELinux: Setting up existing superblocks.
Jul 22 08:23:27 localhost.localdomain kernel: type=1403 audit(1753187003.479:3): policy loaded auid=4294967295 ses=4294967295
Jul 22 08:23:27 localhost.localdomain systemd[1]: Successfully loaded SELinux policy in 1.026287s.
Jul 22 08:23:27 localhost.localdomain kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
Jul 22 08:23:27 localhost.localdomain systemd[1]: Inserted module 'ip_tables'
Jul 22 08:23:27 localhost.localdomain systemd[1]: Relabelled /dev, /run and /sys/fs/cgroup in 8.808ms.
Jul 22 08:23:27 localhost.localdomain kernel: EXT4-fs (xvda1): re-mounted. Opts: (null)
Opts: (null) Jul 22 08:23:27 localhost.localdomain systemd-journal[345]: Journal started -- Subject: The journal has been started -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- The system journal process has started up, opened the journal -- files for writing and is now ready to process requests. Jul 22 08:23:24 localhost.localdomain systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN) Jul 22 08:23:24 localhost.localdomain systemd[1]: Detected virtualization xen. Jul 22 08:23:24 localhost.localdomain systemd[1]: Detected architecture x86-64. Jul 22 08:23:24 localhost.localdomain systemd[1]: Set hostname to <localhost.localdomain>. Jul 22 08:23:24 localhost.localdomain systemd[1]: Initializing machine ID from random generator. Jul 22 08:23:24 localhost.localdomain systemd[1]: Installed transient /etc/machine-id file. Jul 22 08:23:27 localhost.localdomain systemd[1]: Starting Flush Journal to Persistent Storage... -- Subject: Unit systemd-journal-flush.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-journal-flush.service has begun starting up. Jul 22 08:23:27 localhost.localdomain systemd[1]: Started udev Coldplug all Devices. -- Subject: Unit systemd-udev-trigger.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-udev-trigger.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:27 localhost.localdomain systemd-udevd[358]: starting version 219 Jul 22 08:23:27 localhost.localdomain systemd-udevd[358]: Network interface NamePolicy= disabled on kernel command line, ignoring. Jul 22 08:23:27 localhost.localdomain systemd[1]: Started Flush Journal to Persistent Storage. -- Subject: Unit systemd-journal-flush.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-journal-flush.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:27 localhost.localdomain systemd[1]: Started Configure read-only root support. -- Subject: Unit rhel-readonly.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rhel-readonly.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:27 localhost.localdomain systemd[1]: Reached target Local File Systems. -- Subject: Unit local-fs.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit local-fs.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:27 localhost.localdomain systemd[1]: Starting Commit a transient machine-id on disk... -- Subject: Unit systemd-machine-id-commit.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-machine-id-commit.service has begun starting up. Jul 22 08:23:27 localhost.localdomain systemd[1]: Starting Tell Plymouth To Write Out Runtime Data...
-- Subject: Unit plymouth-read-write.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit plymouth-read-write.service has begun starting up. Jul 22 08:23:27 localhost.localdomain systemd[1]: Starting Preprocess NFS configuration... -- Subject: Unit nfs-config.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit nfs-config.service has begun starting up. Jul 22 08:23:27 localhost.localdomain systemd[1]: Starting Import network configuration from initramfs... -- Subject: Unit rhel-import-state.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rhel-import-state.service has begun starting up. Jul 22 08:23:27 localhost.localdomain systemd[1]: Starting Load/Save Random Seed... -- Subject: Unit systemd-random-seed.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-random-seed.service has begun starting up. Jul 22 08:23:27 localhost.localdomain systemd[1]: Started Load/Save Random Seed. -- Subject: Unit systemd-random-seed.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-random-seed.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:27 localhost.localdomain systemd[1]: Started Preprocess NFS configuration. -- Subject: Unit nfs-config.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit nfs-config.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:27 localhost.localdomain systemd[1]: Started Commit a transient machine-id on disk. -- Subject: Unit systemd-machine-id-commit.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-machine-id-commit.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:27 localhost.localdomain systemd[1]: Started Tell Plymouth To Write Out Runtime Data. -- Subject: Unit plymouth-read-write.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit plymouth-read-write.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:27 localhost.localdomain systemd[1]: Started Import network configuration from initramfs. -- Subject: Unit rhel-import-state.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rhel-import-state.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:27 localhost.localdomain systemd[1]: Starting Create Volatile Files and Directories... -- Subject: Unit systemd-tmpfiles-setup.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-tmpfiles-setup.service has begun starting up. Jul 22 08:23:27 localhost.localdomain systemd[1]: Started Create Volatile Files and Directories. 
-- Subject: Unit systemd-tmpfiles-setup.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-tmpfiles-setup.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:27 localhost.localdomain systemd[1]: Mounting RPC Pipe File System... -- Subject: Unit var-lib-nfs-rpc_pipefs.mount has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit var-lib-nfs-rpc_pipefs.mount has begun starting up. Jul 22 08:23:27 localhost.localdomain systemd[1]: Starting Security Auditing Service... -- Subject: Unit auditd.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit auditd.service has begun starting up. Jul 22 08:23:27 localhost.localdomain systemd[1]: Started udev Kernel Device Manager. -- Subject: Unit systemd-udevd.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-udevd.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:27 localhost.localdomain kernel: RPC: Registered named UNIX socket transport module. Jul 22 08:23:27 localhost.localdomain kernel: RPC: Registered udp transport module. Jul 22 08:23:27 localhost.localdomain kernel: RPC: Registered tcp transport module. Jul 22 08:23:27 localhost.localdomain kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 22 08:23:27 localhost.localdomain systemd[1]: Mounted RPC Pipe File System. -- Subject: Unit var-lib-nfs-rpc_pipefs.mount has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit var-lib-nfs-rpc_pipefs.mount has finished starting up. -- -- The start-up result is done. Jul 22 08:23:27 localhost.localdomain systemd[1]: Reached target rpc_pipefs.target. -- Subject: Unit rpc_pipefs.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rpc_pipefs.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:27 localhost.localdomain kernel: input: PC Speaker as /devices/platform/pcspkr/input/input4 Jul 22 08:23:28 localhost.localdomain kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 22 08:23:28 localhost.localdomain systemd[1]: Found device /dev/ttyS0. -- Subject: Unit dev-ttyS0.device has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit dev-ttyS0.device has finished starting up. -- -- The start-up result is done. 
Jul 22 08:23:28 localhost.localdomain kernel: [TTM] Zone kernel: Available graphics memory: 1843226 kiB Jul 22 08:23:28 localhost.localdomain kernel: [TTM] Initializing pool allocator Jul 22 08:23:28 localhost.localdomain kernel: [TTM] Initializing DMA pool allocator Jul 22 08:23:28 localhost.localdomain kernel: [drm] fb mappable at 0xF8000000 Jul 22 08:23:28 localhost.localdomain kernel: [drm] vram aper at 0xF8000000 Jul 22 08:23:28 localhost.localdomain kernel: [drm] size 33554432 Jul 22 08:23:28 localhost.localdomain kernel: [drm] fb depth is 16 Jul 22 08:23:28 localhost.localdomain kernel: [drm] pitch is 2048 Jul 22 08:23:28 localhost.localdomain auditd[426]: Started dispatcher: /sbin/audispd pid: 428 Jul 22 08:23:28 localhost.localdomain kernel: fbcon: cirrusdrmfb (fb0) is primary device Jul 22 08:23:28 localhost.localdomain kernel: Console: switching to colour frame buffer device 128x48 Jul 22 08:23:28 localhost.localdomain kernel: cirrus 0000:00:02.0: fb0: cirrusdrmfb frame buffer device Jul 22 08:23:28 localhost.localdomain kernel: cryptd: max_cpu_qlen set to 1000 Jul 22 08:23:28 localhost.localdomain kernel: type=1305 audit(1753187008.269:4): audit_pid=426 old=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1 Jul 22 08:23:28 localhost.localdomain kernel: [drm] Initialized cirrus 1.0.0 20110418 for 0000:00:02.0 on minor 0 Jul 22 08:23:28 localhost.localdomain audispd[428]: No plugins found, exiting Jul 22 08:23:28 localhost.localdomain kernel: ppdev: user-space parallel port driver Jul 22 08:23:28 localhost.localdomain kernel: AVX2 version of gcm_enc/dec engaged. Jul 22 08:23:28 localhost.localdomain kernel: AES CTR mode by8 optimization enabled Jul 22 08:23:28 localhost.localdomain kernel: alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni) Jul 22 08:23:28 localhost.localdomain kernel: alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni) Jul 22 08:23:28 localhost.localdomain auditd[426]: Init complete, auditd 2.8.5 listening for events (startup state enable) Jul 22 08:23:28 localhost.localdomain kernel: EDAC sbridge: Seeking for: PCI ID 8086:2fa0 Jul 22 08:23:28 localhost.localdomain kernel: EDAC sbridge: Ver: 1.1.2 Jul 22 08:23:28 localhost.localdomain augenrules[474]: /sbin/augenrules: No change Jul 22 08:23:28 localhost.localdomain augenrules[474]: No rules Jul 22 08:23:28 localhost.localdomain augenrules[474]: enabled 1 Jul 22 08:23:28 localhost.localdomain augenrules[474]: failure 1 Jul 22 08:23:28 localhost.localdomain augenrules[474]: pid 426 Jul 22 08:23:28 localhost.localdomain augenrules[474]: rate_limit 0 Jul 22 08:23:28 localhost.localdomain augenrules[474]: backlog_limit 8192 Jul 22 08:23:28 localhost.localdomain augenrules[474]: lost 0 Jul 22 08:23:28 localhost.localdomain augenrules[474]: backlog 0 Jul 22 08:23:28 localhost.localdomain augenrules[474]: enabled 1 Jul 22 08:23:28 localhost.localdomain augenrules[474]: failure 1 Jul 22 08:23:28 localhost.localdomain augenrules[474]: pid 426 Jul 22 08:23:28 localhost.localdomain augenrules[474]: rate_limit 0 Jul 22 08:23:28 localhost.localdomain augenrules[474]: backlog_limit 8192 Jul 22 08:23:28 localhost.localdomain augenrules[474]: lost 0 Jul 22 08:23:28 localhost.localdomain augenrules[474]: backlog 1 Jul 22 08:23:28 localhost.localdomain systemd[1]: Started Security Auditing Service. 
-- Subject: Unit auditd.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit auditd.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:28 localhost.localdomain systemd[1]: Starting Update UTMP about System Boot/Shutdown... -- Subject: Unit systemd-update-utmp.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-update-utmp.service has begun starting up. Jul 22 08:23:28 localhost.localdomain systemd[1]: Started Update UTMP about System Boot/Shutdown. -- Subject: Unit systemd-update-utmp.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-update-utmp.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:28 localhost.localdomain systemd[1]: Reached target System Initialization. -- Subject: Unit sysinit.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sysinit.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:28 localhost.localdomain systemd[1]: Listening on RPCbind Server Activation Socket. -- Subject: Unit rpcbind.socket has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rpcbind.socket has finished starting up. -- -- The start-up result is done. Jul 22 08:23:28 localhost.localdomain systemd[1]: Starting RPC bind service... -- Subject: Unit rpcbind.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rpcbind.service has begun starting up. Jul 22 08:23:28 localhost.localdomain systemd[1]: Started Daily Cleanup of Temporary Directories. -- Subject: Unit systemd-tmpfiles-clean.timer has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-tmpfiles-clean.timer has finished starting up. -- -- The start-up result is done. Jul 22 08:23:28 localhost.localdomain systemd[1]: Reached target Timers. -- Subject: Unit timers.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit timers.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:28 localhost.localdomain systemd[1]: Listening on D-Bus System Message Bus Socket. -- Subject: Unit dbus.socket has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit dbus.socket has finished starting up. -- -- The start-up result is done. Jul 22 08:23:28 localhost.localdomain systemd[1]: Starting Initial cloud-init job (pre-networking)... -- Subject: Unit cloud-init-local.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-init-local.service has begun starting up. Jul 22 08:23:28 localhost.localdomain systemd[1]: Reached target Sockets. -- Subject: Unit sockets.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sockets.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:28 localhost.localdomain systemd[1]: Reached target Basic System. 
-- Subject: Unit basic.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit basic.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:28 localhost.localdomain systemd[1]: Started irqbalance daemon. -- Subject: Unit irqbalance.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit irqbalance.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:28 localhost.localdomain systemd[1]: Starting Authorization Manager... -- Subject: Unit polkit.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit polkit.service has begun starting up. Jul 22 08:23:28 localhost.localdomain systemd[1]: Started D-Bus System Message Bus. -- Subject: Unit dbus.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit dbus.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:29 localhost.localdomain polkitd[503]: Started polkitd version 0.112 Jul 22 08:23:29 localhost.localdomain systemd[1]: Starting NTP client/server... -- Subject: Unit chronyd.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit chronyd.service has begun starting up. Jul 22 08:23:29 localhost.localdomain systemd[1]: Starting GSSAPI Proxy Daemon... -- Subject: Unit gssproxy.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit gssproxy.service has begun starting up. Jul 22 08:23:29 localhost.localdomain systemd[1]: Started Hardware RNG Entropy Gatherer Daemon. -- Subject: Unit rngd.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rngd.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:29 localhost.localdomain systemd[1]: Starting Dump dmesg to /var/log/dmesg... -- Subject: Unit rhel-dmesg.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rhel-dmesg.service has begun starting up. Jul 22 08:23:29 localhost.localdomain systemd[1]: Starting Login Service... -- Subject: Unit systemd-logind.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-logind.service has begun starting up. Jul 22 08:23:29 localhost.localdomain systemd[1]: Started RPC bind service. -- Subject: Unit rpcbind.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rpcbind.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:29 localhost.localdomain chronyd[520]: chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 +DEBUG) Jul 22 08:23:29 localhost.localdomain systemd[1]: Started Dump dmesg to /var/log/dmesg. -- Subject: Unit rhel-dmesg.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rhel-dmesg.service has finished starting up. -- -- The start-up result is done. 
Jul 22 08:23:29 localhost.localdomain chronyd[520]: Frequency 0.000 +/- 1000000.000 ppm read from /var/lib/chrony/drift Jul 22 08:23:29 localhost.localdomain systemd-logind[514]: New seat seat0. -- Subject: A new seat seat0 is now available -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new seat seat0 has been configured and is now available. Jul 22 08:23:29 localhost.localdomain systemd-logind[514]: Watching system buttons on /dev/input/event0 (Power Button) Jul 22 08:23:29 localhost.localdomain systemd-logind[514]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 22 08:23:29 localhost.localdomain systemd[1]: Started Login Service. -- Subject: Unit systemd-logind.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-logind.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:29 localhost.localdomain systemd[1]: Started NTP client/server. -- Subject: Unit chronyd.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit chronyd.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:29 localhost.localdomain polkitd[503]: Loading rules from directory /etc/polkit-1/rules.d Jul 22 08:23:29 localhost.localdomain polkitd[503]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 22 08:23:29 localhost.localdomain systemd[1]: Started GSSAPI Proxy Daemon. -- Subject: Unit gssproxy.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit gssproxy.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:29 localhost.localdomain systemd[1]: Reached target NFS client services. -- Subject: Unit nfs-client.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit nfs-client.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:29 localhost.localdomain systemd[1]: Reached target Remote File Systems (Pre). -- Subject: Unit remote-fs-pre.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit remote-fs-pre.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:29 localhost.localdomain systemd[1]: Reached target Remote File Systems. -- Subject: Unit remote-fs.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit remote-fs.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:29 localhost.localdomain polkitd[503]: Finished loading, compiling and executing 2 rules Jul 22 08:23:29 localhost.localdomain polkitd[503]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 22 08:23:29 localhost.localdomain systemd[1]: Started Authorization Manager. -- Subject: Unit polkit.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit polkit.service has finished starting up. -- -- The start-up result is done. 
Jul 22 08:23:30 localhost.localdomain rngd[512]: Initalizing available sources Jul 22 08:23:30 localhost.localdomain rngd[512]: Failed to init entropy source 0: Hardware RNG Device Jul 22 08:23:30 localhost.localdomain rngd[512]: Enabling RDRAND rng support Jul 22 08:23:30 localhost.localdomain rngd[512]: Initalizing entropy source Intel RDRAND Instruction RNG Jul 22 08:23:31 localhost.localdomain rngd[512]: Enabling JITTER rng support Jul 22 08:23:31 localhost.localdomain rngd[512]: Initalizing entropy source JITTER Entropy generator Jul 22 08:23:31 localhost.localdomain kernel: floppy0: no floppy controllers found Jul 22 08:23:31 localhost.localdomain kernel: work still pending Jul 22 08:23:33 localhost.localdomain cloud-init[534]: Cloud-init v. 0.7.9 running 'init-local' at Tue, 22 Jul 2025 12:23:33 +0000. Up 17.37 seconds. Jul 22 08:23:33 localhost.localdomain systemd[1]: Started Initial cloud-init job (pre-networking). -- Subject: Unit cloud-init-local.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-init-local.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:33 localhost.localdomain systemd[1]: Reached target Network (Pre). -- Subject: Unit network-pre.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit network-pre.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:33 localhost.localdomain systemd[1]: Starting Network Manager... -- Subject: Unit NetworkManager.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager.service has begun starting up. Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.1408] NetworkManager (version 1.18.8-2.el7_9) is starting... (for the first time) Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.1416] Read config: /etc/NetworkManager/NetworkManager.conf (lib: 10-slaves-order.conf) Jul 22 08:23:34 localhost.localdomain systemd[1]: Started Network Manager. -- Subject: Unit NetworkManager.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.1536] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Jul 22 08:23:34 localhost.localdomain systemd[1]: Starting Network Manager Wait Online... -- Subject: Unit NetworkManager-wait-online.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager-wait-online.service has begun starting up. Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.1611] manager[0x55e3fe76b090]: monitoring kernel firmware directory '/lib/firmware'. Jul 22 08:23:34 localhost.localdomain dbus[504]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' Jul 22 08:23:34 localhost.localdomain systemd[1]: Starting Hostname Service... -- Subject: Unit systemd-hostnamed.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-hostnamed.service has begun starting up. 
Jul 22 08:23:34 localhost.localdomain dbus[504]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 22 08:23:34 localhost.localdomain systemd[1]: Started Hostname Service. -- Subject: Unit systemd-hostnamed.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-hostnamed.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.2896] hostname: hostname: using hostnamed Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.2898] hostname: hostname changed from (none) to "localhost.localdomain" Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.2907] dns-mgr[0x55e3fe753220]: init: dns=default,systemd-resolved rc-manager=file Jul 22 08:23:34 localhost.localdomain dbus[504]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' Jul 22 08:23:34 localhost.localdomain systemd[1]: Starting Network Manager Script Dispatcher Service... -- Subject: Unit NetworkManager-dispatcher.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager-dispatcher.service has begun starting up. Jul 22 08:23:34 localhost.localdomain dbus[504]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Jul 22 08:23:34 localhost.localdomain systemd[1]: Started Network Manager Script Dispatcher Service. -- Subject: Unit NetworkManager-dispatcher.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager-dispatcher.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.3347] settings: Loaded settings plugin: SettingsPluginIfcfg ("/usr/lib64/NetworkManager/1.18.8-2.el7_9/libnm-settings-plugin-ifcfg-rh.so") Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.3378] settings: Loaded settings plugin: NMSIbftPlugin ("/usr/lib64/NetworkManager/1.18.8-2.el7_9/libnm-settings-plugin-ibft.so") Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.3378] settings: Loaded settings plugin: NMSKeyfilePlugin (internal) Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.3430] ifcfg-rh: new connection /etc/sysconfig/network-scripts/ifcfg-eth0 (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03,"System eth0") Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.3476] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.3477] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.3478] manager: Networking is enabled by state file Jul 22 08:23:34 localhost.localdomain nm-dispatcher[582]: req:1 'hostname': new request (4 scripts) Jul 22 08:23:34 localhost.localdomain nm-dispatcher[582]: req:1 'hostname': start running ordered scripts... 
Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.3499] dhcp-init: Using DHCP client 'dhclient' Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.4269] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.18.8-2.el7_9/libnm-device-plugin-team.so) Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.4295] device (lo): carrier: link connected Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.4298] manager: (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1) Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.4317] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.4330] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Jul 22 08:23:34 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.4355] device (eth0): carrier: link connected Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.4456] device (eth0): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed') Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.4509] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.4533] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.4536] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.4548] manager: NetworkManager state is now CONNECTING Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.4555] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.4566] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.4578] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Jul 22 08:23:34 localhost.localdomain nm-dispatcher[582]: req:2 'connectivity-change': new request (4 scripts) Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.5033] dhcp4 (eth0): dhclient started with pid 596 Jul 22 08:23:34 localhost.localdomain nm-dispatcher[582]: req:2 'connectivity-change': start running ordered scripts... 
Jul 22 08:23:34 localhost.localdomain dhclient[596]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 5 (xid=0x398a265d) Jul 22 08:23:34 localhost.localdomain dhclient[596]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x398a265d) Jul 22 08:23:34 localhost.localdomain dhclient[596]: DHCPOFFER from 10.31.40.1 Jul 22 08:23:34 localhost.localdomain dhclient[596]: DHCPACK from 10.31.40.1 (xid=0x398a265d) Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6163] dhcp4 (eth0): address 10.31.42.231 Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6163] dhcp4 (eth0): plen 22 (255.255.252.0) Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6163] dhcp4 (eth0): gateway 10.31.40.1 Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6164] dhcp4 (eth0): lease time 3600 Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6164] dhcp4 (eth0): hostname 'ip-10-31-42-231' Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6164] dhcp4 (eth0): nameserver '10.29.169.13' Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6164] dhcp4 (eth0): nameserver '10.29.170.12' Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6164] dhcp4 (eth0): nameserver '10.2.32.1' Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6164] dhcp4 (eth0): domain name 'testing-farm.us-east-1.aws.redhat.com' Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6164] dhcp4 (eth0): state changed unknown -> bound Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6175] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6184] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6186] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6191] manager: NetworkManager state is now CONNECTED_LOCAL Jul 22 08:23:34 localhost.localdomain dhclient[596]: bound to 10.31.42.231 -- renewal in 1437 seconds. Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6239] manager: NetworkManager state is now CONNECTED_SITE Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6240] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS Jul 22 08:23:34 localhost.localdomain NetworkManager[576]: [1753187014.6242] policy: set-hostname: set hostname to 'ip-10-31-42-231' (from DHCPv4) Jul 22 08:23:34 ip-10-31-42-231 systemd-hostnamed[581]: Changed host name to 'ip-10-31-42-231' Jul 22 08:23:34 ip-10-31-42-231 NetworkManager[576]: [1753187014.6317] device (eth0): Activation: successful, device activated. Jul 22 08:23:34 ip-10-31-42-231 NetworkManager[576]: [1753187014.6324] manager: NetworkManager state is now CONNECTED_GLOBAL Jul 22 08:23:34 ip-10-31-42-231 NetworkManager[576]: [1753187014.6335] manager: startup complete Jul 22 08:23:34 ip-10-31-42-231 nm-dispatcher[582]: req:3 'up' [eth0]: new request (4 scripts) Jul 22 08:23:34 ip-10-31-42-231 nm-dispatcher[582]: req:3 'up' [eth0]: start running ordered scripts... 
Jul 22 08:23:34 ip-10-31-42-231 nm-dispatcher[582]: req:4 'connectivity-change': new request (4 scripts) Jul 22 08:23:34 ip-10-31-42-231 nm-dispatcher[582]: req:5 'hostname': new request (4 scripts) Jul 22 08:23:34 ip-10-31-42-231 systemd[1]: Started Network Manager Wait Online. -- Subject: Unit NetworkManager-wait-online.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager-wait-online.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:34 ip-10-31-42-231 systemd[1]: Starting LSB: Bring up/down networking... -- Subject: Unit network.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit network.service has begun starting up. Jul 22 08:23:34 ip-10-31-42-231 nm-dispatcher[582]: req:4 'connectivity-change': start running ordered scripts... Jul 22 08:23:34 ip-10-31-42-231 nm-dispatcher[582]: req:5 'hostname': start running ordered scripts... Jul 22 08:23:34 ip-10-31-42-231 network[666]: Bringing up loopback interface: [ OK ] Jul 22 08:23:35 ip-10-31-42-231 network[666]: Bringing up interface eth0: [ OK ] Jul 22 08:23:35 ip-10-31-42-231 systemd[1]: Started LSB: Bring up/down networking. -- Subject: Unit network.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit network.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:35 ip-10-31-42-231 systemd[1]: Starting Initial cloud-init job (metadata service crawler)... -- Subject: Unit cloud-init.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-init.service has begun starting up. Jul 22 08:23:35 ip-10-31-42-231 systemd[1]: Reached target Network. -- Subject: Unit network.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit network.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:35 ip-10-31-42-231 systemd[1]: Starting Postfix Mail Transport Agent... -- Subject: Unit postfix.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit postfix.service has begun starting up. Jul 22 08:23:35 ip-10-31-42-231 systemd[1]: Starting Dynamic System Tuning Daemon... -- Subject: Unit tuned.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit tuned.service has begun starting up. Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: Cloud-init v. 0.7.9 running 'init' at Tue, 22 Jul 2025 12:23:35 +0000. Up 19.58 seconds.
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: ++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: +--------+------+--------------+---------------+-------+-------------------+
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: | Device |  Up  |   Address    |      Mask     | Scope |     Hw-Address    |
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: +--------+------+--------------+---------------+-------+-------------------+
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: |  lo:   | True |  127.0.0.1   |   255.0.0.0   |   .   |         .         |
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: |  lo:   | True |      .       |       .       |   d   |         .         |
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: | eth0:  | True | 10.31.42.231 | 255.255.252.0 |   .   | 0e:84:f3:b9:c8:99 |
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: | eth0:  | True |      .       |       .       |   d   | 0e:84:f3:b9:c8:99 |
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: +--------+------+--------------+---------------+-------+-------------------+
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: ++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: +-------+-------------+------------+---------------+-----------+-------+
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: | Route | Destination |  Gateway   |    Genmask    | Interface | Flags |
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: +-------+-------------+------------+---------------+-----------+-------+
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: |   0   |   0.0.0.0   | 10.31.40.1 |    0.0.0.0    |    eth0   |   UG  |
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: |   1   |  10.31.40.0 |  0.0.0.0   | 255.255.252.0 |    eth0   |   U   |
Jul 22 08:23:35 ip-10-31-42-231 cloud-init[849]: ci-info: +-------+-------------+------------+---------------+-----------+-------+
Jul 22 08:23:36 ip-10-31-42-231 systemd[1]: Started Dynamic System Tuning Daemon. -- Subject: Unit tuned.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit tuned.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:36 ip-10-31-42-231 kernel: EXT4-fs (xvda1): resizing filesystem from 1048320 to 65535739 blocks Jul 22 08:23:36 ip-10-31-42-231 postfix/postfix-script[1184]: starting the Postfix mail system Jul 22 08:23:36 ip-10-31-42-231 postfix/master[1186]: daemon started -- version 2.10.1, configuration /etc/postfix Jul 22 08:23:36 ip-10-31-42-231 systemd[1]: Started Postfix Mail Transport Agent. -- Subject: Unit postfix.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit postfix.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:36 ip-10-31-42-231 kernel: EXT4-fs (xvda1): resized filesystem to 65535739 Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd-hostnamed[581]: Changed static host name to 'ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com' Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd-hostnamed[581]: Changed host name to 'ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com' Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com NetworkManager[576]: [1753187016.7924] hostname: hostname changed from "localhost.localdomain" to "ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com" Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com nm-dispatcher[582]: req:6 'hostname': new request (4 scripts) Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com nm-dispatcher[582]: req:6 'hostname': start running ordered scripts...
Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com NetworkManager[576]: [1753187016.8096] policy: set-hostname: set hostname to 'ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com' (from system configuration) Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com nm-dispatcher[582]: req:7 'hostname': new request (4 scripts) Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com nm-dispatcher[582]: req:7 'hostname': start running ordered scripts... Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Initial cloud-init job (metadata service crawler). -- Subject: Unit cloud-init.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-init.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target Network is Online. -- Subject: Unit network-online.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit network-online.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Crash recovery kernel arming... -- Subject: Unit kdump.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit kdump.service has begun starting up. Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Notify NFS peers of a restart... -- Subject: Unit rpc-statd-notify.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rpc-statd-notify.service has begun starting up. Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting System Logging Service... -- Subject: Unit rsyslog.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rsyslog.service has begun starting up. Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting The restraint harness.... -- Subject: Unit restraintd.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit restraintd.service has begun starting up. Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target Cloud-config availability. -- Subject: Unit cloud-config.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-config.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Apply the settings specified in cloud-config... -- Subject: Unit cloud-config.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-config.service has begun starting up. Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Permit User Sessions... 
-- Subject: Unit systemd-user-sessions.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-user-sessions.service has begun starting up. Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sm-notify[1216]: Version 1.3.0 starting Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting OpenSSH server daemon... -- Subject: Unit sshd.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sshd.service has begun starting up. Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Notify NFS peers of a restart. -- Subject: Unit rpc-statd-notify.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rpc-statd-notify.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Permit User Sessions. -- Subject: Unit systemd-user-sessions.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-user-sessions.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Command Scheduler. -- Subject: Unit crond.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit crond.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Terminate Plymouth Boot Screen... -- Subject: Unit plymouth-quit.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit plymouth-quit.service has begun starting up. Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Wait for Plymouth Boot Screen to Quit... -- Subject: Unit plymouth-quit-wait.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit plymouth-quit-wait.service has begun starting up. Jul 22 08:23:36 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started The restraint harness.. -- Subject: Unit restraintd.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit restraintd.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com crond[1248]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 50% if used.) Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Received SIGRTMIN+21 from PID 229 (plymouthd). Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Terminate Plymouth Boot Screen. -- Subject: Unit plymouth-quit.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit plymouth-quit.service has finished starting up. -- -- The start-up result is done. 
Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Wait for Plymouth Boot Screen to Quit. -- Subject: Unit plymouth-quit-wait.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit plymouth-quit-wait.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Serial Getty on ttyS0. -- Subject: Unit serial-getty@ttyS0.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit serial-getty@ttyS0.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Getty on tty1. -- Subject: Unit getty@tty1.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit getty@tty1.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target Login Prompts. -- Subject: Unit getty.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit getty.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com restraintd[1252]: Listening on http://localhost:8081 Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com rsyslogd[1230]: [origin software="rsyslogd" swVersion="8.24.0-57.el7_9.3" x-pid="1230" x-info="http://www.rsyslog.com"] start Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started System Logging Service. -- Subject: Unit rsyslog.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit rsyslog.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[1237]: Server listening on 0.0.0.0 port 22. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[1237]: Server listening on :: port 22. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started OpenSSH server daemon. -- Subject: Unit sshd.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sshd.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com cloud-init[1232]: Cloud-init v. 0.7.9 running 'modules:config' at Tue, 22 Jul 2025 12:23:37 +0000. Up 21.53 seconds. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com crond[1248]: (CRON) INFO (running with inotify support) Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com kdumpctl[1214]: No kdump initial ramdisk found. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com kdumpctl[1214]: Rebuilding /boot/initramfs-3.10.0-1160.119.1.el7.x86_64kdump.img Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Stopping OpenSSH server daemon... 
-- Subject: Unit sshd.service has begun shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sshd.service has begun shutting down. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[1237]: Received signal 15; terminating. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Stopped OpenSSH server daemon. -- Subject: Unit sshd.service has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sshd.service has finished shutting down. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting OpenSSH server daemon... -- Subject: Unit sshd.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sshd.service has begun starting up. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[1336]: Server listening on 0.0.0.0 port 22. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[1336]: Server listening on :: port 22. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started OpenSSH server daemon. -- Subject: Unit sshd.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit sshd.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Apply the settings specified in cloud-config. -- Subject: Unit cloud-config.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-config.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Execute cloud user/final scripts... -- Subject: Unit cloud-final.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-final.service has begun starting up. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com cloud-init[1378]: Cloud-init v. 0.7.9 running 'modules:final' at Tue, 22 Jul 2025 12:23:37 +0000. Up 22.00 seconds. 
Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com ec2[1529]: Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com ec2[1529]: ############################################################# Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com ec2[1529]: -----BEGIN SSH HOST KEY FINGERPRINTS----- Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com ec2[1529]: 256 SHA256:6XI06CKZZBRV/e3/IgrDLqYzZkOtPFSrZnF8OajHEho no comment (ECDSA) Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com ec2[1529]: 256 SHA256:JrbbBdywcMOrldKxyOAXBx69rlIzfdGLj+pLfM+mDE0 no comment (ED25519) Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com ec2[1529]: 2048 SHA256:PpouZOZBkzpae6L06EXka+DhqyUqJ1ceG3xB7C5VtE0 no comment (RSA) Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com ec2[1529]: -----END SSH HOST KEY FINGERPRINTS----- Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com ec2[1529]: ############################################################# Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com cloud-init[1378]: Cloud-init v. 0.7.9 finished at Tue, 22 Jul 2025 12:23:37 +0000. Datasource DataSourceEc2. Up 22.13 seconds Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Execute cloud user/final scripts. -- Subject: Unit cloud-final.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit cloud-final.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Reached target Multi-User System. -- Subject: Unit multi-user.target has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit multi-user.target has finished starting up. -- -- The start-up result is done. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Update UTMP about System Runlevel Changes... -- Subject: Unit systemd-update-utmp-runlevel.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-update-utmp-runlevel.service has begun starting up. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Update UTMP about System Runlevel Changes. -- Subject: Unit systemd-update-utmp-runlevel.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-update-utmp-runlevel.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1551]: dracut-033-572.el7 Jul 22 08:23:37 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: Executing: /usr/sbin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict -o "plymouth dash resume ifcfg" --mount "/dev/disk/by-uuid/c7b7d6a5-fd01-4b9b-bcca-153eaff9d312 /sysroot ext4 defaults" --no-hostonly-default-device -f /boot/initramfs-3.10.0-1160.119.1.el7.x86_64kdump.img 3.10.0-1160.119.1.el7.x86_64 Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found! 
Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'ifcfg' will not be installed, because it's in the list to be omitted! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'plymouth' will not be installed, because it's in the list to be omitted! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'crypt' will not be installed, because command 'cryptsetup' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'dmsquash-live-ntfs' will not be installed, because command 'ntfs-3g' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'multipath' will not be installed, because command 'multipath' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'iscsi' will not be installed, because command 'iscsistart' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'resume' will not be installed, because it's in the list to be omitted! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'crypt' will not be installed, because command 'cryptsetup' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'dmsquash-live-ntfs' will not be installed, because command 'ntfs-3g' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'multipath' will not be installed, because command 'multipath' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found! 
Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'iscsi' will not be installed, because command 'iscsistart' could not be found! Jul 22 08:23:38 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found! Jul 22 08:23:39 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: bash *** Jul 22 08:23:39 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: nss-softokn *** Jul 22 08:23:39 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: i18n *** Jul 22 08:23:39 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: drm *** Jul 22 08:23:40 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: kernel-modules *** Jul 22 08:23:40 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com chronyd[520]: Selected source 85.209.17.10 Jul 22 08:23:47 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: fstab-sys *** Jul 22 08:23:47 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: rootfs-block *** Jul 22 08:23:47 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: terminfo *** Jul 22 08:23:47 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: udev-rules *** Jul 22 08:23:47 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: Skipping udev rule: 40-redhat-cpu-hotplug.rules Jul 22 08:23:48 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: Skipping udev rule: 91-permissions.rules Jul 22 08:23:48 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: biosdevname *** Jul 22 08:23:48 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: systemd *** Jul 22 08:23:48 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: usrmount *** Jul 22 08:23:48 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: base *** Jul 22 08:23:48 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: fs-lib *** Jul 22 08:23:48 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: kdumpbase *** Jul 22 08:23:48 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: microcode_ctl-fw_dir_override *** Jul 22 08:23:48 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl module: mangling fw_dir Jul 22 08:23:48 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware" Jul 22 08:23:48 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"... Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: intel: caveats check for kernel version "3.10.0-1160.119.1.el7.x86_64" passed, adding "/usr/share/microcode_ctl/ucode_with_caveats/intel" to fw_dir variable Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"... 
Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: configuration "intel-06-2d-07" is ignored Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"... Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: configuration "intel-06-4e-03" is ignored Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"... Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: configuration "intel-06-4f-01" is ignored Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"... Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: configuration "intel-06-55-04" is ignored Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"... Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: configuration "intel-06-5e-03" is ignored Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"... Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: configuration "intel-06-8c-01" is ignored Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: microcode_ctl: final fw_dir: "/usr/share/microcode_ctl/ucode_with_caveats/intel /lib/firmware/updates /lib/firmware" Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including module: shutdown *** Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Including modules done *** Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Installing kernel module dependencies and firmware *** Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Installing kernel module dependencies and firmware done *** Jul 22 08:23:49 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Resolving executable dependencies *** Jul 22 08:23:50 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Resolving executable dependencies done*** Jul 22 08:23:50 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Hardlinking files *** Jul 22 08:23:50 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Hardlinking files done *** Jul 22 08:23:50 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Stripping files *** Jul 22 08:23:50 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Stripping files done *** Jul 22 08:23:50 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Generating early-microcode cpio image contents *** Jul 22 08:23:50 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Constructing GenuineIntel.bin **** Jul 22 08:23:50 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com 
dracut[1556]: *** Store current command line parameters *** Jul 22 08:23:50 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Creating image file *** Jul 22 08:23:50 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Creating microcode section *** Jul 22 08:23:50 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Created microcode section *** Jul 22 08:23:54 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Creating image file done *** Jul 22 08:23:54 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dracut[1556]: *** Creating initramfs image file '/boot/initramfs-3.10.0-1160.119.1.el7.x86_64kdump.img' done *** Jul 22 08:23:55 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com kdumpctl[1214]: kexec: loaded kdump kernel Jul 22 08:23:55 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com kdumpctl[1214]: Starting kdump: [OK] Jul 22 08:23:55 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Crash recovery kernel arming. -- Subject: Unit kdump.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit kdump.service has finished starting up. -- -- The start-up result is done. Jul 22 08:23:55 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Startup finished in 846ms (kernel) + 5.941s (initrd) + 32.897s (userspace) = 39.685s. -- Subject: System start-up is now complete -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- All system services necessary queued for starting at boot have been -- successfully started. Note that this does not mean that the machine is -- now idle as services might still be busy with completing start-up. -- -- Kernel start-up required 846762 microseconds. -- -- Initial RAM disk start-up required 5941278 microseconds. -- -- Userspace start-up required 32897359 microseconds. Jul 22 08:26:08 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[7773]: Accepted publickey for root from 10.30.34.89 port 41712 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 22 08:26:08 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Created slice User Slice of root. -- Subject: Unit user-0.slice has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit user-0.slice has finished starting up. -- -- The start-up result is done. Jul 22 08:26:08 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd-logind[514]: New session 1 of user root. -- Subject: A new session 1 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 1 has been created for the user root. -- -- The leading process of the session is 7773. Jul 22 08:26:08 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Session 1 of user root. -- Subject: Unit session-1.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-1.scope has finished starting up. -- -- The start-up result is done. 
Jul 22 08:26:08 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[7773]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:26:08 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[7773]: Received disconnect from 10.30.34.89 port 41712:11: disconnected by user Jul 22 08:26:08 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[7773]: Disconnected from 10.30.34.89 port 41712 Jul 22 08:26:08 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[7773]: pam_unix(sshd:session): session closed for user root Jul 22 08:26:08 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd-logind[514]: Removed session 1. -- Subject: Session 1 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 1 has been terminated. Jul 22 08:26:08 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Removed slice User Slice of root. -- Subject: Unit user-0.slice has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit user-0.slice has finished shutting down. Jul 22 08:26:14 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[7787]: reverse mapping checking getaddrinfo for b7034625-83bc-41d8-963c-73ccfab3bff8.testing-farm.us-east-1.aws.redhat.com [10.31.9.79] failed - POSSIBLE BREAK-IN ATTEMPT! Jul 22 08:26:14 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[7786]: reverse mapping checking getaddrinfo for b7034625-83bc-41d8-963c-73ccfab3bff8.testing-farm.us-east-1.aws.redhat.com [10.31.9.79] failed - POSSIBLE BREAK-IN ATTEMPT! Jul 22 08:26:14 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[7787]: Accepted publickey for root from 10.31.9.79 port 51656 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 22 08:26:14 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[7786]: Accepted publickey for root from 10.31.9.79 port 51642 ssh2: RSA SHA256:W3cSdmPJK+d9RwU97ardijPXIZnxHswrpTHWW9oYtEU Jul 22 08:26:14 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Created slice User Slice of root. -- Subject: Unit user-0.slice has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit user-0.slice has finished starting up. -- -- The start-up result is done. Jul 22 08:26:14 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd-logind[514]: New session 3 of user root. -- Subject: A new session 3 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 3 has been created for the user root. -- -- The leading process of the session is 7786. Jul 22 08:26:14 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Session 3 of user root. -- Subject: Unit session-3.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-3.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:26:14 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd-logind[514]: New session 2 of user root. 
-- Subject: A new session 2 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 2 has been created for the user root. -- -- The leading process of the session is 7787. Jul 22 08:26:14 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Session 2 of user root. -- Subject: Unit session-2.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-2.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:26:14 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[7786]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:26:14 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[7787]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:26:14 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[7787]: Received disconnect from 10.31.9.79 port 51656:11: disconnected by user Jul 22 08:26:14 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[7787]: Disconnected from 10.31.9.79 port 51656 Jul 22 08:26:14 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com sshd[7787]: pam_unix(sshd:session): session closed for user root Jul 22 08:26:14 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd-logind[514]: Removed session 2. -- Subject: Session 2 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 2 has been terminated. Jul 22 08:27:55 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com unknown: Running test '/Prepare-managed-node/tests/prep_managed_node' (serial number 1) with reboot count 0 and test restart count 0. (Be aware the test name is sanitized!) Jul 22 08:27:56 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dbus[504]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' Jul 22 08:27:56 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Starting Hostname Service... -- Subject: Unit systemd-hostnamed.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-hostnamed.service has begun starting up. Jul 22 08:27:56 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com dbus[504]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 22 08:27:56 ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com systemd[1]: Started Hostname Service. -- Subject: Unit systemd-hostnamed.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit systemd-hostnamed.service has finished starting up. -- -- The start-up result is done. 
Jul 22 08:27:56 managed-node13 systemd-hostnamed[8782]: Changed static host name to 'managed-node13' Jul 22 08:27:56 managed-node13 NetworkManager[576]: [1753187276.2070] hostname: hostname changed from "ip-10-31-42-231.testing-farm.us-east-1.aws.redhat.com" to "managed-node13" Jul 22 08:27:56 managed-node13 systemd-hostnamed[8782]: Changed host name to 'managed-node13' Jul 22 08:27:56 managed-node13 dbus[504]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' Jul 22 08:27:56 managed-node13 systemd[1]: Starting Network Manager Script Dispatcher Service... -- Subject: Unit NetworkManager-dispatcher.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager-dispatcher.service has begun starting up. Jul 22 08:27:56 managed-node13 NetworkManager[576]: [1753187276.2235] policy: set-hostname: set hostname to 'managed-node13' (from system configuration) Jul 22 08:27:56 managed-node13 dbus[504]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Jul 22 08:27:56 managed-node13 systemd[1]: Started Network Manager Script Dispatcher Service. -- Subject: Unit NetworkManager-dispatcher.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit NetworkManager-dispatcher.service has finished starting up. -- -- The start-up result is done. Jul 22 08:27:56 managed-node13 nm-dispatcher[8787]: req:1 'hostname': new request (4 scripts) Jul 22 08:27:56 managed-node13 nm-dispatcher[8787]: req:1 'hostname': start running ordered scripts... Jul 22 08:27:56 managed-node13 nm-dispatcher[8787]: req:2 'hostname': new request (4 scripts) Jul 22 08:27:56 managed-node13 nm-dispatcher[8787]: req:2 'hostname': start running ordered scripts... Jul 22 08:27:56 managed-node13 unknown: Leaving test '/Prepare-managed-node/tests/prep_managed_node' (serial number 1). (Be aware the test name is sanitized!) Jul 22 08:29:00 managed-node13 sshd[9585]: Accepted publickey for root from 10.31.42.107 port 55234 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:00 managed-node13 systemd-logind[514]: New session 4 of user root. -- Subject: A new session 4 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 4 has been created for the user root. -- -- The leading process of the session is 9585. Jul 22 08:29:00 managed-node13 systemd[1]: Started Session 4 of user root. -- Subject: Unit session-4.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-4.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:29:00 managed-node13 sshd[9585]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:29:00 managed-node13 sshd[9585]: Received disconnect from 10.31.42.107 port 55234:11: disconnected by user Jul 22 08:29:00 managed-node13 sshd[9585]: Disconnected from 10.31.42.107 port 55234 Jul 22 08:29:00 managed-node13 sshd[9585]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:00 managed-node13 systemd-logind[514]: Removed session 4. 
-- Subject: Session 4 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 4 has been terminated. Jul 22 08:29:00 managed-node13 sshd[9593]: Accepted publickey for root from 10.31.42.107 port 55236 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:00 managed-node13 systemd-logind[514]: New session 5 of user root. -- Subject: A new session 5 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 5 has been created for the user root. -- -- The leading process of the session is 9593. Jul 22 08:29:00 managed-node13 systemd[1]: Started Session 5 of user root. -- Subject: Unit session-5.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-5.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:29:00 managed-node13 sshd[9593]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:29:00 managed-node13 sshd[9593]: Received disconnect from 10.31.42.107 port 55236:11: disconnected by user Jul 22 08:29:00 managed-node13 sshd[9593]: Disconnected from 10.31.42.107 port 55236 Jul 22 08:29:00 managed-node13 sshd[9593]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:00 managed-node13 systemd-logind[514]: Removed session 5. -- Subject: Session 5 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 5 has been terminated. Jul 22 08:29:19 managed-node13 sshd[9606]: Accepted publickey for root from 10.31.42.107 port 55298 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:29:20 managed-node13 systemd[1]: Started Session 6 of user root. -- Subject: Unit session-6.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-6.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:29:20 managed-node13 systemd-logind[514]: New session 6 of user root. -- Subject: A new session 6 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 6 has been created for the user root. -- -- The leading process of the session is 9606. Jul 22 08:29:20 managed-node13 sshd[9606]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:29:20 managed-node13 sshd[9606]: Received disconnect from 10.31.42.107 port 55298:11: disconnected by user Jul 22 08:29:20 managed-node13 sshd[9606]: Disconnected from 10.31.42.107 port 55298 Jul 22 08:29:20 managed-node13 sshd[9606]: pam_unix(sshd:session): session closed for user root Jul 22 08:29:20 managed-node13 systemd-logind[514]: Removed session 6. 
-- Subject: Session 6 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 6 has been terminated. Jul 22 08:29:21 managed-node13 sshd[9616]: Accepted publickey for root from 10.31.42.107 port 55304 ssh2: ECDSA SHA256:VQ9XK4k6Vt8UPOycnISNyAPDgpYR9n+H9XDBRYiSHdA Jul 22 08:29:21 managed-node13 systemd[1]: Started Session 7 of user root. -- Subject: Unit session-7.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-7.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:29:21 managed-node13 sshd[9616]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:29:21 managed-node13 systemd-logind[514]: New session 7 of user root. -- Subject: A new session 7 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 7 has been created for the user root. -- -- The leading process of the session is 9616. Jul 22 08:29:22 managed-node13 ansible-setup[9673]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:29:23 managed-node13 sudo[9747]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wdlnsinybvttuqtkwmkyptwzxnmnhpgh ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187362.94-11075-82298668564881/AnsiballZ_setup.py Jul 22 08:29:23 managed-node13 sudo[9747]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:23 managed-node13 ansible-setup[9750]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:29:23 managed-node13 sudo[9747]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:24 managed-node13 sudo[9824]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sbqpxhuwrdgqcjimawztjxlfufaockrn ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187364.01-11167-198883157189311/AnsiballZ_stat.py Jul 22 08:29:24 managed-node13 sudo[9824]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:24 managed-node13 ansible-stat[9827]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/run/ostree-booted get_md5=False get_mime=True get_attributes=True Jul 22 08:29:24 managed-node13 sudo[9824]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:25 managed-node13 sudo[9876]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zugzovtmdnqyrxaxybpnovcdjfpinofg ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187364.72-11238-147373108086553/AnsiballZ_yum.py Jul 22 08:29:25 managed-node13 sudo[9876]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:25 managed-node13 ansible-yum[9879]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['python-enum34', 'python-blivet3', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 
'libblockdev-swap', 'libblockdev'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:29:40 managed-node13 yum[9922]: Installed: libblockdev-utils-2.18-5.el7.x86_64 Jul 22 08:29:40 managed-node13 yum[9922]: Installed: 7:device-mapper-event-libs-1.02.170-6.el7_9.5.x86_64 Jul 22 08:29:40 managed-node13 yum[9922]: Installed: libsolv-0.6.34-4.el7.x86_64 Jul 22 08:29:40 managed-node13 yum[9922]: Installed: libaio-0.3.109-13.el7.x86_64 Jul 22 08:29:40 managed-node13 yum[9922]: Installed: librepo-1.8.1-8.el7_9.x86_64 Jul 22 08:29:40 managed-node13 yum[9922]: Installed: libmodulemd-1.6.3-1.el7.x86_64 Jul 22 08:29:41 managed-node13 yum[9922]: Installed: libdnf-0.22.5-2.el7_9.x86_64 Jul 22 08:29:41 managed-node13 yum[9922]: Installed: device-mapper-persistent-data-0.8.5-3.el7_9.2.x86_64 Jul 22 08:29:41 managed-node13 systemd[1]: Reloading. Jul 22 08:29:41 managed-node13 systemd[1]: Reloading. Jul 22 08:29:41 managed-node13 systemd[1]: Listening on Device-mapper event daemon FIFOs. -- Subject: Unit dm-event.socket has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit dm-event.socket has finished starting up. -- -- The start-up result is done. Jul 22 08:29:41 managed-node13 yum[9922]: Installed: 7:device-mapper-event-1.02.170-6.el7_9.5.x86_64 Jul 22 08:29:41 managed-node13 yum[9922]: Installed: libbytesize-1.2-1.el7.x86_64 Jul 22 08:29:41 managed-node13 yum[9922]: Installed: python2-bytesize-1.2-1.el7.x86_64 Jul 22 08:29:41 managed-node13 yum[9922]: Installed: 7:lvm2-libs-2.02.187-6.el7_9.5.x86_64 Jul 22 08:29:41 managed-node13 systemd[1]: Reloading. Jul 22 08:29:41 managed-node13 systemd[1]: Reloading. Jul 22 08:29:41 managed-node13 systemd[1]: Listening on LVM2 metadata daemon socket. -- Subject: Unit lvm2-lvmetad.socket has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit lvm2-lvmetad.socket has finished starting up. -- -- The start-up result is done. Jul 22 08:29:41 managed-node13 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... -- Subject: Unit lvm2-monitor.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit lvm2-monitor.service has begun starting up. Jul 22 08:29:41 managed-node13 systemd[1]: Started LVM2 metadata daemon. -- Subject: Unit lvm2-lvmetad.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit lvm2-lvmetad.service has finished starting up. -- -- The start-up result is done. Jul 22 08:29:41 managed-node13 systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. -- Subject: Unit lvm2-monitor.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit lvm2-monitor.service has finished starting up. -- -- The start-up result is done. Jul 22 08:29:41 managed-node13 systemd[1]: Reloading. Jul 22 08:29:42 managed-node13 systemd[1]: Reloading. Jul 22 08:29:42 managed-node13 systemd[1]: Reloading. Jul 22 08:29:42 managed-node13 systemd[1]: Reloading. Jul 22 08:29:42 managed-node13 systemd[1]: Listening on LVM2 poll daemon socket. 
-- Subject: Unit lvm2-lvmpolld.socket has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit lvm2-lvmpolld.socket has finished starting up. -- -- The start-up result is done. Jul 22 08:29:42 managed-node13 yum[9922]: Installed: 7:lvm2-2.02.187-6.el7_9.5.x86_64 Jul 22 08:29:42 managed-node13 yum[9922]: Installed: python2-libdnf-0.22.5-2.el7_9.x86_64 Jul 22 08:29:42 managed-node13 yum[9922]: Installed: python2-hawkey-0.22.5-2.el7_9.x86_64 Jul 22 08:29:42 managed-node13 yum[9922]: Installed: libblockdev-2.18-5.el7.x86_64 Jul 22 08:29:42 managed-node13 yum[9922]: Installed: python2-blockdev-2.18-5.el7.x86_64 Jul 22 08:29:42 managed-node13 yum[9922]: Installed: 1:pyparted-3.9-15.el7.x86_64 Jul 22 08:29:42 managed-node13 yum[9922]: Installed: sgpio-1.2.0.10-13.el7.x86_64 Jul 22 08:29:42 managed-node13 systemd[1]: Reloading. Jul 22 08:29:42 managed-node13 yum[9922]: Installed: dmraid-1.0.0.rc16-28.el7.x86_64 Jul 22 08:29:42 managed-node13 yum[9922]: Installed: dmraid-events-1.0.0.rc16-28.el7.x86_64 Jul 22 08:29:42 managed-node13 yum[9922]: Installed: volume_key-libs-0.3.9-9.el7.x86_64 Jul 22 08:29:42 managed-node13 yum[9922]: Installed: libreport-filesystem-2.1.11-53.el7.centos.x86_64 Jul 22 08:29:42 managed-node13 systemd[1]: Reloading. Jul 22 08:29:42 managed-node13 yum[9922]: Installed: mdadm-4.1-9.el7_9.x86_64 Jul 22 08:29:42 managed-node13 dbus[504]: [system] Reloaded configuration Jul 22 08:29:42 managed-node13 dbus[504]: [system] Reloaded configuration Jul 22 08:29:42 managed-node13 dbus[504]: [system] Reloaded configuration Jul 22 08:29:42 managed-node13 yum[9922]: Installed: 1:blivet3-data-3.1.3-3.el7.noarch Jul 22 08:29:42 managed-node13 yum[9922]: Installed: lsof-4.87-6.el7.x86_64 Jul 22 08:29:43 managed-node13 yum[9922]: Installed: 1:python2-blivet3-3.1.3-3.el7.noarch Jul 22 08:29:43 managed-node13 yum[9922]: Installed: libblockdev-mdraid-2.18-5.el7.x86_64 Jul 22 08:29:43 managed-node13 yum[9922]: Installed: libblockdev-crypto-2.18-5.el7.x86_64 Jul 22 08:29:43 managed-node13 yum[9922]: Installed: libblockdev-dm-2.18-5.el7.x86_64 Jul 22 08:29:43 managed-node13 yum[9922]: Installed: libblockdev-lvm-2.18-5.el7.x86_64 Jul 22 08:29:43 managed-node13 yum[9922]: Installed: libblockdev-swap-2.18-5.el7.x86_64 Jul 22 08:29:43 managed-node13 yum[9922]: Installed: python-enum34-1.0.4-1.el7.noarch Jul 22 08:29:43 managed-node13 sudo[9876]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:46 managed-node13 sudo[10221]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kdmblkmtdxomkvgwraqxopugywpdbtwk ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187384.98-12789-214733269108726/AnsiballZ_blivet.py Jul 22 08:29:46 managed-node13 sudo[10221]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:46 managed-node13 ansible-fedora.linux_system_roles.blivet[10224]: Invoked with packages_only=True uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 
'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:29:46 managed-node13 sudo[10221]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:48 managed-node13 sudo[10279]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pymqsrnlanhgwtculmoqsyfdruwdinyl ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187387.8-13112-6837660660622/AnsiballZ_yum.py Jul 22 08:29:48 managed-node13 sudo[10279]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:48 managed-node13 ansible-yum[10282]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['kpartx'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:29:48 managed-node13 sudo[10279]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:49 managed-node13 sudo[10335]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dofjliebxejxnsppcogaqpddbetydpvu ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187388.87-13295-145413393607047/AnsiballZ_service_facts.py Jul 22 08:29:49 managed-node13 sudo[10335]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:50 managed-node13 ansible-service_facts[10338]: Invoked Jul 22 08:29:50 managed-node13 sudo[10335]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:52 managed-node13 sudo[10500]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kbhsczubpaakwtngpuaorsdkdclzwqij ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187392.02-13557-257553896450527/AnsiballZ_blivet.py Jul 22 08:29:52 managed-node13 sudo[10500]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:52 managed-node13 ansible-fedora.linux_system_roles.blivet[10503]: Invoked with packages_only=False uses_kmod_kvdo=True disklabel_type=None safe_mode=False diskvolume_mkfs_option_map={'ext4': '-F', 'ext3': '-F', 'ext2': '-F'} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 
'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None Jul 22 08:29:52 managed-node13 sudo[10500]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:54 managed-node13 sudo[10558]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zngcwvyvvywrrrsnccucyqahbpazimhl ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187393.51-13709-74365409370847/AnsiballZ_stat.py Jul 22 08:29:54 managed-node13 sudo[10558]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:54 managed-node13 ansible-stat[10561]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/fstab get_md5=False get_mime=True get_attributes=True Jul 22 08:29:54 managed-node13 sudo[10558]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:57 managed-node13 sudo[10612]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kbnczdqifbwuqyxhpjxtrkwlhnjojlbc ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187397.03-14034-144897351479872/AnsiballZ_stat.py Jul 22 08:29:57 managed-node13 sudo[10612]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:57 managed-node13 ansible-stat[10615]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/crypttab get_md5=False get_mime=True get_attributes=True Jul 22 08:29:57 managed-node13 sudo[10612]: pam_unix(sudo:session): session closed for user root Jul 22 08:29:58 managed-node13 sudo[10666]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-psqfluphnftwgilphetsnlybpvjewhdm ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187398.3-14128-226835750808013/AnsiballZ_setup.py Jul 22 08:29:58 managed-node13 sudo[10666]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:29:58 managed-node13 ansible-setup[10669]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10 Jul 22 08:29:59 managed-node13 sudo[10666]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:01 managed-node13 sudo[10749]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jtpbgjfwcziqizxulnmxffppdtwdqeom ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187400.67-14405-270976959688918/AnsiballZ_yum.py Jul 22 08:30:01 managed-node13 sudo[10749]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:01 managed-node13 ansible-yum[10752]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['util-linux'] download_only=False 
bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True Jul 22 08:30:01 managed-node13 sudo[10749]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:03 managed-node13 sudo[10805]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-igrevsxrmowpxcejybfbzgzlnualcnqq ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187402.06-14623-6042957945228/AnsiballZ_find_unused_disk.py Jul 22 08:30:03 managed-node13 sudo[10805]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:03 managed-node13 ansible-fedora.linux_system_roles.find_unused_disk[10808]: Invoked with min_size=5g max_return=1 with_interface=scsi max_size=0 match_sector_size=False Jul 22 08:30:03 managed-node13 sudo[10805]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:05 managed-node13 sudo[10859]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kgoamniqebmaamcpsxwcpqyuljxbtkye ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187404.12-14805-245099417915033/AnsiballZ_command.py Jul 22 08:30:05 managed-node13 sudo[10859]: pam_unix(sudo:session): session opened for user root by root(uid=0) Jul 22 08:30:05 managed-node13 ansible-command[10862]: Invoked with creates=None executable=None _uses_shell=True strip_empty_ends=True _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None Jul 22 08:30:05 managed-node13 sudo[10859]: pam_unix(sudo:session): session closed for user root Jul 22 08:30:07 managed-node13 sshd[10873]: Accepted publickey for root from 10.31.42.107 port 55384 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:07 managed-node13 systemd-logind[514]: New session 8 of user root. -- Subject: A new session 8 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 8 has been created for the user root. -- -- The leading process of the session is 10873. Jul 22 08:30:07 managed-node13 systemd[1]: Started Session 8 of user root. -- Subject: Unit session-8.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-8.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:30:07 managed-node13 sshd[10873]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:30:08 managed-node13 sshd[10873]: Received disconnect from 10.31.42.107 port 55384:11: disconnected by user Jul 22 08:30:08 managed-node13 sshd[10873]: Disconnected from 10.31.42.107 port 55384 Jul 22 08:30:08 managed-node13 sshd[10873]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:08 managed-node13 systemd-logind[514]: Removed session 8. -- Subject: Session 8 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 8 has been terminated. 
Jul 22 08:30:08 managed-node13 sshd[10883]: Accepted publickey for root from 10.31.42.107 port 55386 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:09 managed-node13 systemd[1]: Started Session 9 of user root. -- Subject: Unit session-9.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-9.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:30:09 managed-node13 systemd-logind[514]: New session 9 of user root. -- Subject: A new session 9 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 9 has been created for the user root. -- -- The leading process of the session is 10883. Jul 22 08:30:09 managed-node13 sshd[10883]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:30:09 managed-node13 sshd[10883]: Received disconnect from 10.31.42.107 port 55386:11: disconnected by user Jul 22 08:30:09 managed-node13 sshd[10883]: Disconnected from 10.31.42.107 port 55386 Jul 22 08:30:09 managed-node13 sshd[10883]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:09 managed-node13 systemd-logind[514]: Removed session 9. -- Subject: Session 9 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 9 has been terminated. Jul 22 08:30:20 managed-node13 sshd[10893]: Accepted publickey for root from 10.31.42.107 port 55414 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Jul 22 08:30:20 managed-node13 systemd-logind[514]: New session 10 of user root. -- Subject: A new session 10 has been created for user root -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 10 has been created for the user root. -- -- The leading process of the session is 10893. Jul 22 08:30:20 managed-node13 systemd[1]: Started Session 10 of user root. -- Subject: Unit session-10.scope has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit session-10.scope has finished starting up. -- -- The start-up result is done. Jul 22 08:30:20 managed-node13 sshd[10893]: pam_unix(sshd:session): session opened for user root by (uid=0) Jul 22 08:30:20 managed-node13 sshd[10893]: Received disconnect from 10.31.42.107 port 55414:11: disconnected by user Jul 22 08:30:20 managed-node13 sshd[10893]: Disconnected from 10.31.42.107 port 55414 Jul 22 08:30:20 managed-node13 sshd[10893]: pam_unix(sshd:session): session closed for user root Jul 22 08:30:20 managed-node13 systemd-logind[514]: Removed session 10. -- Subject: Session 10 has been terminated -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 10 has been terminated. 
Jul 22 08:30:29 managed-node13 sudo[10957]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-buhwdecvgoebtlfvuesuolazxsspxmka ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187426.82-17120-254329467847199/AnsiballZ_setup.py
Jul 22 08:30:29 managed-node13 sudo[10957]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:30:29 managed-node13 ansible-setup[10960]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10
Jul 22 08:30:29 managed-node13 sudo[10957]: pam_unix(sudo:session): session closed for user root
Jul 22 08:30:34 managed-node13 sudo[11040]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gmolipwpycaiacbircouclafejekvkvv ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187432.68-17664-147019132532009/AnsiballZ_stat.py
Jul 22 08:30:34 managed-node13 sudo[11040]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:30:34 managed-node13 ansible-stat[11043]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/run/ostree-booted get_md5=False get_mime=True get_attributes=True
Jul 22 08:30:34 managed-node13 sudo[11040]: pam_unix(sudo:session): session closed for user root
Jul 22 08:30:39 managed-node13 sudo[11092]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mobrltbwgcstqstusvhgiekjikwcvcjw ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187437.62-18174-276359055673646/AnsiballZ_yum.py
Jul 22 08:30:39 managed-node13 sudo[11092]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:30:39 managed-node13 ansible-yum[11095]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['python-enum34', 'python-blivet3', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'libblockdev'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True
Jul 22 08:30:43 managed-node13 sudo[11092]: pam_unix(sudo:session): session closed for user root
Jul 22 08:30:47 managed-node13 sudo[11169]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqznltrojaiemitgujoatwqzpgezgznd ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187445.09-18810-183108997550037/AnsiballZ_blivet.py
Jul 22 08:30:47 managed-node13 sudo[11169]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:30:48 managed-node13 ansible-fedora.linux_system_roles.blivet[11172]: Invoked with packages_only=True uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None
Jul 22 08:30:48 managed-node13 sudo[11169]: pam_unix(sudo:session): session closed for user root
Jul 22 08:30:52 managed-node13 sudo[11227]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nlowmtgjuwmlfijaxwbaytrxgipdygcq ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187451.47-19339-185014901045667/AnsiballZ_yum.py
Jul 22 08:30:52 managed-node13 sudo[11227]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:30:52 managed-node13 ansible-yum[11230]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['kpartx'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True
Jul 22 08:30:52 managed-node13 sudo[11227]: pam_unix(sudo:session): session closed for user root
Jul 22 08:30:55 managed-node13 sudo[11283]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wdqtmsfqkyoftoloygdaatzpbxozneao ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187453.14-19645-15163756211240/AnsiballZ_service_facts.py
Jul 22 08:30:55 managed-node13 sudo[11283]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:30:55 managed-node13 ansible-service_facts[11286]: Invoked
Jul 22 08:30:56 managed-node13 sudo[11283]: pam_unix(sudo:session): session closed for user root
Jul 22 08:30:59 managed-node13 sudo[11448]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-axkftlgaqhgacapaeyowfoxoswzkblgg ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187458.63-20096-150367286353458/AnsiballZ_blivet.py
Jul 22 08:30:59 managed-node13 sudo[11448]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:30:59 managed-node13 ansible-fedora.linux_system_roles.blivet[11451]: Invoked with packages_only=False uses_kmod_kvdo=True disklabel_type=None safe_mode=False diskvolume_mkfs_option_map={'ext4': '-F', 'ext3': '-F', 'ext2': '-F'} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None
Jul 22 08:30:59 managed-node13 sudo[11448]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:01 managed-node13 sudo[11506]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ihlzfqhboreujpjmwnkzlrgdtibrfhbw ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187460.62-20239-119489250942681/AnsiballZ_stat.py
Jul 22 08:31:01 managed-node13 sudo[11506]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:31:01 managed-node13 ansible-stat[11509]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/fstab get_md5=False get_mime=True get_attributes=True
Jul 22 08:31:01 managed-node13 sudo[11506]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:06 managed-node13 sudo[11560]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uifehisuojzuyfuzqrvmscctwwqboydq ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187465.69-20661-249115153310443/AnsiballZ_stat.py
Jul 22 08:31:06 managed-node13 sudo[11560]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:31:06 managed-node13 ansible-stat[11563]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/crypttab get_md5=False get_mime=True get_attributes=True
Jul 22 08:31:06 managed-node13 sudo[11560]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:08 managed-node13 sudo[11614]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tfltielkpqhzzujngjbnxyjsiukowhsy ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187467.7-20830-70239917443607/AnsiballZ_setup.py
Jul 22 08:31:08 managed-node13 sudo[11614]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:31:08 managed-node13 ansible-setup[11617]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10
Jul 22 08:31:08 managed-node13 sudo[11614]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:12 managed-node13 sudo[11697]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oejqupamotsaeswyxgpmijrzvjybwxze ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187472.08-21141-100380709930287/AnsiballZ_yum.py
Jul 22 08:31:12 managed-node13 sudo[11697]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:31:13 managed-node13 ansible-yum[11700]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['util-linux'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True
Jul 22 08:31:13 managed-node13 sudo[11697]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:16 managed-node13 sudo[11753]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dncvxckffbbspnqbdqsedekrwxsetfiv ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187473.9-21443-266389196514782/AnsiballZ_find_unused_disk.py
Jul 22 08:31:16 managed-node13 sudo[11753]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:31:16 managed-node13 ansible-fedora.linux_system_roles.find_unused_disk[11756]: Invoked with min_size=10g max_return=1 max_size=0 with_interface=None match_sector_size=False
Jul 22 08:31:16 managed-node13 sudo[11753]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:18 managed-node13 sudo[11807]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fzpkabyrlwmwnvffepexxuuuzfhzicgl ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187476.85-21777-209756758047075/AnsiballZ_command.py
Jul 22 08:31:18 managed-node13 sudo[11807]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:31:18 managed-node13 ansible-command[11810]: Invoked with creates=None executable=None _uses_shell=True strip_empty_ends=True _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None
Jul 22 08:31:18 managed-node13 sudo[11807]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:22 managed-node13 sshd[11821]: Accepted publickey for root from 10.31.42.107 port 55536 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:31:22 managed-node13 systemd-logind[514]: New session 11 of user root.
Jul 22 08:31:22 managed-node13 systemd[1]: Started Session 11 of user root.
Jul 22 08:31:22 managed-node13 sshd[11821]: pam_unix(sshd:session): session opened for user root by (uid=0)
Jul 22 08:31:22 managed-node13 sshd[11821]: Received disconnect from 10.31.42.107 port 55536:11: disconnected by user
Jul 22 08:31:22 managed-node13 sshd[11821]: Disconnected from 10.31.42.107 port 55536
Jul 22 08:31:22 managed-node13 sshd[11821]: pam_unix(sshd:session): session closed for user root
Jul 22 08:31:22 managed-node13 systemd-logind[514]: Removed session 11.
Jul 22 08:31:23 managed-node13 sshd[11831]: Accepted publickey for root from 10.31.42.107 port 55540 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:31:23 managed-node13 systemd-logind[514]: New session 12 of user root.
Jul 22 08:31:23 managed-node13 systemd[1]: Started Session 12 of user root.
Jul 22 08:31:23 managed-node13 sshd[11831]: pam_unix(sshd:session): session opened for user root by (uid=0)
Jul 22 08:31:23 managed-node13 sshd[11831]: Received disconnect from 10.31.42.107 port 55540:11: disconnected by user
Jul 22 08:31:23 managed-node13 sshd[11831]: Disconnected from 10.31.42.107 port 55540
Jul 22 08:31:23 managed-node13 sshd[11831]: pam_unix(sshd:session): session closed for user root
Jul 22 08:31:23 managed-node13 systemd-logind[514]: Removed session 12.
Jul 22 08:31:37 managed-node13 sshd[11841]: Accepted publickey for root from 10.31.42.107 port 55572 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:31:38 managed-node13 systemd-logind[514]: New session 13 of user root.
Jul 22 08:31:38 managed-node13 systemd[1]: Started Session 13 of user root.
Jul 22 08:31:38 managed-node13 sshd[11841]: pam_unix(sshd:session): session opened for user root by (uid=0)
Jul 22 08:31:38 managed-node13 sshd[11841]: Received disconnect from 10.31.42.107 port 55572:11: disconnected by user
Jul 22 08:31:38 managed-node13 sshd[11841]: Disconnected from 10.31.42.107 port 55572
Jul 22 08:31:38 managed-node13 sshd[11841]: pam_unix(sshd:session): session closed for user root
Jul 22 08:31:38 managed-node13 systemd-logind[514]: Removed session 13.
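The repeated ansible-yum records above are the journal's view of the role's package-install steps. Assuming a plain yum task produces this invocation (the task name below is illustrative, not taken from the role source; the package list is copied from the logged name= parameter), the equivalent playbook task would be roughly:

    - name: Install the blivet stack            # illustrative task name
      yum:
        name:
          - python-enum34
          - python-blivet3
          - libblockdev-crypto
          - libblockdev-dm
          - libblockdev-lvm
          - libblockdev-mdraid
          - libblockdev-swap
          - libblockdev
        state: present

Each such module run is bracketed in the journal by a sudo BECOME-SUCCESS line and a pam_unix session open/close pair, which is how the per-task boundaries can be read off this log.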
Jul 22 08:31:51 managed-node13 ansible-setup[11905]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10
Jul 22 08:31:54 managed-node13 sudo[11985]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yhltgmxumdowtewiqbnkawuqboeglrcv ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187513.31-25147-85972059830947/AnsiballZ_setup.py
Jul 22 08:31:54 managed-node13 sudo[11985]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:31:54 managed-node13 ansible-setup[11988]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10
Jul 22 08:31:55 managed-node13 sudo[11985]: pam_unix(sudo:session): session closed for user root
Jul 22 08:31:59 managed-node13 sudo[12068]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-spfkekquxaxksscgzuebponmjhqdwmgb ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187517.73-25663-64232302727881/AnsiballZ_stat.py
Jul 22 08:31:59 managed-node13 sudo[12068]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:31:59 managed-node13 ansible-stat[12071]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/run/ostree-booted get_md5=False get_mime=True get_attributes=True
Jul 22 08:31:59 managed-node13 sudo[12068]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:04 managed-node13 sudo[12120]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxaqucwulrixiuiykqbybdfpodxejefd ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187522.67-26242-145858467617061/AnsiballZ_yum.py
Jul 22 08:32:04 managed-node13 sudo[12120]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:32:05 managed-node13 ansible-yum[12123]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['python-enum34', 'python-blivet3', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'libblockdev'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True
Jul 22 08:32:08 managed-node13 sudo[12120]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:11 managed-node13 sudo[12197]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zryrqjjpllmolwuijwaertbxuapgjawh ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187529.73-27304-135476161359681/AnsiballZ_blivet.py
Jul 22 08:32:11 managed-node13 sudo[12197]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:32:11 managed-node13 ansible-fedora.linux_system_roles.blivet[12200]: Invoked with packages_only=True uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None
Jul 22 08:32:11 managed-node13 sudo[12197]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:14 managed-node13 sudo[12255]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oucuozdpnkvrqzniptepknqdoucuqljd ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187534.23-27648-182577200398859/AnsiballZ_yum.py
Jul 22 08:32:14 managed-node13 sudo[12255]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:32:14 managed-node13 ansible-yum[12258]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['kpartx'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True
Jul 22 08:32:15 managed-node13 sudo[12255]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:17 managed-node13 sudo[12311]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fvaofpqoeecvpzoxjefaqqwyjixrycau ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187535.4-27878-216364672995121/AnsiballZ_service_facts.py
Jul 22 08:32:17 managed-node13 sudo[12311]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:32:17 managed-node13 ansible-service_facts[12314]: Invoked
Jul 22 08:32:17 managed-node13 sudo[12311]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:19 managed-node13 sudo[12476]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-favdhxnwpyxlopsndngitufjeraavngf ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187538.96-28234-122027553373185/AnsiballZ_blivet.py
Jul 22 08:32:19 managed-node13 sudo[12476]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:32:20 managed-node13 ansible-fedora.linux_system_roles.blivet[12479]: Invoked with packages_only=False uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={'ext4': '-F', 'ext3': '-F', 'ext2': '-F'} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None
Jul 22 08:32:20 managed-node13 sudo[12476]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:21 managed-node13 sudo[12534]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-moutytagouxrxyygvsvqdjhhfudzymqa ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187540.78-28341-216889814878497/AnsiballZ_stat.py
Jul 22 08:32:21 managed-node13 sudo[12534]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:32:21 managed-node13 ansible-stat[12537]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/fstab get_md5=False get_mime=True get_attributes=True
Jul 22 08:32:21 managed-node13 sudo[12534]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:26 managed-node13 sudo[12588]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ojxdjnqqefcfrkfckbxgqamuvqdmyomi ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187545.66-28728-18100261419183/AnsiballZ_stat.py
Jul 22 08:32:26 managed-node13 sudo[12588]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:32:26 managed-node13 ansible-stat[12591]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/crypttab get_md5=False get_mime=True get_attributes=True
Jul 22 08:32:26 managed-node13 sudo[12588]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:27 managed-node13 sudo[12642]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vlikxcreoxzvsnwwnvpgipvdwykesynv ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187547.02-28866-143824487781159/AnsiballZ_setup.py
Jul 22 08:32:27 managed-node13 sudo[12642]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:32:27 managed-node13 ansible-setup[12645]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10
Jul 22 08:32:27 managed-node13 sudo[12642]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:31 managed-node13 sudo[12725]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nnwvgjcgaddsiyuabfpinpdcpbkmayci ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187550.47-29091-250835374161394/AnsiballZ_yum.py
Jul 22 08:32:31 managed-node13 sudo[12725]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:32:31 managed-node13 ansible-yum[12728]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['util-linux'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True
Jul 22 08:32:31 managed-node13 sudo[12725]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:33 managed-node13 sudo[12781]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tzwevbzqtmobhcpcxfqcigwrojwjwcnu ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187551.94-29315-112454742983095/AnsiballZ_find_unused_disk.py
Jul 22 08:32:33 managed-node13 sudo[12781]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:32:33 managed-node13 ansible-fedora.linux_system_roles.find_unused_disk[12784]: Invoked with min_size=5g max_return=1 with_interface=scsi max_size=0 match_sector_size=False
Jul 22 08:32:33 managed-node13 sudo[12781]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:36 managed-node13 sudo[12835]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kygcbhwbnvxtalatcdurgmqnqsqmyeuc ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187554.25-29568-201920553883612/AnsiballZ_command.py
Jul 22 08:32:36 managed-node13 sudo[12835]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:32:36 managed-node13 ansible-command[12838]: Invoked with creates=None executable=None _uses_shell=True strip_empty_ends=True _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None
Jul 22 08:32:36 managed-node13 sudo[12835]: pam_unix(sudo:session): session closed for user root
Jul 22 08:32:39 managed-node13 sshd[12849]: Accepted publickey for root from 10.31.42.107 port 55706 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:32:39 managed-node13 systemd-logind[514]: New session 14 of user root.
Jul 22 08:32:39 managed-node13 systemd[1]: Started Session 14 of user root.
Jul 22 08:32:39 managed-node13 sshd[12849]: pam_unix(sshd:session): session opened for user root by (uid=0)
Jul 22 08:32:39 managed-node13 sshd[12849]: Received disconnect from 10.31.42.107 port 55706:11: disconnected by user
Jul 22 08:32:39 managed-node13 sshd[12849]: Disconnected from 10.31.42.107 port 55706
Jul 22 08:32:39 managed-node13 sshd[12849]: pam_unix(sshd:session): session closed for user root
Jul 22 08:32:39 managed-node13 systemd-logind[514]: Removed session 14.
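The find_unused_disk records above show how each test picks its scratch device before provisioning storage. Assuming the module is invoked directly from a test task (the task name and register variable here are hypothetical), a minimal sketch matching the logged parameters min_size=5g, max_return=1, with_interface=scsi would be:

    - name: Find an unused 5 GiB SCSI disk      # hypothetical task name
      fedora.linux_system_roles.find_unused_disk:
        min_size: 5g
        max_return: 1
        with_interface: scsi
      register: unused_disks                    # hypothetical variable name

Limiting max_return to 1 keeps each test pinned to a single disk, so later cleanup only has to scrub that one device.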
Jul 22 08:32:40 managed-node13 sshd[12859]: Accepted publickey for root from 10.31.42.107 port 55710 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:32:40 managed-node13 systemd[1]: Started Session 15 of user root.
Jul 22 08:32:40 managed-node13 systemd-logind[514]: New session 15 of user root.
Jul 22 08:32:40 managed-node13 sshd[12859]: pam_unix(sshd:session): session opened for user root by (uid=0)
Jul 22 08:32:41 managed-node13 sshd[12859]: Received disconnect from 10.31.42.107 port 55710:11: disconnected by user
Jul 22 08:32:41 managed-node13 sshd[12859]: Disconnected from 10.31.42.107 port 55710
Jul 22 08:32:41 managed-node13 sshd[12859]: pam_unix(sshd:session): session closed for user root
Jul 22 08:32:41 managed-node13 systemd-logind[514]: Removed session 15.
Jul 22 08:32:50 managed-node13 ansible-setup[12923]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10
Jul 22 08:32:52 managed-node13 sudo[13003]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-acvndsyovgqzmpnmkdcrcehpursdzqkc ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187572.15-31627-131805023208695/AnsiballZ_setup.py
Jul 22 08:32:52 managed-node13 sudo[13003]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:32:52 managed-node13 ansible-setup[13006]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10
Jul 22 08:32:53 managed-node13 sudo[13003]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:00 managed-node13 sudo[13086]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-easwluyghbhecdvcnazsbkkbtsfcpijh ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187579.17-32432-198814695961486/AnsiballZ_stat.py
Jul 22 08:33:00 managed-node13 sudo[13086]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:33:00 managed-node13 ansible-stat[13089]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/run/ostree-booted get_md5=False get_mime=True get_attributes=True
Jul 22 08:33:00 managed-node13 sudo[13086]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:05 managed-node13 sudo[13138]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fkqavpblablzlmzubvcgluaugrxeigmd ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187583.9-32763-53625799964728/AnsiballZ_yum.py
Jul 22 08:33:05 managed-node13 sudo[13138]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:33:06 managed-node13 ansible-yum[13141]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['python-enum34', 'python-blivet3', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'libblockdev'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True
Jul 22 08:33:09 managed-node13 sudo[13138]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:13 managed-node13 sudo[13215]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tofjixkgvyofhwsumhkyatapuxbhkcya ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187591.27-1191-111734390981554/AnsiballZ_blivet.py
Jul 22 08:33:13 managed-node13 sudo[13215]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:33:14 managed-node13 ansible-fedora.linux_system_roles.blivet[13218]: Invoked with packages_only=True uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None
Jul 22 08:33:14 managed-node13 sudo[13215]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:18 managed-node13 sudo[13273]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hnzutbbbfrkyhocctfxuzdfttsrelsou ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187597.51-1798-131799520530390/AnsiballZ_yum.py
Jul 22 08:33:18 managed-node13 sudo[13273]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:33:18 managed-node13 ansible-yum[13276]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['kpartx'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True
Jul 22 08:33:18 managed-node13 sudo[13273]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:21 managed-node13 sudo[13329]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-azyhdpxygwvnocwoiqthfmaawcoaqdut ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187599.32-2010-196113685534272/AnsiballZ_service_facts.py
Jul 22 08:33:21 managed-node13 sudo[13329]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:33:21 managed-node13 ansible-service_facts[13332]: Invoked
Jul 22 08:33:21 managed-node13 sudo[13329]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:23 managed-node13 sudo[13494]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ojipteathydwfgjjixusdxixttqbziaw ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187603.39-2370-187519938664858/AnsiballZ_blivet.py
Jul 22 08:33:23 managed-node13 sudo[13494]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:33:24 managed-node13 ansible-fedora.linux_system_roles.blivet[13497]: Invoked with packages_only=False uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={'ext4': '-F', 'ext3': '-F', 'ext2': '-F'} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None
Jul 22 08:33:24 managed-node13 sudo[13494]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:25 managed-node13 sudo[13552]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lcckzhqdxggyexticogdetiajlghxgrb ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187605.19-2522-241909063694176/AnsiballZ_stat.py
Jul 22 08:33:25 managed-node13 sudo[13552]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:33:25 managed-node13 ansible-stat[13555]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/fstab get_md5=False get_mime=True get_attributes=True
Jul 22 08:33:25 managed-node13 sudo[13552]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:30 managed-node13 sudo[13606]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wnswlbypbyheyeuzywnctlrfkndtirsk ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187609.23-2980-53496897630547/AnsiballZ_stat.py
Jul 22 08:33:30 managed-node13 sudo[13606]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:33:30 managed-node13 ansible-stat[13609]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/crypttab get_md5=False get_mime=True get_attributes=True
Jul 22 08:33:30 managed-node13 sudo[13606]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:31 managed-node13 sudo[13660]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ckpbovmcknnqajtzhhrsxihbafwgubko ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187611.2-3128-2610997279948/AnsiballZ_setup.py
Jul 22 08:33:31 managed-node13 sudo[13660]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:33:32 managed-node13 ansible-setup[13663]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10
Jul 22 08:33:32 managed-node13 sudo[13660]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:35 managed-node13 sudo[13743]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cryxgtcpelersrprpwwlbfvcwodsawda ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187614.41-3293-197574901269319/AnsiballZ_yum.py
Jul 22 08:33:35 managed-node13 sudo[13743]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:33:35 managed-node13 ansible-yum[13746]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['util-linux'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True
Jul 22 08:33:35 managed-node13 sudo[13743]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:38 managed-node13 sudo[13799]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qicrubveukyzinggfmoibrqgwksetbad ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187616.22-3511-240526089954635/AnsiballZ_find_unused_disk.py
Jul 22 08:33:38 managed-node13 sudo[13799]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:33:38 managed-node13 ansible-fedora.linux_system_roles.find_unused_disk[13802]: Invoked with min_size=5g max_return=1 with_interface=scsi max_size=0 match_sector_size=False
Jul 22 08:33:38 managed-node13 sudo[13799]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:42 managed-node13 sudo[13853]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uiuknfdxwmnxjwqgcfdjrfehowwakodm ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187619.77-3786-164621490397299/AnsiballZ_command.py
Jul 22 08:33:42 managed-node13 sudo[13853]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:33:42 managed-node13 ansible-command[13856]: Invoked with creates=None executable=None _uses_shell=True strip_empty_ends=True _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None
Jul 22 08:33:42 managed-node13 sudo[13853]: pam_unix(sudo:session): session closed for user root
Jul 22 08:33:46 managed-node13 sshd[13867]: Accepted publickey for root from 10.31.42.107 port 55824 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:33:46 managed-node13 systemd-logind[514]: New session 16 of user root.
Jul 22 08:33:46 managed-node13 systemd[1]: Started Session 16 of user root.
Jul 22 08:33:46 managed-node13 sshd[13867]: pam_unix(sshd:session): session opened for user root by (uid=0)
Jul 22 08:33:46 managed-node13 sshd[13867]: Received disconnect from 10.31.42.107 port 55824:11: disconnected by user
Jul 22 08:33:46 managed-node13 sshd[13867]: Disconnected from 10.31.42.107 port 55824
Jul 22 08:33:46 managed-node13 sshd[13867]: pam_unix(sshd:session): session closed for user root
Jul 22 08:33:46 managed-node13 systemd-logind[514]: Removed session 16.
Jul 22 08:33:47 managed-node13 sshd[13877]: Accepted publickey for root from 10.31.42.107 port 55826 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:33:47 managed-node13 systemd[1]: Started Session 17 of user root.
Jul 22 08:33:47 managed-node13 systemd-logind[514]: New session 17 of user root.
Jul 22 08:33:47 managed-node13 sshd[13877]: pam_unix(sshd:session): session opened for user root by (uid=0)
Jul 22 08:33:47 managed-node13 sshd[13877]: Received disconnect from 10.31.42.107 port 55826:11: disconnected by user
Jul 22 08:33:47 managed-node13 sshd[13877]: Disconnected from 10.31.42.107 port 55826
Jul 22 08:33:47 managed-node13 sshd[13877]: pam_unix(sshd:session): session closed for user root
Jul 22 08:33:47 managed-node13 systemd-logind[514]: Removed session 17.
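The ansible-command records that recur above are the tests' debug dump: a shell task that traces itself with set -x, redirects stdout to stderr so the trace lands in the task's error stream, and prints the block-device layout plus the recent journal. Reconstructed from the logged _raw_params (only the task name is illustrative), it is roughly:

    - name: Collect block-device and journal debug info   # illustrative task name
      shell: |
        set -x
        exec 1>&2
        lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
        journalctl -ex

Running journalctl -ex from inside the play is what makes this very journal slice appear in the test output.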
Jul 22 08:33:58 managed-node13 ansible-setup[13941]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10
Jul 22 08:34:00 managed-node13 sudo[14021]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yujpmoqqbkfgpmilioyxtfbkloypsagv ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187639.72-5832-76384444773134/AnsiballZ_setup.py
Jul 22 08:34:00 managed-node13 sudo[14021]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:34:00 managed-node13 ansible-setup[14024]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10
Jul 22 08:34:01 managed-node13 sudo[14021]: pam_unix(sudo:session): session closed for user root
Jul 22 08:34:06 managed-node13 sudo[14104]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-todppyjgncwrjdbdqvklqrbgcajmuaet ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187644.62-6336-134186930385085/AnsiballZ_stat.py
Jul 22 08:34:06 managed-node13 sudo[14104]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:34:06 managed-node13 ansible-stat[14107]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/run/ostree-booted get_md5=False get_mime=True get_attributes=True
Jul 22 08:34:06 managed-node13 sudo[14104]: pam_unix(sudo:session): session closed for user root
Jul 22 08:34:11 managed-node13 sudo[14156]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwzbithoagtlyqsfpbcuudzmumghjskg ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187649.78-6841-228322127808712/AnsiballZ_yum.py
Jul 22 08:34:11 managed-node13 sudo[14156]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:34:12 managed-node13 ansible-yum[14159]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['python-enum34', 'python-blivet3', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'libblockdev'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True
Jul 22 08:34:15 managed-node13 sudo[14156]: pam_unix(sudo:session): session closed for user root
Jul 22 08:34:19 managed-node13 sudo[14233]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-diedhogrsjjcffchdkphhsxfnajiuhgt ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187657.57-7870-37476047866052/AnsiballZ_blivet.py
Jul 22 08:34:19 managed-node13 sudo[14233]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:34:19 managed-node13 ansible-fedora.linux_system_roles.blivet[14236]: Invoked with packages_only=True uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None
Jul 22 08:34:19 managed-node13 sudo[14233]: pam_unix(sudo:session): session closed for user root
Jul 22 08:34:22 managed-node13 sudo[14291]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ixslnmspjfbkthsnefgejuinjraubcia ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187661.71-8253-119556232283502/AnsiballZ_yum.py
Jul 22 08:34:22 managed-node13 sudo[14291]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:34:22 managed-node13 ansible-yum[14294]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['kpartx'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True
Jul 22 08:34:22 managed-node13 sudo[14291]: pam_unix(sudo:session): session closed for user root
Jul 22 08:34:24 managed-node13 sudo[14347]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iqjfeczcabalcorlcexvqdqmoxnckldq ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187663.18-8565-201810431125792/AnsiballZ_service_facts.py
Jul 22 08:34:24 managed-node13 sudo[14347]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:34:25 managed-node13 ansible-service_facts[14350]: Invoked
Jul 22 08:34:25 managed-node13 sudo[14347]: pam_unix(sudo:session): session closed for user root
Jul 22 08:34:28 managed-node13 sudo[14512]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uabpyipcfezdrzhjznlohhhofmuqgvya ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187667.37-8907-22478514172803/AnsiballZ_blivet.py
Jul 22 08:34:28 managed-node13 sudo[14512]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:34:28 managed-node13 ansible-fedora.linux_system_roles.blivet[14515]: Invoked with packages_only=False uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={'ext4': '-F', 'ext3': '-F', 'ext2': '-F'} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None
Jul 22 08:34:28 managed-node13 sudo[14512]: pam_unix(sudo:session): session closed for user root
Jul 22 08:34:29 managed-node13 sudo[14570]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-anrrtqigyhzkikgooodtpdrjqzazylvy ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187668.7-9127-97311725161406/AnsiballZ_stat.py
Jul 22 08:34:29 managed-node13 sudo[14570]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:34:29 managed-node13 ansible-stat[14573]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/fstab get_md5=False get_mime=True get_attributes=True
Jul 22 08:34:29 managed-node13 sudo[14570]: pam_unix(sudo:session): session closed for user root
Jul 22 08:34:32 managed-node13 sudo[14624]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bjakxtmykpnvuxjwxbygyrkjpfwkupyy ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187671.97-9576-166913404112606/AnsiballZ_stat.py
Jul 22 08:34:32 managed-node13 sudo[14624]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:34:32 managed-node13 ansible-stat[14627]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/crypttab get_md5=False get_mime=True get_attributes=True
Jul 22 08:34:32 managed-node13 sudo[14624]: pam_unix(sudo:session): session closed for user root
Jul 22 08:34:34 managed-node13 sudo[14678]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eoxmkgbjiufazkloqszesymdfgsgcsna ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187673.45-9765-236261505928957/AnsiballZ_setup.py
Jul 22 08:34:34 managed-node13 sudo[14678]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:34:34 managed-node13 ansible-setup[14681]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10
Jul 22 08:34:34 managed-node13 sudo[14678]: pam_unix(sudo:session): session closed for user root
Jul 22 08:34:36 managed-node13 sudo[14761]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-imjbdbacgeclpckdffcxgnivbotmatuz ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187676.09-9861-68308810680593/AnsiballZ_yum.py
Jul 22 08:34:36 managed-node13 sudo[14761]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:34:36 managed-node13 ansible-yum[14764]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['util-linux'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True
Jul 22 08:34:36 managed-node13 sudo[14761]: pam_unix(sudo:session): session closed for user root
Jul 22 08:34:38 managed-node13 sudo[14817]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylpfjksfgqkhbqrabkicocqefwjepbuy ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187677.29-10074-271327866545251/AnsiballZ_find_unused_disk.py
Jul 22 08:34:38 managed-node13 sudo[14817]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:34:38 managed-node13 ansible-fedora.linux_system_roles.find_unused_disk[14820]: Invoked with min_size=10g max_return=1 with_interface=scsi max_size=0 match_sector_size=False
Jul 22 08:34:38 managed-node13 sudo[14817]: pam_unix(sudo:session): session closed for user root
Jul 22 08:34:40 managed-node13 sudo[14871]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbfefvyuwxqhxkuhnakocspezuappzuy ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187679.1-10210-178419737330962/AnsiballZ_command.py
Jul 22 08:34:40 managed-node13 sudo[14871]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:34:40 managed-node13 ansible-command[14874]: Invoked with creates=None executable=None _uses_shell=True strip_empty_ends=True _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None
Jul 22 08:34:40 managed-node13 sudo[14871]: pam_unix(sudo:session): session closed for user root
Jul 22 08:34:43 managed-node13 sshd[14885]: Accepted publickey for root from 10.31.42.107 port 55968 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:34:43 managed-node13 systemd-logind[514]: New session 18 of user root.
Jul 22 08:34:43 managed-node13 systemd[1]: Started Session 18 of user root.
Jul 22 08:34:43 managed-node13 sshd[14885]: pam_unix(sshd:session): session opened for user root by (uid=0)
Jul 22 08:34:43 managed-node13 sshd[14885]: Received disconnect from 10.31.42.107 port 55968:11: disconnected by user
Jul 22 08:34:43 managed-node13 sshd[14885]: Disconnected from 10.31.42.107 port 55968
Jul 22 08:34:43 managed-node13 sshd[14885]: pam_unix(sshd:session): session closed for user root
Jul 22 08:34:43 managed-node13 systemd-logind[514]: Removed session 18.
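The paired ansible-stat calls against /etc/fstab and /etc/crypttab recur before and after each storage operation, presumably so the tests can detect unexpected edits to those files. One side of that pattern, sketched with hypothetical names but with the parameters taken from the logged invocation, would look like:

    - name: Record /etc/fstab state             # hypothetical task name
      stat:
        path: /etc/fstab
        checksum_algorithm: sha1
      register: fstab_state                     # hypothetical variable name

Comparing the registered checksum before and after the role run is enough to assert that a test left the file untouched.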
Jul 22 08:34:44 managed-node13 sshd[14895]: Accepted publickey for root from 10.31.42.107 port 55972 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:34:44 managed-node13 systemd[1]: Started Session 19 of user root.
-- Subject: Unit session-19.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-19.scope has finished starting up.
--
-- The start-up result is done.
Jul 22 08:34:44 managed-node13 systemd-logind[514]: New session 19 of user root.
-- Subject: A new session 19 has been created for user root
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID 19 has been created for the user root.
--
-- The leading process of the session is 14895.
Jul 22 08:34:44 managed-node13 sshd[14895]: pam_unix(sshd:session): session opened for user root by (uid=0)
Jul 22 08:34:45 managed-node13 sshd[14895]: Received disconnect from 10.31.42.107 port 55972:11: disconnected by user
Jul 22 08:34:45 managed-node13 sshd[14895]: Disconnected from 10.31.42.107 port 55972
Jul 22 08:34:45 managed-node13 sshd[14895]: pam_unix(sshd:session): session closed for user root
Jul 22 08:34:45 managed-node13 systemd-logind[514]: Removed session 19.
-- Subject: Session 19 has been terminated
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A session with the ID 19 has been terminated.
Jul 22 08:34:55 managed-node13 sudo[14959]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kiwgvnqwlxeabmuobaqgohgyvgrbstiq ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187693.66-12011-42408988706467/AnsiballZ_setup.py
Jul 22 08:34:55 managed-node13 sudo[14959]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:34:55 managed-node13 ansible-setup[14962]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10
Jul 22 08:34:56 managed-node13 sudo[14959]: pam_unix(sudo:session): session closed for user root
Jul 22 08:34:58 managed-node13 sudo[15042]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-czslkbgfcerkgmfqdaevxvdgqygmklig ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187697.97-12604-246383236000674/AnsiballZ_stat.py
Jul 22 08:34:58 managed-node13 sudo[15042]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:34:58 managed-node13 ansible-stat[15045]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/run/ostree-booted get_md5=False get_mime=True get_attributes=True
Jul 22 08:34:58 managed-node13 sudo[15042]: pam_unix(sudo:session): session closed for user root
Jul 22 08:35:02 managed-node13 sudo[15094]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vibczvpclhvqxcoesqjupotovqjyucvm ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187701.17-12844-140859389080833/AnsiballZ_yum.py
Jul 22 08:35:02 managed-node13 sudo[15094]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:35:03 managed-node13 ansible-yum[15097]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['python-enum34', 'python-blivet3', 'libblockdev-crypto', 'libblockdev-dm', 'libblockdev-lvm', 'libblockdev-mdraid', 'libblockdev-swap', 'libblockdev'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True
Jul 22 08:35:06 managed-node13 sudo[15094]: pam_unix(sudo:session): session closed for user root
Jul 22 08:35:10 managed-node13 sudo[15171]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bbvqrzfzbbyjpxkmfzrlpvhhvxfxnqvl ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187707.97-13579-7900527114431/AnsiballZ_blivet.py
Jul 22 08:35:10 managed-node13 sudo[15171]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:35:10 managed-node13 ansible-fedora.linux_system_roles.blivet[15174]: Invoked with packages_only=True uses_kmod_kvdo=True disklabel_type=None safe_mode=True diskvolume_mkfs_option_map={} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None
Jul 22 08:35:10 managed-node13 sudo[15171]: pam_unix(sudo:session): session closed for user root
Jul 22 08:35:13 managed-node13 sudo[15229]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ngzofhcrfwmenprbnjbrbfojmirciiiq ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187712.49-13779-29446550426492/AnsiballZ_yum.py
Jul 22 08:35:13 managed-node13 sudo[15229]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:35:13 managed-node13 ansible-yum[15232]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['kpartx'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True
Jul 22 08:35:13 managed-node13 sudo[15229]: pam_unix(sudo:session): session closed for user root
Jul 22 08:35:15 managed-node13 sudo[15285]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uweetxzhocqcskoknelhzfoypiiuuzff ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187713.92-13902-136337050564357/AnsiballZ_service_facts.py
Jul 22 08:35:15 managed-node13 sudo[15285]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:35:15 managed-node13 ansible-service_facts[15288]: Invoked
Jul 22 08:35:16 managed-node13 sudo[15285]: pam_unix(sudo:session): session closed for user root
Jul 22 08:35:19 managed-node13 sudo[15450]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jogifuignmpthnqhlgildohwkxpfukwc ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187718.27-14181-165606039719296/AnsiballZ_blivet.py
Jul 22 08:35:19 managed-node13 sudo[15450]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:35:19 managed-node13 ansible-fedora.linux_system_roles.blivet[15453]: Invoked with packages_only=False uses_kmod_kvdo=True disklabel_type=None safe_mode=False diskvolume_mkfs_option_map={'ext4': '-F', 'ext3': '-F', 'ext2': '-F'} pools=[] volumes=[] pool_defaults={'encryption_password': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_cipher': None, 'disks': [], 'raid_level': None, 'encryption_key_size': None, 'encryption_key': None, 'raid_device_count': None, 'state': 'present', 'volumes': [], 'shared': False, 'encryption_luks_version': None, 'type': 'lvm', 'grow_to_fill': False, 'raid_spare_count': None, 'raid_chunk_size': None} volume_defaults={'raid_metadata_version': None, 'raid_level': None, 'fs_type': 'xfs', 'mount_options': 'defaults', 'size': 0, 'mount_point': '', 'compression': None, 'encryption_password': None, 'encryption': False, 'mount_device_identifier': 'uuid', 'raid_device_count': None, 'state': 'present', 'vdo_pool_size': None, 'thin_pool_name': None, 'fs_overwrite_existing': True, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_key': None, 'fs_label': '', 'encryption_luks_version': None, 'raid_stripe_size': None, 'cache_size': 0, 'raid_spare_count': None, 'cache_mode': None, 'deduplication': None, 'cached': False, 'type': 'lvm', 'disks': [], 'thin_pool_size': None, 'thin': None, 'mount_check': 0, 'mount_passno': 0, 'raid_chunk_size': None, 'cache_devices': [], 'fs_create_options': ''} use_partitions=None
Jul 22 08:35:19 managed-node13 sudo[15450]: pam_unix(sudo:session): session closed for user root
Jul 22 08:35:21 managed-node13 sudo[15508]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mlfuieshgmwsjjqqgpxdedcnkregazya ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187720.44-14336-138191480947132/AnsiballZ_stat.py
Jul 22 08:35:21 managed-node13 sudo[15508]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:35:21 managed-node13 ansible-stat[15511]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/fstab get_md5=False get_mime=True get_attributes=True
Jul 22 08:35:21 managed-node13 sudo[15508]: pam_unix(sudo:session): session closed for user root
Jul 22 08:35:25 managed-node13 sudo[15562]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfstudvplkprxpdtowgddfmdwjhzgmht ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187724.85-14619-202959984966782/AnsiballZ_stat.py
Jul 22 08:35:25 managed-node13 sudo[15562]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:35:25 managed-node13 ansible-stat[15565]: Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/crypttab get_md5=False get_mime=True get_attributes=True
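Note: the two blivet invocations above are the role's internal module interface. pools and volumes are empty because the test play declared none at this point, while pool_defaults and volume_defaults carry the role's fallbacks (type 'lvm', fs_type 'xfs', state 'present'). A consumer would not call blivet directly; the same defaults are reached through the role's public storage_pools/storage_volumes variables. A minimal sketch with hypothetical pool, volume, and disk names:

    - hosts: managed-node13
      roles:
        - role: fedora.linux_system_roles.storage
          vars:
            storage_pools:
              - name: testpool        # hypothetical pool name
                disks: ["sdb"]        # hypothetical; must be an unused disk
                volumes:
                  - name: test1       # hypothetical volume name
                    size: "4g"
                    mount_point: /opt/test1
                    # fs_type, type, and state fall back to the
                    # volume_defaults visible in the journal entries above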
Jul 22 08:35:25 managed-node13 sudo[15562]: pam_unix(sudo:session): session closed for user root
Jul 22 08:35:27 managed-node13 sudo[15616]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jbmymdheehovbwqgphaesqhrqogfrdtb ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187726.49-14758-263713836568203/AnsiballZ_setup.py
Jul 22 08:35:27 managed-node13 sudo[15616]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:35:27 managed-node13 ansible-setup[15619]: Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10
Jul 22 08:35:27 managed-node13 sudo[15616]: pam_unix(sudo:session): session closed for user root
Jul 22 08:35:30 managed-node13 sudo[15699]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jgtzcuyalnscxsrqsilmzqlzbuifznjn ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187730.18-15031-119688241381375/AnsiballZ_yum.py
Jul 22 08:35:30 managed-node13 sudo[15699]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:35:31 managed-node13 ansible-yum[15702]: Invoked with lock_timeout=30 update_cache=False disable_excludes=None exclude=[] allow_downgrade=False disable_gpg_check=False conf_file=None use_backend=auto state=present disablerepo=[] releasever=None skip_broken=False autoremove=False download_dir=None enable_plugin=[] installroot=/ install_weak_deps=True name=['util-linux'] download_only=False bugfix=False list=None install_repoquery=True update_only=False disable_plugin=[] enablerepo=[] security=False validate_certs=True
Jul 22 08:35:31 managed-node13 sudo[15699]: pam_unix(sudo:session): session closed for user root
Jul 22 08:35:33 managed-node13 sudo[15755]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yorudnthzmpvryeglzmjgkzvbqxkzvte ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187732.0-15288-211185526069626/AnsiballZ_find_unused_disk.py
Jul 22 08:35:33 managed-node13 sudo[15755]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:35:33 managed-node13 ansible-fedora.linux_system_roles.find_unused_disk[15758]: Invoked with min_size=5g max_return=1 max_size=0 with_interface=None match_sector_size=False
Jul 22 08:35:33 managed-node13 sudo[15755]: pam_unix(sudo:session): session closed for user root
Jul 22 08:35:36 managed-node13 sudo[15809]: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sfkseituriljsdydpjmxvdyjjxssbgii ; /usr/bin/python /root/.ansible/tmp/ansible-tmp-1753187734.5-15527-218832003748077/AnsiballZ_command.py
Jul 22 08:35:36 managed-node13 sudo[15809]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jul 22 08:35:36 managed-node13 ansible-command[15812]: Invoked with creates=None executable=None _uses_shell=True strip_empty_ends=True _raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex removes=None argv=None warn=True chdir=None stdin_add_newline=True stdin=None

TASK [Set unused_disks if necessary] *******************************************
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:29
Tuesday 22 July 2025  08:35:37 -0400 (0:00:02.943)       0:00:43.899 **********
skipping: [managed-node13] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [Exit playbook when there's not enough unused disks in the system] ********
task path: /tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:34
Tuesday 22 July 2025  08:35:37 -0400 (0:00:00.353)       0:00:44.252 **********
fatal: [managed-node13]: FAILED! => {
    "changed": false
}

MSG:

Unable to find enough unused disks. Exiting playbook.

PLAY RECAP *********************************************************************
managed-node13 : ok=28 changed=0 unreachable=0 failed=1 skipped=15 rescued=0 ignored=0

SYSTEM ROLES ERRORS BEGIN v1
[
  {
    "ansible_version": "2.9.27",
    "end_time": "2025-07-22T12:35:37.556415Z",
    "host": "managed-node13",
    "message": "Unable to find enough unused disks. Exiting playbook.",
    "start_time": "2025-07-22T12:35:37.460572Z",
    "task_name": "Exit playbook when there's not enough unused disks in the system",
    "task_path": "/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:34"
  }
]
SYSTEM ROLES ERRORS END v1
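Note: this failure is environmental rather than a role bug. Both disk queries came back empty (at least 10 GiB with with_interface=scsi at 08:34:38, then at least 5 GiB on any interface at 08:35:33), so the test exited rather than risk formatting a disk that is in use. One hypothetical way to prepare a node for a rerun is to attach disposable block devices first; a sketch using a sparse file and a loop device (path and size are illustrative, and a loop device would only satisfy the interface-agnostic query, not the scsi-only one, so it may still be filtered out):

    - hosts: managed-node13
      tasks:
        - name: Create a sparse 10 GiB backing file for a scratch disk
          command: truncate -s 10G /var/tmp/scratch0.img
          args:
            creates: /var/tmp/scratch0.img

        - name: Attach the file as a loop device and capture its path
          command: losetup --find --show /var/tmp/scratch0.img
          register: loop_dev

        - name: Show the device a rerun could consume
          debug:
            var: loop_dev.stdout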
TASKS RECAP ********************************************************************
Tuesday 22 July 2025  08:35:37 -0400 (0:00:00.133)       0:00:44.386 **********
===============================================================================
fedora.linux_system_roles.storage : Make sure blivet is available ------- 6.16s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:2
fedora.linux_system_roles.storage : Get service facts ------------------- 3.36s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:52
Gathering Facts --------------------------------------------------------- 3.11s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/tests_misc.yml:2
fedora.linux_system_roles.storage : Get required packages --------------- 3.08s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:19
Debug why there are no unused disks ------------------------------------- 2.94s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:20
Ensure test packages ---------------------------------------------------- 2.64s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:2
Find unused disks in the system ----------------------------------------- 2.57s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/get_unused_disk.yml:11
fedora.linux_system_roles.storage : Manage the pools and volumes to match the specified state --- 2.10s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:70
fedora.linux_system_roles.storage : Make sure required packages are installed --- 2.01s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:38
fedora.linux_system_roles.storage : Update facts ------------------------ 1.85s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:224
fedora.linux_system_roles.storage : Check if system is ostree ----------- 1.33s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/set_vars.yml:25
fedora.linux_system_roles.storage : Check if /etc/fstab is present ------ 1.32s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:92
fedora.linux_system_roles.storage : Retrieve facts for the /etc/crypttab file --- 1.28s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:197
fedora.linux_system_roles.storage : Include the appropriate provider tasks --- 0.60s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:13
Get unused disks for test ----------------------------------------------- 0.58s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/tests/storage/tests_misc.yml:26
fedora.linux_system_roles.storage : Enable copr repositories if needed --- 0.55s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:32
fedora.linux_system_roles.storage : Set storage_cryptsetup_services ----- 0.51s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:58
fedora.linux_system_roles.storage : Set platform/version specific variables --- 0.48s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main.yml:2
fedora.linux_system_roles.storage : Show storage_volumes ---------------- 0.44s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:14
fedora.linux_system_roles.storage : Show storage_pools ------------------ 0.42s
/tmp/collections-YxY/ansible_collections/fedora/linux_system_roles/roles/storage/tasks/main-blivet.yml:9
-- Logs begin at Tue 2025-07-22 08:23:16 EDT, end at Tue 2025-07-22 08:35:39 EDT. --
Jul 22 08:35:36 managed-node13 sudo[15809]: pam_unix(sudo:session): session closed for user root
Jul 22 08:35:38 managed-node13 sshd[15823]: Accepted publickey for root from 10.31.42.107 port 56076 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:35:38 managed-node13 systemd-logind[514]: New session 20 of user root.
-- Subject: A new session 20 has been created for user root
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID 20 has been created for the user root.
--
-- The leading process of the session is 15823.
Jul 22 08:35:38 managed-node13 sshd[15823]: pam_unix(sshd:session): session opened for user root by (uid=0)
Jul 22 08:35:38 managed-node13 systemd[1]: Started Session 20 of user root.
-- Subject: Unit session-20.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-20.scope has finished starting up.
--
-- The start-up result is done.
Jul 22 08:35:38 managed-node13 sshd[15823]: Received disconnect from 10.31.42.107 port 56076:11: disconnected by user
Jul 22 08:35:38 managed-node13 sshd[15823]: Disconnected from 10.31.42.107 port 56076
Jul 22 08:35:38 managed-node13 sshd[15823]: pam_unix(sshd:session): session closed for user root
Jul 22 08:35:38 managed-node13 systemd-logind[514]: Removed session 20.
-- Subject: Session 20 has been terminated
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A session with the ID 20 has been terminated.
Jul 22 08:35:39 managed-node13 sshd[15833]: Accepted publickey for root from 10.31.42.107 port 56078 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Jul 22 08:35:39 managed-node13 systemd-logind[514]: New session 21 of user root.
-- Subject: A new session 21 has been created for user root
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID 21 has been created for the user root.
--
-- The leading process of the session is 15833.
Jul 22 08:35:39 managed-node13 systemd[1]: Started Session 21 of user root.
-- Subject: Unit session-21.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-21.scope has finished starting up.
--
-- The start-up result is done.
Jul 22 08:35:39 managed-node13 sshd[15833]: pam_unix(sshd:session): session opened for user root by (uid=0)
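Note: the lsblk and journalctl output interleaved through this log comes from the test's "Debug why there are no unused disks" step; the exact command string is preserved in the ansible-command entries above (_raw_params=set -x exec 1>&2 lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC journalctl -ex). A sketch of reproducing the same dump, mapping the logged shell fragment onto a task (the original ran with _uses_shell=True and redirected all output to stderr):

    - hosts: managed-node13
      tasks:
        - name: Re-run the debug commands captured in the journal
          shell: |
            set -x
            exec 1>&2
            lsblk -p --pairs --bytes -o NAME,TYPE,SIZE,FSTYPE,LOG-SEC
            journalctl -ex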