ansible-playbook [core 2.17.14]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /tmp/collections-lxG
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.12.11 (main, Aug 14 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-11)] (/usr/bin/python3.12)
  jinja version = 3.1.6
  libyaml = True
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_add_rm.yml *****************************************************
1 plays in /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/tests/bootloader/tests_add_rm.yml

PLAY [Test creating, modifying, and removing kernels] **************************

TASK [Skip on s390x architecture] **********************************************
task path: /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/tests/bootloader/tests_add_rm.yml:11
Saturday 25 October 2025  08:54:45 -0400 (0:00:00.019)       0:00:00.019 ******
included: /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/tests/bootloader/tasks/skip_on_s390x.yml for managed-node1

TASK [Gather architecture facts] ***********************************************
task path: /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/tests/bootloader/tasks/skip_on_s390x.yml:3
Saturday 25 October 2025  08:54:45 -0400 (0:00:00.012)       0:00:00.031 ******
[WARNING]: Platform linux on host managed-node1 is using the discovered Python
interpreter at /usr/bin/python3.9, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html
for more information.
ok: [managed-node1]

TASK [End host on s390x architecture] ******************************************
task path: /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/tests/bootloader/tasks/skip_on_s390x.yml:8
Saturday 25 October 2025  08:54:47 -0400 (0:00:01.989)       0:00:02.021 ******
META: end_host conditional evaluated to False, continuing execution for managed-node1
skipping: [managed-node1] => {
    "skip_reason": "end_host conditional evaluated to False, continuing execution for managed-node1"
}

MSG:

end_host conditional evaluated to false, continuing execution for managed-node1

TASK [Get bootloader_facts] ****************************************************
task path: /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/tests/bootloader/tests_add_rm.yml:16
Saturday 25 October 2025  08:54:47 -0400 (0:00:00.011)       0:00:02.032 ******
included: fedora.linux_system_roles.bootloader for managed-node1

TASK [fedora.linux_system_roles.bootloader : Set platform/version specific variables] ***
task path: /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/main.yml:3
Saturday 25 October 2025  08:54:47 -0400 (0:00:00.089)       0:00:02.121 ******
included: /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/set_vars.yml for managed-node1

TASK [fedora.linux_system_roles.bootloader : Ensure ansible_facts used by role] ***
task path: /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/set_vars.yml:3
Saturday 25 October 2025  08:54:47 -0400 (0:00:00.023)       0:00:02.145 ******
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "__bootloader_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.bootloader : Check if system is ostree] ********
task path: /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/set_vars.yml:11
Saturday 25 October 2025  08:54:47 -0400 (0:00:00.032)       0:00:02.178 ******
ok: [managed-node1] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}

TASK [fedora.linux_system_roles.bootloader : Set flag to indicate system is ostree] ***
task path: /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/set_vars.yml:16
Saturday 25 October 2025  08:54:48 -0400 (0:00:00.460)       0:00:02.638 ******
ok: [managed-node1] => {
    "ansible_facts": {
        "__bootloader_is_ostree": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.bootloader : Set platform/version specific variables] ***
task path: /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/set_vars.yml:20
Saturday 25 October 2025  08:54:48 -0400 (0:00:00.023)       0:00:02.662 ******
skipping: [managed-node1] => (item=RedHat.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__bootloader_vars_file is file",
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=CentOS.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__bootloader_vars_file is file",
    "item": "CentOS.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=CentOS_9.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__bootloader_vars_file is file",
    "item": "CentOS_9.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => (item=CentOS_9.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__bootloader_vars_file is file",
    "item": "CentOS_9.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node1] => {
    "changed": false
}

MSG:

All items skipped

TASK [fedora.linux_system_roles.bootloader : Fail on s390x architecture] *******
task path: /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/main.yml:6
Saturday 25 October 2025  08:54:48 -0400 (0:00:00.037)       0:00:02.699 ******
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "ansible_architecture == 's390x'",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.bootloader : Check bootloader settings for boolean and null values] ***
task path: /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/main.yml:15
Saturday 25 October 2025  08:54:48 -0400 (0:00:00.017)       0:00:02.717 ******
skipping: [managed-node1] => {
    "changed": false,
    "false_condition": "(__values | selectattr(\"value\", \"sameas\", true) | list | length > 0) or (__values | selectattr(\"value\", \"sameas\", false) | list | length > 0) or (__values_with_null_and_state_present | length > 0) or (__values_with_null_and_no_state | length > 0)",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.bootloader : Ensure required packages are installed] ***
task path: /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/main.yml:25
Saturday 25 October 2025  08:54:48 -0400 (0:00:00.056)       0:00:02.773 ******
fatal: [managed-node1]: FAILED! => {
    "changed": false,
    "rc": 1,
    "results": []
}

MSG:

Failed to download metadata for repo 'highavailability': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried

TASK [Remove cloned kernels] ***************************************************
task path: /tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/tests/bootloader/tests_add_rm.yml:138
Saturday 25 October 2025  08:55:00 -0400 (0:00:12.659)       0:00:15.433 ******
fatal: [managed-node1]: FAILED! => {}

MSG:

The task includes an option with an undefined variable.. '__default_kernel' is undefined

The error appears to be in '/tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/tests/bootloader/tests_add_rm.yml': line 138, column 11, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

      always:
        - name: Remove cloned kernels
          ^ here

PLAY RECAP *********************************************************************
managed-node1              : ok=6    changed=0    unreachable=0    failed=2    skipped=4    rescued=0    ignored=0

SYSTEM ROLES ERRORS BEGIN v1
[
    {
        "ansible_version": "2.17.14",
        "end_time": "2025-10-25T12:55:00.792704+00:00Z",
        "host": "managed-node1",
        "message": "Failed to download metadata for repo 'highavailability': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried",
        "rc": 1,
        "start_time": "2025-10-25T12:54:48.136961+00:00Z",
        "task_name": "Ensure required packages are installed",
        "task_path": "/tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/main.yml:25"
    }
]
SYSTEM ROLES ERRORS END v1

TASKS RECAP ********************************************************************
Saturday 25 October 2025  08:55:00 -0400 (0:00:00.011)       0:00:15.444 ******
===============================================================================
fedora.linux_system_roles.bootloader : Ensure required packages are installed -- 12.66s
/tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/main.yml:25
Gather architecture facts ----------------------------------------------- 1.99s
/tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/tests/bootloader/tasks/skip_on_s390x.yml:3
fedora.linux_system_roles.bootloader : Check if system is ostree -------- 0.46s
/tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/set_vars.yml:11
Get bootloader_facts ---------------------------------------------------- 0.09s
/tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/tests/bootloader/tests_add_rm.yml:16
fedora.linux_system_roles.bootloader : Check bootloader settings for boolean and null values --- 0.06s
/tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/main.yml:15
fedora.linux_system_roles.bootloader : Set platform/version specific variables --- 0.04s
/tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/set_vars.yml:20
fedora.linux_system_roles.bootloader : Ensure ansible_facts used by role --- 0.03s
/tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/set_vars.yml:3
fedora.linux_system_roles.bootloader : Set flag to indicate system is ostree --- 0.02s
/tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/set_vars.yml:16
fedora.linux_system_roles.bootloader : Set platform/version specific variables --- 0.02s
/tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/main.yml:3
fedora.linux_system_roles.bootloader : Fail on s390x architecture ------- 0.02s
/tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/roles/bootloader/tasks/main.yml:6
Skip on s390x architecture ---------------------------------------------- 0.01s
/tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/tests/bootloader/tests_add_rm.yml:11
Remove cloned kernels --------------------------------------------------- 0.01s
/tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/tests/bootloader/tests_add_rm.yml:138
End host on s390x architecture ------------------------------------------ 0.01s
/tmp/collections-lxG/ansible_collections/fedora/linux_system_roles/tests/bootloader/tasks/skip_on_s390x.yml:8

Oct 25 08:54:44 managed-node1 sshd-session[7355]: Accepted publickey for root from 10.31.40.188 port 54418 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Oct 25 08:54:44 managed-node1 systemd-logind[607]: New session 5 of user root.
░░ Subject: A new session 5 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 5 has been created for the user root.
░░
░░ The leading process of the session is 7355.
Oct 25 08:54:44 managed-node1 systemd[1]: Started Session 5 of User root.
░░ Subject: A start job for unit session-5.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-5.scope has finished successfully.
░░
░░ The job identifier is 900.
Oct 25 08:54:44 managed-node1 sshd-session[7355]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Oct 25 08:54:44 managed-node1 sshd-session[7358]: Received disconnect from 10.31.40.188 port 54418:11: disconnected by user
Oct 25 08:54:44 managed-node1 sshd-session[7358]: Disconnected from user root 10.31.40.188 port 54418
Oct 25 08:54:44 managed-node1 sshd-session[7355]: pam_unix(sshd:session): session closed for user root
Oct 25 08:54:44 managed-node1 systemd[1]: session-5.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-5.scope has successfully entered the 'dead' state.
Oct 25 08:54:44 managed-node1 systemd-logind[607]: Session 5 logged out. Waiting for processes to exit.
Oct 25 08:54:44 managed-node1 systemd-logind[607]: Removed session 5.
░░ Subject: Session 5 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 5 has been terminated.
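[Editor's note] The first failure recorded above is environmental, not a role bug: the dnf install of `grubby` aborted because metadata for the 'highavailability' repository could not be downloaded from any mirror. One hedged workaround, assuming that repository is not actually needed by this test and that `dnf-plugins-core` (which provides `dnf config-manager`) is installed on the managed node, is a pre_task that disables the unreachable repo before the role runs:

```yaml
# Sketch only: disable the unreachable repo on the managed node before the
# bootloader role tries to install packages. The repo id 'highavailability'
# is taken from the error message; whether it is safe to disable is an
# assumption about this particular CI environment.
- name: Work around broken highavailability repo metadata
  hosts: managed-node1
  pre_tasks:
    - name: Disable the unreachable repo
      ansible.builtin.command: dnf config-manager --set-disabled highavailability
      changed_when: true
```

Disabling the repo (rather than fixing the mirror list) only masks the infrastructure problem, but it lets the bootloader test proceed to the code it actually exercises.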
Oct 25 08:54:44 managed-node1 sshd-session[7382]: Accepted publickey for root from 10.31.40.188 port 54434 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Oct 25 08:54:44 managed-node1 systemd-logind[607]: New session 6 of user root.
░░ Subject: A new session 6 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 6 has been created for the user root.
░░
░░ The leading process of the session is 7382.
Oct 25 08:54:44 managed-node1 systemd[1]: Started Session 6 of User root.
░░ Subject: A start job for unit session-6.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-6.scope has finished successfully.
░░
░░ The job identifier is 969.
Oct 25 08:54:44 managed-node1 sshd-session[7382]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Oct 25 08:54:44 managed-node1 sshd-session[7385]: Received disconnect from 10.31.40.188 port 54434:11: disconnected by user
Oct 25 08:54:44 managed-node1 sshd-session[7385]: Disconnected from user root 10.31.40.188 port 54434
Oct 25 08:54:44 managed-node1 sshd-session[7382]: pam_unix(sshd:session): session closed for user root
Oct 25 08:54:44 managed-node1 systemd[1]: session-6.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-6.scope has successfully entered the 'dead' state.
Oct 25 08:54:44 managed-node1 systemd-logind[607]: Session 6 logged out. Waiting for processes to exit.
Oct 25 08:54:44 managed-node1 systemd-logind[607]: Removed session 6.
░░ Subject: Session 6 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 6 has been terminated.
Oct 25 08:54:44 managed-node1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit systemd-hostnamed.service has successfully entered the 'dead' state.
Oct 25 08:54:45 managed-node1 sshd-session[7413]: Accepted publickey for root from 10.31.40.188 port 54438 ssh2: ECDSA SHA256:kE+NqJX1lM/dfe/TXW4AD4h8XI149UO4bJZABYPqYUA
Oct 25 08:54:45 managed-node1 systemd-logind[607]: New session 7 of user root.
░░ Subject: A new session 7 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 7 has been created for the user root.
░░
░░ The leading process of the session is 7413.
Oct 25 08:54:45 managed-node1 systemd[1]: Started Session 7 of User root.
░░ Subject: A start job for unit session-7.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-7.scope has finished successfully.
░░
░░ The job identifier is 1038.
Oct 25 08:54:45 managed-node1 sshd-session[7413]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Oct 25 08:54:47 managed-node1 python3.9[7590]: ansible-setup Invoked with gather_subset=['architecture'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 25 08:54:47 managed-node1 python3.9[7743]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 25 08:54:48 managed-node1 python3.9[7892]: ansible-ansible.legacy.dnf Invoked with name=['grubby'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 25 08:55:00 managed-node1 sshd-session[7974]: Accepted publickey for root from 10.31.40.188 port 36954 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Oct 25 08:55:00 managed-node1 systemd-logind[607]: New session 8 of user root.
░░ Subject: A new session 8 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 8 has been created for the user root.
░░
░░ The leading process of the session is 7974.
Oct 25 08:55:00 managed-node1 systemd[1]: Started Session 8 of User root.
░░ Subject: A start job for unit session-8.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-8.scope has finished successfully.
░░
░░ The job identifier is 1108.
Oct 25 08:55:00 managed-node1 sshd-session[7974]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Oct 25 08:55:01 managed-node1 sshd-session[7977]: Received disconnect from 10.31.40.188 port 36954:11: disconnected by user
Oct 25 08:55:01 managed-node1 sshd-session[7977]: Disconnected from user root 10.31.40.188 port 36954
Oct 25 08:55:01 managed-node1 sshd-session[7974]: pam_unix(sshd:session): session closed for user root
Oct 25 08:55:01 managed-node1 systemd[1]: session-8.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-8.scope has successfully entered the 'dead' state.
Oct 25 08:55:01 managed-node1 systemd-logind[607]: Session 8 logged out. Waiting for processes to exit.
Oct 25 08:55:01 managed-node1 systemd-logind[607]: Removed session 8.
░░ Subject: Session 8 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 8 has been terminated.
Oct 25 08:55:01 managed-node1 sshd-session[8002]: Accepted publickey for root from 10.31.40.188 port 36968 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Oct 25 08:55:01 managed-node1 systemd-logind[607]: New session 9 of user root.
░░ Subject: A new session 9 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 9 has been created for the user root.
░░
░░ The leading process of the session is 8002.
Oct 25 08:55:01 managed-node1 systemd[1]: Started Session 9 of User root.
░░ Subject: A start job for unit session-9.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-9.scope has finished successfully.
░░
░░ The job identifier is 1177.
Oct 25 08:55:01 managed-node1 sshd-session[8002]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
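[Editor's note] The second failure in the run above is a cleanup-ordering bug rather than an environment issue: the `always:` block of tests_add_rm.yml references `__default_kernel`, a fact that is only set later in the test body, so when the role fails early the cleanup task itself errors out with "'__default_kernel' is undefined". A minimal sketch of a guard, assuming the cleanup should simply be skipped when setup never ran (the `grubby` command shown is hypothetical; only the `when:` condition is the point):

```yaml
# Sketch for tests_add_rm.yml's always: block. Guard cleanup tasks so a
# failure before the fact is set does not cascade into a second failure.
always:
  - name: Remove cloned kernels
    # Hypothetical body standing in for the real task at line 138.
    ansible.builtin.command: grubby --remove-kernel={{ __default_kernel.kernel }}.clone
    when: __default_kernel is defined   # setup may have failed before setting this
```

An equivalent alternative is `"{{ __default_kernel | default(omit) }}"` inside the task parameters, but an explicit `when:` guard keeps the skip visible in the log.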