ansible-playbook [core 2.17.14]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /tmp/collections-L5y
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.12.11 (main, Aug 14 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-11)] (/usr/bin/python3.12)
  jinja version = 3.1.6
  libyaml = True
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_qnetd.yml ******************************************************
2 plays in /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_qnetd.yml

PLAY [all] *********************************************************************

TASK [Include vault variables] *************************************************
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_qnetd.yml:5
Saturday 25 October 2025  02:36:26 -0400 (0:00:00.019)       0:00:00.019 ******
ok: [managed-node6] => {
    "ansible_facts": {
        "ha_cluster_hacluster_password": {
            "__ansible_vault": "$ANSIBLE_VAULT;1.1;AES256\n31303833633366333561656439323930303361333161363239346166656537323933313436\n3432386236656563343237306335323637396239616230353561330a313731623238393238\n62343064666336643930663239383936616465643134646536656532323461356237646133\n3761616633323839633232353637366266350a313163633236376666653238633435306565\n3264623032333736393535663833\n"
        }
    },
    "ansible_included_var_files": [
        "/tmp/ha_cluster-6jA/tests/vars/vault-variables.yml"
    ],
    "changed": false
}

PLAY [Test qnetd setup] ********************************************************

TASK [Gathering Facts] *********************************************************
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_qnetd.yml:9
Saturday 25 October 2025  02:36:26 -0400 (0:00:00.028)       0:00:00.048 ******
[WARNING]: Platform linux on host managed-node6 is using the discovered Python
interpreter at /usr/bin/python3.9, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html
for more information.
ok: [managed-node6]

TASK [Set up test environment] *************************************************
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_qnetd.yml:22
Saturday 25 October 2025  02:36:27 -0400 (0:00:01.187)       0:00:01.235 ******
included: fedora.linux_system_roles.ha_cluster for managed-node6

TASK [fedora.linux_system_roles.ha_cluster : Set node name to 'localhost' for single-node clusters] ***
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:9
Saturday 25 October 2025  02:36:27 -0400 (0:00:00.054)       0:00:01.290 ******
ok: [managed-node6] => {
    "ansible_facts": {
        "inventory_hostname": "localhost"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.ha_cluster : Ensure facts used by tests] *******
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:14
Saturday 25 October 2025  02:36:27 -0400 (0:00:00.046)       0:00:01.336 ******
skipping: [managed-node6] => {
    "changed": false,
    "false_condition": "'distribution' not in ansible_facts",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.ha_cluster : Check if system is ostree] ********
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:22
Saturday 25 October 2025  02:36:27 -0400 (0:00:00.014)       0:00:01.351 ******
ok: [managed-node6] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}

TASK [fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree] ***
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:27
Saturday 25 October 2025  02:36:28 -0400 (0:00:00.551)       0:00:01.903 ******
ok: [managed-node6] => {
    "ansible_facts": {
        "__ha_cluster_is_ostree": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.ha_cluster : Do not try to enable RHEL repositories] ***
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:32
Saturday 25 October 2025  02:36:28 -0400 (0:00:00.039)       0:00:01.942 ******
skipping: [managed-node6] => {
    "changed": false,
    "false_condition": "ansible_distribution == 'RedHat'",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.ha_cluster : Copy nss-altfiles ha_cluster users to /etc/passwd] ***
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:41
Saturday 25 October 2025  02:36:28 -0400 (0:00:00.024)       0:00:01.967 ******
skipping: [managed-node6] => {
    "changed": false,
    "false_condition": "__ha_cluster_is_ostree | d(false)",
    "skip_reason": "Conditional result was False"
}

TASK [Clean up test environment for qnetd] *************************************
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_qnetd.yml:27
Saturday 25 October 2025  02:36:28 -0400 (0:00:00.065)       0:00:02.032 ******
included: fedora.linux_system_roles.ha_cluster for managed-node6

TASK [fedora.linux_system_roles.ha_cluster : Make sure qnetd is not installed] ***
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_cleanup_qnetd.yml:9
Saturday 25 October 2025  02:36:28 -0400 (0:00:00.057)       0:00:02.090 ******
fatal: [managed-node6]: FAILED! => {
    "changed": false,
    "rc": 1,
    "results": []
}

MSG:

Failed to download metadata for repo 'highavailability': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried

PLAY RECAP *********************************************************************
managed-node6              : ok=7    changed=0    unreachable=0    failed=1    skipped=3    rescued=0    ignored=0

SYSTEM ROLES ERRORS BEGIN v1
[
    {
        "ansible_version": "2.17.14",
        "end_time": "2025-10-25T06:36:32.648989+00:00Z",
        "host": "managed-node6",
        "message": "Failed to download metadata for repo 'highavailability': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried",
        "rc": 1,
        "start_time": "2025-10-25T06:36:28.243907+00:00Z",
        "task_name": "Make sure qnetd is not installed",
        "task_path": "/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_cleanup_qnetd.yml:9"
    }
]
SYSTEM ROLES ERRORS END v1

TASKS RECAP ********************************************************************
Saturday 25 October 2025  02:36:32 -0400 (0:00:04.408)       0:00:06.498 ******
===============================================================================
fedora.linux_system_roles.ha_cluster : Make sure qnetd is not installed --- 4.41s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_cleanup_qnetd.yml:9
Gathering Facts --------------------------------------------------------- 1.19s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_qnetd.yml:9
fedora.linux_system_roles.ha_cluster : Check if system is ostree -------- 0.55s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:22
fedora.linux_system_roles.ha_cluster : Copy nss-altfiles ha_cluster users to /etc/passwd --- 0.07s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:41
Clean up test environment for qnetd ------------------------------------- 0.06s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_qnetd.yml:27
Set up test environment ------------------------------------------------- 0.05s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_qnetd.yml:22
fedora.linux_system_roles.ha_cluster : Set node name to 'localhost' for single-node clusters --- 0.05s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:9
fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree --- 0.04s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:27
Include vault variables ------------------------------------------------- 0.03s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_qnetd.yml:5
fedora.linux_system_roles.ha_cluster : Do not try to enable RHEL repositories --- 0.02s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:32
fedora.linux_system_roles.ha_cluster : Ensure facts used by tests ------- 0.01s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:14

Oct 25 02:36:27 managed-node6 python3.9[10462]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 25 02:36:27 managed-node6 python3.9[10637]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 25 02:36:28 managed-node6 python3.9[10786]: ansible-ansible.legacy.dnf Invoked with name=['corosync-qnetd'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 25 02:36:32 managed-node6 sshd-session[10845]: Accepted publickey for root from 10.31.45.39 port 60358 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Oct 25 02:36:32 managed-node6 systemd-logind[608]: New session 14 of user root.
░░ Subject: A new session 14 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 14 has been created for the user root.
░░
░░ The leading process of the session is 10845.
Oct 25 02:36:32 managed-node6 systemd[1]: Started Session 14 of User root.
░░ Subject: A start job for unit session-14.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-14.scope has finished successfully.
░░
░░ The job identifier is 1522.
Oct 25 02:36:32 managed-node6 sshd-session[10845]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Oct 25 02:36:33 managed-node6 sshd-session[10848]: Received disconnect from 10.31.45.39 port 60358:11: disconnected by user
Oct 25 02:36:33 managed-node6 sshd-session[10848]: Disconnected from user root 10.31.45.39 port 60358
Oct 25 02:36:33 managed-node6 sshd-session[10845]: pam_unix(sshd:session): session closed for user root
Oct 25 02:36:33 managed-node6 systemd[1]: session-14.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-14.scope has successfully entered the 'dead' state.
Oct 25 02:36:33 managed-node6 systemd-logind[608]: Session 14 logged out. Waiting for processes to exit.
Oct 25 02:36:33 managed-node6 systemd-logind[608]: Removed session 14.
░░ Subject: Session 14 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 14 has been terminated.
Oct 25 02:36:33 managed-node6 sshd-session[10873]: Accepted publickey for root from 10.31.45.39 port 60372 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Oct 25 02:36:33 managed-node6 systemd-logind[608]: New session 15 of user root.
░░ Subject: A new session 15 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 15 has been created for the user root.
░░
░░ The leading process of the session is 10873.
Oct 25 02:36:33 managed-node6 systemd[1]: Started Session 15 of User root.
░░ Subject: A start job for unit session-15.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-15.scope has finished successfully.
░░
░░ The job identifier is 1591.
Oct 25 02:36:33 managed-node6 sshd-session[10873]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
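Note: the `SYSTEM ROLES ERRORS BEGIN v1` / `SYSTEM ROLES ERRORS END v1` markers in the output above delimit a plain JSON array, so the failure can be extracted programmatically rather than read by eye. A minimal sketch, assuming the marker-delimited payload has already been cut out of the log (the `errors_json` string below is copied from the fields shown in this run; the variable names are illustrative, not part of the role):

```python
import json

# JSON payload as it appears between the BEGIN/END v1 markers above,
# abridged to the fields this run actually emitted.
errors_json = """
[
  {
    "ansible_version": "2.17.14",
    "host": "managed-node6",
    "message": "Failed to download metadata for repo 'highavailability': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried",
    "rc": 1,
    "task_name": "Make sure qnetd is not installed",
    "task_path": "/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_cleanup_qnetd.yml:9"
  }
]
"""

errors = json.loads(errors_json)
for err in errors:
    # Each entry records which host and task failed and the underlying error.
    print(f"{err['host']}: {err['task_name']} -> {err['message']}")
```

Run against this log, that loop points straight at the dnf metadata failure on `managed-node6` without scrolling through the journal output.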