ansible-playbook [core 2.17.14]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /tmp/collections-L5y
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.12.11 (main, Aug 14 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-11)] (/usr/bin/python3.12)
  jinja version = 3.1.6
  libyaml = True
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_default.yml ****************************************************
1 plays in /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml

PLAY [Ensure mandatory variables are defined] **********************************

TASK [Set up test environment] *************************************************
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml:10
Saturday 25 October 2025 02:36:04 -0400 (0:00:00.028) 0:00:00.028 ******
included: fedora.linux_system_roles.ha_cluster for managed-node3

TASK [fedora.linux_system_roles.ha_cluster : Set node name to 'localhost' for single-node clusters] ***
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:9
Saturday 25 October 2025 02:36:04 -0400 (0:00:00.032) 0:00:00.061 ******
ok: [managed-node3] => { "ansible_facts": { "inventory_hostname": "localhost" }, "changed": false }

TASK [fedora.linux_system_roles.ha_cluster : Ensure facts used by tests] *******
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:14
Saturday 25 October 2025 02:36:04 -0400 (0:00:00.062) 0:00:00.123 ******
[WARNING]: Platform linux on host localhost is using the discovered Python interpreter at /usr/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html for more information.
ok: [managed-node3]

TASK [fedora.linux_system_roles.ha_cluster : Check if system is ostree] ********
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:22
Saturday 25 October 2025 02:36:06 -0400 (0:00:01.222) 0:00:01.345 ******
ok: [managed-node3] => { "changed": false, "stat": { "exists": false } }

TASK [fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree] ***
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:27
Saturday 25 October 2025 02:36:07 -0400 (0:00:00.876) 0:00:02.222 ******
ok: [managed-node3] => { "ansible_facts": { "__ha_cluster_is_ostree": false }, "changed": false }

TASK [fedora.linux_system_roles.ha_cluster : Do not try to enable RHEL repositories] ***
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:32
Saturday 25 October 2025 02:36:07 -0400 (0:00:00.158) 0:00:02.380 ******
skipping: [managed-node3] => { "changed": false, "false_condition": "ansible_distribution == 'RedHat'", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Copy nss-altfiles ha_cluster users to /etc/passwd] ***
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:41
Saturday 25 October 2025 02:36:07 -0400 (0:00:00.173) 0:00:02.554 ******
skipping: [managed-node3] => { "changed": false, "false_condition": "__ha_cluster_is_ostree | d(false)", "skip_reason": "Conditional result was False" }

TASK [Run the role] ************************************************************
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml:15
Saturday 25 October 2025 02:36:07 -0400 (0:00:00.078) 0:00:02.633 ******
included: fedora.linux_system_roles.ha_cluster for managed-node3

TASK [fedora.linux_system_roles.ha_cluster : Set platform/version specific variables] ***
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:3
Saturday 25 October 2025 02:36:07 -0400 (0:00:00.110) 0:00:02.743 ******
included: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml for managed-node3

TASK [fedora.linux_system_roles.ha_cluster : Ensure ansible_facts used by role] ***
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:2
Saturday 25 October 2025 02:36:07 -0400 (0:00:00.061) 0:00:02.804 ******
skipping: [managed-node3] => { "changed": false, "false_condition": "__ha_cluster_required_facts | difference(ansible_facts.keys() | list) | length > 0", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Check if system is ostree] ********
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:10
Saturday 25 October 2025 02:36:07 -0400 (0:00:00.102) 0:00:02.907 ******
skipping: [managed-node3] => { "changed": false, "false_condition": "not __ha_cluster_is_ostree is defined", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree] ***
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:15
Saturday 25 October 2025 02:36:07 -0400 (0:00:00.060) 0:00:02.967 ******
skipping: [managed-node3] => { "changed": false, "false_condition": "not __ha_cluster_is_ostree is defined", "skip_reason": "Conditional result was False" }
[managed-node3] => { "changed": false, "false_condition": "not __ha_cluster_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.ha_cluster : Set platform/version specific variables] *** task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:19 Saturday 25 October 2025 02:36:07 -0400 (0:00:00.073) 0:00:03.040 ****** ok: [managed-node3] => (item=RedHat.yml) => { "ansible_facts": { "__ha_cluster_cloud_agents_packages": {}, "__ha_cluster_fence_agent_packages_default": "{{ ['fence-agents-all'] + (['fence-virt'] if ansible_architecture == 'x86_64' else []) }}", "__ha_cluster_fullstack_node_packages": [ "corosync", "libknet1-plugins-all", "resource-agents", "pacemaker" ], "__ha_cluster_qdevice_node_packages": [ "corosync-qdevice", "bash", "coreutils", "curl", "grep", "nss-tools", "openssl", "sed" ], "__ha_cluster_repos": [], "__ha_cluster_role_essential_packages": [ "pcs", "corosync-qnetd", "openssl" ], "__ha_cluster_sbd_packages": [ "sbd" ], "__ha_cluster_services": [ "corosync", "corosync-qdevice", "pacemaker" ] }, "ansible_included_var_files": [ "/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/RedHat.yml" ], "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml" } skipping: [managed-node3] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__vars_file is file", "item": "CentOS.yml", "skip_reason": "Conditional result was False" } ok: [managed-node3] => (item=CentOS_9.yml) => { "ansible_facts": { "__ha_cluster_cloud_agents_packages": { "aarch64": [ "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt" ], "noarch": [ "fence-agents-ibm-powervs", "fence-agents-ibm-vpc" ], "ppc64le": [ "fence-agents-compute", "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt", "fence-agents-openstack" ], "s390x": [ "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt" ], "x86_64": [ "resource-agents-cloud", "fence-agents-aliyun", "fence-agents-aws", "fence-agents-azure-arm", "fence-agents-compute", "fence-agents-gce", "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt", "fence-agents-openstack" ] }, "__ha_cluster_repos": [ { "id": "highavailability", "name": "HighAvailability" }, { "id": "resilientstorage", "name": "ResilientStorage" } ] }, "ansible_included_var_files": [ "/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" } ok: [managed-node3] => (item=CentOS_9.yml) => { "ansible_facts": { "__ha_cluster_cloud_agents_packages": { "aarch64": [ "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt" ], "noarch": [ "fence-agents-ibm-powervs", "fence-agents-ibm-vpc" ], "ppc64le": [ "fence-agents-compute", "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt", "fence-agents-openstack" ], "s390x": [ "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt" ], "x86_64": [ "resource-agents-cloud", "fence-agents-aliyun", "fence-agents-aws", "fence-agents-azure-arm", "fence-agents-compute", "fence-agents-gce", "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt", "fence-agents-openstack" ] }, "__ha_cluster_repos": [ { "id": "highavailability", "name": "HighAvailability" }, { "id": "resilientstorage", "name": "ResilientStorage" } ] 
}, "ansible_included_var_files": [ "/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" } ok: [managed-node3] => (item=CentOS_9.yml) => { "ansible_facts": { "__ha_cluster_cloud_agents_packages": { "aarch64": [ "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt" ], "noarch": [ "fence-agents-ibm-powervs", "fence-agents-ibm-vpc" ], "ppc64le": [ "fence-agents-compute", "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt", "fence-agents-openstack" ], "s390x": [ "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt" ], "x86_64": [ "resource-agents-cloud", "fence-agents-aliyun", "fence-agents-aws", "fence-agents-azure-arm", "fence-agents-compute", "fence-agents-gce", "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt", "fence-agents-openstack" ] }, "__ha_cluster_repos": [ { "id": "highavailability", "name": "HighAvailability" }, { "id": "resilientstorage", "name": "ResilientStorage" } ] }, "ansible_included_var_files": [ "/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" } ok: [managed-node3] => (item=CentOS_9.yml) => { "ansible_facts": { "__ha_cluster_cloud_agents_packages": { "aarch64": [ "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt" ], "noarch": [ "fence-agents-ibm-powervs", "fence-agents-ibm-vpc" ], "ppc64le": [ "fence-agents-compute", "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt", "fence-agents-openstack" ], "s390x": [ "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt" ], "x86_64": [ "resource-agents-cloud", "fence-agents-aliyun", "fence-agents-aws", "fence-agents-azure-arm", "fence-agents-compute", "fence-agents-gce", "fence-agents-ibm-powervs", "fence-agents-ibm-vpc", "fence-agents-kubevirt", "fence-agents-openstack" ] }, "__ha_cluster_repos": [ { "id": "highavailability", "name": "HighAvailability" }, { "id": "resilientstorage", "name": "ResilientStorage" } ] }, "ansible_included_var_files": [ "/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/CentOS_9.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_9.yml" } TASK [fedora.linux_system_roles.ha_cluster : Set Linux Pacemaker shell specific variables] *** task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:42 Saturday 25 October 2025 02:36:08 -0400 (0:00:00.313) 0:00:03.354 ****** ok: [managed-node3] => { "ansible_facts": {}, "ansible_included_var_files": [ "/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/shell_pcs.yml" ], "changed": false } TASK [fedora.linux_system_roles.ha_cluster : Enable package repositories] ****** task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:6 Saturday 25 October 2025 02:36:08 -0400 (0:00:00.056) 0:00:03.411 ****** included: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml for managed-node3 TASK [fedora.linux_system_roles.ha_cluster : Find platform/version specific tasks to enable repositories] *** task path: 
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml:3
Saturday 25 October 2025 02:36:08 -0400 (0:00:00.073) 0:00:03.484 ******
ok: [managed-node3] => (item=RedHat.yml) => { "ansible_facts": { "__ha_cluster_enable_repo_tasks_file": "/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/RedHat.yml" }, "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml" }
ok: [managed-node3] => (item=CentOS.yml) => { "ansible_facts": { "__ha_cluster_enable_repo_tasks_file": "/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml" }, "ansible_loop_var": "item", "changed": false, "item": "CentOS.yml" }
skipping: [managed-node3] => (item=CentOS_9.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__ha_cluster_enable_repo_tasks_file_candidate is file", "item": "CentOS_9.yml", "skip_reason": "Conditional result was False" }
skipping: [managed-node3] => (item=CentOS_9.yml) => { "ansible_loop_var": "item", "changed": false, "false_condition": "__ha_cluster_enable_repo_tasks_file_candidate is file", "item": "CentOS_9.yml", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.ha_cluster : Run platform/version specific tasks to enable repositories] ***
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml:21
Saturday 25 October 2025 02:36:08 -0400 (0:00:00.131) 0:00:03.616 ******
included: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml for managed-node3

TASK [fedora.linux_system_roles.ha_cluster : List active CentOS repositories] ***
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml:3
Saturday 25 October 2025 02:36:08 -0400 (0:00:00.175) 0:00:03.791 ******
ok: [managed-node3] => { "changed": false, "cmd": [ "dnf", "repolist" ], "delta": "0:00:00.190540", "end": "2025-10-25 02:36:09.399443", "rc": 0, "start": "2025-10-25 02:36:09.208903" }

STDOUT:

repo id                                  repo name
appstream                                CentOS Stream 9 - AppStream
baseos                                   CentOS Stream 9 - BaseOS
beaker-client                            Beaker Client - RedHatEnterpriseLinux9
beaker-harness                           Beaker harness
beakerlib-libraries                      Copr repo for beakerlib-libraries owned by bgoncalv
copr:copr.devel.redhat.com:lpol:qa-tools Copr repo for qa-tools owned by lpol
extras-common                            CentOS Stream 9 - Extras packages
highavailability                         CentOS Stream 9 - HighAvailability

TASK [fedora.linux_system_roles.ha_cluster : Enable CentOS repositories] *******
task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml:10
Saturday 25 October 2025 02:36:09 -0400 (0:00:00.862) 0:00:04.654 ******
skipping: [managed-node3] => (item={'id': 'highavailability', 'name': 'HighAvailability'}) => { "ansible_loop_var": "item", "changed": false, "false_condition": "item.id not in __ha_cluster_repolist.stdout", "item": { "id": "highavailability", "name": "HighAvailability" }, "skip_reason": "Conditional result was False" }
skipping: [managed-node3] => (item={'id': 'resilientstorage', 'name': 'ResilientStorage'}) => { "ansible_loop_var": "item", "changed": false, "false_condition": "item.name != \"ResilientStorage\" or ha_cluster_enable_repos_resilient_storage", "item": { "id": "resilientstorage", "name": "ResilientStorage" }, "skip_reason": "Conditional result was False" }
"name": "ResilientStorage" }, "skip_reason": "Conditional result was False" } skipping: [managed-node3] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.ha_cluster : Install role essential packages] *** task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:11 Saturday 25 October 2025 02:36:09 -0400 (0:00:00.061) 0:00:04.715 ****** fatal: [managed-node3]: FAILED! => { "changed": false, "rc": 1, "results": [] } MSG: Failed to download metadata for repo 'highavailability': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried TASK [Extract errors] ********************************************************** task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml:19 Saturday 25 October 2025 02:36:13 -0400 (0:00:04.418) 0:00:09.134 ****** ok: [managed-node3] => { "ansible_facts": { "error_list": [] }, "changed": false } TASK [Check errors] ************************************************************ task path: /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml:24 Saturday 25 October 2025 02:36:14 -0400 (0:00:00.052) 0:00:09.186 ****** fatal: [managed-node3]: FAILED! => { "assertion": "'ha_cluster_hacluster_password must be specified' in error_list", "changed": false, "evaluated_to": false } MSG: Assertion failed PLAY RECAP ********************************************************************* managed-node3 : ok=14 changed=0 unreachable=0 failed=1 skipped=6 rescued=1 ignored=0 SYSTEM ROLES ERRORS BEGIN v1 [ { "ansible_version": "2.17.14", "end_time": "2025-10-25T06:36:13.953601+00:00Z", "host": "managed-node3", "message": "Failed to download metadata for repo 'highavailability': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "rc": 1, "start_time": "2025-10-25T06:36:09.539767+00:00Z", "task_name": "Install role essential packages", "task_path": "/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:11" }, { "ansible_version": "2.17.14", "end_time": "2025-10-25T06:36:14.032099+00:00Z", "host": "managed-node3", "message": "Assertion failed", "start_time": "2025-10-25T06:36:14.013376+00:00Z", "task_name": "Check errors", "task_path": "/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml:24" } ] SYSTEM ROLES ERRORS END v1 TASKS RECAP ******************************************************************** Saturday 25 October 2025 02:36:14 -0400 (0:00:00.023) 0:00:09.210 ****** =============================================================================== fedora.linux_system_roles.ha_cluster : Install role essential packages --- 4.42s /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:11 fedora.linux_system_roles.ha_cluster : Ensure facts used by tests ------- 1.22s /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:14 fedora.linux_system_roles.ha_cluster : Check if system is ostree -------- 0.88s /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:22 fedora.linux_system_roles.ha_cluster : List active CentOS repositories --- 0.86s /tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml:3 fedora.linux_system_roles.ha_cluster : Set 
fedora.linux_system_roles.ha_cluster : Set platform/version specific variables --- 0.31s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:19
fedora.linux_system_roles.ha_cluster : Run platform/version specific tasks to enable repositories --- 0.18s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml:21
fedora.linux_system_roles.ha_cluster : Do not try to enable RHEL repositories --- 0.17s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:32
fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree --- 0.16s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:27
fedora.linux_system_roles.ha_cluster : Find platform/version specific tasks to enable repositories --- 0.13s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml:3
Run the role ------------------------------------------------------------ 0.11s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml:15
fedora.linux_system_roles.ha_cluster : Ensure ansible_facts used by role --- 0.10s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:2
fedora.linux_system_roles.ha_cluster : Copy nss-altfiles ha_cluster users to /etc/passwd --- 0.08s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:41
fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree --- 0.07s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:15
fedora.linux_system_roles.ha_cluster : Enable package repositories ------ 0.07s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:6
fedora.linux_system_roles.ha_cluster : Set node name to 'localhost' for single-node clusters --- 0.06s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/test_setup.yml:9
fedora.linux_system_roles.ha_cluster : Enable CentOS repositories ------- 0.06s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml:10
fedora.linux_system_roles.ha_cluster : Set platform/version specific variables --- 0.06s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/main.yml:3
fedora.linux_system_roles.ha_cluster : Check if system is ostree -------- 0.06s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:10
fedora.linux_system_roles.ha_cluster : Set Linux Pacemaker shell specific variables --- 0.06s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml:42
Extract errors ---------------------------------------------------------- 0.05s
/tmp/collections-L5y/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tests_default.yml:19

Oct 25 02:36:03 managed-node3 sshd-session[10548]: Accepted publickey for root from 10.31.45.39 port 52380 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Oct 25 02:36:03 managed-node3 systemd-logind[609]: New session 14 of user root.
░░ Subject: A new session 14 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 14 has been created for the user root.
░░
░░ The leading process of the session is 10548.
Oct 25 02:36:03 managed-node3 systemd[1]: Started Session 14 of User root.
░░ Subject: A start job for unit session-14.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-14.scope has finished successfully.
░░
░░ The job identifier is 1522.
Oct 25 02:36:03 managed-node3 sshd-session[10548]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Oct 25 02:36:03 managed-node3 sshd-session[10551]: Received disconnect from 10.31.45.39 port 52380:11: disconnected by user
Oct 25 02:36:03 managed-node3 sshd-session[10551]: Disconnected from user root 10.31.45.39 port 52380
Oct 25 02:36:03 managed-node3 sshd-session[10548]: pam_unix(sshd:session): session closed for user root
Oct 25 02:36:03 managed-node3 systemd-logind[609]: Session 14 logged out. Waiting for processes to exit.
Oct 25 02:36:03 managed-node3 systemd[1]: session-14.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-14.scope has successfully entered the 'dead' state.
Oct 25 02:36:03 managed-node3 systemd-logind[609]: Removed session 14.
░░ Subject: Session 14 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 14 has been terminated.
Oct 25 02:36:03 managed-node3 sshd-session[10576]: Accepted publickey for root from 10.31.45.39 port 60664 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Oct 25 02:36:03 managed-node3 systemd-logind[609]: New session 15 of user root.
░░ Subject: A new session 15 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 15 has been created for the user root.
░░
░░ The leading process of the session is 10576.
Oct 25 02:36:03 managed-node3 systemd[1]: Started Session 15 of User root.
░░ Subject: A start job for unit session-15.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-15.scope has finished successfully.
░░
░░ The job identifier is 1591.
Oct 25 02:36:03 managed-node3 sshd-session[10576]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Oct 25 02:36:03 managed-node3 sshd-session[10579]: Received disconnect from 10.31.45.39 port 60664:11: disconnected by user
Oct 25 02:36:03 managed-node3 sshd-session[10579]: Disconnected from user root 10.31.45.39 port 60664
Oct 25 02:36:03 managed-node3 sshd-session[10576]: pam_unix(sshd:session): session closed for user root
Oct 25 02:36:03 managed-node3 systemd-logind[609]: Session 15 logged out. Waiting for processes to exit.
Oct 25 02:36:03 managed-node3 systemd[1]: session-15.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-15.scope has successfully entered the 'dead' state.
Oct 25 02:36:03 managed-node3 systemd-logind[609]: Removed session 15.
░░ Subject: Session 15 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 15 has been terminated.
Oct 25 02:36:06 managed-node3 python3.9[10777]: ansible-setup Invoked with gather_subset=['min'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 25 02:36:06 managed-node3 python3.9[10930]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 25 02:36:09 managed-node3 python3.9[11079]: ansible-ansible.legacy.command Invoked with _raw_params=dnf repolist _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 25 02:36:10 managed-node3 python3.9[11229]: ansible-ansible.legacy.dnf Invoked with name=['pcs', 'corosync-qnetd', 'openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 25 02:36:14 managed-node3 sshd-session[11288]: Accepted publickey for root from 10.31.45.39 port 53170 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Oct 25 02:36:14 managed-node3 systemd-logind[609]: New session 16 of user root.
░░ Subject: A new session 16 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 16 has been created for the user root.
░░
░░ The leading process of the session is 11288.
Oct 25 02:36:14 managed-node3 systemd[1]: Started Session 16 of User root.
░░ Subject: A start job for unit session-16.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-16.scope has finished successfully.
░░
░░ The job identifier is 1660.
Oct 25 02:36:14 managed-node3 sshd-session[11288]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Oct 25 02:36:14 managed-node3 sshd-session[11291]: Received disconnect from 10.31.45.39 port 53170:11: disconnected by user
Oct 25 02:36:14 managed-node3 sshd-session[11291]: Disconnected from user root 10.31.45.39 port 53170
Oct 25 02:36:14 managed-node3 sshd-session[11288]: pam_unix(sshd:session): session closed for user root
Oct 25 02:36:14 managed-node3 systemd[1]: session-16.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-16.scope has successfully entered the 'dead' state.
Oct 25 02:36:14 managed-node3 systemd-logind[609]: Session 16 logged out. Waiting for processes to exit.
Oct 25 02:36:14 managed-node3 systemd-logind[609]: Removed session 16.
░░ Subject: Session 16 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 16 has been terminated.
Oct 25 02:36:14 managed-node3 sshd-session[11316]: Accepted publickey for root from 10.31.45.39 port 53178 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Oct 25 02:36:14 managed-node3 systemd-logind[609]: New session 17 of user root.
░░ Subject: A new session 17 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 17 has been created for the user root.
░░
░░ The leading process of the session is 11316.
Oct 25 02:36:14 managed-node3 systemd[1]: Started Session 17 of User root.
░░ Subject: A start job for unit session-17.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-17.scope has finished successfully.
░░
░░ The job identifier is 1729.
Oct 25 02:36:14 managed-node3 sshd-session[11316]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)