[WARNING]: Collection infra.leapp does not support Ansible version 2.14.18
[WARNING]: running playbook inside collection infra.leapp
ansible-playbook [core 2.14.18]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible-playbook
  python version = 3.9.23 (main, Aug 19 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-11)] (/usr/bin/python3)
  jinja version = 3.1.2
  libyaml = True
Using /etc/ansible/ansible.cfg as config file
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_default.yml ****************************************************
1 plays in /root/.ansible/collections/ansible_collections/infra/leapp/roles/upgrade/tests/tests_default.yml

PLAY [Test] ********************************************************************

TASK [Gathering Facts] *********************************************************
task path: /root/.ansible/collections/ansible_collections/infra/leapp/roles/upgrade/tests/tests_default.yml:2
ok: [managed-node01]

TASK [Initialize lock, logging, and common vars] *******************************
task path: /root/.ansible/collections/ansible_collections/infra/leapp/roles/upgrade/tasks/main.yml:3

TASK [infra.leapp.common : Log directory exists] *******************************
task path: /root/.ansible/collections/ansible_collections/infra/leapp/roles/common/tasks/main.yml:3
ok: [managed-node01] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/ripu", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 22, "state": "directory", "uid": 0}

TASK [infra.leapp.common : Check for existing log file] ************************
task path: /root/.ansible/collections/ansible_collections/infra/leapp/roles/common/tasks/main.yml:11
ok: [managed-node01] => {"changed": false, "stat": {"atime": 1762892776.0551593, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 24, "charset": "us-ascii", "checksum": "d2defd0b1bc8f9ec3caae38c6e88e19c363f3f35", "ctime": 1762892745.3181553, "dev": 51715, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 427819160, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1762892745.3181553, "nlink": 1, "path": "/var/log/ripu/ripu.log", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 9298, "uid": 0, "version": "333338064", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}

TASK [infra.leapp.common : Fail if log file already exists] ********************
task path: /root/.ansible/collections/ansible_collections/infra/leapp/roles/common/tasks/main.yml:16
fatal: [managed-node01]: FAILED! => {"changed": false, "msg": "Another RIPU playbook job is already running. See /var/log/ripu/ripu.log for details. If the previous job was aborted, rename the log file to clear this failure and try again."}

PLAY RECAP *********************************************************************
managed-node01             : ok=3    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

-- Logs begin at Tue 2025-11-11 15:18:49 EST, end at Tue 2025-11-11 15:26:38 EST. --
Nov 11 15:26:36 managed-node01 platform-python[13482]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 11 15:26:37 managed-node01 platform-python[13616]: ansible-ansible.builtin.file Invoked with path=/var/log/ripu state=directory owner=root group=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 11 15:26:38 managed-node01 platform-python[13721]: ansible-ansible.builtin.stat Invoked with path=/var/log/ripu/ripu.log follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 11 15:26:38 managed-node01 sshd[13741]: Accepted publickey for root from 10.31.47.17 port 51772 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Nov 11 15:26:38 managed-node01 systemd[1]: Started Session 18 of user root.
-- Subject: Unit session-18.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit session-18.scope has finished starting up.
--
-- The start-up result is done.
Nov 11 15:26:38 managed-node01 systemd-logind[617]: New session 18 of user root.
-- Subject: A new session 18 has been created for user root
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID 18 has been created for the user root.
--
-- The leading process of the session is 13741.
Nov 11 15:26:38 managed-node01 sshd[13741]: pam_unix(sshd:session): session opened for user root by (uid=0)
Nov 11 15:26:38 managed-node01 sshd[13744]: Received disconnect from 10.31.47.17 port 51772:11: disconnected by user
Nov 11 15:26:38 managed-node01 sshd[13744]: Disconnected from user root 10.31.47.17 port 51772
Nov 11 15:26:38 managed-node01 sshd[13741]: pam_unix(sshd:session): session closed for user root
Nov 11 15:26:38 managed-node01 systemd[1]: session-18.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit session-18.scope has successfully entered the 'dead' state.
Nov 11 15:26:38 managed-node01 systemd-logind[617]: Session 18 logged out. Waiting for processes to exit.
Nov 11 15:26:38 managed-node01 systemd-logind[617]: Removed session 18.
-- Subject: Session 18 has been terminated
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A session with the ID 18 has been terminated.
Nov 11 15:26:38 managed-node01 sshd[13762]: Accepted publickey for root from 10.31.47.17 port 51782 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Nov 11 15:26:38 managed-node01 systemd[1]: Started Session 19 of user root.
-- Subject: Unit session-19.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit session-19.scope has finished starting up.
--
-- The start-up result is done.
Nov 11 15:26:38 managed-node01 systemd-logind[617]: New session 19 of user root.
-- Subject: A new session 19 has been created for user root
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID 19 has been created for the user root.
--
-- The leading process of the session is 13762.
Nov 11 15:26:38 managed-node01 sshd[13762]: pam_unix(sshd:session): session opened for user root by (uid=0)
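If the previous RIPU job really was aborted and nothing is still writing to the log, the failure above can be cleared exactly as the error message suggests: rename the existing log file on the managed node and re-run the test playbook. A minimal sketch, assuming the /var/log/ripu/ripu.log path shown in the output (the timestamped suffix is just one possible naming choice):

    # Confirm no other RIPU playbook run is active, then move the stale log aside
    mv /var/log/ripu/ripu.log /var/log/ripu/ripu.log.$(date +%Y%m%d%H%M%S)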