[WARNING]: Collection infra.leapp does not support Ansible version 2.14.18
[WARNING]: running playbook inside collection infra.leapp
[WARNING]: Collection community.general does not support Ansible version 2.14.18
ansible-playbook [core 2.14.18]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible-playbook
  python version = 3.9.23 (main, Aug 19 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-11)] (/usr/bin/python3)
  jinja version = 3.1.2
  libyaml = True
Using /etc/ansible/ansible.cfg as config file
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_default.yml ****************************************************
1 plays in /root/.ansible/collections/ansible_collections/infra/leapp/roles/analysis/tests/tests_default.yml

PLAY [Test] ********************************************************************

TASK [Gathering Facts] *********************************************************
task path: /root/.ansible/collections/ansible_collections/infra/leapp/roles/analysis/tests/tests_default.yml:2
ok: [managed-node01]

TASK [Initialize lock, logging, and common vars] *******************************
task path: /root/.ansible/collections/ansible_collections/infra/leapp/roles/analysis/tasks/main.yml:3

TASK [infra.leapp.common : Log directory exists] *******************************
task path: /root/.ansible/collections/ansible_collections/infra/leapp/roles/common/tasks/main.yml:3
ok: [managed-node01] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/ripu", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 22, "state": "directory", "uid": 0}

TASK [infra.leapp.common : Check for existing log file] ************************
task path: /root/.ansible/collections/ansible_collections/infra/leapp/roles/common/tasks/main.yml:11
ok: [managed-node01] => {"changed": false, "stat": {"atime": 1762530331.8362799, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "38d9cc3842ff489755b0244703b4fd4b0b944302", "ctime": 1762530331.84228, "dev": 51716, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 343933125, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1762530331.4902797, "nlink": 1, "path": "/var/log/ripu/ripu.log", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 61, "uid": 0, "version": "3890230587", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}

TASK [infra.leapp.common : Fail if log file already exists] ********************
task path: /root/.ansible/collections/ansible_collections/infra/leapp/roles/common/tasks/main.yml:16
fatal: [managed-node01]: FAILED! => {"changed": false, "msg": "Another RIPU playbook job is already running. See /var/log/ripu/ripu.log for details. If the previous job was aborted, rename the log file to clear this failure and try again."}

PLAY RECAP *********************************************************************
managed-node01             : ok=3    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Nov 07 10:46:07 managed-node01 python3[8877]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 07 10:46:08 managed-node01 python3[9030]: ansible-ansible.builtin.file Invoked with path=/var/log/ripu state=directory owner=root group=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 07 10:46:09 managed-node01 python3[9155]: ansible-ansible.builtin.stat Invoked with path=/var/log/ripu/ripu.log follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 07 10:46:09 managed-node01 sshd[9178]: Accepted publickey for root from 10.31.42.57 port 53260 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Nov 07 10:46:09 managed-node01 systemd-logind[604]: New session 11 of user root.
░░ Subject: A new session 11 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 11 has been created for the user root.
░░
░░ The leading process of the session is 9178.
Nov 07 10:46:09 managed-node01 systemd[1]: Started Session 11 of User root.
░░ Subject: A start job for unit session-11.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-11.scope has finished successfully.
░░
░░ The job identifier is 1775.
Nov 07 10:46:09 managed-node01 sshd[9178]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Nov 07 10:46:09 managed-node01 sshd[9181]: Received disconnect from 10.31.42.57 port 53260:11: disconnected by user
Nov 07 10:46:09 managed-node01 sshd[9181]: Disconnected from user root 10.31.42.57 port 53260
Nov 07 10:46:09 managed-node01 sshd[9178]: pam_unix(sshd:session): session closed for user root
Nov 07 10:46:09 managed-node01 systemd[1]: session-11.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-11.scope has successfully entered the 'dead' state.
Nov 07 10:46:09 managed-node01 systemd-logind[604]: Session 11 logged out. Waiting for processes to exit.
Nov 07 10:46:09 managed-node01 systemd-logind[604]: Removed session 11.
░░ Subject: Session 11 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 11 has been terminated.
Nov 07 10:46:09 managed-node01 sshd[9202]: Accepted publickey for root from 10.31.42.57 port 53276 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Nov 07 10:46:09 managed-node01 systemd-logind[604]: New session 12 of user root.
░░ Subject: A new session 12 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 12 has been created for the user root.
░░
░░ The leading process of the session is 9202.
Nov 07 10:46:09 managed-node01 systemd[1]: Started Session 12 of User root.
░░ Subject: A start job for unit session-12.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-12.scope has finished successfully.
░░
░░ The job identifier is 1860.
Nov 07 10:46:09 managed-node01 sshd[9202]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
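
The fatal task above is the lock guard in infra.leapp.common: an existing /var/log/ripu/ripu.log is treated as evidence of an in-flight RIPU job, so the play stops rather than run concurrently. As the error message instructs, recovery after an aborted run is to rename the stale log. A minimal ad-hoc cleanup sketch, assuming the host and path from the log above; this playbook is not part of the collection, and the .aborted suffix is an arbitrary choice:

- hosts: managed-node01
  become: true
  tasks:
    - name: Archive the stale RIPU log left by an aborted job
      ansible.builtin.command:
        cmd: mv /var/log/ripu/ripu.log /var/log/ripu/ripu.log.aborted
        removes: /var/log/ripu/ripu.log

The removes guard keeps the task idempotent: it only runs while the stale log is present. After this cleanup, re-running tests_default.yml should get past "Fail if log file already exists", since the stat task will then report exists: false.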