ansible-playbook 2.9.27
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.9.19 (main, May 16 2024, 11:40:09) [GCC 8.5.0 20210514 (Red Hat 8.5.0-22)]
No config file found; using defaults
[WARNING]: running playbook inside collection fedora.linux_system_roles
Skipping callback 'actionable', as we already have a stdout callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'null', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
Skipping callback 'selective', as we already have a stdout callback.
Skipping callback 'skippy', as we already have a stdout callback.
Skipping callback 'stderr', as we already have a stdout callback.
Skipping callback 'unixy', as we already have a stdout callback.
Skipping callback 'yaml', as we already have a stdout callback.

PLAYBOOK: tests_quadlet_demo.yml ***********************************************
2 plays in /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml

PLAY [all] *********************************************************************
META: ran handlers

TASK [Include vault variables] *************************************************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:5
Saturday 04 October 2025 12:36:10 -0400 (0:00:00.019) 0:00:00.019 ******
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_test_password": {
            "__ansible_vault": "$ANSIBLE_VAULT;1.1;AES256\n35383939616163653333633431363463313831383037386236646138333162396161356130303461\n3932623930643263313563336163316337643562333936360a363538636631313039343233383732\n38666530383538656639363465313230343533386130303833336434303438333161656262346562\n3362626538613031640a663330613638366132356534363534353239616666653466353961323533\n6565\n"
        },
        "mysql_container_root_password": {
            "__ansible_vault": "$ANSIBLE_VAULT;1.1;AES256\n61333932373230333539663035366431326163363166363036323963623131363530326231303634\n6635326161643165363366323062333334363730376631660a393566366139353861656364656661\n38653463363837336639363032646433666361646535366137303464623261313663643336306465\n6264663730656337310a343962353137386238383064646533366433333437303566656433386233\n34343235326665646661623131643335313236313131353661386338343366316261643634653633\n3832313034366536616531323963333234326461353130303532\n"
        }
    },
    "ansible_included_var_files": [
        "/tmp/podman-SJc/tests/vars/vault-variables.yml"
    ],
    "changed": false
}
META: ran handlers
META: ran handlers

PLAY [Deploy the quadlet demo app] *********************************************

TASK [Gathering Facts] *********************************************************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:9
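[Editor's note: the "Include vault variables" task above loads encrypted test secrets from a vault file. A minimal sketch of what such a task looks like, assuming the file and variable names shown in the log (the real test task may differ):

  - name: Include vault variables
    include_vars:
      file: vault-variables.yml   # holds __podman_test_password and mysql_container_root_password
                                  # as !vault-encrypted values

The file would have been encrypted beforehand with ansible-vault, and the run supplied a matching --vault-password-file (or --ask-vault-pass) so the values decrypt at load time.]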
Saturday 04 October 2025 12:36:10 -0400 (0:00:00.018) 0:00:00.038 ******
ok: [managed-node2]
META: ran handlers

TASK [Test is only supported on x86_64] ****************************************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:38
Saturday 04 October 2025 12:36:11 -0400 (0:00:00.947) 0:00:00.985 ******
skipping: [managed-node2] => {}
META:

TASK [Generate certificates] ***************************************************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:51
Saturday 04 October 2025 12:36:11 -0400 (0:00:00.058) 0:00:01.044 ******

TASK [fedora.linux_system_roles.certificate : Set version specific variables] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:2
Saturday 04 October 2025 12:36:11 -0400 (0:00:00.036) 0:00:01.080 ******
included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml for managed-node2

TASK [fedora.linux_system_roles.certificate : Ensure ansible_facts used by role] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:2
Saturday 04 October 2025 12:36:11 -0400 (0:00:00.023) 0:00:01.104 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.certificate : Check if system is ostree] *******
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:10
Saturday 04 October 2025 12:36:11 -0400 (0:00:00.016) 0:00:01.120 ******
ok: [managed-node2] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}

TASK [fedora.linux_system_roles.certificate : Set flag to indicate system is ostree] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:15
Saturday 04 October 2025 12:36:12 -0400 (0:00:00.419) 0:00:01.540 ******
ok: [managed-node2] => {
    "ansible_facts": {
        "__certificate_is_ostree": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.certificate : Run systemctl] *******************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:22
Saturday 04 October 2025 12:36:12 -0400 (0:00:00.019) 0:00:01.559 ******
ok: [managed-node2] => {
    "changed": false,
    "cmd": [
        "systemctl",
        "is-system-running"
    ],
    "delta": "0:00:00.007596",
    "end": "2025-10-04 12:36:12.492897",
    "failed_when_result": false,
    "rc": 0,
    "start": "2025-10-04 12:36:12.485301"
}

STDOUT:

running

TASK [fedora.linux_system_roles.certificate : Require installed systemd] *******
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:30
Saturday 04 October 2025 12:36:12 -0400 (0:00:00.417) 0:00:01.976 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.certificate : Set flag to indicate that systemd runtime operations are available] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:35
Saturday 04 October 2025 12:36:12 -0400 (0:00:00.026) 0:00:02.003 ******
ok: [managed-node2] => {
    "ansible_facts": {
        "__certificate_is_booted": true
    },
    "changed": false
}
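[Editor's note: the certificate, podman, and firewall roles in this run all probe the same two host properties before doing any work: whether the system is image-based (ostree) and whether systemd is booted and operational. A rough sketch of that detection pattern, with made-up fact names (the real tasks live in each role's set_vars.yml / firewalld.yml):

  - name: Check if system is ostree
    stat:
      path: /run/ostree-booted
    register: __ostree_stat

  - name: Set flag to indicate system is ostree
    set_fact:
      __is_ostree: "{{ __ostree_stat.stat.exists }}"

  - name: Run systemctl to see whether systemd is operational
    command: systemctl is-system-running
    register: __systemctl_run
    failed_when: false   # states like "degraded" return non-zero but still mean "booted"

  - name: Set flag to indicate that systemd runtime operations are available
    set_fact:
      __is_booted: "{{ __systemctl_run.stdout != 'offline' }}"

This matches what the log shows: a stat on a marker file, then "systemctl is-system-running" with failed_when_result forced to false.]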
TASK [fedora.linux_system_roles.certificate : Set platform/version specific variables] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:40
Saturday 04 October 2025 12:36:12 -0400 (0:00:00.026) 0:00:02.030 ******
skipping: [managed-node2] => (item=RedHat.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": "RedHat.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node2] => (item=CentOS.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node2] => (item=CentOS_8.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_8.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed-node2] => (item=CentOS_8.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_8.yml",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.certificate : Ensure certificate role dependencies are installed] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:5
Saturday 04 October 2025 12:36:12 -0400 (0:00:00.056) 0:00:02.086 ******
changed: [managed-node2] => {
    "changed": true,
    "rc": 0,
    "results": [
        "Installed: python3-pyasn1-0.3.7-6.el8.noarch"
    ]
}
lsrpackages: python3-cryptography python3-dbus python3-pyasn1

TASK [fedora.linux_system_roles.certificate : Ensure provider packages are installed] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:15
Saturday 04 October 2025 12:36:15 -0400 (0:00:03.199) 0:00:05.286 ******
changed: [managed-node2] => (item=certmonger) => {
    "__certificate_provider": "certmonger",
    "ansible_loop_var": "__certificate_provider",
    "changed": true,
    "rc": 0,
    "results": [
        "Installed: xmlrpc-c-client-1.51.0-9.el8.x86_64",
        "Installed: xmlrpc-c-1.51.0-9.el8.x86_64",
        "Installed: certmonger-0.79.17-2.el8.x86_64"
    ]
}
lsrpackages: certmonger

TASK [fedora.linux_system_roles.certificate : Ensure pre-scripts hooks directory exists] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:25
Saturday 04 October 2025 12:36:19 -0400 (0:00:03.918) 0:00:09.204 ******
changed: [managed-node2] => (item=certmonger) => {
    "__certificate_provider": "certmonger",
    "ansible_loop_var": "__certificate_provider",
    "changed": true,
    "gid": 0,
    "group": "root",
    "mode": "0700",
    "owner": "root",
    "path": "/etc/certmonger//pre-scripts",
    "secontext": "unconfined_u:object_r:etc_t:s0",
    "size": 6,
    "state": "directory",
    "uid": 0
}

TASK [fedora.linux_system_roles.certificate : Ensure post-scripts hooks directory exists] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:49
Saturday 04 October 2025 12:36:20 -0400 (0:00:00.460) 0:00:09.665 ******
changed: [managed-node2] => (item=certmonger) => {
    "__certificate_provider": "certmonger",
    "ansible_loop_var": "__certificate_provider",
    "changed": true,
    "gid": 0,
    "group": "root",
    "mode": "0700",
    "owner": "root",
    "path": "/etc/certmonger//post-scripts",
    "secontext": "unconfined_u:object_r:etc_t:s0",
    "size": 6,
    "state": "directory",
    "uid": 0
}

TASK [fedora.linux_system_roles.certificate : Ensure provider service is running] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:76
Saturday 04 October 2025 12:36:20 -0400 (0:00:00.341) 0:00:10.006 ******
changed: [managed-node2] => (item=certmonger) => {
"__certificate_provider": "certmonger", "ansible_loop_var": "__certificate_provider", "changed": true, "enabled": true, "name": "certmonger", "state": "started", "status": { "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "dbus.socket syslog.target systemd-journald.socket sysinit.target system.slice basic.target network.target dbus.service", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedorahosted.certmonger", "CPUAccounting": "no", "CPUAffinity": "", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf", "CollectMode": "inactive", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "Certificate monitoring and PKI enrollment", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/certmonger (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/sbin/certmonger ; argv[]=/usr/sbin/certmonger -S -p /run/certmonger.pid -n $OPTS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/certmonger.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "certmonger.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": 
"infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "14003", "LimitNPROCSoft": "14003", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14003", "LimitSIGPENDINGSoft": "14003", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "MemoryAccounting": "yes", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "certmonger.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PIDFile": "/run/certmonger.pid", "PartOf": "dbus.service", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "sysinit.target dbus.socket system.slice", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestampMonotonic": "0", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "22405", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "UtmpMode": "init", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.certificate : Ensure certificate requests] ***** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:86 Saturday 04 October 2025 12:36:21 -0400 
(0:00:00.975) 0:00:10.982 ****** changed: [managed-node2] => (item={'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}) => { "ansible_loop_var": "item", "changed": true, "item": { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } } MSG: Certificate requested (new). TASK [fedora.linux_system_roles.certificate : Check if test mode is supported] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:138 Saturday 04 October 2025 12:36:22 -0400 (0:00:00.844) 0:00:11.827 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.certificate : Slurp the contents of the files] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:143 Saturday 04 October 2025 12:36:22 -0400 (0:00:00.028) 0:00:11.855 ****** ok: [managed-node2] => (item=['cert', {'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}]) => { "ansible_loop_var": "item", "changed": false, "content": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURnakNDQW1xZ0F3SUJBZ0lRVy8rUjE1VFVSTUs0NUdwK3djV3FrREFOQmdrcWhraUc5dzBCQVFzRkFEQlEKTVNBd0hnWURWUVFEREJkTWIyTmhiQ0JUYVdkdWFXNW5JRUYxZEdodmNtbDBlVEVzTUNvR0ExVUVBd3dqTldKbQpaamt4WkRjdE9UUmtORFEwWXpJdFlqaGxORFpoTjJVdFl6RmpOV0ZoT0dZd0hoY05NalV4TURBME1UWXpOakl5CldoY05Nall4TURBME1UWXpOakl4V2pBVU1SSXdFQVlEVlFRREV3bHNiMk5oYkdodmMzUXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ1VFa0h5OC9mdFJaQ1JnTzdpbTFJZWdheVNkeTZzK3VJMgpQc0ZGeDltMG5NVURUbHpYN1ZuN2VWaDlabEVBUHlrQjZSS2NjTFJZVFRWajh3NjNHNjA0Wi84K1VHK0NsYVRECkdCQTU1cDg3U0VBdFd1Zy9rSFE1cm55by9KVDVnMExBdnFOWXdSV1YyMnUyQlNENEt3eEtpZlhSTTZQaWl4L0MKYVJTWHVhS2g1c2hyQUoxWEpNcGd1ODZhWWZ0ZnlrSTFEZ0phZVdnc1g5NVJUcE1ld2hFV3p4anpjMlhWREdEYwpXUUZKc0VlaDZMZFR5WDhHTi9nYjllTnBDbTlTeS96RC8yNlc1UzU3S1g3b0lsWWZFZlFRNnlWbzdHODZ0bkF1CkQybUZaOE4reWNka3FtbU1LMnRpaVVYcXBHS1FhZUxhcFJ0WnI5bytyMEtjU0RxS1FGQ0pBZ01CQUFHamdaTXcKZ1pBd0N3WURWUjBQQkFRREFnV2dNQlFHQTFVZEVRUU5NQXVDQ1d4dlkyRnNhRzl6ZERBZEJnTlZIU1VFRmpBVQpCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUhBd0l3REFZRFZSMFRBUUgvQkFJd0FEQWRCZ05WSFE0RUZnUVVLRmFlCjhuVjNvWHRHUGlrRnZHNXJqczl5N3RRd0h3WURWUjBqQkJnd0ZvQVVNK05JM1RqUVBaSUtSM3NoTFp6KzZxUk8KZHpNd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFCWjVaTUUzT0FJRzUzeHRsajRkYS8zd0VjY05jb2RhcWMzVQp0QlFRajFmY0tTYkZiMmtvWXgvRklhVXlBRFFpcUg0WlhjN2FVL1hQSTA4Vko5N0xTcHZPcTJ6ZFdQaFBIUDMyCmRnTlRSam5Fd3hOdy9sR2F4U0lCcWFQT2dSRXU2YTBjWHl1aG9vUUdDK25oS3BMeFNYM1h4WEtsNjlJYkUzOVoKZ0hpVXJIVllBR0Jrb2w3MmVLdDJaS25nZmJBZElDSGp2UVVPdnZnZUkxK2paNWlMV09neDNLQVdSdW8zMDFYbgpsMGFLTjFKQUNJaWNkaEJMaFh4TDVYTGxsbk9sbHZIbXpYdEMwb1dxWWY5aUhIdkFqZnRRdmZLOThqTjZFVThWCnl1b0pBRXZhcmhlYitSSlpHbGZDWEdwTkFrejFnRlFkVUpMVkhjekNNd215KzJ1V0NVdz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "encoding": "base64", "item": [ "cert", { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } ], "source": "/etc/pki/tls/certs/quadlet_demo.crt" } ok: [managed-node2] => (item=['key', {'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}]) => { "ansible_loop_var": "item", "changed": false, "content": 
"LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2Z0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktnd2dnU2tBZ0VBQW9JQkFRQ1VFa0h5OC9mdFJaQ1IKZ083aW0xSWVnYXlTZHk2cyt1STJQc0ZGeDltMG5NVURUbHpYN1ZuN2VWaDlabEVBUHlrQjZSS2NjTFJZVFRWago4dzYzRzYwNFovOCtVRytDbGFUREdCQTU1cDg3U0VBdFd1Zy9rSFE1cm55by9KVDVnMExBdnFOWXdSV1YyMnUyCkJTRDRLd3hLaWZYUk02UGlpeC9DYVJTWHVhS2g1c2hyQUoxWEpNcGd1ODZhWWZ0ZnlrSTFEZ0phZVdnc1g5NVIKVHBNZXdoRVd6eGp6YzJYVkRHRGNXUUZKc0VlaDZMZFR5WDhHTi9nYjllTnBDbTlTeS96RC8yNlc1UzU3S1g3bwpJbFlmRWZRUTZ5Vm83Rzg2dG5BdUQybUZaOE4reWNka3FtbU1LMnRpaVVYcXBHS1FhZUxhcFJ0WnI5bytyMEtjClNEcUtRRkNKQWdNQkFBRUNnZ0VBTHZhRGVEMHMyMUovQWNjMC9TWnFLMGJScHpxcDBTOVpZLzhQYWNSekpqZTYKdk11ejRzQmpFOEZ1OFlic0ZmbnlWYXJJdmxsNHViRHpTQm9sQnFwK2pDOWY0ekc3ekYwTi90cTQrc1JNcUk3SAozQnJESXJYOFJ2Y3lqcDVkMmExcUZKdmlUeG9lY0lOQmJGL0FEellJRmZRZnhSUnQzRUpuOWs5QnVzV2o4VmszCm92OWhDL05hSHY3L1ludEQ5SDJPZGYzNlFobEdoZU9WVEdMWE03T3p3UHRHeXllY2FPRmNuRW5kcmFzamlTMzUKS3Fhc2h4OVlIdnBacG9nVnFpaHQ3a0FDTGUyZGRDT1VFclVvdEFUY1pjYUtDbVdUNWJwaHZVekdaVWVsa1I4bQpIVUlUbG9wcjVCZjQwWXNtbnVuck5rNWR0MkhxMTUzcjZSVlE0SmFrb1FLQmdRRERMRW5nVmVyWkh0QnhQRUNkCk1UcHkvTTgzRXIrak81QURqeGRQZTBZamR6RnFOOWtxOUUxUVZsUWFIRkEwNTRoNm5oNmlsSG9HWGhRSy9NQlQKblphQXFVR2tYK3BwSnJ4NjV5d2VwYmFnakhhQkFvZ3hpMnFBbHdDL1N3S1JBVVFoZFc2MkE3TjRGZUFxblhaawpqdDQvYmhzYUxqNVJJU3F4YjN5anUzK3p0UUtCZ1FEQ09BQmMyWkJnQmUxQnRtRHdCOTczQVlIMlhFeVJjZzMyCk12Umt2TDdQaWhsOFRrNHlUTWp5djI0dU1tb3VtNFNqa0tYRnZIUEpha3pXd3dXYXRROWFlVFNva1hyZFdWYnIKREUxTVJnZHpBNHVIRFVEUkNxMkNlWEY5L1FsYjNOQjNBVzhTdWw5dnJ6MUlBN1B3VmFIaEpoOGFBSGNaZzN3WgplZFExOUxkV0JRS0JnUUMrWHBDZ2xLMUJvbURHVW5MajRJU1diQ2ppR3hONWNEdUVmU25MaVA1YzBZSU5qUFB5CmhlQnpvQURnaHdWazFRRzJPRXpCWC9tMkJFV2dnZkJHbnN1U0s0V3ZneTd0NmE2bVlwNFNOcWp2NkpJZVBBNEQKNVd5NGlKRmVCUmczdi9ob2VsYkdpczJmTUJjNitlUGxLY1YyTVR1V1Njelc3WGJySTBkN25RTnVrUUtCZ1FDago2ZjJzWDFZeEpHOWo2VmVRM1NPNVZnVm9kZWVOVFRRcFdFSFpEMDcrKzYrY3NMM2dSOXZFdS9seWRjd1Z2OTFHCjZscHVNeW1Ka1BSK3dLTm5PVzVXempxNkZlWWJFRDZDSzZURlBja2xzWlU5aXRyc1VsV3o2MmowaXUwdUlZT3oKSEh1dzA2aWVLc2pPa1lsNHlkelFsNHJpT0FoTWVTTHdvVmlQblJScVBRS0JnRzhLM2Y4RHY4WVp1WXNKOUcyVQp6eDhRUi9sWkhNZnFtL0RqTmw0cjZvTjlxUncwWGRiOElqajU3QnNNcVRNcnJ3UU9Pak0yVUp1TVR0SHpEWnBFCmx6SC93em5yay9xa1RWb0ZNTUNiY3FLQkE3VGpOT0ZNWUJkOTRzV3Q4K21NcEZiL1hSYjlOMWtrTWZuVnNoVjkKLy9oZ1F3TVd4MnRtZU9sRkRkVzJFVE1OCi0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K", "encoding": "base64", "item": [ "key", { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } ], "source": "/etc/pki/tls/private/quadlet_demo.key" } ok: [managed-node2] => (item=['ca', {'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}]) => { "ansible_loop_var": "item", "changed": false, "content": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURnakNDQW1xZ0F3SUJBZ0lRVy8rUjE1VFVSTUs0NUdwK3djV3FrREFOQmdrcWhraUc5dzBCQVFzRkFEQlEKTVNBd0hnWURWUVFEREJkTWIyTmhiQ0JUYVdkdWFXNW5JRUYxZEdodmNtbDBlVEVzTUNvR0ExVUVBd3dqTldKbQpaamt4WkRjdE9UUmtORFEwWXpJdFlqaGxORFpoTjJVdFl6RmpOV0ZoT0dZd0hoY05NalV4TURBME1UWXpOakl5CldoY05Nall4TURBME1UWXpOakl4V2pBVU1SSXdFQVlEVlFRREV3bHNiMk5oYkdodmMzUXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ1VFa0h5OC9mdFJaQ1JnTzdpbTFJZWdheVNkeTZzK3VJMgpQc0ZGeDltMG5NVURUbHpYN1ZuN2VWaDlabEVBUHlrQjZSS2NjTFJZVFRWajh3NjNHNjA0Wi84K1VHK0NsYVRECkdCQTU1cDg3U0VBdFd1Zy9rSFE1cm55by9KVDVnMExBdnFOWXdSV1YyMnUyQlNENEt3eEtpZlhSTTZQaWl4L0MKYVJTWHVhS2g1c2hyQUoxWEpNcGd1ODZhWWZ0ZnlrSTFEZ0phZVdnc1g5NVJUcE1ld2hFV3p4anpjMlhWREdEYwpXUUZKc0VlaDZMZFR5WDhHTi9nYjllTnBDbTlTeS96RC8yNlc1UzU3S1g3b0lsWWZFZlFRNnlWbzdHODZ0bkF1CkQybUZaOE4reWNka3FtbU1LMnRpaVVYcXBHS1FhZUxhcFJ0WnI5bytyMEtjU0RxS1FGQ0pBZ01CQUFHamdaTXcKZ1pBd0N3WURWUjBQQkFRREFnV2dNQlFHQTFVZEVRUU5NQXVDQ1d4dlkyRnNhRzl6ZERBZEJnTlZIU1VFRmpBVQpCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUhBd0l3REFZRFZSMFRBUUgvQkFJd0FEQWRCZ05WSFE0RUZnUVVLRmFlCjhuVjNvWHRHUGlrRnZHNXJqczl5N3RRd0h3WURWUjBqQkJnd0ZvQVVNK05JM1RqUVBaSUtSM3NoTFp6KzZxUk8KZHpNd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFCWjVaTUUzT0FJRzUzeHRsajRkYS8zd0VjY05jb2RhcWMzVQp0QlFRajFmY0tTYkZiMmtvWXgvRklhVXlBRFFpcUg0WlhjN2FVL1hQSTA4Vko5N0xTcHZPcTJ6ZFdQaFBIUDMyCmRnTlRSam5Fd3hOdy9sR2F4U0lCcWFQT2dSRXU2YTBjWHl1aG9vUUdDK25oS3BMeFNYM1h4WEtsNjlJYkUzOVoKZ0hpVXJIVllBR0Jrb2w3MmVLdDJaS25nZmJBZElDSGp2UVVPdnZnZUkxK2paNWlMV09neDNLQVdSdW8zMDFYbgpsMGFLTjFKQUNJaWNkaEJMaFh4TDVYTGxsbk9sbHZIbXpYdEMwb1dxWWY5aUhIdkFqZnRRdmZLOThqTjZFVThWCnl1b0pBRXZhcmhlYitSSlpHbGZDWEdwTkFrejFnRlFkVUpMVkhjekNNd215KzJ1V0NVdz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "encoding": "base64", "item": [ "ca", { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } ], "source": "/etc/pki/tls/certs/quadlet_demo.crt" } TASK [fedora.linux_system_roles.certificate : Reset certificate_test_certs] **** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:151 Saturday 04 October 2025 12:36:23 -0400 (0:00:01.241) 0:00:13.096 ****** ok: [managed-node2] => { "ansible_facts": { "certificate_test_certs": {} }, "changed": false } TASK [fedora.linux_system_roles.certificate : Create return data] ************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:155 Saturday 04 October 2025 12:36:23 -0400 (0:00:00.031) 0:00:13.128 ****** ok: [managed-node2] => (item=quadlet_demo) => { "ansible_facts": { "certificate_test_certs": { "quadlet_demo": { "ca": "/etc/pki/tls/certs/quadlet_demo.crt", "ca_content": "-----BEGIN 
CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQW/+R15TURMK45Gp+wcWqkDANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjNWJm\nZjkxZDctOTRkNDQ0YzItYjhlNDZhN2UtYzFjNWFhOGYwHhcNMjUxMDA0MTYzNjIy\nWhcNMjYxMDA0MTYzNjIxWjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCUEkHy8/ftRZCRgO7im1IegaySdy6s+uI2\nPsFFx9m0nMUDTlzX7Vn7eVh9ZlEAPykB6RKccLRYTTVj8w63G604Z/8+UG+ClaTD\nGBA55p87SEAtWug/kHQ5rnyo/JT5g0LAvqNYwRWV22u2BSD4KwxKifXRM6Piix/C\naRSXuaKh5shrAJ1XJMpgu86aYftfykI1DgJaeWgsX95RTpMewhEWzxjzc2XVDGDc\nWQFJsEeh6LdTyX8GN/gb9eNpCm9Sy/zD/26W5S57KX7oIlYfEfQQ6yVo7G86tnAu\nD2mFZ8N+ycdkqmmMK2tiiUXqpGKQaeLapRtZr9o+r0KcSDqKQFCJAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUKFae\n8nV3oXtGPikFvG5rjs9y7tQwHwYDVR0jBBgwFoAUM+NI3TjQPZIKR3shLZz+6qRO\ndzMwDQYJKoZIhvcNAQELBQADggEBABZ5ZME3OAIG53xtlj4da/3wEccNcodaqc3U\ntBQQj1fcKSbFb2koYx/FIaUyADQiqH4ZXc7aU/XPI08VJ97LSpvOq2zdWPhPHP32\ndgNTRjnEwxNw/lGaxSIBqaPOgREu6a0cXyuhooQGC+nhKpLxSX3XxXKl69IbE39Z\ngHiUrHVYAGBkol72eKt2ZKngfbAdICHjvQUOvvgeI1+jZ5iLWOgx3KAWRuo301Xn\nl0aKN1JACIicdhBLhXxL5XLllnOllvHmzXtC0oWqYf9iHHvAjftQvfK98jN6EU8V\nyuoJAEvarheb+RJZGlfCXGpNAkz1gFQdUJLVHczCMwmy+2uWCUw=\n-----END CERTIFICATE-----\n", "cert": "/etc/pki/tls/certs/quadlet_demo.crt", "cert_content": "-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQW/+R15TURMK45Gp+wcWqkDANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjNWJm\nZjkxZDctOTRkNDQ0YzItYjhlNDZhN2UtYzFjNWFhOGYwHhcNMjUxMDA0MTYzNjIy\nWhcNMjYxMDA0MTYzNjIxWjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCUEkHy8/ftRZCRgO7im1IegaySdy6s+uI2\nPsFFx9m0nMUDTlzX7Vn7eVh9ZlEAPykB6RKccLRYTTVj8w63G604Z/8+UG+ClaTD\nGBA55p87SEAtWug/kHQ5rnyo/JT5g0LAvqNYwRWV22u2BSD4KwxKifXRM6Piix/C\naRSXuaKh5shrAJ1XJMpgu86aYftfykI1DgJaeWgsX95RTpMewhEWzxjzc2XVDGDc\nWQFJsEeh6LdTyX8GN/gb9eNpCm9Sy/zD/26W5S57KX7oIlYfEfQQ6yVo7G86tnAu\nD2mFZ8N+ycdkqmmMK2tiiUXqpGKQaeLapRtZr9o+r0KcSDqKQFCJAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUKFae\n8nV3oXtGPikFvG5rjs9y7tQwHwYDVR0jBBgwFoAUM+NI3TjQPZIKR3shLZz+6qRO\ndzMwDQYJKoZIhvcNAQELBQADggEBABZ5ZME3OAIG53xtlj4da/3wEccNcodaqc3U\ntBQQj1fcKSbFb2koYx/FIaUyADQiqH4ZXc7aU/XPI08VJ97LSpvOq2zdWPhPHP32\ndgNTRjnEwxNw/lGaxSIBqaPOgREu6a0cXyuhooQGC+nhKpLxSX3XxXKl69IbE39Z\ngHiUrHVYAGBkol72eKt2ZKngfbAdICHjvQUOvvgeI1+jZ5iLWOgx3KAWRuo301Xn\nl0aKN1JACIicdhBLhXxL5XLllnOllvHmzXtC0oWqYf9iHHvAjftQvfK98jN6EU8V\nyuoJAEvarheb+RJZGlfCXGpNAkz1gFQdUJLVHczCMwmy+2uWCUw=\n-----END CERTIFICATE-----\n", "key": "/etc/pki/tls/private/quadlet_demo.key", "key_content": "-----BEGIN PRIVATE 
KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCUEkHy8/ftRZCR\ngO7im1IegaySdy6s+uI2PsFFx9m0nMUDTlzX7Vn7eVh9ZlEAPykB6RKccLRYTTVj\n8w63G604Z/8+UG+ClaTDGBA55p87SEAtWug/kHQ5rnyo/JT5g0LAvqNYwRWV22u2\nBSD4KwxKifXRM6Piix/CaRSXuaKh5shrAJ1XJMpgu86aYftfykI1DgJaeWgsX95R\nTpMewhEWzxjzc2XVDGDcWQFJsEeh6LdTyX8GN/gb9eNpCm9Sy/zD/26W5S57KX7o\nIlYfEfQQ6yVo7G86tnAuD2mFZ8N+ycdkqmmMK2tiiUXqpGKQaeLapRtZr9o+r0Kc\nSDqKQFCJAgMBAAECggEALvaDeD0s21J/Acc0/SZqK0bRpzqp0S9ZY/8PacRzJje6\nvMuz4sBjE8Fu8YbsFfnyVarIvll4ubDzSBolBqp+jC9f4zG7zF0N/tq4+sRMqI7H\n3BrDIrX8Rvcyjp5d2a1qFJviTxoecINBbF/ADzYIFfQfxRRt3EJn9k9BusWj8Vk3\nov9hC/NaHv7/YntD9H2Odf36QhlGheOVTGLXM7OzwPtGyyecaOFcnEndrasjiS35\nKqashx9YHvpZpogVqiht7kACLe2ddCOUErUotATcZcaKCmWT5bphvUzGZUelkR8m\nHUITlopr5Bf40YsmnunrNk5dt2Hq153r6RVQ4JakoQKBgQDDLEngVerZHtBxPECd\nMTpy/M83Er+jO5ADjxdPe0YjdzFqN9kq9E1QVlQaHFA054h6nh6ilHoGXhQK/MBT\nnZaAqUGkX+ppJrx65ywepbagjHaBAogxi2qAlwC/SwKRAUQhdW62A7N4FeAqnXZk\njt4/bhsaLj5RISqxb3yju3+ztQKBgQDCOABc2ZBgBe1BtmDwB973AYH2XEyRcg32\nMvRkvL7Pihl8Tk4yTMjyv24uMmoum4SjkKXFvHPJakzWwwWatQ9aeTSokXrdWVbr\nDE1MRgdzA4uHDUDRCq2CeXF9/Qlb3NB3AW8Sul9vrz1IA7PwVaHhJh8aAHcZg3wZ\nedQ19LdWBQKBgQC+XpCglK1BomDGUnLj4ISWbCjiGxN5cDuEfSnLiP5c0YINjPPy\nheBzoADghwVk1QG2OEzBX/m2BEWggfBGnsuSK4Wvgy7t6a6mYp4SNqjv6JIePA4D\n5Wy4iJFeBRg3v/hoelbGis2fMBc6+ePlKcV2MTuWSczW7XbrI0d7nQNukQKBgQCj\n6f2sX1YxJG9j6VeQ3SO5VgVodeeNTTQpWEHZD07++6+csL3gR9vEu/lydcwVv91G\n6lpuMymJkPR+wKNnOW5Wzjq6FeYbED6CK6TFPcklsZU9itrsUlWz62j0iu0uIYOz\nHHuw06ieKsjOkYl4ydzQl4riOAhMeSLwoViPnRRqPQKBgG8K3f8Dv8YZuYsJ9G2U\nzx8QR/lZHMfqm/DjNl4r6oN9qRw0Xdb8Ijj57BsMqTMrrwQOOjM2UJuMTtHzDZpE\nlzH/wznrk/qkTVoFMMCbcqKBA7TjNOFMYBd94sWt8+mMpFb/XRb9N1kkMfnVshV9\n//hgQwMWx2tmeOlFDdW2ETMN\n-----END PRIVATE KEY-----\n" } } }, "ansible_loop_var": "cert_name", "cert_name": "quadlet_demo", "changed": false } TASK [fedora.linux_system_roles.certificate : Stop tracking certificates] ****** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:169 Saturday 04 October 2025 12:36:23 -0400 (0:00:00.057) 0:00:13.186 ****** ok: [managed-node2] => (item={'cert': '/etc/pki/tls/certs/quadlet_demo.crt', 'key': '/etc/pki/tls/private/quadlet_demo.key', 'ca': '/etc/pki/tls/certs/quadlet_demo.crt', 'cert_content': '-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQW/+R15TURMK45Gp+wcWqkDANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjNWJm\nZjkxZDctOTRkNDQ0YzItYjhlNDZhN2UtYzFjNWFhOGYwHhcNMjUxMDA0MTYzNjIy\nWhcNMjYxMDA0MTYzNjIxWjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCUEkHy8/ftRZCRgO7im1IegaySdy6s+uI2\nPsFFx9m0nMUDTlzX7Vn7eVh9ZlEAPykB6RKccLRYTTVj8w63G604Z/8+UG+ClaTD\nGBA55p87SEAtWug/kHQ5rnyo/JT5g0LAvqNYwRWV22u2BSD4KwxKifXRM6Piix/C\naRSXuaKh5shrAJ1XJMpgu86aYftfykI1DgJaeWgsX95RTpMewhEWzxjzc2XVDGDc\nWQFJsEeh6LdTyX8GN/gb9eNpCm9Sy/zD/26W5S57KX7oIlYfEfQQ6yVo7G86tnAu\nD2mFZ8N+ycdkqmmMK2tiiUXqpGKQaeLapRtZr9o+r0KcSDqKQFCJAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUKFae\n8nV3oXtGPikFvG5rjs9y7tQwHwYDVR0jBBgwFoAUM+NI3TjQPZIKR3shLZz+6qRO\ndzMwDQYJKoZIhvcNAQELBQADggEBABZ5ZME3OAIG53xtlj4da/3wEccNcodaqc3U\ntBQQj1fcKSbFb2koYx/FIaUyADQiqH4ZXc7aU/XPI08VJ97LSpvOq2zdWPhPHP32\ndgNTRjnEwxNw/lGaxSIBqaPOgREu6a0cXyuhooQGC+nhKpLxSX3XxXKl69IbE39Z\ngHiUrHVYAGBkol72eKt2ZKngfbAdICHjvQUOvvgeI1+jZ5iLWOgx3KAWRuo301Xn\nl0aKN1JACIicdhBLhXxL5XLllnOllvHmzXtC0oWqYf9iHHvAjftQvfK98jN6EU8V\nyuoJAEvarheb+RJZGlfCXGpNAkz1gFQdUJLVHczCMwmy+2uWCUw=\n-----END 
CERTIFICATE-----\n', 'key_content': '-----BEGIN PRIVATE KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCUEkHy8/ftRZCR\ngO7im1IegaySdy6s+uI2PsFFx9m0nMUDTlzX7Vn7eVh9ZlEAPykB6RKccLRYTTVj\n8w63G604Z/8+UG+ClaTDGBA55p87SEAtWug/kHQ5rnyo/JT5g0LAvqNYwRWV22u2\nBSD4KwxKifXRM6Piix/CaRSXuaKh5shrAJ1XJMpgu86aYftfykI1DgJaeWgsX95R\nTpMewhEWzxjzc2XVDGDcWQFJsEeh6LdTyX8GN/gb9eNpCm9Sy/zD/26W5S57KX7o\nIlYfEfQQ6yVo7G86tnAuD2mFZ8N+ycdkqmmMK2tiiUXqpGKQaeLapRtZr9o+r0Kc\nSDqKQFCJAgMBAAECggEALvaDeD0s21J/Acc0/SZqK0bRpzqp0S9ZY/8PacRzJje6\nvMuz4sBjE8Fu8YbsFfnyVarIvll4ubDzSBolBqp+jC9f4zG7zF0N/tq4+sRMqI7H\n3BrDIrX8Rvcyjp5d2a1qFJviTxoecINBbF/ADzYIFfQfxRRt3EJn9k9BusWj8Vk3\nov9hC/NaHv7/YntD9H2Odf36QhlGheOVTGLXM7OzwPtGyyecaOFcnEndrasjiS35\nKqashx9YHvpZpogVqiht7kACLe2ddCOUErUotATcZcaKCmWT5bphvUzGZUelkR8m\nHUITlopr5Bf40YsmnunrNk5dt2Hq153r6RVQ4JakoQKBgQDDLEngVerZHtBxPECd\nMTpy/M83Er+jO5ADjxdPe0YjdzFqN9kq9E1QVlQaHFA054h6nh6ilHoGXhQK/MBT\nnZaAqUGkX+ppJrx65ywepbagjHaBAogxi2qAlwC/SwKRAUQhdW62A7N4FeAqnXZk\njt4/bhsaLj5RISqxb3yju3+ztQKBgQDCOABc2ZBgBe1BtmDwB973AYH2XEyRcg32\nMvRkvL7Pihl8Tk4yTMjyv24uMmoum4SjkKXFvHPJakzWwwWatQ9aeTSokXrdWVbr\nDE1MRgdzA4uHDUDRCq2CeXF9/Qlb3NB3AW8Sul9vrz1IA7PwVaHhJh8aAHcZg3wZ\nedQ19LdWBQKBgQC+XpCglK1BomDGUnLj4ISWbCjiGxN5cDuEfSnLiP5c0YINjPPy\nheBzoADghwVk1QG2OEzBX/m2BEWggfBGnsuSK4Wvgy7t6a6mYp4SNqjv6JIePA4D\n5Wy4iJFeBRg3v/hoelbGis2fMBc6+ePlKcV2MTuWSczW7XbrI0d7nQNukQKBgQCj\n6f2sX1YxJG9j6VeQ3SO5VgVodeeNTTQpWEHZD07++6+csL3gR9vEu/lydcwVv91G\n6lpuMymJkPR+wKNnOW5Wzjq6FeYbED6CK6TFPcklsZU9itrsUlWz62j0iu0uIYOz\nHHuw06ieKsjOkYl4ydzQl4riOAhMeSLwoViPnRRqPQKBgG8K3f8Dv8YZuYsJ9G2U\nzx8QR/lZHMfqm/DjNl4r6oN9qRw0Xdb8Ijj57BsMqTMrrwQOOjM2UJuMTtHzDZpE\nlzH/wznrk/qkTVoFMMCbcqKBA7TjNOFMYBd94sWt8+mMpFb/XRb9N1kkMfnVshV9\n//hgQwMWx2tmeOlFDdW2ETMN\n-----END PRIVATE KEY-----\n', 'ca_content': '-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQW/+R15TURMK45Gp+wcWqkDANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjNWJm\nZjkxZDctOTRkNDQ0YzItYjhlNDZhN2UtYzFjNWFhOGYwHhcNMjUxMDA0MTYzNjIy\nWhcNMjYxMDA0MTYzNjIxWjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCUEkHy8/ftRZCRgO7im1IegaySdy6s+uI2\nPsFFx9m0nMUDTlzX7Vn7eVh9ZlEAPykB6RKccLRYTTVj8w63G604Z/8+UG+ClaTD\nGBA55p87SEAtWug/kHQ5rnyo/JT5g0LAvqNYwRWV22u2BSD4KwxKifXRM6Piix/C\naRSXuaKh5shrAJ1XJMpgu86aYftfykI1DgJaeWgsX95RTpMewhEWzxjzc2XVDGDc\nWQFJsEeh6LdTyX8GN/gb9eNpCm9Sy/zD/26W5S57KX7oIlYfEfQQ6yVo7G86tnAu\nD2mFZ8N+ycdkqmmMK2tiiUXqpGKQaeLapRtZr9o+r0KcSDqKQFCJAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUKFae\n8nV3oXtGPikFvG5rjs9y7tQwHwYDVR0jBBgwFoAUM+NI3TjQPZIKR3shLZz+6qRO\ndzMwDQYJKoZIhvcNAQELBQADggEBABZ5ZME3OAIG53xtlj4da/3wEccNcodaqc3U\ntBQQj1fcKSbFb2koYx/FIaUyADQiqH4ZXc7aU/XPI08VJ97LSpvOq2zdWPhPHP32\ndgNTRjnEwxNw/lGaxSIBqaPOgREu6a0cXyuhooQGC+nhKpLxSX3XxXKl69IbE39Z\ngHiUrHVYAGBkol72eKt2ZKngfbAdICHjvQUOvvgeI1+jZ5iLWOgx3KAWRuo301Xn\nl0aKN1JACIicdhBLhXxL5XLllnOllvHmzXtC0oWqYf9iHHvAjftQvfK98jN6EU8V\nyuoJAEvarheb+RJZGlfCXGpNAkz1gFQdUJLVHczCMwmy+2uWCUw=\n-----END CERTIFICATE-----\n'}) => { "ansible_loop_var": "item", "changed": false, "cmd": [ "getcert", "stop-tracking", "-f", "/etc/pki/tls/certs/quadlet_demo.crt" ], "delta": "0:00:00.031777", "end": "2025-10-04 12:36:24.108666", "item": { "ca": "/etc/pki/tls/certs/quadlet_demo.crt", "ca_content": "-----BEGIN 
CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQW/+R15TURMK45Gp+wcWqkDANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjNWJm\nZjkxZDctOTRkNDQ0YzItYjhlNDZhN2UtYzFjNWFhOGYwHhcNMjUxMDA0MTYzNjIy\nWhcNMjYxMDA0MTYzNjIxWjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCUEkHy8/ftRZCRgO7im1IegaySdy6s+uI2\nPsFFx9m0nMUDTlzX7Vn7eVh9ZlEAPykB6RKccLRYTTVj8w63G604Z/8+UG+ClaTD\nGBA55p87SEAtWug/kHQ5rnyo/JT5g0LAvqNYwRWV22u2BSD4KwxKifXRM6Piix/C\naRSXuaKh5shrAJ1XJMpgu86aYftfykI1DgJaeWgsX95RTpMewhEWzxjzc2XVDGDc\nWQFJsEeh6LdTyX8GN/gb9eNpCm9Sy/zD/26W5S57KX7oIlYfEfQQ6yVo7G86tnAu\nD2mFZ8N+ycdkqmmMK2tiiUXqpGKQaeLapRtZr9o+r0KcSDqKQFCJAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUKFae\n8nV3oXtGPikFvG5rjs9y7tQwHwYDVR0jBBgwFoAUM+NI3TjQPZIKR3shLZz+6qRO\ndzMwDQYJKoZIhvcNAQELBQADggEBABZ5ZME3OAIG53xtlj4da/3wEccNcodaqc3U\ntBQQj1fcKSbFb2koYx/FIaUyADQiqH4ZXc7aU/XPI08VJ97LSpvOq2zdWPhPHP32\ndgNTRjnEwxNw/lGaxSIBqaPOgREu6a0cXyuhooQGC+nhKpLxSX3XxXKl69IbE39Z\ngHiUrHVYAGBkol72eKt2ZKngfbAdICHjvQUOvvgeI1+jZ5iLWOgx3KAWRuo301Xn\nl0aKN1JACIicdhBLhXxL5XLllnOllvHmzXtC0oWqYf9iHHvAjftQvfK98jN6EU8V\nyuoJAEvarheb+RJZGlfCXGpNAkz1gFQdUJLVHczCMwmy+2uWCUw=\n-----END CERTIFICATE-----\n", "cert": "/etc/pki/tls/certs/quadlet_demo.crt", "cert_content": "-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQW/+R15TURMK45Gp+wcWqkDANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjNWJm\nZjkxZDctOTRkNDQ0YzItYjhlNDZhN2UtYzFjNWFhOGYwHhcNMjUxMDA0MTYzNjIy\nWhcNMjYxMDA0MTYzNjIxWjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCUEkHy8/ftRZCRgO7im1IegaySdy6s+uI2\nPsFFx9m0nMUDTlzX7Vn7eVh9ZlEAPykB6RKccLRYTTVj8w63G604Z/8+UG+ClaTD\nGBA55p87SEAtWug/kHQ5rnyo/JT5g0LAvqNYwRWV22u2BSD4KwxKifXRM6Piix/C\naRSXuaKh5shrAJ1XJMpgu86aYftfykI1DgJaeWgsX95RTpMewhEWzxjzc2XVDGDc\nWQFJsEeh6LdTyX8GN/gb9eNpCm9Sy/zD/26W5S57KX7oIlYfEfQQ6yVo7G86tnAu\nD2mFZ8N+ycdkqmmMK2tiiUXqpGKQaeLapRtZr9o+r0KcSDqKQFCJAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUKFae\n8nV3oXtGPikFvG5rjs9y7tQwHwYDVR0jBBgwFoAUM+NI3TjQPZIKR3shLZz+6qRO\ndzMwDQYJKoZIhvcNAQELBQADggEBABZ5ZME3OAIG53xtlj4da/3wEccNcodaqc3U\ntBQQj1fcKSbFb2koYx/FIaUyADQiqH4ZXc7aU/XPI08VJ97LSpvOq2zdWPhPHP32\ndgNTRjnEwxNw/lGaxSIBqaPOgREu6a0cXyuhooQGC+nhKpLxSX3XxXKl69IbE39Z\ngHiUrHVYAGBkol72eKt2ZKngfbAdICHjvQUOvvgeI1+jZ5iLWOgx3KAWRuo301Xn\nl0aKN1JACIicdhBLhXxL5XLllnOllvHmzXtC0oWqYf9iHHvAjftQvfK98jN6EU8V\nyuoJAEvarheb+RJZGlfCXGpNAkz1gFQdUJLVHczCMwmy+2uWCUw=\n-----END CERTIFICATE-----\n", "key": "/etc/pki/tls/private/quadlet_demo.key", "key_content": "-----BEGIN PRIVATE 
KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCUEkHy8/ftRZCR\ngO7im1IegaySdy6s+uI2PsFFx9m0nMUDTlzX7Vn7eVh9ZlEAPykB6RKccLRYTTVj\n8w63G604Z/8+UG+ClaTDGBA55p87SEAtWug/kHQ5rnyo/JT5g0LAvqNYwRWV22u2\nBSD4KwxKifXRM6Piix/CaRSXuaKh5shrAJ1XJMpgu86aYftfykI1DgJaeWgsX95R\nTpMewhEWzxjzc2XVDGDcWQFJsEeh6LdTyX8GN/gb9eNpCm9Sy/zD/26W5S57KX7o\nIlYfEfQQ6yVo7G86tnAuD2mFZ8N+ycdkqmmMK2tiiUXqpGKQaeLapRtZr9o+r0Kc\nSDqKQFCJAgMBAAECggEALvaDeD0s21J/Acc0/SZqK0bRpzqp0S9ZY/8PacRzJje6\nvMuz4sBjE8Fu8YbsFfnyVarIvll4ubDzSBolBqp+jC9f4zG7zF0N/tq4+sRMqI7H\n3BrDIrX8Rvcyjp5d2a1qFJviTxoecINBbF/ADzYIFfQfxRRt3EJn9k9BusWj8Vk3\nov9hC/NaHv7/YntD9H2Odf36QhlGheOVTGLXM7OzwPtGyyecaOFcnEndrasjiS35\nKqashx9YHvpZpogVqiht7kACLe2ddCOUErUotATcZcaKCmWT5bphvUzGZUelkR8m\nHUITlopr5Bf40YsmnunrNk5dt2Hq153r6RVQ4JakoQKBgQDDLEngVerZHtBxPECd\nMTpy/M83Er+jO5ADjxdPe0YjdzFqN9kq9E1QVlQaHFA054h6nh6ilHoGXhQK/MBT\nnZaAqUGkX+ppJrx65ywepbagjHaBAogxi2qAlwC/SwKRAUQhdW62A7N4FeAqnXZk\njt4/bhsaLj5RISqxb3yju3+ztQKBgQDCOABc2ZBgBe1BtmDwB973AYH2XEyRcg32\nMvRkvL7Pihl8Tk4yTMjyv24uMmoum4SjkKXFvHPJakzWwwWatQ9aeTSokXrdWVbr\nDE1MRgdzA4uHDUDRCq2CeXF9/Qlb3NB3AW8Sul9vrz1IA7PwVaHhJh8aAHcZg3wZ\nedQ19LdWBQKBgQC+XpCglK1BomDGUnLj4ISWbCjiGxN5cDuEfSnLiP5c0YINjPPy\nheBzoADghwVk1QG2OEzBX/m2BEWggfBGnsuSK4Wvgy7t6a6mYp4SNqjv6JIePA4D\n5Wy4iJFeBRg3v/hoelbGis2fMBc6+ePlKcV2MTuWSczW7XbrI0d7nQNukQKBgQCj\n6f2sX1YxJG9j6VeQ3SO5VgVodeeNTTQpWEHZD07++6+csL3gR9vEu/lydcwVv91G\n6lpuMymJkPR+wKNnOW5Wzjq6FeYbED6CK6TFPcklsZU9itrsUlWz62j0iu0uIYOz\nHHuw06ieKsjOkYl4ydzQl4riOAhMeSLwoViPnRRqPQKBgG8K3f8Dv8YZuYsJ9G2U\nzx8QR/lZHMfqm/DjNl4r6oN9qRw0Xdb8Ijj57BsMqTMrrwQOOjM2UJuMTtHzDZpE\nlzH/wznrk/qkTVoFMMCbcqKBA7TjNOFMYBd94sWt8+mMpFb/XRb9N1kkMfnVshV9\n//hgQwMWx2tmeOlFDdW2ETMN\n-----END PRIVATE KEY-----\n" }, "rc": 0, "start": "2025-10-04 12:36:24.076889" } STDOUT: Request "20251004163622" removed. 
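[Editor's note: everything from "Generate certificates" down to this point is driven by a single invocation of the certificate role in the test playbook. A minimal sketch of that invocation, matching the request item {'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'} seen in the log (surrounding test logic omitted):

  - name: Generate certificates
    include_role:
      name: fedora.linux_system_roles.certificate
    vars:
      certificate_requests:
        - name: quadlet_demo
          dns: ['localhost']
          ca: self-sign

With ca: self-sign the role installs certmonger, requests the pair into /etc/pki/tls/certs/quadlet_demo.crt and /etc/pki/tls/private/quadlet_demo.key, and, in the test flow shown here, slurps the contents into certificate_test_certs, runs "getcert stop-tracking -f <cert>", and removes the files, so only the in-memory copies survive for the rest of the play.]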
TASK [fedora.linux_system_roles.certificate : Remove files] ********************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:174
Saturday 04 October 2025 12:36:24 -0400 (0:00:00.414) 0:00:13.601 ******
changed: [managed-node2] => (item=/etc/pki/tls/certs/quadlet_demo.crt) => {
    "ansible_loop_var": "item",
    "changed": true,
    "item": "/etc/pki/tls/certs/quadlet_demo.crt",
    "path": "/etc/pki/tls/certs/quadlet_demo.crt",
    "state": "absent"
}
changed: [managed-node2] => (item=/etc/pki/tls/private/quadlet_demo.key) => {
    "ansible_loop_var": "item",
    "changed": true,
    "item": "/etc/pki/tls/private/quadlet_demo.key",
    "path": "/etc/pki/tls/private/quadlet_demo.key",
    "state": "absent"
}
ok: [managed-node2] => (item=/etc/pki/tls/certs/quadlet_demo.crt) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": "/etc/pki/tls/certs/quadlet_demo.crt",
    "path": "/etc/pki/tls/certs/quadlet_demo.crt",
    "state": "absent"
}

TASK [Run the role] ************************************************************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:62
Saturday 04 October 2025 12:36:25 -0400 (0:00:01.087) 0:00:14.689 ******
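[Editor's note: "Run the role" hands control to the podman role, which performs everything below. A plausible minimal invocation for this quadlet demo test; the variable values are illustrative, since the actual quadlet specs and firewall ports are not visible in this excerpt:

  - name: Run the role
    include_role:
      name: fedora.linux_system_roles.podman
    vars:
      podman_quadlet_specs: "{{ __quadlet_specs }}"   # hypothetical variable holding the demo's
                                                      # .network/.volume/.container quadlet units
      podman_firewall:
        - port: 8000/tcp    # illustrative; real ports are defined elsewhere in the test
          state: enabled

The role then validates the environment (ostree, transactional-update, podman version, user/group mapping) before touching any configuration, which is the long run of check tasks that follows.]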
TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3
Saturday 04 October 2025 12:36:25 -0400 (0:00:00.137) 0:00:14.827 ******
included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] ****
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3
Saturday 04 October 2025 12:36:25 -0400 (0:00:00.035) 0:00:14.862 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11
Saturday 04 October 2025 12:36:25 -0400 (0:00:00.024) 0:00:14.887 ******
ok: [managed-node2] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}

TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16
Saturday 04 October 2025 12:36:25 -0400 (0:00:00.368) 0:00:15.255 ******
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_is_ostree": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Check if transactional-update exists in /sbin] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:23
Saturday 04 October 2025 12:36:25 -0400 (0:00:00.035) 0:00:15.291 ******
ok: [managed-node2] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}

TASK [fedora.linux_system_roles.podman : Set flag if transactional-update exists] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:28
Saturday 04 October 2025 12:36:26 -0400 (0:00:00.330) 0:00:15.621 ******
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_is_transactional": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:32
Saturday 04 October 2025 12:36:26 -0400 (0:00:00.020) 0:00:15.641 ******
ok: [managed-node2] => (item=RedHat.yml) => {
    "ansible_facts": {
        "__podman_packages": [
            "podman",
            "shadow-utils-subid"
        ]
    },
    "ansible_included_var_files": [
        "/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/vars/RedHat.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "RedHat.yml"
}
skipping: [managed-node2] => (item=CentOS.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS.yml",
    "skip_reason": "Conditional result was False"
}
ok: [managed-node2] => (item=CentOS_8.yml) => {
    "ansible_facts": {
        "__podman_packages": [
            "crun",
            "podman",
            "podman-plugins",
            "shadow-utils-subid"
        ]
    },
    "ansible_included_var_files": [
        "/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/vars/CentOS_8.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_8.yml"
}
ok: [managed-node2] => (item=CentOS_8.yml) => {
    "ansible_facts": {
        "__podman_packages": [
            "crun",
            "podman",
            "podman-plugins",
            "shadow-utils-subid"
        ]
    },
    "ansible_included_var_files": [
        "/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/vars/CentOS_8.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_8.yml"
}

TASK [fedora.linux_system_roles.podman : Gather the package facts] *************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
Saturday 04 October 2025 12:36:26 -0400 (0:00:00.044) 0:00:15.686 ******
ok: [managed-node2] => {
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Enable copr if requested] *************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10
Saturday 04 October 2025 12:36:27 -0400 (0:00:01.725) 0:00:17.412 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14
Saturday 04 October 2025 12:36:28 -0400 (0:00:00.047) 0:00:17.459 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Notify user that reboot is needed to apply changes] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28
Saturday 04 October 2025 12:36:28 -0400 (0:00:00.064) 0:00:17.524 ******
skipping: [managed-node2] => {}

TASK [fedora.linux_system_roles.podman : Reboot transactional update systems] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:33
Saturday 04 October 2025 12:36:28 -0400 (0:00:00.051) 0:00:17.575 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if reboot is needed and not set] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:38
Saturday 04 October 2025 12:36:28 -0400 (0:00:00.048) 0:00:17.625 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}
TASK [fedora.linux_system_roles.podman : Get podman version] *******************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:46
Saturday 04 October 2025 12:36:28 -0400 (0:00:00.048) 0:00:17.674 ******
ok: [managed-node2] => {
    "changed": false,
    "cmd": [
        "podman",
        "--version"
    ],
    "delta": "0:00:00.030066",
    "end": "2025-10-04 12:36:28.587018",
    "rc": 0,
    "start": "2025-10-04 12:36:28.556952"
}

STDOUT:

podman version 4.9.4-dev

TASK [fedora.linux_system_roles.podman : Set podman version] *******************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:52
Saturday 04 October 2025 12:36:28 -0400 (0:00:00.418) 0:00:18.092 ******
ok: [managed-node2] => {
    "ansible_facts": {
        "podman_version": "4.9.4-dev"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56
Saturday 04 October 2025 12:36:28 -0400 (0:00:00.037) 0:00:18.130 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:63
Saturday 04 October 2025 12:36:28 -0400 (0:00:00.038) 0:00:18.168 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}
META: end_host conditional evaluated to false, continuing execution for managed-node2

TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80
Saturday 04 October 2025 12:36:28 -0400 (0:00:00.147) 0:00:18.315 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}
META: end_host conditional evaluated to false, continuing execution for managed-node2

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:109
Saturday 04 October 2025 12:36:28 -0400 (0:00:00.075) 0:00:18.391 ******
included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 04 October 2025 12:36:29 -0400 (0:00:00.061) 0:00:18.452 ******
ok: [managed-node2] => {
    "ansible_facts": {
        "getent_passwd": {
            "root": [
                "x",
                "0",
                "0",
                "root",
                "/root",
                "/bin/bash"
            ]
        }
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 04 October 2025 12:36:29 -0400 (0:00:00.456) 0:00:18.909 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}
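[Editor's note: the three version-gate tasks above all skip because podman 4.9.4-dev satisfies them. A sketch of how such a gate can be expressed with the Jinja "version" test; the role's actual fail-then-end_host split is simplified here to a single assertion:

  - name: Podman package version must be 4.4 or later for quadlet, secrets
    assert:
      that: podman_version is version("4.4", ">=")
      fail_msg: "podman {{ podman_version }} is too old for quadlet support"

The "META: end_host" lines in the log show the alternative the role actually uses: when the version is too old, it ends processing for that host instead of failing the whole play.]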
TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 04 October 2025 12:36:29 -0400 (0:00:00.039) 0:00:18.948 ******
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_group": "0"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 04 October 2025 12:36:29 -0400 (0:00:00.042) 0:00:18.990 ******
ok: [managed-node2] => {
    "changed": false,
    "stat": {
        "atime": 1759595444.223306,
        "attr_flags": "",
        "attributes": [],
        "block_size": 4096,
        "blocks": 32,
        "charset": "binary",
        "checksum": "bb5b46ffbafcaa8c4021f3c8b3cb8594f48ef34b",
        "ctime": 1759595415.692989,
        "dev": 51713,
        "device_type": 0,
        "executable": true,
        "exists": true,
        "gid": 0,
        "gr_name": "root",
        "inode": 6884013,
        "isblk": false,
        "ischr": false,
        "isdir": false,
        "isfifo": false,
        "isgid": false,
        "islnk": false,
        "isreg": true,
        "issock": false,
        "isuid": false,
        "mimetype": "application/x-sharedlib",
        "mode": "0755",
        "mtime": 1700557386.0,
        "nlink": 1,
        "path": "/usr/bin/getsubids",
        "pw_name": "root",
        "readable": true,
        "rgrp": true,
        "roth": true,
        "rusr": true,
        "size": 12640,
        "uid": 0,
        "version": "2755563640",
        "wgrp": false,
        "woth": false,
        "writeable": true,
        "wusr": true,
        "xgrp": true,
        "xoth": true,
        "xusr": true
    }
}

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 04 October 2025 12:36:29 -0400 (0:00:00.350) 0:00:19.340 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 04 October 2025 12:36:29 -0400 (0:00:00.033) 0:00:19.374 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 04 October 2025 12:36:29 -0400 (0:00:00.033) 0:00:19.407 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 04 October 2025 12:36:30 -0400 (0:00:00.046) 0:00:19.454 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 04 October 2025 12:36:30 -0400 (0:00:00.043) 0:00:19.497 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}
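[Editor's note: all of the subuid/subgid lookups above skip because the containers here run as root; they only matter for a rootless podman_run_as_user. A rough sketch of the getsubids path for that case, not the role's exact tasks, with __podman_user as an illustrative variable:

  - name: Check with getsubids for user subuids
    command: getsubids {{ __podman_user | quote }}
    register: __subuid_out
    changed_when: false
    # getsubids prints the allocated range for the user; the role would parse
    # the start and length to confirm enough UIDs are mapped for rootless
    # containers, falling back to reading /etc/subuid when getsubids is absent

The "See if getsubids exists" stat above is exactly that fallback decision: /usr/bin/getsubids exists here, so the file-parsing branch skips.]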
TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 04 October 2025 12:36:30 -0400 (0:00:00.052) 0:00:19.549 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 04 October 2025 12:36:30 -0400 (0:00:00.053) 0:00:19.603 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 04 October 2025 12:36:30 -0400 (0:00:00.052) 0:00:19.656 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Set config file paths] ****************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:115
Saturday 04 October 2025 12:36:30 -0400 (0:00:00.053) 0:00:19.709 ******
ok: [managed-node2] => {
    "ansible_facts": {
        "__podman_container_conf_file": "/etc/containers/containers.conf.d/50-systemroles.conf",
        "__podman_parent_mode": "0755",
        "__podman_parent_path": "/etc/containers",
        "__podman_policy_json_file": "/etc/containers/policy.json",
        "__podman_registries_conf_file": "/etc/containers/registries.conf.d/50-systemroles.conf",
        "__podman_storage_conf_file": "/etc/containers/storage.conf"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.podman : Handle container.conf.d] **************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:126
Saturday 04 October 2025 12:36:30 -0400 (0:00:00.115) 0:00:19.825 ******
included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure containers.d exists] ***********
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5
Saturday 04 October 2025 12:36:30 -0400 (0:00:00.097) 0:00:19.923 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Update container config file] *********
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13
Saturday 04 October 2025 12:36:30 -0400 (0:00:00.039) 0:00:19.963 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] *************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:129
Saturday 04 October 2025 12:36:30 -0400 (0:00:00.074) 0:00:20.037 ******
included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] ***********
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5
Saturday 04 October 2025 12:36:30 -0400 (0:00:00.065) 0:00:20.102 ******
skipping: [managed-node2] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}
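[Editor's note: the "Set config file paths" facts above show where the role would write drop-in configuration, but every handler here skips, presumably because this test sets none of the corresponding role variables. A sketch of what opting in might look like, assuming the role's documented podman_registries_conf variable (the exact schema is an assumption; it is rendered into the 50-systemroles.conf drop-in shown in the facts):

  - name: Run the role with a registries drop-in
    include_role:
      name: fedora.linux_system_roles.podman
    vars:
      podman_registries_conf:   # rendered to /etc/containers/registries.conf.d/50-systemroles.conf
        unqualified-search-registries:
          - registry.access.redhat.com
          - quay.io

The same pattern applies to podman_containers_conf, podman_storage_conf, and podman_policy_json for the other three handlers.]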
[fedora.linux_system_roles.podman : Update registries config file] ******** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13 Saturday 04 October 2025 12:36:30 -0400 (0:00:00.033) 0:00:20.136 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle storage.conf] ****************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:132 Saturday 04 October 2025 12:36:30 -0400 (0:00:00.030) 0:00:20.166 ****** included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent dir exists] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:7 Saturday 04 October 2025 12:36:30 -0400 (0:00:00.060) 0:00:20.227 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update storage config file] *********** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:15 Saturday 04 October 2025 12:36:30 -0400 (0:00:00.034) 0:00:20.262 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle policy.json] ******************* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:135 Saturday 04 October 2025 12:36:30 -0400 (0:00:00.044) 0:00:20.306 ****** included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:8 Saturday 04 October 2025 12:36:30 -0400 (0:00:00.103) 0:00:20.409 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************ task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:16 Saturday 04 October 2025 12:36:31 -0400 (0:00:00.054) 0:00:20.464 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get the existing policy.json] ********* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:21 Saturday 04 October 2025 12:36:31 -0400 (0:00:00.048) 0:00:20.512 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Write new policy.json file] *********** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:27 Saturday 04 October 2025 12:36:31 -0400 (0:00:00.051) 0:00:20.564 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Manage firewall for specified ports] ************************************* task path: 
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:141 Saturday 04 October 2025 12:36:31 -0400 (0:00:00.050) 0:00:20.615 ****** TASK [fedora.linux_system_roles.firewall : Setup firewalld] ******************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:2 Saturday 04 October 2025 12:36:31 -0400 (0:00:00.165) 0:00:20.780 ****** included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml for managed-node2 TASK [fedora.linux_system_roles.firewall : Ensure ansible_facts used by role] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:2 Saturday 04 October 2025 12:36:31 -0400 (0:00:00.141) 0:00:20.921 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Check if system is ostree] ********** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:10 Saturday 04 October 2025 12:36:31 -0400 (0:00:00.068) 0:00:20.990 ****** ok: [managed-node2] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.firewall : Set flag to indicate system is ostree] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:15 Saturday 04 October 2025 12:36:31 -0400 (0:00:00.382) 0:00:21.372 ****** ok: [managed-node2] => { "ansible_facts": { "__firewall_is_ostree": false }, "changed": false } TASK [fedora.linux_system_roles.firewall : Check if transactional-update exists in /sbin] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:22 Saturday 04 October 2025 12:36:32 -0400 (0:00:00.056) 0:00:21.429 ****** ok: [managed-node2] => { "changed": false, "stat": { "exists": false } } TASK [fedora.linux_system_roles.firewall : Set flag if transactional-update exists] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:27 Saturday 04 October 2025 12:36:32 -0400 (0:00:00.393) 0:00:21.822 ****** ok: [managed-node2] => { "ansible_facts": { "__firewall_is_transactional": false }, "changed": false } TASK [fedora.linux_system_roles.firewall : Run systemctl] ********************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:34 Saturday 04 October 2025 12:36:32 -0400 (0:00:00.058) 0:00:21.880 ****** ok: [managed-node2] => { "changed": false, "cmd": [ "systemctl", "is-system-running" ], "delta": "0:00:00.007110", "end": "2025-10-04 12:36:32.782109", "failed_when_result": false, "rc": 0, "start": "2025-10-04 12:36:32.774999" } STDOUT: running TASK [fedora.linux_system_roles.firewall : Require installed systemd] ********** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:41 Saturday 04 October 2025 12:36:32 -0400 (0:00:00.407) 0:00:22.288 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Set flag to indicate that systemd runtime operations are available] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:46 Saturday 04 October 2025 12:36:32 -0400 (0:00:00.052) 
0:00:22.341 ****** ok: [managed-node2] => { "ansible_facts": { "__firewall_is_booted": true }, "changed": false } TASK [fedora.linux_system_roles.firewall : Install firewalld] ****************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:51 Saturday 04 October 2025 12:36:32 -0400 (0:00:00.053) 0:00:22.394 ****** ok: [managed-node2] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: firewalld TASK [fedora.linux_system_roles.firewall : Notify user that reboot is needed to apply changes] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:63 Saturday 04 October 2025 12:36:35 -0400 (0:00:02.441) 0:00:24.835 ****** skipping: [managed-node2] => {} TASK [fedora.linux_system_roles.firewall : Reboot transactional update systems] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:68 Saturday 04 October 2025 12:36:35 -0400 (0:00:00.031) 0:00:24.866 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Fail if reboot is needed and not set] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:73 Saturday 04 October 2025 12:36:35 -0400 (0:00:00.036) 0:00:24.902 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Check which conflicting services are enabled] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:5 Saturday 04 October 2025 12:36:35 -0400 (0:00:00.048) 0:00:24.951 ****** skipping: [managed-node2] => (item=nftables) => { "ansible_loop_var": "item", "changed": false, "item": "nftables", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=iptables) => { "ansible_loop_var": "item", "changed": false, "item": "iptables", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=ufw) => { "ansible_loop_var": "item", "changed": false, "item": "ufw", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Attempt to stop and disable conflicting services] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:14 Saturday 04 October 2025 12:36:35 -0400 (0:00:00.055) 0:00:25.006 ****** skipping: [managed-node2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': 'nftables', 'ansible_loop_var': 'item'}) => { "ansible_loop_var": "item", "changed": false, "item": { "ansible_loop_var": "item", "changed": false, "item": "nftables", "skip_reason": "Conditional result was False", "skipped": true }, "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': 'iptables', 'ansible_loop_var': 'item'}) => { "ansible_loop_var": "item", "changed": false, "item": { "ansible_loop_var": "item", "changed": false, "item": "iptables", "skip_reason": "Conditional result was False", "skipped": true }, "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': 'ufw', 
'ansible_loop_var': 'item'}) => { "ansible_loop_var": "item", "changed": false, "item": { "ansible_loop_var": "item", "changed": false, "item": "ufw", "skip_reason": "Conditional result was False", "skipped": true }, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Unmask firewalld service] *********** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:24 Saturday 04 October 2025 12:36:35 -0400 (0:00:00.054) 0:00:25.060 ****** ok: [managed-node2] => { "changed": false, "name": "firewalld", "status": { "ActiveEnterTimestamp": "Sat 2025-10-04 12:30:50 EDT", "ActiveEnterTimestampMonotonic": "277910710", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "dbus.service polkit.service dbus.socket basic.target system.slice sysinit.target", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Sat 2025-10-04 12:30:50 EDT", "AssertTimestampMonotonic": "277605912", "Before": "network-pre.target shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "no", "CPUAffinity": "", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2025-10-04 12:30:50 EDT", "ConditionTimestampMonotonic": "277605911", "ConfigurationDirectoryMode": "0755", "Conflicts": "iptables.service ebtables.service ip6tables.service ipset.service nftables.service shutdown.target", "ControlGroup": "/system.slice/firewalld.service", "ControlPID": "0", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "man:firewalld(1)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "12889", "ExecMainStartTimestamp": "Sat 2025-10-04 12:30:50 EDT", "ExecMainStartTimestampMonotonic": "277607664", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 
}", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2025-10-04 12:30:50 EDT", "InactiveExitTimestampMonotonic": "277607697", "InvocationID": "ad22fbe355574cf4b89374213cad5726", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "14003", "LimitNPROCSoft": "14003", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14003", "LimitSIGPENDINGSoft": "14003", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "12889", "MemoryAccounting": "yes", "MemoryCurrent": "42958848", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "firewalld.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice sysinit.target dbus.socket", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": 
"[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2025-10-04 12:30:50 EDT", "StateChangeTimestampMonotonic": "277910710", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22405", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogTimestamp": "Sat 2025-10-04 12:30:50 EDT", "WatchdogTimestampMonotonic": "277910707", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Enable and start firewalld service] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:30 Saturday 04 October 2025 12:36:36 -0400 (0:00:00.510) 0:00:25.570 ****** ok: [managed-node2] => { "changed": false, "enabled": true, "name": "firewalld", "state": "started", "status": { "ActiveEnterTimestamp": "Sat 2025-10-04 12:30:50 EDT", "ActiveEnterTimestampMonotonic": "277910710", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "dbus.service polkit.service dbus.socket basic.target system.slice sysinit.target", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Sat 2025-10-04 12:30:50 EDT", "AssertTimestampMonotonic": "277605912", "Before": "network-pre.target shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "no", "CPUAffinity": "", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2025-10-04 12:30:50 EDT", "ConditionTimestampMonotonic": "277605911", "ConfigurationDirectoryMode": "0755", "Conflicts": "iptables.service ebtables.service ip6tables.service ipset.service nftables.service shutdown.target", "ControlGroup": "/system.slice/firewalld.service", "ControlPID": "0", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", 
"DevicePolicy": "auto", "Documentation": "man:firewalld(1)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "12889", "ExecMainStartTimestamp": "Sat 2025-10-04 12:30:50 EDT", "ExecMainStartTimestampMonotonic": "277607664", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2025-10-04 12:30:50 EDT", "InactiveExitTimestampMonotonic": "277607697", "InvocationID": "ad22fbe355574cf4b89374213cad5726", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "14003", "LimitNPROCSoft": "14003", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14003", "LimitSIGPENDINGSoft": "14003", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "12889", "MemoryAccounting": "yes", "MemoryCurrent": "42958848", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "firewalld.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", 
"RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice sysinit.target dbus.socket", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2025-10-04 12:30:50 EDT", "StateChangeTimestampMonotonic": "277910710", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22405", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogTimestamp": "Sat 2025-10-04 12:30:50 EDT", "WatchdogTimestampMonotonic": "277910707", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Check if previous replaced is defined] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:36 Saturday 04 October 2025 12:36:36 -0400 (0:00:00.508) 0:00:26.078 ****** ok: [managed-node2] => { "ansible_facts": { "__firewall_previous_replaced": false, "__firewall_python_cmd": "/usr/libexec/platform-python", "__firewall_report_changed": true }, "changed": false } TASK [fedora.linux_system_roles.firewall : Get config files, checksums before and remove] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:45 Saturday 04 October 2025 12:36:36 -0400 (0:00:00.051) 0:00:26.130 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Tell firewall module it is able to report changed] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:58 Saturday 04 October 2025 12:36:36 -0400 (0:00:00.040) 0:00:26.171 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Configure firewall] ***************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:74 Saturday 04 October 2025 12:36:36 -0400 (0:00:00.035) 0:00:26.207 ****** changed: [managed-node2] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "__firewall_changed": true, "ansible_loop_var": "item", "changed": true, "item": { "port": "8000/tcp", "state": "enabled" } } changed: [managed-node2] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "__firewall_changed": 
true, "ansible_loop_var": "item", "changed": true, "item": { "port": "9000/tcp", "state": "enabled" } } TASK [fedora.linux_system_roles.firewall : Gather firewall config information] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:126 Saturday 04 October 2025 12:36:38 -0400 (0:00:01.313) 0:00:27.520 ****** skipping: [managed-node2] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "item": { "port": "8000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "item": { "port": "9000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] ******* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:137 Saturday 04 October 2025 12:36:38 -0400 (0:00:00.077) 0:00:27.598 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Gather firewall config if no arguments] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:146 Saturday 04 October 2025 12:36:38 -0400 (0:00:00.054) 0:00:27.653 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] ******* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:152 Saturday 04 October 2025 12:36:38 -0400 (0:00:00.053) 0:00:27.707 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Get config files, checksums after] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:161 Saturday 04 October 2025 12:36:38 -0400 (0:00:00.051) 0:00:27.758 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Calculate what has changed] ********* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:172 Saturday 04 October 2025 12:36:38 -0400 (0:00:00.051) 0:00:27.810 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Show diffs] ************************* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:178 Saturday 04 October 2025 12:36:38 -0400 (0:00:00.048) 0:00:27.858 ****** skipping: [managed-node2] => {} TASK [Manage selinux for specified ports] ************************************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:148 Saturday 04 October 2025 12:36:38 -0400 (0:00:00.051) 0:00:27.910 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:155 Saturday 04 October 2025 12:36:38 -0400 
(0:00:00.050) 0:00:27.960 ****** ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - present] ******* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:159 Saturday 04 October 2025 12:36:38 -0400 (0:00:00.046) 0:00:28.007 ****** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle credential files - present] **** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:168 Saturday 04 October 2025 12:36:38 -0400 (0:00:00.049) 0:00:28.056 ****** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle secrets] *********************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:177 Saturday 04 October 2025 12:36:38 -0400 (0:00:00.048) 0:00:28.104 ****** included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Set variables part 1] ***************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3 Saturday 04 October 2025 12:36:38 -0400 (0:00:00.256) 0:00:28.361 ****** ok: [managed-node2] => { "ansible_facts": { "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7 Saturday 04 October 2025 12:36:38 -0400 (0:00:00.053) 0:00:28.415 ****** included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 04 October 2025 12:36:39 -0400 (0:00:00.094) 0:00:28.509 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 04 October 2025 12:36:39 -0400 (0:00:00.062) 0:00:28.572 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 04 October 2025 12:36:39 -0400 (0:00:00.060) 0:00:28.632 ****** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman 
: See if getsubids exists] ************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 04 October 2025 12:36:39 -0400 (0:00:00.066) 0:00:28.699 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 04 October 2025 12:36:39 -0400 (0:00:00.049) 0:00:28.749 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 04 October 2025 12:36:39 -0400 (0:00:00.049) 0:00:28.799 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 04 October 2025 12:36:39 -0400 (0:00:00.049) 0:00:28.848 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 04 October 2025 12:36:39 -0400 (0:00:00.051) 0:00:28.900 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 04 October 2025 12:36:39 -0400 (0:00:00.048) 0:00:28.948 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 04 October 2025 12:36:39 -0400 (0:00:00.049) 0:00:28.997 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 04 October 2025 12:36:39 -0400 (0:00:00.049) 0:00:29.046 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 04 October 2025 12:36:39 -0400 (0:00:00.050) 0:00:29.097 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set variables part 2] ***************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:15 Saturday 04 October 2025 
12:36:39 -0400 (0:00:00.045) 0:00:29.143 ******
ok: [managed-node2] => { "ansible_facts": { "__podman_rootless": false, "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Manage linger] ************************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:21
Saturday 04 October 2025 12:36:39 -0400 (0:00:00.045) 0:00:29.189 ******
included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Enable linger if needed] **************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12
Saturday 04 October 2025 12:36:39 -0400 (0:00:00.086) 0:00:29.275 ******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18
Saturday 04 October 2025 12:36:39 -0400 (0:00:00.067) 0:00:29.342 ******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] ***
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22
Saturday 04 October 2025 12:36:39 -0400 (0:00:00.030) 0:00:29.373 ******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] *****************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:26
Saturday 04 October 2025 12:36:39 -0400 (0:00:00.031) 0:00:29.405 ******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Manage each secret] *******************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42
Saturday 04 October 2025 12:36:40 -0400 (0:00:00.031) 0:00:29.436 ******
fatal: [managed-node2]: FAILED! => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result" }

TASK [Dump journal] ************************************************************
task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:142
Saturday 04 October 2025 12:36:40 -0400 (0:00:00.034) 0:00:29.471 ******
fatal: [managed-node2]: FAILED! => { "changed": false, "cmd": [ "journalctl", "-ex" ], "delta": "0:00:00.027706", "end": "2025-10-04 12:36:40.354159", "failed_when_result": true, "rc": 0, "start": "2025-10-04 12:36:40.326453" }

STDOUT:

-- Logs begin at Sat 2025-10-04 12:26:12 EDT, end at Sat 2025-10-04 12:36:40 EDT.
-- Oct 04 12:31:06 managed-node2 platform-python[14699]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Oct 04 12:31:10 managed-node2 platform-python[14822]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:31:12 managed-node2 platform-python[14947]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:31:13 managed-node2 platform-python[15070]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:31:13 managed-node2 platform-python[15193]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 04 12:31:13 managed-node2 platform-python[15292]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/nopull.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595473.2695348-10044-42164822916015/source _original_basename=tmp0bj2sg47 follow=False checksum=d5dc917e3cae36de03aa971a17ac473f86fdf934 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Oct 04 12:31:14 managed-node2 platform-python[15417]: ansible-containers.podman.podman_play Invoked with state=created kube_file=/etc/containers/ansible-kubernetes.d/nopull.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:31:14 managed-node2 kernel: evm: overlay not supported Oct 04 12:31:14 managed-node2 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck4171139467-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-metacopy\x2dcheck4171139467-merged.mount has successfully entered the 'dead' state. Oct 04 12:31:14 managed-node2 systemd[1]: Created slice machine.slice. -- Subject: Unit machine.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:31:14 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice. 
-- Subject: Unit machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:31:15 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Oct 04 12:31:19 managed-node2 platform-python[15743]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:31:20 managed-node2 platform-python[15872]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:31:23 managed-node2 platform-python[15997]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:31:26 managed-node2 platform-python[16120]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Oct 04 12:31:26 managed-node2 platform-python[16247]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Oct 04 12:31:27 managed-node2 platform-python[16374]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:31:29 managed-node2 platform-python[16497]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:31:32 managed-node2 platform-python[16620]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True 
install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:31:34 managed-node2 platform-python[16743]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:31:37 managed-node2 platform-python[16866]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 04 12:31:38 managed-node2 platform-python[17014]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Oct 04 12:31:39 managed-node2 platform-python[17137]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Oct 04 12:31:43 managed-node2 platform-python[17260]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:31:46 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
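
For reference, the local_seport invocations recorded just above are the kind the fedora.linux_system_roles.selinux role produces when given a port mapping. A minimal sketch of the driving variables, assuming the logged arguments (ports 15001-15003/tcp, setype http_port_t) map straight onto the role's selinux_ports input:

    selinux_ports:
      - ports: 15001-15003
        proto: tcp
        setype: http_port_t
        state: present

After applying the mapping, the role regathers its module facts, which is consistent with the selinux_modules_facts entry that immediately follows each local_seport call in this journal.
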
Oct 04 12:31:46 managed-node2 platform-python[17523]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:31:47 managed-node2 platform-python[17646]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:31:47 managed-node2 platform-python[17769]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 04 12:31:47 managed-node2 platform-python[17868]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/bogus.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595507.1829739-11549-146476787522942/source _original_basename=tmpisytpwv2 follow=False checksum=f8266a972ed3be7e204d2a67883fe3a22b8dbf18 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Oct 04 12:31:48 managed-node2 platform-python[17993]: ansible-containers.podman.podman_play Invoked with state=created kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:31:48 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice. -- Subject: Unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:31:48 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
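
The podman_play invocation logged above corresponds to a task of roughly this shape; a sketch only, with the kube file path and state copied from the logged arguments:

    - name: Create pod from a kube file
      containers.podman.podman_play:
        kube_file: /etc/containers/ansible-kubernetes.d/bogus.yml
        state: created

With state: created the module asks podman to set the pod and its containers up without starting them, which matches the slice-creation entries systemd logs right after the call.
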
Oct 04 12:31:51 managed-node2 platform-python[18280]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 04 12:31:52 managed-node2 platform-python[18409]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 04 12:31:55 managed-node2 platform-python[18534]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 04 12:31:58 managed-node2 platform-python[18657]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None
Oct 04 12:31:59 managed-node2 platform-python[18784]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None
Oct 04 12:31:59 managed-node2 platform-python[18911]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None
Oct 04 12:32:01 managed-node2 platform-python[19034]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 04 12:32:04 managed-node2 platform-python[19157]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 04 12:32:06 managed-node2 platform-python[19280]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 04 12:32:09 managed-node2 platform-python[19403]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 04 12:32:11 managed-node2 platform-python[19551]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True
Oct 04 12:32:11 managed-node2 platform-python[19674]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked
Oct 04 12:32:15 managed-node2 platform-python[19797]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 04 12:32:17 managed-node2 platform-python[19922]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 04 12:32:17 managed-node2 platform-python[20046]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None
Oct 04 12:32:18 managed-node2 platform-python[20173]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 04 12:32:18 managed-node2 platform-python[20298]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/nopull.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 04 12:32:18 managed-node2 platform-python[20298]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/nopull.yml
Oct 04 12:32:18 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice.
Oct 04 12:32:18 managed-node2 systemd[1]: machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice: Consumed 0 CPU time
Oct 04 12:32:18 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
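The firewalld preparation seen above (install firewalld, ensure the service is enabled and started, then open 15001-15003/tcp permanently and at runtime) is what the fedora.linux_system_roles.firewall role performs internally. A minimal sketch of the equivalent public interface, assuming the role's documented firewall variable (the actual variables used by this test are not visible in this log):

  - name: Open the test ports via the firewall role
    include_role:
      name: fedora.linux_system_roles.firewall
    vars:
      firewall:
        - port: 15001-15003/tcp
          state: enabled
          permanent: true
          runtime: true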
Oct 04 12:32:19 managed-node2 platform-python[20436]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Oct 04 12:32:19 managed-node2 platform-python[20559]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 04 12:32:22 managed-node2 platform-python[20814]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 04 12:32:23 managed-node2 platform-python[20942]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 04 12:32:27 managed-node2 platform-python[21067]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 04 12:32:29 managed-node2 platform-python[21190]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None
Oct 04 12:32:30 managed-node2 platform-python[21317]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None
Oct 04 12:32:31 managed-node2 platform-python[21444]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None
Oct 04 12:32:33 managed-node2 platform-python[21567]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 04 12:32:35 managed-node2 platform-python[21690]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 04 12:32:38 managed-node2 platform-python[21813]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 04 12:32:40 managed-node2 platform-python[21936]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 04 12:32:42 managed-node2 platform-python[22084]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True
Oct 04 12:32:43 managed-node2 platform-python[22207]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked
Oct 04 12:32:47 managed-node2 platform-python[22330]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 04 12:32:48 managed-node2 platform-python[22455]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 04 12:32:49 managed-node2 platform-python[22579]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None
Oct 04 12:32:50 managed-node2 platform-python[22706]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 04 12:32:50 managed-node2 platform-python[22831]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 04 12:32:50 managed-node2 platform-python[22831]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/bogus.yml
Oct 04 12:32:50 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice.
Oct 04 12:32:50 managed-node2 systemd[1]: machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice: Consumed 0 CPU time
Oct 04 12:32:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
Oct 04 12:32:50 managed-node2 platform-python[22970]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Oct 04 12:32:51 managed-node2 platform-python[23093]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 04 12:32:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
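The SELinux half of each setup round follows the same pattern: gather ansible_selinux facts, then map ports 15001-15003/tcp to http_port_t via the role-internal local_seport module. Outside the role, the same port labeling can be expressed with the community.general.seport module; a sketch, assuming the community.general collection is available (this log does not show it being used):

  - name: Label the test ports as http_port_t
    community.general.seport:
      ports: 15001-15003
      proto: tcp
      setype: http_port_t
      state: present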
Oct 04 12:32:54 managed-node2 platform-python[23349]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 04 12:32:56 managed-node2 platform-python[23478]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 04 12:32:59 managed-node2 platform-python[23603]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 04 12:33:00 managed-node2 chronyd[603]: Detected falseticker 74.208.25.46 (2.centos.pool.ntp.org)
Oct 04 12:33:01 managed-node2 platform-python[23726]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None
Oct 04 12:33:02 managed-node2 platform-python[23853]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None
Oct 04 12:33:03 managed-node2 platform-python[23980]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None
Oct 04 12:33:05 managed-node2 platform-python[24103]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 04 12:33:07 managed-node2 platform-python[24226]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 04 12:33:10 managed-node2 platform-python[24349]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 04 12:33:12 managed-node2 platform-python[24472]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 04 12:33:14 managed-node2 platform-python[24620]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True
Oct 04 12:33:15 managed-node2 platform-python[24743]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked
Oct 04 12:33:19 managed-node2 platform-python[24866]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None
Oct 04 12:33:19 managed-node2 platform-python[24990]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 04 12:33:20 managed-node2 platform-python[25115]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 04 12:33:20 managed-node2 platform-python[25239]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 04 12:33:21 managed-node2 platform-python[25363]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 04 12:33:22 managed-node2 platform-python[25487]: ansible-command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None
Oct 04 12:33:22 managed-node2 systemd[1]: Created slice User Slice of UID 3001.
Oct 04 12:33:22 managed-node2 systemd[1]: Starting User runtime directory /run/user/3001...
Oct 04 12:33:22 managed-node2 systemd[1]: Started User runtime directory /run/user/3001.
Oct 04 12:33:22 managed-node2 systemd[1]: Starting User Manager for UID 3001...
Oct 04 12:33:22 managed-node2 systemd[25493]: pam_unix(systemd-user:session): session opened for user podman_basic_user by (uid=0)
Oct 04 12:33:22 managed-node2 systemd[25493]: Starting D-Bus User Message Bus Socket.
Oct 04 12:33:22 managed-node2 systemd[25493]: Reached target Paths.
Oct 04 12:33:22 managed-node2 systemd[25493]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 04 12:33:22 managed-node2 systemd[25493]: Reached target Timers.
Oct 04 12:33:22 managed-node2 systemd[25493]: Listening on D-Bus User Message Bus Socket.
Oct 04 12:33:22 managed-node2 systemd[25493]: Reached target Sockets.
Oct 04 12:33:22 managed-node2 systemd[25493]: Reached target Basic System.
Oct 04 12:33:22 managed-node2 systemd[25493]: Reached target Default.
Oct 04 12:33:22 managed-node2 systemd[25493]: Startup finished in 26ms.
Oct 04 12:33:22 managed-node2 systemd[1]: Started User Manager for UID 3001.
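The loginctl enable-linger command above is what triggers the user-3001.slice, user-runtime-dir@3001.service and user@3001.service start-ups that follow: lingering starts a per-user systemd manager so the rootless user's podman units can run without an interactive login. As a task it is a plain command guarded by creates= for idempotence, sketched here from the logged arguments (the task name is illustrative):

  - name: Enable lingering for the rootless test user
    command: loginctl enable-linger podman_basic_user
    args:
      creates: /var/lib/systemd/linger/podman_basic_user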
Oct 04 12:33:23 managed-node2 platform-python[25628]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Oct 04 12:33:23 managed-node2 platform-python[25751]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Oct 04 12:33:23 managed-node2 sudo[25874]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sifywbsrisccwijbkunlnmxfrflsisdd ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595603.7086673-15808-227803905840648/AnsiballZ_podman_image.py'
Oct 04 12:33:23 managed-node2 sudo[25874]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)
Oct 04 12:33:24 managed-node2 systemd[25493]: Started D-Bus User Message Bus.
Oct 04 12:33:24 managed-node2 systemd[25493]: Created slice user.slice.
Oct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25886.scope.
Oct 04 12:33:24 managed-node2 systemd[25493]: Started podman-pause-f03acc05.scope.
Oct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25902.scope.
Oct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25917.scope.
Oct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25926.scope.
Oct 04 12:33:25 managed-node2 systemd[25493]: Started podman-25933.scope.
Oct 04 12:33:25 managed-node2 systemd[25493]: Started podman-25942.scope.
Oct 04 12:33:25 managed-node2 sudo[25874]: pam_unix(sudo:session): session closed for user podman_basic_user
Oct 04 12:33:25 managed-node2 platform-python[26071]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 04 12:33:25 managed-node2 platform-python[26194]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Oct 04 12:33:26 managed-node2 platform-python[26317]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 04 12:33:26 managed-node2 platform-python[26416]: ansible-copy Invoked with dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml owner=podman_basic_user group=3001 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595606.1679773-15914-270785518997063/source _original_basename=tmpck_isd86 follow=False checksum=4df6e405cb1c69d6fda71fca57ba10095c6652bf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None
Oct 04 12:33:26 managed-node2 sudo[26541]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhtmywupgtnbvlezrrhjughqngefyblk ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595606.8491445-15948-152734101610115/AnsiballZ_podman_play.py'
Oct 04 12:33:26 managed-node2 sudo[26541]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)
Oct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 04 12:33:27 managed-node2 systemd[25493]: Started podman-26552.scope.
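The podman_play run above executes as podman_basic_user through sudo with XDG_RUNTIME_DIR=/run/user/3001, which is how Ansible targets the rootless user's systemd and podman instance. Reconstructed as a sketch from the logged command line and module arguments (the become settings are inferred from the sudo lines; the task name is illustrative):

  - name: Start the httpd1 pod as the rootless user
    containers.podman.podman_play:
      kube_file: /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
      state: started
      debug: true
      log_level: debug
    become: true
    become_user: podman_basic_user
    environment:
      XDG_RUNTIME_DIR: /run/user/3001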
Oct 04 12:33:27 managed-node2 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 04 12:33:27 managed-node2 systemd[25493]: Started rootless-netns-edb70a77.scope.
Oct 04 12:33:27 managed-node2 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth2fe45075: link is not ready
Oct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered blocking state
Oct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered disabled state
Oct 04 12:33:27 managed-node2 kernel: device veth2fe45075 entered promiscuous mode
Oct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Oct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Oct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth2fe45075: link becomes ready
Oct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered blocking state
Oct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered forwarding state
Oct 04 12:33:27 managed-node2 dnsmasq[26740]: listening on cni-podman1(#3): 10.89.0.1
Oct 04 12:33:27 managed-node2 dnsmasq[26742]: started, version 2.79 cachesize 150
Oct 04 12:33:27 managed-node2 dnsmasq[26742]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify
Oct 04 12:33:27 managed-node2 dnsmasq[26742]: using local addresses only for domain dns.podman
Oct 04 12:33:27 managed-node2 dnsmasq[26742]: reading /etc/resolv.conf
Oct 04 12:33:27 managed-node2 dnsmasq[26742]: using local addresses only for domain dns.podman
Oct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.0.2.3#53
Oct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.29.169.13#53
Oct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.29.170.12#53
Oct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.2.32.1#53
Oct 04 12:33:27 managed-node2 dnsmasq[26742]: read /run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses
Oct 04 12:33:27 managed-node2 conmon[26754]: conmon 978f42b0916c823a3a50 : failed to write to /proc/self/oom_score_adj: Permission denied
Oct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/14/attach}
Oct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : terminal_ctrl_fd: 14
Oct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : winsz read side: 17, winsz write side: 18
Oct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : container PID: 26765
Oct 04 12:33:27 managed-node2 conmon[26775]: conmon 4c95f0539eb18fb7ecd6 : failed to write to /proc/self/oom_score_adj: Permission denied
Oct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}
Oct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : terminal_ctrl_fd: 13
Oct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : winsz read side: 16, winsz write side: 17
Oct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : container PID: 26786
Oct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
Oct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod: 4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d Container: 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8
Oct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-10-04T12:33:27-04:00" level=info msg="/bin/podman filtering at log level debug" time="2025-10-04T12:33:27-04:00" level=debug msg="Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)" time="2025-10-04T12:33:27-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-10-04T12:33:27-04:00" level=info msg="Using sqlite as database backend" time="2025-10-04T12:33:27-04:00" level=debug msg="Using graph driver overlay" time="2025-10-04T12:33:27-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Using run root /run/user/3001/containers" time="2025-10-04T12:33:27-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" time="2025-10-04T12:33:27-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" time="2025-10-04T12:33:27-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" time="2025-10-04T12:33:27-04:00" level=debug msg="Using transient store: false" time="2025-10-04T12:33:27-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:33:27-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:33:27-04:00" level=debug msg="Cached value indicated that metacopy is not being used" time="2025-10-04T12:33:27-04:00" level=debug msg="Cached value indicated that native-diff is usable" time="2025-10-04T12:33:27-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" time="2025-10-04T12:33:27-04:00" level=debug msg="Initializing event backend file" time="2025-10-04T12:33:27-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-10-04T12:33:27-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2025-10-04T12:33:27-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-10-04T12:33:27-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-10-04T12:33:27-04:00" level=debug msg="Configured OCI runtime runj
initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-10-04T12:33:27-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2025-10-04T12:33:27-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2025-10-04T12:33:27-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-10-04T12:33:27-04:00" level=info msg="Setting parallel job count to 7" time="2025-10-04T12:33:27-04:00" level=debug msg="Successfully loaded 1 networks" time="2025-10-04T12:33:27-04:00" level=debug msg="found free device name cni-podman1" time="2025-10-04T12:33:27-04:00" level=debug msg="found free ipv4 network subnet 10.89.0.0/24" time="2025-10-04T12:33:27-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:33:27.193743577 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-10-04T12:33:27-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-10-04T12:33:27-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-10-04T12:33:27-04:00" level=debug msg="reference \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\" does not resolve to an image ID" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-10-04T12:33:27-04:00" level=debug msg="reference \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\" does not resolve to an image ID" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." 
time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="FROM \"scratch\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are not supported" time="2025-10-04T12:33:27-04:00" level=debug msg="Check for idmapped mounts support " time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="overlay: test mount indicated that volatile is being used" time="2025-10-04T12:33:27-04:00" level=debug msg="overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/empty,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/work,userxattr,volatile,context=\"system_u:object_r:container_file_t:s0:c105,c564\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Container ID: 9516e55badb1147a3bd380a45ee33bd293ab708eefc046d098c76c453fc83966" time="2025-10-04T12:33:27-04:00" level=debug msg="Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:copy Args:[/usr/libexec/podman/catatonit /catatonit] Flags:[] Attrs:map[] Message:COPY /usr/libexec/podman/catatonit /catatonit Heredocs:[] Original:COPY /usr/libexec/podman/catatonit /catatonit}" time="2025-10-04T12:33:27-04:00" level=debug msg="COPY []string(nil), imagebuilder.Copy{FromFS:false, From:\"\", Src:[]string{\"/usr/libexec/podman/catatonit\"}, Dest:\"/catatonit\", Download:false, Chown:\"\", Chmod:\"\", Checksum:\"\", Files:[]imagebuilder.File(nil)}" time="2025-10-04T12:33:27-04:00" level=debug msg="added content file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd" time="2025-10-04T12:33:27-04:00" level=debug msg="Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:entrypoint Args:[/catatonit -P] Flags:[] Attrs:map[json:true] Message:ENTRYPOINT /catatonit -P Heredocs:[] Original:ENTRYPOINT [\"/catatonit\", \"-P\"]}" time="2025-10-04T12:33:27-04:00" level=debug msg="COMMIT localhost/podman-pause:4.9.4-dev-1708535009" time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\"" time="2025-10-04T12:33:27-04:00" level=debug msg="COMMIT \"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\"" time="2025-10-04T12:33:27-04:00" level=debug msg="committing image with reference \"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\" is allowed by policy" time="2025-10-04T12:33:27-04:00" level=debug msg="layer list: [\"768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb\"]" time="2025-10-04T12:33:27-04:00" level=debug msg="using \"/var/tmp/buildah340804419\" to hold temporary data" time="2025-10-04T12:33:27-04:00" level=debug msg="Tar with 
options on /home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/diff" time="2025-10-04T12:33:27-04:00" level=debug msg="layer \"768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb\" size is 767488 bytes, uncompressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690, possibly-compressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690" time="2025-10-04T12:33:27-04:00" level=debug msg="OCIv1 config = {\"created\":\"2025-10-04T16:33:27.33236731Z\",\"architecture\":\"amd64\",\"os\":\"linux\",\"config\":{\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Entrypoint\":[\"/catatonit\",\"-P\"],\"Labels\":{\"io.buildah.version\":\"1.33.5\"}},\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"]},\"history\":[{\"created\":\"2025-10-04T16:33:27.331845264Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd in /catatonit \",\"empty_layer\":true},{\"created\":\"2025-10-04T16:33:27.335420758Z\",\"created_by\":\"/bin/sh -c #(nop) ENTRYPOINT [\\\"/catatonit\\\", \\\"-P\\\"]\"}]}" time="2025-10-04T12:33:27-04:00" level=debug msg="OCIv1 manifest = {\"schemaVersion\":2,\"mediaType\":\"application/vnd.oci.image.manifest.v1+json\",\"config\":{\"mediaType\":\"application/vnd.oci.image.config.v1+json\",\"digest\":\"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\",\"size\":667},\"layers\":[{\"mediaType\":\"application/vnd.oci.image.layer.v1.tar\",\"digest\":\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\",\"size\":767488}],\"annotations\":{\"org.opencontainers.image.base.digest\":\"\",\"org.opencontainers.image.base.name\":\"\"}}" time="2025-10-04T12:33:27-04:00" level=debug msg="Docker v2s2 config = {\"created\":\"2025-10-04T16:33:27.33236731Z\",\"container\":\"9516e55badb1147a3bd380a45ee33bd293ab708eefc046d098c76c453fc83966\",\"container_config\":{\"Hostname\":\"\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[],\"Image\":\"\",\"Volumes\":{},\"WorkingDir\":\"\",\"Entrypoint\":[\"/catatonit\",\"-P\"],\"OnBuild\":[],\"Labels\":{\"io.buildah.version\":\"1.33.5\"}},\"config\":{\"Hostname\":\"\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[],\"Image\":\"\",\"Volumes\":{},\"WorkingDir\":\"\",\"Entrypoint\":[\"/catatonit\",\"-P\"],\"OnBuild\":[],\"Labels\":{\"io.buildah.version\":\"1.33.5\"}},\"architecture\":\"amd64\",\"os\":\"linux\",\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"]},\"history\":[{\"created\":\"2025-10-04T16:33:27.331845264Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd in /catatonit \",\"empty_layer\":true},{\"created\":\"2025-10-04T16:33:27.335420758Z\",\"created_by\":\"/bin/sh -c #(nop) ENTRYPOINT [\\\"/catatonit\\\", \\\"-P\\\"]\"}]}" time="2025-10-04T12:33:27-04:00" level=debug msg="Docker v2s2 manifest = 
{\"schemaVersion\":2,\"mediaType\":\"application/vnd.docker.distribution.manifest.v2+json\",\"config\":{\"mediaType\":\"application/vnd.docker.container.image.v1+json\",\"size\":1341,\"digest\":\"sha256:cc08d8f0e313f02451a20252b1d70f6f69284663aede171c80a5525e2a51ba5b\"},\"layers\":[{\"mediaType\":\"application/vnd.docker.image.rootfs.diff.tar\",\"size\":767488,\"digest\":\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"}]}" time="2025-10-04T12:33:27-04:00" level=debug msg="Using SQLite blob info cache at /home/podman_basic_user/.local/share/containers/cache/blob-info-cache-v1.sqlite" time="2025-10-04T12:33:27-04:00" level=debug msg="IsRunningImageAllowed for image containers-storage:" time="2025-10-04T12:33:27-04:00" level=debug msg=" Using transport \"containers-storage\" policy section " time="2025-10-04T12:33:27-04:00" level=debug msg=" Requirement 0: allowed" time="2025-10-04T12:33:27-04:00" level=debug msg="Overall: allowed" time="2025-10-04T12:33:27-04:00" level=debug msg="start reading config" time="2025-10-04T12:33:27-04:00" level=debug msg="finished reading config" time="2025-10-04T12:33:27-04:00" level=debug msg="Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.docker.distribution.manifest.v1+json]" time="2025-10-04T12:33:27-04:00" level=debug msg="... will first try using the original manifest unmodified" time="2025-10-04T12:33:27-04:00" level=debug msg="Checking if we can reuse blob sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690: general substitution = true, compression for MIME type \"application/vnd.oci.image.layer.v1.tar\" = true" time="2025-10-04T12:33:27-04:00" level=debug msg="reading layer \"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"" time="2025-10-04T12:33:27-04:00" level=debug msg="No compression detected" time="2025-10-04T12:33:27-04:00" level=debug msg="Using original blob without modification" time="2025-10-04T12:33:27-04:00" level=debug msg="Applying tar in /home/podman_basic_user/.local/share/containers/storage/overlay/d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690/diff" time="2025-10-04T12:33:27-04:00" level=debug msg="finished reading layer \"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"" time="2025-10-04T12:33:27-04:00" level=debug msg="No compression detected" time="2025-10-04T12:33:27-04:00" level=debug msg="Compression change for blob sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05 (\"application/vnd.oci.image.config.v1+json\") not supported" time="2025-10-04T12:33:27-04:00" level=debug msg="Using original blob without modification" time="2025-10-04T12:33:27-04:00" level=debug msg="setting image creation date to 2025-10-04 16:33:27.33236731 +0000 UTC" time="2025-10-04T12:33:27-04:00" level=debug msg="created new image ID \"b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\" with metadata \"{}\"" time="2025-10-04T12:33:27-04:00" level=debug msg="added name \"localhost/podman-pause:4.9.4-dev-1708535009\" to image \"b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into 
\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\"" time="2025-10-04T12:33:27-04:00" level=debug msg="printing final image id \"b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Pod using bridge network mode" time="2025-10-04T12:33:27-04:00" level=debug msg="Got pod cgroup as /libpod_parent/4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d" time="2025-10-04T12:33:27-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05)" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Inspecting image b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Inspecting image b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05" time="2025-10-04T12:33:27-04:00" level=debug msg="Inspecting image b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05" time="2025-10-04T12:33:27-04:00" level=debug msg="using systemd mode: false" time="2025-10-04T12:33:27-04:00" level=debug msg="setting container name 4bfdec19f3e3-infra" time="2025-10-04T12:33:27-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Allocated lock 1 for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1" time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Created container \"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Container \"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\" has work directory 
\"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Container \"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\" has run directory \"/run/user/3001/containers/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Pulling image quay.io/libpod/testimage:20210610 (policy: missing)" time="2025-10-04T12:33:27-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." 
time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:27-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:27-04:00" level=debug msg="using systemd mode: false" time="2025-10-04T12:33:27-04:00" level=debug msg="adding container to pod httpd1" time="2025-10-04T12:33:27-04:00" level=debug msg="setting container name httpd1-httpd1" 
time="2025-10-04T12:33:27-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-10-04T12:33:27-04:00" level=info msg="Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding mount /proc" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding mount /dev" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding mount /dev/pts" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding mount /dev/mqueue" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding mount /sys" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding mount /sys/fs/cgroup" time="2025-10-04T12:33:27-04:00" level=debug msg="Allocated lock 2 for container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8" time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Created container \"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Container \"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\" has work directory \"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Container \"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\" has run directory \"/run/user/3001/containers/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Strongconnecting node 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1" time="2025-10-04T12:33:27-04:00" level=debug msg="Pushed 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 onto stack" time="2025-10-04T12:33:27-04:00" level=debug msg="Finishing node 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1. Popped 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 off stack" time="2025-10-04T12:33:27-04:00" level=debug msg="Strongconnecting node 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8" time="2025-10-04T12:33:27-04:00" level=debug msg="Pushed 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 onto stack" time="2025-10-04T12:33:27-04:00" level=debug msg="Finishing node 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8. 
Popped 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 off stack" time="2025-10-04T12:33:27-04:00" level=debug msg="overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/3P7PWYNTG5QJZJOWQ6XDK4NETN,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/work,userxattr,context=\"system_u:object_r:container_file_t:s0:c285,c421\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Made network namespace at /run/user/3001/netns/netns-f5551a3d-13a6-81b5-6f62-8de155b907e3 for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1" time="2025-10-04T12:33:27-04:00" level=debug msg="Mounted container \"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\" at \"/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/merged\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Created root filesystem for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 at /home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/merged" time="2025-10-04T12:33:27-04:00" level=debug msg="creating rootless network namespace with name \"rootless-netns-d22c9f230d0691b8f418\"" time="2025-10-04T12:33:27-04:00" level=debug msg="slirp4netns command: /bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -r 3 --netns-type=path /run/user/3001/netns/rootless-netns-d22c9f230d0691b8f418 tap0" time="2025-10-04T12:33:27-04:00" level=debug msg="The path of /etc/resolv.conf in the mount ns is \"/etc/resolv.conf\"" time="2025-10-04T12:33:27-04:00" level=debug msg="cni result for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:e2:98:f4:5f:02:10 Sandbox:} {Name:veth2fe45075 Mac:16:b6:29:b0:6d:39 Sandbox:} {Name:eth0 Mac:2a:18:12:08:ad:32 Sandbox:/run/user/3001/netns/netns-f5551a3d-13a6-81b5-6f62-8de155b907e3}] [{Version:4 Interface:0xc000c3e028 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}" time="2025-10-04T12:33:27-04:00" level=debug msg="rootlessport: time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Starting parent driver\"\n" time="2025-10-04T12:33:27-04:00" level=debug msg="rootlessport: time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"opaque=map[builtin.readypipepath:/run/user/3001/libpod/tmp/rootlessport4177357533/.bp-ready.pipe builtin.socketpath:/run/user/3001/libpod/tmp/rootlessport4177357533/.bp.sock]\"\n" time="2025-10-04T12:33:27-04:00" level=debug msg="rootlessport: time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Starting child driver in child netns (\\\"/proc/self/exe\\\" [rootlessport-child])\"\n" time="2025-10-04T12:33:27-04:00" level=debug msg="rootlessport: time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Waiting for initComplete\"\n" time="2025-10-04T12:33:27-04:00" level=debug msg="rootlessport: time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"initComplete is closed; parent and child established the communication channel\"\ntime=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Exposing ports [{ 80 15001 1 
tcp}]\"\n" time="2025-10-04T12:33:27-04:00" level=debug msg="rootlessport: time=\"2025-10-04T12:33:27-04:00\" level=info msg=Ready\n" time="2025-10-04T12:33:27-04:00" level=debug msg="rootlessport is ready" time="2025-10-04T12:33:27-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-10-04T12:33:27-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-10-04T12:33:27-04:00" level=debug msg="Workdir \"/\" resolved to host path \"/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/merged\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Created OCI spec for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/config.json" time="2025-10-04T12:33:27-04:00" level=debug msg="Got pod cgroup as " time="2025-10-04T12:33:27-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2025-10-04T12:33:27-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 -u 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata -p /run/user/3001/containers/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/pidfile -n 4bfdec19f3e3-infra --exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/3001/containers/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1]" time="2025-10-04T12:33:27-04:00" level=info msg="Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/libpod_parent: permission denied" [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied time="2025-10-04T12:33:27-04:00" level=debug msg="Received: 26765" time="2025-10-04T12:33:27-04:00" 
level=info msg="Got Conmon PID as 26755" time="2025-10-04T12:33:27-04:00" level=debug msg="Created container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 in OCI runtime" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding nameserver(s) from network status of '[\"10.89.0.1\"]'" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding search domain(s) from network status of '[\"dns.podman\"]'" time="2025-10-04T12:33:27-04:00" level=debug msg="Starting container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 with command [/catatonit -P]" time="2025-10-04T12:33:27-04:00" level=debug msg="Started container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1" time="2025-10-04T12:33:27-04:00" level=debug msg="overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/FD7XHZOTU3ZCOHOMS6WJGARUCE,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/work,userxattr,context=\"system_u:object_r:container_file_t:s0:c285,c421\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Mounted container \"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\" at \"/home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/merged\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Created root filesystem for container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 at /home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/merged" time="2025-10-04T12:33:27-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-10-04T12:33:27-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-10-04T12:33:27-04:00" level=debug msg="Workdir \"/var/www\" resolved to a volume or mount" time="2025-10-04T12:33:27-04:00" level=debug msg="Created OCI spec for container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/config.json" time="2025-10-04T12:33:27-04:00" level=debug msg="Got pod cgroup as " time="2025-10-04T12:33:27-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2025-10-04T12:33:27-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 -u 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata -p /run/user/3001/containers/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/pidfile -n httpd1-httpd1 --exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/ctr.log --log-level debug --syslog --conmon-pidfile 
/run/user/3001/containers/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8]" [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied time="2025-10-04T12:33:27-04:00" level=info msg="Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/conmon: permission denied" time="2025-10-04T12:33:27-04:00" level=debug msg="Received: 26786" time="2025-10-04T12:33:27-04:00" level=info msg="Got Conmon PID as 26776" time="2025-10-04T12:33:27-04:00" level=debug msg="Created container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 in OCI runtime" time="2025-10-04T12:33:27-04:00" level=debug msg="Starting container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 with command [/bin/busybox-extras httpd -f -p 80]" time="2025-10-04T12:33:27-04:00" level=debug msg="Started container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8" time="2025-10-04T12:33:27-04:00" level=debug msg="Called kube.PersistentPostRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)" time="2025-10-04T12:33:27-04:00" level=debug msg="Shutting down engines" Oct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Oct 04 12:33:27 managed-node2 sudo[26541]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:33:28 managed-node2 sudo[26917]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naqgeablflvziirakssvhgovxnyqlazn ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595608.143278-15965-116724055097581/AnsiballZ_systemd.py' Oct 04 12:33:28 managed-node2 sudo[26917]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:33:28 managed-node2 platform-python[26920]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None Oct 04 12:33:28 managed-node2 systemd[25493]: Reloading. 
Oct 04 12:33:28 managed-node2 sudo[26917]: pam_unix(sudo:session): session closed for user podman_basic_user
Oct 04 12:33:28 managed-node2 sudo[27054]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcawqykrekhxzainagfjkqbithxyjltw ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595608.6875198-15999-90547985485461/AnsiballZ_systemd.py'
Oct 04 12:33:28 managed-node2 sudo[27054]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)
Oct 04 12:33:29 managed-node2 platform-python[27057]: ansible-systemd Invoked with name= scope=user enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None
Oct 04 12:33:29 managed-node2 systemd[25493]: Reloading.
Oct 04 12:33:29 managed-node2 sudo[27054]: pam_unix(sudo:session): session closed for user podman_basic_user
Oct 04 12:33:29 managed-node2 sudo[27193]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sugdwogvzmuboswwbcwdjoyqjtijmrjd ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595609.2593784-16020-169137665275210/AnsiballZ_systemd.py'
Oct 04 12:33:29 managed-node2 sudo[27193]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)
Oct 04 12:33:29 managed-node2 dnsmasq[26742]: listening on cni-podman1(#3): fe80::e098:f4ff:fe5f:210%cni-podman1
Oct 04 12:33:29 managed-node2 platform-python[27196]: ansible-systemd Invoked with name= scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None
Oct 04 12:33:29 managed-node2 systemd[25493]: Created slice podman\x2dkube.slice.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Oct 04 12:33:29 managed-node2 systemd[25493]: Starting A template for running K8s workloads via podman-kube-play...
-- Subject: Unit UNIT has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has begun starting up.
Oct 04 12:33:29 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : container 26765 exited with status 137
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1)"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Setting custom database backend: \"sqlite\""
Oct 04 12:33:29 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : container 26786 exited with status 137
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=info msg="Using sqlite as database backend"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using graph driver overlay"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using run root /run/user/3001/containers"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using transient store: false"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that overlay is supported"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that overlay is supported"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that metacopy is not being used"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that native-diff is usable"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Initializing event backend file"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\""
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=info msg="Setting parallel job count to 7"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8)"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Setting custom database backend: \"sqlite\""
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=info msg="Using sqlite as database backend"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using graph driver overlay"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using run root /run/user/3001/containers"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using transient store: false"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that overlay is supported"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that overlay is supported"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that metacopy is not being used"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that native-diff is usable"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Initializing event backend file"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\""
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=info msg="Setting parallel job count to 7"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8)"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Shutting down engines"
Oct 04 12:33:29 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered disabled state
Oct 04 12:33:29 managed-node2 kernel: device veth2fe45075 left promiscuous mode
Oct 04 12:33:29 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered disabled state
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1)"
Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Shutting down engines"
Oct 04 12:33:29 managed-node2 podman[27202]: Pods stopped:
Oct 04 12:33:29 managed-node2 podman[27202]: 4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d
Oct 04 12:33:29 managed-node2 podman[27202]: Pods removed:
Oct 04 12:33:29 managed-node2 podman[27202]: 4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d
Oct 04 12:33:29 managed-node2 podman[27202]: Secrets removed:
Oct 04 12:33:29 managed-node2 podman[27202]: Volumes removed:
Oct 04 12:33:30 managed-node2 systemd[25493]: Started rootless-netns-d4627493.scope.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Oct 04 12:33:30 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth938ef76c: link is not ready
Oct 04 12:33:30 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered blocking state
Oct 04 12:33:30 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered disabled state
Oct 04 12:33:30 managed-node2 kernel: device veth938ef76c entered promiscuous mode
Oct 04 12:33:30 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered blocking state
Oct 04 12:33:30 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered forwarding state
Oct 04 12:33:30 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth938ef76c: link becomes ready
Oct 04 12:33:30 managed-node2 dnsmasq[27452]: listening on cni-podman1(#3): 10.89.0.1
Oct 04 12:33:30 managed-node2 dnsmasq[27454]: started, version 2.79 cachesize 150
Oct 04 12:33:30 managed-node2 dnsmasq[27454]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify
Oct 04 12:33:30 managed-node2 dnsmasq[27454]: using local addresses only for domain dns.podman
Oct 04 12:33:30 managed-node2 dnsmasq[27454]: reading /etc/resolv.conf
Oct 04 12:33:30 managed-node2 dnsmasq[27454]: using local addresses only for domain dns.podman
Oct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.0.2.3#53
Oct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.29.169.13#53
Oct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.29.170.12#53
Oct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.2.32.1#53
Oct 04 12:33:30 managed-node2 dnsmasq[27454]: read /run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses
Oct 04 12:33:30 managed-node2 podman[27202]: Pod:
Oct 04 12:33:30 managed-node2 podman[27202]: e2136086e2f9ee2160cfb4def2ccdfca084fdbe4f7fe9cb8a2664a8c62dbb7fb
Oct 04 12:33:30 managed-node2 podman[27202]: Container:
Oct 04 12:33:30 managed-node2 podman[27202]: e74648d47617035a35842176c0cd197e876af20efb66c9a6fbb560c1ba4c6833
Oct 04 12:33:30 managed-node2 systemd[25493]: Started A template for running K8s workloads via podman-kube-play.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Oct 04 12:33:30 managed-node2 sudo[27193]: pam_unix(sudo:session): session closed for user podman_basic_user
Oct 04 12:33:31 managed-node2 platform-python[27630]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None
Oct 04 12:33:32 managed-node2 dnsmasq[27454]: listening on cni-podman1(#3): fe80::f8fb:d3ff:fe6b:28b6%cni-podman1
Oct 04 12:33:32 managed-node2 platform-python[27754]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 04 12:33:33 managed-node2 platform-python[27879]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 04 12:33:34 managed-node2 platform-python[28003]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Oct 04 12:33:34 managed-node2 platform-python[28126]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Oct 04 12:33:35 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Oct 04 12:33:36 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Oct 04 12:33:36 managed-node2 platform-python[28426]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:33:36 managed-node2 platform-python[28549]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:37 managed-node2 platform-python[28672]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 04 12:33:37 managed-node2 platform-python[28771]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd2.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595616.9605668-16444-215555946645887/source _original_basename=tmp7zrtpb5n follow=False checksum=65edd58cfda8e78be7cf81993b5521acb64e8edf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Oct 04 12:33:37 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:33:38 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice. -- Subject: Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1056] manager: (cni-podman1): new Bridge device (/org/freedesktop/NetworkManager/Devices/3) Oct 04 12:33:38 managed-node2 systemd-udevd[28943]: Using default interface naming scheme 'rhel-8.0'. 
Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1123] manager: (veth58b4002b): new Veth device (/org/freedesktop/NetworkManager/Devices/4) Oct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth58b4002b: link is not ready Oct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered blocking state Oct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered disabled state Oct 04 12:33:38 managed-node2 kernel: device veth58b4002b entered promiscuous mode Oct 04 12:33:38 managed-node2 systemd-udevd[28944]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Oct 04 12:33:38 managed-node2 systemd-udevd[28944]: Could not generate persistent MAC address for veth58b4002b: No such file or directory Oct 04 12:33:38 managed-node2 systemd-udevd[28943]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Oct 04 12:33:38 managed-node2 systemd-udevd[28943]: Could not generate persistent MAC address for cni-podman1: No such file or directory Oct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1326] device (cni-podman1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1331] device (cni-podman1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1338] device (cni-podman1): Activation: starting connection 'cni-podman1' (f4b0bed9-ed1a-4daa-9776-1b7c64cb04df) Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1339] device (cni-podman1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1342] device (cni-podman1): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1344] device (cni-podman1): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1345] device (cni-podman1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth58b4002b: link becomes ready Oct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered blocking state Oct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered forwarding state Oct 04 12:33:38 managed-node2 dbus-daemon[595]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.5' (uid=0 pid=660 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0") Oct 04 12:33:38 managed-node2 systemd[1]: Starting Network Manager Script Dispatcher Service... -- Subject: Unit NetworkManager-dispatcher.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager-dispatcher.service has begun starting up. 
Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1545] device (veth58b4002b): carrier: link connected Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1548] device (cni-podman1): carrier: link connected Oct 04 12:33:38 managed-node2 dbus-daemon[595]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Oct 04 12:33:38 managed-node2 systemd[1]: Started Network Manager Script Dispatcher Service. -- Subject: Unit NetworkManager-dispatcher.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager-dispatcher.service has finished starting up. -- -- The start-up result is done. Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1968] device (cni-podman1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1970] device (cni-podman1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1979] device (cni-podman1): Activation: successful, device activated. Oct 04 12:33:38 managed-node2 dnsmasq[29065]: listening on cni-podman1(#3): 10.89.0.1 Oct 04 12:33:38 managed-node2 dnsmasq[29069]: started, version 2.79 cachesize 150 Oct 04 12:33:38 managed-node2 dnsmasq[29069]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify Oct 04 12:33:38 managed-node2 dnsmasq[29069]: using local addresses only for domain dns.podman Oct 04 12:33:38 managed-node2 dnsmasq[29069]: reading /etc/resolv.conf Oct 04 12:33:38 managed-node2 dnsmasq[29069]: using local addresses only for domain dns.podman Oct 04 12:33:38 managed-node2 dnsmasq[29069]: using nameserver 10.29.169.13#53 Oct 04 12:33:38 managed-node2 dnsmasq[29069]: using nameserver 10.29.170.12#53 Oct 04 12:33:38 managed-node2 dnsmasq[29069]: using nameserver 10.2.32.1#53 Oct 04 12:33:38 managed-node2 dnsmasq[29069]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Oct 04 12:33:38 managed-node2 systemd[1]: Started libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope. -- Subject: Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:38 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach} Oct 04 12:33:38 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : terminal_ctrl_fd: 13 Oct 04 12:33:38 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : winsz read side: 17, winsz write side: 18 Oct 04 12:33:38 managed-node2 systemd[1]: Started libcontainer container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7. -- Subject: Unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished starting up. -- -- The start-up result is done. 
Oct 04 12:33:38 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : container PID: 29081 Oct 04 12:33:38 managed-node2 systemd[1]: Started libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope. -- Subject: Unit libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach} Oct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : terminal_ctrl_fd: 12 Oct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : winsz read side: 16, winsz write side: 17 Oct 04 12:33:38 managed-node2 systemd[1]: Started libcontainer container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a. -- Subject: Unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : container PID: 29103 Oct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml Oct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod: f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a Container: b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a Oct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-10-04T12:33:37-04:00" level=info msg="/usr/bin/podman filtering at log level debug" time="2025-10-04T12:33:37-04:00" level=debug msg="Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2025-10-04T12:33:37-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-10-04T12:33:37-04:00" level=info msg="Using sqlite as database backend" time="2025-10-04T12:33:37-04:00" level=debug msg="Using graph driver overlay" time="2025-10-04T12:33:37-04:00" level=debug msg="Using graph root /var/lib/containers/storage" time="2025-10-04T12:33:37-04:00" level=debug msg="Using run root /run/containers/storage" time="2025-10-04T12:33:37-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" time="2025-10-04T12:33:37-04:00" level=debug msg="Using tmp dir /run/libpod" time="2025-10-04T12:33:37-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" time="2025-10-04T12:33:37-04:00" level=debug msg="Using transient store: false" time="2025-10-04T12:33:37-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-10-04T12:33:37-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:33:37-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:33:37-04:00" level=debug msg="Cached value indicated 
that metacopy is being used" time="2025-10-04T12:33:37-04:00" level=debug msg="Cached value indicated that native-diff is not being used" time="2025-10-04T12:33:37-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" time="2025-10-04T12:33:37-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" time="2025-10-04T12:33:37-04:00" level=debug msg="Initializing event backend file" time="2025-10-04T12:33:37-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-10-04T12:33:37-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2025-10-04T12:33:37-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-10-04T12:33:37-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-10-04T12:33:37-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2025-10-04T12:33:37-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-10-04T12:33:37-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2025-10-04T12:33:37-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-10-04T12:33:37-04:00" level=info msg="Setting parallel job count to 7" time="2025-10-04T12:33:37-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:31:14.473584587 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-10-04T12:33:37-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-10-04T12:33:37-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:33:37-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:37-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." 
time="2025-10-04T12:33:37-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:33:37-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:33:37-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751)" time="2025-10-04T12:33:37-04:00" level=debug msg="exporting opaque data as blob \"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:33:37-04:00" level=debug msg="Pod using bridge network mode" time="2025-10-04T12:33:37-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice for parent machine.slice and name libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a" time="2025-10-04T12:33:38-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice" time="2025-10-04T12:33:38-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice" time="2025-10-04T12:33:38-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:38-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." 
time="2025-10-04T12:33:38-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751)" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Inspecting image 9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Inspecting image 9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751" time="2025-10-04T12:33:38-04:00" level=debug msg="Inspecting image 9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751" time="2025-10-04T12:33:38-04:00" level=debug msg="using systemd mode: false" time="2025-10-04T12:33:38-04:00" level=debug msg="setting container name f7eedbe6e6e1-infra" time="2025-10-04T12:33:38-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Allocated lock 1 for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7" time="2025-10-04T12:33:38-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are not supported" time="2025-10-04T12:33:38-04:00" level=debug msg="Check for idmapped mounts support " time="2025-10-04T12:33:38-04:00" level=debug msg="Created container \"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Container \"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\" has work directory \"/var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Container \"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\" has run directory \"/run/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:38-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" 
..." time="2025-10-04T12:33:38-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Pulling image quay.io/libpod/testimage:20210610 (policy: missing)" time="2025-10-04T12:33:38-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:38-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-10-04T12:33:38-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:38-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." 
time="2025-10-04T12:33:38-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:38-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-10-04T12:33:38-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:38-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:38-04:00" level=debug msg="using systemd mode: false" time="2025-10-04T12:33:38-04:00" level=debug msg="adding container to pod httpd2" time="2025-10-04T12:33:38-04:00" level=debug msg="setting container name httpd2-httpd2" 
time="2025-10-04T12:33:38-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-10-04T12:33:38-04:00" level=info msg="Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding mount /proc" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding mount /dev" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding mount /dev/pts" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding mount /dev/mqueue" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding mount /sys" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding mount /sys/fs/cgroup" time="2025-10-04T12:33:38-04:00" level=debug msg="Allocated lock 2 for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a" time="2025-10-04T12:33:38-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Created container \"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Container \"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\" has work directory \"/var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Container \"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\" has run directory \"/run/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Strongconnecting node acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7" time="2025-10-04T12:33:38-04:00" level=debug msg="Pushed acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 onto stack" time="2025-10-04T12:33:38-04:00" level=debug msg="Finishing node acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7. Popped acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 off stack" time="2025-10-04T12:33:38-04:00" level=debug msg="Strongconnecting node b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a" time="2025-10-04T12:33:38-04:00" level=debug msg="Pushed b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a onto stack" time="2025-10-04T12:33:38-04:00" level=debug msg="Finishing node b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a. 
Popped b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a off stack" time="2025-10-04T12:33:38-04:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/CLVCQDNEL47VMN42Y3O6VVBSEK,upperdir=/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/diff,workdir=/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c321,c454\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Mounted container \"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\" at \"/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/merged\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Created root filesystem for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 at /var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/merged" time="2025-10-04T12:33:38-04:00" level=debug msg="Made network namespace at /run/netns/netns-4bb92ac6-c391-8230-0912-824e2a801d42 for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7" time="2025-10-04T12:33:38-04:00" level=debug msg="cni result for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:92:f8:b0:67:7f:78 Sandbox:} {Name:veth58b4002b Mac:9e:e6:53:58:c5:ef Sandbox:} {Name:eth0 Mac:9a:79:68:03:db:b9 Sandbox:/run/netns/netns-4bb92ac6-c391-8230-0912-824e2a801d42}] [{Version:4 Interface:0xc0006223b8 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}" time="2025-10-04T12:33:38-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-10-04T12:33:38-04:00" level=debug msg="Setting Cgroups for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 to machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice:libpod:acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7" time="2025-10-04T12:33:38-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-10-04T12:33:38-04:00" level=debug msg="Workdir \"/\" resolved to host path \"/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/merged\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Created OCI spec for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 at /var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/config.json" time="2025-10-04T12:33:38-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice for parent machine.slice and name libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a" time="2025-10-04T12:33:38-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice" time="2025-10-04T12:33:38-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice" time="2025-10-04T12:33:38-04:00" level=debug msg="/usr/bin/conmon messages will be logged 
to syslog" time="2025-10-04T12:33:38-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 -u acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata -p /run/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/pidfile -n f7eedbe6e6e1-infra --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7]" time="2025-10-04T12:33:38-04:00" level=info msg="Running conmon under slice machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice and unitName libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope" time="2025-10-04T12:33:38-04:00" level=debug msg="Received: 29081" time="2025-10-04T12:33:38-04:00" level=info msg="Got Conmon PID as 29071" time="2025-10-04T12:33:38-04:00" level=debug msg="Created container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 in OCI runtime" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding nameserver(s) from network status of '[\"10.89.0.1\"]'" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding search domain(s) from network status of '[\"dns.podman\"]'" time="2025-10-04T12:33:38-04:00" level=debug msg="Starting container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 with command [/catatonit -P]" time="2025-10-04T12:33:38-04:00" level=debug msg="Started container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7" time="2025-10-04T12:33:38-04:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/LBBH4VMJZF2KPCTZG3NWOHXUKQ,upperdir=/var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/diff,workdir=/var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c321,c454\"" 
time="2025-10-04T12:33:38-04:00" level=debug msg="Mounted container \"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\" at \"/var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/merged\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Created root filesystem for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a at /var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/merged" time="2025-10-04T12:33:38-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-10-04T12:33:38-04:00" level=debug msg="Setting Cgroups for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a to machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice:libpod:b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a" time="2025-10-04T12:33:38-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-10-04T12:33:38-04:00" level=debug msg="Workdir \"/var/www\" resolved to a volume or mount" time="2025-10-04T12:33:38-04:00" level=debug msg="Created OCI spec for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a at /var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/config.json" time="2025-10-04T12:33:38-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice for parent machine.slice and name libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a" time="2025-10-04T12:33:38-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice" time="2025-10-04T12:33:38-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice" time="2025-10-04T12:33:38-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2025-10-04T12:33:38-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a -u b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata -p /run/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/pidfile -n httpd2-httpd2 --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg 
--volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a]" time="2025-10-04T12:33:38-04:00" level=info msg="Running conmon under slice machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice and unitName libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope" time="2025-10-04T12:33:38-04:00" level=debug msg="Received: 29103" time="2025-10-04T12:33:38-04:00" level=info msg="Got Conmon PID as 29092" time="2025-10-04T12:33:38-04:00" level=debug msg="Created container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a in OCI runtime" time="2025-10-04T12:33:38-04:00" level=debug msg="Starting container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a with command [/bin/busybox-extras httpd -f -p 80]" time="2025-10-04T12:33:38-04:00" level=debug msg="Started container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a" time="2025-10-04T12:33:38-04:00" level=debug msg="Called kube.PersistentPostRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2025-10-04T12:33:38-04:00" level=debug msg="Shutting down engines" Oct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Oct 04 12:33:39 managed-node2 platform-python[29234]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None Oct 04 12:33:39 managed-node2 systemd[1]: Reloading. Oct 04 12:33:39 managed-node2 dnsmasq[29069]: listening on cni-podman1(#3): fe80::90f8:b0ff:fe67:7f78%cni-podman1 Oct 04 12:33:39 managed-node2 platform-python[29403]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None Oct 04 12:33:39 managed-node2 systemd[1]: Reloading. Oct 04 12:33:40 managed-node2 platform-python[29558]: ansible-systemd Invoked with name= scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None Oct 04 12:33:40 managed-node2 systemd[1]: Created slice system-podman\x2dkube.slice. -- Subject: Unit system-podman\x2dkube.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit system-podman\x2dkube.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:33:40 managed-node2 systemd[1]: Starting A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun starting up. 
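The unit instance name podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service seen here is just the systemd-escaped path of the kube YAML: '/' becomes '-' and a literal '-' becomes \x2d. The escaping is reproducible with systemd-escape, and the systemctl line sketches what the role's ansible-systemd step effectively does:

    systemd-escape /etc/containers/ansible-kubernetes.d/httpd2.yml
    # => -etc-containers-ansible\x2dkubernetes.d-httpd2.yml
    systemctl start "podman-kube@$(systemd-escape /etc/containers/ansible-kubernetes.d/httpd2.yml).service"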
Oct 04 12:33:40 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : container 29081 exited with status 137 Oct 04 12:33:40 managed-node2 systemd[1]: libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 systemd[1]: libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope: Consumed 31ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope completed and consumed the indicated resources. Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7)" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=info msg="Using sqlite as database backend" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using graph driver overlay" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using graph root /var/lib/containers/storage" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using run root /run/containers/storage" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using tmp dir /run/libpod" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using transient store: false" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that overlay is supported" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that overlay is supported" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that metacopy is being 
used" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that native-diff is not being used" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Initializing event backend file" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=info msg="Setting parallel job count to 7" Oct 04 12:33:40 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : container 29103 exited with status 137 Oct 04 12:33:40 managed-node2 systemd[1]: libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 systemd[1]: libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope: Consumed 33ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope completed and consumed the indicated resources. 
Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a)" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=info msg="Using sqlite as database backend" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using graph driver overlay" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using graph root /var/lib/containers/storage" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using run root /run/containers/storage" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using tmp dir /run/libpod" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using transient store: false" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that overlay is supported" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that overlay is supported" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that metacopy is being used" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that native-diff is not being used" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Initializing event backend file" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime runj 
initialization failed: no valid executable found for OCI runtime runj: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=info msg="Setting parallel job count to 7" Oct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay-e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0-merged.mount has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a)" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Shutting down engines" Oct 04 12:33:40 managed-node2 systemd[1]: libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has successfully entered the 'dead' state. 
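The run of "Configured OCI runtime ... initialization failed" messages above is routine: podman probes every runtime named in containers.conf and skips those whose binaries are absent, which is why only /usr/bin/runc is selected on this host. To see which runtime a host actually resolved (format path assumed stable across podman 4.x):

    podman info --format '{{.Host.OCIRuntime.Path}}'
    # expected here: /usr/bin/runc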
Oct 04 12:33:40 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered disabled state Oct 04 12:33:40 managed-node2 kernel: device veth58b4002b left promiscuous mode Oct 04 12:33:40 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered disabled state Oct 04 12:33:40 managed-node2 systemd[1]: run-netns-netns\x2d4bb92ac6\x2dc391\x2d8230\x2d0912\x2d824e2a801d42.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2d4bb92ac6\x2dc391\x2d8230\x2d0912\x2d824e2a801d42.mount has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7-userdata-shm.mount has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay-93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522-merged.mount has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7)" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Shutting down engines" Oct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 systemd[1]: Stopping libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope. -- Subject: Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has begun shutting down. Oct 04 12:33:40 managed-node2 systemd[1]: libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 systemd[1]: Stopped libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope. 
-- Subject: Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished shutting down. Oct 04 12:33:40 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice. -- Subject: Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished shutting down. Oct 04 12:33:40 managed-node2 systemd[1]: machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice: Consumed 193ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice completed and consumed the indicated resources. Oct 04 12:33:40 managed-node2 podman[29565]: Pods stopped: Oct 04 12:33:40 managed-node2 podman[29565]: f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a Oct 04 12:33:40 managed-node2 podman[29565]: Pods removed: Oct 04 12:33:40 managed-node2 podman[29565]: f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a Oct 04 12:33:40 managed-node2 podman[29565]: Secrets removed: Oct 04 12:33:40 managed-node2 podman[29565]: Volumes removed: Oct 04 12:33:40 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice. -- Subject: Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:33:40 managed-node2 systemd[1]: Started libcontainer container 2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1. -- Subject: Unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope has finished starting up. -- -- The start-up result is done. 
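"Pods stopped" and "Pods removed" followed by a brand-new pod ID is the replace behavior of the kube-play service: starting the unit tears down the pod that the earlier ad-hoc podman play kube created and recreates it under systemd. A paraphrased sketch of the podman-kube@.service template behind these messages (only the Description is confirmed by this log; the Exec lines are assumptions that vary by podman version):

    [Unit]
    Description=A template for running K8s workloads via podman-kube-play

    [Service]
    Type=notify
    NotifyAccess=all
    Environment=PODMAN_SYSTEMD_UNIT=%n
    ExecStart=/usr/bin/podman play kube --replace --service-container=true %I
    ExecStop=/usr/bin/podman kube down %I

    [Install]
    WantedBy=default.target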
Oct 04 12:33:41 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth44fc3814: link is not ready Oct 04 12:33:41 managed-node2 NetworkManager[660]: [1759595621.0690] manager: (veth44fc3814): new Veth device (/org/freedesktop/NetworkManager/Devices/5) Oct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered blocking state Oct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state Oct 04 12:33:41 managed-node2 kernel: device veth44fc3814 entered promiscuous mode Oct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered blocking state Oct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered forwarding state Oct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state Oct 04 12:33:41 managed-node2 systemd-udevd[29722]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Oct 04 12:33:41 managed-node2 systemd-udevd[29722]: Could not generate persistent MAC address for veth44fc3814: No such file or directory Oct 04 12:33:41 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth44fc3814: link becomes ready Oct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered blocking state Oct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered forwarding state Oct 04 12:33:41 managed-node2 NetworkManager[660]: [1759595621.0827] device (veth44fc3814): carrier: link connected Oct 04 12:33:41 managed-node2 NetworkManager[660]: [1759595621.0829] device (cni-podman1): carrier: link connected Oct 04 12:33:41 managed-node2 dnsmasq[29793]: listening on cni-podman1(#3): 10.89.0.1 Oct 04 12:33:41 managed-node2 dnsmasq[29793]: listening on cni-podman1(#3): fe80::90f8:b0ff:fe67:7f78%cni-podman1 Oct 04 12:33:41 managed-node2 dnsmasq[29797]: started, version 2.79 cachesize 150 Oct 04 12:33:41 managed-node2 dnsmasq[29797]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify Oct 04 12:33:41 managed-node2 dnsmasq[29797]: using local addresses only for domain dns.podman Oct 04 12:33:41 managed-node2 dnsmasq[29797]: reading /etc/resolv.conf Oct 04 12:33:41 managed-node2 dnsmasq[29797]: using local addresses only for domain dns.podman Oct 04 12:33:41 managed-node2 dnsmasq[29797]: using nameserver 10.29.169.13#53 Oct 04 12:33:41 managed-node2 dnsmasq[29797]: using nameserver 10.29.170.12#53 Oct 04 12:33:41 managed-node2 dnsmasq[29797]: using nameserver 10.2.32.1#53 Oct 04 12:33:41 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Oct 04 12:33:41 managed-node2 systemd[1]: Started libcontainer container d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782. -- Subject: Unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:41 managed-node2 systemd[1]: Started libcontainer container c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28. 
-- Subject: Unit libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:41 managed-node2 podman[29565]: Pod: Oct 04 12:33:41 managed-node2 podman[29565]: 981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a Oct 04 12:33:41 managed-node2 podman[29565]: Container: Oct 04 12:33:41 managed-node2 podman[29565]: c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28 Oct 04 12:33:41 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished starting up. -- -- The start-up result is done. Oct 04 12:33:42 managed-node2 platform-python[29963]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:33:43 managed-node2 platform-python[30096]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:44 managed-node2 platform-python[30220]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:45 managed-node2 platform-python[30343]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:46 managed-node2 platform-python[30638]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:33:47 managed-node2 platform-python[30761]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:47 managed-node2 platform-python[30884]: ansible-stat Invoked with 
path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 04 12:33:47 managed-node2 platform-python[30983]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd3.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595627.3886073-16945-54933471056529/source _original_basename=tmpukku_qg2 follow=False checksum=e89a97ee50e2e2344cd04b5ef33140ac4f197bf8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Oct 04 12:33:48 managed-node2 systemd[1]: NetworkManager-dispatcher.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Oct 04 12:33:48 managed-node2 platform-python[31108]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:33:48 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice. -- Subject: Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:33:48 managed-node2 NetworkManager[660]: [1759595628.5733] manager: (vethca854251): new Veth device (/org/freedesktop/NetworkManager/Devices/6) Oct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethca854251: link is not ready Oct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered blocking state Oct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered disabled state Oct 04 12:33:48 managed-node2 kernel: device vethca854251 entered promiscuous mode Oct 04 12:33:48 managed-node2 systemd-udevd[31155]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
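The httpd3 sequence repeats the httpd2 flow: stage the YAML under /etc/containers/ansible-kubernetes.d, then hand it to containers.podman.podman_play. A hypothetical playbook equivalent of the two module invocations logged above (dest, ownership, and mode are taken from the log; the src name is made up):

    - name: Install the httpd3 kube spec
      ansible.builtin.copy:
        src: httpd3.yml
        dest: /etc/containers/ansible-kubernetes.d/httpd3.yml
        owner: root
        group: "0"
        mode: "0644"

    - name: Start the pod from the spec
      containers.podman.podman_play:
        kube_file: /etc/containers/ansible-kubernetes.d/httpd3.yml
        state: started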
Oct 04 12:33:48 managed-node2 systemd-udevd[31155]: Could not generate persistent MAC address for vethca854251: No such file or directory Oct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Oct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethca854251: link becomes ready Oct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered blocking state Oct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered forwarding state Oct 04 12:33:48 managed-node2 NetworkManager[660]: [1759595628.6066] device (vethca854251): carrier: link connected Oct 04 12:33:48 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses Oct 04 12:33:48 managed-node2 systemd[1]: Started libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope. -- Subject: Unit libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:48 managed-node2 systemd[1]: Started libcontainer container 59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca. -- Subject: Unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:48 managed-node2 systemd[1]: Started libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope. -- Subject: Unit libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:48 managed-node2 systemd[1]: Started libcontainer container 0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac. -- Subject: Unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:49 managed-node2 platform-python[31388]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None Oct 04 12:33:49 managed-node2 systemd[1]: Reloading. Oct 04 12:33:50 managed-node2 platform-python[31549]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None Oct 04 12:33:50 managed-node2 systemd[1]: Reloading. 
Oct 04 12:33:50 managed-node2 platform-python[31704]: ansible-systemd Invoked with name= scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None Oct 04 12:33:50 managed-node2 systemd[1]: Starting A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun starting up. Oct 04 12:33:50 managed-node2 systemd[1]: libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has successfully entered the 'dead' state. Oct 04 12:33:50 managed-node2 systemd[1]: libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope: Consumed 31ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope completed and consumed the indicated resources. Oct 04 12:33:50 managed-node2 systemd[1]: libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has successfully entered the 'dead' state. Oct 04 12:33:50 managed-node2 systemd[1]: libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope: Consumed 31ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope completed and consumed the indicated resources. Oct 04 12:33:50 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Oct 04 12:33:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay-96a34ad4fa258979f69c8abe553376ab173aebc4813555f0aa72e1d24059a836-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-96a34ad4fa258979f69c8abe553376ab173aebc4813555f0aa72e1d24059a836-merged.mount has successfully entered the 'dead' state. Oct 04 12:33:50 managed-node2 systemd[1]: libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has successfully entered the 'dead' state. Oct 04 12:33:50 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered disabled state Oct 04 12:33:50 managed-node2 kernel: device vethca854251 left promiscuous mode Oct 04 12:33:50 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered disabled state Oct 04 12:33:50 managed-node2 systemd[1]: run-netns-netns\x2d04fac8f5\x2d669a\x2d2b56\x2d8dc1\x2d2c27fe482b75.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2d04fac8f5\x2d669a\x2d2b56\x2d8dc1\x2d2c27fe482b75.mount has successfully entered the 'dead' state. Oct 04 12:33:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca-userdata-shm.mount has successfully entered the 'dead' state. Oct 04 12:33:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay-bf4340b80dd987c0b14d9ab53281fd43797b6665f7cf0be1b6e809f99681d28d-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-bf4340b80dd987c0b14d9ab53281fd43797b6665f7cf0be1b6e809f99681d28d-merged.mount has successfully entered the 'dead' state. Oct 04 12:33:51 managed-node2 systemd[1]: libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has successfully entered the 'dead' state. Oct 04 12:33:51 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice. -- Subject: Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished shutting down. Oct 04 12:33:51 managed-node2 systemd[1]: machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice: Consumed 194ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice completed and consumed the indicated resources. Oct 04 12:33:51 managed-node2 podman[31711]: Pods stopped: Oct 04 12:33:51 managed-node2 podman[31711]: 668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3 Oct 04 12:33:51 managed-node2 podman[31711]: Pods removed: Oct 04 12:33:51 managed-node2 podman[31711]: 668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3 Oct 04 12:33:51 managed-node2 podman[31711]: Secrets removed: Oct 04 12:33:51 managed-node2 podman[31711]: Volumes removed: Oct 04 12:33:51 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice. -- Subject: Unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:33:51 managed-node2 systemd[1]: Started libcontainer container 33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12. 
-- Subject: Unit libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:51 managed-node2 NetworkManager[660]: [1759595631.3224] manager: (vethe1bf25d0): new Veth device (/org/freedesktop/NetworkManager/Devices/7) Oct 04 12:33:51 managed-node2 systemd-udevd[31876]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Oct 04 12:33:51 managed-node2 systemd-udevd[31876]: Could not generate persistent MAC address for vethe1bf25d0: No such file or directory Oct 04 12:33:51 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethe1bf25d0: link is not ready Oct 04 12:33:51 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered blocking state Oct 04 12:33:51 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered disabled state Oct 04 12:33:51 managed-node2 kernel: device vethe1bf25d0 entered promiscuous mode Oct 04 12:33:51 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered blocking state Oct 04 12:33:51 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered forwarding state Oct 04 12:33:51 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe1bf25d0: link becomes ready Oct 04 12:33:51 managed-node2 NetworkManager[660]: [1759595631.3521] device (vethe1bf25d0): carrier: link connected Oct 04 12:33:51 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses Oct 04 12:33:51 managed-node2 systemd[1]: Started libcontainer container 8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46. -- Subject: Unit libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:51 managed-node2 systemd[1]: Started libcontainer container d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a. -- Subject: Unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:51 managed-node2 podman[31711]: Pod: Oct 04 12:33:51 managed-node2 podman[31711]: 6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c Oct 04 12:33:51 managed-node2 podman[31711]: Container: Oct 04 12:33:51 managed-node2 podman[31711]: d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a Oct 04 12:33:51 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished starting up. -- -- The start-up result is done. 
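The entries that follow verify each pod with podman pod inspect --format '{{.State}}'. In a playbook the Go template braces have to be shielded from Jinja2 templating, for example with the !unsafe tag; a sketch of one such check (the register/changed_when handling is illustrative, not taken from this log):

- name: Check that the httpd3 pod reports Running (sketch of the inspect calls below)
  ansible.builtin.command: !unsafe podman pod inspect httpd3 --format '{{.State}}'
  register: pod_state
  changed_when: false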
Oct 04 12:33:52 managed-node2 sudo[32110]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwrjtqrmhbkjdxpmvsixtkgxksntzspm ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595632.0315228-17132-238240516154633/AnsiballZ_command.py' Oct 04 12:33:52 managed-node2 sudo[32110]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:33:52 managed-node2 platform-python[32113]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:52 managed-node2 systemd[25493]: Started podman-32122.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:33:52 managed-node2 sudo[32110]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:33:52 managed-node2 platform-python[32260]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:53 managed-node2 platform-python[32391]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:53 managed-node2 sudo[32521]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpttuiaoavnpntacugmnwvpgddxwnpay ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595633.3541243-17183-26845545367359/AnsiballZ_command.py' Oct 04 12:33:53 managed-node2 sudo[32521]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:33:53 managed-node2 platform-python[32524]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:53 managed-node2 sudo[32521]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:33:53 managed-node2 platform-python[32650]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:54 managed-node2 platform-python[32776]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:54 managed-node2 platform-python[32902]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None 
content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:55 managed-node2 platform-python[33027]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:55 managed-node2 rsyslogd[1019]: imjournal: journal files changed, reloading... [v8.2102.0-15.el8 try https://www.rsyslog.com/e/0 ] Oct 04 12:33:55 managed-node2 platform-python[33152]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_herecvo4_podman/httpd1-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:56 managed-node2 platform-python[33276]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_herecvo4_podman/httpd2-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:56 managed-node2 platform-python[33400]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_herecvo4_podman/httpd3-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:59 managed-node2 platform-python[33649]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:00 managed-node2 platform-python[33778]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:03 managed-node2 platform-python[33903]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:34:06 managed-node2 platform-python[34026]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Oct 04 12:34:06 managed-node2 platform-python[34153]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Oct 04 12:34:07 managed-node2 platform-python[34280]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] 
source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:34:09 managed-node2 platform-python[34403]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:34:11 managed-node2 platform-python[34526]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:34:14 managed-node2 platform-python[34649]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:34:16 managed-node2 platform-python[34772]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 04 12:34:18 managed-node2 platform-python[34933]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Oct 04 12:34:19 managed-node2 platform-python[35056]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Oct 04 12:34:23 managed-node2 platform-python[35179]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Oct 04 12:34:24 managed-node2 platform-python[35303]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:24 managed-node2 platform-python[35428]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:24 managed-node2 platform-python[35552]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:25 managed-node2 platform-python[35676]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None 
chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:26 managed-node2 platform-python[35800]: ansible-command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Oct 04 12:34:27 managed-node2 platform-python[35923]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:27 managed-node2 platform-python[36046]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:27 managed-node2 sudo[36169]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajbwxxttcmtyfwkqugxtiqxlyfylyxbp ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595667.7794547-18775-44408969327564/AnsiballZ_podman_image.py' Oct 04 12:34:27 managed-node2 sudo[36169]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36174.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36182.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36189.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36199.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36207.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36215.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
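The host-preparation steps visible above, opening ports 15001-15003/tcp, labeling them http_port_t, and enabling lingering for podman_basic_user, map roughly to the tasks below. The module choices are illustrative, since the roles drive their own internal modules (firewall_lib, local_seport); only the enable-linger command is quoted verbatim from the journal:

- name: Open the test ports in firewalld (the role does this via firewall_lib above)
  ansible.posix.firewalld:
    port: 15001-15003/tcp
    permanent: true
    immediate: true
    state: enabled

- name: Allow httpd to bind the test ports under SELinux (mirrors local_seport above)
  community.general.seport:
    ports: 15001-15003
    proto: tcp
    setype: http_port_t
    state: present

- name: Let the rootless user's services outlive logins (exact command from the journal)
  ansible.builtin.command:
    cmd: loginctl enable-linger podman_basic_user
    creates: /var/lib/systemd/linger/podman_basic_user

The creates guard on the linger task is what makes the journal show the command only once per run: systemd drops a flag file under /var/lib/systemd/linger/ the first time.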
Oct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36223.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:34:28 managed-node2 sudo[36169]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:34:29 managed-node2 platform-python[36352]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:29 managed-node2 platform-python[36477]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:30 managed-node2 platform-python[36600]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 04 12:34:30 managed-node2 platform-python[36664]: ansible-file Invoked with owner=podman_basic_user group=3001 mode=0644 dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _original_basename=tmp4tbrh702 recurse=False state=file path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:30 managed-node2 sudo[36787]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-recvjrutxodvbgmoimxrlbeojjenzstr ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595670.5228426-19044-7471957653983/AnsiballZ_podman_play.py' Oct 04 12:34:30 managed-node2 sudo[36787]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:34:30 managed-node2 systemd[25493]: Started podman-36798.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
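As the sudo lines above show, every rootless step runs as podman_basic_user with XDG_RUNTIME_DIR pointed at the user's runtime directory; without it podman cannot locate the user's runtime root under /run/user/3001. A sketch of the deploy call, with paths taken from the journal:

- name: Play the kube file as the rootless user (state=started, as invoked above)
  containers.podman.podman_play:
    kube_file: /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
    state: started
  become: true
  become_user: podman_basic_user
  environment:
    XDG_RUNTIME_DIR: /run/user/3001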
Oct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Oct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Oct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-10-04T12:34:30-04:00" level=info msg="/bin/podman filtering at log level debug" time="2025-10-04T12:34:30-04:00" level=debug msg="Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)" time="2025-10-04T12:34:30-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-10-04T12:34:30-04:00" level=info msg="Using sqlite as database backend" time="2025-10-04T12:34:30-04:00" level=debug msg="Using graph driver overlay" time="2025-10-04T12:34:30-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" time="2025-10-04T12:34:30-04:00" level=debug msg="Using run root /run/user/3001/containers" time="2025-10-04T12:34:30-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" time="2025-10-04T12:34:30-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" time="2025-10-04T12:34:30-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" time="2025-10-04T12:34:30-04:00" level=debug msg="Using transient store: false" time="2025-10-04T12:34:30-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-10-04T12:34:30-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:34:30-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:34:30-04:00" level=debug msg="Cached value indicated that metacopy is not being used" time="2025-10-04T12:34:30-04:00" level=debug msg="Cached value indicated that native-diff is usable" time="2025-10-04T12:34:30-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" time="2025-10-04T12:34:30-04:00" level=debug msg="Initializing event backend file" time="2025-10-04T12:34:30-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-10-04T12:34:30-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-10-04T12:34:30-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-10-04T12:34:30-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2025-10-04T12:34:30-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-10-04T12:34:30-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2025-10-04T12:34:30-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for 
OCI runtime runsc: invalid argument" time="2025-10-04T12:34:30-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-10-04T12:34:30-04:00" level=info msg="Setting parallel job count to 7" time="2025-10-04T12:34:30-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:33:27.193743577 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-10-04T12:34:30-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-10-04T12:34:30-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:34:30-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:34:30-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-10-04T12:34:30-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:34:30-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:34:30-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05)" time="2025-10-04T12:34:30-04:00" level=debug msg="exporting opaque data as blob \"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:34:30-04:00" level=debug msg="Pod using bridge network mode" time="2025-10-04T12:34:30-04:00" level=debug msg="Got pod cgroup as /libpod_parent/34492a3900bc4a9b7b06bf0f56b147105736e26abab87e6881cbea1b0e369c1d" Error: adding pod to state: name "httpd1" is in use: pod already exists time="2025-10-04T12:34:30-04:00" level=debug msg="Shutting down engines" Oct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125 Oct 04 12:34:30 managed-node2 sudo[36787]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:34:31 managed-node2 platform-python[36952]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:34:32 managed-node2 platform-python[37076]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:33 managed-node2 platform-python[37201]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:34 managed-node2 platform-python[37325]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None 
selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:34 managed-node2 platform-python[37448]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:36 managed-node2 platform-python[37743]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:36 managed-node2 platform-python[37868]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:37 managed-node2 platform-python[37991]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 04 12:34:37 managed-node2 platform-python[38055]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd2.yml _original_basename=tmpeaiobce5 recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd2.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:34:37 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice. -- Subject: Unit machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice has finished starting up. -- -- The start-up result is done. 
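The rootless replay above exited 125 with 'name "httpd1" is in use: pod already exists', and the root-scope httpd2 replay below fails the same way: state=started does not remove a pod left over from an earlier run. The module's recreate option (logged as recreate=None in the invocations above) is the usual way to make such a replay idempotent; a sketch:

- name: Replay the kube file, tearing down any existing pod first (sketch)
  containers.podman.podman_play:
    kube_file: /etc/containers/ansible-kubernetes.d/httpd2.yml
    state: started
    recreate: true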
Oct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml Oct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Oct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-10-04T12:34:37-04:00" level=info msg="/usr/bin/podman filtering at log level debug" time="2025-10-04T12:34:37-04:00" level=debug msg="Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2025-10-04T12:34:37-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-10-04T12:34:37-04:00" level=info msg="Using sqlite as database backend" time="2025-10-04T12:34:37-04:00" level=debug msg="Using graph driver overlay" time="2025-10-04T12:34:37-04:00" level=debug msg="Using graph root /var/lib/containers/storage" time="2025-10-04T12:34:37-04:00" level=debug msg="Using run root /run/containers/storage" time="2025-10-04T12:34:37-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" time="2025-10-04T12:34:37-04:00" level=debug msg="Using tmp dir /run/libpod" time="2025-10-04T12:34:37-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" time="2025-10-04T12:34:37-04:00" level=debug msg="Using transient store: false" time="2025-10-04T12:34:37-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-10-04T12:34:37-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:34:37-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:34:37-04:00" level=debug msg="Cached value indicated that metacopy is being used" time="2025-10-04T12:34:37-04:00" level=debug msg="Cached value indicated that native-diff is not being used" time="2025-10-04T12:34:37-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" time="2025-10-04T12:34:37-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" time="2025-10-04T12:34:37-04:00" level=debug msg="Initializing event backend file" time="2025-10-04T12:34:37-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-10-04T12:34:37-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2025-10-04T12:34:37-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-10-04T12:34:37-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2025-10-04T12:34:37-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2025-10-04T12:34:37-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-10-04T12:34:37-04:00" level=debug msg="Configured OCI runtime crun-wasm 
initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-10-04T12:34:37-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-10-04T12:34:37-04:00" level=info msg="Setting parallel job count to 7" time="2025-10-04T12:34:37-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:31:14.473584587 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-10-04T12:34:37-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-10-04T12:34:37-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:34:37-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:34:37-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-10-04T12:34:37-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:34:37-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:34:37-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751)" time="2025-10-04T12:34:37-04:00" level=debug msg="exporting opaque data as blob \"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:34:37-04:00" level=debug msg="Pod using bridge network mode" time="2025-10-04T12:34:37-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice for parent machine.slice and name libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3" time="2025-10-04T12:34:37-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice" time="2025-10-04T12:34:37-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice" Error: adding pod to state: name "httpd2" is in use: pod already exists time="2025-10-04T12:34:37-04:00" level=debug msg="Shutting down engines" Oct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125 Oct 04 12:34:39 managed-node2 platform-python[38339]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:40 managed-node2 platform-python[38464]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:41 managed-node2 platform-python[38588]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3-create 
state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:41 managed-node2 platform-python[38711]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:43 managed-node2 platform-python[39006]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:43 managed-node2 platform-python[39131]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:43 managed-node2 platform-python[39254]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 04 12:34:44 managed-node2 platform-python[39318]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd3.yml _original_basename=tmps2by7p7f recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd3.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:44 managed-node2 platform-python[39441]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:34:44 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_989871f381fec1f937d3af216da739576d2f9779a5e90cbfaa850502488d38c5.slice. 
-- Subject: Unit machine-libpod_pod_989871f381fec1f937d3af216da739576d2f9779a5e90cbfaa850502488d38c5.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_989871f381fec1f937d3af216da739576d2f9779a5e90cbfaa850502488d38c5.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:34:45 managed-node2 sudo[39603]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjcjbadzeevkrtchrfausielavpgqkug ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595685.2238288-19784-175751449856146/AnsiballZ_command.py' Oct 04 12:34:45 managed-node2 sudo[39603]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:34:45 managed-node2 platform-python[39606]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:45 managed-node2 systemd[25493]: Started podman-39616.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:34:45 managed-node2 sudo[39603]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:34:45 managed-node2 platform-python[39746]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:46 managed-node2 platform-python[39877]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:46 managed-node2 sudo[40008]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jubjcpvqkodstxhlsbjwhddazysxbggp ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595686.427822-19841-159760384353159/AnsiballZ_command.py' Oct 04 12:34:46 managed-node2 sudo[40008]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:34:46 managed-node2 platform-python[40011]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:46 managed-node2 sudo[40008]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:34:47 managed-node2 platform-python[40137]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:47 managed-node2 platform-python[40263]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:47 managed-node2 platform-python[40389]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET 
follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:48 managed-node2 platform-python[40513]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:48 managed-node2 platform-python[40637]: ansible-uri Invoked with url=http://localhost:15003/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:51 managed-node2 platform-python[40886]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:52 managed-node2 platform-python[41015]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:55 managed-node2 platform-python[41140]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Oct 04 12:34:56 managed-node2 platform-python[41264]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:56 managed-node2 platform-python[41389]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:57 managed-node2 platform-python[41513]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:58 managed-node2 platform-python[41637]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:58 managed-node2 
platform-python[41761]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:58 managed-node2 sudo[41886]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzgxgxnqwlywntbcwmbxfvzqsvbvyldz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595698.7728405-20488-3389549821227/AnsiballZ_systemd.py' Oct 04 12:34:58 managed-node2 sudo[41886]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:34:59 managed-node2 platform-python[41889]: ansible-systemd Invoked with name= scope=user state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Oct 04 12:34:59 managed-node2 systemd[25493]: Reloading. Oct 04 12:34:59 managed-node2 systemd[25493]: Stopping A template for running K8s workloads via podman-kube-play... -- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Oct 04 12:34:59 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered disabled state Oct 04 12:34:59 managed-node2 kernel: device veth938ef76c left promiscuous mode Oct 04 12:34:59 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered disabled state Oct 04 12:34:59 managed-node2 podman[42042]: time="2025-10-04T12:34:59-04:00" level=error msg="container not running" Oct 04 12:34:59 managed-node2 podman[41905]: Pods stopped: Oct 04 12:34:59 managed-node2 podman[41905]: e2136086e2f9ee2160cfb4def2ccdfca084fdbe4f7fe9cb8a2664a8c62dbb7fb Oct 04 12:34:59 managed-node2 podman[41905]: Pods removed: Oct 04 12:34:59 managed-node2 podman[41905]: e2136086e2f9ee2160cfb4def2ccdfca084fdbe4f7fe9cb8a2664a8c62dbb7fb Oct 04 12:34:59 managed-node2 podman[41905]: Secrets removed: Oct 04 12:34:59 managed-node2 podman[41905]: Volumes removed: Oct 04 12:34:59 managed-node2 systemd[25493]: Stopped A template for running K8s workloads via podman-kube-play. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. 
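Teardown starts by stopping and disabling the user-scope kube-play unit; the journal again logs the unit name as empty (name=), so the variable below is a hypothetical stand-in for it. A sketch of the call:

- name: Stop and disable the user-scope kube-play unit (name redacted in the journal)
  ansible.builtin.systemd:
    name: "{{ kube_play_unit }}"  # hypothetical variable; the real name is not logged
    scope: user
    state: stopped
    enabled: false
  become: true
  become_user: podman_basic_user
  environment:
    XDG_RUNTIME_DIR: /run/user/3001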
Oct 04 12:34:59 managed-node2 sudo[41886]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:35:00 managed-node2 platform-python[42189]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:00 managed-node2 sudo[42314]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrcjhxluknhgauyehthbegbfnykwuipf ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595700.2224684-20562-98442846681196/AnsiballZ_podman_play.py' Oct 04 12:35:00 managed-node2 sudo[42314]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Oct 04 12:35:00 managed-node2 systemd[25493]: Started podman-42325.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
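With the unit stopped, podman_play is invoked again with state=absent, which, as the command line below shows, translates to podman kube play --down; removing the kube YAML afterwards completes the cleanup. A sketch of that removal pair, paths from the journal:

- name: Tear down the rootless httpd1 pod (state=absent, as invoked above)
  containers.podman.podman_play:
    kube_file: /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
    state: absent
  become: true
  become_user: podman_basic_user
  environment:
    XDG_RUNTIME_DIR: /run/user/3001

- name: Remove the kube YAML itself (mirrors the file call below)
  ansible.builtin.file:
    path: /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
    state: absent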
Oct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman kube play --down /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Oct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped: Pods removed: Secrets removed: Volumes removed: Oct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: Oct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Oct 04 12:35:00 managed-node2 sudo[42314]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:35:00 managed-node2 platform-python[42454]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:02 managed-node2 platform-python[42577]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:35:02 managed-node2 platform-python[42701]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:04 managed-node2 platform-python[42826]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:04 managed-node2 platform-python[42950]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Oct 04 12:35:04 managed-node2 systemd[1]: Reloading. Oct 04 12:35:04 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun shutting down. Oct 04 12:35:04 managed-node2 systemd[1]: libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope has successfully entered the 'dead' state. Oct 04 12:35:04 managed-node2 systemd[1]: libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope: Consumed 34ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope completed and consumed the indicated resources. Oct 04 12:35:04 managed-node2 systemd[1]: libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope has successfully entered the 'dead' state. Oct 04 12:35:04 managed-node2 systemd[1]: libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope: Consumed 33ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope completed and consumed the indicated resources. Oct 04 12:35:04 managed-node2 systemd[1]: var-lib-containers-storage-overlay-e31deca989ff1b9cab2066515ef70e9fb506731c52bd3eea5fcc524723f3fd95-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-e31deca989ff1b9cab2066515ef70e9fb506731c52bd3eea5fcc524723f3fd95-merged.mount has successfully entered the 'dead' state. Oct 04 12:35:05 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Oct 04 12:35:05 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state Oct 04 12:35:05 managed-node2 kernel: device veth44fc3814 left promiscuous mode Oct 04 12:35:05 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state Oct 04 12:35:05 managed-node2 systemd[1]: run-netns-netns\x2d1f7b53eb\x2d816f\x2d29e7\x2dfe7f\x2d6eb0cf8f8502.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2d1f7b53eb\x2d816f\x2d29e7\x2dfe7f\x2d6eb0cf8f8502.mount has successfully entered the 'dead' state. Oct 04 12:35:05 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782-userdata-shm.mount has successfully entered the 'dead' state. Oct 04 12:35:05 managed-node2 systemd[1]: var-lib-containers-storage-overlay-db61c157c6f5df8b7c955b07912536ab089af634fcd5e6ba324c673952746b22-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-db61c157c6f5df8b7c955b07912536ab089af634fcd5e6ba324c673952746b22-merged.mount has successfully entered the 'dead' state. Oct 04 12:35:05 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice. -- Subject: Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished shutting down. 
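[editor's note] The kernel and dnsmasq entries above are the network side of the teardown: the pod's veth is detached from the cni-podman1 bridge, the network namespace mount is released, and the dnsname plugin's dnsmasq re-reads its addnhosts file. If anything looks stuck, the CNI network state can be inspected directly (network name taken from the dnsname path in the log):

    podman network ls
    podman network inspect podman-default-kube-network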
Oct 04 12:35:05 managed-node2 systemd[1]: machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice: Consumed 67ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice completed and consumed the indicated resources. Oct 04 12:35:05 managed-node2 podman[42986]: Pods stopped: Oct 04 12:35:05 managed-node2 podman[42986]: 981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a Oct 04 12:35:05 managed-node2 podman[42986]: Pods removed: Oct 04 12:35:05 managed-node2 podman[42986]: 981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a Oct 04 12:35:05 managed-node2 podman[42986]: Secrets removed: Oct 04 12:35:05 managed-node2 podman[42986]: Volumes removed: Oct 04 12:35:05 managed-node2 systemd[1]: libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope has successfully entered the 'dead' state. Oct 04 12:35:05 managed-node2 systemd[1]: libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope: Consumed 32ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope completed and consumed the indicated resources. Oct 04 12:35:05 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1-userdata-shm.mount has successfully entered the 'dead' state. Oct 04 12:35:05 managed-node2 dnsmasq[29797]: exiting on receipt of SIGTERM Oct 04 12:35:05 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has successfully entered the 'dead' state. Oct 04 12:35:05 managed-node2 systemd[1]: Stopped A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished shutting down. Oct 04 12:35:05 managed-node2 platform-python[43261]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:05 managed-node2 systemd[1]: var-lib-containers-storage-overlay-4f2be2d0d065dff6d155c9d9c0fcaafe14b4a59e471bfad994f4271172df41f6-merged.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-4f2be2d0d065dff6d155c9d9c0fcaafe14b4a59e471bfad994f4271172df41f6-merged.mount has successfully entered the 'dead' state. Oct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd2.yml Oct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman kube play --down /etc/containers/ansible-kubernetes.d/httpd2.yml Oct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped: Pods removed: Secrets removed: Volumes removed: Oct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: Oct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Oct 04 12:35:06 managed-node2 platform-python[43522]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:08 managed-node2 platform-python[43645]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:09 managed-node2 platform-python[43770]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:10 managed-node2 platform-python[43894]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Oct 04 12:35:10 managed-node2 systemd[1]: Reloading. Oct 04 12:35:10 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun shutting down. Oct 04 12:35:10 managed-node2 systemd[1]: libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 systemd[1]: libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope: Consumed 33ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope completed and consumed the indicated resources. Oct 04 12:35:10 managed-node2 systemd[1]: libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 systemd[1]: libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope: Consumed 32ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope completed and consumed the indicated resources. Oct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay-41fd0b8b2abd57633d991528a5058698098ba238146a9b3301e23d7fc73f3208-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-41fd0b8b2abd57633d991528a5058698098ba238146a9b3301e23d7fc73f3208-merged.mount has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered disabled state Oct 04 12:35:10 managed-node2 kernel: device vethe1bf25d0 left promiscuous mode Oct 04 12:35:10 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered disabled state Oct 04 12:35:10 managed-node2 systemd[1]: run-netns-netns\x2d027c972b\x2d4f60\x2dd6f9\x2d5e22\x2d75c001071f96.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2d027c972b\x2d4f60\x2dd6f9\x2d5e22\x2d75c001071f96.mount has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46-userdata-shm.mount has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay-4d023b27109b3ccd34a210cfed9c7806ca6f6666b5b632942f94ae65bcc6121b-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-4d023b27109b3ccd34a210cfed9c7806ca6f6666b5b632942f94ae65bcc6121b-merged.mount has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice. 
-- Subject: Unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished shutting down. Oct 04 12:35:10 managed-node2 systemd[1]: machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice: Consumed 65ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice completed and consumed the indicated resources. Oct 04 12:35:10 managed-node2 systemd[1]: libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 systemd[1]: libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope: Consumed 34ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope completed and consumed the indicated resources. Oct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12-userdata-shm.mount has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 podman[43930]: Pods stopped: Oct 04 12:35:10 managed-node2 podman[43930]: 6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c Oct 04 12:35:10 managed-node2 podman[43930]: Pods removed: Oct 04 12:35:10 managed-node2 podman[43930]: 6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c Oct 04 12:35:10 managed-node2 podman[43930]: Secrets removed: Oct 04 12:35:10 managed-node2 podman[43930]: Volumes removed: Oct 04 12:35:10 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 systemd[1]: Stopped A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished shutting down. 
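[editor's note] The unit being stopped here is an instance of the podman-kube@.service template, where the instance string is simply the kube file path run through systemd's escaping rules, hence the \x2d sequences in the log. The mapping can be reproduced with systemd-escape:

    systemd-escape /etc/containers/ansible-kubernetes.d/httpd3.yml
    # -> -etc-containers-ansible\x2dkubernetes.d-httpd3.yml
    systemctl stop 'podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service'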
Oct 04 12:35:11 managed-node2 platform-python[44199]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay-92a24157aa21e0a67c0987f9e002760f709781079b00f0e3baf2a1840c17ef8f-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-92a24157aa21e0a67c0987f9e002760f709781079b00f0e3baf2a1840c17ef8f-merged.mount has successfully entered the 'dead' state. Oct 04 12:35:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Oct 04 12:35:11 managed-node2 platform-python[44324]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:35:11 managed-node2 platform-python[44324]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd3.yml Oct 04 12:35:12 managed-node2 platform-python[44460]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:13 managed-node2 platform-python[44583]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None Oct 04 12:35:13 managed-node2 platform-python[44707]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:14 managed-node2 sudo[44832]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yohamxjcuwwqowlxqaokqdnnkiwnlnpj ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595713.8242128-21262-181373190609036/AnsiballZ_podman_container_info.py' Oct 04 12:35:14 managed-node2 sudo[44832]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:35:14 managed-node2 platform-python[44835]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 04 12:35:14 managed-node2 systemd[25493]: Started podman-44837.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
Oct 04 12:35:14 managed-node2 sudo[44832]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:35:14 managed-node2 sudo[44966]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkavwbxyujjrxdsuzqtbvginlcolirvx ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595714.3794177-21283-159383776115316/AnsiballZ_command.py' Oct 04 12:35:14 managed-node2 sudo[44966]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:35:14 managed-node2 platform-python[44969]: ansible-command Invoked with _raw_params=podman network ls -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:14 managed-node2 systemd[25493]: Started podman-44971.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:35:14 managed-node2 sudo[44966]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:35:14 managed-node2 sudo[45126]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbzvbfpfbqkpdwzlmyrlntccwniydmtz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595714.849909-21318-262235972613194/AnsiballZ_command.py' Oct 04 12:35:14 managed-node2 sudo[45126]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:35:15 managed-node2 platform-python[45129]: ansible-command Invoked with _raw_params=podman secret ls -n -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:15 managed-node2 systemd[25493]: Started podman-45131.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:35:15 managed-node2 sudo[45126]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:35:15 managed-node2 platform-python[45261]: ansible-command Invoked with removes=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl disable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None stdin=None Oct 04 12:35:15 managed-node2 systemd[1]: Stopping User Manager for UID 3001... -- Subject: Unit user@3001.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@3001.service has begun shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopping podman-pause-f03acc05.scope. -- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Default. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopping D-Bus User Message Bus... 
-- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Removed slice podman\x2dkube.slice. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped D-Bus User Message Bus. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Basic System. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Timers. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped Mark boot as successful after the user session has run 2 minutes. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Paths. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Sockets. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Closed D-Bus User Message Bus Socket. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped podman-pause-f03acc05.scope. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Removed slice user.slice. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Reached target Shutdown. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:35:15 managed-node2 systemd[25493]: Started Exit the Session. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:35:15 managed-node2 systemd[25493]: Reached target Exit the Session. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:35:15 managed-node2 systemd[1]: user@3001.service: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit user@3001.service has successfully entered the 'dead' state. Oct 04 12:35:15 managed-node2 systemd[1]: Stopped User Manager for UID 3001. -- Subject: Unit user@3001.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@3001.service has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[1]: Stopping User runtime directory /run/user/3001... -- Subject: Unit user-runtime-dir@3001.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@3001.service has begun shutting down. Oct 04 12:35:15 managed-node2 systemd[1]: run-user-3001.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-user-3001.mount has successfully entered the 'dead' state. Oct 04 12:35:15 managed-node2 systemd[1]: user-runtime-dir@3001.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit user-runtime-dir@3001.service has successfully entered the 'dead' state. Oct 04 12:35:15 managed-node2 systemd[1]: Stopped User runtime directory /run/user/3001. -- Subject: Unit user-runtime-dir@3001.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@3001.service has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[1]: Removed slice User Slice of UID 3001. -- Subject: Unit user-3001.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-3001.slice has finished shutting down. 
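[editor's note] The cascade above (user@3001.service, run-user-3001.mount, user-runtime-dir@3001.service, user-3001.slice) is what loginctl disable-linger triggers: with no lingering and no open session, logind shuts down the entire per-user manager. The role gated the command on the linger marker file under /var/lib/systemd/linger, and the State query in the next entry confirms the result:

    # enable-linger creates /var/lib/systemd/linger/podman_basic_user;
    # disable-linger removes it and stops the idle user manager.
    loginctl disable-linger podman_basic_user
    loginctl show-user --value -p State podman_basic_user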
Oct 04 12:35:15 managed-node2 platform-python[45395]: ansible-command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:16 managed-node2 sudo[45519]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtaohsaclvtebxmwbqjbotdzsbchavvn ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595716.4392693-21370-112299601135578/AnsiballZ_command.py' Oct 04 12:35:16 managed-node2 sudo[45519]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:35:16 managed-node2 platform-python[45522]: ansible-command Invoked with _raw_params=podman pod exists httpd1 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:16 managed-node2 sudo[45519]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:35:17 managed-node2 platform-python[45652]: ansible-command Invoked with _raw_params=podman pod exists httpd2 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:17 managed-node2 platform-python[45782]: ansible-command Invoked with _raw_params=podman pod exists httpd3 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:17 managed-node2 sudo[45913]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swmaujacgizsrobtsjzpmjnnqwleohar ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595717.6763232-21443-47795602630278/AnsiballZ_command.py' Oct 04 12:35:17 managed-node2 sudo[45913]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:35:17 managed-node2 platform-python[45916]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:17 managed-node2 sudo[45913]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:35:18 managed-node2 platform-python[46042]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:18 managed-node2 platform-python[46168]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:19 managed-node2 platform-python[46294]: ansible-stat Invoked with path=/var/lib/systemd/linger/podman_basic_user follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:21 managed-node2 platform-python[46542]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:22 managed-node2 platform-python[46671]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:35:23 
managed-node2 platform-python[46795]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:26 managed-node2 platform-python[46920]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Oct 04 12:35:26 managed-node2 platform-python[47044]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:27 managed-node2 platform-python[47169]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:27 managed-node2 platform-python[47293]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:28 managed-node2 platform-python[47417]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:29 managed-node2 platform-python[47541]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:29 managed-node2 platform-python[47664]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:30 managed-node2 platform-python[47787]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:31 managed-node2 platform-python[47910]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:35:31 managed-node2 platform-python[48034]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:33 managed-node2 platform-python[48159]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:33 managed-node2 platform-python[48283]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Oct 04 12:35:34 managed-node2 platform-python[48410]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:34 managed-node2 platform-python[48533]: ansible-file Invoked with 
path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:35 managed-node2 platform-python[48656]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:37 managed-node2 platform-python[48781]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:37 managed-node2 platform-python[48905]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Oct 04 12:35:38 managed-node2 platform-python[49032]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:38 managed-node2 platform-python[49155]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:39 managed-node2 platform-python[49278]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None Oct 04 12:35:40 managed-node2 platform-python[49402]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:41 managed-node2 platform-python[49525]: ansible-file Invoked with path=/etc/containers/storage.conf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:41 managed-node2 platform-python[49648]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:42 managed-node2 sshd[49669]: Accepted publickey for root from 10.31.11.222 port 49618 ssh2: RSA 
SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Oct 04 12:35:42 managed-node2 systemd[1]: Started Session 9 of user root. -- Subject: Unit session-9.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-9.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:35:42 managed-node2 systemd-logind[598]: New session 9 of user root. -- Subject: A new session 9 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 9 has been created for the user root. -- -- The leading process of the session is 49669. Oct 04 12:35:42 managed-node2 sshd[49669]: pam_unix(sshd:session): session opened for user root by (uid=0) Oct 04 12:35:42 managed-node2 sshd[49672]: Received disconnect from 10.31.11.222 port 49618:11: disconnected by user Oct 04 12:35:42 managed-node2 sshd[49672]: Disconnected from user root 10.31.11.222 port 49618 Oct 04 12:35:42 managed-node2 sshd[49669]: pam_unix(sshd:session): session closed for user root Oct 04 12:35:42 managed-node2 systemd[1]: session-9.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-9.scope has successfully entered the 'dead' state. Oct 04 12:35:42 managed-node2 systemd-logind[598]: Session 9 logged out. Waiting for processes to exit. Oct 04 12:35:42 managed-node2 systemd-logind[598]: Removed session 9. -- Subject: Session 9 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 9 has been terminated. Oct 04 12:35:44 managed-node2 platform-python[49834]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Oct 04 12:35:44 managed-node2 platform-python[49961]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:45 managed-node2 platform-python[50084]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:47 managed-node2 platform-python[50332]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:48 managed-node2 platform-python[50461]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:35:48 managed-node2 platform-python[50585]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:50 managed-node2 sshd[50608]: Accepted publickey for root from 10.31.11.222 port 49628 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Oct 04 12:35:50 managed-node2 systemd[1]: Started Session 10 of user root. -- Subject: Unit session-10.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-10.scope has finished starting up. -- -- The start-up result is done. 
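[editor's note] The podman pod exists httpdN probes logged at 12:35:16-17 (run both as podman_basic_user and as root) are the cheap way to assert the kube workloads are really gone: the command prints nothing and signals the result through its exit status.

    podman pod exists httpd1; echo $?   # 0 if the pod exists, 1 once removed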
Oct 04 12:35:50 managed-node2 systemd-logind[598]: New session 10 of user root. -- Subject: A new session 10 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 10 has been created for the user root. -- -- The leading process of the session is 50608. Oct 04 12:35:50 managed-node2 sshd[50608]: pam_unix(sshd:session): session opened for user root by (uid=0) Oct 04 12:35:50 managed-node2 sshd[50611]: Received disconnect from 10.31.11.222 port 49628:11: disconnected by user Oct 04 12:35:50 managed-node2 sshd[50611]: Disconnected from user root 10.31.11.222 port 49628 Oct 04 12:35:50 managed-node2 sshd[50608]: pam_unix(sshd:session): session closed for user root Oct 04 12:35:50 managed-node2 systemd[1]: session-10.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-10.scope has successfully entered the 'dead' state. Oct 04 12:35:50 managed-node2 systemd-logind[598]: Session 10 logged out. Waiting for processes to exit. Oct 04 12:35:50 managed-node2 systemd-logind[598]: Removed session 10. -- Subject: Session 10 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 10 has been terminated. Oct 04 12:35:51 managed-node2 platform-python[50773]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Oct 04 12:35:54 managed-node2 platform-python[50925]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:55 managed-node2 platform-python[51048]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:57 managed-node2 platform-python[51296]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:58 managed-node2 platform-python[51425]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:35:58 managed-node2 platform-python[51549]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:02 managed-node2 sshd[51572]: Accepted publickey for root from 10.31.11.222 port 39572 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Oct 04 12:36:02 managed-node2 systemd[1]: Started Session 11 of user root. -- Subject: Unit session-11.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-11.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:36:02 managed-node2 systemd-logind[598]: New session 11 of user root. -- Subject: A new session 11 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 11 has been created for the user root. -- -- The leading process of the session is 51572. 
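[editor's note] The repeated stat of /usr/bin/getsubids, together with the getsubids runs logged at 12:35:27, is the role verifying that the rootless user has subordinate ID ranges, a prerequisite for user namespaces. By hand:

    getsubids podman_basic_user      # subordinate UID ranges (from /etc/subuid)
    getsubids -g podman_basic_user   # subordinate GID ranges (from /etc/subgid)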
Oct 04 12:36:02 managed-node2 sshd[51572]: pam_unix(sshd:session): session opened for user root by (uid=0) Oct 04 12:36:02 managed-node2 sshd[51575]: Received disconnect from 10.31.11.222 port 39572:11: disconnected by user Oct 04 12:36:02 managed-node2 sshd[51575]: Disconnected from user root 10.31.11.222 port 39572 Oct 04 12:36:02 managed-node2 sshd[51572]: pam_unix(sshd:session): session closed for user root Oct 04 12:36:02 managed-node2 systemd[1]: session-11.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-11.scope has successfully entered the 'dead' state. Oct 04 12:36:02 managed-node2 systemd-logind[598]: Session 11 logged out. Waiting for processes to exit. Oct 04 12:36:02 managed-node2 systemd-logind[598]: Removed session 11. -- Subject: Session 11 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 11 has been terminated. Oct 04 12:36:04 managed-node2 platform-python[51737]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Oct 04 12:36:04 managed-node2 platform-python[51889]: ansible-user Invoked with name=lsr_multiple_user1 state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on managed-node2 update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None Oct 04 12:36:04 managed-node2 useradd[51893]: new group: name=lsr_multiple_user1, GID=3002 Oct 04 12:36:04 managed-node2 useradd[51893]: new user: name=lsr_multiple_user1, UID=3002, GID=3002, home=/home/lsr_multiple_user1, shell=/bin/bash Oct 04 12:36:05 managed-node2 platform-python[52021]: ansible-user Invoked with name=lsr_multiple_user2 state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on managed-node2 update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None Oct 04 12:36:05 managed-node2 useradd[52025]: new group: name=lsr_multiple_user2, GID=3003 Oct 04 12:36:05 managed-node2 useradd[52025]: new user: name=lsr_multiple_user2, UID=3003, GID=3003, home=/home/lsr_multiple_user2, shell=/bin/bash Oct 04 12:36:06 managed-node2 platform-python[52153]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:06 managed-node2 platform-python[52276]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:09 managed-node2 platform-python[52524]: ansible-command Invoked with _raw_params=podman --version warn=True 
_uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:09 managed-node2 sshd[52551]: Accepted publickey for root from 10.31.11.222 port 39576 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Oct 04 12:36:09 managed-node2 systemd[1]: Started Session 12 of user root. -- Subject: Unit session-12.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-12.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:36:09 managed-node2 systemd-logind[598]: New session 12 of user root. -- Subject: A new session 12 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 12 has been created for the user root. -- -- The leading process of the session is 52551. Oct 04 12:36:09 managed-node2 sshd[52551]: pam_unix(sshd:session): session opened for user root by (uid=0) Oct 04 12:36:09 managed-node2 sshd[52554]: Received disconnect from 10.31.11.222 port 39576:11: disconnected by user Oct 04 12:36:09 managed-node2 sshd[52554]: Disconnected from user root 10.31.11.222 port 39576 Oct 04 12:36:09 managed-node2 sshd[52551]: pam_unix(sshd:session): session closed for user root Oct 04 12:36:09 managed-node2 systemd[1]: session-12.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-12.scope has successfully entered the 'dead' state. Oct 04 12:36:09 managed-node2 systemd-logind[598]: Session 12 logged out. Waiting for processes to exit. Oct 04 12:36:09 managed-node2 systemd-logind[598]: Removed session 12. -- Subject: Session 12 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 12 has been terminated. 
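[editor's note] The recurring stat probes for /run/ostree-booted and /sbin/transactional-update through this stretch of the log are the role's platform detection: on image-based (ostree) or transactional systems, package management has to be handled differently. The checks reduce to simple existence tests:

    test -e /run/ostree-booted && echo "ostree-booted system"
    test -e /sbin/transactional-update && echo "transactional-update system"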
Oct 04 12:36:11 managed-node2 platform-python[52716]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Oct 04 12:36:12 managed-node2 platform-python[52868]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:12 managed-node2 platform-python[52991]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:13 managed-node2 platform-python[53115]: ansible-dnf Invoked with name=['python3-pyasn1', 'python3-cryptography', 'python3-dbus'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:36:15 managed-node2 chronyd[603]: Source 74.208.25.46 replaced with 163.123.152.14 (2.centos.pool.ntp.org) Oct 04 12:36:16 managed-node2 platform-python[53244]: ansible-dnf Invoked with name=['certmonger'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Oct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Oct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Oct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Oct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Oct 04 12:36:18 managed-node2 systemd[1]: Reloading. Oct 04 12:36:18 managed-node2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. -- Subject: Unit run-ra349d219a6fb4468acd54152311c9c85.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit run-ra349d219a6fb4468acd54152311c9c85.service has finished starting up. -- -- The start-up result is done. Oct 04 12:36:18 managed-node2 systemd[1]: Starting man-db-cache-update.service... -- Subject: Unit man-db-cache-update.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has begun starting up. Oct 04 12:36:19 managed-node2 systemd[1]: Reloading. Oct 04 12:36:19 managed-node2 systemd[1]: man-db-cache-update.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit man-db-cache-update.service has successfully entered the 'dead' state. Oct 04 12:36:19 managed-node2 systemd[1]: Started man-db-cache-update.service. 
-- Subject: Unit man-db-cache-update.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has finished starting up. -- -- The start-up result is done. Oct 04 12:36:19 managed-node2 systemd[1]: run-ra349d219a6fb4468acd54152311c9c85.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ra349d219a6fb4468acd54152311c9c85.service has successfully entered the 'dead' state. Oct 04 12:36:20 managed-node2 platform-python[53877]: ansible-file Invoked with name=/etc/certmonger//pre-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//pre-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:36:20 managed-node2 platform-python[54000]: ansible-file Invoked with name=/etc/certmonger//post-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//post-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:36:21 managed-node2 platform-python[54123]: ansible-systemd Invoked with name=certmonger state=started enabled=True daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Oct 04 12:36:21 managed-node2 systemd[1]: Reloading. Oct 04 12:36:21 managed-node2 systemd[1]: Starting Certificate monitoring and PKI enrollment... -- Subject: Unit certmonger.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit certmonger.service has begun starting up. Oct 04 12:36:21 managed-node2 systemd[1]: Started Certificate monitoring and PKI enrollment. -- Subject: Unit certmonger.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit certmonger.service has finished starting up. -- -- The start-up result is done. 
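At this point certmonger is installed, its pre-scripts and post-scripts directories exist with mode 0700, and the service is enabled and running, so the role can submit the certificate request that appears next in the journal. In the test playbook this sequence is driven by the certificate system role; a hedged sketch of an equivalent invocation, reconstructed from the certificate_request arguments logged below (the play framing is assumed):

- hosts: all
  vars:
    certificate_requests:
      # name becomes /etc/pki/tls/certs/quadlet_demo.crt and
      # /etc/pki/tls/private/quadlet_demo.key on the managed node
      - name: quadlet_demo
        dns: ['localhost']
        ca: self-sign   # self-signed certificate via the certmonger provider
  roles:
    - fedora.linux_system_roles.certificate

The other arguments visible in the logged invocation (directory=/etc/pki/tls, auto_renew=True, the key_usage and extended_key_usage lists) appear to be the role's defaults rather than values set by the test.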
Oct 04 12:36:22 managed-node2 platform-python[54316]: ansible-fedora.linux_system_roles.certificate_request Invoked with name=quadlet_demo dns=['localhost'] directory=/etc/pki/tls wait=True ca=self-sign __header=# # Ansible managed # # system_role:certificate booted=True provider_config_directory=/etc/certmonger provider=certmonger key_usage=['digitalSignature', 'keyEncipherment'] extended_key_usage=['id-kp-serverAuth', 'id-kp-clientAuth'] auto_renew=True ip=None email=None common_name=None country=None state=None locality=None organization=None organizational_unit=None contact_email=None key_size=None owner=None group=None mode=None principal=None run_before=None run_after=None Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54332]: Certificate in file "/etc/pki/tls/certs/quadlet_demo.crt" issued by CA and saved. Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 platform-python[54454]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt Oct 04 12:36:23 managed-node2 platform-python[54577]: ansible-slurp Invoked with path=/etc/pki/tls/private/quadlet_demo.key src=/etc/pki/tls/private/quadlet_demo.key Oct 04 12:36:23 managed-node2 platform-python[54700]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt Oct 04 12:36:24 managed-node2 platform-python[54823]: ansible-command Invoked with _raw_params=getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:24 managed-node2 certmonger[54159]: 2025-10-04 12:36:24 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:24 managed-node2 platform-python[54947]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:36:24 managed-node2 platform-python[55070]: ansible-file Invoked with path=/etc/pki/tls/private/quadlet_demo.key state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:36:25 managed-node2 platform-python[55193]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER
backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:36:25 managed-node2 platform-python[55316]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:26 managed-node2 platform-python[55439]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:28 managed-node2 platform-python[55687]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:29 managed-node2 platform-python[55816]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:36:29 managed-node2 platform-python[55940]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:31 managed-node2 platform-python[56065]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:32 managed-node2 platform-python[56188]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:32 managed-node2 platform-python[56311]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:33 managed-node2 platform-python[56435]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:36:36 managed-node2 platform-python[56558]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Oct 04 12:36:36 managed-node2 platform-python[56685]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Oct 04 12:36:37 managed-node2 platform-python[56812]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:36:37 managed-node2 platform-python[56935]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True 
service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:36:40 managed-node2 platform-python[57058]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None TASK [Check] ******************************************************************* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:148 Saturday 04 October 2025 12:36:40 -0400 (0:00:00.410) 0:00:29.881 ****** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "ps", "-a" ], "delta": "0:00:00.077182", "end": "2025-10-04 12:36:40.811571", "rc": 0, "start": "2025-10-04 12:36:40.734389" } STDOUT: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES TASK [Check pods] ************************************************************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:152 Saturday 04 October 2025 12:36:40 -0400 (0:00:00.418) 0:00:30.300 ****** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "pod", "ps", "--ctr-ids", "--ctr-names", "--ctr-status" ], "delta": "0:00:00.033507", "end": "2025-10-04 12:36:41.184549", "failed_when_result": false, "rc": 0, "start": "2025-10-04 12:36:41.151042" } STDOUT: POD ID NAME STATUS CREATED INFRA ID IDS NAMES STATUS TASK [Check systemd] *********************************************************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:157 Saturday 04 October 2025 12:36:41 -0400 (0:00:00.393) 0:00:30.694 ****** ok: [managed-node2] => { "changed": false, "cmd": "set -euo pipefail; systemctl list-units --all | grep quadlet", "delta": "0:00:00.009859", "end": "2025-10-04 12:36:41.573488", "failed_when_result": false, "rc": 1, "start": "2025-10-04 12:36:41.563629" } MSG: non-zero return code TASK [LS] ********************************************************************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:165 Saturday 04 October 2025 12:36:41 -0400 (0:00:00.366) 0:00:31.060 ****** ok: [managed-node2] => { "changed": false, "cmd": [ "ls", "-alrtF", "/etc/systemd/system" ], "delta": "0:00:00.003252", "end": "2025-10-04 12:36:41.921873", "failed_when_result": false, "rc": 0, "start": "2025-10-04 12:36:41.918621" } STDOUT: total 8 lrwxrwxrwx. 1 root root 9 May 11 2019 systemd-timedated.service -> /dev/null drwxr-xr-x. 4 root root 169 May 29 2024 ../ lrwxrwxrwx. 1 root root 39 May 29 2024 syslog.service -> /usr/lib/systemd/system/rsyslog.service drwxr-xr-x. 2 root root 32 May 29 2024 getty.target.wants/ lrwxrwxrwx. 1 root root 37 May 29 2024 ctrl-alt-del.target -> /usr/lib/systemd/system/reboot.target lrwxrwxrwx. 1 root root 57 May 29 2024 dbus-org.freedesktop.nm-dispatcher.service -> /usr/lib/systemd/system/NetworkManager-dispatcher.service drwxr-xr-x. 2 root root 48 May 29 2024 network-online.target.wants/ lrwxrwxrwx. 1 root root 41 May 29 2024 dbus-org.freedesktop.timedate1.service -> /usr/lib/systemd/system/timedatex.service drwxr-xr-x. 
2 root root 61 May 29 2024 timers.target.wants/ drwxr-xr-x. 2 root root 31 May 29 2024 basic.target.wants/ drwxr-xr-x. 2 root root 38 May 29 2024 dev-virtio\x2dports-org.qemu.guest_agent.0.device.wants/ lrwxrwxrwx. 1 root root 41 May 29 2024 default.target -> /usr/lib/systemd/system/multi-user.target drwxr-xr-x. 2 root root 51 May 29 2024 sockets.target.wants/ drwxr-xr-x. 2 root root 31 May 29 2024 remote-fs.target.wants/ drwxr-xr-x. 2 root root 59 May 29 2024 sshd-keygen@.service.d/ drwxr-xr-x. 2 root root 119 May 29 2024 cloud-init.target.wants/ drwxr-xr-x. 2 root root 181 May 29 2024 sysinit.target.wants/ lrwxrwxrwx. 1 root root 41 Oct 4 12:30 dbus-org.fedoraproject.FirewallD1.service -> /usr/lib/systemd/system/firewalld.service drwxr-xr-x. 13 root root 4096 Oct 4 12:35 ./ drwxr-xr-x. 2 root root 4096 Oct 4 12:36 multi-user.target.wants/ TASK [Cleanup] ***************************************************************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:172 Saturday 04 October 2025 12:36:42 -0400 (0:00:00.366) 0:00:31.426 ****** TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3 Saturday 04 October 2025 12:36:42 -0400 (0:00:00.157) 0:00:31.584 ****** included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] **** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3 Saturday 04 October 2025 12:36:42 -0400 (0:00:00.088) 0:00:31.672 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************ task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11 Saturday 04 October 2025 12:36:42 -0400 (0:00:00.059) 0:00:31.732 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16 Saturday 04 October 2025 12:36:42 -0400 (0:00:00.038) 0:00:31.770 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if transactional-update exists in /sbin] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:23 Saturday 04 October 2025 12:36:42 -0400 (0:00:00.038) 0:00:31.809 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag if transactional-update exists] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:28 Saturday 04 October 2025 12:36:42 -0400 (0:00:00.038) 0:00:31.848 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: 
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:32 Saturday 04 October 2025 12:36:42 -0400 (0:00:00.037) 0:00:31.885 ****** ok: [managed-node2] => (item=RedHat.yml) => { "ansible_facts": { "__podman_packages": [ "podman", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/vars/RedHat.yml" ], "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml" } skipping: [managed-node2] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "item": "CentOS.yml", "skip_reason": "Conditional result was False" } ok: [managed-node2] => (item=CentOS_8.yml) => { "ansible_facts": { "__podman_packages": [ "crun", "podman", "podman-plugins", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/vars/CentOS_8.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_8.yml" } ok: [managed-node2] => (item=CentOS_8.yml) => { "ansible_facts": { "__podman_packages": [ "crun", "podman", "podman-plugins", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/vars/CentOS_8.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_8.yml" } TASK [fedora.linux_system_roles.podman : Gather the package facts] ************* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6 Saturday 04 October 2025 12:36:42 -0400 (0:00:00.080) 0:00:31.965 ****** ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Enable copr if requested] ************* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10 Saturday 04 October 2025 12:36:44 -0400 (0:00:01.512) 0:00:33.477 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14 Saturday 04 October 2025 12:36:44 -0400 (0:00:00.050) 0:00:33.528 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Notify user that reboot is needed to apply changes] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28 Saturday 04 October 2025 12:36:44 -0400 (0:00:00.063) 0:00:33.591 ****** skipping: [managed-node2] => {} TASK [fedora.linux_system_roles.podman : Reboot transactional update systems] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:33 Saturday 04 October 2025 12:36:44 -0400 (0:00:00.048) 0:00:33.640 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if reboot is needed and not set] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:38 Saturday 04 October 2025 12:36:44 -0400 (0:00:00.051) 0:00:33.691 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" 
} TASK [fedora.linux_system_roles.podman : Get podman version] ******************* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:46 Saturday 04 October 2025 12:36:44 -0400 (0:00:00.051) 0:00:33.743 ****** ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "--version" ], "delta": "0:00:00.027052", "end": "2025-10-04 12:36:44.654440", "rc": 0, "start": "2025-10-04 12:36:44.627388" } STDOUT: podman version 4.9.4-dev TASK [fedora.linux_system_roles.podman : Set podman version] ******************* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:52 Saturday 04 October 2025 12:36:44 -0400 (0:00:00.404) 0:00:34.147 ****** ok: [managed-node2] => { "ansible_facts": { "podman_version": "4.9.4-dev" }, "changed": false } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56 Saturday 04 October 2025 12:36:44 -0400 (0:00:00.035) 0:00:34.183 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:63 Saturday 04 October 2025 12:36:44 -0400 (0:00:00.032) 0:00:34.215 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } META: end_host conditional evaluated to false, continuing execution for managed-node2 TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80 Saturday 04 October 2025 12:36:44 -0400 (0:00:00.059) 0:00:34.274 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } META: end_host conditional evaluated to false, continuing execution for managed-node2 TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:109 Saturday 04 October 2025 12:36:44 -0400 (0:00:00.124) 0:00:34.398 ****** included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 04 October 2025 12:36:45 -0400 (0:00:00.099) 0:00:34.498 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 04 October 2025 12:36:45 -0400 (0:00:00.061) 0:00:34.560 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 04 October 2025 12:36:45 -0400 
(0:00:00.064) 0:00:34.624 ****** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 04 October 2025 12:36:45 -0400 (0:00:00.063) 0:00:34.688 ****** ok: [managed-node2] => { "changed": false, "stat": { "atime": 1759595444.223306, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "bb5b46ffbafcaa8c4021f3c8b3cb8594f48ef34b", "ctime": 1759595415.692989, "dev": 51713, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 6884013, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-sharedlib", "mode": "0755", "mtime": 1700557386.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 12640, "uid": 0, "version": "2755563640", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 04 October 2025 12:36:45 -0400 (0:00:00.472) 0:00:35.160 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 04 October 2025 12:36:45 -0400 (0:00:00.051) 0:00:35.212 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 04 October 2025 12:36:45 -0400 (0:00:00.052) 0:00:35.265 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 04 October 2025 12:36:45 -0400 (0:00:00.051) 0:00:35.317 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 04 October 2025 12:36:45 -0400 (0:00:00.053) 0:00:35.371 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 04 October 2025 12:36:45 -0400 (0:00:00.050) 0:00:35.422 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : 
Fail if user not in subuid file] ****** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.041) 0:00:35.463 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.043) 0:00:35.506 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set config file paths] **************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:115 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.045) 0:00:35.551 ****** ok: [managed-node2] => { "ansible_facts": { "__podman_container_conf_file": "/etc/containers/containers.conf.d/50-systemroles.conf", "__podman_parent_mode": "0755", "__podman_parent_path": "/etc/containers", "__podman_policy_json_file": "/etc/containers/policy.json", "__podman_registries_conf_file": "/etc/containers/registries.conf.d/50-systemroles.conf", "__podman_storage_conf_file": "/etc/containers/storage.conf" }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle container.conf.d] ************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:126 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.081) 0:00:35.633 ****** included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure containers.d exists] *********** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.060) 0:00:35.693 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update container config file] ********* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.032) 0:00:35.726 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] ************* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:129 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.030) 0:00:35.757 ****** included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] *********** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.065) 0:00:35.822 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update registries config file] ******** task path: 
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.030) 0:00:35.853 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle storage.conf] ****************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:132 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.065) 0:00:35.919 ****** included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent dir exists] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:7 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.066) 0:00:35.985 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Update storage config file] *********** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:15 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.039) 0:00:36.025 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Handle policy.json] ******************* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:135 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.046) 0:00:36.071 ****** included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:8 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.089) 0:00:36.161 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************ task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:16 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.038) 0:00:36.199 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get the existing policy.json] ********* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:21 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.038) 0:00:36.238 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Write new policy.json file] *********** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:27 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.033) 0:00:36.272 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Manage firewall for specified ports] ************************************* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:141 Saturday 04 October 
2025 12:36:46 -0400 (0:00:00.032) 0:00:36.304 ****** TASK [fedora.linux_system_roles.firewall : Setup firewalld] ******************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:2 Saturday 04 October 2025 12:36:46 -0400 (0:00:00.098) 0:00:36.403 ****** included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml for managed-node2 TASK [fedora.linux_system_roles.firewall : Ensure ansible_facts used by role] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:2 Saturday 04 October 2025 12:36:47 -0400 (0:00:00.092) 0:00:36.495 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Check if system is ostree] ********** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:10 Saturday 04 October 2025 12:36:47 -0400 (0:00:00.062) 0:00:36.558 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Set flag to indicate system is ostree] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:15 Saturday 04 October 2025 12:36:47 -0400 (0:00:00.049) 0:00:36.608 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Check if transactional-update exists in /sbin] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:22 Saturday 04 October 2025 12:36:47 -0400 (0:00:00.086) 0:00:36.694 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Set flag if transactional-update exists] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:27 Saturday 04 October 2025 12:36:47 -0400 (0:00:00.039) 0:00:36.733 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Run systemctl] ********************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:34 Saturday 04 October 2025 12:36:47 -0400 (0:00:00.039) 0:00:36.772 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Require installed systemd] ********** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:41 Saturday 04 October 2025 12:36:47 -0400 (0:00:00.032) 0:00:36.805 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Set flag to indicate that systemd runtime operations are available] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:46 Saturday 04 October 2025 12:36:47 -0400 (0:00:00.031) 0:00:36.836 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Install firewalld] ****************** task path: 
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:51 Saturday 04 October 2025 12:36:47 -0400 (0:00:00.031) 0:00:36.868 ****** ok: [managed-node2] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: firewalld TASK [fedora.linux_system_roles.firewall : Notify user that reboot is needed to apply changes] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:63 Saturday 04 October 2025 12:36:49 -0400 (0:00:02.445) 0:00:39.314 ****** skipping: [managed-node2] => {} TASK [fedora.linux_system_roles.firewall : Reboot transactional update systems] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:68 Saturday 04 October 2025 12:36:49 -0400 (0:00:00.051) 0:00:39.366 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Fail if reboot is needed and not set] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:73 Saturday 04 October 2025 12:36:49 -0400 (0:00:00.036) 0:00:39.402 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Check which conflicting services are enabled] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:5 Saturday 04 October 2025 12:36:50 -0400 (0:00:00.071) 0:00:39.473 ****** skipping: [managed-node2] => (item=nftables) => { "ansible_loop_var": "item", "changed": false, "item": "nftables", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=iptables) => { "ansible_loop_var": "item", "changed": false, "item": "iptables", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=ufw) => { "ansible_loop_var": "item", "changed": false, "item": "ufw", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Attempt to stop and disable conflicting services] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:14 Saturday 04 October 2025 12:36:50 -0400 (0:00:00.040) 0:00:39.514 ****** skipping: [managed-node2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': 'nftables', 'ansible_loop_var': 'item'}) => { "ansible_loop_var": "item", "changed": false, "item": { "ansible_loop_var": "item", "changed": false, "item": "nftables", "skip_reason": "Conditional result was False", "skipped": true }, "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': 'iptables', 'ansible_loop_var': 'item'}) => { "ansible_loop_var": "item", "changed": false, "item": { "ansible_loop_var": "item", "changed": false, "item": "iptables", "skip_reason": "Conditional result was False", "skipped": true }, "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': 'ufw', 'ansible_loop_var': 'item'}) => { "ansible_loop_var": "item", "changed": false, "item": { "ansible_loop_var": "item", "changed": false, "item": "ufw", "skip_reason": "Conditional result was False", "skipped": true }, 
"skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Unmask firewalld service] *********** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:24 Saturday 04 October 2025 12:36:50 -0400 (0:00:00.057) 0:00:39.571 ****** ok: [managed-node2] => { "changed": false, "name": "firewalld", "status": { "ActiveEnterTimestamp": "Sat 2025-10-04 12:30:50 EDT", "ActiveEnterTimestampMonotonic": "277910710", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "dbus.service polkit.service dbus.socket basic.target system.slice sysinit.target", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Sat 2025-10-04 12:30:50 EDT", "AssertTimestampMonotonic": "277605912", "Before": "network-pre.target shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "no", "CPUAffinity": "", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2025-10-04 12:30:50 EDT", "ConditionTimestampMonotonic": "277605911", "ConfigurationDirectoryMode": "0755", "Conflicts": "iptables.service ebtables.service ip6tables.service ipset.service nftables.service shutdown.target", "ControlGroup": "/system.slice/firewalld.service", "ControlPID": "0", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "man:firewalld(1)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "12889", "ExecMainStartTimestamp": "Sat 2025-10-04 12:30:50 EDT", "ExecMainStartTimestampMonotonic": "277607664", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", 
"IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2025-10-04 12:30:50 EDT", "InactiveExitTimestampMonotonic": "277607697", "InvocationID": "ad22fbe355574cf4b89374213cad5726", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "14003", "LimitNPROCSoft": "14003", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14003", "LimitSIGPENDINGSoft": "14003", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "12889", "MemoryAccounting": "yes", "MemoryCurrent": "43184128", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "firewalld.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice sysinit.target dbus.socket", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2025-10-04 12:30:50 EDT", "StateChangeTimestampMonotonic": "277910710", "StateDirectoryMode": "0755", 
"StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22405", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogTimestamp": "Sat 2025-10-04 12:30:50 EDT", "WatchdogTimestampMonotonic": "277910707", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Enable and start firewalld service] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:30 Saturday 04 October 2025 12:36:50 -0400 (0:00:00.510) 0:00:40.082 ****** ok: [managed-node2] => { "changed": false, "enabled": true, "name": "firewalld", "state": "started", "status": { "ActiveEnterTimestamp": "Sat 2025-10-04 12:30:50 EDT", "ActiveEnterTimestampMonotonic": "277910710", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "dbus.service polkit.service dbus.socket basic.target system.slice sysinit.target", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Sat 2025-10-04 12:30:50 EDT", "AssertTimestampMonotonic": "277605912", "Before": "network-pre.target shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "no", "CPUAffinity": "", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2025-10-04 12:30:50 EDT", "ConditionTimestampMonotonic": "277605911", "ConfigurationDirectoryMode": "0755", "Conflicts": "iptables.service ebtables.service ip6tables.service ipset.service nftables.service shutdown.target", "ControlGroup": "/system.slice/firewalld.service", "ControlPID": "0", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "man:firewalld(1)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": 
"0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "12889", "ExecMainStartTimestamp": "Sat 2025-10-04 12:30:50 EDT", "ExecMainStartTimestampMonotonic": "277607664", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2025-10-04 12:30:50 EDT", "InactiveExitTimestampMonotonic": "277607697", "InvocationID": "ad22fbe355574cf4b89374213cad5726", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "14003", "LimitNPROCSoft": "14003", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14003", "LimitSIGPENDINGSoft": "14003", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "12889", "MemoryAccounting": "yes", "MemoryCurrent": "43184128", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "firewalld.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice sysinit.target dbus.socket", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", 
"RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2025-10-04 12:30:50 EDT", "StateChangeTimestampMonotonic": "277910710", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22405", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogTimestamp": "Sat 2025-10-04 12:30:50 EDT", "WatchdogTimestampMonotonic": "277910707", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Check if previous replaced is defined] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:36 Saturday 04 October 2025 12:36:51 -0400 (0:00:00.519) 0:00:40.601 ****** ok: [managed-node2] => { "ansible_facts": { "__firewall_previous_replaced": false, "__firewall_python_cmd": "/usr/libexec/platform-python", "__firewall_report_changed": true }, "changed": false } TASK [fedora.linux_system_roles.firewall : Get config files, checksums before and remove] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:45 Saturday 04 October 2025 12:36:51 -0400 (0:00:00.063) 0:00:40.665 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Tell firewall module it is able to report changed] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:58 Saturday 04 October 2025 12:36:51 -0400 (0:00:00.039) 0:00:40.705 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Configure firewall] ***************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:74 Saturday 04 October 2025 12:36:51 -0400 (0:00:00.039) 0:00:40.744 ****** ok: [managed-node2] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "__firewall_changed": false, "ansible_loop_var": "item", "changed": false, "item": { "port": "8000/tcp", "state": "enabled" } } ok: [managed-node2] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "__firewall_changed": false, "ansible_loop_var": "item", "changed": false, "item": { "port": "9000/tcp", "state": "enabled" } } TASK [fedora.linux_system_roles.firewall : Gather firewall config information] *** task path: 
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:126 Saturday 04 October 2025 12:36:52 -0400 (0:00:01.109) 0:00:41.853 ****** skipping: [managed-node2] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "item": { "port": "8000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "item": { "port": "9000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] ******* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:137 Saturday 04 October 2025 12:36:52 -0400 (0:00:00.050) 0:00:41.903 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Gather firewall config if no arguments] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:146 Saturday 04 October 2025 12:36:52 -0400 (0:00:00.074) 0:00:41.978 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] ******* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:152 Saturday 04 October 2025 12:36:52 -0400 (0:00:00.034) 0:00:42.012 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Get config files, checksums after] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:161 Saturday 04 October 2025 12:36:52 -0400 (0:00:00.034) 0:00:42.046 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Calculate what has changed] ********* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:172 Saturday 04 October 2025 12:36:52 -0400 (0:00:00.030) 0:00:42.077 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Show diffs] ************************* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:178 Saturday 04 October 2025 12:36:52 -0400 (0:00:00.034) 0:00:42.111 ****** skipping: [managed-node2] => {} TASK [Manage selinux for specified ports] ************************************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:148 Saturday 04 October 2025 12:36:52 -0400 (0:00:00.032) 0:00:42.144 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:155 Saturday 04 October 2025 12:36:52 -0400 (0:00:00.031) 0:00:42.176 ****** ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - 
present] ******* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:159 Saturday 04 October 2025 12:36:52 -0400 (0:00:00.031) 0:00:42.208 ****** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle credential files - present] **** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:168 Saturday 04 October 2025 12:36:52 -0400 (0:00:00.029) 0:00:42.237 ****** skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle secrets] *********************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:177 Saturday 04 October 2025 12:36:52 -0400 (0:00:00.030) 0:00:42.268 ****** included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Set variables part 1] ***************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3 Saturday 04 October 2025 12:36:52 -0400 (0:00:00.101) 0:00:42.370 ****** ok: [managed-node2] => { "ansible_facts": { "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7 Saturday 04 October 2025 12:36:52 -0400 (0:00:00.033) 0:00:42.404 ****** included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.057) 0:00:42.461 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.040) 0:00:42.502 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.039) 0:00:42.542 ****** ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 04 October 2025 12:36:53 
-0400 (0:00:00.044) 0:00:42.587 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.081) 0:00:42.669 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.032) 0:00:42.701 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.033) 0:00:42.734 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.031) 0:00:42.766 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.033) 0:00:42.800 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.033) 0:00:42.833 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.030) 0:00:42.864 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.033) 0:00:42.897 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set variables part 2] ***************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:15 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.032) 0:00:42.929 ****** ok: [managed-node2] => { "ansible_facts": { "__podman_rootless": false, "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK 
[fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:21 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.042) 0:00:42.972 ****** included: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.060) 0:00:43.033 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.032) 0:00:43.065 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.030) 0:00:43.096 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:26 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.031) 0:00:43.127 ****** skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Manage each secret] ******************* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.031) 0:00:43.158 ****** fatal: [managed-node2]: FAILED! => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result" } TASK [Debug] ******************************************************************* task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:199 Saturday 04 October 2025 12:36:53 -0400 (0:00:00.034) 0:00:43.193 ****** ok: [managed-node2] => { "changed": false, "cmd": "exec 1>&2\nset -x\nset -o pipefail\nsystemctl list-units --plain -l --all | grep quadlet || :\nsystemctl list-unit-files --all | grep quadlet || :\nsystemctl list-units --plain --failed -l --all | grep quadlet || :\n", "delta": "0:00:00.389395", "end": "2025-10-04 12:36:54.439360", "rc": 0, "start": "2025-10-04 12:36:54.049965" } STDERR: + set -o pipefail + systemctl list-units --plain -l --all + grep quadlet + : + systemctl list-unit-files --all + grep quadlet + : + systemctl list-units --plain --failed -l --all + grep quadlet + : TASK [Get journald] ************************************************************ task path: /tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:209 Saturday 04 October 2025 12:36:54 -0400 (0:00:00.733) 0:00:43.926 ****** fatal: [managed-node2]: FAILED! 
=> { "changed": false, "cmd": [ "journalctl", "-ex" ], "delta": "0:00:00.025383", "end": "2025-10-04 12:36:54.803541", "failed_when_result": true, "rc": 0, "start": "2025-10-04 12:36:54.778158" } STDOUT: -- Logs begin at Sat 2025-10-04 12:26:12 EDT, end at Sat 2025-10-04 12:36:54 EDT. -- Oct 04 12:31:26 managed-node2 platform-python[16120]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Oct 04 12:31:26 managed-node2 platform-python[16247]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Oct 04 12:31:27 managed-node2 platform-python[16374]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:31:29 managed-node2 platform-python[16497]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:31:32 managed-node2 platform-python[16620]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:31:34 managed-node2 platform-python[16743]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:31:37 managed-node2 platform-python[16866]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 04 12:31:38 managed-node2 platform-python[17014]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Oct 04 12:31:39 managed-node2 platform-python[17137]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Oct 04 12:31:43 managed-node2 platform-python[17260]: ansible-stat Invoked with 
path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:31:46 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Oct 04 12:31:46 managed-node2 platform-python[17523]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:31:47 managed-node2 platform-python[17646]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:31:47 managed-node2 platform-python[17769]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 04 12:31:47 managed-node2 platform-python[17868]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/bogus.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595507.1829739-11549-146476787522942/source _original_basename=tmpisytpwv2 follow=False checksum=f8266a972ed3be7e204d2a67883fe3a22b8dbf18 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Oct 04 12:31:48 managed-node2 platform-python[17993]: ansible-containers.podman.podman_play Invoked with state=created kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:31:48 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice. -- Subject: Unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:31:48 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
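For reference, the podman_play entries above record the containers.podman.podman_play module deploying a kube spec. A minimal task that would produce an invocation like the one logged (state=created, kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml) is sketched below; the task name and surrounding play context are illustrative assumptions, only the module parameters are taken from the log:

    - name: Deploy kube spec with podman (sketch reconstructed from the log)
      containers.podman.podman_play:
        kube_file: /etc/containers/ansible-kubernetes.d/bogus.yml
        state: created
        executable: podman
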
Oct 04 12:31:51 managed-node2 platform-python[18280]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:31:52 managed-node2 platform-python[18409]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:31:55 managed-node2 platform-python[18534]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:31:58 managed-node2 platform-python[18657]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Oct 04 12:31:59 managed-node2 platform-python[18784]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Oct 04 12:31:59 managed-node2 platform-python[18911]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:32:01 managed-node2 platform-python[19034]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:32:04 managed-node2 platform-python[19157]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:32:06 managed-node2 platform-python[19280]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False 
validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:32:09 managed-node2 platform-python[19403]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 04 12:32:11 managed-node2 platform-python[19551]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Oct 04 12:32:11 managed-node2 platform-python[19674]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Oct 04 12:32:15 managed-node2 platform-python[19797]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:32:17 managed-node2 platform-python[19922]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:32:17 managed-node2 platform-python[20046]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Oct 04 12:32:18 managed-node2 platform-python[20173]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:32:18 managed-node2 platform-python[20298]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/nopull.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:32:18 managed-node2 platform-python[20298]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/nopull.yml Oct 04 12:32:18 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice. -- Subject: Unit machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice has finished shutting down. Oct 04 12:32:18 managed-node2 systemd[1]: machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice completed and consumed the indicated resources. Oct 04 12:32:18 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
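The state=absent invocation and slice removal above are the teardown path: the pod is removed via podman_play, the spec file is deleted, and dangling images are pruned. A hedged sketch of equivalent tasks, with names illustrative and parameters mirroring the logged invocations:

    - name: Tear down the kube-managed pod
      containers.podman.podman_play:
        kube_file: /etc/containers/ansible-kubernetes.d/nopull.yml
        state: absent

    - name: Remove the kube spec file
      file:
        path: /etc/containers/ansible-kubernetes.d/nopull.yml
        state: absent

    - name: Prune dangling images
      command: podman image prune -f
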
Oct 04 12:32:19 managed-node2 platform-python[20436]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:32:19 managed-node2 platform-python[20559]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:32:22 managed-node2 platform-python[20814]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:32:23 managed-node2 platform-python[20942]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:32:27 managed-node2 platform-python[21067]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:32:29 managed-node2 platform-python[21190]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Oct 04 12:32:30 managed-node2 platform-python[21317]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Oct 04 12:32:31 managed-node2 platform-python[21444]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:32:33 managed-node2 platform-python[21567]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:32:35 managed-node2 platform-python[21690]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False 
autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:32:38 managed-node2 platform-python[21813]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:32:40 managed-node2 platform-python[21936]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 04 12:32:42 managed-node2 platform-python[22084]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Oct 04 12:32:43 managed-node2 platform-python[22207]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Oct 04 12:32:47 managed-node2 platform-python[22330]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:32:48 managed-node2 platform-python[22455]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:32:49 managed-node2 platform-python[22579]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Oct 04 12:32:50 managed-node2 platform-python[22706]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:32:50 managed-node2 platform-python[22831]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:32:50 managed-node2 platform-python[22831]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/bogus.yml Oct 04 12:32:50 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice. 
-- Subject: Unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice has finished shutting down. Oct 04 12:32:50 managed-node2 systemd[1]: machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice completed and consumed the indicated resources. Oct 04 12:32:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Oct 04 12:32:50 managed-node2 platform-python[22970]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:32:51 managed-node2 platform-python[23093]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:32:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
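The recurring firewall_lib and local_seport invocations in this journal (port range 15001-15003/tcp mapped to http_port_t) are what the firewall and selinux system roles emit when driven by variables along these lines. The variable shapes below follow the roles' documented interfaces, but the exact play is an assumption, not taken from this run:

    - name: Open the test port range (illustrative)
      include_role:
        name: fedora.linux_system_roles.firewall
      vars:
        firewall:
          - port: 15001-15003/tcp
            state: enabled

    - name: Label the same range http_port_t (illustrative)
      include_role:
        name: fedora.linux_system_roles.selinux
      vars:
        selinux_ports:
          - ports: 15001-15003
            proto: tcp
            setype: http_port_t
            state: present
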
Oct 04 12:32:54 managed-node2 platform-python[23349]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:32:56 managed-node2 platform-python[23478]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:32:59 managed-node2 platform-python[23603]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:33:00 managed-node2 chronyd[603]: Detected falseticker 74.208.25.46 (2.centos.pool.ntp.org) Oct 04 12:33:01 managed-node2 platform-python[23726]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Oct 04 12:33:02 managed-node2 platform-python[23853]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Oct 04 12:33:03 managed-node2 platform-python[23980]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:33:05 managed-node2 platform-python[24103]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:33:07 managed-node2 platform-python[24226]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:33:10 managed-node2 platform-python[24349]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True 
install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:33:12 managed-node2 platform-python[24472]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 04 12:33:14 managed-node2 platform-python[24620]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Oct 04 12:33:15 managed-node2 platform-python[24743]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Oct 04 12:33:19 managed-node2 platform-python[24866]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Oct 04 12:33:19 managed-node2 platform-python[24990]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:33:20 managed-node2 platform-python[25115]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:20 managed-node2 platform-python[25239]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:21 managed-node2 platform-python[25363]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:22 managed-node2 platform-python[25487]: ansible-command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Oct 04 12:33:22 managed-node2 systemd[1]: Created slice User Slice of UID 3001. -- Subject: Unit user-3001.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-3001.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:33:22 managed-node2 systemd[1]: Starting User runtime directory /run/user/3001... -- Subject: Unit user-runtime-dir@3001.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@3001.service has begun starting up. Oct 04 12:33:22 managed-node2 systemd[1]: Started User runtime directory /run/user/3001. -- Subject: Unit user-runtime-dir@3001.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@3001.service has finished starting up. -- -- The start-up result is done. Oct 04 12:33:22 managed-node2 systemd[1]: Starting User Manager for UID 3001... -- Subject: Unit user@3001.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@3001.service has begun starting up. 
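The enable-linger step above is the standard way to keep a rootless user's services and per-user manager (user@3001.service here) running without an open login session; the creates= guard makes it idempotent. A sketch matching the logged invocation:

    - name: Enable linger for the rootless podman user
      command: loginctl enable-linger podman_basic_user
      args:
        creates: /var/lib/systemd/linger/podman_basic_user
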
Oct 04 12:33:22 managed-node2 systemd[25493]: pam_unix(systemd-user:session): session opened for user podman_basic_user by (uid=0) Oct 04 12:33:22 managed-node2 systemd[25493]: Starting D-Bus User Message Bus Socket. -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. Oct 04 12:33:22 managed-node2 systemd[25493]: Reached target Paths. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:33:22 managed-node2 systemd[25493]: Started Mark boot as successful after the user session has run 2 minutes. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:33:22 managed-node2 systemd[25493]: Reached target Timers. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:33:22 managed-node2 systemd[25493]: Listening on D-Bus User Message Bus Socket. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:33:22 managed-node2 systemd[25493]: Reached target Sockets. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:33:22 managed-node2 systemd[25493]: Reached target Basic System. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:33:22 managed-node2 systemd[25493]: Reached target Default. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:33:22 managed-node2 systemd[25493]: Startup finished in 26ms. -- Subject: User manager start-up is now complete -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The user manager instance for user 3001 has been started. All services queued -- for starting have been started. Note that other services might still be starting -- up or be started at any later time. -- -- Startup of the manager took 26808 microseconds. Oct 04 12:33:22 managed-node2 systemd[1]: Started User Manager for UID 3001. -- Subject: Unit user@3001.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@3001.service has finished starting up. -- -- The start-up result is done. 
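With the per-user manager up, the entries that follow copy httpd1.yml into the user's ~/.config/containers/ansible-kubernetes.d and run podman play kube inside that user's session (note the XDG_RUNTIME_DIR=/run/user/3001 in the sudo command logged below). A hedged equivalent task, assuming privilege escalation via become as the sudo entries suggest:

    - name: Start the rootless kube pod as podman_basic_user (sketch)
      containers.podman.podman_play:
        kube_file: /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
        state: started
      become: true
      become_user: podman_basic_user
      environment:
        XDG_RUNTIME_DIR: /run/user/3001
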
Oct 04 12:33:23 managed-node2 platform-python[25628]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:23 managed-node2 platform-python[25751]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:23 managed-node2 sudo[25874]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sifywbsrisccwijbkunlnmxfrflsisdd ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595603.7086673-15808-227803905840648/AnsiballZ_podman_image.py' Oct 04 12:33:23 managed-node2 sudo[25874]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:33:24 managed-node2 systemd[25493]: Started D-Bus User Message Bus. Oct 04 12:33:24 managed-node2 systemd[25493]: Created slice user.slice. Oct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25886.scope. Oct 04 12:33:24 managed-node2 systemd[25493]: Started podman-pause-f03acc05.scope. Oct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25902.scope. Oct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25917.scope. Oct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25926.scope. Oct 04 12:33:25 managed-node2 systemd[25493]: Started podman-25933.scope.
Oct 04 12:33:25 managed-node2 systemd[25493]: Started podman-25942.scope. Oct 04 12:33:25 managed-node2 sudo[25874]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:33:25 managed-node2 platform-python[26071]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:33:25 managed-node2 platform-python[26194]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:26 managed-node2 platform-python[26317]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 04 12:33:26 managed-node2 platform-python[26416]: ansible-copy Invoked with dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml owner=podman_basic_user group=3001 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595606.1679773-15914-270785518997063/source _original_basename=tmpck_isd86 follow=False checksum=4df6e405cb1c69d6fda71fca57ba10095c6652bf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Oct 04 12:33:26 managed-node2 sudo[26541]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhtmywupgtnbvlezrrhjughqngefyblk ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595606.8491445-15948-152734101610115/AnsiballZ_podman_play.py' Oct 04 12:33:26 managed-node2 sudo[26541]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:33:27 managed-node2 systemd[25493]: Started podman-26552.scope.
Oct 04 12:33:27 managed-node2 kernel: tun: Universal TUN/TAP device driver, 1.6 Oct 04 12:33:27 managed-node2 systemd[25493]: Started rootless-netns-edb70a77.scope. Oct 04 12:33:27 managed-node2 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth2fe45075: link is not ready Oct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered blocking state Oct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered disabled state Oct 04 12:33:27 managed-node2 kernel: device veth2fe45075 entered promiscuous mode Oct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Oct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth2fe45075: link becomes ready Oct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered blocking state Oct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered forwarding state Oct 04 12:33:27 managed-node2 dnsmasq[26740]: listening on cni-podman1(#3): 10.89.0.1 Oct 04 12:33:27 managed-node2 dnsmasq[26742]: started, version 2.79 cachesize 150 Oct 04 12:33:27 managed-node2 dnsmasq[26742]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify Oct 04 12:33:27 managed-node2 dnsmasq[26742]: using local addresses only for domain dns.podman Oct 04 12:33:27 managed-node2 dnsmasq[26742]: reading /etc/resolv.conf Oct 04 12:33:27 managed-node2 dnsmasq[26742]: using local addresses only for domain dns.podman Oct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.0.2.3#53 Oct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.29.169.13#53 Oct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.29.170.12#53 Oct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.2.32.1#53 Oct 04 12:33:27 managed-node2 dnsmasq[26742]: read /run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Oct 04 12:33:27 managed-node2 conmon[26754]: conmon 978f42b0916c823a3a50 : failed to write to /proc/self/oom_score_adj: Permission denied Oct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/14/attach} Oct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : terminal_ctrl_fd: 14 Oct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : winsz read side: 17, winsz write side: 18 Oct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : container PID: 26765 Oct 04 12:33:27 managed-node2 conmon[26775]: conmon 4c95f0539eb18fb7ecd6 : failed to write to /proc/self/oom_score_adj: Permission denied Oct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach} Oct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 :
terminal_ctrl_fd: 13 Oct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : winsz read side: 16, winsz write side: 17 Oct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : container PID: 26786 Oct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Oct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod: 4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d Container: 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 Oct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-10-04T12:33:27-04:00" level=info msg="/bin/podman filtering at log level debug" time="2025-10-04T12:33:27-04:00" level=debug msg="Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)" time="2025-10-04T12:33:27-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-10-04T12:33:27-04:00" level=info msg="Using sqlite as database backend" time="2025-10-04T12:33:27-04:00" level=debug msg="Using graph driver overlay" time="2025-10-04T12:33:27-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Using run root /run/user/3001/containers" time="2025-10-04T12:33:27-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" time="2025-10-04T12:33:27-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" time="2025-10-04T12:33:27-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" time="2025-10-04T12:33:27-04:00" level=debug msg="Using transient store: false" time="2025-10-04T12:33:27-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:33:27-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:33:27-04:00" level=debug msg="Cached value indicated that metacopy is not being used" time="2025-10-04T12:33:27-04:00" level=debug msg="Cached value indicated that native-diff is usable" time="2025-10-04T12:33:27-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" time="2025-10-04T12:33:27-04:00" level=debug msg="Initializing event backend file" time="2025-10-04T12:33:27-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-10-04T12:33:27-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2025-10-04T12:33:27-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-10-04T12:33:27-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-10-04T12:33:27-04:00" level=debug msg="Configured OCI runtime runj 
initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-10-04T12:33:27-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2025-10-04T12:33:27-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2025-10-04T12:33:27-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-10-04T12:33:27-04:00" level=info msg="Setting parallel job count to 7" time="2025-10-04T12:33:27-04:00" level=debug msg="Successfully loaded 1 networks" time="2025-10-04T12:33:27-04:00" level=debug msg="found free device name cni-podman1" time="2025-10-04T12:33:27-04:00" level=debug msg="found free ipv4 network subnet 10.89.0.0/24" time="2025-10-04T12:33:27-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:33:27.193743577 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-10-04T12:33:27-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-10-04T12:33:27-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-10-04T12:33:27-04:00" level=debug msg="reference \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\" does not resolve to an image ID" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-10-04T12:33:27-04:00" level=debug msg="reference \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\" does not resolve to an image ID" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." 
time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="FROM \"scratch\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are not supported" time="2025-10-04T12:33:27-04:00" level=debug msg="Check for idmapped mounts support " time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="overlay: test mount indicated that volatile is being used" time="2025-10-04T12:33:27-04:00" level=debug msg="overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/empty,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/work,userxattr,volatile,context=\"system_u:object_r:container_file_t:s0:c105,c564\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Container ID: 9516e55badb1147a3bd380a45ee33bd293ab708eefc046d098c76c453fc83966" time="2025-10-04T12:33:27-04:00" level=debug msg="Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:copy Args:[/usr/libexec/podman/catatonit /catatonit] Flags:[] Attrs:map[] Message:COPY /usr/libexec/podman/catatonit /catatonit Heredocs:[] Original:COPY /usr/libexec/podman/catatonit /catatonit}" time="2025-10-04T12:33:27-04:00" level=debug msg="COPY []string(nil), imagebuilder.Copy{FromFS:false, From:\"\", Src:[]string{\"/usr/libexec/podman/catatonit\"}, Dest:\"/catatonit\", Download:false, Chown:\"\", Chmod:\"\", Checksum:\"\", Files:[]imagebuilder.File(nil)}" time="2025-10-04T12:33:27-04:00" level=debug msg="added content file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd" time="2025-10-04T12:33:27-04:00" level=debug msg="Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:entrypoint Args:[/catatonit -P] Flags:[] Attrs:map[json:true] Message:ENTRYPOINT /catatonit -P Heredocs:[] Original:ENTRYPOINT [\"/catatonit\", \"-P\"]}" time="2025-10-04T12:33:27-04:00" level=debug msg="COMMIT localhost/podman-pause:4.9.4-dev-1708535009" time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\"" time="2025-10-04T12:33:27-04:00" level=debug msg="COMMIT \"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\"" time="2025-10-04T12:33:27-04:00" level=debug msg="committing image with reference \"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\" is allowed by policy" time="2025-10-04T12:33:27-04:00" level=debug msg="layer list: [\"768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb\"]" time="2025-10-04T12:33:27-04:00" level=debug msg="using \"/var/tmp/buildah340804419\" to hold temporary data" time="2025-10-04T12:33:27-04:00" level=debug msg="Tar with 
options on /home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/diff" time="2025-10-04T12:33:27-04:00" level=debug msg="layer \"768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb\" size is 767488 bytes, uncompressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690, possibly-compressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690" time="2025-10-04T12:33:27-04:00" level=debug msg="OCIv1 config = {\"created\":\"2025-10-04T16:33:27.33236731Z\",\"architecture\":\"amd64\",\"os\":\"linux\",\"config\":{\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Entrypoint\":[\"/catatonit\",\"-P\"],\"Labels\":{\"io.buildah.version\":\"1.33.5\"}},\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"]},\"history\":[{\"created\":\"2025-10-04T16:33:27.331845264Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd in /catatonit \",\"empty_layer\":true},{\"created\":\"2025-10-04T16:33:27.335420758Z\",\"created_by\":\"/bin/sh -c #(nop) ENTRYPOINT [\\\"/catatonit\\\", \\\"-P\\\"]\"}]}" time="2025-10-04T12:33:27-04:00" level=debug msg="OCIv1 manifest = {\"schemaVersion\":2,\"mediaType\":\"application/vnd.oci.image.manifest.v1+json\",\"config\":{\"mediaType\":\"application/vnd.oci.image.config.v1+json\",\"digest\":\"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\",\"size\":667},\"layers\":[{\"mediaType\":\"application/vnd.oci.image.layer.v1.tar\",\"digest\":\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\",\"size\":767488}],\"annotations\":{\"org.opencontainers.image.base.digest\":\"\",\"org.opencontainers.image.base.name\":\"\"}}" time="2025-10-04T12:33:27-04:00" level=debug msg="Docker v2s2 config = {\"created\":\"2025-10-04T16:33:27.33236731Z\",\"container\":\"9516e55badb1147a3bd380a45ee33bd293ab708eefc046d098c76c453fc83966\",\"container_config\":{\"Hostname\":\"\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[],\"Image\":\"\",\"Volumes\":{},\"WorkingDir\":\"\",\"Entrypoint\":[\"/catatonit\",\"-P\"],\"OnBuild\":[],\"Labels\":{\"io.buildah.version\":\"1.33.5\"}},\"config\":{\"Hostname\":\"\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[],\"Image\":\"\",\"Volumes\":{},\"WorkingDir\":\"\",\"Entrypoint\":[\"/catatonit\",\"-P\"],\"OnBuild\":[],\"Labels\":{\"io.buildah.version\":\"1.33.5\"}},\"architecture\":\"amd64\",\"os\":\"linux\",\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"]},\"history\":[{\"created\":\"2025-10-04T16:33:27.331845264Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd in /catatonit \",\"empty_layer\":true},{\"created\":\"2025-10-04T16:33:27.335420758Z\",\"created_by\":\"/bin/sh -c #(nop) ENTRYPOINT [\\\"/catatonit\\\", \\\"-P\\\"]\"}]}" time="2025-10-04T12:33:27-04:00" level=debug msg="Docker v2s2 manifest = 
{\"schemaVersion\":2,\"mediaType\":\"application/vnd.docker.distribution.manifest.v2+json\",\"config\":{\"mediaType\":\"application/vnd.docker.container.image.v1+json\",\"size\":1341,\"digest\":\"sha256:cc08d8f0e313f02451a20252b1d70f6f69284663aede171c80a5525e2a51ba5b\"},\"layers\":[{\"mediaType\":\"application/vnd.docker.image.rootfs.diff.tar\",\"size\":767488,\"digest\":\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"}]}" time="2025-10-04T12:33:27-04:00" level=debug msg="Using SQLite blob info cache at /home/podman_basic_user/.local/share/containers/cache/blob-info-cache-v1.sqlite" time="2025-10-04T12:33:27-04:00" level=debug msg="IsRunningImageAllowed for image containers-storage:" time="2025-10-04T12:33:27-04:00" level=debug msg=" Using transport \"containers-storage\" policy section " time="2025-10-04T12:33:27-04:00" level=debug msg=" Requirement 0: allowed" time="2025-10-04T12:33:27-04:00" level=debug msg="Overall: allowed" time="2025-10-04T12:33:27-04:00" level=debug msg="start reading config" time="2025-10-04T12:33:27-04:00" level=debug msg="finished reading config" time="2025-10-04T12:33:27-04:00" level=debug msg="Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.docker.distribution.manifest.v1+json]" time="2025-10-04T12:33:27-04:00" level=debug msg="... will first try using the original manifest unmodified" time="2025-10-04T12:33:27-04:00" level=debug msg="Checking if we can reuse blob sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690: general substitution = true, compression for MIME type \"application/vnd.oci.image.layer.v1.tar\" = true" time="2025-10-04T12:33:27-04:00" level=debug msg="reading layer \"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"" time="2025-10-04T12:33:27-04:00" level=debug msg="No compression detected" time="2025-10-04T12:33:27-04:00" level=debug msg="Using original blob without modification" time="2025-10-04T12:33:27-04:00" level=debug msg="Applying tar in /home/podman_basic_user/.local/share/containers/storage/overlay/d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690/diff" time="2025-10-04T12:33:27-04:00" level=debug msg="finished reading layer \"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"" time="2025-10-04T12:33:27-04:00" level=debug msg="No compression detected" time="2025-10-04T12:33:27-04:00" level=debug msg="Compression change for blob sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05 (\"application/vnd.oci.image.config.v1+json\") not supported" time="2025-10-04T12:33:27-04:00" level=debug msg="Using original blob without modification" time="2025-10-04T12:33:27-04:00" level=debug msg="setting image creation date to 2025-10-04 16:33:27.33236731 +0000 UTC" time="2025-10-04T12:33:27-04:00" level=debug msg="created new image ID \"b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\" with metadata \"{}\"" time="2025-10-04T12:33:27-04:00" level=debug msg="added name \"localhost/podman-pause:4.9.4-dev-1708535009\" to image \"b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into 
\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\"" time="2025-10-04T12:33:27-04:00" level=debug msg="printing final image id \"b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Pod using bridge network mode" time="2025-10-04T12:33:27-04:00" level=debug msg="Got pod cgroup as /libpod_parent/4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d" time="2025-10-04T12:33:27-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05)" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Inspecting image b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Inspecting image b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05" time="2025-10-04T12:33:27-04:00" level=debug msg="Inspecting image b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05" time="2025-10-04T12:33:27-04:00" level=debug msg="using systemd mode: false" time="2025-10-04T12:33:27-04:00" level=debug msg="setting container name 4bfdec19f3e3-infra" time="2025-10-04T12:33:27-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Allocated lock 1 for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1" time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Created container \"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Container \"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\" has work directory 
\"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Container \"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\" has run directory \"/run/user/3001/containers/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Pulling image quay.io/libpod/testimage:20210610 (policy: missing)" time="2025-10-04T12:33:27-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." 
time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:27-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:27-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:27-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:27-04:00" level=debug msg="using systemd mode: false" time="2025-10-04T12:33:27-04:00" level=debug msg="adding container to pod httpd1" time="2025-10-04T12:33:27-04:00" level=debug msg="setting container name httpd1-httpd1" 
time="2025-10-04T12:33:27-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-10-04T12:33:27-04:00" level=info msg="Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding mount /proc" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding mount /dev" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding mount /dev/pts" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding mount /dev/mqueue" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding mount /sys" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding mount /sys/fs/cgroup" time="2025-10-04T12:33:27-04:00" level=debug msg="Allocated lock 2 for container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8" time="2025-10-04T12:33:27-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Created container \"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Container \"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\" has work directory \"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Container \"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\" has run directory \"/run/user/3001/containers/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Strongconnecting node 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1" time="2025-10-04T12:33:27-04:00" level=debug msg="Pushed 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 onto stack" time="2025-10-04T12:33:27-04:00" level=debug msg="Finishing node 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1. Popped 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 off stack" time="2025-10-04T12:33:27-04:00" level=debug msg="Strongconnecting node 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8" time="2025-10-04T12:33:27-04:00" level=debug msg="Pushed 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 onto stack" time="2025-10-04T12:33:27-04:00" level=debug msg="Finishing node 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8. 
Popped 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 off stack" time="2025-10-04T12:33:27-04:00" level=debug msg="overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/3P7PWYNTG5QJZJOWQ6XDK4NETN,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/work,userxattr,context=\"system_u:object_r:container_file_t:s0:c285,c421\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Made network namespace at /run/user/3001/netns/netns-f5551a3d-13a6-81b5-6f62-8de155b907e3 for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1" time="2025-10-04T12:33:27-04:00" level=debug msg="Mounted container \"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\" at \"/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/merged\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Created root filesystem for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 at /home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/merged" time="2025-10-04T12:33:27-04:00" level=debug msg="creating rootless network namespace with name \"rootless-netns-d22c9f230d0691b8f418\"" time="2025-10-04T12:33:27-04:00" level=debug msg="slirp4netns command: /bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -r 3 --netns-type=path /run/user/3001/netns/rootless-netns-d22c9f230d0691b8f418 tap0" time="2025-10-04T12:33:27-04:00" level=debug msg="The path of /etc/resolv.conf in the mount ns is \"/etc/resolv.conf\"" time="2025-10-04T12:33:27-04:00" level=debug msg="cni result for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:e2:98:f4:5f:02:10 Sandbox:} {Name:veth2fe45075 Mac:16:b6:29:b0:6d:39 Sandbox:} {Name:eth0 Mac:2a:18:12:08:ad:32 Sandbox:/run/user/3001/netns/netns-f5551a3d-13a6-81b5-6f62-8de155b907e3}] [{Version:4 Interface:0xc000c3e028 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}" time="2025-10-04T12:33:27-04:00" level=debug msg="rootlessport: time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Starting parent driver\"\n" time="2025-10-04T12:33:27-04:00" level=debug msg="rootlessport: time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"opaque=map[builtin.readypipepath:/run/user/3001/libpod/tmp/rootlessport4177357533/.bp-ready.pipe builtin.socketpath:/run/user/3001/libpod/tmp/rootlessport4177357533/.bp.sock]\"\n" time="2025-10-04T12:33:27-04:00" level=debug msg="rootlessport: time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Starting child driver in child netns (\\\"/proc/self/exe\\\" [rootlessport-child])\"\n" time="2025-10-04T12:33:27-04:00" level=debug msg="rootlessport: time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Waiting for initComplete\"\n" time="2025-10-04T12:33:27-04:00" level=debug msg="rootlessport: time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"initComplete is closed; parent and child established the communication channel\"\ntime=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Exposing ports [{ 80 15001 1 
tcp}]\"\n" time="2025-10-04T12:33:27-04:00" level=debug msg="rootlessport: time=\"2025-10-04T12:33:27-04:00\" level=info msg=Ready\n" time="2025-10-04T12:33:27-04:00" level=debug msg="rootlessport is ready" time="2025-10-04T12:33:27-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-10-04T12:33:27-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-10-04T12:33:27-04:00" level=debug msg="Workdir \"/\" resolved to host path \"/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/merged\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Created OCI spec for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/config.json" time="2025-10-04T12:33:27-04:00" level=debug msg="Got pod cgroup as " time="2025-10-04T12:33:27-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2025-10-04T12:33:27-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 -u 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata -p /run/user/3001/containers/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/pidfile -n 4bfdec19f3e3-infra --exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/3001/containers/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1]" time="2025-10-04T12:33:27-04:00" level=info msg="Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/libpod_parent: permission denied" [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied time="2025-10-04T12:33:27-04:00" level=debug msg="Received: 26765" time="2025-10-04T12:33:27-04:00" 
level=info msg="Got Conmon PID as 26755" time="2025-10-04T12:33:27-04:00" level=debug msg="Created container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 in OCI runtime" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding nameserver(s) from network status of '[\"10.89.0.1\"]'" time="2025-10-04T12:33:27-04:00" level=debug msg="Adding search domain(s) from network status of '[\"dns.podman\"]'" time="2025-10-04T12:33:27-04:00" level=debug msg="Starting container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 with command [/catatonit -P]" time="2025-10-04T12:33:27-04:00" level=debug msg="Started container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1" time="2025-10-04T12:33:27-04:00" level=debug msg="overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/FD7XHZOTU3ZCOHOMS6WJGARUCE,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/work,userxattr,context=\"system_u:object_r:container_file_t:s0:c285,c421\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Mounted container \"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\" at \"/home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/merged\"" time="2025-10-04T12:33:27-04:00" level=debug msg="Created root filesystem for container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 at /home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/merged" time="2025-10-04T12:33:27-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-10-04T12:33:27-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-10-04T12:33:27-04:00" level=debug msg="Workdir \"/var/www\" resolved to a volume or mount" time="2025-10-04T12:33:27-04:00" level=debug msg="Created OCI spec for container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/config.json" time="2025-10-04T12:33:27-04:00" level=debug msg="Got pod cgroup as " time="2025-10-04T12:33:27-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2025-10-04T12:33:27-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 -u 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata -p /run/user/3001/containers/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/pidfile -n httpd1-httpd1 --exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/ctr.log --log-level debug --syslog --conmon-pidfile 
/run/user/3001/containers/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8]" [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied time="2025-10-04T12:33:27-04:00" level=info msg="Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/conmon: permission denied" time="2025-10-04T12:33:27-04:00" level=debug msg="Received: 26786" time="2025-10-04T12:33:27-04:00" level=info msg="Got Conmon PID as 26776" time="2025-10-04T12:33:27-04:00" level=debug msg="Created container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 in OCI runtime" time="2025-10-04T12:33:27-04:00" level=debug msg="Starting container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 with command [/bin/busybox-extras httpd -f -p 80]" time="2025-10-04T12:33:27-04:00" level=debug msg="Started container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8" time="2025-10-04T12:33:27-04:00" level=debug msg="Called kube.PersistentPostRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)" time="2025-10-04T12:33:27-04:00" level=debug msg="Shutting down engines" Oct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Oct 04 12:33:27 managed-node2 sudo[26541]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:33:28 managed-node2 sudo[26917]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naqgeablflvziirakssvhgovxnyqlazn ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595608.143278-15965-116724055097581/AnsiballZ_systemd.py' Oct 04 12:33:28 managed-node2 sudo[26917]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:33:28 managed-node2 platform-python[26920]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None Oct 04 12:33:28 managed-node2 systemd[25493]: Reloading. 
Oct 04 12:33:28 managed-node2 sudo[26917]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:33:28 managed-node2 sudo[27054]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcawqykrekhxzainagfjkqbithxyjltw ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595608.6875198-15999-90547985485461/AnsiballZ_systemd.py' Oct 04 12:33:28 managed-node2 sudo[27054]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:33:29 managed-node2 platform-python[27057]: ansible-systemd Invoked with name= scope=user enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None Oct 04 12:33:29 managed-node2 systemd[25493]: Reloading. Oct 04 12:33:29 managed-node2 sudo[27054]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:33:29 managed-node2 sudo[27193]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sugdwogvzmuboswwbcwdjoyqjtijmrjd ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595609.2593784-16020-169137665275210/AnsiballZ_systemd.py' Oct 04 12:33:29 managed-node2 sudo[27193]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:33:29 managed-node2 dnsmasq[26742]: listening on cni-podman1(#3): fe80::e098:f4ff:fe5f:210%cni-podman1 Oct 04 12:33:29 managed-node2 platform-python[27196]: ansible-systemd Invoked with name= scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None Oct 04 12:33:29 managed-node2 systemd[25493]: Created slice podman\x2dkube.slice. Oct 04 12:33:29 managed-node2 systemd[25493]: Starting A template for running K8s workloads via podman-kube-play...
Oct 04 12:33:29 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : container 26765 exited with status 137 Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1)" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Oct 04 12:33:29 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : container 26786 exited with status 137 Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=info msg="Using sqlite as database backend" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using graph driver overlay" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using run root /run/user/3001/containers" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using transient store: false" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that overlay is supported" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that overlay is supported" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that metacopy is not being used" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that native-diff is usable" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Initializing event backend file" Oct 04 12:33:29 managed-node2 
/usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=info msg="Setting parallel job count to 7" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8)" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=info msg="Using sqlite as database backend" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using graph driver overlay" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using run root /run/user/3001/containers" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" Oct 04 12:33:29 
managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using transient store: false" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that overlay is supported" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that overlay is supported" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that metacopy is not being used" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Cached value indicated that native-diff is usable" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Initializing event backend file" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=info msg="Setting parallel job count to 7" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman 
--root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8)" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time="2025-10-04T12:33:29-04:00" level=debug msg="Shutting down engines" Oct 04 12:33:29 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered disabled state Oct 04 12:33:29 managed-node2 kernel: device veth2fe45075 left promiscuous mode Oct 04 12:33:29 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered disabled state Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1)" Oct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time="2025-10-04T12:33:29-04:00" level=debug msg="Shutting down engines" Oct 04 12:33:29 managed-node2 podman[27202]: Pods stopped: Oct 04 12:33:29 managed-node2 podman[27202]: 4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d Oct 04 12:33:29 managed-node2 podman[27202]: Pods removed: Oct 04 12:33:29 managed-node2 podman[27202]: 4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d Oct 04 12:33:29 managed-node2 podman[27202]: Secrets removed: Oct 04 12:33:29 managed-node2 podman[27202]: Volumes removed: Oct 04 12:33:30 managed-node2 systemd[25493]: Started rootless-netns-d4627493.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
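A note on the cleanup entries above: conmon re-invokes podman with the complete set of global flags (--root, --runroot, --cgroup-manager cgroupfs, --runtime runc, and so on) so that the cleanup process sees exactly the same rootless storage and runtime configuration as the original run. A minimal sketch for confirming that configuration from a proper login session for the rootless user (the machinectl wrapper is an assumption; rootless podman needs a real session so that XDG_RUNTIME_DIR=/run/user/<uid> exists):

    # Open a login session for the rootless user, then print the storage
    # and runtime settings that appear in the debug lines above.
    sudo machinectl shell podman_basic_user@ /bin/sh -c '
      podman info --format "{{.Store.GraphRoot}}"       # ~/.local/share/containers/storage
      podman info --format "{{.Store.RunRoot}}"         # /run/user/3001/containers
      podman info --format "{{.Host.OCIRuntime.Path}}"  # /usr/bin/runc
    '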
Oct 04 12:33:30 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth938ef76c: link is not ready Oct 04 12:33:30 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered blocking state Oct 04 12:33:30 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered disabled state Oct 04 12:33:30 managed-node2 kernel: device veth938ef76c entered promiscuous mode Oct 04 12:33:30 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered blocking state Oct 04 12:33:30 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered forwarding state Oct 04 12:33:30 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth938ef76c: link becomes ready Oct 04 12:33:30 managed-node2 dnsmasq[27452]: listening on cni-podman1(#3): 10.89.0.1 Oct 04 12:33:30 managed-node2 dnsmasq[27454]: started, version 2.79 cachesize 150 Oct 04 12:33:30 managed-node2 dnsmasq[27454]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify Oct 04 12:33:30 managed-node2 dnsmasq[27454]: using local addresses only for domain dns.podman Oct 04 12:33:30 managed-node2 dnsmasq[27454]: reading /etc/resolv.conf Oct 04 12:33:30 managed-node2 dnsmasq[27454]: using local addresses only for domain dns.podman Oct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.0.2.3#53 Oct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.29.169.13#53 Oct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.29.170.12#53 Oct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.2.32.1#53 Oct 04 12:33:30 managed-node2 dnsmasq[27454]: read /run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Oct 04 12:33:30 managed-node2 podman[27202]: Pod: Oct 04 12:33:30 managed-node2 podman[27202]: e2136086e2f9ee2160cfb4def2ccdfca084fdbe4f7fe9cb8a2664a8c62dbb7fb Oct 04 12:33:30 managed-node2 podman[27202]: Container: Oct 04 12:33:30 managed-node2 podman[27202]: e74648d47617035a35842176c0cd197e876af20efb66c9a6fbb560c1ba4c6833 Oct 04 12:33:30 managed-node2 systemd[25493]: Started A template for running K8s workloads via podman-kube-play. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
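The "A template for running K8s workloads via podman-kube-play" message is the podman-kube@.service template starting (here in the user scope). The instance name encodes the path of the kube YAML using systemd unit-name escaping, which is why the system-scope instance later in this log appears as podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service. A sketch of deriving and querying such a name, using the system-scope path taken from the log (add --user for the rootless instance):

    # Turn a kube yaml path into the matching template instance name:
    systemd-escape --template=podman-kube@.service \
        /etc/containers/ansible-kubernetes.d/httpd2.yml
    # -> podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service
    systemctl status \
        "podman-kube@$(systemd-escape /etc/containers/ansible-kubernetes.d/httpd2.yml).service"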
Oct 04 12:33:30 managed-node2 sudo[27193]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:33:31 managed-node2 platform-python[27630]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:33:32 managed-node2 dnsmasq[27454]: listening on cni-podman1(#3): fe80::f8fb:d3ff:fe6b:28b6%cni-podman1 Oct 04 12:33:32 managed-node2 platform-python[27754]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:33:33 managed-node2 platform-python[27879]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:34 managed-node2 platform-python[28003]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:34 managed-node2 platform-python[28126]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:35 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Oct 04 12:33:36 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
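The ansible-stat call against /usr/bin/getsubids above is how the role probes whether subordinate ID ranges can be queried for rootless users. A quick manual equivalent (getsubids ships with newer shadow-utils; the grep is the usual fallback when the binary is absent):

    getsubids podman_basic_user       # subordinate UID ranges
    getsubids -g podman_basic_user    # subordinate GID ranges
    grep podman_basic_user /etc/subuid /etc/subgid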
Oct 04 12:33:36 managed-node2 platform-python[28426]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:33:36 managed-node2 platform-python[28549]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:37 managed-node2 platform-python[28672]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 04 12:33:37 managed-node2 platform-python[28771]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd2.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595616.9605668-16444-215555946645887/source _original_basename=tmp7zrtpb5n follow=False checksum=65edd58cfda8e78be7cf81993b5521acb64e8edf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Oct 04 12:33:37 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:33:38 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice. -- Subject: Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1056] manager: (cni-podman1): new Bridge device (/org/freedesktop/NetworkManager/Devices/3) Oct 04 12:33:38 managed-node2 systemd-udevd[28943]: Using default interface naming scheme 'rhel-8.0'. 
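The ansible-copy entry records the SHA-1 of the deployed kube file, so the result can be verified on the managed node directly (expected digest copied from the log line above):

    sha1sum /etc/containers/ansible-kubernetes.d/httpd2.yml
    # 65edd58cfda8e78be7cf81993b5521acb64e8edf  /etc/containers/ansible-kubernetes.d/httpd2.yml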
Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1123] manager: (veth58b4002b): new Veth device (/org/freedesktop/NetworkManager/Devices/4) Oct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth58b4002b: link is not ready Oct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered blocking state Oct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered disabled state Oct 04 12:33:38 managed-node2 kernel: device veth58b4002b entered promiscuous mode Oct 04 12:33:38 managed-node2 systemd-udevd[28944]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Oct 04 12:33:38 managed-node2 systemd-udevd[28944]: Could not generate persistent MAC address for veth58b4002b: No such file or directory Oct 04 12:33:38 managed-node2 systemd-udevd[28943]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Oct 04 12:33:38 managed-node2 systemd-udevd[28943]: Could not generate persistent MAC address for cni-podman1: No such file or directory Oct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1326] device (cni-podman1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1331] device (cni-podman1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1338] device (cni-podman1): Activation: starting connection 'cni-podman1' (f4b0bed9-ed1a-4daa-9776-1b7c64cb04df) Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1339] device (cni-podman1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1342] device (cni-podman1): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1344] device (cni-podman1): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1345] device (cni-podman1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth58b4002b: link becomes ready Oct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered blocking state Oct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered forwarding state Oct 04 12:33:38 managed-node2 dbus-daemon[595]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.5' (uid=0 pid=660 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0") Oct 04 12:33:38 managed-node2 systemd[1]: Starting Network Manager Script Dispatcher Service... -- Subject: Unit NetworkManager-dispatcher.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager-dispatcher.service has begun starting up. 
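NetworkManager is only assuming the CNI bridge here (sys-iface-state: 'external'), not managing it. On hosts where that interplay causes trouble, the usual approach is to declare podman's interfaces unmanaged; a sketch, with an arbitrary conf.d file name:

    cat >/etc/NetworkManager/conf.d/99-podman-cni.conf <<'EOF'
    [keyfile]
    unmanaged-devices=interface-name:cni-podman*;interface-name:veth*
    EOF
    systemctl reload NetworkManager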
Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1545] device (veth58b4002b): carrier: link connected Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1548] device (cni-podman1): carrier: link connected Oct 04 12:33:38 managed-node2 dbus-daemon[595]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Oct 04 12:33:38 managed-node2 systemd[1]: Started Network Manager Script Dispatcher Service. -- Subject: Unit NetworkManager-dispatcher.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager-dispatcher.service has finished starting up. -- -- The start-up result is done. Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1968] device (cni-podman1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1970] device (cni-podman1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Oct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1979] device (cni-podman1): Activation: successful, device activated. Oct 04 12:33:38 managed-node2 dnsmasq[29065]: listening on cni-podman1(#3): 10.89.0.1 Oct 04 12:33:38 managed-node2 dnsmasq[29069]: started, version 2.79 cachesize 150 Oct 04 12:33:38 managed-node2 dnsmasq[29069]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify Oct 04 12:33:38 managed-node2 dnsmasq[29069]: using local addresses only for domain dns.podman Oct 04 12:33:38 managed-node2 dnsmasq[29069]: reading /etc/resolv.conf Oct 04 12:33:38 managed-node2 dnsmasq[29069]: using local addresses only for domain dns.podman Oct 04 12:33:38 managed-node2 dnsmasq[29069]: using nameserver 10.29.169.13#53 Oct 04 12:33:38 managed-node2 dnsmasq[29069]: using nameserver 10.29.170.12#53 Oct 04 12:33:38 managed-node2 dnsmasq[29069]: using nameserver 10.2.32.1#53 Oct 04 12:33:38 managed-node2 dnsmasq[29069]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Oct 04 12:33:38 managed-node2 systemd[1]: Started libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope. -- Subject: Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:38 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach} Oct 04 12:33:38 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : terminal_ctrl_fd: 13 Oct 04 12:33:38 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : winsz read side: 17, winsz write side: 18 Oct 04 12:33:38 managed-node2 systemd[1]: Started libcontainer container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7. -- Subject: Unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished starting up. -- -- The start-up result is done. 
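The dnsmasq instance above is spawned by the CNI dnsname plugin: it listens on the bridge address 10.89.0.1 and serves the dns.podman domain from an addnhosts file. Both can be inspected directly (paths taken from the log; the entry format shown in the comment is an assumption):

    podman network inspect podman-default-kube-network
    cat /run/containers/cni/dnsname/podman-default-kube-network/addnhosts
    # typically one "10.89.0.x <pod name>" line per pod on the network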
Oct 04 12:33:38 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : container PID: 29081 Oct 04 12:33:38 managed-node2 systemd[1]: Started libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope. -- Subject: Unit libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach} Oct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : terminal_ctrl_fd: 12 Oct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : winsz read side: 16, winsz write side: 17 Oct 04 12:33:38 managed-node2 systemd[1]: Started libcontainer container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a. -- Subject: Unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : container PID: 29103 Oct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml Oct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod: f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a Container: b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a Oct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-10-04T12:33:37-04:00" level=info msg="/usr/bin/podman filtering at log level debug" time="2025-10-04T12:33:37-04:00" level=debug msg="Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2025-10-04T12:33:37-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-10-04T12:33:37-04:00" level=info msg="Using sqlite as database backend" time="2025-10-04T12:33:37-04:00" level=debug msg="Using graph driver overlay" time="2025-10-04T12:33:37-04:00" level=debug msg="Using graph root /var/lib/containers/storage" time="2025-10-04T12:33:37-04:00" level=debug msg="Using run root /run/containers/storage" time="2025-10-04T12:33:37-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" time="2025-10-04T12:33:37-04:00" level=debug msg="Using tmp dir /run/libpod" time="2025-10-04T12:33:37-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" time="2025-10-04T12:33:37-04:00" level=debug msg="Using transient store: false" time="2025-10-04T12:33:37-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-10-04T12:33:37-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:33:37-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:33:37-04:00" level=debug msg="Cached value indicated 
that metacopy is being used" time="2025-10-04T12:33:37-04:00" level=debug msg="Cached value indicated that native-diff is not being used" time="2025-10-04T12:33:37-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" time="2025-10-04T12:33:37-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" time="2025-10-04T12:33:37-04:00" level=debug msg="Initializing event backend file" time="2025-10-04T12:33:37-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-10-04T12:33:37-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2025-10-04T12:33:37-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-10-04T12:33:37-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-10-04T12:33:37-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2025-10-04T12:33:37-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-10-04T12:33:37-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2025-10-04T12:33:37-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-10-04T12:33:37-04:00" level=info msg="Setting parallel job count to 7" time="2025-10-04T12:33:37-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:31:14.473584587 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-10-04T12:33:37-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-10-04T12:33:37-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:33:37-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:37-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." 
time="2025-10-04T12:33:37-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:33:37-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:33:37-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751)" time="2025-10-04T12:33:37-04:00" level=debug msg="exporting opaque data as blob \"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:33:37-04:00" level=debug msg="Pod using bridge network mode" time="2025-10-04T12:33:37-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice for parent machine.slice and name libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a" time="2025-10-04T12:33:38-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice" time="2025-10-04T12:33:38-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice" time="2025-10-04T12:33:38-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:38-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." 
time="2025-10-04T12:33:38-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751)" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Inspecting image 9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Inspecting image 9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751" time="2025-10-04T12:33:38-04:00" level=debug msg="Inspecting image 9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751" time="2025-10-04T12:33:38-04:00" level=debug msg="using systemd mode: false" time="2025-10-04T12:33:38-04:00" level=debug msg="setting container name f7eedbe6e6e1-infra" time="2025-10-04T12:33:38-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Allocated lock 1 for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7" time="2025-10-04T12:33:38-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are not supported" time="2025-10-04T12:33:38-04:00" level=debug msg="Check for idmapped mounts support " time="2025-10-04T12:33:38-04:00" level=debug msg="Created container \"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Container \"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\" has work directory \"/var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Container \"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\" has run directory \"/run/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:38-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" 
..." time="2025-10-04T12:33:38-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Pulling image quay.io/libpod/testimage:20210610 (policy: missing)" time="2025-10-04T12:33:38-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:38-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-10-04T12:33:38-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:38-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." 
time="2025-10-04T12:33:38-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:33:38-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-10-04T12:33:38-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-10-04T12:33:38-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:38-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-10-04T12:33:38-04:00" level=debug msg="using systemd mode: false" time="2025-10-04T12:33:38-04:00" level=debug msg="adding container to pod httpd2" time="2025-10-04T12:33:38-04:00" level=debug msg="setting container name httpd2-httpd2" 
time="2025-10-04T12:33:38-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-10-04T12:33:38-04:00" level=info msg="Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding mount /proc" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding mount /dev" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding mount /dev/pts" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding mount /dev/mqueue" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding mount /sys" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding mount /sys/fs/cgroup" time="2025-10-04T12:33:38-04:00" level=debug msg="Allocated lock 2 for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a" time="2025-10-04T12:33:38-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Created container \"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Container \"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\" has work directory \"/var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Container \"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\" has run directory \"/run/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Strongconnecting node acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7" time="2025-10-04T12:33:38-04:00" level=debug msg="Pushed acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 onto stack" time="2025-10-04T12:33:38-04:00" level=debug msg="Finishing node acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7. Popped acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 off stack" time="2025-10-04T12:33:38-04:00" level=debug msg="Strongconnecting node b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a" time="2025-10-04T12:33:38-04:00" level=debug msg="Pushed b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a onto stack" time="2025-10-04T12:33:38-04:00" level=debug msg="Finishing node b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a. 
Popped b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a off stack" time="2025-10-04T12:33:38-04:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/CLVCQDNEL47VMN42Y3O6VVBSEK,upperdir=/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/diff,workdir=/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c321,c454\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Mounted container \"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\" at \"/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/merged\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Created root filesystem for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 at /var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/merged" time="2025-10-04T12:33:38-04:00" level=debug msg="Made network namespace at /run/netns/netns-4bb92ac6-c391-8230-0912-824e2a801d42 for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7" time="2025-10-04T12:33:38-04:00" level=debug msg="cni result for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:92:f8:b0:67:7f:78 Sandbox:} {Name:veth58b4002b Mac:9e:e6:53:58:c5:ef Sandbox:} {Name:eth0 Mac:9a:79:68:03:db:b9 Sandbox:/run/netns/netns-4bb92ac6-c391-8230-0912-824e2a801d42}] [{Version:4 Interface:0xc0006223b8 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}" time="2025-10-04T12:33:38-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-10-04T12:33:38-04:00" level=debug msg="Setting Cgroups for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 to machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice:libpod:acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7" time="2025-10-04T12:33:38-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-10-04T12:33:38-04:00" level=debug msg="Workdir \"/\" resolved to host path \"/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/merged\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Created OCI spec for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 at /var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/config.json" time="2025-10-04T12:33:38-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice for parent machine.slice and name libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a" time="2025-10-04T12:33:38-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice" time="2025-10-04T12:33:38-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice" time="2025-10-04T12:33:38-04:00" level=debug msg="/usr/bin/conmon messages will be logged 
to syslog" time="2025-10-04T12:33:38-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 -u acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata -p /run/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/pidfile -n f7eedbe6e6e1-infra --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7]" time="2025-10-04T12:33:38-04:00" level=info msg="Running conmon under slice machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice and unitName libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope" time="2025-10-04T12:33:38-04:00" level=debug msg="Received: 29081" time="2025-10-04T12:33:38-04:00" level=info msg="Got Conmon PID as 29071" time="2025-10-04T12:33:38-04:00" level=debug msg="Created container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 in OCI runtime" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding nameserver(s) from network status of '[\"10.89.0.1\"]'" time="2025-10-04T12:33:38-04:00" level=debug msg="Adding search domain(s) from network status of '[\"dns.podman\"]'" time="2025-10-04T12:33:38-04:00" level=debug msg="Starting container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 with command [/catatonit -P]" time="2025-10-04T12:33:38-04:00" level=debug msg="Started container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7" time="2025-10-04T12:33:38-04:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/LBBH4VMJZF2KPCTZG3NWOHXUKQ,upperdir=/var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/diff,workdir=/var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c321,c454\"" 
time="2025-10-04T12:33:38-04:00" level=debug msg="Mounted container \"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\" at \"/var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/merged\"" time="2025-10-04T12:33:38-04:00" level=debug msg="Created root filesystem for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a at /var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/merged" time="2025-10-04T12:33:38-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-10-04T12:33:38-04:00" level=debug msg="Setting Cgroups for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a to machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice:libpod:b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a" time="2025-10-04T12:33:38-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-10-04T12:33:38-04:00" level=debug msg="Workdir \"/var/www\" resolved to a volume or mount" time="2025-10-04T12:33:38-04:00" level=debug msg="Created OCI spec for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a at /var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/config.json" time="2025-10-04T12:33:38-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice for parent machine.slice and name libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a" time="2025-10-04T12:33:38-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice" time="2025-10-04T12:33:38-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice" time="2025-10-04T12:33:38-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2025-10-04T12:33:38-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a -u b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata -p /run/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/pidfile -n httpd2-httpd2 --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg 
--volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a]" time="2025-10-04T12:33:38-04:00" level=info msg="Running conmon under slice machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice and unitName libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope" time="2025-10-04T12:33:38-04:00" level=debug msg="Received: 29103" time="2025-10-04T12:33:38-04:00" level=info msg="Got Conmon PID as 29092" time="2025-10-04T12:33:38-04:00" level=debug msg="Created container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a in OCI runtime" time="2025-10-04T12:33:38-04:00" level=debug msg="Starting container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a with command [/bin/busybox-extras httpd -f -p 80]" time="2025-10-04T12:33:38-04:00" level=debug msg="Started container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a" time="2025-10-04T12:33:38-04:00" level=debug msg="Called kube.PersistentPostRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2025-10-04T12:33:38-04:00" level=debug msg="Shutting down engines" Oct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Oct 04 12:33:39 managed-node2 platform-python[29234]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None Oct 04 12:33:39 managed-node2 systemd[1]: Reloading. Oct 04 12:33:39 managed-node2 dnsmasq[29069]: listening on cni-podman1(#3): fe80::90f8:b0ff:fe67:7f78%cni-podman1 Oct 04 12:33:39 managed-node2 platform-python[29403]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None Oct 04 12:33:39 managed-node2 systemd[1]: Reloading. Oct 04 12:33:40 managed-node2 platform-python[29558]: ansible-systemd Invoked with name= scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None Oct 04 12:33:40 managed-node2 systemd[1]: Created slice system-podman\x2dkube.slice. -- Subject: Unit system-podman\x2dkube.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit system-podman\x2dkube.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:33:40 managed-node2 systemd[1]: Starting A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun starting up. 
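The sequence at the end of this block is the role handing the workload over to systemd: podman_play ran /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml (rc 0), followed by daemon_reload, enable, and start of the podman-kube@ instance. Done by hand, that is roughly:

    podman play kube --start=true --log-level=debug \
        /etc/containers/ansible-kubernetes.d/httpd2.yml
    systemctl daemon-reload
    systemctl enable --now \
        "podman-kube@$(systemd-escape /etc/containers/ansible-kubernetes.d/httpd2.yml).service"

Starting the template unit also explains the SIGKILL exits that follow: the unit's own play kube run replaces the pod the module had already started (an inference from the exit-137 cleanup below, not stated verbatim in this log).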
Oct 04 12:33:40 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : container 29081 exited with status 137 Oct 04 12:33:40 managed-node2 systemd[1]: libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 systemd[1]: libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope: Consumed 31ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope completed and consumed the indicated resources. Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7)" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=info msg="Using sqlite as database backend" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using graph driver overlay" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using graph root /var/lib/containers/storage" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using run root /run/containers/storage" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using tmp dir /run/libpod" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using transient store: false" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that overlay is supported" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that overlay is supported" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that metacopy is being 
used" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that native-diff is not being used" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Initializing event backend file" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=info msg="Setting parallel job count to 7" Oct 04 12:33:40 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : container 29103 exited with status 137 Oct 04 12:33:40 managed-node2 systemd[1]: libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 systemd[1]: libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope: Consumed 33ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope completed and consumed the indicated resources. 
Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a)" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=info msg="Using sqlite as database backend" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using graph driver overlay" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using graph root /var/lib/containers/storage" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using run root /run/containers/storage" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using tmp dir /run/libpod" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using transient store: false" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that overlay is supported" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that overlay is supported" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that metacopy is being used" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Cached value indicated that native-diff is not being used" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Initializing event backend file" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime runj 
initialization failed: no valid executable found for OCI runtime runj: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=info msg="Setting parallel job count to 7" Oct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay-e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0-merged.mount has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a)" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time="2025-10-04T12:33:40-04:00" level=debug msg="Shutting down engines" Oct 04 12:33:40 managed-node2 systemd[1]: libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has successfully entered the 'dead' state. 
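The repeated "Configured OCI runtime ... initialization failed: no valid executable found" messages are informational only: podman probes every runtime listed in containers.conf and keeps the first one whose binary actually exists, settling on /usr/bin/runc here. A quick way to confirm the selection (a sketch; expected output shown as a comment):

    podman info --format '{{.Host.OCIRuntime.Name}}: {{.Host.OCIRuntime.Path}}'
    # runc: /usr/bin/runc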
Oct 04 12:33:40 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered disabled state Oct 04 12:33:40 managed-node2 kernel: device veth58b4002b left promiscuous mode Oct 04 12:33:40 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered disabled state Oct 04 12:33:40 managed-node2 systemd[1]: run-netns-netns\x2d4bb92ac6\x2dc391\x2d8230\x2d0912\x2d824e2a801d42.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2d4bb92ac6\x2dc391\x2d8230\x2d0912\x2d824e2a801d42.mount has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7-userdata-shm.mount has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay-93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522-merged.mount has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7)" Oct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time="2025-10-04T12:33:40-04:00" level=debug msg="Shutting down engines" Oct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 systemd[1]: Stopping libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope. -- Subject: Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has begun shutting down. Oct 04 12:33:40 managed-node2 systemd[1]: libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has successfully entered the 'dead' state. Oct 04 12:33:40 managed-node2 systemd[1]: Stopped libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope. 
-- Subject: Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished shutting down. Oct 04 12:33:40 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice. -- Subject: Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished shutting down. Oct 04 12:33:40 managed-node2 systemd[1]: machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice: Consumed 193ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice completed and consumed the indicated resources. Oct 04 12:33:40 managed-node2 podman[29565]: Pods stopped: Oct 04 12:33:40 managed-node2 podman[29565]: f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a Oct 04 12:33:40 managed-node2 podman[29565]: Pods removed: Oct 04 12:33:40 managed-node2 podman[29565]: f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a Oct 04 12:33:40 managed-node2 podman[29565]: Secrets removed: Oct 04 12:33:40 managed-node2 podman[29565]: Volumes removed: Oct 04 12:33:40 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice. -- Subject: Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:33:40 managed-node2 systemd[1]: Started libcontainer container 2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1. -- Subject: Unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope has finished starting up. -- -- The start-up result is done. 
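The "Pods stopped / Pods removed / Secrets removed / Volumes removed" summary is printed as the kube-play service replaces its previous pod on (re)start. The instance name of the podman-kube@ template is the systemd-escaped path of the kube YAML file, so the httpd2 instance seen above can be driven by hand like this (a sketch using the paths from this log):

    unit="podman-kube@$(systemd-escape /etc/containers/ansible-kubernetes.d/httpd2.yml).service"
    systemctl start "$unit"
    systemctl status "$unit" --no-pager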
Oct 04 12:33:41 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth44fc3814: link is not ready Oct 04 12:33:41 managed-node2 NetworkManager[660]: [1759595621.0690] manager: (veth44fc3814): new Veth device (/org/freedesktop/NetworkManager/Devices/5) Oct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered blocking state Oct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state Oct 04 12:33:41 managed-node2 kernel: device veth44fc3814 entered promiscuous mode Oct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered blocking state Oct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered forwarding state Oct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state Oct 04 12:33:41 managed-node2 systemd-udevd[29722]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Oct 04 12:33:41 managed-node2 systemd-udevd[29722]: Could not generate persistent MAC address for veth44fc3814: No such file or directory Oct 04 12:33:41 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth44fc3814: link becomes ready Oct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered blocking state Oct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered forwarding state Oct 04 12:33:41 managed-node2 NetworkManager[660]: [1759595621.0827] device (veth44fc3814): carrier: link connected Oct 04 12:33:41 managed-node2 NetworkManager[660]: [1759595621.0829] device (cni-podman1): carrier: link connected Oct 04 12:33:41 managed-node2 dnsmasq[29793]: listening on cni-podman1(#3): 10.89.0.1 Oct 04 12:33:41 managed-node2 dnsmasq[29793]: listening on cni-podman1(#3): fe80::90f8:b0ff:fe67:7f78%cni-podman1 Oct 04 12:33:41 managed-node2 dnsmasq[29797]: started, version 2.79 cachesize 150 Oct 04 12:33:41 managed-node2 dnsmasq[29797]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify Oct 04 12:33:41 managed-node2 dnsmasq[29797]: using local addresses only for domain dns.podman Oct 04 12:33:41 managed-node2 dnsmasq[29797]: reading /etc/resolv.conf Oct 04 12:33:41 managed-node2 dnsmasq[29797]: using local addresses only for domain dns.podman Oct 04 12:33:41 managed-node2 dnsmasq[29797]: using nameserver 10.29.169.13#53 Oct 04 12:33:41 managed-node2 dnsmasq[29797]: using nameserver 10.29.170.12#53 Oct 04 12:33:41 managed-node2 dnsmasq[29797]: using nameserver 10.2.32.1#53 Oct 04 12:33:41 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Oct 04 12:33:41 managed-node2 systemd[1]: Started libcontainer container d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782. -- Subject: Unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:41 managed-node2 systemd[1]: Started libcontainer container c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28. 
-- Subject: Unit libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:41 managed-node2 podman[29565]: Pod: Oct 04 12:33:41 managed-node2 podman[29565]: 981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a Oct 04 12:33:41 managed-node2 podman[29565]: Container: Oct 04 12:33:41 managed-node2 podman[29565]: c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28 Oct 04 12:33:41 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished starting up. -- -- The start-up result is done. Oct 04 12:33:42 managed-node2 platform-python[29963]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:33:43 managed-node2 platform-python[30096]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:44 managed-node2 platform-python[30220]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:45 managed-node2 platform-python[30343]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:46 managed-node2 platform-python[30638]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:33:47 managed-node2 platform-python[30761]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:47 managed-node2 platform-python[30884]: ansible-stat Invoked with 
path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 04 12:33:47 managed-node2 platform-python[30983]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd3.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595627.3886073-16945-54933471056529/source _original_basename=tmpukku_qg2 follow=False checksum=e89a97ee50e2e2344cd04b5ef33140ac4f197bf8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Oct 04 12:33:48 managed-node2 systemd[1]: NetworkManager-dispatcher.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Oct 04 12:33:48 managed-node2 platform-python[31108]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:33:48 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice. -- Subject: Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:33:48 managed-node2 NetworkManager[660]: [1759595628.5733] manager: (vethca854251): new Veth device (/org/freedesktop/NetworkManager/Devices/6) Oct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethca854251: link is not ready Oct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered blocking state Oct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered disabled state Oct 04 12:33:48 managed-node2 kernel: device vethca854251 entered promiscuous mode Oct 04 12:33:48 managed-node2 systemd-udevd[31155]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
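The containers.podman.podman_play task above is a thin wrapper over the CLI; with state=started it builds a command of the form below (compare the PODMAN-PLAY-KUBE debug lines later in this log), which can also be replayed manually to verify the pod:

    podman play kube --start=true /etc/containers/ansible-kubernetes.d/httpd3.yml
    podman pod ps --filter name=httpd3                 # pod should be listed as Running
    podman pod inspect httpd3 --format '{{.State}}'    # same check the test performs below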
Oct 04 12:33:48 managed-node2 systemd-udevd[31155]: Could not generate persistent MAC address for vethca854251: No such file or directory Oct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Oct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethca854251: link becomes ready Oct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered blocking state Oct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered forwarding state Oct 04 12:33:48 managed-node2 NetworkManager[660]: [1759595628.6066] device (vethca854251): carrier: link connected Oct 04 12:33:48 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses Oct 04 12:33:48 managed-node2 systemd[1]: Started libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope. -- Subject: Unit libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:48 managed-node2 systemd[1]: Started libcontainer container 59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca. -- Subject: Unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:48 managed-node2 systemd[1]: Started libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope. -- Subject: Unit libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:48 managed-node2 systemd[1]: Started libcontainer container 0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac. -- Subject: Unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:49 managed-node2 platform-python[31388]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None Oct 04 12:33:49 managed-node2 systemd[1]: Reloading. Oct 04 12:33:50 managed-node2 platform-python[31549]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None Oct 04 12:33:50 managed-node2 systemd[1]: Reloading. 
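The dnsmasq "read .../addnhosts - N addresses" lines are the CNI dnsname plugin at work: each network gets its own dnsmasq instance serving the dns.podman domain from an addnhosts file that is rewritten as pods join and leave. This can be inspected from the host (a sketch; dig assumes bind-utils is installed, and the queried name is assumed to follow the pod name, e.g. httpd2):

    cat /run/containers/cni/dnsname/podman-default-kube-network/addnhosts
    dig +short @10.89.0.1 httpd2.dns.podman   # 10.89.0.1 is the bridge address dnsmasq listens on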
Oct 04 12:33:50 managed-node2 platform-python[31704]: ansible-systemd Invoked with name= scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None Oct 04 12:33:50 managed-node2 systemd[1]: Starting A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun starting up. Oct 04 12:33:50 managed-node2 systemd[1]: libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has successfully entered the 'dead' state. Oct 04 12:33:50 managed-node2 systemd[1]: libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope: Consumed 31ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope completed and consumed the indicated resources. Oct 04 12:33:50 managed-node2 systemd[1]: libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has successfully entered the 'dead' state. Oct 04 12:33:50 managed-node2 systemd[1]: libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope: Consumed 31ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope completed and consumed the indicated resources. Oct 04 12:33:50 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Oct 04 12:33:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay-96a34ad4fa258979f69c8abe553376ab173aebc4813555f0aa72e1d24059a836-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-96a34ad4fa258979f69c8abe553376ab173aebc4813555f0aa72e1d24059a836-merged.mount has successfully entered the 'dead' state. Oct 04 12:33:50 managed-node2 systemd[1]: libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has successfully entered the 'dead' state. Oct 04 12:33:50 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered disabled state Oct 04 12:33:50 managed-node2 kernel: device vethca854251 left promiscuous mode Oct 04 12:33:50 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered disabled state Oct 04 12:33:50 managed-node2 systemd[1]: run-netns-netns\x2d04fac8f5\x2d669a\x2d2b56\x2d8dc1\x2d2c27fe482b75.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2d04fac8f5\x2d669a\x2d2b56\x2d8dc1\x2d2c27fe482b75.mount has successfully entered the 'dead' state. Oct 04 12:33:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca-userdata-shm.mount has successfully entered the 'dead' state. Oct 04 12:33:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay-bf4340b80dd987c0b14d9ab53281fd43797b6665f7cf0be1b6e809f99681d28d-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-bf4340b80dd987c0b14d9ab53281fd43797b6665f7cf0be1b6e809f99681d28d-merged.mount has successfully entered the 'dead' state. Oct 04 12:33:51 managed-node2 systemd[1]: libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has successfully entered the 'dead' state. Oct 04 12:33:51 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice. -- Subject: Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished shutting down. Oct 04 12:33:51 managed-node2 systemd[1]: machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice: Consumed 194ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice completed and consumed the indicated resources. Oct 04 12:33:51 managed-node2 podman[31711]: Pods stopped: Oct 04 12:33:51 managed-node2 podman[31711]: 668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3 Oct 04 12:33:51 managed-node2 podman[31711]: Pods removed: Oct 04 12:33:51 managed-node2 podman[31711]: 668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3 Oct 04 12:33:51 managed-node2 podman[31711]: Secrets removed: Oct 04 12:33:51 managed-node2 podman[31711]: Volumes removed: Oct 04 12:33:51 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice. -- Subject: Unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:33:51 managed-node2 systemd[1]: Started libcontainer container 33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12. 
-- Subject: Unit libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:51 managed-node2 NetworkManager[660]: [1759595631.3224] manager: (vethe1bf25d0): new Veth device (/org/freedesktop/NetworkManager/Devices/7) Oct 04 12:33:51 managed-node2 systemd-udevd[31876]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Oct 04 12:33:51 managed-node2 systemd-udevd[31876]: Could not generate persistent MAC address for vethe1bf25d0: No such file or directory Oct 04 12:33:51 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethe1bf25d0: link is not ready Oct 04 12:33:51 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered blocking state Oct 04 12:33:51 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered disabled state Oct 04 12:33:51 managed-node2 kernel: device vethe1bf25d0 entered promiscuous mode Oct 04 12:33:51 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered blocking state Oct 04 12:33:51 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered forwarding state Oct 04 12:33:51 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe1bf25d0: link becomes ready Oct 04 12:33:51 managed-node2 NetworkManager[660]: [1759595631.3521] device (vethe1bf25d0): carrier: link connected Oct 04 12:33:51 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses Oct 04 12:33:51 managed-node2 systemd[1]: Started libcontainer container 8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46. -- Subject: Unit libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:51 managed-node2 systemd[1]: Started libcontainer container d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a. -- Subject: Unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:33:51 managed-node2 podman[31711]: Pod: Oct 04 12:33:51 managed-node2 podman[31711]: 6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c Oct 04 12:33:51 managed-node2 podman[31711]: Container: Oct 04 12:33:51 managed-node2 podman[31711]: d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a Oct 04 12:33:51 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished starting up. -- -- The start-up result is done. 
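Once a template instance reports "finished start-up", its published ports answer on the host; the ansible-uri probes just below do exactly this for ports 15001 and 15002. The equivalent manual check (port numbers taken from those tasks):

    curl -fsS http://localhost:15001/index.txt
    curl -fsS http://localhost:15002/index.txt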
Oct 04 12:33:52 managed-node2 sudo[32110]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwrjtqrmhbkjdxpmvsixtkgxksntzspm ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595632.0315228-17132-238240516154633/AnsiballZ_command.py' Oct 04 12:33:52 managed-node2 sudo[32110]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:33:52 managed-node2 platform-python[32113]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:52 managed-node2 systemd[25493]: Started podman-32122.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:33:52 managed-node2 sudo[32110]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:33:52 managed-node2 platform-python[32260]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:53 managed-node2 platform-python[32391]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:53 managed-node2 sudo[32521]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpttuiaoavnpntacugmnwvpgddxwnpay ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595633.3541243-17183-26845545367359/AnsiballZ_command.py' Oct 04 12:33:53 managed-node2 sudo[32521]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:33:53 managed-node2 platform-python[32524]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:53 managed-node2 sudo[32521]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:33:53 managed-node2 platform-python[32650]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:54 managed-node2 platform-python[32776]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:54 managed-node2 platform-python[32902]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None 
content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:55 managed-node2 platform-python[33027]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:33:55 managed-node2 rsyslogd[1019]: imjournal: journal files changed, reloading... [v8.2102.0-15.el8 try https://www.rsyslog.com/e/0 ] Oct 04 12:33:55 managed-node2 platform-python[33152]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_herecvo4_podman/httpd1-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:56 managed-node2 platform-python[33276]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_herecvo4_podman/httpd2-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:56 managed-node2 platform-python[33400]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_herecvo4_podman/httpd3-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:33:59 managed-node2 platform-python[33649]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:00 managed-node2 platform-python[33778]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:03 managed-node2 platform-python[33903]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:34:06 managed-node2 platform-python[34026]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Oct 04 12:34:06 managed-node2 platform-python[34153]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Oct 04 12:34:07 managed-node2 platform-python[34280]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] 
source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:34:09 managed-node2 platform-python[34403]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:34:11 managed-node2 platform-python[34526]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:34:14 managed-node2 platform-python[34649]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:34:16 managed-node2 platform-python[34772]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 04 12:34:18 managed-node2 platform-python[34933]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Oct 04 12:34:19 managed-node2 platform-python[35056]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Oct 04 12:34:23 managed-node2 platform-python[35179]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Oct 04 12:34:24 managed-node2 platform-python[35303]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:24 managed-node2 platform-python[35428]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:24 managed-node2 platform-python[35552]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:25 managed-node2 platform-python[35676]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None 
chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:26 managed-node2 platform-python[35800]: ansible-command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Oct 04 12:34:27 managed-node2 platform-python[35923]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:27 managed-node2 platform-python[36046]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:27 managed-node2 sudo[36169]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajbwxxttcmtyfwkqugxtiqxlyfylyxbp ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595667.7794547-18775-44408969327564/AnsiballZ_podman_image.py' Oct 04 12:34:27 managed-node2 sudo[36169]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36174.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36182.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36189.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36199.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36207.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36215.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
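The getsubids and loginctl steps above cover the two prerequisites for running these pods rootless: subordinate UID/GID ranges for the user namespace, and lingering so the per-user manager (systemd[25493] here) outlives the login session. The same checks by hand:

    getsubids podman_basic_user        # subordinate UID range from /etc/subuid
    getsubids -g podman_basic_user     # subordinate GID range from /etc/subgid
    loginctl enable-linger podman_basic_user
    ls /var/lib/systemd/linger/        # the creates= guard used by the task above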
Oct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36223.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:34:28 managed-node2 sudo[36169]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:34:29 managed-node2 platform-python[36352]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:29 managed-node2 platform-python[36477]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:30 managed-node2 platform-python[36600]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 04 12:34:30 managed-node2 platform-python[36664]: ansible-file Invoked with owner=podman_basic_user group=3001 mode=0644 dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _original_basename=tmp4tbrh702 recurse=False state=file path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:30 managed-node2 sudo[36787]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-recvjrutxodvbgmoimxrlbeojjenzstr ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595670.5228426-19044-7471957653983/AnsiballZ_podman_play.py' Oct 04 12:34:30 managed-node2 sudo[36787]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:34:30 managed-node2 systemd[25493]: Started podman-36798.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
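For the rootless run that follows, podman switches to per-user storage; the debug output below confirms a graph root under the user's home and a run root under /run/user/3001. The effective paths can be queried the same way the test shells in, via sudo to the user (a sketch):

    sudo -u podman_basic_user XDG_RUNTIME_DIR=/run/user/3001 \
        podman info --format '{{.Store.GraphRoot}} {{.Store.RunRoot}}'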
Oct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Oct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Oct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-10-04T12:34:30-04:00" level=info msg="/bin/podman filtering at log level debug" time="2025-10-04T12:34:30-04:00" level=debug msg="Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)" time="2025-10-04T12:34:30-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-10-04T12:34:30-04:00" level=info msg="Using sqlite as database backend" time="2025-10-04T12:34:30-04:00" level=debug msg="Using graph driver overlay" time="2025-10-04T12:34:30-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" time="2025-10-04T12:34:30-04:00" level=debug msg="Using run root /run/user/3001/containers" time="2025-10-04T12:34:30-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" time="2025-10-04T12:34:30-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" time="2025-10-04T12:34:30-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" time="2025-10-04T12:34:30-04:00" level=debug msg="Using transient store: false" time="2025-10-04T12:34:30-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-10-04T12:34:30-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:34:30-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:34:30-04:00" level=debug msg="Cached value indicated that metacopy is not being used" time="2025-10-04T12:34:30-04:00" level=debug msg="Cached value indicated that native-diff is usable" time="2025-10-04T12:34:30-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" time="2025-10-04T12:34:30-04:00" level=debug msg="Initializing event backend file" time="2025-10-04T12:34:30-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-10-04T12:34:30-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-10-04T12:34:30-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-10-04T12:34:30-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2025-10-04T12:34:30-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-10-04T12:34:30-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2025-10-04T12:34:30-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for 
OCI runtime runsc: invalid argument" time="2025-10-04T12:34:30-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-10-04T12:34:30-04:00" level=info msg="Setting parallel job count to 7" time="2025-10-04T12:34:30-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:33:27.193743577 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-10-04T12:34:30-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-10-04T12:34:30-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:34:30-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:34:30-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-10-04T12:34:30-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:34:30-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:34:30-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05)" time="2025-10-04T12:34:30-04:00" level=debug msg="exporting opaque data as blob \"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"" time="2025-10-04T12:34:30-04:00" level=debug msg="Pod using bridge network mode" time="2025-10-04T12:34:30-04:00" level=debug msg="Got pod cgroup as /libpod_parent/34492a3900bc4a9b7b06bf0f56b147105736e26abab87e6881cbea1b0e369c1d" Error: adding pod to state: name "httpd1" is in use: pod already exists time="2025-10-04T12:34:30-04:00" level=debug msg="Shutting down engines" Oct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125 Oct 04 12:34:30 managed-node2 sudo[36787]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:34:31 managed-node2 platform-python[36952]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:34:32 managed-node2 platform-python[37076]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:33 managed-node2 platform-python[37201]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:34 managed-node2 platform-python[37325]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None 
selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:34 managed-node2 platform-python[37448]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:36 managed-node2 platform-python[37743]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:36 managed-node2 platform-python[37868]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:37 managed-node2 platform-python[37991]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 04 12:34:37 managed-node2 platform-python[38055]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd2.yml _original_basename=tmpeaiobce5 recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd2.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:34:37 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice. -- Subject: Unit machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice has finished starting up. -- -- The start-up result is done. 
Oct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml Oct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Oct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-10-04T12:34:37-04:00" level=info msg="/usr/bin/podman filtering at log level debug" time="2025-10-04T12:34:37-04:00" level=debug msg="Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2025-10-04T12:34:37-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-10-04T12:34:37-04:00" level=info msg="Using sqlite as database backend" time="2025-10-04T12:34:37-04:00" level=debug msg="Using graph driver overlay" time="2025-10-04T12:34:37-04:00" level=debug msg="Using graph root /var/lib/containers/storage" time="2025-10-04T12:34:37-04:00" level=debug msg="Using run root /run/containers/storage" time="2025-10-04T12:34:37-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" time="2025-10-04T12:34:37-04:00" level=debug msg="Using tmp dir /run/libpod" time="2025-10-04T12:34:37-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" time="2025-10-04T12:34:37-04:00" level=debug msg="Using transient store: false" time="2025-10-04T12:34:37-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-10-04T12:34:37-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:34:37-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-10-04T12:34:37-04:00" level=debug msg="Cached value indicated that metacopy is being used" time="2025-10-04T12:34:37-04:00" level=debug msg="Cached value indicated that native-diff is not being used" time="2025-10-04T12:34:37-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" time="2025-10-04T12:34:37-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" time="2025-10-04T12:34:37-04:00" level=debug msg="Initializing event backend file" time="2025-10-04T12:34:37-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-10-04T12:34:37-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2025-10-04T12:34:37-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-10-04T12:34:37-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2025-10-04T12:34:37-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2025-10-04T12:34:37-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-10-04T12:34:37-04:00" level=debug msg="Configured OCI runtime crun-wasm 
initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-10-04T12:34:37-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-10-04T12:34:37-04:00" level=info msg="Setting parallel job count to 7" time="2025-10-04T12:34:37-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:31:14.473584587 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-10-04T12:34:37-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-10-04T12:34:37-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:34:37-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-10-04T12:34:37-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-10-04T12:34:37-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:34:37-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-10-04T12:34:37-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751)" time="2025-10-04T12:34:37-04:00" level=debug msg="exporting opaque data as blob \"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"" time="2025-10-04T12:34:37-04:00" level=debug msg="Pod using bridge network mode" time="2025-10-04T12:34:37-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice for parent machine.slice and name libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3" time="2025-10-04T12:34:37-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice" time="2025-10-04T12:34:37-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice" Error: adding pod to state: name "httpd2" is in use: pod already exists time="2025-10-04T12:34:37-04:00" level=debug msg="Shutting down engines" Oct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125 Oct 04 12:34:39 managed-node2 platform-python[38339]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:40 managed-node2 platform-python[38464]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:41 managed-node2 platform-python[38588]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3-create 
state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:41 managed-node2 platform-python[38711]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:43 managed-node2 platform-python[39006]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:43 managed-node2 platform-python[39131]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:43 managed-node2 platform-python[39254]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 04 12:34:44 managed-node2 platform-python[39318]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd3.yml _original_basename=tmps2by7p7f recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd3.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:44 managed-node2 platform-python[39441]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:34:44 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_989871f381fec1f937d3af216da739576d2f9779a5e90cbfaa850502488d38c5.slice. 
-- Subject: Unit machine-libpod_pod_989871f381fec1f937d3af216da739576d2f9779a5e90cbfaa850502488d38c5.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_989871f381fec1f937d3af216da739576d2f9779a5e90cbfaa850502488d38c5.slice has finished starting up. -- -- The start-up result is done. Oct 04 12:34:45 managed-node2 sudo[39603]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjcjbadzeevkrtchrfausielavpgqkug ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595685.2238288-19784-175751449856146/AnsiballZ_command.py' Oct 04 12:34:45 managed-node2 sudo[39603]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:34:45 managed-node2 platform-python[39606]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:45 managed-node2 systemd[25493]: Started podman-39616.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:34:45 managed-node2 sudo[39603]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:34:45 managed-node2 platform-python[39746]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:46 managed-node2 platform-python[39877]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:46 managed-node2 sudo[40008]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jubjcpvqkodstxhlsbjwhddazysxbggp ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595686.427822-19841-159760384353159/AnsiballZ_command.py' Oct 04 12:34:46 managed-node2 sudo[40008]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:34:46 managed-node2 platform-python[40011]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:46 managed-node2 sudo[40008]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:34:47 managed-node2 platform-python[40137]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:47 managed-node2 platform-python[40263]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:47 managed-node2 platform-python[40389]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET 
follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:48 managed-node2 platform-python[40513]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:48 managed-node2 platform-python[40637]: ansible-uri Invoked with url=http://localhost:15003/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:34:51 managed-node2 platform-python[40886]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:52 managed-node2 platform-python[41015]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:55 managed-node2 platform-python[41140]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Oct 04 12:34:56 managed-node2 platform-python[41264]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:56 managed-node2 platform-python[41389]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:57 managed-node2 platform-python[41513]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:58 managed-node2 platform-python[41637]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:34:58 managed-node2 
platform-python[41761]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:34:58 managed-node2 sudo[41886]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzgxgxnqwlywntbcwmbxfvzqsvbvyldz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595698.7728405-20488-3389549821227/AnsiballZ_systemd.py' Oct 04 12:34:58 managed-node2 sudo[41886]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:34:59 managed-node2 platform-python[41889]: ansible-systemd Invoked with name= scope=user state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Oct 04 12:34:59 managed-node2 systemd[25493]: Reloading. Oct 04 12:34:59 managed-node2 systemd[25493]: Stopping A template for running K8s workloads via podman-kube-play... -- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Oct 04 12:34:59 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered disabled state Oct 04 12:34:59 managed-node2 kernel: device veth938ef76c left promiscuous mode Oct 04 12:34:59 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered disabled state Oct 04 12:34:59 managed-node2 podman[42042]: time="2025-10-04T12:34:59-04:00" level=error msg="container not running" Oct 04 12:34:59 managed-node2 podman[41905]: Pods stopped: Oct 04 12:34:59 managed-node2 podman[41905]: e2136086e2f9ee2160cfb4def2ccdfca084fdbe4f7fe9cb8a2664a8c62dbb7fb Oct 04 12:34:59 managed-node2 podman[41905]: Pods removed: Oct 04 12:34:59 managed-node2 podman[41905]: e2136086e2f9ee2160cfb4def2ccdfca084fdbe4f7fe9cb8a2664a8c62dbb7fb Oct 04 12:34:59 managed-node2 podman[41905]: Secrets removed: Oct 04 12:34:59 managed-node2 podman[41905]: Volumes removed: Oct 04 12:34:59 managed-node2 systemd[25493]: Stopped A template for running K8s workloads via podman-kube-play. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. 
Oct 04 12:34:59 managed-node2 sudo[41886]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:35:00 managed-node2 platform-python[42189]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:00 managed-node2 sudo[42314]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrcjhxluknhgauyehthbegbfnykwuipf ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595700.2224684-20562-98442846681196/AnsiballZ_podman_play.py' Oct 04 12:35:00 managed-node2 sudo[42314]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Oct 04 12:35:00 managed-node2 systemd[25493]: Started podman-42325.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
Oct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman kube play --down /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Oct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped: Pods removed: Secrets removed: Volumes removed: Oct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: Oct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Oct 04 12:35:00 managed-node2 sudo[42314]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:35:00 managed-node2 platform-python[42454]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:02 managed-node2 platform-python[42577]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:35:02 managed-node2 platform-python[42701]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:04 managed-node2 platform-python[42826]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:04 managed-node2 platform-python[42950]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Oct 04 12:35:04 managed-node2 systemd[1]: Reloading. Oct 04 12:35:04 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun shutting down. Oct 04 12:35:04 managed-node2 systemd[1]: libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope has successfully entered the 'dead' state. Oct 04 12:35:04 managed-node2 systemd[1]: libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope: Consumed 34ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope completed and consumed the indicated resources. Oct 04 12:35:04 managed-node2 systemd[1]: libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope has successfully entered the 'dead' state. Oct 04 12:35:04 managed-node2 systemd[1]: libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope: Consumed 33ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope completed and consumed the indicated resources. Oct 04 12:35:04 managed-node2 systemd[1]: var-lib-containers-storage-overlay-e31deca989ff1b9cab2066515ef70e9fb506731c52bd3eea5fcc524723f3fd95-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-e31deca989ff1b9cab2066515ef70e9fb506731c52bd3eea5fcc524723f3fd95-merged.mount has successfully entered the 'dead' state. Oct 04 12:35:05 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Oct 04 12:35:05 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state Oct 04 12:35:05 managed-node2 kernel: device veth44fc3814 left promiscuous mode Oct 04 12:35:05 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state Oct 04 12:35:05 managed-node2 systemd[1]: run-netns-netns\x2d1f7b53eb\x2d816f\x2d29e7\x2dfe7f\x2d6eb0cf8f8502.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2d1f7b53eb\x2d816f\x2d29e7\x2dfe7f\x2d6eb0cf8f8502.mount has successfully entered the 'dead' state. Oct 04 12:35:05 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782-userdata-shm.mount has successfully entered the 'dead' state. Oct 04 12:35:05 managed-node2 systemd[1]: var-lib-containers-storage-overlay-db61c157c6f5df8b7c955b07912536ab089af634fcd5e6ba324c673952746b22-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-db61c157c6f5df8b7c955b07912536ab089af634fcd5e6ba324c673952746b22-merged.mount has successfully entered the 'dead' state. Oct 04 12:35:05 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice. -- Subject: Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished shutting down. 
Oct 04 12:35:05 managed-node2 systemd[1]: machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice: Consumed 67ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice completed and consumed the indicated resources. Oct 04 12:35:05 managed-node2 podman[42986]: Pods stopped: Oct 04 12:35:05 managed-node2 podman[42986]: 981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a Oct 04 12:35:05 managed-node2 podman[42986]: Pods removed: Oct 04 12:35:05 managed-node2 podman[42986]: 981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a Oct 04 12:35:05 managed-node2 podman[42986]: Secrets removed: Oct 04 12:35:05 managed-node2 podman[42986]: Volumes removed: Oct 04 12:35:05 managed-node2 systemd[1]: libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope has successfully entered the 'dead' state. Oct 04 12:35:05 managed-node2 systemd[1]: libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope: Consumed 32ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope completed and consumed the indicated resources. Oct 04 12:35:05 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1-userdata-shm.mount has successfully entered the 'dead' state. Oct 04 12:35:05 managed-node2 dnsmasq[29797]: exiting on receipt of SIGTERM Oct 04 12:35:05 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has successfully entered the 'dead' state. Oct 04 12:35:05 managed-node2 systemd[1]: Stopped A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished shutting down. Oct 04 12:35:05 managed-node2 platform-python[43261]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:05 managed-node2 systemd[1]: var-lib-containers-storage-overlay-4f2be2d0d065dff6d155c9d9c0fcaafe14b4a59e471bfad994f4271172df41f6-merged.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-4f2be2d0d065dff6d155c9d9c0fcaafe14b4a59e471bfad994f4271172df41f6-merged.mount has successfully entered the 'dead' state. Oct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd2.yml Oct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman kube play --down /etc/containers/ansible-kubernetes.d/httpd2.yml Oct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped: Pods removed: Secrets removed: Volumes removed: Oct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: Oct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Oct 04 12:35:06 managed-node2 platform-python[43522]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:08 managed-node2 platform-python[43645]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:09 managed-node2 platform-python[43770]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:10 managed-node2 platform-python[43894]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Oct 04 12:35:10 managed-node2 systemd[1]: Reloading. Oct 04 12:35:10 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun shutting down. Oct 04 12:35:10 managed-node2 systemd[1]: libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 systemd[1]: libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope: Consumed 33ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope completed and consumed the indicated resources. Oct 04 12:35:10 managed-node2 systemd[1]: libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 systemd[1]: libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope: Consumed 32ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope completed and consumed the indicated resources. Oct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay-41fd0b8b2abd57633d991528a5058698098ba238146a9b3301e23d7fc73f3208-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-41fd0b8b2abd57633d991528a5058698098ba238146a9b3301e23d7fc73f3208-merged.mount has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered disabled state Oct 04 12:35:10 managed-node2 kernel: device vethe1bf25d0 left promiscuous mode Oct 04 12:35:10 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered disabled state Oct 04 12:35:10 managed-node2 systemd[1]: run-netns-netns\x2d027c972b\x2d4f60\x2dd6f9\x2d5e22\x2d75c001071f96.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2d027c972b\x2d4f60\x2dd6f9\x2d5e22\x2d75c001071f96.mount has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46-userdata-shm.mount has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay-4d023b27109b3ccd34a210cfed9c7806ca6f6666b5b632942f94ae65bcc6121b-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-4d023b27109b3ccd34a210cfed9c7806ca6f6666b5b632942f94ae65bcc6121b-merged.mount has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice. 
-- Subject: Unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished shutting down. Oct 04 12:35:10 managed-node2 systemd[1]: machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice: Consumed 65ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice completed and consumed the indicated resources. Oct 04 12:35:10 managed-node2 systemd[1]: libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 systemd[1]: libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope: Consumed 34ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope completed and consumed the indicated resources. Oct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12-userdata-shm.mount has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 podman[43930]: Pods stopped: Oct 04 12:35:10 managed-node2 podman[43930]: 6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c Oct 04 12:35:10 managed-node2 podman[43930]: Pods removed: Oct 04 12:35:10 managed-node2 podman[43930]: 6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c Oct 04 12:35:10 managed-node2 podman[43930]: Secrets removed: Oct 04 12:35:10 managed-node2 podman[43930]: Volumes removed: Oct 04 12:35:10 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has successfully entered the 'dead' state. Oct 04 12:35:10 managed-node2 systemd[1]: Stopped A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished shutting down. 
Oct 04 12:35:11 managed-node2 platform-python[44199]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay-92a24157aa21e0a67c0987f9e002760f709781079b00f0e3baf2a1840c17ef8f-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-92a24157aa21e0a67c0987f9e002760f709781079b00f0e3baf2a1840c17ef8f-merged.mount has successfully entered the 'dead' state. Oct 04 12:35:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Oct 04 12:35:11 managed-node2 platform-python[44324]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 04 12:35:11 managed-node2 platform-python[44324]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd3.yml Oct 04 12:35:12 managed-node2 platform-python[44460]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:13 managed-node2 platform-python[44583]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None Oct 04 12:35:13 managed-node2 platform-python[44707]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:14 managed-node2 sudo[44832]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yohamxjcuwwqowlxqaokqdnnkiwnlnpj ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595713.8242128-21262-181373190609036/AnsiballZ_podman_container_info.py' Oct 04 12:35:14 managed-node2 sudo[44832]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:35:14 managed-node2 platform-python[44835]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 04 12:35:14 managed-node2 systemd[25493]: Started podman-44837.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
Oct 04 12:35:14 managed-node2 sudo[44832]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:35:14 managed-node2 sudo[44966]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkavwbxyujjrxdsuzqtbvginlcolirvx ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595714.3794177-21283-159383776115316/AnsiballZ_command.py' Oct 04 12:35:14 managed-node2 sudo[44966]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:35:14 managed-node2 platform-python[44969]: ansible-command Invoked with _raw_params=podman network ls -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:14 managed-node2 systemd[25493]: Started podman-44971.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:35:14 managed-node2 sudo[44966]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:35:14 managed-node2 sudo[45126]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbzvbfpfbqkpdwzlmyrlntccwniydmtz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595714.849909-21318-262235972613194/AnsiballZ_command.py' Oct 04 12:35:14 managed-node2 sudo[45126]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:35:15 managed-node2 platform-python[45129]: ansible-command Invoked with _raw_params=podman secret ls -n -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:15 managed-node2 systemd[25493]: Started podman-45131.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:35:15 managed-node2 sudo[45126]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:35:15 managed-node2 platform-python[45261]: ansible-command Invoked with removes=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl disable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None stdin=None Oct 04 12:35:15 managed-node2 systemd[1]: Stopping User Manager for UID 3001... -- Subject: Unit user@3001.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@3001.service has begun shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopping podman-pause-f03acc05.scope. -- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Default. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopping D-Bus User Message Bus... 
-- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Removed slice podman\x2dkube.slice. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped D-Bus User Message Bus. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Basic System. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Timers. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped Mark boot as successful after the user session has run 2 minutes. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Paths. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Sockets. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Closed D-Bus User Message Bus Socket. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Stopped podman-pause-f03acc05.scope. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Removed slice user.slice. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[25493]: Reached target Shutdown. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:35:15 managed-node2 systemd[25493]: Started Exit the Session. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:35:15 managed-node2 systemd[25493]: Reached target Exit the Session. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Oct 04 12:35:15 managed-node2 systemd[1]: user@3001.service: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit user@3001.service has successfully entered the 'dead' state. Oct 04 12:35:15 managed-node2 systemd[1]: Stopped User Manager for UID 3001. -- Subject: Unit user@3001.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@3001.service has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[1]: Stopping User runtime directory /run/user/3001... -- Subject: Unit user-runtime-dir@3001.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@3001.service has begun shutting down. Oct 04 12:35:15 managed-node2 systemd[1]: run-user-3001.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-user-3001.mount has successfully entered the 'dead' state. Oct 04 12:35:15 managed-node2 systemd[1]: user-runtime-dir@3001.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit user-runtime-dir@3001.service has successfully entered the 'dead' state. Oct 04 12:35:15 managed-node2 systemd[1]: Stopped User runtime directory /run/user/3001. -- Subject: Unit user-runtime-dir@3001.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@3001.service has finished shutting down. Oct 04 12:35:15 managed-node2 systemd[1]: Removed slice User Slice of UID 3001. -- Subject: Unit user-3001.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-3001.slice has finished shutting down. 
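Annotation: the entries above capture `loginctl disable-linger podman_basic_user` (guarded by removes=/var/lib/systemd/linger/podman_basic_user) and the teardown it triggers: user@3001.service stops, /run/user/3001 is unmounted, and user-3001.slice is removed. A minimal sketch of that cleanup step as an Ansible task, assuming a hypothetical inventory group; the user name and guard path are taken from the logged invocation:

- name: Disable linger for podman_basic_user (sketch; group name assumed)
  hosts: managed_nodes          # hypothetical inventory group
  become: true
  tasks:
    - name: Disable linger so systemd stops the per-user manager
      command: loginctl disable-linger podman_basic_user
      args:
        # Skip the command when the linger flag file is already gone,
        # mirroring the removes= guard visible in the log above.
        removes: /var/lib/systemd/linger/podman_basic_user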
Oct 04 12:35:15 managed-node2 platform-python[45395]: ansible-command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:16 managed-node2 sudo[45519]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtaohsaclvtebxmwbqjbotdzsbchavvn ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595716.4392693-21370-112299601135578/AnsiballZ_command.py' Oct 04 12:35:16 managed-node2 sudo[45519]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:35:16 managed-node2 platform-python[45522]: ansible-command Invoked with _raw_params=podman pod exists httpd1 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:16 managed-node2 sudo[45519]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:35:17 managed-node2 platform-python[45652]: ansible-command Invoked with _raw_params=podman pod exists httpd2 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:17 managed-node2 platform-python[45782]: ansible-command Invoked with _raw_params=podman pod exists httpd3 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:17 managed-node2 sudo[45913]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swmaujacgizsrobtsjzpmjnnqwleohar ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595717.6763232-21443-47795602630278/AnsiballZ_command.py' Oct 04 12:35:17 managed-node2 sudo[45913]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Oct 04 12:35:17 managed-node2 platform-python[45916]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:17 managed-node2 sudo[45913]: pam_unix(sudo:session): session closed for user podman_basic_user Oct 04 12:35:18 managed-node2 platform-python[46042]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:18 managed-node2 platform-python[46168]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:19 managed-node2 platform-python[46294]: ansible-stat Invoked with path=/var/lib/systemd/linger/podman_basic_user follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:21 managed-node2 platform-python[46542]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:22 managed-node2 platform-python[46671]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:35:23 
managed-node2 platform-python[46795]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:26 managed-node2 platform-python[46920]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Oct 04 12:35:26 managed-node2 platform-python[47044]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:27 managed-node2 platform-python[47169]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:27 managed-node2 platform-python[47293]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:28 managed-node2 platform-python[47417]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:29 managed-node2 platform-python[47541]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:29 managed-node2 platform-python[47664]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:30 managed-node2 platform-python[47787]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:31 managed-node2 platform-python[47910]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:35:31 managed-node2 platform-python[48034]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:33 managed-node2 platform-python[48159]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:33 managed-node2 platform-python[48283]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Oct 04 12:35:34 managed-node2 platform-python[48410]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:34 managed-node2 platform-python[48533]: ansible-file Invoked with 
path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:35 managed-node2 platform-python[48656]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:37 managed-node2 platform-python[48781]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:37 managed-node2 platform-python[48905]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Oct 04 12:35:38 managed-node2 platform-python[49032]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:38 managed-node2 platform-python[49155]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:39 managed-node2 platform-python[49278]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None Oct 04 12:35:40 managed-node2 platform-python[49402]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:41 managed-node2 platform-python[49525]: ansible-file Invoked with path=/etc/containers/storage.conf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:41 managed-node2 platform-python[49648]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:35:42 managed-node2 sshd[49669]: Accepted publickey for root from 10.31.11.222 port 49618 ssh2: RSA 
SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Oct 04 12:35:42 managed-node2 systemd[1]: Started Session 9 of user root. -- Subject: Unit session-9.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-9.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:35:42 managed-node2 systemd-logind[598]: New session 9 of user root. -- Subject: A new session 9 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 9 has been created for the user root. -- -- The leading process of the session is 49669. Oct 04 12:35:42 managed-node2 sshd[49669]: pam_unix(sshd:session): session opened for user root by (uid=0) Oct 04 12:35:42 managed-node2 sshd[49672]: Received disconnect from 10.31.11.222 port 49618:11: disconnected by user Oct 04 12:35:42 managed-node2 sshd[49672]: Disconnected from user root 10.31.11.222 port 49618 Oct 04 12:35:42 managed-node2 sshd[49669]: pam_unix(sshd:session): session closed for user root Oct 04 12:35:42 managed-node2 systemd[1]: session-9.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-9.scope has successfully entered the 'dead' state. Oct 04 12:35:42 managed-node2 systemd-logind[598]: Session 9 logged out. Waiting for processes to exit. Oct 04 12:35:42 managed-node2 systemd-logind[598]: Removed session 9. -- Subject: Session 9 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 9 has been terminated. Oct 04 12:35:44 managed-node2 platform-python[49834]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Oct 04 12:35:44 managed-node2 platform-python[49961]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:45 managed-node2 platform-python[50084]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:47 managed-node2 platform-python[50332]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:48 managed-node2 platform-python[50461]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:35:48 managed-node2 platform-python[50585]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:50 managed-node2 sshd[50608]: Accepted publickey for root from 10.31.11.222 port 49628 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Oct 04 12:35:50 managed-node2 systemd[1]: Started Session 10 of user root. -- Subject: Unit session-10.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-10.scope has finished starting up. -- -- The start-up result is done. 
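Annotation: the `podman pod exists httpd1` / `httpd2` / `httpd3` invocations a little earlier verify that the pods were removed. `podman pod exists` prints nothing and reports only through its exit code (0 = present, 1 = absent), so an Ansible check has to tolerate rc 1. A sketch, with only the pod name carried over from the log:

- name: Check whether pod httpd1 still exists (sketch)
  command: podman pod exists httpd1
  register: __pod_check
  changed_when: false
  failed_when: __pod_check.rc not in [0, 1]   # rc 0 = exists, rc 1 = absent

- name: Fail if cleanup left the pod behind
  fail:
    msg: pod httpd1 still exists
  when: __pod_check.rc == 0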
Oct 04 12:35:50 managed-node2 systemd-logind[598]: New session 10 of user root. -- Subject: A new session 10 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 10 has been created for the user root. -- -- The leading process of the session is 50608. Oct 04 12:35:50 managed-node2 sshd[50608]: pam_unix(sshd:session): session opened for user root by (uid=0) Oct 04 12:35:50 managed-node2 sshd[50611]: Received disconnect from 10.31.11.222 port 49628:11: disconnected by user Oct 04 12:35:50 managed-node2 sshd[50611]: Disconnected from user root 10.31.11.222 port 49628 Oct 04 12:35:50 managed-node2 sshd[50608]: pam_unix(sshd:session): session closed for user root Oct 04 12:35:50 managed-node2 systemd[1]: session-10.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-10.scope has successfully entered the 'dead' state. Oct 04 12:35:50 managed-node2 systemd-logind[598]: Session 10 logged out. Waiting for processes to exit. Oct 04 12:35:50 managed-node2 systemd-logind[598]: Removed session 10. -- Subject: Session 10 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 10 has been terminated. Oct 04 12:35:51 managed-node2 platform-python[50773]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Oct 04 12:35:54 managed-node2 platform-python[50925]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:55 managed-node2 platform-python[51048]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:35:57 managed-node2 platform-python[51296]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:35:58 managed-node2 platform-python[51425]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:35:58 managed-node2 platform-python[51549]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:02 managed-node2 sshd[51572]: Accepted publickey for root from 10.31.11.222 port 39572 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Oct 04 12:36:02 managed-node2 systemd[1]: Started Session 11 of user root. -- Subject: Unit session-11.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-11.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:36:02 managed-node2 systemd-logind[598]: New session 11 of user root. -- Subject: A new session 11 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 11 has been created for the user root. -- -- The leading process of the session is 51572. 
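Annotation: the stop-then-delete pairs earlier in this stretch (`ansible-systemd ... state=stopped` followed by `ansible-file ... state=absent` on /etc/containers/ansible-kubernetes.d/httpd2.yml and httpd3.yml) are how the test retires a kube-play deployment. The unit name is blank in the captured invocation, so the one below is purely illustrative:

- name: Stop the pod's systemd unit first (unit name hypothetical)
  systemd:
    name: httpd2-pod.service   # placeholder; the real name is elided in the log
    scope: system
    state: stopped
    enabled: false

- name: Then remove the kube spec the unit was created from
  file:
    path: /etc/containers/ansible-kubernetes.d/httpd2.yml
    state: absent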
Oct 04 12:36:02 managed-node2 sshd[51572]: pam_unix(sshd:session): session opened for user root by (uid=0) Oct 04 12:36:02 managed-node2 sshd[51575]: Received disconnect from 10.31.11.222 port 39572:11: disconnected by user Oct 04 12:36:02 managed-node2 sshd[51575]: Disconnected from user root 10.31.11.222 port 39572 Oct 04 12:36:02 managed-node2 sshd[51572]: pam_unix(sshd:session): session closed for user root Oct 04 12:36:02 managed-node2 systemd[1]: session-11.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-11.scope has successfully entered the 'dead' state. Oct 04 12:36:02 managed-node2 systemd-logind[598]: Session 11 logged out. Waiting for processes to exit. Oct 04 12:36:02 managed-node2 systemd-logind[598]: Removed session 11. -- Subject: Session 11 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 11 has been terminated. Oct 04 12:36:04 managed-node2 platform-python[51737]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Oct 04 12:36:04 managed-node2 platform-python[51889]: ansible-user Invoked with name=lsr_multiple_user1 state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on managed-node2 update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None Oct 04 12:36:04 managed-node2 useradd[51893]: new group: name=lsr_multiple_user1, GID=3002 Oct 04 12:36:04 managed-node2 useradd[51893]: new user: name=lsr_multiple_user1, UID=3002, GID=3002, home=/home/lsr_multiple_user1, shell=/bin/bash Oct 04 12:36:05 managed-node2 platform-python[52021]: ansible-user Invoked with name=lsr_multiple_user2 state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on managed-node2 update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None Oct 04 12:36:05 managed-node2 useradd[52025]: new group: name=lsr_multiple_user2, GID=3003 Oct 04 12:36:05 managed-node2 useradd[52025]: new user: name=lsr_multiple_user2, UID=3003, GID=3003, home=/home/lsr_multiple_user2, shell=/bin/bash Oct 04 12:36:06 managed-node2 platform-python[52153]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:06 managed-node2 platform-python[52276]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:09 managed-node2 platform-python[52524]: ansible-command Invoked with _raw_params=podman --version warn=True 
_uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:09 managed-node2 sshd[52551]: Accepted publickey for root from 10.31.11.222 port 39576 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Oct 04 12:36:09 managed-node2 systemd[1]: Started Session 12 of user root. -- Subject: Unit session-12.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-12.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:36:09 managed-node2 systemd-logind[598]: New session 12 of user root. -- Subject: A new session 12 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 12 has been created for the user root. -- -- The leading process of the session is 52551. Oct 04 12:36:09 managed-node2 sshd[52551]: pam_unix(sshd:session): session opened for user root by (uid=0) Oct 04 12:36:09 managed-node2 sshd[52554]: Received disconnect from 10.31.11.222 port 39576:11: disconnected by user Oct 04 12:36:09 managed-node2 sshd[52554]: Disconnected from user root 10.31.11.222 port 39576 Oct 04 12:36:09 managed-node2 sshd[52551]: pam_unix(sshd:session): session closed for user root Oct 04 12:36:09 managed-node2 systemd[1]: session-12.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-12.scope has successfully entered the 'dead' state. Oct 04 12:36:09 managed-node2 systemd-logind[598]: Session 12 logged out. Waiting for processes to exit. Oct 04 12:36:09 managed-node2 systemd-logind[598]: Removed session 12. -- Subject: Session 12 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 12 has been terminated. 
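Annotation: between the SSH session entries above, the user module provisions lsr_multiple_user1 and lsr_multiple_user2, and useradd confirms the allocated UID/GID pairs (3002 and 3003). A pared-down version of that task; every parameter shown comes from the logged invocation:

- name: Create a test user for the multi-user scenario (sketch)
  user:
    name: lsr_multiple_user1
    state: present
    create_home: true   # useradd logs a new home under /home/lsr_multiple_user1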
Oct 04 12:36:11 managed-node2 platform-python[52716]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Oct 04 12:36:12 managed-node2 platform-python[52868]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:12 managed-node2 platform-python[52991]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:13 managed-node2 platform-python[53115]: ansible-dnf Invoked with name=['python3-pyasn1', 'python3-cryptography', 'python3-dbus'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:36:15 managed-node2 chronyd[603]: Source 74.208.25.46 replaced with 163.123.152.14 (2.centos.pool.ntp.org) Oct 04 12:36:16 managed-node2 platform-python[53244]: ansible-dnf Invoked with name=['certmonger'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Oct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Oct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Oct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Oct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Oct 04 12:36:18 managed-node2 systemd[1]: Reloading. Oct 04 12:36:18 managed-node2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. -- Subject: Unit run-ra349d219a6fb4468acd54152311c9c85.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit run-ra349d219a6fb4468acd54152311c9c85.service has finished starting up. -- -- The start-up result is done. Oct 04 12:36:18 managed-node2 systemd[1]: Starting man-db-cache-update.service... -- Subject: Unit man-db-cache-update.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has begun starting up. Oct 04 12:36:19 managed-node2 systemd[1]: Reloading. Oct 04 12:36:19 managed-node2 systemd[1]: man-db-cache-update.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit man-db-cache-update.service has successfully entered the 'dead' state. Oct 04 12:36:19 managed-node2 systemd[1]: Started man-db-cache-update.service. 
-- Subject: Unit man-db-cache-update.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has finished starting up. -- -- The start-up result is done. Oct 04 12:36:19 managed-node2 systemd[1]: run-ra349d219a6fb4468acd54152311c9c85.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ra349d219a6fb4468acd54152311c9c85.service has successfully entered the 'dead' state. Oct 04 12:36:20 managed-node2 platform-python[53877]: ansible-file Invoked with name=/etc/certmonger//pre-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//pre-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:36:20 managed-node2 platform-python[54000]: ansible-file Invoked with name=/etc/certmonger//post-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//post-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:36:21 managed-node2 platform-python[54123]: ansible-systemd Invoked with name=certmonger state=started enabled=True daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Oct 04 12:36:21 managed-node2 systemd[1]: Reloading. Oct 04 12:36:21 managed-node2 systemd[1]: Starting Certificate monitoring and PKI enrollment... -- Subject: Unit certmonger.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit certmonger.service has begun starting up. Oct 04 12:36:21 managed-node2 systemd[1]: Started Certificate monitoring and PKI enrollment. -- Subject: Unit certmonger.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit certmonger.service has finished starting up. -- -- The start-up result is done. 
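Annotation: these entries install certmonger via dnf and enable/start certmonger.service; the entries that follow show fedora.linux_system_roles.certificate_request asking it for a self-signed certificate named quadlet_demo with dns=['localhost'] under /etc/pki/tls. That flow is normally driven through the certificate role's certificate_requests variable rather than by calling the module directly; a sketch under that assumption, with only the play framing invented:

- name: Issue the quadlet demo's self-signed certificate (sketch)
  hosts: managed_nodes          # hypothetical inventory group
  vars:
    certificate_requests:
      - name: quadlet_demo     # written to /etc/pki/tls/{certs,private}
        dns: [localhost]
        ca: self-sign          # values taken from the logged module invocation
  roles:
    - fedora.linux_system_roles.certificate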
Oct 04 12:36:22 managed-node2 platform-python[54316]: ansible-fedora.linux_system_roles.certificate_request Invoked with name=quadlet_demo dns=['localhost'] directory=/etc/pki/tls wait=True ca=self-sign __header=# # Ansible managed # # system_role:certificate booted=True provider_config_directory=/etc/certmonger provider=certmonger key_usage=['digitalSignature', 'keyEncipherment'] extended_key_usage=['id-kp-serverAuth', 'id-kp-clientAuth'] auto_renew=True ip=None email=None common_name=None country=None state=None locality=None organization=None organizational_unit=None contact_email=None key_size=None owner=None group=None mode=None principal=None run_before=None run_after=None Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 
12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 certmonger[54332]: Certificate in file "/etc/pki/tls/certs/quadlet_demo.crt" issued by CA and saved. Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:22 managed-node2 platform-python[54454]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt Oct 04 12:36:23 managed-node2 platform-python[54577]: ansible-slurp Invoked with path=/etc/pki/tls/private/quadlet_demo.key src=/etc/pki/tls/private/quadlet_demo.key Oct 04 12:36:23 managed-node2 platform-python[54700]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt Oct 04 12:36:24 managed-node2 platform-python[54823]: ansible-command Invoked with _raw_params=getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:24 managed-node2 certmonger[54159]: 2025-10-04 12:36:24 [54159] Wrote to /var/lib/certmonger/requests/20251004163622 Oct 04 12:36:24 managed-node2 platform-python[54947]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:36:24 managed-node2 platform-python[55070]: ansible-file Invoked with path=/etc/pki/tls/private/quadlet_demo.key state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:36:25 managed-node2 platform-python[55193]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER 
backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:36:25 managed-node2 platform-python[55316]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:26 managed-node2 platform-python[55439]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:28 managed-node2 platform-python[55687]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:29 managed-node2 platform-python[55816]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:36:29 managed-node2 platform-python[55940]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:31 managed-node2 platform-python[56065]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:32 managed-node2 platform-python[56188]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:32 managed-node2 platform-python[56311]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:33 managed-node2 platform-python[56435]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:36:36 managed-node2 platform-python[56558]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Oct 04 12:36:36 managed-node2 platform-python[56685]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Oct 04 12:36:37 managed-node2 platform-python[56812]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:36:37 managed-node2 platform-python[56935]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True 
service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:36:40 managed-node2 platform-python[57058]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:40 managed-node2 platform-python[57182]: ansible-command Invoked with _raw_params=podman ps -a warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck3122420482-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-metacopy\x2dcheck3122420482-merged.mount has successfully entered the 'dead' state. Oct 04 12:36:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Oct 04 12:36:41 managed-node2 platform-python[57312]: ansible-command Invoked with _raw_params=podman pod ps --ctr-ids --ctr-names --ctr-status warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:41 managed-node2 platform-python[57442]: ansible-command Invoked with _raw_params=set -euo pipefail; systemctl list-units --all | grep quadlet _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:41 managed-node2 platform-python[57568]: ansible-command Invoked with _raw_params=ls -alrtF /etc/systemd/system warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:44 managed-node2 platform-python[57817]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:45 managed-node2 platform-python[57946]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:47 managed-node2 platform-python[58071]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:36:50 managed-node2 platform-python[58194]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False 
state=None enabled=None force=None user=None scope=None Oct 04 12:36:51 managed-node2 platform-python[58321]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Oct 04 12:36:51 managed-node2 platform-python[58448]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:36:52 managed-node2 platform-python[58571]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:36:54 managed-node2 platform-python[58694]: ansible-command Invoked with _raw_params=exec 1>&2 set -x set -o pipefail systemctl list-units --plain -l --all | grep quadlet || : systemctl list-unit-files --all | grep quadlet || : systemctl list-units --plain --failed -l --all | grep quadlet || : _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:54 managed-node2 platform-python[58824]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None PLAY RECAP ********************************************************************* managed-node2 : ok=90 changed=8 unreachable=0 failed=2 skipped=140 rescued=2 ignored=0 SYSTEM ROLES ERRORS BEGIN v1 [ { "ansible_version": "2.9.27", "end_time": "2025-10-04T16:36:40.030298+00:00Z", "host": "managed-node2", "message": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "start_time": "2025-10-04T16:36:40.014132+00:00Z", "task_name": "Manage each secret", "task_path": "/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42" }, { "ansible_version": "2.9.27", "delta": "0:00:00.027706", "end_time": "2025-10-04 12:36:40.354159", "host": "managed-node2", "message": "No message could be found", "rc": 0, "start_time": "2025-10-04 12:36:40.326453", "stdout": "-- Logs begin at Sat 2025-10-04 12:26:12 EDT, end at Sat 2025-10-04 12:36:40 EDT. 
--\nOct 04 12:31:06 managed-node2 platform-python[14699]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nOct 04 12:31:10 managed-node2 platform-python[14822]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:31:12 managed-node2 platform-python[14947]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:31:13 managed-node2 platform-python[15070]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:31:13 managed-node2 platform-python[15193]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nOct 04 12:31:13 managed-node2 platform-python[15292]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/nopull.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595473.2695348-10044-42164822916015/source _original_basename=tmp0bj2sg47 follow=False checksum=d5dc917e3cae36de03aa971a17ac473f86fdf934 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nOct 04 12:31:14 managed-node2 platform-python[15417]: ansible-containers.podman.podman_play Invoked with state=created kube_file=/etc/containers/ansible-kubernetes.d/nopull.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:31:14 managed-node2 kernel: evm: overlay not supported\nOct 04 12:31:14 managed-node2 systemd[1]: var-lib-containers-storage-overlay-metacopy\\x2dcheck4171139467-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-metacopy\\x2dcheck4171139467-merged.mount has successfully entered the 'dead' state.\nOct 04 12:31:14 managed-node2 systemd[1]: Created slice machine.slice.\n-- Subject: Unit machine.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:31:14 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice.\n-- Subject: Unit machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice has finished start-up\n-- Defined-By: systemd\n-- 
Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:31:15 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:31:19 managed-node2 platform-python[15743]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:31:20 managed-node2 platform-python[15872]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:31:23 managed-node2 platform-python[15997]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:31:26 managed-node2 platform-python[16120]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nOct 04 12:31:26 managed-node2 platform-python[16247]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nOct 04 12:31:27 managed-node2 platform-python[16374]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:31:29 managed-node2 platform-python[16497]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:31:32 managed-node2 platform-python[16620]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 
conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:31:34 managed-node2 platform-python[16743]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:31:37 managed-node2 platform-python[16866]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nOct 04 12:31:38 managed-node2 platform-python[17014]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nOct 04 12:31:39 managed-node2 platform-python[17137]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nOct 04 12:31:43 managed-node2 platform-python[17260]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:31:46 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:31:46 managed-node2 platform-python[17523]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:31:47 managed-node2 platform-python[17646]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:31:47 managed-node2 platform-python[17769]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nOct 04 12:31:47 managed-node2 platform-python[17868]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/bogus.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595507.1829739-11549-146476787522942/source _original_basename=tmpisytpwv2 follow=False checksum=f8266a972ed3be7e204d2a67883fe3a22b8dbf18 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nOct 04 12:31:48 managed-node2 platform-python[17993]: ansible-containers.podman.podman_play Invoked with state=created kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None 
context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:31:48 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice.\n-- Subject: Unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:31:48 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:31:51 managed-node2 platform-python[18280]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:31:52 managed-node2 platform-python[18409]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:31:55 managed-node2 platform-python[18534]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:31:58 managed-node2 platform-python[18657]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nOct 04 12:31:59 managed-node2 platform-python[18784]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nOct 04 12:31:59 managed-node2 platform-python[18911]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:32:01 managed-node2 platform-python[19034]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False 
update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:32:04 managed-node2 platform-python[19157]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:32:06 managed-node2 platform-python[19280]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:32:09 managed-node2 platform-python[19403]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nOct 04 12:32:11 managed-node2 platform-python[19551]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nOct 04 12:32:11 managed-node2 platform-python[19674]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nOct 04 12:32:15 managed-node2 platform-python[19797]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:32:17 managed-node2 platform-python[19922]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:32:17 managed-node2 platform-python[20046]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nOct 04 12:32:18 managed-node2 platform-python[20173]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:32:18 managed-node2 platform-python[20298]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/nopull.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:32:18 managed-node2 platform-python[20298]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/nopull.yml\nOct 04 12:32:18 managed-node2 systemd[1]: Removed slice cgroup 
machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice.\n-- Subject: Unit machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice has finished shutting down.\nOct 04 12:32:18 managed-node2 systemd[1]: machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice: Consumed 0 CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice completed and consumed the indicated resources.\nOct 04 12:32:18 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:32:19 managed-node2 platform-python[20436]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:32:19 managed-node2 platform-python[20559]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:32:22 managed-node2 platform-python[20814]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:32:23 managed-node2 platform-python[20942]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:32:27 managed-node2 platform-python[21067]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:32:29 managed-node2 platform-python[21190]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nOct 04 12:32:30 managed-node2 platform-python[21317]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nOct 04 12:32:31 managed-node2 platform-python[21444]: ansible-fedora.linux_system_roles.firewall_lib Invoked with 
port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:32:33 managed-node2 platform-python[21567]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:32:35 managed-node2 platform-python[21690]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:32:38 managed-node2 platform-python[21813]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:32:40 managed-node2 platform-python[21936]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nOct 04 12:32:42 managed-node2 platform-python[22084]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nOct 04 12:32:43 managed-node2 platform-python[22207]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nOct 04 12:32:47 managed-node2 platform-python[22330]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:32:48 managed-node2 platform-python[22455]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:32:49 managed-node2 platform-python[22579]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nOct 04 12:32:50 managed-node2 platform-python[22706]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:32:50 
managed-node2 platform-python[22831]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:32:50 managed-node2 platform-python[22831]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/bogus.yml\nOct 04 12:32:50 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice.\n-- Subject: Unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice has finished shutting down.\nOct 04 12:32:50 managed-node2 systemd[1]: machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice: Consumed 0 CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice completed and consumed the indicated resources.\nOct 04 12:32:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:32:50 managed-node2 platform-python[22970]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:32:51 managed-node2 platform-python[23093]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:32:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:32:54 managed-node2 platform-python[23349]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:32:56 managed-node2 platform-python[23478]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 
04 12:32:59 managed-node2 platform-python[23603]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:33:00 managed-node2 chronyd[603]: Detected falseticker 74.208.25.46 (2.centos.pool.ntp.org)\nOct 04 12:33:01 managed-node2 platform-python[23726]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nOct 04 12:33:02 managed-node2 platform-python[23853]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nOct 04 12:33:03 managed-node2 platform-python[23980]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:33:05 managed-node2 platform-python[24103]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:33:07 managed-node2 platform-python[24226]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:33:10 managed-node2 platform-python[24349]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:33:12 managed-node2 platform-python[24472]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nOct 04 12:33:14 managed-node2 platform-python[24620]: 
ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nOct 04 12:33:15 managed-node2 platform-python[24743]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nOct 04 12:33:19 managed-node2 platform-python[24866]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nOct 04 12:33:19 managed-node2 platform-python[24990]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:33:20 managed-node2 platform-python[25115]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:20 managed-node2 platform-python[25239]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:21 managed-node2 platform-python[25363]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:22 managed-node2 platform-python[25487]: ansible-command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nOct 04 12:33:22 managed-node2 systemd[1]: Created slice User Slice of UID 3001.\n-- Subject: Unit user-3001.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-3001.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[1]: Starting User runtime directory /run/user/3001...\n-- Subject: Unit user-runtime-dir@3001.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has begun starting up.\nOct 04 12:33:22 managed-node2 systemd[1]: Started User runtime directory /run/user/3001.\n-- Subject: Unit user-runtime-dir@3001.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[1]: Starting User Manager for UID 3001...\n-- Subject: Unit user@3001.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user@3001.service has begun starting up.\nOct 04 12:33:22 managed-node2 systemd[25493]: pam_unix(systemd-user:session): session opened for user podman_basic_user by (uid=0)\nOct 04 12:33:22 managed-node2 systemd[25493]: Starting D-Bus User Message Bus Socket.\n-- Subject: Unit UNIT has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun starting up.\nOct 04 12:33:22 managed-node2 systemd[25493]: Reached target Paths.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT 
has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[25493]: Started Mark boot as successful after the user session has run 2 minutes.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[25493]: Reached target Timers.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[25493]: Listening on D-Bus User Message Bus Socket.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[25493]: Reached target Sockets.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[25493]: Reached target Basic System.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[25493]: Reached target Default.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[25493]: Startup finished in 26ms.\n-- Subject: User manager start-up is now complete\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The user manager instance for user 3001 has been started. All services queued\n-- for starting have been started. 
Note that other services might still be starting\n-- up or be started at any later time.\n-- \n-- Startup of the manager took 26808 microseconds.\nOct 04 12:33:22 managed-node2 systemd[1]: Started User Manager for UID 3001.\n-- Subject: Unit user@3001.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user@3001.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:23 managed-node2 platform-python[25628]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:23 managed-node2 platform-python[25751]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:23 managed-node2 sudo[25874]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sifywbsrisccwijbkunlnmxfrflsisdd ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595603.7086673-15808-227803905840648/AnsiballZ_podman_image.py'\nOct 04 12:33:23 managed-node2 sudo[25874]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:33:24 managed-node2 systemd[25493]: Started D-Bus User Message Bus.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:24 managed-node2 systemd[25493]: Created slice user.slice.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25886.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:24 managed-node2 systemd[25493]: Started podman-pause-f03acc05.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25902.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25917.scope.\n-- Subject: Unit UNIT has finished start-up\n-- 
Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25926.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:25 managed-node2 systemd[25493]: Started podman-25933.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:25 managed-node2 systemd[25493]: Started podman-25942.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:25 managed-node2 sudo[25874]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:33:25 managed-node2 platform-python[26071]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:33:25 managed-node2 platform-python[26194]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:26 managed-node2 platform-python[26317]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nOct 04 12:33:26 managed-node2 platform-python[26416]: ansible-copy Invoked with dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml owner=podman_basic_user group=3001 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595606.1679773-15914-270785518997063/source _original_basename=tmpck_isd86 follow=False checksum=4df6e405cb1c69d6fda71fca57ba10095c6652bf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nOct 04 12:33:26 managed-node2 sudo[26541]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhtmywupgtnbvlezrrhjughqngefyblk ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595606.8491445-15948-152734101610115/AnsiballZ_podman_play.py'\nOct 04 12:33:26 managed-node2 sudo[26541]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None 
kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:33:27 managed-node2 systemd[25493]: Started podman-26552.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:27 managed-node2 kernel: tun: Universal TUN/TAP device driver, 1.6\nOct 04 12:33:27 managed-node2 systemd[25493]: Started rootless-netns-edb70a77.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:27 managed-node2 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.\nOct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth2fe45075: link is not ready\nOct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered blocking state\nOct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered disabled state\nOct 04 12:33:27 managed-node2 kernel: device veth2fe45075 entered promiscuous mode\nOct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready\nOct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready\nOct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth2fe45075: link becomes ready\nOct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered blocking state\nOct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered forwarding state\nOct 04 12:33:27 managed-node2 dnsmasq[26740]: listening on cni-podman1(#3): 10.89.0.1\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: started, version 2.79 cachesize 150\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: using local addresses only for domain dns.podman\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: reading /etc/resolv.conf\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: using local addresses only for domain dns.podman\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.0.2.3#53\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.29.169.13#53\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.29.170.12#53\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.2.32.1#53\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: read /run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nOct 04 12:33:27 managed-node2 conmon[26754]: conmon 978f42b0916c823a3a50 : failed to write to /proc/self/oom_score_adj: Permission denied\nOct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/14/attach}\nOct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : terminal_ctrl_fd: 14\nOct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : winsz read side: 17, winsz 
write side: 18\nOct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : container PID: 26765\nOct 04 12:33:27 managed-node2 conmon[26775]: conmon 4c95f0539eb18fb7ecd6 : failed to write to /proc/self/oom_score_adj: Permission denied\nOct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}\nOct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : terminal_ctrl_fd: 13\nOct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : winsz read side: 16, winsz write side: 17\nOct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : container PID: 26786\nOct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nOct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod:\n 4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d\n Container:\n 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\n \nOct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"/bin/podman filtering at log level debug\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Cached value indicated that metacopy is not being used\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n 
time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Successfully loaded 1 networks\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"found free device name cni-podman1\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"found free ipv4 network subnet 10.89.0.0/24\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:33:27.193743577 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"reference \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\" does not resolve to an image ID\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"reference \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\" does not resolve to an image ID\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"FROM \\\"scratch\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Cached value indicated that idmapped mounts for overlay are not supported\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Check for idmapped mounts support \"\n time=\"2025-10-04T12:33:27-04:00\" level=debug 
msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"overlay: test mount indicated that volatile is being used\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/empty,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/work,userxattr,volatile,context=\\\"system_u:object_r:container_file_t:s0:c105,c564\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Container ID: 9516e55badb1147a3bd380a45ee33bd293ab708eefc046d098c76c453fc83966\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:copy Args:[/usr/libexec/podman/catatonit /catatonit] Flags:[] Attrs:map[] Message:COPY /usr/libexec/podman/catatonit /catatonit Heredocs:[] Original:COPY /usr/libexec/podman/catatonit /catatonit}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"COPY []string(nil), imagebuilder.Copy{FromFS:false, From:\\\"\\\", Src:[]string{\\\"/usr/libexec/podman/catatonit\\\"}, Dest:\\\"/catatonit\\\", Download:false, Chown:\\\"\\\", Chmod:\\\"\\\", Checksum:\\\"\\\", Files:[]imagebuilder.File(nil)}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"added content file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:entrypoint Args:[/catatonit -P] Flags:[] Attrs:map[json:true] Message:ENTRYPOINT /catatonit -P Heredocs:[] Original:ENTRYPOINT [\\\"/catatonit\\\", \\\"-P\\\"]}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"COMMIT localhost/podman-pause:4.9.4-dev-1708535009\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"COMMIT \\\"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"committing image with reference \\\"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\" is allowed by policy\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"layer list: [\\\"768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb\\\"]\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"using \\\"/var/tmp/buildah340804419\\\" to hold temporary data\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Tar with options on /home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/diff\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"layer \\\"768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb\\\" size 
is 767488 bytes, uncompressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690, possibly-compressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"OCIv1 config = {\\\"created\\\":\\\"2025-10-04T16:33:27.33236731Z\\\",\\\"architecture\\\":\\\"amd64\\\",\\\"os\\\":\\\"linux\\\",\\\"config\\\":{\\\"Env\\\":[\\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\"],\\\"Entrypoint\\\":[\\\"/catatonit\\\",\\\"-P\\\"],\\\"Labels\\\":{\\\"io.buildah.version\\\":\\\"1.33.5\\\"}},\\\"rootfs\\\":{\\\"type\\\":\\\"layers\\\",\\\"diff_ids\\\":[\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"]},\\\"history\\\":[{\\\"created\\\":\\\"2025-10-04T16:33:27.331845264Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) COPY file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd in /catatonit \\\",\\\"empty_layer\\\":true},{\\\"created\\\":\\\"2025-10-04T16:33:27.335420758Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) ENTRYPOINT [\\\\\\\"/catatonit\\\\\\\", \\\\\\\"-P\\\\\\\"]\\\"}]}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"OCIv1 manifest = {\\\"schemaVersion\\\":2,\\\"mediaType\\\":\\\"application/vnd.oci.image.manifest.v1+json\\\",\\\"config\\\":{\\\"mediaType\\\":\\\"application/vnd.oci.image.config.v1+json\\\",\\\"digest\\\":\\\"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\",\\\"size\\\":667},\\\"layers\\\":[{\\\"mediaType\\\":\\\"application/vnd.oci.image.layer.v1.tar\\\",\\\"digest\\\":\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\",\\\"size\\\":767488}],\\\"annotations\\\":{\\\"org.opencontainers.image.base.digest\\\":\\\"\\\",\\\"org.opencontainers.image.base.name\\\":\\\"\\\"}}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Docker v2s2 config = {\\\"created\\\":\\\"2025-10-04T16:33:27.33236731Z\\\",\\\"container\\\":\\\"9516e55badb1147a3bd380a45ee33bd293ab708eefc046d098c76c453fc83966\\\",\\\"container_config\\\":{\\\"Hostname\\\":\\\"\\\",\\\"Domainname\\\":\\\"\\\",\\\"User\\\":\\\"\\\",\\\"AttachStdin\\\":false,\\\"AttachStdout\\\":false,\\\"AttachStderr\\\":false,\\\"Tty\\\":false,\\\"OpenStdin\\\":false,\\\"StdinOnce\\\":false,\\\"Env\\\":[\\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\"],\\\"Cmd\\\":[],\\\"Image\\\":\\\"\\\",\\\"Volumes\\\":{},\\\"WorkingDir\\\":\\\"\\\",\\\"Entrypoint\\\":[\\\"/catatonit\\\",\\\"-P\\\"],\\\"OnBuild\\\":[],\\\"Labels\\\":{\\\"io.buildah.version\\\":\\\"1.33.5\\\"}},\\\"config\\\":{\\\"Hostname\\\":\\\"\\\",\\\"Domainname\\\":\\\"\\\",\\\"User\\\":\\\"\\\",\\\"AttachStdin\\\":false,\\\"AttachStdout\\\":false,\\\"AttachStderr\\\":false,\\\"Tty\\\":false,\\\"OpenStdin\\\":false,\\\"StdinOnce\\\":false,\\\"Env\\\":[\\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\"],\\\"Cmd\\\":[],\\\"Image\\\":\\\"\\\",\\\"Volumes\\\":{},\\\"WorkingDir\\\":\\\"\\\",\\\"Entrypoint\\\":[\\\"/catatonit\\\",\\\"-P\\\"],\\\"OnBuild\\\":[],\\\"Labels\\\":{\\\"io.buildah.version\\\":\\\"1.33.5\\\"}},\\\"architecture\\\":\\\"amd64\\\",\\\"os\\\":\\\"linux\\\",\\\"rootfs\\\":{\\\"type\\\":\\\"layers\\\",\\\"diff_ids\\\":[\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"]},\\\"history\\\":[{\\\"created\\\":\\\"2025-10-04T16:33:27.331845264Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) COPY file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd 
in /catatonit \\\",\\\"empty_layer\\\":true},{\\\"created\\\":\\\"2025-10-04T16:33:27.335420758Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) ENTRYPOINT [\\\\\\\"/catatonit\\\\\\\", \\\\\\\"-P\\\\\\\"]\\\"}]}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Docker v2s2 manifest = {\\\"schemaVersion\\\":2,\\\"mediaType\\\":\\\"application/vnd.docker.distribution.manifest.v2+json\\\",\\\"config\\\":{\\\"mediaType\\\":\\\"application/vnd.docker.container.image.v1+json\\\",\\\"size\\\":1341,\\\"digest\\\":\\\"sha256:cc08d8f0e313f02451a20252b1d70f6f69284663aede171c80a5525e2a51ba5b\\\"},\\\"layers\\\":[{\\\"mediaType\\\":\\\"application/vnd.docker.image.rootfs.diff.tar\\\",\\\"size\\\":767488,\\\"digest\\\":\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"}]}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using SQLite blob info cache at /home/podman_basic_user/.local/share/containers/cache/blob-info-cache-v1.sqlite\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"IsRunningImageAllowed for image containers-storage:\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\" Using transport \\\"containers-storage\\\" policy section \"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\" Requirement 0: allowed\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Overall: allowed\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"start reading config\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"finished reading config\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.docker.distribution.manifest.v1+json]\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"... 
will first try using the original manifest unmodified\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Checking if we can reuse blob sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690: general substitution = true, compression for MIME type \\\"application/vnd.oci.image.layer.v1.tar\\\" = true\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"reading layer \\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"No compression detected\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using original blob without modification\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Applying tar in /home/podman_basic_user/.local/share/containers/storage/overlay/d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690/diff\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"finished reading layer \\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"No compression detected\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Compression change for blob sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05 (\\\"application/vnd.oci.image.config.v1+json\\\") not supported\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using original blob without modification\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"setting image creation date to 2025-10-04 16:33:27.33236731 +0000 UTC\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"created new image ID \\\"b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\" with metadata \\\"{}\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"added name \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" to image \\\"b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"printing final image id \\\"b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Got pod cgroup as /libpod_parent/4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in 
local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Inspecting image b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Inspecting image b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Inspecting image b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"setting container name 4bfdec19f3e3-infra\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Allocated lock 1 for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created container \\\"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Container \\\"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\\\" has work directory \\\"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Container \\\"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\\\" has run directory \\\"/run/user/3001/containers/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage 
([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Pulling image quay.io/libpod/testimage:20210610 (policy: missing)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n 
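The entries above show podman resolving quay.io/libpod/testimage:20210610 under pull policy "missing": the registry is contacted only when no local copy of the image exists, which is why every lookup here resolves from local containers storage without a pull. A minimal shell sketch of the same check, assuming only that podman is on PATH (image name taken from the log):

    # Pull only when the image is absent locally -- the "missing" policy
    # the log shows podman play kube applying.
    IMAGE=quay.io/libpod/testimage:20210610
    if ! podman image exists "$IMAGE"; then
        podman pull "$IMAGE"
    fi
    # podman run accepts the same policy directly:
    podman run --rm --pull=missing "$IMAGE" true
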
time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"adding container to pod httpd1\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"setting container name httpd1-httpd1\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding mount /proc\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding mount /dev\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding mount /dev/pts\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding mount /dev/mqueue\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding mount /sys\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding mount /sys/fs/cgroup\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Allocated lock 2 for container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as 
blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created container \\\"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Container \\\"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\\\" has work directory \\\"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Container \\\"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\\\" has run directory \\\"/run/user/3001/containers/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Strongconnecting node 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Pushed 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 onto stack\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Finishing node 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1. Popped 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 off stack\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Strongconnecting node 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Pushed 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 onto stack\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Finishing node 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8. Popped 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 off stack\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/3P7PWYNTG5QJZJOWQ6XDK4NETN,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/work,userxattr,context=\\\"system_u:object_r:container_file_t:s0:c285,c421\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Made network namespace at /run/user/3001/netns/netns-f5551a3d-13a6-81b5-6f62-8de155b907e3 for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Mounted container \\\"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\\\" at \\\"/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/merged\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created root filesystem for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 at /home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/merged\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"creating rootless network namespace with name \\\"rootless-netns-d22c9f230d0691b8f418\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"slirp4netns command: /bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -r 3 --netns-type=path 
/run/user/3001/netns/rootless-netns-d22c9f230d0691b8f418 tap0\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"The path of /etc/resolv.conf in the mount ns is \\\"/etc/resolv.conf\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"cni result for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:e2:98:f4:5f:02:10 Sandbox:} {Name:veth2fe45075 Mac:16:b6:29:b0:6d:39 Sandbox:} {Name:eth0 Mac:2a:18:12:08:ad:32 Sandbox:/run/user/3001/netns/netns-f5551a3d-13a6-81b5-6f62-8de155b907e3}] [{Version:4 Interface:0xc000c3e028 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-10-04T12:33:27-04:00\\\" level=info msg=\\\"Starting parent driver\\\"\\n\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-10-04T12:33:27-04:00\\\" level=info msg=\\\"opaque=map[builtin.readypipepath:/run/user/3001/libpod/tmp/rootlessport4177357533/.bp-ready.pipe builtin.socketpath:/run/user/3001/libpod/tmp/rootlessport4177357533/.bp.sock]\\\"\\n\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-10-04T12:33:27-04:00\\\" level=info msg=\\\"Starting child driver in child netns (\\\\\\\"/proc/self/exe\\\\\\\" [rootlessport-child])\\\"\\n\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-10-04T12:33:27-04:00\\\" level=info msg=\\\"Waiting for initComplete\\\"\\n\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-10-04T12:33:27-04:00\\\" level=info msg=\\\"initComplete is closed; parent and child established the communication channel\\\"\\ntime=\\\"2025-10-04T12:33:27-04:00\\\" level=info msg=\\\"Exposing ports [{ 80 15001 1 tcp}]\\\"\\n\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-10-04T12:33:27-04:00\\\" level=info msg=Ready\\n\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"rootlessport is ready\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Workdir \\\"/\\\" resolved to host path \\\"/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/merged\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created OCI spec for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/config.json\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Got pod cgroup as \"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 -u 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata -p 
/run/user/3001/containers/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/pidfile -n 4bfdec19f3e3-infra --exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/3001/containers/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1]\"\n time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/libpod_parent: permission denied\"\n [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied\n \n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Received: 26765\"\n time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Got Conmon PID as 26755\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 in OCI runtime\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding nameserver(s) from network status of '[\\\"10.89.0.1\\\"]'\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding search domain(s) from network status of '[\\\"dns.podman\\\"]'\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Starting container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 with command [/catatonit -P]\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Started container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/FD7XHZOTU3ZCOHOMS6WJGARUCE,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/work,userxattr,context=\\\"system_u:object_r:container_file_t:s0:c285,c421\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Mounted container \\\"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\\\" at 
\\\"/home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/merged\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created root filesystem for container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 at /home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/merged\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Workdir \\\"/var/www\\\" resolved to a volume or mount\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created OCI spec for container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/config.json\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Got pod cgroup as \"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 -u 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata -p /run/user/3001/containers/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/pidfile -n httpd1-httpd1 --exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/3001/containers/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8]\"\n [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied\n \n time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: 
mkdir /sys/fs/cgroup/cpu/conmon: permission denied\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Received: 26786\"\n time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Got Conmon PID as 26776\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 in OCI runtime\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Starting container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 with command [/bin/busybox-extras httpd -f -p 80]\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Started container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Called kube.PersistentPostRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Shutting down engines\"\nOct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nOct 04 12:33:27 managed-node2 sudo[26541]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:33:28 managed-node2 sudo[26917]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naqgeablflvziirakssvhgovxnyqlazn ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595608.143278-15965-116724055097581/AnsiballZ_systemd.py'\nOct 04 12:33:28 managed-node2 sudo[26917]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:33:28 managed-node2 platform-python[26920]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None\nOct 04 12:33:28 managed-node2 systemd[25493]: Reloading.\nOct 04 12:33:28 managed-node2 sudo[26917]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:33:28 managed-node2 sudo[27054]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcawqykrekhxzainagfjkqbithxyjltw ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595608.6875198-15999-90547985485461/AnsiballZ_systemd.py'\nOct 04 12:33:28 managed-node2 sudo[27054]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:33:29 managed-node2 platform-python[27057]: ansible-systemd Invoked with name= scope=user enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None\nOct 04 12:33:29 managed-node2 systemd[25493]: Reloading.\nOct 04 12:33:29 managed-node2 sudo[27054]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:33:29 managed-node2 sudo[27193]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sugdwogvzmuboswwbcwdjoyqjtijmrjd ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595609.2593784-16020-169137665275210/AnsiballZ_systemd.py'\nOct 04 12:33:29 managed-node2 sudo[27193]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:33:29 managed-node2 dnsmasq[26742]: listening on cni-podman1(#3): fe80::e098:f4ff:fe5f:210%cni-podman1\nOct 04 12:33:29 managed-node2 platform-python[27196]: ansible-systemd Invoked with name= scope=user state=started 
daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None\nOct 04 12:33:29 managed-node2 systemd[25493]: Created slice podman\\x2dkube.slice.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:29 managed-node2 systemd[25493]: Starting A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit UNIT has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun starting up.\nOct 04 12:33:29 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : container 26765 exited with status 137\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1)\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nOct 04 12:33:29 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : container 26786 exited with status 137\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=info msg=\"Using sqlite as database backend\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using graph driver overlay\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using transient store: false\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" 
level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Cached value indicated that metacopy is not being used\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Initializing event backend file\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=info msg=\"Setting parallel job count to 7\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8)\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using 
conmon: \\\"/usr/bin/conmon\\\"\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=info msg=\"Using sqlite as database backend\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using graph driver overlay\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using transient store: false\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Cached value indicated that metacopy is not being used\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Initializing event backend file\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid 
executable found for OCI runtime crun-wasm: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=info msg=\"Setting parallel job count to 7\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8)\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Shutting down engines\"\nOct 04 12:33:29 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered disabled state\nOct 04 12:33:29 managed-node2 kernel: device veth2fe45075 left promiscuous mode\nOct 04 12:33:29 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered disabled state\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1)\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Shutting down engines\"\nOct 04 12:33:29 managed-node2 podman[27202]: Pods stopped:\nOct 04 12:33:29 managed-node2 podman[27202]: 4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d\nOct 04 12:33:29 managed-node2 podman[27202]: Pods removed:\nOct 04 12:33:29 managed-node2 podman[27202]: 4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d\nOct 04 12:33:29 managed-node2 podman[27202]: Secrets removed:\nOct 04 12:33:29 managed-node2 podman[27202]: Volumes removed:\nOct 04 12:33:30 managed-node2 systemd[25493]: Started rootless-netns-d4627493.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:30 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth938ef76c: link is not ready\nOct 04 12:33:30 managed-node2 kernel: 
cni-podman1: port 1(veth938ef76c) entered blocking state\nOct 04 12:33:30 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered disabled state\nOct 04 12:33:30 managed-node2 kernel: device veth938ef76c entered promiscuous mode\nOct 04 12:33:30 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered blocking state\nOct 04 12:33:30 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered forwarding state\nOct 04 12:33:30 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth938ef76c: link becomes ready\nOct 04 12:33:30 managed-node2 dnsmasq[27452]: listening on cni-podman1(#3): 10.89.0.1\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: started, version 2.79 cachesize 150\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: using local addresses only for domain dns.podman\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: reading /etc/resolv.conf\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: using local addresses only for domain dns.podman\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.0.2.3#53\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.29.169.13#53\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.29.170.12#53\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.2.32.1#53\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: read /run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nOct 04 12:33:30 managed-node2 podman[27202]: Pod:\nOct 04 12:33:30 managed-node2 podman[27202]: e2136086e2f9ee2160cfb4def2ccdfca084fdbe4f7fe9cb8a2664a8c62dbb7fb\nOct 04 12:33:30 managed-node2 podman[27202]: Container:\nOct 04 12:33:30 managed-node2 podman[27202]: e74648d47617035a35842176c0cd197e876af20efb66c9a6fbb560c1ba4c6833\nOct 04 12:33:30 managed-node2 systemd[25493]: Started A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:30 managed-node2 sudo[27193]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:33:31 managed-node2 platform-python[27630]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:33:32 managed-node2 dnsmasq[27454]: listening on cni-podman1(#3): fe80::f8fb:d3ff:fe6b:28b6%cni-podman1\nOct 04 12:33:32 managed-node2 platform-python[27754]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:33:33 managed-node2 platform-python[27879]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:34 managed-node2 platform-python[28003]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER 
backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:34 managed-node2 platform-python[28126]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:35 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:33:36 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:33:36 managed-node2 platform-python[28426]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:33:36 managed-node2 platform-python[28549]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:37 managed-node2 platform-python[28672]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nOct 04 12:33:37 managed-node2 platform-python[28771]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd2.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595616.9605668-16444-215555946645887/source _original_basename=tmp7zrtpb5n follow=False checksum=65edd58cfda8e78be7cf81993b5521acb64e8edf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nOct 04 12:33:37 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:33:38 managed-node2 systemd[1]: Created slice cgroup 
machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice.\n-- Subject: Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1056] manager: (cni-podman1): new Bridge device (/org/freedesktop/NetworkManager/Devices/3)\nOct 04 12:33:38 managed-node2 systemd-udevd[28943]: Using default interface naming scheme 'rhel-8.0'.\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1123] manager: (veth58b4002b): new Veth device (/org/freedesktop/NetworkManager/Devices/4)\nOct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth58b4002b: link is not ready\nOct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered blocking state\nOct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered disabled state\nOct 04 12:33:38 managed-node2 kernel: device veth58b4002b entered promiscuous mode\nOct 04 12:33:38 managed-node2 systemd-udevd[28944]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nOct 04 12:33:38 managed-node2 systemd-udevd[28944]: Could not generate persistent MAC address for veth58b4002b: No such file or directory\nOct 04 12:33:38 managed-node2 systemd-udevd[28943]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nOct 04 12:33:38 managed-node2 systemd-udevd[28943]: Could not generate persistent MAC address for cni-podman1: No such file or directory\nOct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1326] device (cni-podman1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1331] device (cni-podman1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1338] device (cni-podman1): Activation: starting connection 'cni-podman1' (f4b0bed9-ed1a-4daa-9776-1b7c64cb04df)\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1339] device (cni-podman1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1342] device (cni-podman1): state change: prepare -> config (reason 'none', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1344] device (cni-podman1): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1345] device (cni-podman1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready\nOct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth58b4002b: link becomes ready\nOct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered blocking state\nOct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered forwarding state\nOct 04 
12:33:38 managed-node2 dbus-daemon[595]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.5' (uid=0 pid=660 comm=\"/usr/sbin/NetworkManager --no-daemon \" label=\"system_u:system_r:NetworkManager_t:s0\")\nOct 04 12:33:38 managed-node2 systemd[1]: Starting Network Manager Script Dispatcher Service...\n-- Subject: Unit NetworkManager-dispatcher.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit NetworkManager-dispatcher.service has begun starting up.\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1545] device (veth58b4002b): carrier: link connected\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1548] device (cni-podman1): carrier: link connected\nOct 04 12:33:38 managed-node2 dbus-daemon[595]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'\nOct 04 12:33:38 managed-node2 systemd[1]: Started Network Manager Script Dispatcher Service.\n-- Subject: Unit NetworkManager-dispatcher.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit NetworkManager-dispatcher.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1968] device (cni-podman1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1970] device (cni-podman1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1979] device (cni-podman1): Activation: successful, device activated.\nOct 04 12:33:38 managed-node2 dnsmasq[29065]: listening on cni-podman1(#3): 10.89.0.1\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: started, version 2.79 cachesize 150\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: using local addresses only for domain dns.podman\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: reading /etc/resolv.conf\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: using local addresses only for domain dns.podman\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: using nameserver 10.29.169.13#53\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: using nameserver 10.29.170.12#53\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: using nameserver 10.2.32.1#53\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nOct 04 12:33:38 managed-node2 systemd[1]: Started libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope.\n-- Subject: Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:38 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}\nOct 04 12:33:38 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : terminal_ctrl_fd: 13\nOct 04 12:33:38 managed-node2 conmon[29071]: 
conmon acfc6789a4b1745da3ca : winsz read side: 17, winsz write side: 18\nOct 04 12:33:38 managed-node2 systemd[1]: Started libcontainer container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.\n-- Subject: Unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:38 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : container PID: 29081\nOct 04 12:33:38 managed-node2 systemd[1]: Started libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope.\n-- Subject: Unit libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}\nOct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : terminal_ctrl_fd: 12\nOct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : winsz read side: 16, winsz write side: 17\nOct 04 12:33:38 managed-node2 systemd[1]: Started libcontainer container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.\n-- Subject: Unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : container PID: 29103\nOct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml\nOct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod:\n f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a\n Container:\n b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\n \nOct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2025-10-04T12:33:37-04:00\" level=info msg=\"/usr/bin/podman filtering at log level debug\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2025-10-04T12:33:37-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using static dir 
/var/lib/containers/storage/libpod\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\n time=\"2025-10-04T12:33:37-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-10-04T12:33:37-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:31:14.473584587 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n 
time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751)\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice for parent machine.slice and name libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751)\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Inspecting image 9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Inspecting image 9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"\n 
time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Inspecting image 9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"setting container name f7eedbe6e6e1-infra\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Allocated lock 1 for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Cached value indicated that idmapped mounts for overlay are not supported\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Check for idmapped mounts support \"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created container \\\"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Container \\\"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\\\" has work directory \\\"/var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Container \\\"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\\\" has run directory \\\"/run/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Pulling image quay.io/libpod/testimage:20210610 (policy: missing)\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n 
time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"parsed reference into 
\\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"adding container to pod httpd2\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"setting container name httpd2-httpd2\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=info msg=\"Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding mount /proc\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding mount /dev\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding mount /dev/pts\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding mount /dev/mqueue\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding mount /sys\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding mount /sys/fs/cgroup\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Allocated lock 2 for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created container \\\"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Container \\\"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\\\" has work directory 
\\\"/var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Container \\\"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\\\" has run directory \\\"/run/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Strongconnecting node acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Pushed acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 onto stack\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Finishing node acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7. Popped acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 off stack\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Strongconnecting node b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Pushed b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a onto stack\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Finishing node b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a. Popped b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a off stack\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/CLVCQDNEL47VMN42Y3O6VVBSEK,upperdir=/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/diff,workdir=/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/work,nodev,metacopy=on,context=\\\"system_u:object_r:container_file_t:s0:c321,c454\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Mounted container \\\"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\\\" at \\\"/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/merged\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created root filesystem for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 at /var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/merged\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Made network namespace at /run/netns/netns-4bb92ac6-c391-8230-0912-824e2a801d42 for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"cni result for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:92:f8:b0:67:7f:78 Sandbox:} {Name:veth58b4002b Mac:9e:e6:53:58:c5:ef Sandbox:} {Name:eth0 Mac:9a:79:68:03:db:b9 Sandbox:/run/netns/netns-4bb92ac6-c391-8230-0912-824e2a801d42}] [{Version:4 Interface:0xc0006223b8 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Setting Cgroups for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 to 
machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice:libpod:acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Workdir \\\"/\\\" resolved to host path \\\"/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/merged\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created OCI spec for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 at /var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/config.json\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice for parent machine.slice and name libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 -u acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata -p /run/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/pidfile -n f7eedbe6e6e1-infra --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7]\"\n 
time=\"2025-10-04T12:33:38-04:00\" level=info msg=\"Running conmon under slice machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice and unitName libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Received: 29081\"\n time=\"2025-10-04T12:33:38-04:00\" level=info msg=\"Got Conmon PID as 29071\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 in OCI runtime\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding nameserver(s) from network status of '[\\\"10.89.0.1\\\"]'\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding search domain(s) from network status of '[\\\"dns.podman\\\"]'\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Starting container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 with command [/catatonit -P]\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Started container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/LBBH4VMJZF2KPCTZG3NWOHXUKQ,upperdir=/var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/diff,workdir=/var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/work,nodev,metacopy=on,context=\\\"system_u:object_r:container_file_t:s0:c321,c454\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Mounted container \\\"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\\\" at \\\"/var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/merged\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created root filesystem for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a at /var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/merged\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Setting Cgroups for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a to machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice:libpod:b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Workdir \\\"/var/www\\\" resolved to a volume or mount\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created OCI spec for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a at /var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/config.json\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice for parent machine.slice and name libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created cgroup 
machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a -u b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata -p /run/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/pidfile -n httpd2-httpd2 --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a]\"\n time=\"2025-10-04T12:33:38-04:00\" level=info msg=\"Running conmon under slice machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice and unitName libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Received: 29103\"\n time=\"2025-10-04T12:33:38-04:00\" level=info msg=\"Got Conmon PID as 29092\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a in OCI runtime\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Starting container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a with command [/bin/busybox-extras httpd -f -p 80]\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Started container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Called kube.PersistentPostRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Shutting down 
engines\"\nOct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nOct 04 12:33:39 managed-node2 platform-python[29234]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None\nOct 04 12:33:39 managed-node2 systemd[1]: Reloading.\nOct 04 12:33:39 managed-node2 dnsmasq[29069]: listening on cni-podman1(#3): fe80::90f8:b0ff:fe67:7f78%cni-podman1\nOct 04 12:33:39 managed-node2 platform-python[29403]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None\nOct 04 12:33:39 managed-node2 systemd[1]: Reloading.\nOct 04 12:33:40 managed-node2 platform-python[29558]: ansible-systemd Invoked with name= scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None\nOct 04 12:33:40 managed-node2 systemd[1]: Created slice system-podman\\x2dkube.slice.\n-- Subject: Unit system-podman\\x2dkube.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit system-podman\\x2dkube.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:40 managed-node2 systemd[1]: Starting A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun starting up.\nOct 04 12:33:40 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : container 29081 exited with status 137\nOct 04 12:33:40 managed-node2 systemd[1]: libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 systemd[1]: libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope: Consumed 31ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope completed and consumed the indicated resources.\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7)\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using conmon: 
\\\"/usr/bin/conmon\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=info msg=\"Using sqlite as database backend\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using graph driver overlay\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using transient store: false\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Initializing event backend file\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nOct 04 12:33:40 managed-node2 
/usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=info msg=\"Setting parallel job count to 7\"\nOct 04 12:33:40 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : container 29103 exited with status 137\nOct 04 12:33:40 managed-node2 systemd[1]: libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 systemd[1]: libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope: Consumed 33ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope completed and consumed the indicated resources.\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a)\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=info msg=\"Using sqlite as database backend\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using graph driver overlay\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\nOct 04 12:33:40 managed-node2 
/usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using transient store: false\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Initializing event backend file\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using OCI runtime 
\\\"/usr/bin/runc\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=info msg=\"Setting parallel job count to 7\"\nOct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay-e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0-merged.mount has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a)\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Shutting down engines\"\nOct 04 12:33:40 managed-node2 systemd[1]: libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered disabled state\nOct 04 12:33:40 managed-node2 kernel: device veth58b4002b left promiscuous mode\nOct 04 12:33:40 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered disabled state\nOct 04 12:33:40 managed-node2 systemd[1]: run-netns-netns\\x2d4bb92ac6\\x2dc391\\x2d8230\\x2d0912\\x2d824e2a801d42.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-netns-netns\\x2d4bb92ac6\\x2dc391\\x2d8230\\x2d0912\\x2d824e2a801d42.mount has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7-userdata-shm.mount has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay-93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522-merged.mount has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage 
--log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7)\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Shutting down engines\"\nOct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 systemd[1]: Stopping libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope.\n-- Subject: Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has begun shutting down.\nOct 04 12:33:40 managed-node2 systemd[1]: libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 systemd[1]: Stopped libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope.\n-- Subject: Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished shutting down.\nOct 04 12:33:40 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice.\n-- Subject: Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished shutting down.\nOct 04 12:33:40 managed-node2 systemd[1]: machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice: Consumed 193ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice completed and consumed the indicated resources.\nOct 04 12:33:40 managed-node2 podman[29565]: Pods stopped:\nOct 04 12:33:40 managed-node2 podman[29565]: f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a\nOct 04 12:33:40 managed-node2 podman[29565]: Pods removed:\nOct 04 12:33:40 managed-node2 podman[29565]: f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a\nOct 04 12:33:40 managed-node2 podman[29565]: Secrets removed:\nOct 04 12:33:40 managed-node2 podman[29565]: Volumes removed:\nOct 04 
12:33:40 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice.\n-- Subject: Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:40 managed-node2 systemd[1]: Started libcontainer container 2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.\n-- Subject: Unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:41 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth44fc3814: link is not ready\nOct 04 12:33:41 managed-node2 NetworkManager[660]: [1759595621.0690] manager: (veth44fc3814): new Veth device (/org/freedesktop/NetworkManager/Devices/5)\nOct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered blocking state\nOct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state\nOct 04 12:33:41 managed-node2 kernel: device veth44fc3814 entered promiscuous mode\nOct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered blocking state\nOct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered forwarding state\nOct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state\nOct 04 12:33:41 managed-node2 systemd-udevd[29722]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nOct 04 12:33:41 managed-node2 systemd-udevd[29722]: Could not generate persistent MAC address for veth44fc3814: No such file or directory\nOct 04 12:33:41 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth44fc3814: link becomes ready\nOct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered blocking state\nOct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered forwarding state\nOct 04 12:33:41 managed-node2 NetworkManager[660]: [1759595621.0827] device (veth44fc3814): carrier: link connected\nOct 04 12:33:41 managed-node2 NetworkManager[660]: [1759595621.0829] device (cni-podman1): carrier: link connected\nOct 04 12:33:41 managed-node2 dnsmasq[29793]: listening on cni-podman1(#3): 10.89.0.1\nOct 04 12:33:41 managed-node2 dnsmasq[29793]: listening on cni-podman1(#3): fe80::90f8:b0ff:fe67:7f78%cni-podman1\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: started, version 2.79 cachesize 150\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: using local addresses only for domain dns.podman\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: reading /etc/resolv.conf\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: using local addresses only for domain dns.podman\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: using nameserver 10.29.169.13#53\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: using nameserver 10.29.170.12#53\nOct 04 12:33:41 managed-node2 
dnsmasq[29797]: using nameserver 10.2.32.1#53\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nOct 04 12:33:41 managed-node2 systemd[1]: Started libcontainer container d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.\n-- Subject: Unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:41 managed-node2 systemd[1]: Started libcontainer container c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.\n-- Subject: Unit libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:41 managed-node2 podman[29565]: Pod:\nOct 04 12:33:41 managed-node2 podman[29565]: 981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a\nOct 04 12:33:41 managed-node2 podman[29565]: Container:\nOct 04 12:33:41 managed-node2 podman[29565]: c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28\nOct 04 12:33:41 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:42 managed-node2 platform-python[29963]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:33:43 managed-node2 platform-python[30096]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:44 managed-node2 platform-python[30220]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:45 managed-node2 platform-python[30343]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:46 managed-node2 platform-python[30638]: 
ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:33:47 managed-node2 platform-python[30761]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:47 managed-node2 platform-python[30884]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nOct 04 12:33:47 managed-node2 platform-python[30983]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd3.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595627.3886073-16945-54933471056529/source _original_basename=tmpukku_qg2 follow=False checksum=e89a97ee50e2e2344cd04b5ef33140ac4f197bf8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nOct 04 12:33:48 managed-node2 systemd[1]: NetworkManager-dispatcher.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state.\nOct 04 12:33:48 managed-node2 platform-python[31108]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:33:48 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice.\n-- Subject: Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:48 managed-node2 NetworkManager[660]: <info>  [1759595628.5733] manager: (vethca854251): new Veth device (/org/freedesktop/NetworkManager/Devices/6)\nOct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethca854251: link is not ready\nOct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered blocking state\nOct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered disabled state\nOct 04 12:33:48 managed-node2 kernel: device vethca854251 entered promiscuous mode\nOct 04 12:33:48 managed-node2 systemd-udevd[31155]: link_config: autonegotiation is 
unset or enabled, the speed and duplex are not writable.\nOct 04 12:33:48 managed-node2 systemd-udevd[31155]: Could not generate persistent MAC address for vethca854251: No such file or directory\nOct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready\nOct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready\nOct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethca854251: link becomes ready\nOct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered blocking state\nOct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered forwarding state\nOct 04 12:33:48 managed-node2 NetworkManager[660]: <info>  [1759595628.6066] device (vethca854251): carrier: link connected\nOct 04 12:33:48 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses\nOct 04 12:33:48 managed-node2 systemd[1]: Started libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope.\n-- Subject: Unit libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:48 managed-node2 systemd[1]: Started libcontainer container 59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.\n-- Subject: Unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:48 managed-node2 systemd[1]: Started libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope.\n-- Subject: Unit libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:48 managed-node2 systemd[1]: Started libcontainer container 0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.\n-- Subject: Unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:49 managed-node2 platform-python[31388]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None\nOct 04 12:33:49 managed-node2 systemd[1]: Reloading.\nOct 04 12:33:50 managed-node2 platform-python[31549]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None\nOct 04 12:33:50 managed-node2 systemd[1]: Reloading.\nOct 04 12:33:50 managed-node2 platform-python[31704]: ansible-systemd Invoked with name= scope=system state=started 
daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None\nOct 04 12:33:50 managed-node2 systemd[1]: Starting A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun starting up.\nOct 04 12:33:50 managed-node2 systemd[1]: libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has successfully entered the 'dead' state.\nOct 04 12:33:50 managed-node2 systemd[1]: libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope: Consumed 31ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope completed and consumed the indicated resources.\nOct 04 12:33:50 managed-node2 systemd[1]: libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has successfully entered the 'dead' state.\nOct 04 12:33:50 managed-node2 systemd[1]: libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope: Consumed 31ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope completed and consumed the indicated resources.\nOct 04 12:33:50 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nOct 04 12:33:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay-96a34ad4fa258979f69c8abe553376ab173aebc4813555f0aa72e1d24059a836-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-96a34ad4fa258979f69c8abe553376ab173aebc4813555f0aa72e1d24059a836-merged.mount has successfully entered the 'dead' state.\nOct 04 12:33:50 managed-node2 systemd[1]: libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has successfully entered the 'dead' state.\nOct 04 12:33:50 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered disabled state\nOct 04 12:33:50 managed-node2 kernel: device vethca854251 left promiscuous mode\nOct 04 12:33:50 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered disabled state\nOct 04 12:33:50 managed-node2 systemd[1]: run-netns-netns\\x2d04fac8f5\\x2d669a\\x2d2b56\\x2d8dc1\\x2d2c27fe482b75.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit 
run-netns-netns\\x2d04fac8f5\\x2d669a\\x2d2b56\\x2d8dc1\\x2d2c27fe482b75.mount has successfully entered the 'dead' state.\nOct 04 12:33:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca-userdata-shm.mount has successfully entered the 'dead' state.\nOct 04 12:33:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay-bf4340b80dd987c0b14d9ab53281fd43797b6665f7cf0be1b6e809f99681d28d-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-bf4340b80dd987c0b14d9ab53281fd43797b6665f7cf0be1b6e809f99681d28d-merged.mount has successfully entered the 'dead' state.\nOct 04 12:33:51 managed-node2 systemd[1]: libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has successfully entered the 'dead' state.\nOct 04 12:33:51 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice.\n-- Subject: Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished shutting down.\nOct 04 12:33:51 managed-node2 systemd[1]: machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice: Consumed 194ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice completed and consumed the indicated resources.\nOct 04 12:33:51 managed-node2 podman[31711]: Pods stopped:\nOct 04 12:33:51 managed-node2 podman[31711]: 668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3\nOct 04 12:33:51 managed-node2 podman[31711]: Pods removed:\nOct 04 12:33:51 managed-node2 podman[31711]: 668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3\nOct 04 12:33:51 managed-node2 podman[31711]: Secrets removed:\nOct 04 12:33:51 managed-node2 podman[31711]: Volumes removed:\nOct 04 12:33:51 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice.\n-- Subject: Unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:51 managed-node2 systemd[1]: Started libcontainer container 33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.\n-- Subject: Unit 
libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:51 managed-node2 NetworkManager[660]: <info>  [1759595631.3224] manager: (vethe1bf25d0): new Veth device (/org/freedesktop/NetworkManager/Devices/7)\nOct 04 12:33:51 managed-node2 systemd-udevd[31876]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nOct 04 12:33:51 managed-node2 systemd-udevd[31876]: Could not generate persistent MAC address for vethe1bf25d0: No such file or directory\nOct 04 12:33:51 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethe1bf25d0: link is not ready\nOct 04 12:33:51 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered blocking state\nOct 04 12:33:51 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered disabled state\nOct 04 12:33:51 managed-node2 kernel: device vethe1bf25d0 entered promiscuous mode\nOct 04 12:33:51 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered blocking state\nOct 04 12:33:51 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered forwarding state\nOct 04 12:33:51 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe1bf25d0: link becomes ready\nOct 04 12:33:51 managed-node2 NetworkManager[660]: <info>  [1759595631.3521] device (vethe1bf25d0): carrier: link connected\nOct 04 12:33:51 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses\nOct 04 12:33:51 managed-node2 systemd[1]: Started libcontainer container 8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.\n-- Subject: Unit libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:51 managed-node2 systemd[1]: Started libcontainer container d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.\n-- Subject: Unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:51 managed-node2 podman[31711]: Pod:\nOct 04 12:33:51 managed-node2 podman[31711]: 6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c\nOct 04 12:33:51 managed-node2 podman[31711]: Container:\nOct 04 12:33:51 managed-node2 podman[31711]: d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a\nOct 04 12:33:51 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:52 managed-node2 sudo[32110]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo 
BECOME-SUCCESS-lwrjtqrmhbkjdxpmvsixtkgxksntzspm ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595632.0315228-17132-238240516154633/AnsiballZ_command.py'\nOct 04 12:33:52 managed-node2 sudo[32110]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:33:52 managed-node2 platform-python[32113]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:52 managed-node2 systemd[25493]: Started podman-32122.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:52 managed-node2 sudo[32110]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:33:52 managed-node2 platform-python[32260]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:53 managed-node2 platform-python[32391]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:53 managed-node2 sudo[32521]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpttuiaoavnpntacugmnwvpgddxwnpay ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595633.3541243-17183-26845545367359/AnsiballZ_command.py'\nOct 04 12:33:53 managed-node2 sudo[32521]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:33:53 managed-node2 platform-python[32524]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:53 managed-node2 sudo[32521]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:33:53 managed-node2 platform-python[32650]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:54 managed-node2 platform-python[32776]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:54 managed-node2 platform-python[32902]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 
12:33:55 managed-node2 platform-python[33027]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:55 managed-node2 rsyslogd[1019]: imjournal: journal files changed, reloading... [v8.2102.0-15.el8 try https://www.rsyslog.com/e/0 ]\nOct 04 12:33:55 managed-node2 platform-python[33152]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_herecvo4_podman/httpd1-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:56 managed-node2 platform-python[33276]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_herecvo4_podman/httpd2-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:56 managed-node2 platform-python[33400]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_herecvo4_podman/httpd3-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:59 managed-node2 platform-python[33649]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:00 managed-node2 platform-python[33778]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:03 managed-node2 platform-python[33903]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:34:06 managed-node2 platform-python[34026]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nOct 04 12:34:06 managed-node2 platform-python[34153]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nOct 04 12:34:07 managed-node2 platform-python[34280]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] 
helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:34:09 managed-node2 platform-python[34403]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:34:11 managed-node2 platform-python[34526]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:34:14 managed-node2 platform-python[34649]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:34:16 managed-node2 platform-python[34772]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nOct 04 12:34:18 managed-node2 platform-python[34933]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nOct 04 12:34:19 managed-node2 platform-python[35056]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nOct 04 12:34:23 managed-node2 platform-python[35179]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nOct 04 12:34:24 managed-node2 platform-python[35303]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:24 managed-node2 platform-python[35428]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:24 managed-node2 platform-python[35552]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:25 managed-node2 platform-python[35676]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:26 
managed-node2 platform-python[35800]: ansible-command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nOct 04 12:34:27 managed-node2 platform-python[35923]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:27 managed-node2 platform-python[36046]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:27 managed-node2 sudo[36169]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajbwxxttcmtyfwkqugxtiqxlyfylyxbp ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595667.7794547-18775-44408969327564/AnsiballZ_podman_image.py'\nOct 04 12:34:27 managed-node2 sudo[36169]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36174.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36182.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36189.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36199.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36207.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36215.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 
12:34:28 managed-node2 systemd[25493]: Started podman-36223.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:28 managed-node2 sudo[36169]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:34:29 managed-node2 platform-python[36352]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:29 managed-node2 platform-python[36477]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:30 managed-node2 platform-python[36600]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nOct 04 12:34:30 managed-node2 platform-python[36664]: ansible-file Invoked with owner=podman_basic_user group=3001 mode=0644 dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _original_basename=tmp4tbrh702 recurse=False state=file path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:30 managed-node2 sudo[36787]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-recvjrutxodvbgmoimxrlbeojjenzstr ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595670.5228426-19044-7471957653983/AnsiballZ_podman_play.py'\nOct 04 12:34:30 managed-node2 sudo[36787]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:34:30 managed-node2 systemd[25493]: Started podman-36798.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:30 
managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nOct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: \nOct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2025-10-04T12:34:30-04:00\" level=info msg=\"/bin/podman filtering at log level debug\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2025-10-04T12:34:30-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Cached value indicated that metacopy is not being used\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid 
argument\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-10-04T12:34:30-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:33:27.193743577 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05)\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Got pod cgroup as /libpod_parent/34492a3900bc4a9b7b06bf0f56b147105736e26abab87e6881cbea1b0e369c1d\"\n Error: adding pod to state: name \"httpd1\" is in use: pod already exists\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Shutting down engines\"\nOct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125\nOct 04 12:34:30 managed-node2 sudo[36787]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:34:31 managed-node2 platform-python[36952]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:34:32 managed-node2 platform-python[37076]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:33 managed-node2 platform-python[37201]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:34 managed-node2 platform-python[37325]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2-create state=directory 
owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:34 managed-node2 platform-python[37448]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:36 managed-node2 platform-python[37743]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:36 managed-node2 platform-python[37868]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:37 managed-node2 platform-python[37991]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nOct 04 12:34:37 managed-node2 platform-python[38055]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd2.yml _original_basename=tmpeaiobce5 recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd2.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:34:37 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice.\n-- Subject: Unit machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice has finished start-up\n-- Defined-By: systemd\n-- Support: 
https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml\nOct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: \nOct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2025-10-04T12:34:37-04:00\" level=info msg=\"/usr/bin/podman filtering at log level debug\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2025-10-04T12:34:37-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\n time=\"2025-10-04T12:34:37-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Configured OCI 
runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-10-04T12:34:37-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:31:14.473584587 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751)\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice for parent machine.slice and name libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice\"\n Error: adding pod to state: name \"httpd2\" is in use: pod already exists\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Shutting down engines\"\nOct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125\nOct 04 12:34:39 managed-node2 platform-python[38339]: ansible-stat 
Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:40 managed-node2 platform-python[38464]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:41 managed-node2 platform-python[38588]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:41 managed-node2 platform-python[38711]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:43 managed-node2 platform-python[39006]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:43 managed-node2 platform-python[39131]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:43 managed-node2 platform-python[39254]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nOct 04 12:34:44 managed-node2 platform-python[39318]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd3.yml _original_basename=tmps2by7p7f recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd3.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:44 managed-node2 platform-python[39441]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None 
log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:34:44 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_989871f381fec1f937d3af216da739576d2f9779a5e90cbfaa850502488d38c5.slice.\n-- Subject: Unit machine-libpod_pod_989871f381fec1f937d3af216da739576d2f9779a5e90cbfaa850502488d38c5.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_989871f381fec1f937d3af216da739576d2f9779a5e90cbfaa850502488d38c5.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:45 managed-node2 sudo[39603]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjcjbadzeevkrtchrfausielavpgqkug ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595685.2238288-19784-175751449856146/AnsiballZ_command.py'\nOct 04 12:34:45 managed-node2 sudo[39603]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:34:45 managed-node2 platform-python[39606]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:45 managed-node2 systemd[25493]: Started podman-39616.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:45 managed-node2 sudo[39603]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:34:45 managed-node2 platform-python[39746]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:46 managed-node2 platform-python[39877]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:46 managed-node2 sudo[40008]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jubjcpvqkodstxhlsbjwhddazysxbggp ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595686.427822-19841-159760384353159/AnsiballZ_command.py'\nOct 04 12:34:46 managed-node2 sudo[40008]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:34:46 managed-node2 platform-python[40011]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:46 managed-node2 sudo[40008]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:34:47 managed-node2 platform-python[40137]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:47 managed-node2 platform-python[40263]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True 
stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:47 managed-node2 platform-python[40389]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:48 managed-node2 platform-python[40513]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:48 managed-node2 platform-python[40637]: ansible-uri Invoked with url=http://localhost:15003/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:51 managed-node2 platform-python[40886]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:52 managed-node2 platform-python[41015]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:55 managed-node2 platform-python[41140]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nOct 04 12:34:56 managed-node2 platform-python[41264]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:56 managed-node2 platform-python[41389]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:57 managed-node2 platform-python[41513]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True 
strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:58 managed-node2 platform-python[41637]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:58 managed-node2 platform-python[41761]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:58 managed-node2 sudo[41886]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzgxgxnqwlywntbcwmbxfvzqsvbvyldz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595698.7728405-20488-3389549821227/AnsiballZ_systemd.py'\nOct 04 12:34:58 managed-node2 sudo[41886]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:34:59 managed-node2 platform-python[41889]: ansible-systemd Invoked with name= scope=user state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nOct 04 12:34:59 managed-node2 systemd[25493]: Reloading.\nOct 04 12:34:59 managed-node2 systemd[25493]: Stopping A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit UNIT has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun shutting down.\nOct 04 12:34:59 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered disabled state\nOct 04 12:34:59 managed-node2 kernel: device veth938ef76c left promiscuous mode\nOct 04 12:34:59 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered disabled state\nOct 04 12:34:59 managed-node2 podman[42042]: time=\"2025-10-04T12:34:59-04:00\" level=error msg=\"container not running\"\nOct 04 12:34:59 managed-node2 podman[41905]: Pods stopped:\nOct 04 12:34:59 managed-node2 podman[41905]: e2136086e2f9ee2160cfb4def2ccdfca084fdbe4f7fe9cb8a2664a8c62dbb7fb\nOct 04 12:34:59 managed-node2 podman[41905]: Pods removed:\nOct 04 12:34:59 managed-node2 podman[41905]: e2136086e2f9ee2160cfb4def2ccdfca084fdbe4f7fe9cb8a2664a8c62dbb7fb\nOct 04 12:34:59 managed-node2 podman[41905]: Secrets removed:\nOct 04 12:34:59 managed-node2 podman[41905]: Volumes removed:\nOct 04 12:34:59 managed-node2 systemd[25493]: Stopped A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:34:59 managed-node2 sudo[41886]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:35:00 managed-node2 platform-python[42189]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:00 managed-node2 sudo[42314]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrcjhxluknhgauyehthbegbfnykwuipf ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595700.2224684-20562-98442846681196/AnsiballZ_podman_play.py'\nOct 04 12:35:00 managed-node2 sudo[42314]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:35:00 managed-node2 
platform-python[42317]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nOct 04 12:35:00 managed-node2 systemd[25493]: Started podman-42325.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman kube play --down /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nOct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped:\n Pods removed:\n Secrets removed:\n Volumes removed:\nOct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: \nOct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nOct 04 12:35:00 managed-node2 sudo[42314]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:35:00 managed-node2 platform-python[42454]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:02 managed-node2 platform-python[42577]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:35:02 managed-node2 platform-python[42701]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:04 managed-node2 platform-python[42826]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:04 managed-node2 platform-python[42950]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nOct 04 12:35:04 managed-node2 systemd[1]: Reloading.\nOct 04 12:35:04 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun shutting down\n-- Defined-By: systemd\n-- Support: 
https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun shutting down.\nOct 04 12:35:04 managed-node2 systemd[1]: libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope has successfully entered the 'dead' state.\nOct 04 12:35:04 managed-node2 systemd[1]: libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope: Consumed 34ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope completed and consumed the indicated resources.\nOct 04 12:35:04 managed-node2 systemd[1]: libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope has successfully entered the 'dead' state.\nOct 04 12:35:04 managed-node2 systemd[1]: libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope: Consumed 33ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope completed and consumed the indicated resources.\nOct 04 12:35:04 managed-node2 systemd[1]: var-lib-containers-storage-overlay-e31deca989ff1b9cab2066515ef70e9fb506731c52bd3eea5fcc524723f3fd95-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-e31deca989ff1b9cab2066515ef70e9fb506731c52bd3eea5fcc524723f3fd95-merged.mount has successfully entered the 'dead' state.\nOct 04 12:35:05 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nOct 04 12:35:05 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state\nOct 04 12:35:05 managed-node2 kernel: device veth44fc3814 left promiscuous mode\nOct 04 12:35:05 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state\nOct 04 12:35:05 managed-node2 systemd[1]: run-netns-netns\\x2d1f7b53eb\\x2d816f\\x2d29e7\\x2dfe7f\\x2d6eb0cf8f8502.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-netns-netns\\x2d1f7b53eb\\x2d816f\\x2d29e7\\x2dfe7f\\x2d6eb0cf8f8502.mount has successfully entered the 'dead' state.\nOct 04 12:35:05 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782-userdata-shm.mount has successfully entered the 'dead' state.\nOct 04 12:35:05 managed-node2 systemd[1]: 
var-lib-containers-storage-overlay-db61c157c6f5df8b7c955b07912536ab089af634fcd5e6ba324c673952746b22-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-db61c157c6f5df8b7c955b07912536ab089af634fcd5e6ba324c673952746b22-merged.mount has successfully entered the 'dead' state.\nOct 04 12:35:05 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice.\n-- Subject: Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished shutting down.\nOct 04 12:35:05 managed-node2 systemd[1]: machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice: Consumed 67ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice completed and consumed the indicated resources.\nOct 04 12:35:05 managed-node2 podman[42986]: Pods stopped:\nOct 04 12:35:05 managed-node2 podman[42986]: 981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a\nOct 04 12:35:05 managed-node2 podman[42986]: Pods removed:\nOct 04 12:35:05 managed-node2 podman[42986]: 981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a\nOct 04 12:35:05 managed-node2 podman[42986]: Secrets removed:\nOct 04 12:35:05 managed-node2 podman[42986]: Volumes removed:\nOct 04 12:35:05 managed-node2 systemd[1]: libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope has successfully entered the 'dead' state.\nOct 04 12:35:05 managed-node2 systemd[1]: libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope: Consumed 32ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope completed and consumed the indicated resources.\nOct 04 12:35:05 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1-userdata-shm.mount has successfully entered the 'dead' state.\nOct 04 12:35:05 managed-node2 dnsmasq[29797]: exiting on receipt of SIGTERM\nOct 04 12:35:05 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has successfully entered the 'dead' state.\nOct 04 12:35:05 managed-node2 systemd[1]: Stopped A template for 
running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished shutting down.\nOct 04 12:35:05 managed-node2 platform-python[43261]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:05 managed-node2 systemd[1]: var-lib-containers-storage-overlay-4f2be2d0d065dff6d155c9d9c0fcaafe14b4a59e471bfad994f4271172df41f6-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-4f2be2d0d065dff6d155c9d9c0fcaafe14b4a59e471bfad994f4271172df41f6-merged.mount has successfully entered the 'dead' state.\nOct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd2.yml\nOct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman kube play --down /etc/containers/ansible-kubernetes.d/httpd2.yml\nOct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped:\n Pods removed:\n Secrets removed:\n Volumes removed:\nOct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: \nOct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nOct 04 12:35:06 managed-node2 platform-python[43522]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:08 managed-node2 platform-python[43645]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:09 managed-node2 platform-python[43770]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:10 managed-node2 platform-python[43894]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False 
daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nOct 04 12:35:10 managed-node2 systemd[1]: Reloading.\nOct 04 12:35:10 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun shutting down.\nOct 04 12:35:10 managed-node2 systemd[1]: libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 systemd[1]: libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope: Consumed 33ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope completed and consumed the indicated resources.\nOct 04 12:35:10 managed-node2 systemd[1]: libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 systemd[1]: libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope: Consumed 32ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope completed and consumed the indicated resources.\nOct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay-41fd0b8b2abd57633d991528a5058698098ba238146a9b3301e23d7fc73f3208-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-41fd0b8b2abd57633d991528a5058698098ba238146a9b3301e23d7fc73f3208-merged.mount has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered disabled state\nOct 04 12:35:10 managed-node2 kernel: device vethe1bf25d0 left promiscuous mode\nOct 04 12:35:10 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered disabled state\nOct 04 12:35:10 managed-node2 systemd[1]: run-netns-netns\\x2d027c972b\\x2d4f60\\x2dd6f9\\x2d5e22\\x2d75c001071f96.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-netns-netns\\x2d027c972b\\x2d4f60\\x2dd6f9\\x2d5e22\\x2d75c001071f96.mount has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit 
var-lib-containers-storage-overlay\\x2dcontainers-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46-userdata-shm.mount has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay-4d023b27109b3ccd34a210cfed9c7806ca6f6666b5b632942f94ae65bcc6121b-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-4d023b27109b3ccd34a210cfed9c7806ca6f6666b5b632942f94ae65bcc6121b-merged.mount has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice.\n-- Subject: Unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished shutting down.\nOct 04 12:35:10 managed-node2 systemd[1]: machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice: Consumed 65ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice completed and consumed the indicated resources.\nOct 04 12:35:10 managed-node2 systemd[1]: libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 systemd[1]: libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope: Consumed 34ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope completed and consumed the indicated resources.\nOct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12-userdata-shm.mount has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 podman[43930]: Pods stopped:\nOct 04 12:35:10 managed-node2 podman[43930]: 6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c\nOct 04 12:35:10 managed-node2 podman[43930]: Pods removed:\nOct 04 12:35:10 managed-node2 podman[43930]: 6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c\nOct 04 12:35:10 managed-node2 podman[43930]: Secrets removed:\nOct 04 12:35:10 managed-node2 podman[43930]: Volumes removed:\nOct 04 12:35:10 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit 
podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 systemd[1]: Stopped A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished shutting down.\nOct 04 12:35:11 managed-node2 platform-python[44199]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay-92a24157aa21e0a67c0987f9e002760f709781079b00f0e3baf2a1840c17ef8f-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-92a24157aa21e0a67c0987f9e002760f709781079b00f0e3baf2a1840c17ef8f-merged.mount has successfully entered the 'dead' state.\nOct 04 12:35:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:35:11 managed-node2 platform-python[44324]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:35:11 managed-node2 platform-python[44324]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd3.yml\nOct 04 12:35:12 managed-node2 platform-python[44460]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:13 managed-node2 platform-python[44583]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None\nOct 04 12:35:13 managed-node2 platform-python[44707]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:14 managed-node2 sudo[44832]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yohamxjcuwwqowlxqaokqdnnkiwnlnpj ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595713.8242128-21262-181373190609036/AnsiballZ_podman_container_info.py'\nOct 04 12:35:14 managed-node2 
sudo[44832]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:35:14 managed-node2 platform-python[44835]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None\nOct 04 12:35:14 managed-node2 systemd[25493]: Started podman-44837.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:14 managed-node2 sudo[44832]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:35:14 managed-node2 sudo[44966]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkavwbxyujjrxdsuzqtbvginlcolirvx ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595714.3794177-21283-159383776115316/AnsiballZ_command.py'\nOct 04 12:35:14 managed-node2 sudo[44966]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:35:14 managed-node2 platform-python[44969]: ansible-command Invoked with _raw_params=podman network ls -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:14 managed-node2 systemd[25493]: Started podman-44971.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:14 managed-node2 sudo[44966]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:35:14 managed-node2 sudo[45126]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbzvbfpfbqkpdwzlmyrlntccwniydmtz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595714.849909-21318-262235972613194/AnsiballZ_command.py'\nOct 04 12:35:14 managed-node2 sudo[45126]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:35:15 managed-node2 platform-python[45129]: ansible-command Invoked with _raw_params=podman secret ls -n -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:15 managed-node2 systemd[25493]: Started podman-45131.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:15 managed-node2 sudo[45126]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:35:15 managed-node2 platform-python[45261]: ansible-command Invoked with removes=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl disable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None stdin=None\nOct 04 12:35:15 managed-node2 systemd[1]: Stopping User Manager for UID 3001...\n-- Subject: Unit user@3001.service has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user@3001.service has begun shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopping podman-pause-f03acc05.scope.\n-- Subject: Unit UNIT has begun shutting down\n-- Defined-By: 
systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Default.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopping D-Bus User Message Bus...\n-- Subject: Unit UNIT has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Removed slice podman\\x2dkube.slice.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped D-Bus User Message Bus.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Basic System.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Timers.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped Mark boot as successful after the user session has run 2 minutes.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Paths.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Sockets.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Closed D-Bus User Message Bus Socket.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped podman-pause-f03acc05.scope.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Removed slice user.slice.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Reached target Shutdown.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:15 managed-node2 systemd[25493]: Started Exit the Session.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT 
has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:15 managed-node2 systemd[25493]: Reached target Exit the Session.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:15 managed-node2 systemd[1]: user@3001.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit user@3001.service has successfully entered the 'dead' state.\nOct 04 12:35:15 managed-node2 systemd[1]: Stopped User Manager for UID 3001.\n-- Subject: Unit user@3001.service has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user@3001.service has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[1]: Stopping User runtime directory /run/user/3001...\n-- Subject: Unit user-runtime-dir@3001.service has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has begun shutting down.\nOct 04 12:35:15 managed-node2 systemd[1]: run-user-3001.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-user-3001.mount has successfully entered the 'dead' state.\nOct 04 12:35:15 managed-node2 systemd[1]: user-runtime-dir@3001.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit user-runtime-dir@3001.service has successfully entered the 'dead' state.\nOct 04 12:35:15 managed-node2 systemd[1]: Stopped User runtime directory /run/user/3001.\n-- Subject: Unit user-runtime-dir@3001.service has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[1]: Removed slice User Slice of UID 3001.\n-- Subject: Unit user-3001.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-3001.slice has finished shutting down.\nOct 04 12:35:15 managed-node2 platform-python[45395]: ansible-command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:16 managed-node2 sudo[45519]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtaohsaclvtebxmwbqjbotdzsbchavvn ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595716.4392693-21370-112299601135578/AnsiballZ_command.py'\nOct 04 12:35:16 managed-node2 sudo[45519]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:35:16 managed-node2 platform-python[45522]: ansible-command Invoked with _raw_params=podman pod exists httpd1 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:16 managed-node2 sudo[45519]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:35:17 managed-node2 platform-python[45652]: ansible-command Invoked with _raw_params=podman pod exists httpd2 warn=True _uses_shell=False stdin_add_newline=True 
strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:17 managed-node2 platform-python[45782]: ansible-command Invoked with _raw_params=podman pod exists httpd3 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:17 managed-node2 sudo[45913]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swmaujacgizsrobtsjzpmjnnqwleohar ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595717.6763232-21443-47795602630278/AnsiballZ_command.py'\nOct 04 12:35:17 managed-node2 sudo[45913]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:35:17 managed-node2 platform-python[45916]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:17 managed-node2 sudo[45913]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:35:18 managed-node2 platform-python[46042]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:18 managed-node2 platform-python[46168]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:19 managed-node2 platform-python[46294]: ansible-stat Invoked with path=/var/lib/systemd/linger/podman_basic_user follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:21 managed-node2 platform-python[46542]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:22 managed-node2 platform-python[46671]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:35:23 managed-node2 platform-python[46795]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:26 managed-node2 platform-python[46920]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nOct 04 12:35:26 managed-node2 platform-python[47044]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:27 managed-node2 platform-python[47169]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:27 managed-node2 platform-python[47293]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:28 managed-node2 platform-python[47417]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True 
strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:29 managed-node2 platform-python[47541]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:29 managed-node2 platform-python[47664]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:30 managed-node2 platform-python[47787]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:31 managed-node2 platform-python[47910]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:35:31 managed-node2 platform-python[48034]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:33 managed-node2 platform-python[48159]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:33 managed-node2 platform-python[48283]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nOct 04 12:35:34 managed-node2 platform-python[48410]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:34 managed-node2 platform-python[48533]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:35 managed-node2 platform-python[48656]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:37 managed-node2 platform-python[48781]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:37 managed-node2 platform-python[48905]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nOct 04 12:35:38 managed-node2 platform-python[49032]: ansible-stat Invoked with 
path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:38 managed-node2 platform-python[49155]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:39 managed-node2 platform-python[49278]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None\nOct 04 12:35:40 managed-node2 platform-python[49402]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:41 managed-node2 platform-python[49525]: ansible-file Invoked with path=/etc/containers/storage.conf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:41 managed-node2 platform-python[49648]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:42 managed-node2 sshd[49669]: Accepted publickey for root from 10.31.11.222 port 49618 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nOct 04 12:35:42 managed-node2 systemd[1]: Started Session 9 of user root.\n-- Subject: Unit session-9.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit session-9.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:42 managed-node2 systemd-logind[598]: New session 9 of user root.\n-- Subject: A new session 9 has been created for user root\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A new session with the ID 9 has been created for the user root.\n-- \n-- The leading process of the session is 49669.\nOct 04 12:35:42 managed-node2 sshd[49669]: pam_unix(sshd:session): session opened for user root by (uid=0)\nOct 04 12:35:42 managed-node2 sshd[49672]: Received disconnect from 10.31.11.222 port 49618:11: disconnected by user\nOct 04 12:35:42 managed-node2 sshd[49672]: Disconnected from user root 10.31.11.222 port 49618\nOct 04 12:35:42 managed-node2 sshd[49669]: pam_unix(sshd:session): session closed for user root\nOct 04 12:35:42 managed-node2 systemd[1]: session-9.scope: Succeeded.\n-- 
Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit session-9.scope has successfully entered the 'dead' state.\nOct 04 12:35:42 managed-node2 systemd-logind[598]: Session 9 logged out. Waiting for processes to exit.\nOct 04 12:35:42 managed-node2 systemd-logind[598]: Removed session 9.\n-- Subject: Session 9 has been terminated\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A session with the ID 9 has been terminated.\nOct 04 12:35:44 managed-node2 platform-python[49834]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d\nOct 04 12:35:44 managed-node2 platform-python[49961]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:45 managed-node2 platform-python[50084]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:47 managed-node2 platform-python[50332]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:48 managed-node2 platform-python[50461]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:35:48 managed-node2 platform-python[50585]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:50 managed-node2 sshd[50608]: Accepted publickey for root from 10.31.11.222 port 49628 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nOct 04 12:35:50 managed-node2 systemd[1]: Started Session 10 of user root.\n-- Subject: Unit session-10.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit session-10.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:50 managed-node2 systemd-logind[598]: New session 10 of user root.\n-- Subject: A new session 10 has been created for user root\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A new session with the ID 10 has been created for the user root.\n-- \n-- The leading process of the session is 50608.\nOct 04 12:35:50 managed-node2 sshd[50608]: pam_unix(sshd:session): session opened for user root by (uid=0)\nOct 04 12:35:50 managed-node2 sshd[50611]: Received disconnect from 10.31.11.222 port 49628:11: disconnected by user\nOct 04 12:35:50 managed-node2 sshd[50611]: Disconnected from user root 10.31.11.222 port 49628\nOct 04 12:35:50 managed-node2 sshd[50608]: pam_unix(sshd:session): session closed for user root\nOct 04 12:35:50 managed-node2 systemd[1]: session-10.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit session-10.scope has successfully entered the 'dead' state.\nOct 04 12:35:50 managed-node2 systemd-logind[598]: Session 10 logged out. 
Waiting for processes to exit.\nOct 04 12:35:50 managed-node2 systemd-logind[598]: Removed session 10.\n-- Subject: Session 10 has been terminated\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A session with the ID 10 has been terminated.\nOct 04 12:35:51 managed-node2 platform-python[50773]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d\nOct 04 12:35:54 managed-node2 platform-python[50925]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:55 managed-node2 platform-python[51048]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:57 managed-node2 platform-python[51296]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:58 managed-node2 platform-python[51425]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:35:58 managed-node2 platform-python[51549]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:02 managed-node2 sshd[51572]: Accepted publickey for root from 10.31.11.222 port 39572 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nOct 04 12:36:02 managed-node2 systemd[1]: Started Session 11 of user root.\n-- Subject: Unit session-11.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit session-11.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:36:02 managed-node2 systemd-logind[598]: New session 11 of user root.\n-- Subject: A new session 11 has been created for user root\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A new session with the ID 11 has been created for the user root.\n-- \n-- The leading process of the session is 51572.\nOct 04 12:36:02 managed-node2 sshd[51572]: pam_unix(sshd:session): session opened for user root by (uid=0)\nOct 04 12:36:02 managed-node2 sshd[51575]: Received disconnect from 10.31.11.222 port 39572:11: disconnected by user\nOct 04 12:36:02 managed-node2 sshd[51575]: Disconnected from user root 10.31.11.222 port 39572\nOct 04 12:36:02 managed-node2 sshd[51572]: pam_unix(sshd:session): session closed for user root\nOct 04 12:36:02 managed-node2 systemd[1]: session-11.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit session-11.scope has successfully entered the 'dead' state.\nOct 04 12:36:02 managed-node2 systemd-logind[598]: Session 11 logged out. 
Waiting for processes to exit.\nOct 04 12:36:02 managed-node2 systemd-logind[598]: Removed session 11.\n-- Subject: Session 11 has been terminated\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A session with the ID 11 has been terminated.\nOct 04 12:36:04 managed-node2 platform-python[51737]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d\nOct 04 12:36:04 managed-node2 platform-python[51889]: ansible-user Invoked with name=lsr_multiple_user1 state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on managed-node2 update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None\nOct 04 12:36:04 managed-node2 useradd[51893]: new group: name=lsr_multiple_user1, GID=3002\nOct 04 12:36:04 managed-node2 useradd[51893]: new user: name=lsr_multiple_user1, UID=3002, GID=3002, home=/home/lsr_multiple_user1, shell=/bin/bash\nOct 04 12:36:05 managed-node2 platform-python[52021]: ansible-user Invoked with name=lsr_multiple_user2 state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on managed-node2 update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None\nOct 04 12:36:05 managed-node2 useradd[52025]: new group: name=lsr_multiple_user2, GID=3003\nOct 04 12:36:05 managed-node2 useradd[52025]: new user: name=lsr_multiple_user2, UID=3003, GID=3003, home=/home/lsr_multiple_user2, shell=/bin/bash\nOct 04 12:36:06 managed-node2 platform-python[52153]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:06 managed-node2 platform-python[52276]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:09 managed-node2 platform-python[52524]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:09 managed-node2 sshd[52551]: Accepted publickey for root from 10.31.11.222 port 39576 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nOct 04 12:36:09 managed-node2 systemd[1]: Started Session 12 of user root.\n-- Subject: Unit session-12.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit session-12.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:36:09 managed-node2 systemd-logind[598]: New session 12 of user root.\n-- Subject: A new session 12 has been 
created for user root\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A new session with the ID 12 has been created for the user root.\n-- \n-- The leading process of the session is 52551.\nOct 04 12:36:09 managed-node2 sshd[52551]: pam_unix(sshd:session): session opened for user root by (uid=0)\nOct 04 12:36:09 managed-node2 sshd[52554]: Received disconnect from 10.31.11.222 port 39576:11: disconnected by user\nOct 04 12:36:09 managed-node2 sshd[52554]: Disconnected from user root 10.31.11.222 port 39576\nOct 04 12:36:09 managed-node2 sshd[52551]: pam_unix(sshd:session): session closed for user root\nOct 04 12:36:09 managed-node2 systemd[1]: session-12.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit session-12.scope has successfully entered the 'dead' state.\nOct 04 12:36:09 managed-node2 systemd-logind[598]: Session 12 logged out. Waiting for processes to exit.\nOct 04 12:36:09 managed-node2 systemd-logind[598]: Removed session 12.\n-- Subject: Session 12 has been terminated\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A session with the ID 12 has been terminated.\nOct 04 12:36:11 managed-node2 platform-python[52716]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d\nOct 04 12:36:12 managed-node2 platform-python[52868]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:12 managed-node2 platform-python[52991]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:13 managed-node2 platform-python[53115]: ansible-dnf Invoked with name=['python3-pyasn1', 'python3-cryptography', 'python3-dbus'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:36:15 managed-node2 chronyd[603]: Source 74.208.25.46 replaced with 163.123.152.14 (2.centos.pool.ntp.org)\nOct 04 12:36:16 managed-node2 platform-python[53244]: ansible-dnf Invoked with name=['certmonger'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nOct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nOct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nOct 04 12:36:18 managed-node2 dbus-daemon[595]: 
[system] Reloaded configuration\nOct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nOct 04 12:36:18 managed-node2 systemd[1]: Reloading.\nOct 04 12:36:18 managed-node2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.\n-- Subject: Unit run-ra349d219a6fb4468acd54152311c9c85.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit run-ra349d219a6fb4468acd54152311c9c85.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:36:18 managed-node2 systemd[1]: Starting man-db-cache-update.service...\n-- Subject: Unit man-db-cache-update.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit man-db-cache-update.service has begun starting up.\nOct 04 12:36:19 managed-node2 systemd[1]: Reloading.\nOct 04 12:36:19 managed-node2 systemd[1]: man-db-cache-update.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit man-db-cache-update.service has successfully entered the 'dead' state.\nOct 04 12:36:19 managed-node2 systemd[1]: Started man-db-cache-update.service.\n-- Subject: Unit man-db-cache-update.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit man-db-cache-update.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:36:19 managed-node2 systemd[1]: run-ra349d219a6fb4468acd54152311c9c85.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-ra349d219a6fb4468acd54152311c9c85.service has successfully entered the 'dead' state.\nOct 04 12:36:20 managed-node2 platform-python[53877]: ansible-file Invoked with name=/etc/certmonger//pre-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//pre-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:36:20 managed-node2 platform-python[54000]: ansible-file Invoked with name=/etc/certmonger//post-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//post-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:36:21 managed-node2 platform-python[54123]: ansible-systemd Invoked with name=certmonger state=started enabled=True daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nOct 04 12:36:21 managed-node2 systemd[1]: Reloading.\nOct 04 12:36:21 managed-node2 systemd[1]: Starting Certificate monitoring and PKI enrollment...\n-- Subject: Unit certmonger.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit certmonger.service has begun starting up.\nOct 04 12:36:21 managed-node2 
systemd[1]: Started Certificate monitoring and PKI enrollment.\n-- Subject: Unit certmonger.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit certmonger.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:36:22 managed-node2 platform-python[54316]: ansible-fedora.linux_system_roles.certificate_request Invoked with name=quadlet_demo dns=['localhost'] directory=/etc/pki/tls wait=True ca=self-sign __header=#\n # Ansible managed\n #\n # system_role:certificate\n booted=True provider_config_directory=/etc/certmonger provider=certmonger key_usage=['digitalSignature', 'keyEncipherment'] extended_key_usage=['id-kp-serverAuth', 'id-kp-clientAuth'] auto_renew=True ip=None email=None common_name=None country=None state=None locality=None organization=None organizational_unit=None contact_email=None key_size=None owner=None group=None mode=None principal=None run_before=None run_after=None\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54332]: Certificate in file \"/etc/pki/tls/certs/quadlet_demo.crt\" issued by CA and saved.\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 platform-python[54454]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt\nOct 04 12:36:23 managed-node2 platform-python[54577]: ansible-slurp Invoked with path=/etc/pki/tls/private/quadlet_demo.key src=/etc/pki/tls/private/quadlet_demo.key\nOct 04 12:36:23 managed-node2 platform-python[54700]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt\nOct 04 12:36:24 managed-node2 platform-python[54823]: ansible-command Invoked with _raw_params=getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:24 managed-node2 certmonger[54159]: 2025-10-04 12:36:24 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:24 managed-node2 platform-python[54947]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:36:24 managed-node2 platform-python[55070]: ansible-file Invoked with path=/etc/pki/tls/private/quadlet_demo.key state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:36:25 managed-node2 platform-python[55193]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent 
recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:36:25 managed-node2 platform-python[55316]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:26 managed-node2 platform-python[55439]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:28 managed-node2 platform-python[55687]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:29 managed-node2 platform-python[55816]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:36:29 managed-node2 platform-python[55940]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:31 managed-node2 platform-python[56065]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:32 managed-node2 platform-python[56188]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:32 managed-node2 platform-python[56311]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:33 managed-node2 platform-python[56435]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:36:36 managed-node2 platform-python[56558]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nOct 04 12:36:36 managed-node2 platform-python[56685]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nOct 04 12:36:37 managed-node2 platform-python[56812]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] 
firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:36:37 managed-node2 platform-python[56935]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:36:40 managed-node2 platform-python[57058]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None", "task_name": "Dump journal", "task_path": "/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:142" }, { "ansible_version": "2.9.27", "end_time": "2025-10-04T16:36:53.751654+00:00Z", "host": "managed-node2", "message": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "start_time": "2025-10-04T16:36:53.735866+00:00Z", "task_name": "Manage each secret", "task_path": "/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42" }, { "ansible_version": "2.9.27", "delta": "0:00:00.025383", "end_time": "2025-10-04 12:36:54.803541", "host": "managed-node2", "message": "No message could be found", "rc": 0, "start_time": "2025-10-04 12:36:54.778158", "stdout": "-- Logs begin at Sat 2025-10-04 12:26:12 EDT, end at Sat 2025-10-04 12:36:54 EDT. 
--\nOct 04 12:31:26 managed-node2 platform-python[16120]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nOct 04 12:31:26 managed-node2 platform-python[16247]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nOct 04 12:31:27 managed-node2 platform-python[16374]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:31:29 managed-node2 platform-python[16497]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:31:32 managed-node2 platform-python[16620]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:31:34 managed-node2 platform-python[16743]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:31:37 managed-node2 platform-python[16866]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nOct 04 12:31:38 managed-node2 platform-python[17014]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nOct 04 12:31:39 managed-node2 platform-python[17137]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nOct 04 12:31:43 managed-node2 platform-python[17260]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:31:46 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- 
Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:31:46 managed-node2 platform-python[17523]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:31:47 managed-node2 platform-python[17646]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:31:47 managed-node2 platform-python[17769]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nOct 04 12:31:47 managed-node2 platform-python[17868]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/bogus.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595507.1829739-11549-146476787522942/source _original_basename=tmpisytpwv2 follow=False checksum=f8266a972ed3be7e204d2a67883fe3a22b8dbf18 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nOct 04 12:31:48 managed-node2 platform-python[17993]: ansible-containers.podman.podman_play Invoked with state=created kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:31:48 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice.\n-- Subject: Unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:31:48 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:31:51 managed-node2 platform-python[18280]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:31:52 managed-node2 platform-python[18409]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False 
get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:31:55 managed-node2 platform-python[18534]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:31:58 managed-node2 platform-python[18657]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nOct 04 12:31:59 managed-node2 platform-python[18784]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nOct 04 12:31:59 managed-node2 platform-python[18911]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:32:01 managed-node2 platform-python[19034]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:32:04 managed-node2 platform-python[19157]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:32:06 managed-node2 platform-python[19280]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:32:09 managed-node2 platform-python[19403]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nOct 04 12:32:11 managed-node2 platform-python[19551]: ansible-fedora.linux_system_roles.local_seport 
Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nOct 04 12:32:11 managed-node2 platform-python[19674]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nOct 04 12:32:15 managed-node2 platform-python[19797]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:32:17 managed-node2 platform-python[19922]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:32:17 managed-node2 platform-python[20046]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nOct 04 12:32:18 managed-node2 platform-python[20173]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:32:18 managed-node2 platform-python[20298]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/nopull.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:32:18 managed-node2 platform-python[20298]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/nopull.yml\nOct 04 12:32:18 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice.\n-- Subject: Unit machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice has finished shutting down.\nOct 04 12:32:18 managed-node2 systemd[1]: machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice: Consumed 0 CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_ffbebf2d7836c73643cc8f798020b4514f2382aa5523b755da0180e62f98959a.slice completed and consumed the indicated resources.\nOct 04 12:32:18 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:32:19 managed-node2 platform-python[20436]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:32:19 managed-node2 platform-python[20559]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:32:22 managed-node2 platform-python[20814]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:32:23 managed-node2 platform-python[20942]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:32:27 managed-node2 platform-python[21067]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:32:29 managed-node2 platform-python[21190]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nOct 04 12:32:30 managed-node2 platform-python[21317]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nOct 04 12:32:31 managed-node2 platform-python[21444]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:32:33 managed-node2 platform-python[21567]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:32:35 managed-node2 platform-python[21690]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:32:38 
managed-node2 platform-python[21813]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:32:40 managed-node2 platform-python[21936]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nOct 04 12:32:42 managed-node2 platform-python[22084]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nOct 04 12:32:43 managed-node2 platform-python[22207]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nOct 04 12:32:47 managed-node2 platform-python[22330]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:32:48 managed-node2 platform-python[22455]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:32:49 managed-node2 platform-python[22579]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nOct 04 12:32:50 managed-node2 platform-python[22706]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:32:50 managed-node2 platform-python[22831]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:32:50 managed-node2 platform-python[22831]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/bogus.yml\nOct 04 12:32:50 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice.\n-- Subject: Unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice has finished shutting down.\nOct 04 12:32:50 managed-node2 systemd[1]: machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice: Consumed 0 CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit 
machine-libpod_pod_ef950dbfd393893c3068aed764bebec787bcb736c1254b3b7c0cde0f3c9fc02e.slice completed and consumed the indicated resources.\nOct 04 12:32:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:32:50 managed-node2 platform-python[22970]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:32:51 managed-node2 platform-python[23093]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:32:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:32:54 managed-node2 platform-python[23349]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:32:56 managed-node2 platform-python[23478]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:32:59 managed-node2 platform-python[23603]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:33:00 managed-node2 chronyd[603]: Detected falseticker 74.208.25.46 (2.centos.pool.ntp.org)\nOct 04 12:33:01 managed-node2 platform-python[23726]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nOct 04 12:33:02 managed-node2 platform-python[23853]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nOct 04 12:33:03 managed-node2 platform-python[23980]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None 
masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:33:05 managed-node2 platform-python[24103]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:33:07 managed-node2 platform-python[24226]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:33:10 managed-node2 platform-python[24349]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:33:12 managed-node2 platform-python[24472]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nOct 04 12:33:14 managed-node2 platform-python[24620]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nOct 04 12:33:15 managed-node2 platform-python[24743]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nOct 04 12:33:19 managed-node2 platform-python[24866]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nOct 04 12:33:19 managed-node2 platform-python[24990]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:33:20 managed-node2 platform-python[25115]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:20 managed-node2 platform-python[25239]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:21 managed-node2 platform-python[25363]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:22 managed-node2 platform-python[25487]: ansible-command Invoked with 
creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nOct 04 12:33:22 managed-node2 systemd[1]: Created slice User Slice of UID 3001.\n-- Subject: Unit user-3001.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-3001.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[1]: Starting User runtime directory /run/user/3001...\n-- Subject: Unit user-runtime-dir@3001.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has begun starting up.\nOct 04 12:33:22 managed-node2 systemd[1]: Started User runtime directory /run/user/3001.\n-- Subject: Unit user-runtime-dir@3001.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[1]: Starting User Manager for UID 3001...\n-- Subject: Unit user@3001.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user@3001.service has begun starting up.\nOct 04 12:33:22 managed-node2 systemd[25493]: pam_unix(systemd-user:session): session opened for user podman_basic_user by (uid=0)\nOct 04 12:33:22 managed-node2 systemd[25493]: Starting D-Bus User Message Bus Socket.\n-- Subject: Unit UNIT has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun starting up.\nOct 04 12:33:22 managed-node2 systemd[25493]: Reached target Paths.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[25493]: Started Mark boot as successful after the user session has run 2 minutes.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[25493]: Reached target Timers.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[25493]: Listening on D-Bus User Message Bus Socket.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[25493]: Reached target Sockets.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[25493]: Reached target Basic System.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[25493]: 
Reached target Default.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:22 managed-node2 systemd[25493]: Startup finished in 26ms.\n-- Subject: User manager start-up is now complete\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The user manager instance for user 3001 has been started. All services queued\n-- for starting have been started. Note that other services might still be starting\n-- up or be started at any later time.\n-- \n-- Startup of the manager took 26808 microseconds.\nOct 04 12:33:22 managed-node2 systemd[1]: Started User Manager for UID 3001.\n-- Subject: Unit user@3001.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user@3001.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:23 managed-node2 platform-python[25628]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:23 managed-node2 platform-python[25751]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:23 managed-node2 sudo[25874]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sifywbsrisccwijbkunlnmxfrflsisdd ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595603.7086673-15808-227803905840648/AnsiballZ_podman_image.py'\nOct 04 12:33:23 managed-node2 sudo[25874]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:33:24 managed-node2 systemd[25493]: Started D-Bus User Message Bus.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:24 managed-node2 systemd[25493]: Created slice user.slice.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25886.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:24 managed-node2 systemd[25493]: Started podman-pause-f03acc05.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: 
systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25902.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25917.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:24 managed-node2 systemd[25493]: Started podman-25926.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:25 managed-node2 systemd[25493]: Started podman-25933.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:25 managed-node2 systemd[25493]: Started podman-25942.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:25 managed-node2 sudo[25874]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:33:25 managed-node2 platform-python[26071]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:33:25 managed-node2 platform-python[26194]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:26 managed-node2 platform-python[26317]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nOct 04 12:33:26 managed-node2 platform-python[26416]: ansible-copy Invoked with dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml owner=podman_basic_user group=3001 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595606.1679773-15914-270785518997063/source _original_basename=tmpck_isd86 follow=False checksum=4df6e405cb1c69d6fda71fca57ba10095c6652bf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nOct 04 12:33:26 managed-node2 sudo[26541]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhtmywupgtnbvlezrrhjughqngefyblk ; 
XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595606.8491445-15948-152734101610115/AnsiballZ_podman_play.py'\nOct 04 12:33:26 managed-node2 sudo[26541]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:33:27 managed-node2 systemd[25493]: Started podman-26552.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:27 managed-node2 kernel: tun: Universal TUN/TAP device driver, 1.6\nOct 04 12:33:27 managed-node2 systemd[25493]: Started rootless-netns-edb70a77.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:27 managed-node2 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.\nOct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth2fe45075: link is not ready\nOct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered blocking state\nOct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered disabled state\nOct 04 12:33:27 managed-node2 kernel: device veth2fe45075 entered promiscuous mode\nOct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready\nOct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready\nOct 04 12:33:27 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth2fe45075: link becomes ready\nOct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered blocking state\nOct 04 12:33:27 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered forwarding state\nOct 04 12:33:27 managed-node2 dnsmasq[26740]: listening on cni-podman1(#3): 10.89.0.1\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: started, version 2.79 cachesize 150\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: using local addresses only for domain dns.podman\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: reading /etc/resolv.conf\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: using local addresses only for domain dns.podman\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.0.2.3#53\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.29.169.13#53\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.29.170.12#53\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: using nameserver 10.2.32.1#53\nOct 04 12:33:27 managed-node2 dnsmasq[26742]: read 
/run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nOct 04 12:33:27 managed-node2 conmon[26754]: conmon 978f42b0916c823a3a50 : failed to write to /proc/self/oom_score_adj: Permission denied\nOct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/14/attach}\nOct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : terminal_ctrl_fd: 14\nOct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : winsz read side: 17, winsz write side: 18\nOct 04 12:33:27 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : container PID: 26765\nOct 04 12:33:27 managed-node2 conmon[26775]: conmon 4c95f0539eb18fb7ecd6 : failed to write to /proc/self/oom_score_adj: Permission denied\nOct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}\nOct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : terminal_ctrl_fd: 13\nOct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : winsz read side: 16, winsz write side: 17\nOct 04 12:33:27 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : container PID: 26786\nOct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nOct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod:\n 4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d\n Container:\n 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\n \nOct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"/bin/podman filtering at log level debug\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Cached 
value indicated that metacopy is not being used\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Successfully loaded 1 networks\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"found free device name cni-podman1\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"found free ipv4 network subnet 10.89.0.0/24\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:33:27.193743577 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"reference \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\" does not resolve to an image ID\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"reference \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\" does not resolve to an image ID\"\n 
time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"FROM \\\"scratch\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Cached value indicated that idmapped mounts for overlay are not supported\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Check for idmapped mounts support \"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"overlay: test mount indicated that volatile is being used\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/empty,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/work,userxattr,volatile,context=\\\"system_u:object_r:container_file_t:s0:c105,c564\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Container ID: 9516e55badb1147a3bd380a45ee33bd293ab708eefc046d098c76c453fc83966\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:copy Args:[/usr/libexec/podman/catatonit /catatonit] Flags:[] Attrs:map[] Message:COPY /usr/libexec/podman/catatonit /catatonit Heredocs:[] Original:COPY /usr/libexec/podman/catatonit /catatonit}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"COPY []string(nil), imagebuilder.Copy{FromFS:false, From:\\\"\\\", Src:[]string{\\\"/usr/libexec/podman/catatonit\\\"}, Dest:\\\"/catatonit\\\", Download:false, Chown:\\\"\\\", Chmod:\\\"\\\", Checksum:\\\"\\\", Files:[]imagebuilder.File(nil)}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"added content file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:entrypoint Args:[/catatonit -P] Flags:[] Attrs:map[json:true] Message:ENTRYPOINT /catatonit -P Heredocs:[] Original:ENTRYPOINT [\\\"/catatonit\\\", \\\"-P\\\"]}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"COMMIT localhost/podman-pause:4.9.4-dev-1708535009\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"COMMIT \\\"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"committing image with reference \\\"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\" is allowed by policy\"\n time=\"2025-10-04T12:33:27-04:00\" 
level=debug msg=\"layer list: [\\\"768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb\\\"]\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"using \\\"/var/tmp/buildah340804419\\\" to hold temporary data\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Tar with options on /home/podman_basic_user/.local/share/containers/storage/overlay/768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb/diff\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"layer \\\"768c06865f3cc3730baf73cb4487bc93f4d154deeff9e2a81925b8322f4232eb\\\" size is 767488 bytes, uncompressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690, possibly-compressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"OCIv1 config = {\\\"created\\\":\\\"2025-10-04T16:33:27.33236731Z\\\",\\\"architecture\\\":\\\"amd64\\\",\\\"os\\\":\\\"linux\\\",\\\"config\\\":{\\\"Env\\\":[\\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\"],\\\"Entrypoint\\\":[\\\"/catatonit\\\",\\\"-P\\\"],\\\"Labels\\\":{\\\"io.buildah.version\\\":\\\"1.33.5\\\"}},\\\"rootfs\\\":{\\\"type\\\":\\\"layers\\\",\\\"diff_ids\\\":[\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"]},\\\"history\\\":[{\\\"created\\\":\\\"2025-10-04T16:33:27.331845264Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) COPY file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd in /catatonit \\\",\\\"empty_layer\\\":true},{\\\"created\\\":\\\"2025-10-04T16:33:27.335420758Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) ENTRYPOINT [\\\\\\\"/catatonit\\\\\\\", \\\\\\\"-P\\\\\\\"]\\\"}]}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"OCIv1 manifest = {\\\"schemaVersion\\\":2,\\\"mediaType\\\":\\\"application/vnd.oci.image.manifest.v1+json\\\",\\\"config\\\":{\\\"mediaType\\\":\\\"application/vnd.oci.image.config.v1+json\\\",\\\"digest\\\":\\\"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\",\\\"size\\\":667},\\\"layers\\\":[{\\\"mediaType\\\":\\\"application/vnd.oci.image.layer.v1.tar\\\",\\\"digest\\\":\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\",\\\"size\\\":767488}],\\\"annotations\\\":{\\\"org.opencontainers.image.base.digest\\\":\\\"\\\",\\\"org.opencontainers.image.base.name\\\":\\\"\\\"}}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Docker v2s2 config = 
{\\\"created\\\":\\\"2025-10-04T16:33:27.33236731Z\\\",\\\"container\\\":\\\"9516e55badb1147a3bd380a45ee33bd293ab708eefc046d098c76c453fc83966\\\",\\\"container_config\\\":{\\\"Hostname\\\":\\\"\\\",\\\"Domainname\\\":\\\"\\\",\\\"User\\\":\\\"\\\",\\\"AttachStdin\\\":false,\\\"AttachStdout\\\":false,\\\"AttachStderr\\\":false,\\\"Tty\\\":false,\\\"OpenStdin\\\":false,\\\"StdinOnce\\\":false,\\\"Env\\\":[\\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\"],\\\"Cmd\\\":[],\\\"Image\\\":\\\"\\\",\\\"Volumes\\\":{},\\\"WorkingDir\\\":\\\"\\\",\\\"Entrypoint\\\":[\\\"/catatonit\\\",\\\"-P\\\"],\\\"OnBuild\\\":[],\\\"Labels\\\":{\\\"io.buildah.version\\\":\\\"1.33.5\\\"}},\\\"config\\\":{\\\"Hostname\\\":\\\"\\\",\\\"Domainname\\\":\\\"\\\",\\\"User\\\":\\\"\\\",\\\"AttachStdin\\\":false,\\\"AttachStdout\\\":false,\\\"AttachStderr\\\":false,\\\"Tty\\\":false,\\\"OpenStdin\\\":false,\\\"StdinOnce\\\":false,\\\"Env\\\":[\\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\"],\\\"Cmd\\\":[],\\\"Image\\\":\\\"\\\",\\\"Volumes\\\":{},\\\"WorkingDir\\\":\\\"\\\",\\\"Entrypoint\\\":[\\\"/catatonit\\\",\\\"-P\\\"],\\\"OnBuild\\\":[],\\\"Labels\\\":{\\\"io.buildah.version\\\":\\\"1.33.5\\\"}},\\\"architecture\\\":\\\"amd64\\\",\\\"os\\\":\\\"linux\\\",\\\"rootfs\\\":{\\\"type\\\":\\\"layers\\\",\\\"diff_ids\\\":[\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"]},\\\"history\\\":[{\\\"created\\\":\\\"2025-10-04T16:33:27.331845264Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) COPY file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd in /catatonit \\\",\\\"empty_layer\\\":true},{\\\"created\\\":\\\"2025-10-04T16:33:27.335420758Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) ENTRYPOINT [\\\\\\\"/catatonit\\\\\\\", \\\\\\\"-P\\\\\\\"]\\\"}]}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Docker v2s2 manifest = {\\\"schemaVersion\\\":2,\\\"mediaType\\\":\\\"application/vnd.docker.distribution.manifest.v2+json\\\",\\\"config\\\":{\\\"mediaType\\\":\\\"application/vnd.docker.container.image.v1+json\\\",\\\"size\\\":1341,\\\"digest\\\":\\\"sha256:cc08d8f0e313f02451a20252b1d70f6f69284663aede171c80a5525e2a51ba5b\\\"},\\\"layers\\\":[{\\\"mediaType\\\":\\\"application/vnd.docker.image.rootfs.diff.tar\\\",\\\"size\\\":767488,\\\"digest\\\":\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"}]}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using SQLite blob info cache at /home/podman_basic_user/.local/share/containers/cache/blob-info-cache-v1.sqlite\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"IsRunningImageAllowed for image containers-storage:\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\" Using transport \\\"containers-storage\\\" policy section \"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\" Requirement 0: allowed\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Overall: allowed\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"start reading config\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"finished reading config\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.docker.distribution.manifest.v1+json]\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug 
msg=\"... will first try using the original manifest unmodified\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Checking if we can reuse blob sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690: general substitution = true, compression for MIME type \\\"application/vnd.oci.image.layer.v1.tar\\\" = true\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"reading layer \\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"No compression detected\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using original blob without modification\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Applying tar in /home/podman_basic_user/.local/share/containers/storage/overlay/d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690/diff\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"finished reading layer \\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"No compression detected\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Compression change for blob sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05 (\\\"application/vnd.oci.image.config.v1+json\\\") not supported\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Using original blob without modification\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"setting image creation date to 2025-10-04 16:33:27.33236731 +0000 UTC\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"created new image ID \\\"b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\" with metadata \\\"{}\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"added name \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" to image \\\"b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"printing final image id \\\"b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Got pod cgroup as /libpod_parent/4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as 
\\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Inspecting image b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Inspecting image b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Inspecting image b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"setting container name 4bfdec19f3e3-infra\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Allocated lock 1 for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created container \\\"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Container \\\"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\\\" has work directory \\\"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Container \\\"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\\\" has run directory \\\"/run/user/3001/containers/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as 
\\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Pulling image quay.io/libpod/testimage:20210610 (policy: missing)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob 
\\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"adding container to pod httpd1\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"setting container name httpd1-httpd1\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding mount /proc\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding mount /dev\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding mount /dev/pts\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding mount /dev/mqueue\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding mount /sys\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding mount /sys/fs/cgroup\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Allocated lock 2 for container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"parsed reference into 
\\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created container \\\"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Container \\\"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\\\" has work directory \\\"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Container \\\"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\\\" has run directory \\\"/run/user/3001/containers/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Strongconnecting node 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Pushed 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 onto stack\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Finishing node 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1. Popped 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 off stack\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Strongconnecting node 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Pushed 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 onto stack\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Finishing node 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8. 
Popped 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 off stack\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/3P7PWYNTG5QJZJOWQ6XDK4NETN,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/work,userxattr,context=\\\"system_u:object_r:container_file_t:s0:c285,c421\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Made network namespace at /run/user/3001/netns/netns-f5551a3d-13a6-81b5-6f62-8de155b907e3 for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Mounted container \\\"978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\\\" at \\\"/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/merged\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created root filesystem for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 at /home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/merged\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"creating rootless network namespace with name \\\"rootless-netns-d22c9f230d0691b8f418\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"slirp4netns command: /bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -r 3 --netns-type=path /run/user/3001/netns/rootless-netns-d22c9f230d0691b8f418 tap0\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"The path of /etc/resolv.conf in the mount ns is \\\"/etc/resolv.conf\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"cni result for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:e2:98:f4:5f:02:10 Sandbox:} {Name:veth2fe45075 Mac:16:b6:29:b0:6d:39 Sandbox:} {Name:eth0 Mac:2a:18:12:08:ad:32 Sandbox:/run/user/3001/netns/netns-f5551a3d-13a6-81b5-6f62-8de155b907e3}] [{Version:4 Interface:0xc000c3e028 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-10-04T12:33:27-04:00\\\" level=info msg=\\\"Starting parent driver\\\"\\n\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-10-04T12:33:27-04:00\\\" level=info msg=\\\"opaque=map[builtin.readypipepath:/run/user/3001/libpod/tmp/rootlessport4177357533/.bp-ready.pipe builtin.socketpath:/run/user/3001/libpod/tmp/rootlessport4177357533/.bp.sock]\\\"\\n\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-10-04T12:33:27-04:00\\\" level=info msg=\\\"Starting child driver in child netns (\\\\\\\"/proc/self/exe\\\\\\\" [rootlessport-child])\\\"\\n\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-10-04T12:33:27-04:00\\\" level=info msg=\\\"Waiting for initComplete\\\"\\n\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-10-04T12:33:27-04:00\\\" level=info msg=\\\"initComplete is 
closed; parent and child established the communication channel\\\"\\ntime=\\\"2025-10-04T12:33:27-04:00\\\" level=info msg=\\\"Exposing ports [{ 80 15001 1 tcp}]\\\"\\n\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-10-04T12:33:27-04:00\\\" level=info msg=Ready\\n\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"rootlessport is ready\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Workdir \\\"/\\\" resolved to host path \\\"/home/podman_basic_user/.local/share/containers/storage/overlay/f6dd3907c28a7adf3b6b9745055722e2c5bda208c5ded26d1df9eb0cad75f575/merged\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created OCI spec for container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/config.json\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Got pod cgroup as \"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 -u 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata -p /run/user/3001/containers/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/pidfile -n 4bfdec19f3e3-infra --exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/3001/containers/overlay-containers/978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1]\"\n time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir 
/sys/fs/cgroup/cpu/libpod_parent: permission denied\"\n [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied\n \n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Received: 26765\"\n time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Got Conmon PID as 26755\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 in OCI runtime\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding nameserver(s) from network status of '[\\\"10.89.0.1\\\"]'\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Adding search domain(s) from network status of '[\\\"dns.podman\\\"]'\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Starting container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1 with command [/catatonit -P]\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Started container 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/FD7XHZOTU3ZCOHOMS6WJGARUCE,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/work,userxattr,context=\\\"system_u:object_r:container_file_t:s0:c285,c421\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Mounted container \\\"4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\\\" at \\\"/home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/merged\\\"\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created root filesystem for container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 at /home/podman_basic_user/.local/share/containers/storage/overlay/a4627996b823fa44a22151d38d37a93c621853a440affc8728babd794c866d58/merged\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Workdir \\\"/var/www\\\" resolved to a volume or mount\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created OCI spec for container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/config.json\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Got pod cgroup as \"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 -u 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata -p /run/user/3001/containers/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/pidfile -n httpd1-httpd1 
--exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/3001/containers/overlay-containers/4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8]\"\n [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied\n \n time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/conmon: permission denied\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Received: 26786\"\n time=\"2025-10-04T12:33:27-04:00\" level=info msg=\"Got Conmon PID as 26776\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Created container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 in OCI runtime\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Starting container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8 with command [/bin/busybox-extras httpd -f -p 80]\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Started container 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Called kube.PersistentPostRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)\"\n time=\"2025-10-04T12:33:27-04:00\" level=debug msg=\"Shutting down engines\"\nOct 04 12:33:27 managed-node2 platform-python[26544]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nOct 04 12:33:27 managed-node2 sudo[26541]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:33:28 managed-node2 sudo[26917]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naqgeablflvziirakssvhgovxnyqlazn ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595608.143278-15965-116724055097581/AnsiballZ_systemd.py'\nOct 04 12:33:28 managed-node2 sudo[26917]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:33:28 managed-node2 platform-python[26920]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None 
user=None\nOct 04 12:33:28 managed-node2 systemd[25493]: Reloading.\nOct 04 12:33:28 managed-node2 sudo[26917]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:33:28 managed-node2 sudo[27054]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcawqykrekhxzainagfjkqbithxyjltw ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595608.6875198-15999-90547985485461/AnsiballZ_systemd.py'\nOct 04 12:33:28 managed-node2 sudo[27054]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:33:29 managed-node2 platform-python[27057]: ansible-systemd Invoked with name= scope=user enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None\nOct 04 12:33:29 managed-node2 systemd[25493]: Reloading.\nOct 04 12:33:29 managed-node2 sudo[27054]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:33:29 managed-node2 sudo[27193]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sugdwogvzmuboswwbcwdjoyqjtijmrjd ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595609.2593784-16020-169137665275210/AnsiballZ_systemd.py'\nOct 04 12:33:29 managed-node2 sudo[27193]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:33:29 managed-node2 dnsmasq[26742]: listening on cni-podman1(#3): fe80::e098:f4ff:fe5f:210%cni-podman1\nOct 04 12:33:29 managed-node2 platform-python[27196]: ansible-systemd Invoked with name= scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None\nOct 04 12:33:29 managed-node2 systemd[25493]: Created slice podman\\x2dkube.slice.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:29 managed-node2 systemd[25493]: Starting A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit UNIT has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun starting up.\nOct 04 12:33:29 managed-node2 conmon[26755]: conmon 978f42b0916c823a3a50 : container 26765 exited with status 137\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1)\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nOct 04 12:33:29 managed-node2 conmon[26776]: conmon 4c95f0539eb18fb7ecd6 : container 26786 exited with status 137\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nOct 04 12:33:29 managed-node2 
/usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=info msg=\"Using sqlite as database backend\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using graph driver overlay\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using transient store: false\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Cached value indicated that metacopy is not being used\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Initializing event backend file\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nOct 04 12:33:29 
managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=info msg=\"Setting parallel job count to 7\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8)\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=info msg=\"Using sqlite as database backend\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using graph driver overlay\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using transient store: false\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: 
time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Cached value indicated that metacopy is not being used\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Initializing event backend file\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=info msg=\"Setting parallel job count to 7\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 4c95f0539eb18fb7ecd6a5123533ac761b12e784a406789e4057197c61ab30e8)\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27236]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Shutting down engines\"\nOct 04 12:33:29 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered disabled state\nOct 04 12:33:29 managed-node2 kernel: device veth2fe45075 left promiscuous mode\nOct 04 12:33:29 managed-node2 kernel: cni-podman1: port 1(veth2fe45075) entered disabled 
state\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 978f42b0916c823a3a506ada2e96d5ffa72ad09833d6dbdd8e1931d16d09c5f1)\"\nOct 04 12:33:29 managed-node2 /usr/bin/podman[27216]: time=\"2025-10-04T12:33:29-04:00\" level=debug msg=\"Shutting down engines\"\nOct 04 12:33:29 managed-node2 podman[27202]: Pods stopped:\nOct 04 12:33:29 managed-node2 podman[27202]: 4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d\nOct 04 12:33:29 managed-node2 podman[27202]: Pods removed:\nOct 04 12:33:29 managed-node2 podman[27202]: 4bfdec19f3e3473fb85e0e592677616e52edb98ecbb7b62ea407b71d476ee63d\nOct 04 12:33:29 managed-node2 podman[27202]: Secrets removed:\nOct 04 12:33:29 managed-node2 podman[27202]: Volumes removed:\nOct 04 12:33:30 managed-node2 systemd[25493]: Started rootless-netns-d4627493.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:30 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth938ef76c: link is not ready\nOct 04 12:33:30 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered blocking state\nOct 04 12:33:30 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered disabled state\nOct 04 12:33:30 managed-node2 kernel: device veth938ef76c entered promiscuous mode\nOct 04 12:33:30 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered blocking state\nOct 04 12:33:30 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered forwarding state\nOct 04 12:33:30 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth938ef76c: link becomes ready\nOct 04 12:33:30 managed-node2 dnsmasq[27452]: listening on cni-podman1(#3): 10.89.0.1\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: started, version 2.79 cachesize 150\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: using local addresses only for domain dns.podman\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: reading /etc/resolv.conf\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: using local addresses only for domain dns.podman\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.0.2.3#53\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.29.169.13#53\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.29.170.12#53\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: using nameserver 10.2.32.1#53\nOct 04 12:33:30 managed-node2 dnsmasq[27454]: read /run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nOct 04 12:33:30 managed-node2 podman[27202]: Pod:\nOct 04 12:33:30 managed-node2 podman[27202]: e2136086e2f9ee2160cfb4def2ccdfca084fdbe4f7fe9cb8a2664a8c62dbb7fb\nOct 04 12:33:30 managed-node2 podman[27202]: Container:\nOct 04 12:33:30 managed-node2 podman[27202]: 
e74648d47617035a35842176c0cd197e876af20efb66c9a6fbb560c1ba4c6833\nOct 04 12:33:30 managed-node2 systemd[25493]: Started A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:30 managed-node2 sudo[27193]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:33:31 managed-node2 platform-python[27630]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:33:32 managed-node2 dnsmasq[27454]: listening on cni-podman1(#3): fe80::f8fb:d3ff:fe6b:28b6%cni-podman1\nOct 04 12:33:32 managed-node2 platform-python[27754]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:33:33 managed-node2 platform-python[27879]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:34 managed-node2 platform-python[28003]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:34 managed-node2 platform-python[28126]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:35 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:33:36 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:33:36 managed-node2 platform-python[28426]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:33:36 managed-node2 platform-python[28549]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None 
serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:37 managed-node2 platform-python[28672]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nOct 04 12:33:37 managed-node2 platform-python[28771]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd2.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595616.9605668-16444-215555946645887/source _original_basename=tmp7zrtpb5n follow=False checksum=65edd58cfda8e78be7cf81993b5521acb64e8edf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nOct 04 12:33:37 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:33:38 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice.\n-- Subject: Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1056] manager: (cni-podman1): new Bridge device (/org/freedesktop/NetworkManager/Devices/3)\nOct 04 12:33:38 managed-node2 systemd-udevd[28943]: Using default interface naming scheme 'rhel-8.0'.\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1123] manager: (veth58b4002b): new Veth device (/org/freedesktop/NetworkManager/Devices/4)\nOct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth58b4002b: link is not ready\nOct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered blocking state\nOct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered disabled state\nOct 04 12:33:38 managed-node2 kernel: device veth58b4002b entered promiscuous mode\nOct 04 12:33:38 managed-node2 systemd-udevd[28944]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nOct 04 12:33:38 managed-node2 systemd-udevd[28944]: Could not generate persistent MAC address for veth58b4002b: No such file or directory\nOct 04 12:33:38 managed-node2 systemd-udevd[28943]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nOct 04 12:33:38 managed-node2 systemd-udevd[28943]: Could not generate persistent MAC address for cni-podman1: No such file or directory\nOct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready\nOct 04 12:33:38 managed-node2 
NetworkManager[660]: [1759595618.1326] device (cni-podman1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1331] device (cni-podman1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1338] device (cni-podman1): Activation: starting connection 'cni-podman1' (f4b0bed9-ed1a-4daa-9776-1b7c64cb04df)\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1339] device (cni-podman1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1342] device (cni-podman1): state change: prepare -> config (reason 'none', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1344] device (cni-podman1): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1345] device (cni-podman1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready\nOct 04 12:33:38 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth58b4002b: link becomes ready\nOct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered blocking state\nOct 04 12:33:38 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered forwarding state\nOct 04 12:33:38 managed-node2 dbus-daemon[595]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.5' (uid=0 pid=660 comm=\"/usr/sbin/NetworkManager --no-daemon \" label=\"system_u:system_r:NetworkManager_t:s0\")\nOct 04 12:33:38 managed-node2 systemd[1]: Starting Network Manager Script Dispatcher Service...\n-- Subject: Unit NetworkManager-dispatcher.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit NetworkManager-dispatcher.service has begun starting up.\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1545] device (veth58b4002b): carrier: link connected\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1548] device (cni-podman1): carrier: link connected\nOct 04 12:33:38 managed-node2 dbus-daemon[595]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'\nOct 04 12:33:38 managed-node2 systemd[1]: Started Network Manager Script Dispatcher Service.\n-- Subject: Unit NetworkManager-dispatcher.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit NetworkManager-dispatcher.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1968] device (cni-podman1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1970] device (cni-podman1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')\nOct 04 12:33:38 managed-node2 NetworkManager[660]: [1759595618.1979] device (cni-podman1): Activation: successful, device activated.\nOct 04 12:33:38 managed-node2 dnsmasq[29065]: listening on cni-podman1(#3): 10.89.0.1\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: 
started, version 2.79 cachesize 150\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: using local addresses only for domain dns.podman\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: reading /etc/resolv.conf\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: using local addresses only for domain dns.podman\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: using nameserver 10.29.169.13#53\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: using nameserver 10.29.170.12#53\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: using nameserver 10.2.32.1#53\nOct 04 12:33:38 managed-node2 dnsmasq[29069]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nOct 04 12:33:38 managed-node2 systemd[1]: Started libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope.\n-- Subject: Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:38 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}\nOct 04 12:33:38 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : terminal_ctrl_fd: 13\nOct 04 12:33:38 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : winsz read side: 17, winsz write side: 18\nOct 04 12:33:38 managed-node2 systemd[1]: Started libcontainer container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.\n-- Subject: Unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:38 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : container PID: 29081\nOct 04 12:33:38 managed-node2 systemd[1]: Started libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope.\n-- Subject: Unit libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}\nOct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : terminal_ctrl_fd: 12\nOct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : winsz read side: 16, winsz write side: 17\nOct 04 12:33:38 managed-node2 systemd[1]: Started libcontainer container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.\n-- Subject: Unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has finished 
starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:38 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : container PID: 29103\nOct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml\nOct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod:\n f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a\n Container:\n b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\n \nOct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2025-10-04T12:33:37-04:00\" level=info msg=\"/usr/bin/podman filtering at log level debug\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2025-10-04T12:33:37-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\n time=\"2025-10-04T12:33:37-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI 
runtime crun-wasm: invalid argument\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-10-04T12:33:37-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:31:14.473584587 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751)\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-10-04T12:33:37-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice for parent machine.slice and name libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 
linux [] }\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751)\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Inspecting image 9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Inspecting image 9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Inspecting image 9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"setting container name f7eedbe6e6e1-infra\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Allocated lock 1 for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Cached value indicated that idmapped mounts for overlay are not supported\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Check for idmapped mounts support \"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created container \\\"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Container \\\"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\\\" has work directory \\\"/var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Container \\\"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\\\" has run directory \\\"/run/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata\\\"\"\n 
time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Pulling image quay.io/libpod/testimage:20210610 (policy: missing)\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n 
time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"adding container to pod httpd2\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"setting container name httpd2-httpd2\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=info msg=\"Sysctl net.ipv4.ping_group_range=0 0 ignored in 
containers.conf, since Network Namespace set to host\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding mount /proc\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding mount /dev\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding mount /dev/pts\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding mount /dev/mqueue\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding mount /sys\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding mount /sys/fs/cgroup\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Allocated lock 2 for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created container \\\"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Container \\\"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\\\" has work directory \\\"/var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Container \\\"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\\\" has run directory \\\"/run/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Strongconnecting node acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Pushed acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 onto stack\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Finishing node acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7. Popped acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 off stack\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Strongconnecting node b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Pushed b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a onto stack\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Finishing node b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a. 
Popped b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a off stack\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/CLVCQDNEL47VMN42Y3O6VVBSEK,upperdir=/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/diff,workdir=/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/work,nodev,metacopy=on,context=\\\"system_u:object_r:container_file_t:s0:c321,c454\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Mounted container \\\"acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\\\" at \\\"/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/merged\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created root filesystem for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 at /var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/merged\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Made network namespace at /run/netns/netns-4bb92ac6-c391-8230-0912-824e2a801d42 for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"cni result for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:92:f8:b0:67:7f:78 Sandbox:} {Name:veth58b4002b Mac:9e:e6:53:58:c5:ef Sandbox:} {Name:eth0 Mac:9a:79:68:03:db:b9 Sandbox:/run/netns/netns-4bb92ac6-c391-8230-0912-824e2a801d42}] [{Version:4 Interface:0xc0006223b8 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Setting Cgroups for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 to machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice:libpod:acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Workdir \\\"/\\\" resolved to host path \\\"/var/lib/containers/storage/overlay/93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522/merged\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created OCI spec for container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 at /var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/config.json\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice for parent machine.slice and name libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Got pod cgroup as 
machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 -u acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata -p /run/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/pidfile -n f7eedbe6e6e1-infra --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7]\"\n time=\"2025-10-04T12:33:38-04:00\" level=info msg=\"Running conmon under slice machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice and unitName libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Received: 29081\"\n time=\"2025-10-04T12:33:38-04:00\" level=info msg=\"Got Conmon PID as 29071\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 in OCI runtime\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding nameserver(s) from network status of '[\\\"10.89.0.1\\\"]'\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Adding search domain(s) from network status of '[\\\"dns.podman\\\"]'\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Starting container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7 with command [/catatonit -P]\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Started container acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"overlay: 
mount_data=lowerdir=/var/lib/containers/storage/overlay/l/LBBH4VMJZF2KPCTZG3NWOHXUKQ,upperdir=/var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/diff,workdir=/var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/work,nodev,metacopy=on,context=\\\"system_u:object_r:container_file_t:s0:c321,c454\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Mounted container \\\"b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\\\" at \\\"/var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/merged\\\"\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created root filesystem for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a at /var/lib/containers/storage/overlay/e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0/merged\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Setting Cgroups for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a to machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice:libpod:b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Workdir \\\"/var/www\\\" resolved to a volume or mount\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created OCI spec for container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a at /var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/config.json\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice for parent machine.slice and name libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a -u b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata -p /run/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/pidfile -n httpd2-httpd2 --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a/userdata/conmon.pid 
--exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a]\"\n time=\"2025-10-04T12:33:38-04:00\" level=info msg=\"Running conmon under slice machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice and unitName libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Received: 29103\"\n time=\"2025-10-04T12:33:38-04:00\" level=info msg=\"Got Conmon PID as 29092\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Created container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a in OCI runtime\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Starting container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a with command [/bin/busybox-extras httpd -f -p 80]\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Started container b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Called kube.PersistentPostRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)\"\n time=\"2025-10-04T12:33:38-04:00\" level=debug msg=\"Shutting down engines\"\nOct 04 12:33:38 managed-node2 platform-python[28896]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nOct 04 12:33:39 managed-node2 platform-python[29234]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None\nOct 04 12:33:39 managed-node2 systemd[1]: Reloading.\nOct 04 12:33:39 managed-node2 dnsmasq[29069]: listening on cni-podman1(#3): fe80::90f8:b0ff:fe67:7f78%cni-podman1\nOct 04 12:33:39 managed-node2 platform-python[29403]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None\nOct 04 12:33:39 managed-node2 systemd[1]: Reloading.\nOct 04 12:33:40 managed-node2 platform-python[29558]: ansible-systemd Invoked with name= scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None\nOct 04 12:33:40 managed-node2 systemd[1]: Created slice system-podman\\x2dkube.slice.\n-- Subject: Unit system-podman\\x2dkube.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit system-podman\\x2dkube.slice has finished 
starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:40 managed-node2 systemd[1]: Starting A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun starting up.\nOct 04 12:33:40 managed-node2 conmon[29071]: conmon acfc6789a4b1745da3ca : container 29081 exited with status 137\nOct 04 12:33:40 managed-node2 systemd[1]: libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 systemd[1]: libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope: Consumed 31ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope completed and consumed the indicated resources.\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7)\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=info msg=\"Using sqlite as database backend\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using graph driver overlay\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using transient store: false\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: 
time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Initializing event backend file\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=info msg=\"Setting parallel job count to 7\"\nOct 04 12:33:40 managed-node2 conmon[29092]: conmon b24762f2266fc468f515 : container 29103 exited with status 137\nOct 04 12:33:40 managed-node2 systemd[1]: libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: 
https://access.redhat.com/support\n-- \n-- The unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 systemd[1]: libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope: Consumed 33ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope completed and consumed the indicated resources.\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a)\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=info msg=\"Using sqlite as database backend\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using graph driver overlay\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using transient store: false\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\nOct 04 12:33:40 managed-node2 
/usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Initializing event backend file\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=info msg=\"Setting parallel job count to 7\"\nOct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay-e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-e22ddd3644751118ad7dcde0a3555d3ef78273813a6cba15ca9fe8a53a4b15e0-merged.mount has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a)\"\nOct 04 12:33:40 managed-node2 
/usr/bin/podman[29595]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Shutting down engines\"\nOct 04 12:33:40 managed-node2 systemd[1]: libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-b24762f2266fc468f515133aec34b9cecec863f184567f87d3e4f22f2730281a.scope has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered disabled state\nOct 04 12:33:40 managed-node2 kernel: device veth58b4002b left promiscuous mode\nOct 04 12:33:40 managed-node2 kernel: cni-podman1: port 1(veth58b4002b) entered disabled state\nOct 04 12:33:40 managed-node2 systemd[1]: run-netns-netns\\x2d4bb92ac6\\x2dc391\\x2d8230\\x2d0912\\x2d824e2a801d42.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-netns-netns\\x2d4bb92ac6\\x2dc391\\x2d8230\\x2d0912\\x2d824e2a801d42.mount has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7-userdata-shm.mount has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay-93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-93e53b2b8a698c0026ff62cc003a324ae01075b90c5b864966a218b6bc38c522-merged.mount has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7)\"\nOct 04 12:33:40 managed-node2 /usr/bin/podman[29580]: time=\"2025-10-04T12:33:40-04:00\" level=debug msg=\"Shutting down engines\"\nOct 04 12:33:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 systemd[1]: Stopping libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope.\n-- Subject: Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope 
has begun shutting down.\nOct 04 12:33:40 managed-node2 systemd[1]: libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has successfully entered the 'dead' state.\nOct 04 12:33:40 managed-node2 systemd[1]: Stopped libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope.\n-- Subject: Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-acfc6789a4b1745da3ca150f30bec749a1c091ff840acfe2f188faed5b1a4fc7.scope has finished shutting down.\nOct 04 12:33:40 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice.\n-- Subject: Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice has finished shutting down.\nOct 04 12:33:40 managed-node2 systemd[1]: machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice: Consumed 193ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a.slice completed and consumed the indicated resources.\nOct 04 12:33:40 managed-node2 podman[29565]: Pods stopped:\nOct 04 12:33:40 managed-node2 podman[29565]: f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a\nOct 04 12:33:40 managed-node2 podman[29565]: Pods removed:\nOct 04 12:33:40 managed-node2 podman[29565]: f7eedbe6e6e11a4f474173e1faa441f5329f6f7a44f1214ae42ce28fb402fe5a\nOct 04 12:33:40 managed-node2 podman[29565]: Secrets removed:\nOct 04 12:33:40 managed-node2 podman[29565]: Volumes removed:\nOct 04 12:33:40 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice.\n-- Subject: Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:40 managed-node2 systemd[1]: Started libcontainer container 2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.\n-- Subject: Unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:41 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth44fc3814: link is not ready\nOct 04 12:33:41 managed-node2 NetworkManager[660]: [1759595621.0690] manager: (veth44fc3814): new Veth device (/org/freedesktop/NetworkManager/Devices/5)\nOct 04 12:33:41 
managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered blocking state\nOct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state\nOct 04 12:33:41 managed-node2 kernel: device veth44fc3814 entered promiscuous mode\nOct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered blocking state\nOct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered forwarding state\nOct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state\nOct 04 12:33:41 managed-node2 systemd-udevd[29722]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nOct 04 12:33:41 managed-node2 systemd-udevd[29722]: Could not generate persistent MAC address for veth44fc3814: No such file or directory\nOct 04 12:33:41 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth44fc3814: link becomes ready\nOct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered blocking state\nOct 04 12:33:41 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered forwarding state\nOct 04 12:33:41 managed-node2 NetworkManager[660]: [1759595621.0827] device (veth44fc3814): carrier: link connected\nOct 04 12:33:41 managed-node2 NetworkManager[660]: [1759595621.0829] device (cni-podman1): carrier: link connected\nOct 04 12:33:41 managed-node2 dnsmasq[29793]: listening on cni-podman1(#3): 10.89.0.1\nOct 04 12:33:41 managed-node2 dnsmasq[29793]: listening on cni-podman1(#3): fe80::90f8:b0ff:fe67:7f78%cni-podman1\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: started, version 2.79 cachesize 150\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: using local addresses only for domain dns.podman\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: reading /etc/resolv.conf\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: using local addresses only for domain dns.podman\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: using nameserver 10.29.169.13#53\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: using nameserver 10.29.170.12#53\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: using nameserver 10.2.32.1#53\nOct 04 12:33:41 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nOct 04 12:33:41 managed-node2 systemd[1]: Started libcontainer container d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.\n-- Subject: Unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:41 managed-node2 systemd[1]: Started libcontainer container c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.\n-- Subject: Unit libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:41 managed-node2 podman[29565]: Pod:\nOct 04 12:33:41 managed-node2 podman[29565]: 
981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a\nOct 04 12:33:41 managed-node2 podman[29565]: Container:\nOct 04 12:33:41 managed-node2 podman[29565]: c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28\nOct 04 12:33:41 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:42 managed-node2 platform-python[29963]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:33:43 managed-node2 platform-python[30096]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:44 managed-node2 platform-python[30220]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:45 managed-node2 platform-python[30343]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:46 managed-node2 platform-python[30638]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:33:47 managed-node2 platform-python[30761]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:47 managed-node2 platform-python[30884]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nOct 04 12:33:47 managed-node2 platform-python[30983]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd3.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1759595627.3886073-16945-54933471056529/source _original_basename=tmpukku_qg2 
follow=False checksum=e89a97ee50e2e2344cd04b5ef33140ac4f197bf8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nOct 04 12:33:48 managed-node2 systemd[1]: NetworkManager-dispatcher.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state.\nOct 04 12:33:48 managed-node2 platform-python[31108]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:33:48 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice.\n-- Subject: Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:48 managed-node2 NetworkManager[660]: [1759595628.5733] manager: (vethca854251): new Veth device (/org/freedesktop/NetworkManager/Devices/6)\nOct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethca854251: link is not ready\nOct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered blocking state\nOct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered disabled state\nOct 04 12:33:48 managed-node2 kernel: device vethca854251 entered promiscuous mode\nOct 04 12:33:48 managed-node2 systemd-udevd[31155]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nOct 04 12:33:48 managed-node2 systemd-udevd[31155]: Could not generate persistent MAC address for vethca854251: No such file or directory\nOct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready\nOct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready\nOct 04 12:33:48 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethca854251: link becomes ready\nOct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered blocking state\nOct 04 12:33:48 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered forwarding state\nOct 04 12:33:48 managed-node2 NetworkManager[660]: [1759595628.6066] device (vethca854251): carrier: link connected\nOct 04 12:33:48 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses\nOct 04 12:33:48 managed-node2 systemd[1]: Started libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope.\n-- Subject: Unit libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished start-up\n-- Defined-By: systemd\n-- Support: 
https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:48 managed-node2 systemd[1]: Started libcontainer container 59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.\n-- Subject: Unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:48 managed-node2 systemd[1]: Started libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope.\n-- Subject: Unit libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:48 managed-node2 systemd[1]: Started libcontainer container 0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.\n-- Subject: Unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:49 managed-node2 platform-python[31388]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None\nOct 04 12:33:49 managed-node2 systemd[1]: Reloading.\nOct 04 12:33:50 managed-node2 platform-python[31549]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None\nOct 04 12:33:50 managed-node2 systemd[1]: Reloading.\nOct 04 12:33:50 managed-node2 platform-python[31704]: ansible-systemd Invoked with name= scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None\nOct 04 12:33:50 managed-node2 systemd[1]: Starting A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun starting up.\nOct 04 12:33:50 managed-node2 systemd[1]: libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has successfully entered the 'dead' state.\nOct 04 12:33:50 managed-node2 systemd[1]: libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope: Consumed 31ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope 
completed and consumed the indicated resources.\nOct 04 12:33:50 managed-node2 systemd[1]: libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has successfully entered the 'dead' state.\nOct 04 12:33:50 managed-node2 systemd[1]: libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope: Consumed 31ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope completed and consumed the indicated resources.\nOct 04 12:33:50 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nOct 04 12:33:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay-96a34ad4fa258979f69c8abe553376ab173aebc4813555f0aa72e1d24059a836-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-96a34ad4fa258979f69c8abe553376ab173aebc4813555f0aa72e1d24059a836-merged.mount has successfully entered the 'dead' state.\nOct 04 12:33:50 managed-node2 systemd[1]: libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-0a9aebf21ef59da78d892ecdfbfeafd9a02ce1dd6cecb886a4e457c1c3d1cfac.scope has successfully entered the 'dead' state.\nOct 04 12:33:50 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered disabled state\nOct 04 12:33:50 managed-node2 kernel: device vethca854251 left promiscuous mode\nOct 04 12:33:50 managed-node2 kernel: cni-podman1: port 2(vethca854251) entered disabled state\nOct 04 12:33:50 managed-node2 systemd[1]: run-netns-netns\\x2d04fac8f5\\x2d669a\\x2d2b56\\x2d8dc1\\x2d2c27fe482b75.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-netns-netns\\x2d04fac8f5\\x2d669a\\x2d2b56\\x2d8dc1\\x2d2c27fe482b75.mount has successfully entered the 'dead' state.\nOct 04 12:33:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca-userdata-shm.mount has successfully entered the 'dead' state.\nOct 04 12:33:51 managed-node2 systemd[1]: var-lib-containers-storage-overlay-bf4340b80dd987c0b14d9ab53281fd43797b6665f7cf0be1b6e809f99681d28d-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-bf4340b80dd987c0b14d9ab53281fd43797b6665f7cf0be1b6e809f99681d28d-merged.mount has successfully entered the 'dead' state.\nOct 04 12:33:51 managed-node2 systemd[1]: libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- 
Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-59e2b966496d244cf30f264876beba78a3e30c3f8007889addf807658b0912ca.scope has successfully entered the 'dead' state.\nOct 04 12:33:51 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice.\n-- Subject: Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice has finished shutting down.\nOct 04 12:33:51 managed-node2 systemd[1]: machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice: Consumed 194ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3.slice completed and consumed the indicated resources.\nOct 04 12:33:51 managed-node2 podman[31711]: Pods stopped:\nOct 04 12:33:51 managed-node2 podman[31711]: 668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3\nOct 04 12:33:51 managed-node2 podman[31711]: Pods removed:\nOct 04 12:33:51 managed-node2 podman[31711]: 668960924bd19de58e70c370f0672a18890459bbc04a315274301b645cae85d3\nOct 04 12:33:51 managed-node2 podman[31711]: Secrets removed:\nOct 04 12:33:51 managed-node2 podman[31711]: Volumes removed:\nOct 04 12:33:51 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice.\n-- Subject: Unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:51 managed-node2 systemd[1]: Started libcontainer container 33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.\n-- Subject: Unit libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:51 managed-node2 NetworkManager[660]: [1759595631.3224] manager: (vethe1bf25d0): new Veth device (/org/freedesktop/NetworkManager/Devices/7)\nOct 04 12:33:51 managed-node2 systemd-udevd[31876]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nOct 04 12:33:51 managed-node2 systemd-udevd[31876]: Could not generate persistent MAC address for vethe1bf25d0: No such file or directory\nOct 04 12:33:51 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethe1bf25d0: link is not ready\nOct 04 12:33:51 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered blocking state\nOct 04 12:33:51 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered disabled state\nOct 04 12:33:51 managed-node2 kernel: device vethe1bf25d0 entered promiscuous mode\nOct 04 12:33:51 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered blocking state\nOct 04 12:33:51 managed-node2 kernel: cni-podman1: port 
2(vethe1bf25d0) entered forwarding state\nOct 04 12:33:51 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe1bf25d0: link becomes ready\nOct 04 12:33:51 managed-node2 NetworkManager[660]: [1759595631.3521] device (vethe1bf25d0): carrier: link connected\nOct 04 12:33:51 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses\nOct 04 12:33:51 managed-node2 systemd[1]: Started libcontainer container 8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.\n-- Subject: Unit libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:51 managed-node2 systemd[1]: Started libcontainer container d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.\n-- Subject: Unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:51 managed-node2 podman[31711]: Pod:\nOct 04 12:33:51 managed-node2 podman[31711]: 6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c\nOct 04 12:33:51 managed-node2 podman[31711]: Container:\nOct 04 12:33:51 managed-node2 podman[31711]: d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a\nOct 04 12:33:51 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:52 managed-node2 sudo[32110]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwrjtqrmhbkjdxpmvsixtkgxksntzspm ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595632.0315228-17132-238240516154633/AnsiballZ_command.py'\nOct 04 12:33:52 managed-node2 sudo[32110]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:33:52 managed-node2 platform-python[32113]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:52 managed-node2 systemd[25493]: Started podman-32122.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:33:52 managed-node2 sudo[32110]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:33:52 managed-node2 platform-python[32260]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:53 managed-node2 
platform-python[32391]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:53 managed-node2 sudo[32521]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpttuiaoavnpntacugmnwvpgddxwnpay ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595633.3541243-17183-26845545367359/AnsiballZ_command.py'\nOct 04 12:33:53 managed-node2 sudo[32521]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:33:53 managed-node2 platform-python[32524]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:53 managed-node2 sudo[32521]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:33:53 managed-node2 platform-python[32650]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:54 managed-node2 platform-python[32776]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:54 managed-node2 platform-python[32902]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:55 managed-node2 platform-python[33027]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:33:55 managed-node2 rsyslogd[1019]: imjournal: journal files changed, reloading... 
[v8.2102.0-15.el8 try https://www.rsyslog.com/e/0 ]\nOct 04 12:33:55 managed-node2 platform-python[33152]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_herecvo4_podman/httpd1-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:56 managed-node2 platform-python[33276]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_herecvo4_podman/httpd2-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:56 managed-node2 platform-python[33400]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_herecvo4_podman/httpd3-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:33:59 managed-node2 platform-python[33649]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:00 managed-node2 platform-python[33778]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:03 managed-node2 platform-python[33903]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:34:06 managed-node2 platform-python[34026]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nOct 04 12:34:06 managed-node2 platform-python[34153]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nOct 04 12:34:07 managed-node2 platform-python[34280]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:34:09 managed-node2 platform-python[34403]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:34:11 managed-node2 
platform-python[34526]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:34:14 managed-node2 platform-python[34649]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:34:16 managed-node2 platform-python[34772]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nOct 04 12:34:18 managed-node2 platform-python[34933]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nOct 04 12:34:19 managed-node2 platform-python[35056]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nOct 04 12:34:23 managed-node2 platform-python[35179]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nOct 04 12:34:24 managed-node2 platform-python[35303]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:24 managed-node2 platform-python[35428]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:24 managed-node2 platform-python[35552]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:25 managed-node2 platform-python[35676]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:26 managed-node2 platform-python[35800]: ansible-command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nOct 04 12:34:27 managed-node2 platform-python[35923]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER 
backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:27 managed-node2 platform-python[36046]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:27 managed-node2 sudo[36169]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajbwxxttcmtyfwkqugxtiqxlyfylyxbp ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595667.7794547-18775-44408969327564/AnsiballZ_podman_image.py'\nOct 04 12:34:27 managed-node2 sudo[36169]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36174.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36182.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36189.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36199.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36207.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36215.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:28 managed-node2 systemd[25493]: Started podman-36223.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:28 managed-node2 sudo[36169]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:34:29 managed-node2 platform-python[36352]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:29 managed-node2 platform-python[36477]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d 
state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:30 managed-node2 platform-python[36600]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nOct 04 12:34:30 managed-node2 platform-python[36664]: ansible-file Invoked with owner=podman_basic_user group=3001 mode=0644 dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _original_basename=tmp4tbrh702 recurse=False state=file path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:30 managed-node2 sudo[36787]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-recvjrutxodvbgmoimxrlbeojjenzstr ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595670.5228426-19044-7471957653983/AnsiballZ_podman_play.py'\nOct 04 12:34:30 managed-node2 sudo[36787]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:34:30 managed-node2 systemd[25493]: Started podman-36798.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nOct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: \nOct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2025-10-04T12:34:30-04:00\" level=info msg=\"/bin/podman filtering at log level debug\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)\"\n 
time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2025-10-04T12:34:30-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Cached value indicated that metacopy is not being used\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-10-04T12:34:30-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:33:27.193743577 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug 
msg=\"Successfully loaded 2 networks\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05)\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:b990f987be4eed07dae2d2c8855477bd90130e304b5d45564b94e44df1ef7a05\\\"\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Got pod cgroup as /libpod_parent/34492a3900bc4a9b7b06bf0f56b147105736e26abab87e6881cbea1b0e369c1d\"\n Error: adding pod to state: name \"httpd1\" is in use: pod already exists\n time=\"2025-10-04T12:34:30-04:00\" level=debug msg=\"Shutting down engines\"\nOct 04 12:34:30 managed-node2 platform-python[36790]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125\nOct 04 12:34:30 managed-node2 sudo[36787]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:34:31 managed-node2 platform-python[36952]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:34:32 managed-node2 platform-python[37076]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:33 managed-node2 platform-python[37201]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:34 managed-node2 platform-python[37325]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:34 managed-node2 platform-python[37448]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:36 managed-node2 platform-python[37743]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:36 managed-node2 platform-python[37868]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:37 managed-node2 platform-python[37991]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nOct 04 12:34:37 managed-node2 platform-python[38055]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd2.yml _original_basename=tmpeaiobce5 recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd2.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:34:37 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice.\n-- Subject: Unit machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml\nOct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: \nOct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2025-10-04T12:34:37-04:00\" level=info msg=\"/usr/bin/podman filtering at log level debug\"\n time=\"2025-10-04T12:34:37-04:00\" 
level=debug msg=\"Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2025-10-04T12:34:37-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\n time=\"2025-10-04T12:34:37-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-10-04T12:34:37-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network 
a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-10-04 12:31:14.473584587 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751)\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9bf5f45dc7a059f998280fd1a84d2cb1cc3c9cf732cdcd689c287e1835e23751\\\"\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice for parent machine.slice and name libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice\"\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_8de2630a4e6252baa81e39deedaa71ee2e5251829c17a5625df1448508dcf4c3.slice\"\n Error: adding pod to state: name \"httpd2\" is in use: pod already exists\n time=\"2025-10-04T12:34:37-04:00\" level=debug msg=\"Shutting down engines\"\nOct 04 12:34:37 managed-node2 platform-python[38178]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125\nOct 04 12:34:39 managed-node2 platform-python[38339]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:40 managed-node2 platform-python[38464]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:41 managed-node2 platform-python[38588]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None 
serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:41 managed-node2 platform-python[38711]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:43 managed-node2 platform-python[39006]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:43 managed-node2 platform-python[39131]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:43 managed-node2 platform-python[39254]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nOct 04 12:34:44 managed-node2 platform-python[39318]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd3.yml _original_basename=tmps2by7p7f recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd3.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:44 managed-node2 platform-python[39441]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:34:44 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_989871f381fec1f937d3af216da739576d2f9779a5e90cbfaa850502488d38c5.slice.\n-- Subject: Unit machine-libpod_pod_989871f381fec1f937d3af216da739576d2f9779a5e90cbfaa850502488d38c5.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_989871f381fec1f937d3af216da739576d2f9779a5e90cbfaa850502488d38c5.slice has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:45 managed-node2 sudo[39603]: root : TTY=pts/0 ; PWD=/root ; 
USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjcjbadzeevkrtchrfausielavpgqkug ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595685.2238288-19784-175751449856146/AnsiballZ_command.py'\nOct 04 12:34:45 managed-node2 sudo[39603]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:34:45 managed-node2 platform-python[39606]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:45 managed-node2 systemd[25493]: Started podman-39616.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:34:45 managed-node2 sudo[39603]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:34:45 managed-node2 platform-python[39746]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:46 managed-node2 platform-python[39877]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:46 managed-node2 sudo[40008]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jubjcpvqkodstxhlsbjwhddazysxbggp ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595686.427822-19841-159760384353159/AnsiballZ_command.py'\nOct 04 12:34:46 managed-node2 sudo[40008]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:34:46 managed-node2 platform-python[40011]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:46 managed-node2 sudo[40008]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:34:47 managed-node2 platform-python[40137]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:47 managed-node2 platform-python[40263]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:47 managed-node2 platform-python[40389]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None 
regexp=None delimiter=None directory_mode=None\nOct 04 12:34:48 managed-node2 platform-python[40513]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:48 managed-node2 platform-python[40637]: ansible-uri Invoked with url=http://localhost:15003/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:34:51 managed-node2 platform-python[40886]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:52 managed-node2 platform-python[41015]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:55 managed-node2 platform-python[41140]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nOct 04 12:34:56 managed-node2 platform-python[41264]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:56 managed-node2 platform-python[41389]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:57 managed-node2 platform-python[41513]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:58 managed-node2 platform-python[41637]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:34:58 managed-node2 platform-python[41761]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:34:58 managed-node2 sudo[41886]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzgxgxnqwlywntbcwmbxfvzqsvbvyldz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python 
/var/tmp/ansible-tmp-1759595698.7728405-20488-3389549821227/AnsiballZ_systemd.py'\nOct 04 12:34:58 managed-node2 sudo[41886]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:34:59 managed-node2 platform-python[41889]: ansible-systemd Invoked with name= scope=user state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nOct 04 12:34:59 managed-node2 systemd[25493]: Reloading.\nOct 04 12:34:59 managed-node2 systemd[25493]: Stopping A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit UNIT has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun shutting down.\nOct 04 12:34:59 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered disabled state\nOct 04 12:34:59 managed-node2 kernel: device veth938ef76c left promiscuous mode\nOct 04 12:34:59 managed-node2 kernel: cni-podman1: port 1(veth938ef76c) entered disabled state\nOct 04 12:34:59 managed-node2 podman[42042]: time=\"2025-10-04T12:34:59-04:00\" level=error msg=\"container not running\"\nOct 04 12:34:59 managed-node2 podman[41905]: Pods stopped:\nOct 04 12:34:59 managed-node2 podman[41905]: e2136086e2f9ee2160cfb4def2ccdfca084fdbe4f7fe9cb8a2664a8c62dbb7fb\nOct 04 12:34:59 managed-node2 podman[41905]: Pods removed:\nOct 04 12:34:59 managed-node2 podman[41905]: e2136086e2f9ee2160cfb4def2ccdfca084fdbe4f7fe9cb8a2664a8c62dbb7fb\nOct 04 12:34:59 managed-node2 podman[41905]: Secrets removed:\nOct 04 12:34:59 managed-node2 podman[41905]: Volumes removed:\nOct 04 12:34:59 managed-node2 systemd[25493]: Stopped A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:34:59 managed-node2 sudo[41886]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:35:00 managed-node2 platform-python[42189]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:00 managed-node2 sudo[42314]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrcjhxluknhgauyehthbegbfnykwuipf ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595700.2224684-20562-98442846681196/AnsiballZ_podman_play.py'\nOct 04 12:35:00 managed-node2 sudo[42314]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nOct 04 12:35:00 
managed-node2 systemd[25493]: Started podman-42325.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman kube play --down /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nOct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped:\n Pods removed:\n Secrets removed:\n Volumes removed:\nOct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: \nOct 04 12:35:00 managed-node2 platform-python[42317]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nOct 04 12:35:00 managed-node2 sudo[42314]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:35:00 managed-node2 platform-python[42454]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:02 managed-node2 platform-python[42577]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:35:02 managed-node2 platform-python[42701]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:04 managed-node2 platform-python[42826]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:04 managed-node2 platform-python[42950]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nOct 04 12:35:04 managed-node2 systemd[1]: Reloading.\nOct 04 12:35:04 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun shutting down.\nOct 04 12:35:04 managed-node2 systemd[1]: libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope has successfully entered the 'dead' state.\nOct 04 12:35:04 managed-node2 systemd[1]: libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope: Consumed 34ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit 
libpod-c43a91cf5c2d4a53c7ecb317550b807b3b54b4b2d53450c46ffd7559e1d92e28.scope completed and consumed the indicated resources.\nOct 04 12:35:04 managed-node2 systemd[1]: libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope has successfully entered the 'dead' state.\nOct 04 12:35:04 managed-node2 systemd[1]: libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope: Consumed 33ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782.scope completed and consumed the indicated resources.\nOct 04 12:35:04 managed-node2 systemd[1]: var-lib-containers-storage-overlay-e31deca989ff1b9cab2066515ef70e9fb506731c52bd3eea5fcc524723f3fd95-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-e31deca989ff1b9cab2066515ef70e9fb506731c52bd3eea5fcc524723f3fd95-merged.mount has successfully entered the 'dead' state.\nOct 04 12:35:05 managed-node2 dnsmasq[29797]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nOct 04 12:35:05 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state\nOct 04 12:35:05 managed-node2 kernel: device veth44fc3814 left promiscuous mode\nOct 04 12:35:05 managed-node2 kernel: cni-podman1: port 1(veth44fc3814) entered disabled state\nOct 04 12:35:05 managed-node2 systemd[1]: run-netns-netns\\x2d1f7b53eb\\x2d816f\\x2d29e7\\x2dfe7f\\x2d6eb0cf8f8502.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-netns-netns\\x2d1f7b53eb\\x2d816f\\x2d29e7\\x2dfe7f\\x2d6eb0cf8f8502.mount has successfully entered the 'dead' state.\nOct 04 12:35:05 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-d443a4ba410c5bb2ba4ca8873ef1df86e7bbcae023c279d137064ec4ecfe4782-userdata-shm.mount has successfully entered the 'dead' state.\nOct 04 12:35:05 managed-node2 systemd[1]: var-lib-containers-storage-overlay-db61c157c6f5df8b7c955b07912536ab089af634fcd5e6ba324c673952746b22-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-db61c157c6f5df8b7c955b07912536ab089af634fcd5e6ba324c673952746b22-merged.mount has successfully entered the 'dead' state.\nOct 04 12:35:05 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice.\n-- Subject: Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice has finished shutting down.\nOct 04 
12:35:05 managed-node2 systemd[1]: machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice: Consumed 67ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a.slice completed and consumed the indicated resources.\nOct 04 12:35:05 managed-node2 podman[42986]: Pods stopped:\nOct 04 12:35:05 managed-node2 podman[42986]: 981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a\nOct 04 12:35:05 managed-node2 podman[42986]: Pods removed:\nOct 04 12:35:05 managed-node2 podman[42986]: 981f83566d24fbf125c6eda4af5e2c534c705d1c536fc032c177a6145a4b1e3a\nOct 04 12:35:05 managed-node2 podman[42986]: Secrets removed:\nOct 04 12:35:05 managed-node2 podman[42986]: Volumes removed:\nOct 04 12:35:05 managed-node2 systemd[1]: libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope has successfully entered the 'dead' state.\nOct 04 12:35:05 managed-node2 systemd[1]: libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope: Consumed 32ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1.scope completed and consumed the indicated resources.\nOct 04 12:35:05 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-2887384cee50d7de6b1b1b8ae2fbdc50141eb8dfe5ae8f16f2676dcd29cdf6c1-userdata-shm.mount has successfully entered the 'dead' state.\nOct 04 12:35:05 managed-node2 dnsmasq[29797]: exiting on receipt of SIGTERM\nOct 04 12:35:05 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has successfully entered the 'dead' state.\nOct 04 12:35:05 managed-node2 systemd[1]: Stopped A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished shutting down.\nOct 04 12:35:05 managed-node2 platform-python[43261]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:05 managed-node2 systemd[1]: var-lib-containers-storage-overlay-4f2be2d0d065dff6d155c9d9c0fcaafe14b4a59e471bfad994f4271172df41f6-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit 
var-lib-containers-storage-overlay-4f2be2d0d065dff6d155c9d9c0fcaafe14b4a59e471bfad994f4271172df41f6-merged.mount has successfully entered the 'dead' state.\nOct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd2.yml\nOct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman kube play --down /etc/containers/ansible-kubernetes.d/httpd2.yml\nOct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped:\n Pods removed:\n Secrets removed:\n Volumes removed:\nOct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: \nOct 04 12:35:06 managed-node2 platform-python[43386]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nOct 04 12:35:06 managed-node2 platform-python[43522]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:08 managed-node2 platform-python[43645]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:09 managed-node2 platform-python[43770]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:10 managed-node2 platform-python[43894]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nOct 04 12:35:10 managed-node2 systemd[1]: Reloading.\nOct 04 12:35:10 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun shutting down.\nOct 04 12:35:10 managed-node2 systemd[1]: libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit 
libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 systemd[1]: libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope: Consumed 33ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46.scope completed and consumed the indicated resources.\nOct 04 12:35:10 managed-node2 systemd[1]: libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 systemd[1]: libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope: Consumed 32ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-d11f4bddc79cfdcc14ada4b6526c7b140266f67a67f7571a286ba6859b2e234a.scope completed and consumed the indicated resources.\nOct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay-41fd0b8b2abd57633d991528a5058698098ba238146a9b3301e23d7fc73f3208-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-41fd0b8b2abd57633d991528a5058698098ba238146a9b3301e23d7fc73f3208-merged.mount has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered disabled state\nOct 04 12:35:10 managed-node2 kernel: device vethe1bf25d0 left promiscuous mode\nOct 04 12:35:10 managed-node2 kernel: cni-podman1: port 2(vethe1bf25d0) entered disabled state\nOct 04 12:35:10 managed-node2 systemd[1]: run-netns-netns\\x2d027c972b\\x2d4f60\\x2dd6f9\\x2d5e22\\x2d75c001071f96.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-netns-netns\\x2d027c972b\\x2d4f60\\x2dd6f9\\x2d5e22\\x2d75c001071f96.mount has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-8f7c1c062a8b3169e60885440558c7c779d21eb45e6b24abd1f9bce35f76cf46-userdata-shm.mount has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay-4d023b27109b3ccd34a210cfed9c7806ca6f6666b5b632942f94ae65bcc6121b-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-4d023b27109b3ccd34a210cfed9c7806ca6f6666b5b632942f94ae65bcc6121b-merged.mount has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice.\n-- Subject: Unit 
machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice has finished shutting down.\nOct 04 12:35:10 managed-node2 systemd[1]: machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice: Consumed 65ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c.slice completed and consumed the indicated resources.\nOct 04 12:35:10 managed-node2 systemd[1]: libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 systemd[1]: libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope: Consumed 34ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12.scope completed and consumed the indicated resources.\nOct 04 12:35:10 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-33f0431433b5b7b44e5aa8ba5b64efa28eedba5b1e5ae0dfc34223450d2bca12-userdata-shm.mount has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 podman[43930]: Pods stopped:\nOct 04 12:35:10 managed-node2 podman[43930]: 6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c\nOct 04 12:35:10 managed-node2 podman[43930]: Pods removed:\nOct 04 12:35:10 managed-node2 podman[43930]: 6fcb6b17e0fe17ff60abc512da78df280658963eb559304b8d9aea330d3c900c\nOct 04 12:35:10 managed-node2 podman[43930]: Secrets removed:\nOct 04 12:35:10 managed-node2 podman[43930]: Volumes removed:\nOct 04 12:35:10 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has successfully entered the 'dead' state.\nOct 04 12:35:10 managed-node2 systemd[1]: Stopped A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished shutting down.\nOct 04 12:35:11 managed-node2 platform-python[44199]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:11 managed-node2 systemd[1]: 
var-lib-containers-storage-overlay-92a24157aa21e0a67c0987f9e002760f709781079b00f0e3baf2a1840c17ef8f-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-92a24157aa21e0a67c0987f9e002760f709781079b00f0e3baf2a1840c17ef8f-merged.mount has successfully entered the 'dead' state.\nOct 04 12:35:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:35:11 managed-node2 platform-python[44324]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nOct 04 12:35:11 managed-node2 platform-python[44324]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd3.yml\nOct 04 12:35:12 managed-node2 platform-python[44460]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:13 managed-node2 platform-python[44583]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None\nOct 04 12:35:13 managed-node2 platform-python[44707]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:14 managed-node2 sudo[44832]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yohamxjcuwwqowlxqaokqdnnkiwnlnpj ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595713.8242128-21262-181373190609036/AnsiballZ_podman_container_info.py'\nOct 04 12:35:14 managed-node2 sudo[44832]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:35:14 managed-node2 platform-python[44835]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None\nOct 04 12:35:14 managed-node2 systemd[25493]: Started podman-44837.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:14 managed-node2 sudo[44832]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:35:14 managed-node2 sudo[44966]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkavwbxyujjrxdsuzqtbvginlcolirvx ; 
XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595714.3794177-21283-159383776115316/AnsiballZ_command.py'\nOct 04 12:35:14 managed-node2 sudo[44966]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:35:14 managed-node2 platform-python[44969]: ansible-command Invoked with _raw_params=podman network ls -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:14 managed-node2 systemd[25493]: Started podman-44971.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:14 managed-node2 sudo[44966]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:35:14 managed-node2 sudo[45126]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbzvbfpfbqkpdwzlmyrlntccwniydmtz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595714.849909-21318-262235972613194/AnsiballZ_command.py'\nOct 04 12:35:14 managed-node2 sudo[45126]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:35:15 managed-node2 platform-python[45129]: ansible-command Invoked with _raw_params=podman secret ls -n -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:15 managed-node2 systemd[25493]: Started podman-45131.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:15 managed-node2 sudo[45126]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:35:15 managed-node2 platform-python[45261]: ansible-command Invoked with removes=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl disable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None stdin=None\nOct 04 12:35:15 managed-node2 systemd[1]: Stopping User Manager for UID 3001...\n-- Subject: Unit user@3001.service has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user@3001.service has begun shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopping podman-pause-f03acc05.scope.\n-- Subject: Unit UNIT has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Default.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopping D-Bus User Message Bus...\n-- Subject: Unit UNIT has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Removed slice podman\\x2dkube.slice.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT 
has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped D-Bus User Message Bus.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Basic System.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Timers.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped Mark boot as successful after the user session has run 2 minutes.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Paths.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped target Sockets.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Closed D-Bus User Message Bus Socket.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Stopped podman-pause-f03acc05.scope.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Removed slice user.slice.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[25493]: Reached target Shutdown.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:15 managed-node2 systemd[25493]: Started Exit the Session.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:15 managed-node2 systemd[25493]: Reached target Exit the Session.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:15 managed-node2 systemd[1]: user@3001.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit user@3001.service has successfully entered the 'dead' state.\nOct 04 12:35:15 managed-node2 systemd[1]: Stopped User Manager for UID 3001.\n-- Subject: Unit user@3001.service has finished shutting down\n-- Defined-By: systemd\n-- Support: 
https://access.redhat.com/support\n-- \n-- Unit user@3001.service has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[1]: Stopping User runtime directory /run/user/3001...\n-- Subject: Unit user-runtime-dir@3001.service has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has begun shutting down.\nOct 04 12:35:15 managed-node2 systemd[1]: run-user-3001.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-user-3001.mount has successfully entered the 'dead' state.\nOct 04 12:35:15 managed-node2 systemd[1]: user-runtime-dir@3001.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit user-runtime-dir@3001.service has successfully entered the 'dead' state.\nOct 04 12:35:15 managed-node2 systemd[1]: Stopped User runtime directory /run/user/3001.\n-- Subject: Unit user-runtime-dir@3001.service has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has finished shutting down.\nOct 04 12:35:15 managed-node2 systemd[1]: Removed slice User Slice of UID 3001.\n-- Subject: Unit user-3001.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-3001.slice has finished shutting down.\nOct 04 12:35:15 managed-node2 platform-python[45395]: ansible-command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:16 managed-node2 sudo[45519]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtaohsaclvtebxmwbqjbotdzsbchavvn ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595716.4392693-21370-112299601135578/AnsiballZ_command.py'\nOct 04 12:35:16 managed-node2 sudo[45519]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nOct 04 12:35:16 managed-node2 platform-python[45522]: ansible-command Invoked with _raw_params=podman pod exists httpd1 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:16 managed-node2 sudo[45519]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:35:17 managed-node2 platform-python[45652]: ansible-command Invoked with _raw_params=podman pod exists httpd2 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:17 managed-node2 platform-python[45782]: ansible-command Invoked with _raw_params=podman pod exists httpd3 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:17 managed-node2 sudo[45913]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swmaujacgizsrobtsjzpmjnnqwleohar ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1759595717.6763232-21443-47795602630278/AnsiballZ_command.py'\nOct 04 12:35:17 managed-node2 sudo[45913]: pam_unix(sudo:session): session opened for user 
podman_basic_user by root(uid=0)\nOct 04 12:35:17 managed-node2 platform-python[45916]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:17 managed-node2 sudo[45913]: pam_unix(sudo:session): session closed for user podman_basic_user\nOct 04 12:35:18 managed-node2 platform-python[46042]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:18 managed-node2 platform-python[46168]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:19 managed-node2 platform-python[46294]: ansible-stat Invoked with path=/var/lib/systemd/linger/podman_basic_user follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:21 managed-node2 platform-python[46542]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:22 managed-node2 platform-python[46671]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:35:23 managed-node2 platform-python[46795]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:26 managed-node2 platform-python[46920]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nOct 04 12:35:26 managed-node2 platform-python[47044]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:27 managed-node2 platform-python[47169]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:27 managed-node2 platform-python[47293]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:28 managed-node2 platform-python[47417]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:29 managed-node2 platform-python[47541]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:29 managed-node2 platform-python[47664]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:30 managed-node2 platform-python[47787]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:31 managed-node2 platform-python[47910]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:35:31 managed-node2 platform-python[48034]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:33 managed-node2 platform-python[48159]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:33 managed-node2 platform-python[48283]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nOct 04 12:35:34 managed-node2 platform-python[48410]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:34 managed-node2 platform-python[48533]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:35 managed-node2 platform-python[48656]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:37 managed-node2 platform-python[48781]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:37 managed-node2 platform-python[48905]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nOct 04 12:35:38 managed-node2 platform-python[49032]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:38 managed-node2 platform-python[49155]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:39 managed-node2 platform-python[49278]: ansible-getent 
Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None\nOct 04 12:35:40 managed-node2 platform-python[49402]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:41 managed-node2 platform-python[49525]: ansible-file Invoked with path=/etc/containers/storage.conf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:41 managed-node2 platform-python[49648]: ansible-file Invoked with path=/tmp/lsr_herecvo4_podman state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:35:42 managed-node2 sshd[49669]: Accepted publickey for root from 10.31.11.222 port 49618 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nOct 04 12:35:42 managed-node2 systemd[1]: Started Session 9 of user root.\n-- Subject: Unit session-9.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit session-9.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:42 managed-node2 systemd-logind[598]: New session 9 of user root.\n-- Subject: A new session 9 has been created for user root\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A new session with the ID 9 has been created for the user root.\n-- \n-- The leading process of the session is 49669.\nOct 04 12:35:42 managed-node2 sshd[49669]: pam_unix(sshd:session): session opened for user root by (uid=0)\nOct 04 12:35:42 managed-node2 sshd[49672]: Received disconnect from 10.31.11.222 port 49618:11: disconnected by user\nOct 04 12:35:42 managed-node2 sshd[49672]: Disconnected from user root 10.31.11.222 port 49618\nOct 04 12:35:42 managed-node2 sshd[49669]: pam_unix(sshd:session): session closed for user root\nOct 04 12:35:42 managed-node2 systemd[1]: session-9.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit session-9.scope has successfully entered the 'dead' state.\nOct 04 12:35:42 managed-node2 systemd-logind[598]: Session 9 logged out. 
Waiting for processes to exit.\nOct 04 12:35:42 managed-node2 systemd-logind[598]: Removed session 9.\n-- Subject: Session 9 has been terminated\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A session with the ID 9 has been terminated.\nOct 04 12:35:44 managed-node2 platform-python[49834]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d\nOct 04 12:35:44 managed-node2 platform-python[49961]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:45 managed-node2 platform-python[50084]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:47 managed-node2 platform-python[50332]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:48 managed-node2 platform-python[50461]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:35:48 managed-node2 platform-python[50585]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:50 managed-node2 sshd[50608]: Accepted publickey for root from 10.31.11.222 port 49628 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nOct 04 12:35:50 managed-node2 systemd[1]: Started Session 10 of user root.\n-- Subject: Unit session-10.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit session-10.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:35:50 managed-node2 systemd-logind[598]: New session 10 of user root.\n-- Subject: A new session 10 has been created for user root\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A new session with the ID 10 has been created for the user root.\n-- \n-- The leading process of the session is 50608.\nOct 04 12:35:50 managed-node2 sshd[50608]: pam_unix(sshd:session): session opened for user root by (uid=0)\nOct 04 12:35:50 managed-node2 sshd[50611]: Received disconnect from 10.31.11.222 port 49628:11: disconnected by user\nOct 04 12:35:50 managed-node2 sshd[50611]: Disconnected from user root 10.31.11.222 port 49628\nOct 04 12:35:50 managed-node2 sshd[50608]: pam_unix(sshd:session): session closed for user root\nOct 04 12:35:50 managed-node2 systemd[1]: session-10.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit session-10.scope has successfully entered the 'dead' state.\nOct 04 12:35:50 managed-node2 systemd-logind[598]: Session 10 logged out. 
Waiting for processes to exit.\nOct 04 12:35:50 managed-node2 systemd-logind[598]: Removed session 10.\n-- Subject: Session 10 has been terminated\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A session with the ID 10 has been terminated.\nOct 04 12:35:51 managed-node2 platform-python[50773]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d\nOct 04 12:35:54 managed-node2 platform-python[50925]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:55 managed-node2 platform-python[51048]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:35:57 managed-node2 platform-python[51296]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:35:58 managed-node2 platform-python[51425]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:35:58 managed-node2 platform-python[51549]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:02 managed-node2 sshd[51572]: Accepted publickey for root from 10.31.11.222 port 39572 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nOct 04 12:36:02 managed-node2 systemd[1]: Started Session 11 of user root.\n-- Subject: Unit session-11.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit session-11.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:36:02 managed-node2 systemd-logind[598]: New session 11 of user root.\n-- Subject: A new session 11 has been created for user root\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A new session with the ID 11 has been created for the user root.\n-- \n-- The leading process of the session is 51572.\nOct 04 12:36:02 managed-node2 sshd[51572]: pam_unix(sshd:session): session opened for user root by (uid=0)\nOct 04 12:36:02 managed-node2 sshd[51575]: Received disconnect from 10.31.11.222 port 39572:11: disconnected by user\nOct 04 12:36:02 managed-node2 sshd[51575]: Disconnected from user root 10.31.11.222 port 39572\nOct 04 12:36:02 managed-node2 sshd[51572]: pam_unix(sshd:session): session closed for user root\nOct 04 12:36:02 managed-node2 systemd[1]: session-11.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit session-11.scope has successfully entered the 'dead' state.\nOct 04 12:36:02 managed-node2 systemd-logind[598]: Session 11 logged out. 
Waiting for processes to exit.\nOct 04 12:36:02 managed-node2 systemd-logind[598]: Removed session 11.\n-- Subject: Session 11 has been terminated\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A session with the ID 11 has been terminated.\nOct 04 12:36:04 managed-node2 platform-python[51737]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d\nOct 04 12:36:04 managed-node2 platform-python[51889]: ansible-user Invoked with name=lsr_multiple_user1 state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on managed-node2 update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None\nOct 04 12:36:04 managed-node2 useradd[51893]: new group: name=lsr_multiple_user1, GID=3002\nOct 04 12:36:04 managed-node2 useradd[51893]: new user: name=lsr_multiple_user1, UID=3002, GID=3002, home=/home/lsr_multiple_user1, shell=/bin/bash\nOct 04 12:36:05 managed-node2 platform-python[52021]: ansible-user Invoked with name=lsr_multiple_user2 state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on managed-node2 update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None\nOct 04 12:36:05 managed-node2 useradd[52025]: new group: name=lsr_multiple_user2, GID=3003\nOct 04 12:36:05 managed-node2 useradd[52025]: new user: name=lsr_multiple_user2, UID=3003, GID=3003, home=/home/lsr_multiple_user2, shell=/bin/bash\nOct 04 12:36:06 managed-node2 platform-python[52153]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:06 managed-node2 platform-python[52276]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:09 managed-node2 platform-python[52524]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:09 managed-node2 sshd[52551]: Accepted publickey for root from 10.31.11.222 port 39576 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nOct 04 12:36:09 managed-node2 systemd[1]: Started Session 12 of user root.\n-- Subject: Unit session-12.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit session-12.scope has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:36:09 managed-node2 systemd-logind[598]: New session 12 of user root.\n-- Subject: A new session 12 has been 
created for user root\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A new session with the ID 12 has been created for the user root.\n-- \n-- The leading process of the session is 52551.\nOct 04 12:36:09 managed-node2 sshd[52551]: pam_unix(sshd:session): session opened for user root by (uid=0)\nOct 04 12:36:09 managed-node2 sshd[52554]: Received disconnect from 10.31.11.222 port 39576:11: disconnected by user\nOct 04 12:36:09 managed-node2 sshd[52554]: Disconnected from user root 10.31.11.222 port 39576\nOct 04 12:36:09 managed-node2 sshd[52551]: pam_unix(sshd:session): session closed for user root\nOct 04 12:36:09 managed-node2 systemd[1]: session-12.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit session-12.scope has successfully entered the 'dead' state.\nOct 04 12:36:09 managed-node2 systemd-logind[598]: Session 12 logged out. Waiting for processes to exit.\nOct 04 12:36:09 managed-node2 systemd-logind[598]: Removed session 12.\n-- Subject: Session 12 has been terminated\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A session with the ID 12 has been terminated.\nOct 04 12:36:11 managed-node2 platform-python[52716]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d\nOct 04 12:36:12 managed-node2 platform-python[52868]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:12 managed-node2 platform-python[52991]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:13 managed-node2 platform-python[53115]: ansible-dnf Invoked with name=['python3-pyasn1', 'python3-cryptography', 'python3-dbus'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:36:15 managed-node2 chronyd[603]: Source 74.208.25.46 replaced with 163.123.152.14 (2.centos.pool.ntp.org)\nOct 04 12:36:16 managed-node2 platform-python[53244]: ansible-dnf Invoked with name=['certmonger'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nOct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nOct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nOct 04 12:36:18 managed-node2 dbus-daemon[595]: 
[system] Reloaded configuration\nOct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nOct 04 12:36:18 managed-node2 systemd[1]: Reloading.\nOct 04 12:36:18 managed-node2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.\n-- Subject: Unit run-ra349d219a6fb4468acd54152311c9c85.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit run-ra349d219a6fb4468acd54152311c9c85.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:36:18 managed-node2 systemd[1]: Starting man-db-cache-update.service...\n-- Subject: Unit man-db-cache-update.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit man-db-cache-update.service has begun starting up.\nOct 04 12:36:19 managed-node2 systemd[1]: Reloading.\nOct 04 12:36:19 managed-node2 systemd[1]: man-db-cache-update.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit man-db-cache-update.service has successfully entered the 'dead' state.\nOct 04 12:36:19 managed-node2 systemd[1]: Started man-db-cache-update.service.\n-- Subject: Unit man-db-cache-update.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit man-db-cache-update.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:36:19 managed-node2 systemd[1]: run-ra349d219a6fb4468acd54152311c9c85.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-ra349d219a6fb4468acd54152311c9c85.service has successfully entered the 'dead' state.\nOct 04 12:36:20 managed-node2 platform-python[53877]: ansible-file Invoked with name=/etc/certmonger//pre-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//pre-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:36:20 managed-node2 platform-python[54000]: ansible-file Invoked with name=/etc/certmonger//post-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//post-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:36:21 managed-node2 platform-python[54123]: ansible-systemd Invoked with name=certmonger state=started enabled=True daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nOct 04 12:36:21 managed-node2 systemd[1]: Reloading.\nOct 04 12:36:21 managed-node2 systemd[1]: Starting Certificate monitoring and PKI enrollment...\n-- Subject: Unit certmonger.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit certmonger.service has begun starting up.\nOct 04 12:36:21 managed-node2 
systemd[1]: Started Certificate monitoring and PKI enrollment.\n-- Subject: Unit certmonger.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit certmonger.service has finished starting up.\n-- \n-- The start-up result is done.\nOct 04 12:36:22 managed-node2 platform-python[54316]: ansible-fedora.linux_system_roles.certificate_request Invoked with name=quadlet_demo dns=['localhost'] directory=/etc/pki/tls wait=True ca=self-sign __header=#\n # Ansible managed\n #\n # system_role:certificate\n booted=True provider_config_directory=/etc/certmonger provider=certmonger key_usage=['digitalSignature', 'keyEncipherment'] extended_key_usage=['id-kp-serverAuth', 'id-kp-clientAuth'] auto_renew=True ip=None email=None common_name=None country=None state=None locality=None organization=None organizational_unit=None contact_email=None key_size=None owner=None group=None mode=None principal=None run_before=None run_after=None\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] 
Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 certmonger[54332]: Certificate in file \"/etc/pki/tls/certs/quadlet_demo.crt\" issued by CA and saved.\nOct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:22 managed-node2 platform-python[54454]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt\nOct 04 12:36:23 managed-node2 platform-python[54577]: ansible-slurp Invoked with path=/etc/pki/tls/private/quadlet_demo.key src=/etc/pki/tls/private/quadlet_demo.key\nOct 04 12:36:23 managed-node2 platform-python[54700]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt\nOct 04 12:36:24 managed-node2 platform-python[54823]: ansible-command Invoked with _raw_params=getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:24 managed-node2 certmonger[54159]: 2025-10-04 12:36:24 [54159] Wrote to /var/lib/certmonger/requests/20251004163622\nOct 04 12:36:24 managed-node2 platform-python[54947]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:36:24 managed-node2 platform-python[55070]: ansible-file Invoked with path=/etc/pki/tls/private/quadlet_demo.key state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:36:25 managed-node2 platform-python[55193]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent 
recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nOct 04 12:36:25 managed-node2 platform-python[55316]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:26 managed-node2 platform-python[55439]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:28 managed-node2 platform-python[55687]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:29 managed-node2 platform-python[55816]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nOct 04 12:36:29 managed-node2 platform-python[55940]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:31 managed-node2 platform-python[56065]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:32 managed-node2 platform-python[56188]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:32 managed-node2 platform-python[56311]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:33 managed-node2 platform-python[56435]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:36:36 managed-node2 platform-python[56558]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nOct 04 12:36:36 managed-node2 platform-python[56685]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nOct 04 12:36:37 managed-node2 platform-python[56812]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] 
firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:36:37 managed-node2 platform-python[56935]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:36:40 managed-node2 platform-python[57058]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:40 managed-node2 platform-python[57182]: ansible-command Invoked with _raw_params=podman ps -a warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay-metacopy\\x2dcheck3122420482-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-metacopy\\x2dcheck3122420482-merged.mount has successfully entered the 'dead' state.\nOct 04 12:36:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nOct 04 12:36:41 managed-node2 platform-python[57312]: ansible-command Invoked with _raw_params=podman pod ps --ctr-ids --ctr-names --ctr-status warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:41 managed-node2 platform-python[57442]: ansible-command Invoked with _raw_params=set -euo pipefail; systemctl list-units --all | grep quadlet _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:41 managed-node2 platform-python[57568]: ansible-command Invoked with _raw_params=ls -alrtF /etc/systemd/system warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:44 managed-node2 platform-python[57817]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:45 managed-node2 platform-python[57946]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nOct 04 12:36:47 managed-node2 platform-python[58071]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ 
install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nOct 04 12:36:50 managed-node2 platform-python[58194]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nOct 04 12:36:51 managed-node2 platform-python[58321]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nOct 04 12:36:51 managed-node2 platform-python[58448]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:36:52 managed-node2 platform-python[58571]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nOct 04 12:36:54 managed-node2 platform-python[58694]: ansible-command Invoked with _raw_params=exec 1>&2\n set -x\n set -o pipefail\n systemctl list-units --plain -l --all | grep quadlet || :\n systemctl list-unit-files --all | grep quadlet || :\n systemctl list-units --plain --failed -l --all | grep quadlet || :\n _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nOct 04 12:36:54 managed-node2 platform-python[58824]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None", "task_name": "Get journald", "task_path": "/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:209" } ]
SYSTEM ROLES ERRORS END v1
TASKS RECAP ********************************************************************
Saturday 04 October 2025 12:36:54 -0400 (0:00:00.390) 0:00:44.316 ******
===============================================================================
fedora.linux_system_roles.certificate : Ensure provider packages are installed --- 3.92s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:15
fedora.linux_system_roles.certificate : Ensure certificate role dependencies are installed --- 3.20s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:5
fedora.linux_system_roles.firewall : Install firewalld ------------------ 2.45s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:51
fedora.linux_system_roles.firewall : Install firewalld ------------------ 2.44s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:51
fedora.linux_system_roles.firewall : Configure firewall ----------------- 2.42s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:74
fedora.linux_system_roles.podman : Gather the package facts ------------- 1.73s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
fedora.linux_system_roles.podman : Gather the package facts ------------- 1.51s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
fedora.linux_system_roles.certificate : Slurp the contents of the files --- 1.24s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:143
fedora.linux_system_roles.certificate : Remove files -------------------- 1.09s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:174
fedora.linux_system_roles.firewall : Enable and start firewalld service --- 1.03s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:30
fedora.linux_system_roles.firewall : Unmask firewalld service ----------- 1.02s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:24
fedora.linux_system_roles.certificate : Ensure provider service is running --- 0.98s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:76
Gathering Facts --------------------------------------------------------- 0.95s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:9
fedora.linux_system_roles.certificate : Ensure certificate requests ----- 0.84s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:86
Debug ------------------------------------------------------------------- 0.73s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:199
fedora.linux_system_roles.podman : See if getsubids exists -------------- 0.47s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
fedora.linux_system_roles.certificate : Ensure pre-scripts hooks directory exists --- 0.46s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:25
fedora.linux_system_roles.podman : Get user information ----------------- 0.46s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
fedora.linux_system_roles.certificate : Check if system is ostree ------- 0.42s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:10
Check ------------------------------------------------------------------- 0.42s
/tmp/collections-gxA/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:148
-- Logs begin at Sat 2025-10-04 12:26:12 EDT, end at Sat 2025-10-04 12:36:55 EDT. --
-- Oct 04 12:36:11 managed-node2 platform-python[52716]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Oct 04 12:36:12 managed-node2 platform-python[52868]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:12 managed-node2 platform-python[52991]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:13 managed-node2 platform-python[53115]: ansible-dnf Invoked with name=['python3-pyasn1', 'python3-cryptography', 'python3-dbus'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:36:15 managed-node2 chronyd[603]: Source 74.208.25.46 replaced with 163.123.152.14 (2.centos.pool.ntp.org) Oct 04 12:36:16 managed-node2 platform-python[53244]: ansible-dnf Invoked with name=['certmonger'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Oct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Oct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Oct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Oct 04 12:36:18 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Oct 04 12:36:18 managed-node2 systemd[1]: Reloading. Oct 04 12:36:18 managed-node2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. -- Subject: Unit run-ra349d219a6fb4468acd54152311c9c85.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit run-ra349d219a6fb4468acd54152311c9c85.service has finished starting up. -- -- The start-up result is done. Oct 04 12:36:18 managed-node2 systemd[1]: Starting man-db-cache-update.service... -- Subject: Unit man-db-cache-update.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has begun starting up. Oct 04 12:36:19 managed-node2 systemd[1]: Reloading. Oct 04 12:36:19 managed-node2 systemd[1]: man-db-cache-update.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit man-db-cache-update.service has successfully entered the 'dead' state. Oct 04 12:36:19 managed-node2 systemd[1]: Started man-db-cache-update.service. 
-- Subject: Unit man-db-cache-update.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has finished starting up. -- -- The start-up result is done. Oct 04 12:36:19 managed-node2 systemd[1]: run-ra349d219a6fb4468acd54152311c9c85.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ra349d219a6fb4468acd54152311c9c85.service has successfully entered the 'dead' state. Oct 04 12:36:20 managed-node2 platform-python[53877]: ansible-file Invoked with name=/etc/certmonger//pre-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//pre-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:36:20 managed-node2 platform-python[54000]: ansible-file Invoked with name=/etc/certmonger//post-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//post-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:36:21 managed-node2 platform-python[54123]: ansible-systemd Invoked with name=certmonger state=started enabled=True daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Oct 04 12:36:21 managed-node2 systemd[1]: Reloading. Oct 04 12:36:21 managed-node2 systemd[1]: Starting Certificate monitoring and PKI enrollment... -- Subject: Unit certmonger.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit certmonger.service has begun starting up. Oct 04 12:36:21 managed-node2 systemd[1]: Started Certificate monitoring and PKI enrollment. -- Subject: Unit certmonger.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit certmonger.service has finished starting up. -- -- The start-up result is done. 
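With certmonger running, the journal entry that follows shows the certificate request itself (certificate_request with name=quadlet_demo, dns=['localhost'], ca=self-sign). A minimal sketch of certificate role input that would produce that call, reconstructed from the logged module arguments; the task name and the direct role include are illustrative assumptions, not taken from the test itself:

    - name: Issue the demo TLS certificate            # hypothetical task name
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.certificate
      vars:
        certificate_requests:
          - name: quadlet_demo        # written to /etc/pki/tls/certs/quadlet_demo.crt
            dns:
              - localhost
            ca: self-sign             # certmonger self-signs, matching the journal below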
Oct 04 12:36:22 managed-node2 platform-python[54316]: ansible-fedora.linux_system_roles.certificate_request Invoked with name=quadlet_demo dns=['localhost'] directory=/etc/pki/tls wait=True ca=self-sign __header=# # Ansible managed # # system_role:certificate booted=True provider_config_directory=/etc/certmonger provider=certmonger key_usage=['digitalSignature', 'keyEncipherment'] extended_key_usage=['id-kp-serverAuth', 'id-kp-clientAuth'] auto_renew=True ip=None email=None common_name=None country=None state=None locality=None organization=None organizational_unit=None contact_email=None key_size=None owner=None group=None mode=None principal=None run_before=None run_after=None
Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622
[the identical certmonger "Wrote to" message repeats many more times at 12:36:22 in the source journal; the run is collapsed here]
Oct 04 12:36:22 managed-node2 certmonger[54332]: Certificate in file "/etc/pki/tls/certs/quadlet_demo.crt" issued by CA and saved.
Oct 04 12:36:22 managed-node2 certmonger[54159]: 2025-10-04 12:36:22 [54159] Wrote to /var/lib/certmonger/requests/20251004163622
Oct 04 12:36:22 managed-node2 platform-python[54454]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt
Oct 04 12:36:23 managed-node2 platform-python[54577]: ansible-slurp Invoked with path=/etc/pki/tls/private/quadlet_demo.key src=/etc/pki/tls/private/quadlet_demo.key
Oct 04 12:36:23 managed-node2 platform-python[54700]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt
Oct 04 12:36:24 managed-node2 platform-python[54823]: ansible-command Invoked with _raw_params=getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 04 12:36:24 managed-node2 certmonger[54159]: 2025-10-04 12:36:24 [54159] Wrote to /var/lib/certmonger/requests/20251004163622
Oct 04 12:36:24 managed-node2 platform-python[54947]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Oct 04 12:36:24 managed-node2 platform-python[55070]: ansible-file Invoked with path=/etc/pki/tls/private/quadlet_demo.key state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Oct 04 12:36:25 managed-node2 platform-python[55193]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER
backup=None remote_src=None regexp=None delimiter=None directory_mode=None Oct 04 12:36:25 managed-node2 platform-python[55316]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:26 managed-node2 platform-python[55439]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:28 managed-node2 platform-python[55687]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:29 managed-node2 platform-python[55816]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Oct 04 12:36:29 managed-node2 platform-python[55940]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:31 managed-node2 platform-python[56065]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:32 managed-node2 platform-python[56188]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:32 managed-node2 platform-python[56311]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:33 managed-node2 platform-python[56435]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:36:36 managed-node2 platform-python[56558]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Oct 04 12:36:36 managed-node2 platform-python[56685]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Oct 04 12:36:37 managed-node2 platform-python[56812]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:36:37 managed-node2 platform-python[56935]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True 
service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:36:40 managed-node2 platform-python[57058]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:40 managed-node2 platform-python[57182]: ansible-command Invoked with _raw_params=podman ps -a warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck3122420482-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-metacopy\x2dcheck3122420482-merged.mount has successfully entered the 'dead' state. Oct 04 12:36:40 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Oct 04 12:36:41 managed-node2 platform-python[57312]: ansible-command Invoked with _raw_params=podman pod ps --ctr-ids --ctr-names --ctr-status warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:41 managed-node2 platform-python[57442]: ansible-command Invoked with _raw_params=set -euo pipefail; systemctl list-units --all | grep quadlet _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:41 managed-node2 platform-python[57568]: ansible-command Invoked with _raw_params=ls -alrtF /etc/systemd/system warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:44 managed-node2 platform-python[57817]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:45 managed-node2 platform-python[57946]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 04 12:36:47 managed-node2 platform-python[58071]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 04 12:36:50 managed-node2 platform-python[58194]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False 
state=None enabled=None force=None user=None scope=None Oct 04 12:36:51 managed-node2 platform-python[58321]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Oct 04 12:36:51 managed-node2 platform-python[58448]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:36:52 managed-node2 platform-python[58571]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Oct 04 12:36:54 managed-node2 platform-python[58694]: ansible-command Invoked with _raw_params=exec 1>&2 set -x set -o pipefail systemctl list-units --plain -l --all | grep quadlet || : systemctl list-unit-files --all | grep quadlet || : systemctl list-units --plain --failed -l --all | grep quadlet || : _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:54 managed-node2 platform-python[58824]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 04 12:36:55 managed-node2 sshd[58846]: Accepted publickey for root from 10.31.11.222 port 43280 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Oct 04 12:36:55 managed-node2 systemd[1]: Started Session 13 of user root. -- Subject: Unit session-13.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-13.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:36:55 managed-node2 systemd-logind[598]: New session 13 of user root. -- Subject: A new session 13 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 13 has been created for the user root. -- -- The leading process of the session is 58846. Oct 04 12:36:55 managed-node2 sshd[58846]: pam_unix(sshd:session): session opened for user root by (uid=0) Oct 04 12:36:55 managed-node2 sshd[58849]: Received disconnect from 10.31.11.222 port 43280:11: disconnected by user Oct 04 12:36:55 managed-node2 sshd[58849]: Disconnected from user root 10.31.11.222 port 43280 Oct 04 12:36:55 managed-node2 sshd[58846]: pam_unix(sshd:session): session closed for user root Oct 04 12:36:55 managed-node2 systemd[1]: session-13.scope: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-13.scope has successfully entered the 'dead' state. Oct 04 12:36:55 managed-node2 systemd-logind[598]: Session 13 logged out. Waiting for processes to exit. Oct 04 12:36:55 managed-node2 systemd-logind[598]: Removed session 13. -- Subject: Session 13 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 13 has been terminated. Oct 04 12:36:55 managed-node2 sshd[58870]: Accepted publickey for root from 10.31.11.222 port 43290 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Oct 04 12:36:55 managed-node2 systemd[1]: Started Session 14 of user root. -- Subject: Unit session-14.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-14.scope has finished starting up. -- -- The start-up result is done. Oct 04 12:36:55 managed-node2 systemd-logind[598]: New session 14 of user root. -- Subject: A new session 14 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 14 has been created for the user root. -- -- The leading process of the session is 58870. Oct 04 12:36:55 managed-node2 sshd[58870]: pam_unix(sshd:session): session opened for user root by (uid=0)
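The journal above records two passes over the firewall role, each enabling 8000/tcp and 9000/tcp through firewall_lib with permanent=True runtime=True state=enabled. A minimal sketch of firewall role input that would produce those calls, reconstructed from the logged arguments; the task name and the direct role include are illustrative assumptions (how the test itself wires this up is not shown in this excerpt):

    - name: Open the quadlet demo ports               # hypothetical task name
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.firewall
      vars:
        firewall:
          - port: 8000/tcp            # firewall_lib port=['8000/tcp'] state=enabled
            state: enabled
          - port: 9000/tcp            # firewall_lib port=['9000/tcp'] state=enabled
            state: enabled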