ansible-playbook 2.9.27
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.9.19 (main, May 16 2024, 11:40:09) [GCC 8.5.0 20210514 (Red Hat 8.5.0-22)]
No config file found; using defaults
[WARNING]: running playbook inside collection fedora.linux_system_roles
Skipping callback 'actionable', as we already have a stdout callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'jsonl', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'null', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
Skipping callback 'selective', as we already have a stdout callback.
Skipping callback 'skippy', as we already have a stdout callback.
Skipping callback 'stderr', as we already have a stdout callback.
Skipping callback 'unixy', as we already have a stdout callback.
Skipping callback 'yaml', as we already have a stdout callback.

PLAYBOOK: tests_quadlet_demo.yml ***********************************************
2 plays in /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml

PLAY [all] *********************************************************************
META: ran handlers

TASK [Include vault variables] *************************************************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:5
Saturday 02 August 2025 12:41:42 -0400 (0:00:00.037) 0:00:00.037 *******
ok: [managed-node2] => { "ansible_facts": { "__podman_test_password": { "__ansible_vault": "$ANSIBLE_VAULT;1.1;AES256\n35383939616163653333633431363463313831383037386236646138333162396161356130303461\n3932623930643263313563336163316337643562333936360a363538636631313039343233383732\n38666530383538656639363465313230343533386130303833336434303438333161656262346562\n3362626538613031640a663330613638366132356534363534353239616666653466353961323533\n6565\n" }, "mysql_container_root_password": { "__ansible_vault": "$ANSIBLE_VAULT;1.1;AES256\n61333932373230333539663035366431326163363166363036323963623131363530326231303634\n6635326161643165363366323062333334363730376631660a393566366139353861656364656661\n38653463363837336639363032646433666361646535366137303464623261313663643336306465\n6264663730656337310a343962353137386238383064646533366433333437303566656433386233\n34343235326665646661623131643335313236313131353661386338343366316261643634653633\n3832313034366536616531323963333234326461353130303532\n" } }, "ansible_included_var_files": [ "/tmp/podman-fei/tests/vars/vault-variables.yml" ], "changed": false }
META: ran handlers
META: ran handlers

PLAY [Deploy the quadlet demo app] *********************************************

TASK [Gathering Facts] *********************************************************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:9
Saturday 02 August 2025 12:41:42 -0400 (0:00:00.031) 0:00:00.068 *******
ok: [managed-node2]
META: ran handlers

TASK [Test is only supported on x86_64] ****************************************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:38
Saturday 02 August 2025 12:41:43 -0400 (0:00:00.968) 0:00:01.036 *******
skipping: [managed-node2] => {}
META:

TASK [Generate certificates] ***************************************************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:51
Saturday 02 August 2025 12:41:43 -0400 (0:00:00.067) 0:00:01.104 *******

TASK [fedora.linux_system_roles.certificate : Set version specific variables] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:2
Saturday 02 August 2025 12:41:43 -0400 (0:00:00.043) 0:00:01.147 *******
included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml for managed-node2

TASK [fedora.linux_system_roles.certificate : Ensure ansible_facts used by role] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:2
Saturday 02 August 2025 12:41:43 -0400 (0:00:00.026) 0:00:01.174 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.certificate : Check if system is ostree] *******
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:10
Saturday 02 August 2025 12:41:43 -0400 (0:00:00.019) 0:00:01.193 *******
ok: [managed-node2] => { "changed": false, "stat": { "exists": false } }

TASK [fedora.linux_system_roles.certificate : Set flag to indicate system is ostree] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:15
Saturday 02 August 2025 12:41:44 -0400 (0:00:00.477) 0:00:01.670 *******
ok: [managed-node2] => { "ansible_facts": { "__certificate_is_ostree": false }, "changed": false }

TASK [fedora.linux_system_roles.certificate : Run systemctl] *******************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:22
Saturday 02 August 2025 12:41:44 -0400 (0:00:00.033) 0:00:01.704 *******
ok: [managed-node2] => { "changed": false, "cmd": [ "systemctl", "is-system-running" ], "delta": "0:00:00.007671", "end": "2025-08-02 12:41:44.508128", "failed_when_result": false, "rc": 0, "start": "2025-08-02 12:41:44.500457" }
STDOUT:
running

TASK [fedora.linux_system_roles.certificate : Require installed systemd] *******
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:30
Saturday 02 August 2025 12:41:44 -0400 (0:00:00.488) 0:00:02.192 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.certificate : Set flag to indicate that systemd runtime operations are available] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:35
Saturday 02 August 2025 12:41:44 -0400 (0:00:00.019) 0:00:02.212 *******
ok: [managed-node2] => { "ansible_facts": { "__certificate_is_booted": true }, "changed": false }

TASK [fedora.linux_system_roles.certificate : Set platform/version specific variables] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:40
Saturday 02 August 2025 12:41:44 -0400 (0:00:00.021) 0:00:02.233 *******
skipping: [managed-node2] => (item=RedHat.yml) => { "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml", "skip_reason": "Conditional result was False" }
skipping: [managed-node2] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "item": "CentOS.yml", "skip_reason": "Conditional result was False" }
skipping: [managed-node2] => (item=CentOS_8.yml) => { "ansible_loop_var": "item", "changed": false, "item": "CentOS_8.yml", "skip_reason": "Conditional result was False" }
skipping: [managed-node2] => (item=CentOS_8.yml) => { "ansible_loop_var": "item", "changed": false, "item": "CentOS_8.yml", "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.certificate : Ensure certificate role dependencies are installed] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:5
Saturday 02 August 2025 12:41:44 -0400 (0:00:00.044) 0:00:02.278 *******
changed: [managed-node2] => { "changed": true, "rc": 0, "results": [ "Installed: python3-pyasn1-0.3.7-6.el8.noarch" ] }
lsrpackages: python3-cryptography python3-dbus python3-pyasn1

TASK [fedora.linux_system_roles.certificate : Ensure provider packages are installed] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:15
Saturday 02 August 2025 12:41:48 -0400 (0:00:03.724) 0:00:06.002 *******
changed: [managed-node2] => (item=certmonger) => { "__certificate_provider": "certmonger", "ansible_loop_var": "__certificate_provider", "changed": true, "rc": 0, "results": [ "Installed: xmlrpc-c-client-1.51.0-9.el8.x86_64", "Installed: xmlrpc-c-1.51.0-9.el8.x86_64", "Installed: certmonger-0.79.17-2.el8.x86_64" ] }
lsrpackages: certmonger

TASK [fedora.linux_system_roles.certificate : Ensure pre-scripts hooks directory exists] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:25
Saturday 02 August 2025 12:41:52 -0400 (0:00:04.455) 0:00:10.457 *******
changed: [managed-node2] => (item=certmonger) => { "__certificate_provider": "certmonger", "ansible_loop_var": "__certificate_provider", "changed": true, "gid": 0, "group": "root", "mode": "0700", "owner": "root", "path": "/etc/certmonger//pre-scripts", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0 }

TASK [fedora.linux_system_roles.certificate : Ensure post-scripts hooks directory exists] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:49
Saturday 02 August 2025 12:41:53 -0400 (0:00:00.578) 0:00:11.036 *******
changed: [managed-node2] => (item=certmonger) => { "__certificate_provider": "certmonger", "ansible_loop_var": "__certificate_provider", "changed": true, "gid": 0, "group": "root", "mode": "0700", "owner": "root", "path": "/etc/certmonger//post-scripts", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0 }

TASK [fedora.linux_system_roles.certificate : Ensure provider service is running] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:76
Saturday 02 August 2025 12:41:53 -0400 (0:00:00.399) 0:00:11.436 *******
changed: [managed-node2] => (item=certmonger) => { "__certificate_provider": "certmonger", "ansible_loop_var": "__certificate_provider", "changed": true, "enabled": true, "name": "certmonger", "state": "started", "status": { "ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "dbus.service basic.target sysinit.target network.target dbus.socket syslog.target systemd-journald.socket system.slice", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedorahosted.certmonger", "CPUAccounting": "no", "CPUAffinity": "", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf", "CollectMode": "inactive", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "Certificate monitoring and PKI enrollment", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/certmonger (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/sbin/certmonger ; argv[]=/usr/sbin/certmonger -S -p /run/certmonger.pid -n $OPTS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/certmonger.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "certmonger.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "14003", "LimitNPROCSoft": "14003", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14003", "LimitSIGPENDINGSoft": "14003", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "MemoryAccounting": "yes", "MemoryCurrent": "[not set]", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "certmonger.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PIDFile": "/run/certmonger.pid", "PartOf": "dbus.service", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "sysinit.target system.slice dbus.socket", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestampMonotonic": "0", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "[not set]", "TasksMax": "22405", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "UtmpMode": "init", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0" } }

TASK [fedora.linux_system_roles.certificate : Ensure certificate requests] *****
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:86
Saturday 02 August 2025 12:41:54 -0400 (0:00:01.075) 0:00:12.511 *******
changed: [managed-node2] => (item={'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}) => { "ansible_loop_var": "item", "changed": true, "item": { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } }
MSG:
Certificate requested (new).

TASK [fedora.linux_system_roles.certificate : Check if test mode is supported] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:138
Saturday 02 August 2025 12:41:55 -0400 (0:00:01.053) 0:00:13.564 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.certificate : Slurp the contents of the files] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:143
Saturday 02 August 2025 12:41:55 -0400 (0:00:00.035) 0:00:13.600 *******
ok: [managed-node2] => (item=['cert', {'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}]) => { "ansible_loop_var": "item", "changed": false, "content": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURnakNDQW1xZ0F3SUJBZ0lRTUlEVEtSSS9SMzZiczlFdER1L1FrakFOQmdrcWhraUc5dzBCQVFzRkFEQlEKTVNBd0hnWURWUVFEREJkTWIyTmhiQ0JUYVdkdWFXNW5JRUYxZEdodmNtbDBlVEVzTUNvR0ExVUVBd3dqTXpBNApNR1F6TWprdE1USXpaalEzTjJVdE9XSmlNMlF4TW1RdE1HVmxabVF3T1RFd0hoY05NalV3T0RBeU1UWTBNVFUxCldoY05Nall3T0RBeU1UWTBNVFUwV2pBVU1SSXdFQVlEVlFRREV3bHNiMk5oYkdodmMzUXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRRFhtSVRWekVROHBkZ0FsU09pcWdqSVN3RXBHNko5Nmc1QQo4L1l0MWhqZHVyTGY5cy9lM2hTT2JhZDBkNHlzSFVRT21aWmdBZVdKWTl1eWhZMGxRYTR1R3RjdDMzMzBwS2poCjdkZ3FYUTFyNURZZlcyVSszSzdjRGxHVW9WOXpJbmdMRkJQcjhRNkFRZktJNEZ6U0ZCS0liZkJDYzVLWS9sOHUKQW5uK0xvaGRjOCtkUHkybitwMDdKZnFXQmc1WHk4VkdyL2RqaURIM2dFbm9sajgrUDY1dXQrMFA0cXpkZVlweQpNaDlaSWlPRHNYV0RBb29YOE5ZNE0wUlZZNVAyamJJaW9xL21PQ2ZkVysxMVFINUhYL2JyWWNxVHhYb3JwYjNUCktidkFVYjBOV1V4bnM2UFlncEM3ZFcvTzNCWjV5dVE3cEFlU054bWRBblgxUXdEQUwvMEpBZ01CQUFHamdaTXcKZ1pBd0N3WURWUjBQQkFRREFnV2dNQlFHQTFVZEVRUU5NQXVDQ1d4dlkyRnNhRzl6ZERBZEJnTlZIU1VFRmpBVQpCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUhBd0l3REFZRFZSMFRBUUgvQkFJd0FEQWRCZ05WSFE0RUZnUVU5Tk1OCldnclNiTGN5dGhBbndVM3pUNmU2Mm5Vd0h3WURWUjBqQkJnd0ZvQVVqUjU3eUdMcm1Eb1NqSWFaUE1yQkpuRHcKd3BRd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFBMVpRd1pIWU1zWVNhV2V6MVlnSHhhQUVwUGtJVXp2a015agpoK0pWK1dMODI1U0NzZWJSVjJncmlOMkI3N01ScjQ2SEo4eUNZbStwMURrdUlSRytFRHVMZSszZWhxMGNHMklqClIvbnRXTHRNOXp3YkExSnh0SkJ3dWpRaEpqMlVKejhkdmd6OXMwSmpmblV1Ym9RdnV3bVVvUkVEQzNMdHc3bGYKcGxJOUVaZzJWb3R1Nno4MVR4SmJvQzFkeGlXZDBqS0xQSy9OYW1zSlBTOXNIUk5JTUdZYzIvNk5iZWdJVUsrVwpBdVk3SHFpVmxZOWMwZ0dNN0VHLzlmV3A2UzRBNUV1MklvYTlsQVl2ZXVYMm1aSUdFbi9DT0diUEduZGNyQ25NCjJHQnRReUcwOWJWNG0waGtvZlgyd1hiMUR4Z2lQVzFjUVlEdHlUR0xJVnMyWTdheTRDYz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "encoding": "base64", "item": [ "cert", { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } ], "source": "/etc/pki/tls/certs/quadlet_demo.crt" }
ok: [managed-node2] => (item=['key', {'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}]) => { "ansible_loop_var": "item", "changed": false, "content": "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2Z0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktnd2dnU2tBZ0VBQW9JQkFRRFhtSVRWekVROHBkZ0EKbFNPaXFnaklTd0VwRzZKOTZnNUE4L1l0MWhqZHVyTGY5cy9lM2hTT2JhZDBkNHlzSFVRT21aWmdBZVdKWTl1eQpoWTBsUWE0dUd0Y3QzMzMwcEtqaDdkZ3FYUTFyNURZZlcyVSszSzdjRGxHVW9WOXpJbmdMRkJQcjhRNkFRZktJCjRGelNGQktJYmZCQ2M1S1kvbDh1QW5uK0xvaGRjOCtkUHkybitwMDdKZnFXQmc1WHk4VkdyL2RqaURIM2dFbm8KbGo4K1A2NXV0KzBQNHF6ZGVZcHlNaDlaSWlPRHNYV0RBb29YOE5ZNE0wUlZZNVAyamJJaW9xL21PQ2ZkVysxMQpRSDVIWC9iclljcVR4WG9ycGIzVEtidkFVYjBOV1V4bnM2UFlncEM3ZFcvTzNCWjV5dVE3cEFlU054bWRBblgxClF3REFMLzBKQWdNQkFBRUNnZ0VBTmpBSGN4VWNNWUlkZ0VmNzVPNkh0by9qZ3NtSFZjNTJOcTBhdXZxTTFXNFAKZDJzNVkvSzloKzdYbjlaTWJSWU4vUDF0WmtRVHhTeHNFN3F0ZHlmQzk2T1hZZGhDZURMS243NkFEbVhFOGRFeQpLVDkzZXcxTWhkS3pmbi93MWFkY05LeWZOVFlwUnVOTWFrcTZDYk14MVVpTWtxY3B1WHAxd2NLdGxzMXJTTkZVCm4ra2lRa1JHWU9Tdy9rRHNRZWw4UGdCSkVDQTI3b0VHZkcyQ0J3T2U5c0xoSEtUd1R5aDlvQjRQTUUxY1pHeHkKOTlLSDM4K2VZaG9MQUt5V3BNejhIZnRpZzRTK0lSaWhxTVoxL1g2OWpCYUJScHdyZ1VaaS82V1ZoZFd5bFBQMAprN0tkVmNjdXNRaWV1R1d3WEVsNjZqL0ZBVDRuTFcyVi9DOTlGRzJ1NFFLQmdRRDRFWjE4WFhGSk1zZk5PemJRCnhtZWViNnZ6SVltMTlrQkErVjlaR1o5eUZVTUVtS3VRRjRXa2RFQTgxN0ZwRkZ1U3pEVHd0SXJRcWtvdmFqbzAKdDZJcGlsSkR3U2ZDNFRJeEZaTi8vM2EzZmdhWWVnSG5wcjFUMWpWNjBqMS9LM1FJVUx3UFpBcTFGWEZUUi9zTQo1ZVczQllWeitZenpxZ1hTd1A5amsvendId0tCZ1FEZWZSNmMyUXMrME40MXczZXI3azF0ZC9TZExoVEJKMHZrCnEwUlowM2s2RXc1V0FuOWxrWGpjQ1RPcTFJeEJ6aEJFM3NDc1grelRhL1lPcU9ZZWNuWW1VNUR4Y2lsY0VSSHkKRURuQnEyMTliTXZCMzUyK0V6MGtaSU9IUDdjY1JZSW1RV0NBRHZGaXEzOVVGbW9LS3kzTWpCMlg3cWRJTlk0TgptdUNnT0ZkTjF3S0JnUUNoUW5vNC9WbUdkdjlSbDl1emJqYXYxUEpYbEFhOGhmOFEvY3NRMWNwRDFEU0R5V2RGCnZUVEFTbDN6NzFkQjh0endtZFVVWUprWXVvcU5OaVh1WFMyS2lZT2V6ZksyQ2NTaUNkK2Z4b2I3RTI3Z01mZ0oKQ2VocmxvV2ZlUXBISUExRzFvemFDSE81Y2Q0QWdIdGYwQmM3bWRnK0l3eVEzWWI1a1VLMERlRFFpd0tCZ0Z6Ywo2RndiRTJDQ21WempXeDI5OXo5THBDTyt5aGJjcWdhbG5YL0lqbjY0MlhENDFlZTAwamMwK0FYRGRVODZEUHhSCjVTV05YREhhaS9jT2RBNGRSRWMyOWZadzZlWnRrWW54VDhvUUhVRU9tZlV2dW8xTlJtWGNOakhMWEVoR2tzNFkKMTRoYnRGQzB1QTZHMUhldUVnMmdKZkgyUUlnWklsTjNZMjQ4VmVROUFvR0JBT2J2and4Si8vNWYvR3JKcHIvagpDcDBtVitjM1hsbFluVXk5MlVIem9iSWxNVTQ2YUlLcm9RMC9IejhiZGROZElXU0czeTVuK3FpUEZ0ZGlJZ0VmCmFWOG5laCtxM3lXOFQ1YXYvT2JTRnQ4akJCWDlOK3pRWVpPaWFJc2gzemxudjVYOTlYcjJvWEVqVzEvZ3oyeUoKbWtwRTgxVU5BUDFZZUhCdmdGSlVlb0dFCi0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K", "encoding": "base64", "item": [ "key", { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } ], "source": "/etc/pki/tls/private/quadlet_demo.key" }
ok: [managed-node2] => (item=['ca', {'name': 'quadlet_demo', 'dns': ['localhost'], 'ca': 'self-sign'}]) => { "ansible_loop_var": "item", "changed": false, "content": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURnakNDQW1xZ0F3SUJBZ0lRTUlEVEtSSS9SMzZiczlFdER1L1FrakFOQmdrcWhraUc5dzBCQVFzRkFEQlEKTVNBd0hnWURWUVFEREJkTWIyTmhiQ0JUYVdkdWFXNW5JRUYxZEdodmNtbDBlVEVzTUNvR0ExVUVBd3dqTXpBNApNR1F6TWprdE1USXpaalEzTjJVdE9XSmlNMlF4TW1RdE1HVmxabVF3T1RFd0hoY05NalV3T0RBeU1UWTBNVFUxCldoY05Nall3T0RBeU1UWTBNVFUwV2pBVU1SSXdFQVlEVlFRREV3bHNiMk5oYkdodmMzUXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRRFhtSVRWekVROHBkZ0FsU09pcWdqSVN3RXBHNko5Nmc1QQo4L1l0MWhqZHVyTGY5cy9lM2hTT2JhZDBkNHlzSFVRT21aWmdBZVdKWTl1eWhZMGxRYTR1R3RjdDMzMzBwS2poCjdkZ3FYUTFyNURZZlcyVSszSzdjRGxHVW9WOXpJbmdMRkJQcjhRNkFRZktJNEZ6U0ZCS0liZkJDYzVLWS9sOHUKQW5uK0xvaGRjOCtkUHkybitwMDdKZnFXQmc1WHk4VkdyL2RqaURIM2dFbm9sajgrUDY1dXQrMFA0cXpkZVlweQpNaDlaSWlPRHNYV0RBb29YOE5ZNE0wUlZZNVAyamJJaW9xL21PQ2ZkVysxMVFINUhYL2JyWWNxVHhYb3JwYjNUCktidkFVYjBOV1V4bnM2UFlncEM3ZFcvTzNCWjV5dVE3cEFlU054bWRBblgxUXdEQUwvMEpBZ01CQUFHamdaTXcKZ1pBd0N3WURWUjBQQkFRREFnV2dNQlFHQTFVZEVRUU5NQXVDQ1d4dlkyRnNhRzl6ZERBZEJnTlZIU1VFRmpBVQpCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUhBd0l3REFZRFZSMFRBUUgvQkFJd0FEQWRCZ05WSFE0RUZnUVU5Tk1OCldnclNiTGN5dGhBbndVM3pUNmU2Mm5Vd0h3WURWUjBqQkJnd0ZvQVVqUjU3eUdMcm1Eb1NqSWFaUE1yQkpuRHcKd3BRd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFBMVpRd1pIWU1zWVNhV2V6MVlnSHhhQUVwUGtJVXp2a015agpoK0pWK1dMODI1U0NzZWJSVjJncmlOMkI3N01ScjQ2SEo4eUNZbStwMURrdUlSRytFRHVMZSszZWhxMGNHMklqClIvbnRXTHRNOXp3YkExSnh0SkJ3dWpRaEpqMlVKejhkdmd6OXMwSmpmblV1Ym9RdnV3bVVvUkVEQzNMdHc3bGYKcGxJOUVaZzJWb3R1Nno4MVR4SmJvQzFkeGlXZDBqS0xQSy9OYW1zSlBTOXNIUk5JTUdZYzIvNk5iZWdJVUsrVwpBdVk3SHFpVmxZOWMwZ0dNN0VHLzlmV3A2UzRBNUV1MklvYTlsQVl2ZXVYMm1aSUdFbi9DT0diUEduZGNyQ25NCjJHQnRReUcwOWJWNG0waGtvZlgyd1hiMUR4Z2lQVzFjUVlEdHlUR0xJVnMyWTdheTRDYz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "encoding": "base64", "item": [ "ca", { "ca": "self-sign", "dns": [ "localhost" ], "name": "quadlet_demo" } ], "source": "/etc/pki/tls/certs/quadlet_demo.crt" }

TASK [fedora.linux_system_roles.certificate : Reset certificate_test_certs] ****
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:151
Saturday 02 August 2025 12:41:57 -0400 (0:00:01.270) 0:00:14.870 *******
ok: [managed-node2] => { "ansible_facts": { "certificate_test_certs": {} }, "changed": false }

TASK [fedora.linux_system_roles.certificate : Create return data] **************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:155
Saturday 02 August 2025 12:41:57 -0400 (0:00:00.034) 0:00:14.904 *******
ok: [managed-node2] => (item=quadlet_demo) => { "ansible_facts": { "certificate_test_certs": { "quadlet_demo": { "ca": "/etc/pki/tls/certs/quadlet_demo.crt", "ca_content": "-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQMIDTKRI/R36bs9EtDu/QkjANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjMzA4\nMGQzMjktMTIzZjQ3N2UtOWJiM2QxMmQtMGVlZmQwOTEwHhcNMjUwODAyMTY0MTU1\nWhcNMjYwODAyMTY0MTU0WjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDXmITVzEQ8pdgAlSOiqgjISwEpG6J96g5A\n8/Yt1hjdurLf9s/e3hSObad0d4ysHUQOmZZgAeWJY9uyhY0lQa4uGtct3330pKjh\n7dgqXQ1r5DYfW2U+3K7cDlGUoV9zIngLFBPr8Q6AQfKI4FzSFBKIbfBCc5KY/l8u\nAnn+Lohdc8+dPy2n+p07JfqWBg5Xy8VGr/djiDH3gEnolj8+P65ut+0P4qzdeYpy\nMh9ZIiODsXWDAooX8NY4M0RVY5P2jbIioq/mOCfdW+11QH5HX/brYcqTxXorpb3T\nKbvAUb0NWUxns6PYgpC7dW/O3BZ5yuQ7pAeSNxmdAnX1QwDAL/0JAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU9NMN\nWgrSbLcythAnwU3zT6e62nUwHwYDVR0jBBgwFoAUjR57yGLrmDoSjIaZPMrBJnDw\nwpQwDQYJKoZIhvcNAQELBQADggEBAA1ZQwZHYMsYSaWez1YgHxaAEpPkIUzvkMyj\nh+JV+WL825SCsebRV2griN2B77MRr46HJ8yCYm+p1DkuIRG+EDuLe+3ehq0cG2Ij\nR/ntWLtM9zwbA1JxtJBwujQhJj2UJz8dvgz9s0JjfnUuboQvuwmUoREDC3Ltw7lf\nplI9EZg2Votu6z81TxJboC1dxiWd0jKLPK/NamsJPS9sHRNIMGYc2/6NbegIUK+W\nAuY7HqiVlY9c0gGM7EG/9fWp6S4A5Eu2Ioa9lAYveuX2mZIGEn/COGbPGndcrCnM\n2GBtQyG09bV4m0hkofX2wXb1DxgiPW1cQYDtyTGLIVs2Y7ay4Cc=\n-----END CERTIFICATE-----\n", "cert": "/etc/pki/tls/certs/quadlet_demo.crt", "cert_content": "-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQMIDTKRI/R36bs9EtDu/QkjANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjMzA4\nMGQzMjktMTIzZjQ3N2UtOWJiM2QxMmQtMGVlZmQwOTEwHhcNMjUwODAyMTY0MTU1\nWhcNMjYwODAyMTY0MTU0WjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDXmITVzEQ8pdgAlSOiqgjISwEpG6J96g5A\n8/Yt1hjdurLf9s/e3hSObad0d4ysHUQOmZZgAeWJY9uyhY0lQa4uGtct3330pKjh\n7dgqXQ1r5DYfW2U+3K7cDlGUoV9zIngLFBPr8Q6AQfKI4FzSFBKIbfBCc5KY/l8u\nAnn+Lohdc8+dPy2n+p07JfqWBg5Xy8VGr/djiDH3gEnolj8+P65ut+0P4qzdeYpy\nMh9ZIiODsXWDAooX8NY4M0RVY5P2jbIioq/mOCfdW+11QH5HX/brYcqTxXorpb3T\nKbvAUb0NWUxns6PYgpC7dW/O3BZ5yuQ7pAeSNxmdAnX1QwDAL/0JAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU9NMN\nWgrSbLcythAnwU3zT6e62nUwHwYDVR0jBBgwFoAUjR57yGLrmDoSjIaZPMrBJnDw\nwpQwDQYJKoZIhvcNAQELBQADggEBAA1ZQwZHYMsYSaWez1YgHxaAEpPkIUzvkMyj\nh+JV+WL825SCsebRV2griN2B77MRr46HJ8yCYm+p1DkuIRG+EDuLe+3ehq0cG2Ij\nR/ntWLtM9zwbA1JxtJBwujQhJj2UJz8dvgz9s0JjfnUuboQvuwmUoREDC3Ltw7lf\nplI9EZg2Votu6z81TxJboC1dxiWd0jKLPK/NamsJPS9sHRNIMGYc2/6NbegIUK+W\nAuY7HqiVlY9c0gGM7EG/9fWp6S4A5Eu2Ioa9lAYveuX2mZIGEn/COGbPGndcrCnM\n2GBtQyG09bV4m0hkofX2wXb1DxgiPW1cQYDtyTGLIVs2Y7ay4Cc=\n-----END CERTIFICATE-----\n", "key": "/etc/pki/tls/private/quadlet_demo.key", "key_content": "-----BEGIN PRIVATE KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDXmITVzEQ8pdgA\nlSOiqgjISwEpG6J96g5A8/Yt1hjdurLf9s/e3hSObad0d4ysHUQOmZZgAeWJY9uy\nhY0lQa4uGtct3330pKjh7dgqXQ1r5DYfW2U+3K7cDlGUoV9zIngLFBPr8Q6AQfKI\n4FzSFBKIbfBCc5KY/l8uAnn+Lohdc8+dPy2n+p07JfqWBg5Xy8VGr/djiDH3gEno\nlj8+P65ut+0P4qzdeYpyMh9ZIiODsXWDAooX8NY4M0RVY5P2jbIioq/mOCfdW+11\nQH5HX/brYcqTxXorpb3TKbvAUb0NWUxns6PYgpC7dW/O3BZ5yuQ7pAeSNxmdAnX1\nQwDAL/0JAgMBAAECggEANjAHcxUcMYIdgEf75O6Hto/jgsmHVc52Nq0auvqM1W4P\nd2s5Y/K9h+7Xn9ZMbRYN/P1tZkQTxSxsE7qtdyfC96OXYdhCeDLKn76ADmXE8dEy\nKT93ew1MhdKzfn/w1adcNKyfNTYpRuNMakq6CbMx1UiMkqcpuXp1wcKtls1rSNFU\nn+kiQkRGYOSw/kDsQel8PgBJECA27oEGfG2CBwOe9sLhHKTwTyh9oB4PME1cZGxy\n99KH38+eYhoLAKyWpMz8Hftig4S+IRihqMZ1/X69jBaBRpwrgUZi/6WVhdWylPP0\nk7KdVccusQieuGWwXEl66j/FAT4nLW2V/C99FG2u4QKBgQD4EZ18XXFJMsfNOzbQ\nxmeeb6vzIYm19kBA+V9ZGZ9yFUMEmKuQF4WkdEA817FpFFuSzDTwtIrQqkovajo0\nt6IpilJDwSfC4TIxFZN//3a3fgaYegHnpr1T1jV60j1/K3QIULwPZAq1FXFTR/sM\n5eW3BYVz+YzzqgXSwP9jk/zwHwKBgQDefR6c2Qs+0N41w3er7k1td/SdLhTBJ0vk\nq0RZ03k6Ew5WAn9lkXjcCTOq1IxBzhBE3sCsX+zTa/YOqOYecnYmU5DxcilcERHy\nEDnBq219bMvB352+Ez0kZIOHP7ccRYImQWCADvFiq39UFmoKKy3MjB2X7qdINY4N\nmuCgOFdN1wKBgQChQno4/VmGdv9Rl9uzbjav1PJXlAa8hf8Q/csQ1cpD1DSDyWdF\nvTTASl3z71dB8tzwmdUUYJkYuoqNNiXuXS2KiYOezfK2CcSiCd+fxob7E27gMfgJ\nCehrloWfeQpHIA1G1ozaCHO5cd4AgHtf0Bc7mdg+IwyQ3Yb5kUK0DeDQiwKBgFzc\n6FwbE2CCmVzjWx299z9LpCO+yhbcqgalnX/Ijn642XD41ee00jc0+AXDdU86DPxR\n5SWNXDHai/cOdA4dREc29fZw6eZtkYnxT8oQHUEOmfUvuo1NRmXcNjHLXEhGks4Y\n14hbtFC0uA6G1HeuEg2gJfH2QIgZIlN3Y248VeQ9AoGBAObvjwxJ//5f/GrJpr/j\nCp0mV+c3XllYnUy92UHzobIlMU46aIKroQ0/Hz8bddNdIWSG3y5n+qiPFtdiIgEf\naV8neh+q3yW8T5av/ObSFt8jBBX9N+zQYZOiaIsh3zlnv5X99Xr2oXEjW1/gz2yJ\nmkpE81UNAP1YeHBvgFJUeoGE\n-----END PRIVATE KEY-----\n" } } }, "ansible_loop_var": "cert_name", "cert_name": "quadlet_demo", "changed": false }

TASK [fedora.linux_system_roles.certificate : Stop tracking certificates] ******
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:169
Saturday 02 August 2025 12:41:57 -0400 (0:00:00.067) 0:00:14.971 *******
ok: [managed-node2] => (item={'cert': '/etc/pki/tls/certs/quadlet_demo.crt', 'key': '/etc/pki/tls/private/quadlet_demo.key', 'ca': '/etc/pki/tls/certs/quadlet_demo.crt', 'cert_content': '-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQMIDTKRI/R36bs9EtDu/QkjANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjMzA4\nMGQzMjktMTIzZjQ3N2UtOWJiM2QxMmQtMGVlZmQwOTEwHhcNMjUwODAyMTY0MTU1\nWhcNMjYwODAyMTY0MTU0WjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDXmITVzEQ8pdgAlSOiqgjISwEpG6J96g5A\n8/Yt1hjdurLf9s/e3hSObad0d4ysHUQOmZZgAeWJY9uyhY0lQa4uGtct3330pKjh\n7dgqXQ1r5DYfW2U+3K7cDlGUoV9zIngLFBPr8Q6AQfKI4FzSFBKIbfBCc5KY/l8u\nAnn+Lohdc8+dPy2n+p07JfqWBg5Xy8VGr/djiDH3gEnolj8+P65ut+0P4qzdeYpy\nMh9ZIiODsXWDAooX8NY4M0RVY5P2jbIioq/mOCfdW+11QH5HX/brYcqTxXorpb3T\nKbvAUb0NWUxns6PYgpC7dW/O3BZ5yuQ7pAeSNxmdAnX1QwDAL/0JAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU9NMN\nWgrSbLcythAnwU3zT6e62nUwHwYDVR0jBBgwFoAUjR57yGLrmDoSjIaZPMrBJnDw\nwpQwDQYJKoZIhvcNAQELBQADggEBAA1ZQwZHYMsYSaWez1YgHxaAEpPkIUzvkMyj\nh+JV+WL825SCsebRV2griN2B77MRr46HJ8yCYm+p1DkuIRG+EDuLe+3ehq0cG2Ij\nR/ntWLtM9zwbA1JxtJBwujQhJj2UJz8dvgz9s0JjfnUuboQvuwmUoREDC3Ltw7lf\nplI9EZg2Votu6z81TxJboC1dxiWd0jKLPK/NamsJPS9sHRNIMGYc2/6NbegIUK+W\nAuY7HqiVlY9c0gGM7EG/9fWp6S4A5Eu2Ioa9lAYveuX2mZIGEn/COGbPGndcrCnM\n2GBtQyG09bV4m0hkofX2wXb1DxgiPW1cQYDtyTGLIVs2Y7ay4Cc=\n-----END CERTIFICATE-----\n', 'key_content': '-----BEGIN PRIVATE KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDXmITVzEQ8pdgA\nlSOiqgjISwEpG6J96g5A8/Yt1hjdurLf9s/e3hSObad0d4ysHUQOmZZgAeWJY9uy\nhY0lQa4uGtct3330pKjh7dgqXQ1r5DYfW2U+3K7cDlGUoV9zIngLFBPr8Q6AQfKI\n4FzSFBKIbfBCc5KY/l8uAnn+Lohdc8+dPy2n+p07JfqWBg5Xy8VGr/djiDH3gEno\nlj8+P65ut+0P4qzdeYpyMh9ZIiODsXWDAooX8NY4M0RVY5P2jbIioq/mOCfdW+11\nQH5HX/brYcqTxXorpb3TKbvAUb0NWUxns6PYgpC7dW/O3BZ5yuQ7pAeSNxmdAnX1\nQwDAL/0JAgMBAAECggEANjAHcxUcMYIdgEf75O6Hto/jgsmHVc52Nq0auvqM1W4P\nd2s5Y/K9h+7Xn9ZMbRYN/P1tZkQTxSxsE7qtdyfC96OXYdhCeDLKn76ADmXE8dEy\nKT93ew1MhdKzfn/w1adcNKyfNTYpRuNMakq6CbMx1UiMkqcpuXp1wcKtls1rSNFU\nn+kiQkRGYOSw/kDsQel8PgBJECA27oEGfG2CBwOe9sLhHKTwTyh9oB4PME1cZGxy\n99KH38+eYhoLAKyWpMz8Hftig4S+IRihqMZ1/X69jBaBRpwrgUZi/6WVhdWylPP0\nk7KdVccusQieuGWwXEl66j/FAT4nLW2V/C99FG2u4QKBgQD4EZ18XXFJMsfNOzbQ\nxmeeb6vzIYm19kBA+V9ZGZ9yFUMEmKuQF4WkdEA817FpFFuSzDTwtIrQqkovajo0\nt6IpilJDwSfC4TIxFZN//3a3fgaYegHnpr1T1jV60j1/K3QIULwPZAq1FXFTR/sM\n5eW3BYVz+YzzqgXSwP9jk/zwHwKBgQDefR6c2Qs+0N41w3er7k1td/SdLhTBJ0vk\nq0RZ03k6Ew5WAn9lkXjcCTOq1IxBzhBE3sCsX+zTa/YOqOYecnYmU5DxcilcERHy\nEDnBq219bMvB352+Ez0kZIOHP7ccRYImQWCADvFiq39UFmoKKy3MjB2X7qdINY4N\nmuCgOFdN1wKBgQChQno4/VmGdv9Rl9uzbjav1PJXlAa8hf8Q/csQ1cpD1DSDyWdF\nvTTASl3z71dB8tzwmdUUYJkYuoqNNiXuXS2KiYOezfK2CcSiCd+fxob7E27gMfgJ\nCehrloWfeQpHIA1G1ozaCHO5cd4AgHtf0Bc7mdg+IwyQ3Yb5kUK0DeDQiwKBgFzc\n6FwbE2CCmVzjWx299z9LpCO+yhbcqgalnX/Ijn642XD41ee00jc0+AXDdU86DPxR\n5SWNXDHai/cOdA4dREc29fZw6eZtkYnxT8oQHUEOmfUvuo1NRmXcNjHLXEhGks4Y\n14hbtFC0uA6G1HeuEg2gJfH2QIgZIlN3Y248VeQ9AoGBAObvjwxJ//5f/GrJpr/j\nCp0mV+c3XllYnUy92UHzobIlMU46aIKroQ0/Hz8bddNdIWSG3y5n+qiPFtdiIgEf\naV8neh+q3yW8T5av/ObSFt8jBBX9N+zQYZOiaIsh3zlnv5X99Xr2oXEjW1/gz2yJ\nmkpE81UNAP1YeHBvgFJUeoGE\n-----END PRIVATE KEY-----\n', 'ca_content': '-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQMIDTKRI/R36bs9EtDu/QkjANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjMzA4\nMGQzMjktMTIzZjQ3N2UtOWJiM2QxMmQtMGVlZmQwOTEwHhcNMjUwODAyMTY0MTU1\nWhcNMjYwODAyMTY0MTU0WjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDXmITVzEQ8pdgAlSOiqgjISwEpG6J96g5A\n8/Yt1hjdurLf9s/e3hSObad0d4ysHUQOmZZgAeWJY9uyhY0lQa4uGtct3330pKjh\n7dgqXQ1r5DYfW2U+3K7cDlGUoV9zIngLFBPr8Q6AQfKI4FzSFBKIbfBCc5KY/l8u\nAnn+Lohdc8+dPy2n+p07JfqWBg5Xy8VGr/djiDH3gEnolj8+P65ut+0P4qzdeYpy\nMh9ZIiODsXWDAooX8NY4M0RVY5P2jbIioq/mOCfdW+11QH5HX/brYcqTxXorpb3T\nKbvAUb0NWUxns6PYgpC7dW/O3BZ5yuQ7pAeSNxmdAnX1QwDAL/0JAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU9NMN\nWgrSbLcythAnwU3zT6e62nUwHwYDVR0jBBgwFoAUjR57yGLrmDoSjIaZPMrBJnDw\nwpQwDQYJKoZIhvcNAQELBQADggEBAA1ZQwZHYMsYSaWez1YgHxaAEpPkIUzvkMyj\nh+JV+WL825SCsebRV2griN2B77MRr46HJ8yCYm+p1DkuIRG+EDuLe+3ehq0cG2Ij\nR/ntWLtM9zwbA1JxtJBwujQhJj2UJz8dvgz9s0JjfnUuboQvuwmUoREDC3Ltw7lf\nplI9EZg2Votu6z81TxJboC1dxiWd0jKLPK/NamsJPS9sHRNIMGYc2/6NbegIUK+W\nAuY7HqiVlY9c0gGM7EG/9fWp6S4A5Eu2Ioa9lAYveuX2mZIGEn/COGbPGndcrCnM\n2GBtQyG09bV4m0hkofX2wXb1DxgiPW1cQYDtyTGLIVs2Y7ay4Cc=\n-----END CERTIFICATE-----\n'}) => { "ansible_loop_var": "item", "changed": false, "cmd": [ "getcert", "stop-tracking", "-f", "/etc/pki/tls/certs/quadlet_demo.crt" ], "delta": "0:00:00.031644", "end": "2025-08-02 12:41:57.699615", "item": { "ca": "/etc/pki/tls/certs/quadlet_demo.crt", "ca_content": "-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQMIDTKRI/R36bs9EtDu/QkjANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjMzA4\nMGQzMjktMTIzZjQ3N2UtOWJiM2QxMmQtMGVlZmQwOTEwHhcNMjUwODAyMTY0MTU1\nWhcNMjYwODAyMTY0MTU0WjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDXmITVzEQ8pdgAlSOiqgjISwEpG6J96g5A\n8/Yt1hjdurLf9s/e3hSObad0d4ysHUQOmZZgAeWJY9uyhY0lQa4uGtct3330pKjh\n7dgqXQ1r5DYfW2U+3K7cDlGUoV9zIngLFBPr8Q6AQfKI4FzSFBKIbfBCc5KY/l8u\nAnn+Lohdc8+dPy2n+p07JfqWBg5Xy8VGr/djiDH3gEnolj8+P65ut+0P4qzdeYpy\nMh9ZIiODsXWDAooX8NY4M0RVY5P2jbIioq/mOCfdW+11QH5HX/brYcqTxXorpb3T\nKbvAUb0NWUxns6PYgpC7dW/O3BZ5yuQ7pAeSNxmdAnX1QwDAL/0JAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU9NMN\nWgrSbLcythAnwU3zT6e62nUwHwYDVR0jBBgwFoAUjR57yGLrmDoSjIaZPMrBJnDw\nwpQwDQYJKoZIhvcNAQELBQADggEBAA1ZQwZHYMsYSaWez1YgHxaAEpPkIUzvkMyj\nh+JV+WL825SCsebRV2griN2B77MRr46HJ8yCYm+p1DkuIRG+EDuLe+3ehq0cG2Ij\nR/ntWLtM9zwbA1JxtJBwujQhJj2UJz8dvgz9s0JjfnUuboQvuwmUoREDC3Ltw7lf\nplI9EZg2Votu6z81TxJboC1dxiWd0jKLPK/NamsJPS9sHRNIMGYc2/6NbegIUK+W\nAuY7HqiVlY9c0gGM7EG/9fWp6S4A5Eu2Ioa9lAYveuX2mZIGEn/COGbPGndcrCnM\n2GBtQyG09bV4m0hkofX2wXb1DxgiPW1cQYDtyTGLIVs2Y7ay4Cc=\n-----END CERTIFICATE-----\n", "cert": "/etc/pki/tls/certs/quadlet_demo.crt", "cert_content": "-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIQMIDTKRI/R36bs9EtDu/QkjANBgkqhkiG9w0BAQsFADBQ\nMSAwHgYDVQQDDBdMb2NhbCBTaWduaW5nIEF1dGhvcml0eTEsMCoGA1UEAwwjMzA4\nMGQzMjktMTIzZjQ3N2UtOWJiM2QxMmQtMGVlZmQwOTEwHhcNMjUwODAyMTY0MTU1\nWhcNMjYwODAyMTY0MTU0WjAUMRIwEAYDVQQDEwlsb2NhbGhvc3QwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDXmITVzEQ8pdgAlSOiqgjISwEpG6J96g5A\n8/Yt1hjdurLf9s/e3hSObad0d4ysHUQOmZZgAeWJY9uyhY0lQa4uGtct3330pKjh\n7dgqXQ1r5DYfW2U+3K7cDlGUoV9zIngLFBPr8Q6AQfKI4FzSFBKIbfBCc5KY/l8u\nAnn+Lohdc8+dPy2n+p07JfqWBg5Xy8VGr/djiDH3gEnolj8+P65ut+0P4qzdeYpy\nMh9ZIiODsXWDAooX8NY4M0RVY5P2jbIioq/mOCfdW+11QH5HX/brYcqTxXorpb3T\nKbvAUb0NWUxns6PYgpC7dW/O3BZ5yuQ7pAeSNxmdAnX1QwDAL/0JAgMBAAGjgZMw\ngZAwCwYDVR0PBAQDAgWgMBQGA1UdEQQNMAuCCWxvY2FsaG9zdDAdBgNVHSUEFjAU\nBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU9NMN\nWgrSbLcythAnwU3zT6e62nUwHwYDVR0jBBgwFoAUjR57yGLrmDoSjIaZPMrBJnDw\nwpQwDQYJKoZIhvcNAQELBQADggEBAA1ZQwZHYMsYSaWez1YgHxaAEpPkIUzvkMyj\nh+JV+WL825SCsebRV2griN2B77MRr46HJ8yCYm+p1DkuIRG+EDuLe+3ehq0cG2Ij\nR/ntWLtM9zwbA1JxtJBwujQhJj2UJz8dvgz9s0JjfnUuboQvuwmUoREDC3Ltw7lf\nplI9EZg2Votu6z81TxJboC1dxiWd0jKLPK/NamsJPS9sHRNIMGYc2/6NbegIUK+W\nAuY7HqiVlY9c0gGM7EG/9fWp6S4A5Eu2Ioa9lAYveuX2mZIGEn/COGbPGndcrCnM\n2GBtQyG09bV4m0hkofX2wXb1DxgiPW1cQYDtyTGLIVs2Y7ay4Cc=\n-----END CERTIFICATE-----\n", "key": "/etc/pki/tls/private/quadlet_demo.key", "key_content": "-----BEGIN PRIVATE KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDXmITVzEQ8pdgA\nlSOiqgjISwEpG6J96g5A8/Yt1hjdurLf9s/e3hSObad0d4ysHUQOmZZgAeWJY9uy\nhY0lQa4uGtct3330pKjh7dgqXQ1r5DYfW2U+3K7cDlGUoV9zIngLFBPr8Q6AQfKI\n4FzSFBKIbfBCc5KY/l8uAnn+Lohdc8+dPy2n+p07JfqWBg5Xy8VGr/djiDH3gEno\nlj8+P65ut+0P4qzdeYpyMh9ZIiODsXWDAooX8NY4M0RVY5P2jbIioq/mOCfdW+11\nQH5HX/brYcqTxXorpb3TKbvAUb0NWUxns6PYgpC7dW/O3BZ5yuQ7pAeSNxmdAnX1\nQwDAL/0JAgMBAAECggEANjAHcxUcMYIdgEf75O6Hto/jgsmHVc52Nq0auvqM1W4P\nd2s5Y/K9h+7Xn9ZMbRYN/P1tZkQTxSxsE7qtdyfC96OXYdhCeDLKn76ADmXE8dEy\nKT93ew1MhdKzfn/w1adcNKyfNTYpRuNMakq6CbMx1UiMkqcpuXp1wcKtls1rSNFU\nn+kiQkRGYOSw/kDsQel8PgBJECA27oEGfG2CBwOe9sLhHKTwTyh9oB4PME1cZGxy\n99KH38+eYhoLAKyWpMz8Hftig4S+IRihqMZ1/X69jBaBRpwrgUZi/6WVhdWylPP0\nk7KdVccusQieuGWwXEl66j/FAT4nLW2V/C99FG2u4QKBgQD4EZ18XXFJMsfNOzbQ\nxmeeb6vzIYm19kBA+V9ZGZ9yFUMEmKuQF4WkdEA817FpFFuSzDTwtIrQqkovajo0\nt6IpilJDwSfC4TIxFZN//3a3fgaYegHnpr1T1jV60j1/K3QIULwPZAq1FXFTR/sM\n5eW3BYVz+YzzqgXSwP9jk/zwHwKBgQDefR6c2Qs+0N41w3er7k1td/SdLhTBJ0vk\nq0RZ03k6Ew5WAn9lkXjcCTOq1IxBzhBE3sCsX+zTa/YOqOYecnYmU5DxcilcERHy\nEDnBq219bMvB352+Ez0kZIOHP7ccRYImQWCADvFiq39UFmoKKy3MjB2X7qdINY4N\nmuCgOFdN1wKBgQChQno4/VmGdv9Rl9uzbjav1PJXlAa8hf8Q/csQ1cpD1DSDyWdF\nvTTASl3z71dB8tzwmdUUYJkYuoqNNiXuXS2KiYOezfK2CcSiCd+fxob7E27gMfgJ\nCehrloWfeQpHIA1G1ozaCHO5cd4AgHtf0Bc7mdg+IwyQ3Yb5kUK0DeDQiwKBgFzc\n6FwbE2CCmVzjWx299z9LpCO+yhbcqgalnX/Ijn642XD41ee00jc0+AXDdU86DPxR\n5SWNXDHai/cOdA4dREc29fZw6eZtkYnxT8oQHUEOmfUvuo1NRmXcNjHLXEhGks4Y\n14hbtFC0uA6G1HeuEg2gJfH2QIgZIlN3Y248VeQ9AoGBAObvjwxJ//5f/GrJpr/j\nCp0mV+c3XllYnUy92UHzobIlMU46aIKroQ0/Hz8bddNdIWSG3y5n+qiPFtdiIgEf\naV8neh+q3yW8T5av/ObSFt8jBBX9N+zQYZOiaIsh3zlnv5X99Xr2oXEjW1/gz2yJ\nmkpE81UNAP1YeHBvgFJUeoGE\n-----END PRIVATE KEY-----\n" }, "rc": 0, "start": "2025-08-02 12:41:57.667971" }
STDOUT:
Request "20250802164155" removed.
TASK [fedora.linux_system_roles.certificate : Remove files] ********************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:174
Saturday 02 August 2025 12:41:57 -0400 (0:00:00.428) 0:00:15.399 *******
changed: [managed-node2] => (item=/etc/pki/tls/certs/quadlet_demo.crt) => { "ansible_loop_var": "item", "changed": true, "item": "/etc/pki/tls/certs/quadlet_demo.crt", "path": "/etc/pki/tls/certs/quadlet_demo.crt", "state": "absent" }
changed: [managed-node2] => (item=/etc/pki/tls/private/quadlet_demo.key) => { "ansible_loop_var": "item", "changed": true, "item": "/etc/pki/tls/private/quadlet_demo.key", "path": "/etc/pki/tls/private/quadlet_demo.key", "state": "absent" }
ok: [managed-node2] => (item=/etc/pki/tls/certs/quadlet_demo.crt) => { "ansible_loop_var": "item", "changed": false, "item": "/etc/pki/tls/certs/quadlet_demo.crt", "path": "/etc/pki/tls/certs/quadlet_demo.crt", "state": "absent" }

TASK [Run the role] ************************************************************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:62
Saturday 02 August 2025 12:41:58 -0400 (0:00:01.152) 0:00:16.552 *******

TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3
Saturday 02 August 2025 12:41:59 -0400 (0:00:00.094) 0:00:16.647 *******
included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] ****
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3
Saturday 02 August 2025 12:41:59 -0400 (0:00:00.027) 0:00:16.675 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11
Saturday 02 August 2025 12:41:59 -0400 (0:00:00.020) 0:00:16.695 *******
ok: [managed-node2] => { "changed": false, "stat": { "exists": false } }

TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16
Saturday 02 August 2025 12:41:59 -0400 (0:00:00.344) 0:00:17.040 *******
ok: [managed-node2] => { "ansible_facts": { "__podman_is_ostree": false }, "changed": false }

TASK [fedora.linux_system_roles.podman : Check if transactional-update exists in /sbin] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:23
Saturday 02 August 2025 12:41:59 -0400 (0:00:00.038) 0:00:17.079 *******
ok: [managed-node2] => { "changed": false, "stat": { "exists": false } }

TASK [fedora.linux_system_roles.podman : Set flag if transactional-update exists] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:28
Saturday 02 August 2025 12:41:59 -0400 (0:00:00.372) 0:00:17.452 *******
ok: [managed-node2] => { "ansible_facts": { "__podman_is_transactional": false }, "changed": false }

TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:32
Saturday 02 August 2025 12:41:59 -0400 (0:00:00.036) 0:00:17.489 *******
ok: [managed-node2] => (item=RedHat.yml) => { "ansible_facts": { "__podman_packages": [ "podman", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/vars/RedHat.yml" ], "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml" }
skipping: [managed-node2] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "item": "CentOS.yml", "skip_reason": "Conditional result was False" }
ok: [managed-node2] => (item=CentOS_8.yml) => { "ansible_facts": { "__podman_packages": [ "crun", "podman", "podman-plugins", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/vars/CentOS_8.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_8.yml" }
ok: [managed-node2] => (item=CentOS_8.yml) => { "ansible_facts": { "__podman_packages": [ "crun", "podman", "podman-plugins", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/vars/CentOS_8.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_8.yml" }

TASK [fedora.linux_system_roles.podman : Gather the package facts] *************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6
Saturday 02 August 2025 12:41:59 -0400 (0:00:00.069) 0:00:17.558 *******
ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false }

TASK [fedora.linux_system_roles.podman : Enable copr if requested] *************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10
Saturday 02 August 2025 12:42:01 -0400 (0:00:01.819) 0:00:19.378 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14
Saturday 02 August 2025 12:42:01 -0400 (0:00:00.049) 0:00:19.427 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Notify user that reboot is needed to apply changes] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28
Saturday 02 August 2025 12:42:01 -0400 (0:00:00.047) 0:00:19.475 *******
skipping: [managed-node2] => {}

TASK [fedora.linux_system_roles.podman : Reboot transactional update systems] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:33
Saturday 02 August 2025 12:42:01 -0400 (0:00:00.035) 0:00:19.511 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if reboot is needed and not set] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:38
Saturday 02 August 2025 12:42:01 -0400 (0:00:00.035) 0:00:19.546 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Get podman version] *******************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:46
Saturday 02 August 2025 12:42:01 -0400 (0:00:00.034) 0:00:19.581 *******
ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "--version" ], "delta": "0:00:00.034150", "end": "2025-08-02 12:42:02.281676", "rc": 0, "start": "2025-08-02 12:42:02.247526" }
STDOUT:
podman version 4.9.4-dev

TASK [fedora.linux_system_roles.podman : Set podman version] *******************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:52
Saturday 02 August 2025 12:42:02 -0400 (0:00:00.405) 0:00:19.987 *******
ok: [managed-node2] => { "ansible_facts": { "podman_version": "4.9.4-dev" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56
Saturday 02 August 2025 12:42:02 -0400 (0:00:00.036) 0:00:20.023 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:63
Saturday 02 August 2025 12:42:02 -0400 (0:00:00.035) 0:00:20.059 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
META: end_host conditional evaluated to false, continuing execution for managed-node2

TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80
Saturday 02 August 2025 12:42:02 -0400 (0:00:00.146) 0:00:20.206 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
META: end_host conditional evaluated to false, continuing execution for managed-node2

TASK [fedora.linux_system_roles.podman : Check user and group information] *****
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:109
Saturday 02 August 2025 12:42:02 -0400 (0:00:00.088) 0:00:20.295 *******
included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Get user information] *****************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10
Saturday 02 August 2025 12:42:02 -0400 (0:00:00.098) 0:00:20.393 *******
ok: [managed-node2] => { "ansible_facts": { "getent_passwd": { "root": [ "x", "0", "0", "root", "/root", "/bin/bash" ] } }, "changed": false }

TASK [fedora.linux_system_roles.podman : Fail if user does not exist] **********
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17
Saturday 02 August 2025 12:42:03 -0400 (0:00:00.509) 0:00:20.903 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set group for podman user] ************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24
Saturday 02 August 2025 12:42:03 -0400 (0:00:00.086) 0:00:20.990 *******
ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false }

TASK [fedora.linux_system_roles.podman : See if getsubids exists] **************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39
Saturday 02 August 2025 12:42:03 -0400 (0:00:00.078) 0:00:21.069 *******
ok: [managed-node2] => { "changed": false, "stat": { "atime": 1754152551.4765396, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "bb5b46ffbafcaa8c4021f3c8b3cb8594f48ef34b", "ctime": 1754152522.3404324, "dev": 51713, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 6986657, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-sharedlib", "mode": "0755", "mtime": 1700557386.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 12640, "uid": 0, "version": "135203907", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50
Saturday 02 August 2025 12:42:03 -0400 (0:00:00.419) 0:00:21.488 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55
Saturday 02 August 2025 12:42:03 -0400 (0:00:00.069) 0:00:21.557 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60
Saturday 02 August 2025 12:42:03 -0400 (0:00:00.062) 0:00:21.620 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subuid file] **********************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73
Saturday 02 August 2025 12:42:04 -0400 (0:00:00.065) 0:00:21.686 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get subgid file] **********************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78
Saturday 02 August 2025 12:42:04 -0400 (0:00:00.061) 0:00:21.748 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ******
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83
Saturday 02 August 2025 12:42:04 -0400 (0:00:00.064) 0:00:21.812 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ******
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93
Saturday 02 August 2025 12:42:04 -0400 (0:00:00.062) 0:00:21.875 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ******
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100
Saturday 02 August 2025 12:42:04 -0400 (0:00:00.065) 0:00:21.940 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Set config file paths] ****************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:115
Saturday 02 August 2025 12:42:04 -0400 (0:00:00.071) 0:00:22.012 *******
ok: [managed-node2] => { "ansible_facts": { "__podman_container_conf_file": "/etc/containers/containers.conf.d/50-systemroles.conf", "__podman_parent_mode": "0755", "__podman_parent_path": "/etc/containers", "__podman_policy_json_file": "/etc/containers/policy.json", "__podman_registries_conf_file": "/etc/containers/registries.conf.d/50-systemroles.conf", "__podman_storage_conf_file": "/etc/containers/storage.conf" }, "changed": false }

TASK [fedora.linux_system_roles.podman : Handle container.conf.d] **************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:126
Saturday 02 August 2025 12:42:04 -0400 (0:00:00.152) 0:00:22.165 *******
included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure containers.d exists] ***********
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5
Saturday 02 August 2025 12:42:04 -0400 (0:00:00.119) 0:00:22.284 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Update container config file] *********
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13
Saturday 02 August 2025 12:42:04 -0400 (0:00:00.082) 0:00:22.366 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] *************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:129
Saturday 02 August 2025 12:42:04 -0400 (0:00:00.046) 0:00:22.413 *******
included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] ***********
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5
Saturday 02 August 2025 12:42:04 -0400 (0:00:00.076) 0:00:22.490 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Update registries config file] ********
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13
Saturday 02 August 2025 12:42:04 -0400 (0:00:00.036) 0:00:22.526 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Handle storage.conf] ******************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:132
Saturday 02 August 2025 12:42:04 -0400 (0:00:00.053) 0:00:22.579 *******
included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent dir exists] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:7
Saturday 02 August 2025 12:42:05 -0400 (0:00:00.111) 0:00:22.691 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Update storage config file] ***********
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:15
Saturday 02 August 2025 12:42:05 -0400 (0:00:00.064) 0:00:22.755 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Handle policy.json] *******************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:135
Saturday 02 August 2025 12:42:05 -0400 (0:00:00.061) 0:00:22.816 *******
included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for managed-node2

TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:8
Saturday 02 August 2025 12:42:05 -0400 (0:00:00.131) 0:00:22.948 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:16
Saturday 02 August 2025 12:42:05 -0400 (0:00:00.060) 0:00:23.009 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Get the existing policy.json] *********
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:21
Saturday 02 August 2025 12:42:05 -0400 (0:00:00.063) 0:00:23.073 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.podman : Write new policy.json file] ***********
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:27
Saturday 02 August 2025 12:42:05 -0400 (0:00:00.069) 0:00:23.143 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [Manage firewall for specified ports] *************************************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:141
Saturday 02 August 2025 12:42:05 -0400 (0:00:00.059) 0:00:23.202 *******

TASK [fedora.linux_system_roles.firewall : Setup firewalld] ********************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:2
Saturday 02 August 2025 12:42:05 -0400 (0:00:00.253) 0:00:23.456 *******
included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml for managed-node2

TASK [fedora.linux_system_roles.firewall : Ensure ansible_facts used by role] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:2
Saturday 02 August 2025 12:42:05 -0400 (0:00:00.117) 0:00:23.573 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.firewall : Check if system is ostree] **********
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:10
Saturday 02 August 2025 12:42:06 -0400 (0:00:00.072) 0:00:23.645 *******
ok: [managed-node2] => { "changed": false, "stat": { "exists": false } }

TASK [fedora.linux_system_roles.firewall : Set flag to indicate system is ostree] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:15
Saturday 02 August 2025 12:42:06 -0400 (0:00:00.422) 0:00:24.067 *******
ok: [managed-node2] => { "ansible_facts": { "__firewall_is_ostree": false }, "changed": false }

TASK [fedora.linux_system_roles.firewall : Check if transactional-update exists in /sbin] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:22
Saturday 02 August 2025 12:42:06 -0400 (0:00:00.065) 0:00:24.133 *******
ok: [managed-node2] => { "changed": false, "stat": { "exists": false } }

TASK [fedora.linux_system_roles.firewall : Set flag if transactional-update exists] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:27
Saturday 02 August 2025 12:42:06 -0400 (0:00:00.426) 0:00:24.559 *******
ok: [managed-node2] => { "ansible_facts": { "__firewall_is_transactional": false }, "changed": false }

TASK [fedora.linux_system_roles.firewall : Run systemctl] **********************
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:34
Saturday 02 August 2025 12:42:06 -0400 (0:00:00.065) 0:00:24.625 *******
ok: [managed-node2] => { "changed": false, "cmd": [ "systemctl", "is-system-running" ], "delta": "0:00:00.007431", "end": "2025-08-02 12:42:07.323864", "failed_when_result": false, "rc": 0, "start": "2025-08-02 12:42:07.316433" }
STDOUT:
running

TASK [fedora.linux_system_roles.firewall : Require installed systemd] **********
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:41
Saturday 02 August 2025 12:42:07 -0400 (0:00:00.425) 0:00:25.051 *******
skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }

TASK [fedora.linux_system_roles.firewall : Set flag to indicate that systemd runtime operations are available] ***
task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:46
Saturday 02 August 2025 12:42:07 -0400 (0:00:00.060)
0:00:25.112 ******* ok: [managed-node2] => { "ansible_facts": { "__firewall_is_booted": true }, "changed": false } TASK [fedora.linux_system_roles.firewall : Install firewalld] ****************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:51 Saturday 02 August 2025 12:42:07 -0400 (0:00:00.068) 0:00:25.180 ******* ok: [managed-node2] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: firewalld TASK [fedora.linux_system_roles.firewall : Notify user that reboot is needed to apply changes] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:63 Saturday 02 August 2025 12:42:10 -0400 (0:00:02.909) 0:00:28.089 ******* skipping: [managed-node2] => {} TASK [fedora.linux_system_roles.firewall : Reboot transactional update systems] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:68 Saturday 02 August 2025 12:42:10 -0400 (0:00:00.053) 0:00:28.142 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Fail if reboot is needed and not set] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:73 Saturday 02 August 2025 12:42:10 -0400 (0:00:00.073) 0:00:28.216 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Check which conflicting services are enabled] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:5 Saturday 02 August 2025 12:42:10 -0400 (0:00:00.059) 0:00:28.275 ******* skipping: [managed-node2] => (item=nftables) => { "ansible_loop_var": "item", "changed": false, "item": "nftables", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=iptables) => { "ansible_loop_var": "item", "changed": false, "item": "iptables", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=ufw) => { "ansible_loop_var": "item", "changed": false, "item": "ufw", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Attempt to stop and disable conflicting services] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:14 Saturday 02 August 2025 12:42:10 -0400 (0:00:00.100) 0:00:28.376 ******* skipping: [managed-node2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': 'nftables', 'ansible_loop_var': 'item'}) => { "ansible_loop_var": "item", "changed": false, "item": { "ansible_loop_var": "item", "changed": false, "item": "nftables", "skip_reason": "Conditional result was False", "skipped": true }, "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': 'iptables', 'ansible_loop_var': 'item'}) => { "ansible_loop_var": "item", "changed": false, "item": { "ansible_loop_var": "item", "changed": false, "item": "iptables", "skip_reason": "Conditional result was False", "skipped": true }, "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': 'ufw', 
'ansible_loop_var': 'item'}) => { "ansible_loop_var": "item", "changed": false, "item": { "ansible_loop_var": "item", "changed": false, "item": "ufw", "skip_reason": "Conditional result was False", "skipped": true }, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Unmask firewalld service] *********** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:24 Saturday 02 August 2025 12:42:10 -0400 (0:00:00.061) 0:00:28.437 ******* ok: [managed-node2] => { "changed": false, "name": "firewalld", "status": { "ActiveEnterTimestamp": "Sat 2025-08-02 12:35:59 EDT", "ActiveEnterTimestampMonotonic": "323943793", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "basic.target sysinit.target dbus.service polkit.service dbus.socket system.slice", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Sat 2025-08-02 12:35:58 EDT", "AssertTimestampMonotonic": "323267235", "Before": "shutdown.target network-pre.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "no", "CPUAffinity": "", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2025-08-02 12:35:58 EDT", "ConditionTimestampMonotonic": "323267233", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target nftables.service ipset.service iptables.service ip6tables.service ebtables.service", "ControlGroup": "/system.slice/firewalld.service", "ControlPID": "0", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "man:firewalld(1)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "12989", "ExecMainStartTimestamp": "Sat 2025-08-02 12:35:58 EDT", "ExecMainStartTimestampMonotonic": "323278597", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 
}", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2025-08-02 12:35:58 EDT", "InactiveExitTimestampMonotonic": "323278631", "InvocationID": "5c543fdf07b74af08a33ac740bfb5bdf", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "14003", "LimitNPROCSoft": "14003", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14003", "LimitSIGPENDINGSoft": "14003", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "12989", "MemoryAccounting": "yes", "MemoryCurrent": "41259008", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "firewalld.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "sysinit.target system.slice dbus.socket", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": 
"[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2025-08-02 12:35:59 EDT", "StateChangeTimestampMonotonic": "323943793", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22405", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogTimestamp": "Sat 2025-08-02 12:35:59 EDT", "WatchdogTimestampMonotonic": "323943790", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Enable and start firewalld service] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:30 Saturday 02 August 2025 12:42:11 -0400 (0:00:00.517) 0:00:28.955 ******* ok: [managed-node2] => { "changed": false, "enabled": true, "name": "firewalld", "state": "started", "status": { "ActiveEnterTimestamp": "Sat 2025-08-02 12:35:59 EDT", "ActiveEnterTimestampMonotonic": "323943793", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "basic.target sysinit.target dbus.service polkit.service dbus.socket system.slice", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Sat 2025-08-02 12:35:58 EDT", "AssertTimestampMonotonic": "323267235", "Before": "shutdown.target network-pre.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "no", "CPUAffinity": "", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2025-08-02 12:35:58 EDT", "ConditionTimestampMonotonic": "323267233", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target nftables.service ipset.service iptables.service ip6tables.service ebtables.service", "ControlGroup": "/system.slice/firewalld.service", "ControlPID": "0", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", 
"DevicePolicy": "auto", "Documentation": "man:firewalld(1)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "12989", "ExecMainStartTimestamp": "Sat 2025-08-02 12:35:58 EDT", "ExecMainStartTimestampMonotonic": "323278597", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2025-08-02 12:35:58 EDT", "InactiveExitTimestampMonotonic": "323278631", "InvocationID": "5c543fdf07b74af08a33ac740bfb5bdf", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "14003", "LimitNPROCSoft": "14003", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14003", "LimitSIGPENDINGSoft": "14003", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "12989", "MemoryAccounting": "yes", "MemoryCurrent": "41259008", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "firewalld.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", 
"RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "sysinit.target system.slice dbus.socket", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2025-08-02 12:35:59 EDT", "StateChangeTimestampMonotonic": "323943793", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22405", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogTimestamp": "Sat 2025-08-02 12:35:59 EDT", "WatchdogTimestampMonotonic": "323943790", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Check if previous replaced is defined] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:36 Saturday 02 August 2025 12:42:11 -0400 (0:00:00.542) 0:00:29.497 ******* ok: [managed-node2] => { "ansible_facts": { "__firewall_previous_replaced": false, "__firewall_python_cmd": "/usr/libexec/platform-python", "__firewall_report_changed": true }, "changed": false } TASK [fedora.linux_system_roles.firewall : Get config files, checksums before and remove] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:45 Saturday 02 August 2025 12:42:11 -0400 (0:00:00.054) 0:00:29.551 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Tell firewall module it is able to report changed] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:58 Saturday 02 August 2025 12:42:11 -0400 (0:00:00.043) 0:00:29.594 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Configure firewall] ***************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:74 Saturday 02 August 2025 12:42:11 -0400 (0:00:00.035) 0:00:29.630 ******* changed: [managed-node2] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "__firewall_changed": true, "ansible_loop_var": "item", "changed": true, "item": { "port": "8000/tcp", "state": "enabled" } } changed: [managed-node2] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "__firewall_changed": 
true, "ansible_loop_var": "item", "changed": true, "item": { "port": "9000/tcp", "state": "enabled" } } TASK [fedora.linux_system_roles.firewall : Gather firewall config information] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:126 Saturday 02 August 2025 12:42:13 -0400 (0:00:01.413) 0:00:31.044 ******* skipping: [managed-node2] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "item": { "port": "8000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "item": { "port": "9000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] ******* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:137 Saturday 02 August 2025 12:42:13 -0400 (0:00:00.100) 0:00:31.145 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Gather firewall config if no arguments] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:146 Saturday 02 August 2025 12:42:13 -0400 (0:00:00.071) 0:00:31.216 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] ******* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:152 Saturday 02 August 2025 12:42:13 -0400 (0:00:00.047) 0:00:31.264 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Get config files, checksums after] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:161 Saturday 02 August 2025 12:42:13 -0400 (0:00:00.046) 0:00:31.310 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Calculate what has changed] ********* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:172 Saturday 02 August 2025 12:42:13 -0400 (0:00:00.052) 0:00:31.363 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Show diffs] ************************* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:178 Saturday 02 August 2025 12:42:13 -0400 (0:00:00.044) 0:00:31.407 ******* skipping: [managed-node2] => {} TASK [Manage selinux for specified ports] ************************************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:148 Saturday 02 August 2025 12:42:13 -0400 (0:00:00.036) 0:00:31.443 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:155 Saturday 02 August 2025 12:42:13 -0400 
(0:00:00.047) 0:00:31.491 ******* ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - present] ******* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:159 Saturday 02 August 2025 12:42:13 -0400 (0:00:00.033) 0:00:31.525 ******* skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle credential files - present] **** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:168 Saturday 02 August 2025 12:42:13 -0400 (0:00:00.065) 0:00:31.590 ******* skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle secrets] *********************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:177 Saturday 02 August 2025 12:42:13 -0400 (0:00:00.032) 0:00:31.623 ******* included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Set variables part 1] ***************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.148) 0:00:31.771 ******* ok: [managed-node2] => { "ansible_facts": { "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.038) 0:00:31.809 ******* included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.063) 0:00:31.873 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.042) 0:00:31.915 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.043) 0:00:31.959 ******* ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman 
: See if getsubids exists] ************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.048) 0:00:32.007 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.034) 0:00:32.041 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.035) 0:00:32.077 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.034) 0:00:32.111 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.034) 0:00:32.146 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.035) 0:00:32.181 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.034) 0:00:32.215 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.035) 0:00:32.251 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.034) 0:00:32.285 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set variables part 2] ***************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:15 Saturday 02 August 2025 
12:42:14 -0400 (0:00:00.065) 0:00:32.351 ******* ok: [managed-node2] => { "ansible_facts": { "__podman_rootless": false, "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:21 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.045) 0:00:32.397 ******* included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.065) 0:00:32.462 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.045) 0:00:32.507 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.036) 0:00:32.543 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:26 Saturday 02 August 2025 12:42:14 -0400 (0:00:00.048) 0:00:32.592 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Manage each secret] ******************* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42 Saturday 02 August 2025 12:42:15 -0400 (0:00:00.045) 0:00:32.637 ******* fatal: [managed-node2]: FAILED! => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result" } TASK [Dump journal] ************************************************************ task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:142 Saturday 02 August 2025 12:42:15 -0400 (0:00:00.069) 0:00:32.707 ******* fatal: [managed-node2]: FAILED! => { "changed": false, "cmd": [ "journalctl", "-ex" ], "delta": "0:00:00.028703", "end": "2025-08-02 12:42:15.455975", "failed_when_result": true, "rc": 0, "start": "2025-08-02 12:42:15.427272" } STDOUT: -- Logs begin at Sat 2025-08-02 12:30:34 EDT, end at Sat 2025-08-02 12:42:15 EDT. 
-- Aug 02 12:36:00 managed-node2 platform-python[13179]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Aug 02 12:36:01 managed-node2 platform-python[13302]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:36:01 managed-node2 platform-python[13425]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:36:02 managed-node2 platform-python[13548]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:36:05 managed-node2 platform-python[13671]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:36:08 managed-node2 platform-python[13794]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:36:10 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Aug 02 12:36:10 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Aug 02 12:36:10 managed-node2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. -- Subject: Unit run-ra2afdf9e5f5f4df293ca569a6cdf6359.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit run-ra2afdf9e5f5f4df293ca569a6cdf6359.service has finished starting up. -- -- The start-up result is done. Aug 02 12:36:10 managed-node2 systemd[1]: Starting man-db-cache-update.service... -- Subject: Unit man-db-cache-update.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has begun starting up. Aug 02 12:36:11 managed-node2 systemd[1]: man-db-cache-update.service: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit man-db-cache-update.service has successfully entered the 'dead' state. Aug 02 12:36:11 managed-node2 systemd[1]: Started man-db-cache-update.service. -- Subject: Unit man-db-cache-update.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has finished starting up. -- -- The start-up result is done. Aug 02 12:36:11 managed-node2 systemd[1]: run-ra2afdf9e5f5f4df293ca569a6cdf6359.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-ra2afdf9e5f5f4df293ca569a6cdf6359.service has successfully entered the 'dead' state. Aug 02 12:36:11 managed-node2 platform-python[14399]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Aug 02 12:36:12 managed-node2 platform-python[14547]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:36:13 managed-node2 platform-python[14671]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Aug 02 12:36:15 managed-node2 kernel: SELinux: Converting 460 SID table entries... Aug 02 12:36:15 managed-node2 kernel: SELinux: policy capability network_peer_controls=1 Aug 02 12:36:15 managed-node2 kernel: SELinux: policy capability open_perms=1 Aug 02 12:36:15 managed-node2 kernel: SELinux: policy capability extended_socket_class=1 Aug 02 12:36:15 managed-node2 kernel: SELinux: policy capability always_check_network=0 Aug 02 12:36:15 managed-node2 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 02 12:36:15 managed-node2 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 02 12:36:15 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Aug 02 12:36:15 managed-node2 platform-python[14798]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Aug 02 12:36:20 managed-node2 platform-python[14921]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:36:22 managed-node2 platform-python[15046]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:36:23 managed-node2 platform-python[15169]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:36:23 managed-node2 platform-python[15292]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Aug 02 12:36:23 managed-node2 
platform-python[15391]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/nopull.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152583.1815786-10272-87697288283027/source _original_basename=tmp987jyyff follow=False checksum=d5dc917e3cae36de03aa971a17ac473f86fdf934 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Aug 02 12:36:24 managed-node2 platform-python[15516]: ansible-containers.podman.podman_play Invoked with state=created kube_file=/etc/containers/ansible-kubernetes.d/nopull.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:36:24 managed-node2 kernel: evm: overlay not supported Aug 02 12:36:24 managed-node2 systemd[1]: Created slice machine.slice. -- Subject: Unit machine.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:36:24 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice. -- Subject: Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:36:25 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
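
The nopull entries above show the test driving the containers.podman.podman_play module against /etc/containers/ansible-kubernetes.d/nopull.yml. A minimal task reproducing the logged invocation would look roughly like the sketch below; only kube_file, state, and executable are taken from the logged module arguments, and the task name is an assumption:

    # Sketch reconstructed from the journal entry above; the task name is
    # hypothetical, the module parameters mirror the logged invocation.
    - name: Create pod from kube file without pulling images
      containers.podman.podman_play:
        kube_file: /etc/containers/ansible-kubernetes.d/nopull.yml
        state: created
        executable: podman
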
Aug 02 12:36:29 managed-node2 platform-python[15841]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:36:31 managed-node2 platform-python[15970]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:36:34 managed-node2 platform-python[16095]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:36:37 managed-node2 platform-python[16218]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Aug 02 12:36:38 managed-node2 platform-python[16345]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Aug 02 12:36:38 managed-node2 platform-python[16472]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Aug 02 12:36:41 managed-node2 platform-python[16595]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:36:44 managed-node2 platform-python[16718]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:36:47 managed-node2 platform-python[16841]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False 
validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:36:50 managed-node2 platform-python[16964]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Aug 02 12:36:51 managed-node2 platform-python[17112]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Aug 02 12:36:52 managed-node2 platform-python[17235]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Aug 02 12:36:57 managed-node2 platform-python[17358]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:36:59 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Aug 02 12:37:00 managed-node2 platform-python[17620]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:37:00 managed-node2 platform-python[17743]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:37:01 managed-node2 platform-python[17866]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Aug 02 12:37:01 managed-node2 platform-python[17965]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/bogus.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152620.9003708-11840-265132279831358/source _original_basename=tmp6af94dg8 follow=False checksum=f8266a972ed3be7e204d2a67883fe3a22b8dbf18 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Aug 02 12:37:02 managed-node2 platform-python[18090]: ansible-containers.podman.podman_play Invoked with state=created kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:37:02 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice. 
-- Subject: Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished starting up.
--
-- The start-up result is done.
Aug 02 12:37:02 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Aug 02 12:37:05 managed-node2 platform-python[18377]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:37:06 managed-node2 platform-python[18506]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:37:10 managed-node2 platform-python[18631]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:37:13 managed-node2 platform-python[18754]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None
Aug 02 12:37:14 managed-node2 platform-python[18881]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None
Aug 02 12:37:14 managed-node2 platform-python[19008]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None
Aug 02 12:37:16 managed-node2 platform-python[19131]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:37:19 managed-node2 platform-python[19254]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:37:22 managed-node2 platform-python[19377]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:37:25 managed-node2 platform-python[19500]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Aug 02 12:37:27 managed-node2 platform-python[19648]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True
Aug 02 12:37:28 managed-node2 platform-python[19771]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked
Aug 02 12:37:32 managed-node2 platform-python[19894]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:37:33 managed-node2 platform-python[20019]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:37:34 managed-node2 platform-python[20143]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None
Aug 02 12:37:35 managed-node2 platform-python[20270]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:37:35 managed-node2 platform-python[20395]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/nopull.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Aug 02 12:37:35 managed-node2 platform-python[20395]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/nopull.yml
Aug 02 12:37:35 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice.
-- Subject: Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished shutting down.
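The teardown just logged completes one full kube-file cycle: the role stats the spec under /etc/containers/ansible-kubernetes.d/, deploys it with containers.podman.podman_play, and later removes it with state=absent, deletes the file, and prunes images. A minimal sketch of that cycle as Ansible tasks, reconstructed from the module invocations in the journal above (task names are illustrative; the modules, paths, and parameters are the ones actually logged):

- name: Ensure the spec directory exists
  ansible.builtin.file:
    path: /etc/containers/ansible-kubernetes.d
    state: directory
    owner: root
    group: "0"
    mode: "0755"

- name: Copy the kube spec onto the host
  ansible.builtin.copy:
    src: nopull.yml                # illustrative local source name
    dest: /etc/containers/ansible-kubernetes.d/nopull.yml
    owner: root
    group: "0"
    mode: "0644"

- name: Create the pod from the spec
  containers.podman.podman_play:
    kube_file: /etc/containers/ansible-kubernetes.d/nopull.yml
    state: created

- name: Tear the pod down again
  containers.podman.podman_play:
    kube_file: /etc/containers/ansible-kubernetes.d/nopull.yml
    state: absent

- name: Remove the spec file
  ansible.builtin.file:
    path: /etc/containers/ansible-kubernetes.d/nopull.yml
    state: absent

- name: Prune unused images, as the cleanup command above does
  ansible.builtin.command: podman image prune -f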
Aug 02 12:37:35 managed-node2 systemd[1]: machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice completed and consumed the indicated resources.
Aug 02 12:37:35 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Aug 02 12:37:36 managed-node2 platform-python[20533]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Aug 02 12:37:36 managed-node2 platform-python[20656]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:37:39 managed-node2 platform-python[20911]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:37:41 managed-node2 platform-python[21040]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:37:45 managed-node2 platform-python[21165]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:37:48 managed-node2 platform-python[21288]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None
Aug 02 12:37:48 managed-node2 platform-python[21415]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None
Aug 02 12:37:49 managed-node2 platform-python[21542]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None
Aug 02 12:37:51 managed-node2 platform-python[21665]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:37:54 managed-node2 platform-python[21788]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:37:57 managed-node2 platform-python[21911]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:38:00 managed-node2 platform-python[22034]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Aug 02 12:38:02 managed-node2 platform-python[22182]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True
Aug 02 12:38:03 managed-node2 platform-python[22305]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked
Aug 02 12:38:08 managed-node2 platform-python[22428]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:38:09 managed-node2 platform-python[22553]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:38:10 managed-node2 platform-python[22677]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None
Aug 02 12:38:10 managed-node2 platform-python[22804]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:38:11 managed-node2 platform-python[22929]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Aug 02 12:38:11 managed-node2 platform-python[22929]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/bogus.yml
Aug 02 12:38:11 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice.
-- Subject: Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished shutting down.
Aug 02 12:38:11 managed-node2 systemd[1]: machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice: Consumed 0 CPU time
-- Subject: Resources consumed by unit runtime
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice completed and consumed the indicated resources.
Aug 02 12:38:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Aug 02 12:38:12 managed-node2 platform-python[23068]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Aug 02 12:38:12 managed-node2 platform-python[23191]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:38:16 managed-node2 platform-python[23446]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:38:17 managed-node2 platform-python[23575]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:38:21 managed-node2 platform-python[23700]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:38:24 managed-node2 platform-python[23823]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None
Aug 02 12:38:24 managed-node2 platform-python[23950]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None
Aug 02 12:38:25 managed-node2 platform-python[24077]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None
Aug 02 12:38:27 managed-node2 platform-python[24200]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:38:30 managed-node2 platform-python[24323]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:38:33 managed-node2 platform-python[24446]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:38:36 managed-node2 platform-python[24569]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Aug 02 12:38:38 managed-node2 platform-python[24717]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True
Aug 02 12:38:38 managed-node2 platform-python[24840]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked
Aug 02 12:38:43 managed-node2 platform-python[24963]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None
Aug 02 12:38:43 managed-node2 platform-python[25087]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:38:44 managed-node2 platform-python[25212]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:38:44 managed-node2 platform-python[25336]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:38:46 managed-node2 platform-python[25460]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:38:47 managed-node2 platform-python[25584]: ansible-command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None
Aug 02 12:38:47 managed-node2 systemd[1]: Created slice User Slice of UID 3001.
-- Subject: Unit user-3001.slice has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit user-3001.slice has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:47 managed-node2 systemd[1]: Starting User runtime directory /run/user/3001...
-- Subject: Unit user-runtime-dir@3001.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit user-runtime-dir@3001.service has begun starting up.
Aug 02 12:38:47 managed-node2 systemd[1]: Started User runtime directory /run/user/3001.
-- Subject: Unit user-runtime-dir@3001.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit user-runtime-dir@3001.service has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:47 managed-node2 systemd[1]: Starting User Manager for UID 3001...
-- Subject: Unit user@3001.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit user@3001.service has begun starting up.
Aug 02 12:38:47 managed-node2 systemd[25590]: pam_unix(systemd-user:session): session opened for user podman_basic_user by (uid=0)
Aug 02 12:38:47 managed-node2 systemd[25590]: Starting D-Bus User Message Bus Socket.
-- Subject: Unit UNIT has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has begun starting up.
Aug 02 12:38:47 managed-node2 systemd[25590]: Started Mark boot as successful after the user session has run 2 minutes.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:47 managed-node2 systemd[25590]: Reached target Paths.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:47 managed-node2 systemd[25590]: Reached target Timers.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:47 managed-node2 systemd[25590]: Listening on D-Bus User Message Bus Socket.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
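The user-3001 unit activity above is triggered by the rootless-user preparation logged just before it: the role verifies subordinate ID ranges with getsubids and enables lingering, guarded by the marker file systemd-logind creates, so user@3001.service keeps running without an interactive session. A hedged sketch of those steps (task names and changed_when are additions for readability; the commands and the creates guard are taken verbatim from the invocations above):

- name: Verify subuid ranges exist for the rootless user
  ansible.builtin.command: getsubids podman_basic_user
  changed_when: false    # read-only check; changed_when is an assumption, not in the log

- name: Verify subgid ranges exist for the rootless user
  ansible.builtin.command: getsubids -g podman_basic_user
  changed_when: false

- name: Enable lingering so the per-user systemd manager survives logout
  ansible.builtin.command: loginctl enable-linger podman_basic_user
  args:
    creates: /var/lib/systemd/linger/podman_basic_user    # idempotence guard seen in the log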
Aug 02 12:38:47 managed-node2 systemd[25590]: Reached target Sockets.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:47 managed-node2 systemd[25590]: Reached target Basic System.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:47 managed-node2 systemd[25590]: Reached target Default.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:47 managed-node2 systemd[25590]: Startup finished in 32ms.
-- Subject: User manager start-up is now complete
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The user manager instance for user 3001 has been started. All services queued
-- for starting have been started. Note that other services might still be starting
-- up or be started at any later time.
--
-- Startup of the manager took 32456 microseconds.
Aug 02 12:38:47 managed-node2 systemd[1]: Started User Manager for UID 3001.
-- Subject: Unit user@3001.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit user@3001.service has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:48 managed-node2 platform-python[25725]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Aug 02 12:38:48 managed-node2 platform-python[25848]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Aug 02 12:38:48 managed-node2 sudo[25971]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsbiyvlbwevndfyvleplnipyfcreleuz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152728.615097-16354-140653512589780/AnsiballZ_podman_image.py'
Aug 02 12:38:48 managed-node2 sudo[25971]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)
Aug 02 12:38:49 managed-node2 systemd[25590]: Started D-Bus User Message Bus.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:49 managed-node2 systemd[25590]: Created slice user.slice.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:49 managed-node2 systemd[25590]: Started podman-25984.scope.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:49 managed-node2 systemd[25590]: Started podman-pause-9fcbd008.scope.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:49 managed-node2 systemd[25590]: Started podman-26000.scope.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:49 managed-node2 systemd[25590]: Started podman-26016.scope.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:50 managed-node2 sudo[25971]: pam_unix(sudo:session): session closed for user podman_basic_user
Aug 02 12:38:50 managed-node2 platform-python[26145]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:38:50 managed-node2 platform-python[26268]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Aug 02 12:38:51 managed-node2 platform-python[26391]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Aug 02 12:38:51 managed-node2 platform-python[26490]: ansible-copy Invoked with dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml owner=podman_basic_user group=3001 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152731.0942633-16483-51427114771259/source _original_basename=tmpz7phazza follow=False checksum=41ba442683d49d3571d4ddce7f5dc14c85104270 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None
Aug 02 12:38:51 managed-node2 sudo[26615]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adtlljryarlxiuspbmjgrszvqdzysmgm ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152731.799995-16513-128710640317424/AnsiballZ_podman_play.py'
Aug 02 12:38:51 managed-node2 sudo[26615]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)
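The sudo lines above show how the play runs the podman modules as the rootless user: it becomes podman_basic_user and exports XDG_RUNTIME_DIR=/run/user/3001 so podman finds the per-user runtime directory and systemd instance. Expressed as a task, the pattern looks roughly like this (the task framing is illustrative; the user, runtime directory, kube file, and state come from the log):

- name: Start the httpd1 pod as the rootless user
  containers.podman.podman_play:
    kube_file: /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
    state: started
  become: true
  become_user: podman_basic_user
  environment:
    XDG_RUNTIME_DIR: /run/user/3001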
Aug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Aug 02 12:38:52 managed-node2 systemd[25590]: Started podman-26626.scope.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:52 managed-node2 kernel: tun: Universal TUN/TAP device driver, 1.6
Aug 02 12:38:52 managed-node2 systemd[25590]: Started rootless-netns-6da9f76b.scope.
-- Subject: Unit UNIT has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit UNIT has finished starting up.
--
-- The start-up result is done.
Aug 02 12:38:52 managed-node2 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethff7bc329: link is not ready
Aug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered blocking state
Aug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered disabled state
Aug 02 12:38:52 managed-node2 kernel: device vethff7bc329 entered promiscuous mode
Aug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Aug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Aug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethff7bc329: link becomes ready
Aug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered blocking state
Aug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered forwarding state
Aug 02 12:38:52 managed-node2 dnsmasq[26814]: listening on cni-podman1(#3): 10.89.0.1
Aug 02 12:38:52 managed-node2 dnsmasq[26816]: started, version 2.79 cachesize 150
Aug 02 12:38:52 managed-node2 dnsmasq[26816]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify
Aug 02 12:38:52 managed-node2 dnsmasq[26816]: using local addresses only for domain dns.podman
Aug 02 12:38:52 managed-node2 dnsmasq[26816]: reading /etc/resolv.conf
Aug 02 12:38:52 managed-node2 dnsmasq[26816]: using local addresses only for domain dns.podman
Aug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.0.2.3#53
Aug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.29.169.13#53
Aug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.29.170.12#53
Aug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.2.32.1#53
Aug 02 12:38:52 managed-node2 dnsmasq[26816]: read /run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses
Aug 02 12:38:52 managed-node2 conmon[26830]: conmon af16b69d72cc4526d63a : failed to write to /proc/self/oom_score_adj: Permission denied
Aug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/14/attach}
Aug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : terminal_ctrl_fd: 14
Aug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : winsz read side: 17, winsz write side: 18
Aug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : container PID: 26841
Aug 02 12:38:52 managed-node2 conmon[26851]: conmon 98c476488369c461640e : failed to write to /proc/self/oom_score_adj: Permission denied
Aug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}
Aug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : terminal_ctrl_fd: 13
Aug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : winsz read side: 16, winsz write side: 17
Aug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : container PID: 26862
Aug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
Aug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod: 191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f Container: 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939
Aug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-08-02T12:38:52-04:00" level=info msg="/bin/podman filtering at log level debug" time="2025-08-02T12:38:52-04:00" level=debug msg="Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)" time="2025-08-02T12:38:52-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-08-02T12:38:52-04:00" level=info msg="Using sqlite as database backend" time="2025-08-02T12:38:52-04:00" level=debug msg="Using graph driver overlay" time="2025-08-02T12:38:52-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Using run root /run/user/3001/containers" time="2025-08-02T12:38:52-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" time="2025-08-02T12:38:52-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" time="2025-08-02T12:38:52-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" time="2025-08-02T12:38:52-04:00" level=debug msg="Using transient store: false" time="2025-08-02T12:38:52-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:38:52-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:38:52-04:00" level=debug msg="Cached value indicated that metacopy is not being used" time="2025-08-02T12:38:52-04:00" level=debug msg="Cached value indicated that native-diff is usable" time="2025-08-02T12:38:52-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" time="2025-08-02T12:38:52-04:00" level=debug msg="Initializing event backend file" time="2025-08-02T12:38:52-04:00" level=debug msg="Configured OCI runtime runsc
initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2025-08-02T12:38:52-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-08-02T12:38:52-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2025-08-02T12:38:52-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-08-02T12:38:52-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-08-02T12:38:52-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2025-08-02T12:38:52-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-08-02T12:38:52-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-08-02T12:38:52-04:00" level=info msg="Setting parallel job count to 7" time="2025-08-02T12:38:52-04:00" level=debug msg="Successfully loaded 1 networks" time="2025-08-02T12:38:52-04:00" level=debug msg="found free device name cni-podman1" time="2025-08-02T12:38:52-04:00" level=debug msg="found free ipv4 network subnet 10.89.0.0/24" time="2025-08-02T12:38:52-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:38:52.156183843 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-08-02T12:38:52-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-08-02T12:38:52-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-08-02T12:38:52-04:00" level=debug msg="reference \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\" does not resolve to an image ID" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-08-02T12:38:52-04:00" level=debug msg="reference \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\" does not resolve to an image ID" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." 
time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="FROM \"scratch\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are not supported" time="2025-08-02T12:38:52-04:00" level=debug msg="Check for idmapped mounts support " time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="overlay: test mount indicated that volatile is being used" time="2025-08-02T12:38:52-04:00" level=debug msg="overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/empty,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/work,userxattr,volatile,context=\"system_u:object_r:container_file_t:s0:c153,c335\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Container ID: 0aa3a7343fabfcbf854b2db926e8ef83982bdca7985f430aea1f98b430ae4469" time="2025-08-02T12:38:52-04:00" level=debug msg="Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:copy Args:[/usr/libexec/podman/catatonit /catatonit] Flags:[] Attrs:map[] Message:COPY /usr/libexec/podman/catatonit /catatonit Heredocs:[] Original:COPY /usr/libexec/podman/catatonit /catatonit}" time="2025-08-02T12:38:52-04:00" level=debug msg="COPY []string(nil), imagebuilder.Copy{FromFS:false, From:\"\", Src:[]string{\"/usr/libexec/podman/catatonit\"}, Dest:\"/catatonit\", Download:false, Chown:\"\", Chmod:\"\", Checksum:\"\", Files:[]imagebuilder.File(nil)}" time="2025-08-02T12:38:52-04:00" level=debug msg="added content file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd" time="2025-08-02T12:38:52-04:00" level=debug msg="Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:entrypoint Args:[/catatonit -P] Flags:[] Attrs:map[json:true] Message:ENTRYPOINT /catatonit -P Heredocs:[] Original:ENTRYPOINT [\"/catatonit\", \"-P\"]}" time="2025-08-02T12:38:52-04:00" level=debug msg="COMMIT localhost/podman-pause:4.9.4-dev-1708535009" time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\"" time="2025-08-02T12:38:52-04:00" level=debug msg="COMMIT \"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\"" time="2025-08-02T12:38:52-04:00" level=debug msg="committing image with reference \"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\" is allowed by policy" time="2025-08-02T12:38:52-04:00" level=debug msg="layer list: [\"5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1\"]" time="2025-08-02T12:38:52-04:00" level=debug msg="using \"/var/tmp/buildah3803674644\" to hold temporary data" time="2025-08-02T12:38:52-04:00" level=debug msg="Tar with 
options on /home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/diff" time="2025-08-02T12:38:52-04:00" level=debug msg="layer \"5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1\" size is 767488 bytes, uncompressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690, possibly-compressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690" time="2025-08-02T12:38:52-04:00" level=debug msg="OCIv1 config = {\"created\":\"2025-08-02T16:38:52.296533662Z\",\"architecture\":\"amd64\",\"os\":\"linux\",\"config\":{\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Entrypoint\":[\"/catatonit\",\"-P\"],\"Labels\":{\"io.buildah.version\":\"1.33.5\"}},\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"]},\"history\":[{\"created\":\"2025-08-02T16:38:52.295942274Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd in /catatonit \",\"empty_layer\":true},{\"created\":\"2025-08-02T16:38:52.299650983Z\",\"created_by\":\"/bin/sh -c #(nop) ENTRYPOINT [\\\"/catatonit\\\", \\\"-P\\\"]\"}]}" time="2025-08-02T12:38:52-04:00" level=debug msg="OCIv1 manifest = {\"schemaVersion\":2,\"mediaType\":\"application/vnd.oci.image.manifest.v1+json\",\"config\":{\"mediaType\":\"application/vnd.oci.image.config.v1+json\",\"digest\":\"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\",\"size\":668},\"layers\":[{\"mediaType\":\"application/vnd.oci.image.layer.v1.tar\",\"digest\":\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\",\"size\":767488}],\"annotations\":{\"org.opencontainers.image.base.digest\":\"\",\"org.opencontainers.image.base.name\":\"\"}}" time="2025-08-02T12:38:52-04:00" level=debug msg="Docker v2s2 config = {\"created\":\"2025-08-02T16:38:52.296533662Z\",\"container\":\"0aa3a7343fabfcbf854b2db926e8ef83982bdca7985f430aea1f98b430ae4469\",\"container_config\":{\"Hostname\":\"\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[],\"Image\":\"\",\"Volumes\":{},\"WorkingDir\":\"\",\"Entrypoint\":[\"/catatonit\",\"-P\"],\"OnBuild\":[],\"Labels\":{\"io.buildah.version\":\"1.33.5\"}},\"config\":{\"Hostname\":\"\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[],\"Image\":\"\",\"Volumes\":{},\"WorkingDir\":\"\",\"Entrypoint\":[\"/catatonit\",\"-P\"],\"OnBuild\":[],\"Labels\":{\"io.buildah.version\":\"1.33.5\"}},\"architecture\":\"amd64\",\"os\":\"linux\",\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"]},\"history\":[{\"created\":\"2025-08-02T16:38:52.295942274Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd in /catatonit \",\"empty_layer\":true},{\"created\":\"2025-08-02T16:38:52.299650983Z\",\"created_by\":\"/bin/sh -c #(nop) ENTRYPOINT [\\\"/catatonit\\\", \\\"-P\\\"]\"}]}" time="2025-08-02T12:38:52-04:00" level=debug msg="Docker v2s2 manifest = 
{\"schemaVersion\":2,\"mediaType\":\"application/vnd.docker.distribution.manifest.v2+json\",\"config\":{\"mediaType\":\"application/vnd.docker.container.image.v1+json\",\"size\":1342,\"digest\":\"sha256:69b1a52f65cb5e3fa99e89b61152bda48cb5524edcedfdf2eac76a30c6778813\"},\"layers\":[{\"mediaType\":\"application/vnd.docker.image.rootfs.diff.tar\",\"size\":767488,\"digest\":\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"}]}" time="2025-08-02T12:38:52-04:00" level=debug msg="Using SQLite blob info cache at /home/podman_basic_user/.local/share/containers/cache/blob-info-cache-v1.sqlite" time="2025-08-02T12:38:52-04:00" level=debug msg="IsRunningImageAllowed for image containers-storage:" time="2025-08-02T12:38:52-04:00" level=debug msg=" Using transport \"containers-storage\" policy section " time="2025-08-02T12:38:52-04:00" level=debug msg=" Requirement 0: allowed" time="2025-08-02T12:38:52-04:00" level=debug msg="Overall: allowed" time="2025-08-02T12:38:52-04:00" level=debug msg="start reading config" time="2025-08-02T12:38:52-04:00" level=debug msg="finished reading config" time="2025-08-02T12:38:52-04:00" level=debug msg="Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.docker.distribution.manifest.v1+json]" time="2025-08-02T12:38:52-04:00" level=debug msg="... will first try using the original manifest unmodified" time="2025-08-02T12:38:52-04:00" level=debug msg="Checking if we can reuse blob sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690: general substitution = true, compression for MIME type \"application/vnd.oci.image.layer.v1.tar\" = true" time="2025-08-02T12:38:52-04:00" level=debug msg="reading layer \"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"" time="2025-08-02T12:38:52-04:00" level=debug msg="No compression detected" time="2025-08-02T12:38:52-04:00" level=debug msg="Using original blob without modification" time="2025-08-02T12:38:52-04:00" level=debug msg="Applying tar in /home/podman_basic_user/.local/share/containers/storage/overlay/d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690/diff" time="2025-08-02T12:38:52-04:00" level=debug msg="finished reading layer \"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"" time="2025-08-02T12:38:52-04:00" level=debug msg="No compression detected" time="2025-08-02T12:38:52-04:00" level=debug msg="Compression change for blob sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778 (\"application/vnd.oci.image.config.v1+json\") not supported" time="2025-08-02T12:38:52-04:00" level=debug msg="Using original blob without modification" time="2025-08-02T12:38:52-04:00" level=debug msg="setting image creation date to 2025-08-02 16:38:52.296533662 +0000 UTC" time="2025-08-02T12:38:52-04:00" level=debug msg="created new image ID \"60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\" with metadata \"{}\"" time="2025-08-02T12:38:52-04:00" level=debug msg="added name \"localhost/podman-pause:4.9.4-dev-1708535009\" to image \"60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into 
\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\"" time="2025-08-02T12:38:52-04:00" level=debug msg="printing final image id \"60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Pod using bridge network mode" time="2025-08-02T12:38:52-04:00" level=debug msg="Got pod cgroup as /libpod_parent/191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f" time="2025-08-02T12:38:52-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778)" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Inspecting image 60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Inspecting image 60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778" time="2025-08-02T12:38:52-04:00" level=debug msg="Inspecting image 60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778" time="2025-08-02T12:38:52-04:00" level=debug msg="using systemd mode: false" time="2025-08-02T12:38:52-04:00" level=debug msg="setting container name 191a369333e4-infra" time="2025-08-02T12:38:52-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Allocated lock 1 for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae" time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Created container \"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Container \"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\" has work directory 
\"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Container \"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\" has run directory \"/run/user/3001/containers/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Pulling image quay.io/libpod/testimage:20210610 (policy: missing)" time="2025-08-02T12:38:52-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." 
time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:38:52-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:38:52-04:00" level=debug msg="using systemd mode: false" time="2025-08-02T12:38:52-04:00" level=debug msg="adding container to pod httpd1" time="2025-08-02T12:38:52-04:00" level=debug msg="setting container name httpd1-httpd1" 
time="2025-08-02T12:38:52-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-08-02T12:38:52-04:00" level=info msg="Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding mount /proc" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding mount /dev" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding mount /dev/pts" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding mount /dev/mqueue" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding mount /sys" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding mount /sys/fs/cgroup" time="2025-08-02T12:38:52-04:00" level=debug msg="Allocated lock 2 for container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939" time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Created container \"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Container \"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\" has work directory \"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Container \"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\" has run directory \"/run/user/3001/containers/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Strongconnecting node af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae" time="2025-08-02T12:38:52-04:00" level=debug msg="Pushed af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae onto stack" time="2025-08-02T12:38:52-04:00" level=debug msg="Finishing node af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae. Popped af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae off stack" time="2025-08-02T12:38:52-04:00" level=debug msg="Strongconnecting node 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939" time="2025-08-02T12:38:52-04:00" level=debug msg="Pushed 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 onto stack" time="2025-08-02T12:38:52-04:00" level=debug msg="Finishing node 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939. 
Popped 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 off stack" time="2025-08-02T12:38:52-04:00" level=debug msg="overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/4MRAZCR7JRY45YIIWXX5WJJ6A6,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/work,userxattr,context=\"system_u:object_r:container_file_t:s0:c389,c456\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Made network namespace at /run/user/3001/netns/netns-6723b79b-4d64-cc19-6a91-87394c058c53 for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae" time="2025-08-02T12:38:52-04:00" level=debug msg="Mounted container \"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\" at \"/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/merged\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Created root filesystem for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae at /home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/merged" time="2025-08-02T12:38:52-04:00" level=debug msg="creating rootless network namespace with name \"rootless-netns-d22c9f230d0691b8f418\"" time="2025-08-02T12:38:52-04:00" level=debug msg="slirp4netns command: /bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -r 3 --netns-type=path /run/user/3001/netns/rootless-netns-d22c9f230d0691b8f418 tap0" time="2025-08-02T12:38:52-04:00" level=debug msg="The path of /etc/resolv.conf in the mount ns is \"/etc/resolv.conf\"" time="2025-08-02T12:38:52-04:00" level=debug msg="cni result for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:1e:08:d6:95:5e:f1 Sandbox:} {Name:vethff7bc329 Mac:26:19:1e:a6:0a:11 Sandbox:} {Name:eth0 Mac:1e:8a:1a:f5:d1:2a Sandbox:/run/user/3001/netns/netns-6723b79b-4d64-cc19-6a91-87394c058c53}] [{Version:4 Interface:0xc000b96228 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}" time="2025-08-02T12:38:52-04:00" level=debug msg="rootlessport: time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Starting parent driver\"\ntime=\"2025-08-02T12:38:52-04:00\" level=info msg=\"opaque=map[builtin.readypipepath:/run/user/3001/libpod/tmp/rootlessport3029357974/.bp-ready.pipe builtin.socketpath:/run/user/3001/libpod/tmp/rootlessport3029357974/.bp.sock]\"\ntime=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Starting child driver in child netns (\\\"/proc/self/exe\\\" [rootlessport-child])\"\n" time="2025-08-02T12:38:52-04:00" level=debug msg="rootlessport: time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Waiting for initComplete\"\n" time="2025-08-02T12:38:52-04:00" level=debug msg="rootlessport: time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"initComplete is closed; parent and child established the communication channel\"\n" time="2025-08-02T12:38:52-04:00" level=debug msg="rootlessport: time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Exposing ports [{ 80 15001 1 tcp}]\"\n" time="2025-08-02T12:38:52-04:00" level=debug 
msg="rootlessport: time=\"2025-08-02T12:38:52-04:00\" level=info msg=Ready\n" time="2025-08-02T12:38:52-04:00" level=debug msg="rootlessport is ready" time="2025-08-02T12:38:52-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-08-02T12:38:52-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-08-02T12:38:52-04:00" level=debug msg="Workdir \"/\" resolved to host path \"/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/merged\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Created OCI spec for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/config.json" time="2025-08-02T12:38:52-04:00" level=debug msg="Got pod cgroup as " time="2025-08-02T12:38:52-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2025-08-02T12:38:52-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae -u af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata -p /run/user/3001/containers/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/pidfile -n 191a369333e4-infra --exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/3001/containers/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae]" time="2025-08-02T12:38:52-04:00" level=info msg="Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/libpod_parent: permission denied" [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied time="2025-08-02T12:38:52-04:00" level=debug msg="Received: 26841" time="2025-08-02T12:38:52-04:00" level=info msg="Got Conmon PID as 26831" 
time="2025-08-02T12:38:52-04:00" level=debug msg="Created container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae in OCI runtime" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding nameserver(s) from network status of '[\"10.89.0.1\"]'" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding search domain(s) from network status of '[\"dns.podman\"]'" time="2025-08-02T12:38:52-04:00" level=debug msg="Starting container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae with command [/catatonit -P]" time="2025-08-02T12:38:52-04:00" level=debug msg="Started container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae" time="2025-08-02T12:38:52-04:00" level=debug msg="overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/S5QNMEV2IMLZOTXAJ3H4ZQCILN,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/work,userxattr,context=\"system_u:object_r:container_file_t:s0:c389,c456\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Mounted container \"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\" at \"/home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/merged\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Created root filesystem for container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 at /home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/merged" time="2025-08-02T12:38:52-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-08-02T12:38:52-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-08-02T12:38:52-04:00" level=debug msg="Workdir \"/var/www\" resolved to a volume or mount" time="2025-08-02T12:38:52-04:00" level=debug msg="Created OCI spec for container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/config.json" time="2025-08-02T12:38:52-04:00" level=debug msg="Got pod cgroup as " time="2025-08-02T12:38:52-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2025-08-02T12:38:52-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 -u 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata -p /run/user/3001/containers/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/pidfile -n httpd1-httpd1 --exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/3001/containers/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/conmon.pid 
--exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939]" time="2025-08-02T12:38:52-04:00" level=info msg="Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/conmon: permission denied" [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied time="2025-08-02T12:38:52-04:00" level=debug msg="Received: 26862" time="2025-08-02T12:38:52-04:00" level=info msg="Got Conmon PID as 26852" time="2025-08-02T12:38:52-04:00" level=debug msg="Created container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 in OCI runtime" time="2025-08-02T12:38:52-04:00" level=debug msg="Starting container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 with command [/bin/busybox-extras httpd -f -p 80]" time="2025-08-02T12:38:52-04:00" level=debug msg="Started container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939" time="2025-08-02T12:38:52-04:00" level=debug msg="Called kube.PersistentPostRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)" time="2025-08-02T12:38:52-04:00" level=debug msg="Shutting down engines" Aug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Aug 02 12:38:52 managed-node2 sudo[26615]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:38:53 managed-node2 sudo[26993]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbaqkfutgdfbadtjjacfxjqxvpyoigvo ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152733.1876235-16558-123988354219606/AnsiballZ_systemd.py' Aug 02 12:38:53 managed-node2 sudo[26993]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:38:53 managed-node2 platform-python[26996]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None Aug 02 12:38:53 managed-node2 systemd[25590]: Reloading. 
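The ansible-systemd task with daemon_reload=True and scope=user is what produces the "Reloading." line from the per-user manager (systemd[25590]); stripped of the Ansible and sudo/BECOME wrapping, it amounts to:

    # reload the systemd user manager of podman_basic_user (uid 3001),
    # as the AnsiballZ_systemd.py payload in the log does
    XDG_RUNTIME_DIR=/run/user/3001 systemctl --user daemon-reload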
Aug 02 12:38:53 managed-node2 sudo[26993]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:38:53 managed-node2 sudo[27130]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmqxburdibhftsxxfjnfharsaboqrlrj ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152733.8051448-16591-22459124935909/AnsiballZ_systemd.py' Aug 02 12:38:53 managed-node2 sudo[27130]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:38:54 managed-node2 platform-python[27133]: ansible-systemd Invoked with name= scope=user enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None Aug 02 12:38:54 managed-node2 systemd[25590]: Reloading. Aug 02 12:38:54 managed-node2 sudo[27130]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:38:54 managed-node2 dnsmasq[26816]: listening on cni-podman1(#3): fe80::1c08:d6ff:fe95:5ef1%cni-podman1 Aug 02 12:38:54 managed-node2 sudo[27269]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bebktvlckbeotrlmhvsnejtmeicquqjz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152734.429291-16621-12283105958346/AnsiballZ_systemd.py' Aug 02 12:38:54 managed-node2 sudo[27269]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:38:54 managed-node2 platform-python[27272]: ansible-systemd Invoked with name= scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None Aug 02 12:38:54 managed-node2 systemd[25590]: Created slice podman\x2dkube.slice. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:54 managed-node2 systemd[25590]: Starting A template for running K8s workloads via podman-kube-play... -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. 
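The enabled=True and state=started invocations have their unit name elided (name=) in the journal, but the surrounding play targets podman's podman-kube@.service user template, instantiated with the escaped path of the kube YAML deployed earlier. A hypothetical equivalent following the pattern podman documents for this template (the exact instance name is an assumption, since the log omits it):

    # hypothetical: enable and start the kube-play template unit for httpd1.yml
    esc=$(systemd-escape /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)
    XDG_RUNTIME_DIR=/run/user/3001 systemctl --user enable --now "podman-kube@${esc}.service"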
Aug 02 12:38:54 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : container 26841 exited with status 137 Aug 02 12:38:54 managed-node2 conmon[26852]: conmon 98c476488369c461640e : container 26862 exited with status 137 Aug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:54-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae)" Aug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:54-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Aug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:54-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Aug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:54-04:00" level=info msg="Using sqlite as database backend" Aug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:54-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939)" Aug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:54-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Aug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:54-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Aug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:54-04:00" level=info msg="Using sqlite as database backend" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using graph driver overlay" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using run root /run/user/3001/containers" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using transient store: false" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: 
time="2025-08-02T12:38:55-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that metacopy is not being used" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that native-diff is usable" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Initializing event backend file" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=info msg="Setting parallel job count to 7" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using graph driver overlay" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using run root /run/user/3001/containers" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" Aug 02 12:38:55 
managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using transient store: false" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that metacopy is not being used" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that native-diff is usable" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Initializing event backend file" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=info msg="Setting parallel job count to 7" Aug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered disabled state Aug 02 12:38:55 managed-node2 kernel: device vethff7bc329 left 
promiscuous mode Aug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered disabled state Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939)" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Shutting down engines" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae)" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Shutting down engines" Aug 02 12:38:55 managed-node2 podman[27278]: Pods stopped: Aug 02 12:38:55 managed-node2 podman[27278]: 191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f Aug 02 12:38:55 managed-node2 podman[27278]: Pods removed: Aug 02 12:38:55 managed-node2 podman[27278]: 191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f Aug 02 12:38:55 managed-node2 podman[27278]: Secrets removed: Aug 02 12:38:55 managed-node2 podman[27278]: Volumes removed: Aug 02 12:38:55 managed-node2 systemd[25590]: Started rootless-netns-dd6b3697.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
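The status-137 exits earlier are SIGKILL (128 + 9), and together with the "Pods stopped / Pods removed" summary they show the template unit replacing the pod: the pod created directly by the podman_play task is torn down and re-created under systemd supervision, which is why a fresh pod ID (5e85...) and container ID (bc86...) appear next. The shipped unit file spells out the replace behaviour; inspecting it is the quickest confirmation (exact options vary by podman build):

    # show ExecStart/ExecStop of the user-scope kube-play template
    systemctl --user cat podman-kube@.service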
Aug 02 12:38:55 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethfa4f074b: link is not ready Aug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered blocking state Aug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered disabled state Aug 02 12:38:55 managed-node2 kernel: device vethfa4f074b entered promiscuous mode Aug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered blocking state Aug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered forwarding state Aug 02 12:38:55 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethfa4f074b: link becomes ready Aug 02 12:38:55 managed-node2 dnsmasq[27525]: listening on cni-podman1(#3): 10.89.0.1 Aug 02 12:38:55 managed-node2 dnsmasq[27527]: started, version 2.79 cachesize 150 Aug 02 12:38:55 managed-node2 dnsmasq[27527]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify Aug 02 12:38:55 managed-node2 dnsmasq[27527]: using local addresses only for domain dns.podman Aug 02 12:38:55 managed-node2 dnsmasq[27527]: reading /etc/resolv.conf Aug 02 12:38:55 managed-node2 dnsmasq[27527]: using local addresses only for domain dns.podman Aug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.0.2.3#53 Aug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.29.169.13#53 Aug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.29.170.12#53 Aug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.2.32.1#53 Aug 02 12:38:55 managed-node2 dnsmasq[27527]: read /run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Aug 02 12:38:55 managed-node2 podman[27278]: Pod: Aug 02 12:38:55 managed-node2 podman[27278]: 5e85626264c7b01bb8a75935ff611e0d589096a98e89e05fd7aa9037e892318a Aug 02 12:38:55 managed-node2 podman[27278]: Container: Aug 02 12:38:55 managed-node2 podman[27278]: bc86eb03c7fb7110b2363dd55ed2866f782f16e8d8374c8a82784079a47558f1 Aug 02 12:38:55 managed-node2 systemd[25590]: Started A template for running K8s workloads via podman-kube-play. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
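At this point the rootlessport forwarder set up earlier ("Exposing ports [{ 80 15001 1 tcp}]") should be mapping host port 15001 to the busybox httpd on container port 80. A hypothetical smoke test (the URL path depends on what the test staged under /var/www, so only the status code is checked):

    # expect an HTTP status code once the podman-kube unit is up
    curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:15001/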
Aug 02 12:38:55 managed-node2 sudo[27269]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:38:56 managed-node2 platform-python[27703]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Aug 02 12:38:56 managed-node2 dnsmasq[27527]: listening on cni-podman1(#3): fe80::a0c6:53ff:fed6:1184%cni-podman1 Aug 02 12:38:57 managed-node2 platform-python[27827]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:38:58 managed-node2 platform-python[27952]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:38:59 managed-node2 platform-python[28076]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:00 managed-node2 platform-python[28199]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:00 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Aug 02 12:39:00 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Aug 02 12:39:01 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
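The stat of /usr/bin/getsubids is the role probing for shadow-utils' subordinate-ID helper before validating the rootless user; the same information can be read by hand (the file grep is an assumed fallback for hosts without the helper):

    # show subuid/subgid ranges delegated to the rootless user
    getsubids podman_basic_user && getsubids -g podman_basic_user
    # fallback when getsubids is not installed:
    grep '^podman_basic_user:' /etc/subuid /etc/subgid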
Aug 02 12:39:01 managed-node2 platform-python[28489]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:39:02 managed-node2 platform-python[28612]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:02 managed-node2 platform-python[28735]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Aug 02 12:39:03 managed-node2 platform-python[28834]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd2.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152742.5335138-17006-203119541001881/source _original_basename=tmpvkt7buq9 follow=False checksum=2a8a08ffe6bf0159dd7563e043ed3c303a77cff4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Aug 02 12:39:03 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:39:03 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice. -- Subject: Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.7856] manager: (cni-podman1): new Bridge device (/org/freedesktop/NetworkManager/Devices/3) Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.7870] manager: (veth502e5636): new Veth device (/org/freedesktop/NetworkManager/Devices/4) Aug 02 12:39:03 managed-node2 systemd-udevd[29006]: Using default interface naming scheme 'rhel-8.0'. Aug 02 12:39:03 managed-node2 systemd-udevd[29006]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Aug 02 12:39:03 managed-node2 systemd-udevd[29006]: Could not generate persistent MAC address for cni-podman1: No such file or directory Aug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth502e5636: link is not ready Aug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered blocking state Aug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered disabled state Aug 02 12:39:03 managed-node2 kernel: device veth502e5636 entered promiscuous mode Aug 02 12:39:03 managed-node2 systemd-udevd[29007]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Aug 02 12:39:03 managed-node2 systemd-udevd[29007]: Could not generate persistent MAC address for veth502e5636: No such file or directory Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8196] device (cni-podman1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8201] device (cni-podman1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8209] device (cni-podman1): Activation: starting connection 'cni-podman1' (0ddcaf44-4d9a-41cb-acd9-42060ce7dc76) Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8210] device (cni-podman1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8212] device (cni-podman1): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8215] device (cni-podman1): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8217] device (cni-podman1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 dbus-daemon[595]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.5' (uid=0 pid=665 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0") Aug 02 12:39:03 managed-node2 systemd[1]: Starting Network Manager Script Dispatcher Service... -- Subject: Unit NetworkManager-dispatcher.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager-dispatcher.service has begun starting up. 
Aug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Aug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth502e5636: link becomes ready Aug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered blocking state Aug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered forwarding state Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8506] device (veth502e5636): carrier: link connected Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8508] device (cni-podman1): carrier: link connected Aug 02 12:39:03 managed-node2 dbus-daemon[595]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8678] device (cni-podman1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8680] device (cni-podman1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8684] device (cni-podman1): Activation: successful, device activated. Aug 02 12:39:03 managed-node2 systemd[1]: Started Network Manager Script Dispatcher Service. -- Subject: Unit NetworkManager-dispatcher.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager-dispatcher.service has finished starting up. -- -- The start-up result is done. Aug 02 12:39:03 managed-node2 dnsmasq[29128]: listening on cni-podman1(#3): 10.89.0.1 Aug 02 12:39:03 managed-node2 dnsmasq[29132]: started, version 2.79 cachesize 150 Aug 02 12:39:03 managed-node2 dnsmasq[29132]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify Aug 02 12:39:03 managed-node2 dnsmasq[29132]: using local addresses only for domain dns.podman Aug 02 12:39:03 managed-node2 dnsmasq[29132]: reading /etc/resolv.conf Aug 02 12:39:03 managed-node2 dnsmasq[29132]: using local addresses only for domain dns.podman Aug 02 12:39:03 managed-node2 dnsmasq[29132]: using nameserver 10.29.169.13#53 Aug 02 12:39:03 managed-node2 dnsmasq[29132]: using nameserver 10.29.170.12#53 Aug 02 12:39:03 managed-node2 dnsmasq[29132]: using nameserver 10.2.32.1#53 Aug 02 12:39:03 managed-node2 dnsmasq[29132]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Aug 02 12:39:04 managed-node2 systemd[1]: Started libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope. -- Subject: Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished starting up. -- -- The start-up result is done. 
Aug 02 12:39:04 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach} Aug 02 12:39:04 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : terminal_ctrl_fd: 13 Aug 02 12:39:04 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : winsz read side: 17, winsz write side: 18 Aug 02 12:39:04 managed-node2 systemd[1]: Started libcontainer container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89. -- Subject: Unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:04 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : container PID: 29144 Aug 02 12:39:04 managed-node2 systemd[1]: Started libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope. -- Subject: Unit libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach} Aug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : terminal_ctrl_fd: 12 Aug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : winsz read side: 16, winsz write side: 17 Aug 02 12:39:04 managed-node2 systemd[1]: Started libcontainer container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb. -- Subject: Unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished starting up. -- -- The start-up result is done. 
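In contrast with the rootless run and its cgroupfs warnings, this root-scope play uses the systemd cgroup manager, so conmon and each container land in transient libpod-conmon-*.scope and libpod-*.scope units under machine.slice (the "Started libcontainer container ..." lines above). A hedged way to see the resulting hierarchy:

    # list the machine.slice subtree where the libpod scopes are created
    systemd-cgls --no-pager machine.slice | head -n 20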
Aug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : container PID: 29166 Aug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml Aug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod: 90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496 Container: 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb Aug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-08-02T12:39:03-04:00" level=info msg="/usr/bin/podman filtering at log level debug" time="2025-08-02T12:39:03-04:00" level=debug msg="Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2025-08-02T12:39:03-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-08-02T12:39:03-04:00" level=info msg="Using sqlite as database backend" time="2025-08-02T12:39:03-04:00" level=debug msg="Using graph driver overlay" time="2025-08-02T12:39:03-04:00" level=debug msg="Using graph root /var/lib/containers/storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Using run root /run/containers/storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" time="2025-08-02T12:39:03-04:00" level=debug msg="Using tmp dir /run/libpod" time="2025-08-02T12:39:03-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" time="2025-08-02T12:39:03-04:00" level=debug msg="Using transient store: false" time="2025-08-02T12:39:03-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:39:03-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:39:03-04:00" level=debug msg="Cached value indicated that metacopy is being used" time="2025-08-02T12:39:03-04:00" level=debug msg="Cached value indicated that native-diff is not being used" time="2025-08-02T12:39:03-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" time="2025-08-02T12:39:03-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" time="2025-08-02T12:39:03-04:00" level=debug msg="Initializing event backend file" time="2025-08-02T12:39:03-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-08-02T12:39:03-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-08-02T12:39:03-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2025-08-02T12:39:03-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-08-02T12:39:03-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" 
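The module echoes the exact command it ran plus the resulting pod and container IDs, so the state is easy to cross-check from a shell on the managed node:

    # confirm the httpd2 pod and its containers are up (root scope)
    podman pod ps
    podman ps --all --pod --filter pod=90922c8ca930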
time="2025-08-02T12:39:03-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-08-02T12:39:03-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2025-08-02T12:39:03-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-08-02T12:39:03-04:00" level=info msg="Setting parallel job count to 7" time="2025-08-02T12:39:03-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:36:24.472660556 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-08-02T12:39:03-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-08-02T12:39:03-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:39:03-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb)" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Pod using bridge network mode" time="2025-08-02T12:39:03-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice for parent machine.slice and name libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496" time="2025-08-02T12:39:03-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice" time="2025-08-02T12:39:03-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice" time="2025-08-02T12:39:03-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:39:03-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." 
time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb)" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Inspecting image e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Inspecting image e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb" time="2025-08-02T12:39:03-04:00" level=debug msg="Inspecting image e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb" time="2025-08-02T12:39:03-04:00" level=debug msg="using systemd mode: false" time="2025-08-02T12:39:03-04:00" level=debug msg="setting container name 90922c8ca930-infra" time="2025-08-02T12:39:03-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Allocated lock 1 for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89" time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are not supported" time="2025-08-02T12:39:03-04:00" level=debug msg="Check for idmapped mounts support " time="2025-08-02T12:39:03-04:00" level=debug msg="Created container \"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Container \"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\" has work directory \"/var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Container \"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\" has run directory \"/run/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:39:03-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" 
..." time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Pulling image quay.io/libpod/testimage:20210610 (policy: missing)" time="2025-08-02T12:39:03-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:39:03-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:39:03-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." 
time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:39:03-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:39:03-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:39:03-04:00" level=debug msg="using systemd mode: false" time="2025-08-02T12:39:03-04:00" level=debug msg="adding container to pod httpd2" time="2025-08-02T12:39:03-04:00" level=debug msg="setting container name httpd2-httpd2" 
time="2025-08-02T12:39:03-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-08-02T12:39:03-04:00" level=info msg="Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host" time="2025-08-02T12:39:03-04:00" level=debug msg="Adding mount /proc" time="2025-08-02T12:39:03-04:00" level=debug msg="Adding mount /dev" time="2025-08-02T12:39:03-04:00" level=debug msg="Adding mount /dev/pts" time="2025-08-02T12:39:03-04:00" level=debug msg="Adding mount /dev/mqueue" time="2025-08-02T12:39:03-04:00" level=debug msg="Adding mount /sys" time="2025-08-02T12:39:03-04:00" level=debug msg="Adding mount /sys/fs/cgroup" time="2025-08-02T12:39:03-04:00" level=debug msg="Allocated lock 2 for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb" time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Created container \"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Container \"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\" has work directory \"/var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Container \"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\" has run directory \"/run/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Strongconnecting node ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89" time="2025-08-02T12:39:03-04:00" level=debug msg="Pushed ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 onto stack" time="2025-08-02T12:39:03-04:00" level=debug msg="Finishing node ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89. Popped ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 off stack" time="2025-08-02T12:39:03-04:00" level=debug msg="Strongconnecting node 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb" time="2025-08-02T12:39:03-04:00" level=debug msg="Pushed 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb onto stack" time="2025-08-02T12:39:03-04:00" level=debug msg="Finishing node 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb. 
Popped 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb off stack" time="2025-08-02T12:39:03-04:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/ZR5XOSU7O7VXY2BDL65A7UWKU6,upperdir=/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/diff,workdir=/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c784,c888\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Mounted container \"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\" at \"/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/merged\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Created root filesystem for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 at /var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/merged" time="2025-08-02T12:39:03-04:00" level=debug msg="Made network namespace at /run/netns/netns-057bdf77-0e93-7270-6a44-66c62177cd73 for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89" time="2025-08-02T12:39:03-04:00" level=debug msg="cni result for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:96:46:b4:0c:81:50 Sandbox:} {Name:veth502e5636 Mac:4a:ea:32:89:32:4a Sandbox:} {Name:eth0 Mac:ae:ce:ef:99:2c:87 Sandbox:/run/netns/netns-057bdf77-0e93-7270-6a44-66c62177cd73}] [{Version:4 Interface:0xc00087bc58 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}" time="2025-08-02T12:39:04-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-08-02T12:39:04-04:00" level=debug msg="Setting Cgroups for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 to machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice:libpod:ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89" time="2025-08-02T12:39:04-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-08-02T12:39:04-04:00" level=debug msg="Workdir \"/\" resolved to host path \"/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/merged\"" time="2025-08-02T12:39:04-04:00" level=debug msg="Created OCI spec for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 at /var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/config.json" time="2025-08-02T12:39:04-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice for parent machine.slice and name libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496" time="2025-08-02T12:39:04-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice" time="2025-08-02T12:39:04-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice" time="2025-08-02T12:39:04-04:00" level=debug msg="/usr/bin/conmon messages will be logged 
to syslog" time="2025-08-02T12:39:04-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 -u ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata -p /run/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/pidfile -n 90922c8ca930-infra --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89]" time="2025-08-02T12:39:04-04:00" level=info msg="Running conmon under slice machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice and unitName libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope" time="2025-08-02T12:39:04-04:00" level=debug msg="Received: 29144" time="2025-08-02T12:39:04-04:00" level=info msg="Got Conmon PID as 29134" time="2025-08-02T12:39:04-04:00" level=debug msg="Created container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 in OCI runtime" time="2025-08-02T12:39:04-04:00" level=debug msg="Adding nameserver(s) from network status of '[\"10.89.0.1\"]'" time="2025-08-02T12:39:04-04:00" level=debug msg="Adding search domain(s) from network status of '[\"dns.podman\"]'" time="2025-08-02T12:39:04-04:00" level=debug msg="Starting container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 with command [/catatonit -P]" time="2025-08-02T12:39:04-04:00" level=debug msg="Started container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89" time="2025-08-02T12:39:04-04:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/HKP6QAO57O46FRNHGFBKAKZZRC,upperdir=/var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/diff,workdir=/var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c784,c888\"" 
time="2025-08-02T12:39:04-04:00" level=debug msg="Mounted container \"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\" at \"/var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/merged\"" time="2025-08-02T12:39:04-04:00" level=debug msg="Created root filesystem for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb at /var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/merged" time="2025-08-02T12:39:04-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-08-02T12:39:04-04:00" level=debug msg="Setting Cgroups for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb to machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice:libpod:071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb" time="2025-08-02T12:39:04-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-08-02T12:39:04-04:00" level=debug msg="Workdir \"/var/www\" resolved to a volume or mount" time="2025-08-02T12:39:04-04:00" level=debug msg="Created OCI spec for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb at /var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/config.json" time="2025-08-02T12:39:04-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice for parent machine.slice and name libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496" time="2025-08-02T12:39:04-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice" time="2025-08-02T12:39:04-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice" time="2025-08-02T12:39:04-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2025-08-02T12:39:04-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb -u 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata -p /run/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/pidfile -n httpd2-httpd2 --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg 
--volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb]" time="2025-08-02T12:39:04-04:00" level=info msg="Running conmon under slice machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice and unitName libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope" time="2025-08-02T12:39:04-04:00" level=debug msg="Received: 29166" time="2025-08-02T12:39:04-04:00" level=info msg="Got Conmon PID as 29155" time="2025-08-02T12:39:04-04:00" level=debug msg="Created container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb in OCI runtime" time="2025-08-02T12:39:04-04:00" level=debug msg="Starting container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb with command [/bin/busybox-extras httpd -f -p 80]" time="2025-08-02T12:39:04-04:00" level=debug msg="Started container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb" time="2025-08-02T12:39:04-04:00" level=debug msg="Called kube.PersistentPostRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2025-08-02T12:39:04-04:00" level=debug msg="Shutting down engines" Aug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Aug 02 12:39:04 managed-node2 platform-python[29297]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None Aug 02 12:39:04 managed-node2 systemd[1]: Reloading. Aug 02 12:39:05 managed-node2 dnsmasq[29132]: listening on cni-podman1(#3): fe80::9446:b4ff:fe0c:8150%cni-podman1 Aug 02 12:39:05 managed-node2 platform-python[29458]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None Aug 02 12:39:05 managed-node2 systemd[1]: Reloading. Aug 02 12:39:06 managed-node2 platform-python[29621]: ansible-systemd Invoked with name= scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None Aug 02 12:39:06 managed-node2 systemd[1]: Created slice system-podman\x2dkube.slice. -- Subject: Unit system-podman\x2dkube.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit system-podman\x2dkube.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:39:06 managed-node2 systemd[1]: Starting A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun starting up. 
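Annotation: the stderr block above is the complete debug trace of the single `podman play kube` call that containers.podman.podman_play shells out to (its exact command line is echoed at the top of the trace), and the systemd entries that follow are the role enabling and starting the matching podman-kube@ template instance. A minimal sketch for reproducing both steps by hand on the managed node, using only the path from the log; the systemd-escape output should match the instance name journald prints below:

    # Re-run the same kube play manually (as root):
    podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml

    # Derive the template instance name for that file; expected output:
    # podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service
    systemd-escape --template=podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd2.yml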
Aug 02 12:39:06 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : container 29144 exited with status 137 Aug 02 12:39:06 managed-node2 systemd[1]: libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has successfully entered the 'dead' state. Aug 02 12:39:06 managed-node2 systemd[1]: libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope: Consumed 32ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope completed and consumed the indicated resources. Aug 02 12:39:06 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : container 29166 exited with status 137 Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89)" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Aug 02 12:39:06 managed-node2 systemd[1]: libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has successfully entered the 'dead' state. Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=info msg="Using sqlite as database backend" Aug 02 12:39:06 managed-node2 systemd[1]: libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope: Consumed 33ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope completed and consumed the indicated resources. 
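Annotation: both `exited with status 137` entries are expected here rather than errors: 137 = 128 + 9, i.e. the infra and httpd2 containers were SIGKILLed while the pod started ad hoc by podman_play is torn down so the systemd unit can own a fresh copy (the `Pods stopped:` / `Pods removed:` summary a few entries below is printed by the unit's own play-kube run). A sketch for checking an exit code before cleanup prunes the container, using the short ID from the log:

    podman inspect --format '{{.State.ExitCode}} {{.State.Status}}' 071b72fb9b8953f4c690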
Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb)" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=info msg="Using sqlite as database backend" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using graph driver overlay" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using graph root /var/lib/containers/storage" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using run root /run/containers/storage" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using tmp dir /run/libpod" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using transient store: false" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that metacopy is being used" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that native-diff is not being used" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Initializing event backend file" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime runsc 
initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=info msg="Setting parallel job count to 7" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using graph driver overlay" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using graph root /var/lib/containers/storage" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using run root /run/containers/storage" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using tmp dir /run/libpod" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using transient store: false" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that metacopy is being used" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that native-diff is not being used" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: 
time="2025-08-02T12:39:06-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Initializing event backend file" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=info msg="Setting parallel job count to 7" Aug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay-7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767-merged.mount has successfully entered the 'dead' state. 
Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb)" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Shutting down engines" Aug 02 12:39:06 managed-node2 systemd[1]: libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has successfully entered the 'dead' state. Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered disabled state Aug 02 12:39:06 managed-node2 kernel: device veth502e5636 left promiscuous mode Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered disabled state Aug 02 12:39:06 managed-node2 systemd[1]: run-netns-netns\x2d057bdf77\x2d0e93\x2d7270\x2d6a44\x2d66c62177cd73.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2d057bdf77\x2d0e93\x2d7270\x2d6a44\x2d66c62177cd73.mount has successfully entered the 'dead' state. Aug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89-userdata-shm.mount has successfully entered the 'dead' state. Aug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay-8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1-merged.mount has successfully entered the 'dead' state. Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89)" Aug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Shutting down engines" Aug 02 12:39:06 managed-node2 systemd[1]: libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has successfully entered the 'dead' state. Aug 02 12:39:06 managed-node2 systemd[1]: Stopped libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope. -- Subject: Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished shutting down. Aug 02 12:39:06 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice. -- Subject: Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished shutting down. Aug 02 12:39:06 managed-node2 systemd[1]: machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice: Consumed 209ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice completed and consumed the indicated resources. Aug 02 12:39:06 managed-node2 podman[29628]: Pods stopped: Aug 02 12:39:06 managed-node2 podman[29628]: 90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496 Aug 02 12:39:06 managed-node2 podman[29628]: Pods removed: Aug 02 12:39:06 managed-node2 podman[29628]: 90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496 Aug 02 12:39:06 managed-node2 podman[29628]: Secrets removed: Aug 02 12:39:06 managed-node2 podman[29628]: Volumes removed: Aug 02 12:39:06 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice. -- Subject: Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:39:06 managed-node2 systemd[1]: Started libcontainer container 1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27. -- Subject: Unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope has finished starting up. -- -- The start-up result is done. 
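Annotation: `Pods stopped:` / `Pods removed:` is the replacement summary from the podman-kube@ unit itself: pod 90922c8ca930... (created by the earlier ad-hoc play) is destroyed together with its machine-libpod_pod_*.slice cgroup, and pod 7f2562380f2f... is created in its place under the service. A quick way to confirm the unit now owns the running pod (a sketch; run on the managed node):

    systemctl --no-legend list-units 'podman-kube@*'
    podman pod ps --format '{{.ID}} {{.Name}} {{.Status}}'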
Aug 02 12:39:06 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethb3f38e19: link is not ready Aug 02 12:39:06 managed-node2 NetworkManager[665]: [1754152746.7477] manager: (vethb3f38e19): new Veth device (/org/freedesktop/NetworkManager/Devices/5) Aug 02 12:39:06 managed-node2 systemd-udevd[29789]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Aug 02 12:39:06 managed-node2 systemd-udevd[29789]: Could not generate persistent MAC address for vethb3f38e19: No such file or directory Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered blocking state Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state Aug 02 12:39:06 managed-node2 kernel: device vethb3f38e19 entered promiscuous mode Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered blocking state Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered forwarding state Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state Aug 02 12:39:06 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb3f38e19: link becomes ready Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered blocking state Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered forwarding state Aug 02 12:39:06 managed-node2 NetworkManager[665]: [1754152746.7761] device (vethb3f38e19): carrier: link connected Aug 02 12:39:06 managed-node2 NetworkManager[665]: [1754152746.7763] device (cni-podman1): carrier: link connected Aug 02 12:39:06 managed-node2 dnsmasq[29859]: listening on cni-podman1(#3): 10.89.0.1 Aug 02 12:39:06 managed-node2 dnsmasq[29859]: listening on cni-podman1(#3): fe80::9446:b4ff:fe0c:8150%cni-podman1 Aug 02 12:39:06 managed-node2 dnsmasq[29863]: started, version 2.79 cachesize 150 Aug 02 12:39:06 managed-node2 dnsmasq[29863]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify Aug 02 12:39:06 managed-node2 dnsmasq[29863]: using local addresses only for domain dns.podman Aug 02 12:39:06 managed-node2 dnsmasq[29863]: reading /etc/resolv.conf Aug 02 12:39:06 managed-node2 dnsmasq[29863]: using local addresses only for domain dns.podman Aug 02 12:39:06 managed-node2 dnsmasq[29863]: using nameserver 10.29.169.13#53 Aug 02 12:39:06 managed-node2 dnsmasq[29863]: using nameserver 10.29.170.12#53 Aug 02 12:39:06 managed-node2 dnsmasq[29863]: using nameserver 10.2.32.1#53 Aug 02 12:39:06 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Aug 02 12:39:06 managed-node2 systemd[1]: Started libcontainer container 36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561. -- Subject: Unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:07 managed-node2 systemd[1]: Started libcontainer container 58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5. 
-- Subject: Unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:07 managed-node2 podman[29628]: Pod: Aug 02 12:39:07 managed-node2 podman[29628]: 7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3 Aug 02 12:39:07 managed-node2 podman[29628]: Container: Aug 02 12:39:07 managed-node2 podman[29628]: 58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5 Aug 02 12:39:07 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished starting up. -- -- The start-up result is done. Aug 02 12:39:08 managed-node2 platform-python[30036]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:39:09 managed-node2 platform-python[30161]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:10 managed-node2 platform-python[30285]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:11 managed-node2 platform-python[30408]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:12 managed-node2 platform-python[30696]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:39:13 managed-node2 platform-python[30819]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:13 managed-node2 platform-python[30942]: ansible-stat Invoked with 
path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Aug 02 12:39:13 managed-node2 platform-python[31041]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd3.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152753.1569695-17471-186787888155164/source _original_basename=tmpca25d1vk follow=False checksum=0ee95d54856ad9dce4aa168ba4cfda0f7aaf74cc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Aug 02 12:39:13 managed-node2 systemd[1]: NetworkManager-dispatcher.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Aug 02 12:39:14 managed-node2 platform-python[31167]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:39:14 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice. -- Subject: Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethe290c1c0: link is not ready Aug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered blocking state Aug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state Aug 02 12:39:14 managed-node2 kernel: device vethe290c1c0 entered promiscuous mode Aug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered blocking state Aug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered forwarding state Aug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state Aug 02 12:39:14 managed-node2 NetworkManager[665]: [1754152754.3788] manager: (vethe290c1c0): new Veth device (/org/freedesktop/NetworkManager/Devices/6) Aug 02 12:39:14 managed-node2 systemd-udevd[31214]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
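Annotation: the dnsmasq entries belong to the dnsname CNI plugin: each podman network with DNS enabled gets a dedicated dnsmasq instance that serves the dns.podman domain and re-reads its addnhosts file as pods attach and detach (hence `1 addresses` growing to `2 addresses` when the httpd3 pod joins). A sketch for inspecting it from the host, using only paths and addresses that appear in the log; the record name is an assumption based on the pod name:

    # Addresses the plugin has registered for this network:
    cat /run/containers/cni/dnsname/podman-default-kube-network/addnhosts
    # Query the per-network dnsmasq on the bridge address from the log:
    dig +short @10.89.0.1 httpd2.dns.podman   # assumed record form: <pod-name>.dns.podman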
Aug 02 12:39:14 managed-node2 systemd-udevd[31214]: Could not generate persistent MAC address for vethe290c1c0: No such file or directory Aug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Aug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe290c1c0: link becomes ready Aug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered blocking state Aug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered forwarding state Aug 02 12:39:14 managed-node2 NetworkManager[665]: [1754152754.3907] device (vethe290c1c0): carrier: link connected Aug 02 12:39:14 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses Aug 02 12:39:14 managed-node2 systemd[1]: Started libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope. -- Subject: Unit libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:14 managed-node2 systemd[1]: Started libcontainer container 757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b. -- Subject: Unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:14 managed-node2 systemd[1]: Started libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope. -- Subject: Unit libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:14 managed-node2 systemd[1]: Started libcontainer container c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8. -- Subject: Unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:15 managed-node2 platform-python[31446]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None Aug 02 12:39:15 managed-node2 systemd[1]: Reloading. Aug 02 12:39:15 managed-node2 platform-python[31599]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None Aug 02 12:39:16 managed-node2 systemd[1]: Reloading. 
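Annotation: for httpd3 the role repeats the same three systemd steps it used for httpd2: daemon_reload=True, then enabled=True, then state=started (journald shows `name=` empty in these entries, but the unit identifies itself in the next `Starting ...` message). The equivalent manual sequence, using the instance name printed below, would be roughly:

    systemctl daemon-reload
    systemctl enable 'podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service'
    systemctl start 'podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service'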
Aug 02 12:39:16 managed-node2 platform-python[31762]: ansible-systemd Invoked with name= scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None Aug 02 12:39:16 managed-node2 systemd[1]: Starting A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun starting up. Aug 02 12:39:16 managed-node2 systemd[1]: libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 systemd[1]: libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope: Consumed 33ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope completed and consumed the indicated resources. Aug 02 12:39:16 managed-node2 systemd[1]: libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 systemd[1]: libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope: Consumed 35ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope completed and consumed the indicated resources. Aug 02 12:39:16 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Aug 02 12:39:16 managed-node2 systemd[1]: var-lib-containers-storage-overlay-560605836dd57768a1625bf83fb2efde4d0b4be2bd75173250f2c981226dcfec-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-560605836dd57768a1625bf83fb2efde4d0b4be2bd75173250f2c981226dcfec-merged.mount has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 systemd[1]: libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state Aug 02 12:39:16 managed-node2 kernel: device vethe290c1c0 left promiscuous mode Aug 02 12:39:16 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state Aug 02 12:39:16 managed-node2 systemd[1]: run-netns-netns\x2d925a2bce\x2dbdb1\x2deec4\x2d32ca\x2dde6b846181b3.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2d925a2bce\x2dbdb1\x2deec4\x2d32ca\x2dde6b846181b3.mount has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b-userdata-shm.mount has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 systemd[1]: var-lib-containers-storage-overlay-aecbd6406a5b43a9f34c80dc98df8e948a604d44b70ab60cb4952ed4aea64143-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-aecbd6406a5b43a9f34c80dc98df8e948a604d44b70ab60cb4952ed4aea64143-merged.mount has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 systemd[1]: libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice. -- Subject: Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished shutting down. Aug 02 12:39:16 managed-node2 systemd[1]: machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice: Consumed 199ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice completed and consumed the indicated resources. Aug 02 12:39:17 managed-node2 podman[31769]: Pods stopped: Aug 02 12:39:17 managed-node2 podman[31769]: c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd Aug 02 12:39:17 managed-node2 podman[31769]: Pods removed: Aug 02 12:39:17 managed-node2 podman[31769]: c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd Aug 02 12:39:17 managed-node2 podman[31769]: Secrets removed: Aug 02 12:39:17 managed-node2 podman[31769]: Volumes removed: Aug 02 12:39:17 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice. -- Subject: Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:39:17 managed-node2 systemd[1]: Started libcontainer container c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124. 
-- Subject: Unit libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:17 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth69cd15af: link is not ready Aug 02 12:39:17 managed-node2 NetworkManager[665]: [1754152757.2610] manager: (veth69cd15af): new Veth device (/org/freedesktop/NetworkManager/Devices/7) Aug 02 12:39:17 managed-node2 systemd-udevd[31935]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Aug 02 12:39:17 managed-node2 systemd-udevd[31935]: Could not generate persistent MAC address for veth69cd15af: No such file or directory Aug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered blocking state Aug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered disabled state Aug 02 12:39:17 managed-node2 kernel: device veth69cd15af entered promiscuous mode Aug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered blocking state Aug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered forwarding state Aug 02 12:39:17 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth69cd15af: link becomes ready Aug 02 12:39:17 managed-node2 NetworkManager[665]: [1754152757.2829] device (veth69cd15af): carrier: link connected Aug 02 12:39:17 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses Aug 02 12:39:17 managed-node2 systemd[1]: Started libcontainer container 686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c. -- Subject: Unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:17 managed-node2 systemd[1]: Started libcontainer container 42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c. -- Subject: Unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:17 managed-node2 podman[31769]: Pod: Aug 02 12:39:17 managed-node2 podman[31769]: 728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748 Aug 02 12:39:17 managed-node2 podman[31769]: Container: Aug 02 12:39:17 managed-node2 podman[31769]: 42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c Aug 02 12:39:17 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished starting up. -- -- The start-up result is done. 
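
Note the sequence across the preceding entries: starting the podman-kube@ instance first stops and removes the ad-hoc httpd3 pod created by the earlier play kube run ("Pods stopped:" / "Pods removed:"), then recreates it with new pod and container IDs under the service's control. A quick way to confirm the handover, using the escaped unit name from the log (single quotes keep the shell from consuming the \x2d):

    systemctl is-active 'podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service'
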
Aug 02 12:39:18 managed-node2 sudo[32165]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcqlhppzgtldczaoizfnuaorgtcfrvcv ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152758.2331746-17700-125571240428666/AnsiballZ_command.py' Aug 02 12:39:18 managed-node2 sudo[32165]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:39:18 managed-node2 platform-python[32168]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:18 managed-node2 systemd[25590]: Started podman-32177.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:39:18 managed-node2 sudo[32165]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:39:18 managed-node2 platform-python[32306]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:19 managed-node2 platform-python[32437]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:19 managed-node2 sudo[32575]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fljlyrnggisgqfazmxwyyzqdsrmpnozw ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152759.5230556-17753-36771943884398/AnsiballZ_command.py' Aug 02 12:39:19 managed-node2 sudo[32575]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:39:19 managed-node2 platform-python[32578]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:19 managed-node2 sudo[32575]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:39:20 managed-node2 platform-python[32704]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:20 managed-node2 platform-python[32830]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:21 managed-node2 platform-python[32956]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None 
content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:21 managed-node2 platform-python[33080]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:21 managed-node2 rsyslogd[1025]: imjournal: journal files changed, reloading... [v8.2102.0-15.el8 try https://www.rsyslog.com/e/0 ] Aug 02 12:39:22 managed-node2 platform-python[33205]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_ga01o8zo_podman/httpd1-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:22 managed-node2 platform-python[33329]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_ga01o8zo_podman/httpd2-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:22 managed-node2 platform-python[33453]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_ga01o8zo_podman/httpd3-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:25 managed-node2 platform-python[33702]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:26 managed-node2 platform-python[33831]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:39:30 managed-node2 platform-python[33956]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:39:33 managed-node2 platform-python[34079]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Aug 02 12:39:33 managed-node2 platform-python[34206]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Aug 02 12:39:34 managed-node2 platform-python[34333]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] 
source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Aug 02 12:39:36 managed-node2 platform-python[34456]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:39:39 managed-node2 platform-python[34579]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:39:42 managed-node2 platform-python[34702]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:39:45 managed-node2 platform-python[34825]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Aug 02 12:39:47 managed-node2 platform-python[34986]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Aug 02 12:39:48 managed-node2 platform-python[35109]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Aug 02 12:39:53 managed-node2 platform-python[35232]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Aug 02 12:39:53 managed-node2 platform-python[35356]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:39:54 managed-node2 platform-python[35481]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:54 managed-node2 platform-python[35605]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:55 managed-node2 platform-python[35729]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None 
chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:56 managed-node2 platform-python[35853]: ansible-command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Aug 02 12:39:57 managed-node2 platform-python[35976]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:57 managed-node2 platform-python[36099]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:58 managed-node2 sudo[36222]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgimaanjcdkkcebhasfqpdcfpgwkfami ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152797.8648844-19514-81631354606804/AnsiballZ_podman_image.py' Aug 02 12:39:58 managed-node2 sudo[36222]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36227.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36235.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36243.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36251.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36259.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36268.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
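
The block of tasks above is host preparation rather than deployment: firewalld is installed and started, ports 15001-15003/tcp are opened and given the http_port_t SELinux label, lingering is enabled for podman_basic_user, and getsubids verifies the subordinate ID ranges rootless podman needs. Approximate CLI equivalents of what the firewall and selinux role modules do (illustrative, not the modules' actual implementation):

    # Open the test ports, persistently and for the running config
    firewall-cmd --permanent --add-port=15001-15003/tcp
    firewall-cmd --reload

    # Label the ports so httpd containers may bind them under SELinux
    semanage port -a -t http_port_t -p tcp 15001-15003

    # Rootless prerequisites: a lingering user manager plus subuid/subgid ranges
    loginctl enable-linger podman_basic_user
    getsubids podman_basic_user
    getsubids -g podman_basic_user
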
Aug 02 12:39:58 managed-node2 sudo[36222]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:39:59 managed-node2 platform-python[36397]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:39:59 managed-node2 platform-python[36522]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:00 managed-node2 platform-python[36645]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Aug 02 12:40:00 managed-node2 platform-python[36709]: ansible-file Invoked with owner=podman_basic_user group=3001 mode=0644 dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _original_basename=tmp5c1b5ldh recurse=False state=file path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:00 managed-node2 sudo[36832]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkfbzsoxqmfspcyzxykzglzhyzsybbor ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152800.4994206-19649-125491116375657/AnsiballZ_podman_play.py' Aug 02 12:40:00 managed-node2 sudo[36832]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:40:00 managed-node2 systemd[25590]: Started podman-36843.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
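
This podman_play run passes debug=True and log_level=debug, which is why the module records the exact CLI and podman's full stderr in the entries that follow. The trace is also a good illustration of rootless storage: graph root, run root, and tmp dir all live under the user's home and /run/user/3001 rather than the system paths. One way to see the same difference directly (GraphRoot is a real field in podman info's Go-template output; passing the environment through sudo may require a permissive sudoers policy):

    podman info --format '{{.Store.GraphRoot}}'        # /var/lib/containers/storage as root
    sudo -u podman_basic_user XDG_RUNTIME_DIR=/run/user/3001 \
        podman info --format '{{.Store.GraphRoot}}'    # ~/.local/share/containers/storage
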
Aug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Aug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Aug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-08-02T12:40:00-04:00" level=info msg="/bin/podman filtering at log level debug" time="2025-08-02T12:40:00-04:00" level=debug msg="Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)" time="2025-08-02T12:40:00-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-08-02T12:40:00-04:00" level=info msg="Using sqlite as database backend" time="2025-08-02T12:40:00-04:00" level=debug msg="Using graph driver overlay" time="2025-08-02T12:40:00-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" time="2025-08-02T12:40:00-04:00" level=debug msg="Using run root /run/user/3001/containers" time="2025-08-02T12:40:00-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" time="2025-08-02T12:40:00-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" time="2025-08-02T12:40:00-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" time="2025-08-02T12:40:00-04:00" level=debug msg="Using transient store: false" time="2025-08-02T12:40:00-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-08-02T12:40:00-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:40:00-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:40:00-04:00" level=debug msg="Cached value indicated that metacopy is not being used" time="2025-08-02T12:40:00-04:00" level=debug msg="Cached value indicated that native-diff is usable" time="2025-08-02T12:40:00-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" time="2025-08-02T12:40:00-04:00" level=debug msg="Initializing event backend file" time="2025-08-02T12:40:00-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-08-02T12:40:00-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-08-02T12:40:00-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2025-08-02T12:40:00-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-08-02T12:40:00-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2025-08-02T12:40:00-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-08-02T12:40:00-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found 
for OCI runtime kata: invalid argument" time="2025-08-02T12:40:00-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-08-02T12:40:00-04:00" level=info msg="Setting parallel job count to 7" time="2025-08-02T12:40:00-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:38:52.156183843 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-08-02T12:40:00-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-08-02T12:40:00-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:40:00-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:40:00-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-08-02T12:40:00-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:40:00-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:40:00-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778)" time="2025-08-02T12:40:00-04:00" level=debug msg="exporting opaque data as blob \"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:40:00-04:00" level=debug msg="Pod using bridge network mode" time="2025-08-02T12:40:00-04:00" level=debug msg="Got pod cgroup as /libpod_parent/af868cea690b52212d50213e7cf00f2f99a7e0af0fbb1c22376a1c8272177aef" Error: adding pod to state: name "httpd1" is in use: pod already exists time="2025-08-02T12:40:00-04:00" level=debug msg="Shutting down engines" Aug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125 Aug 02 12:40:00 managed-node2 sudo[36832]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:01 managed-node2 platform-python[36997]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Aug 02 12:40:02 managed-node2 platform-python[37121]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:03 managed-node2 platform-python[37246]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:04 managed-node2 platform-python[37370]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None 
selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:05 managed-node2 platform-python[37493]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:06 managed-node2 platform-python[37784]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:07 managed-node2 platform-python[37909]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:07 managed-node2 platform-python[38032]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Aug 02 12:40:07 managed-node2 platform-python[38096]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd2.yml _original_basename=tmp582cc1u4 recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd2.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:40:08 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice. -- Subject: Unit machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice has finished starting up. -- -- The start-up result is done. 
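
The rootless httpd1 replay above failed with rc 125 ('name "httpd1" is in use: pod already exists'), and the root httpd2 replay recorded next fails the same way: podman play kube refuses to replace a live pod of the same name. The module's recreate option (logged as recreate=None in these invocations) exists for that case; a hypothetical manual recovery would look like:

    # Exit code 125 with "pod already exists": remove the old pod, then replay
    podman pod exists httpd1 && podman pod rm -f httpd1
    podman play kube --start=true /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
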
Aug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml Aug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Aug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-08-02T12:40:08-04:00" level=info msg="/usr/bin/podman filtering at log level debug" time="2025-08-02T12:40:08-04:00" level=debug msg="Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2025-08-02T12:40:08-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-08-02T12:40:08-04:00" level=info msg="Using sqlite as database backend" time="2025-08-02T12:40:08-04:00" level=debug msg="Using graph driver overlay" time="2025-08-02T12:40:08-04:00" level=debug msg="Using graph root /var/lib/containers/storage" time="2025-08-02T12:40:08-04:00" level=debug msg="Using run root /run/containers/storage" time="2025-08-02T12:40:08-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" time="2025-08-02T12:40:08-04:00" level=debug msg="Using tmp dir /run/libpod" time="2025-08-02T12:40:08-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" time="2025-08-02T12:40:08-04:00" level=debug msg="Using transient store: false" time="2025-08-02T12:40:08-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-08-02T12:40:08-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:40:08-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:40:08-04:00" level=debug msg="Cached value indicated that metacopy is being used" time="2025-08-02T12:40:08-04:00" level=debug msg="Cached value indicated that native-diff is not being used" time="2025-08-02T12:40:08-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" time="2025-08-02T12:40:08-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" time="2025-08-02T12:40:08-04:00" level=debug msg="Initializing event backend file" time="2025-08-02T12:40:08-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2025-08-02T12:40:08-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2025-08-02T12:40:08-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-08-02T12:40:08-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-08-02T12:40:08-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-08-02T12:40:08-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2025-08-02T12:40:08-04:00" level=debug msg="Configured OCI runtime 
youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-08-02T12:40:08-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-08-02T12:40:08-04:00" level=info msg="Setting parallel job count to 7" time="2025-08-02T12:40:08-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:36:24.472660556 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-08-02T12:40:08-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-08-02T12:40:08-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:40:08-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:40:08-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-08-02T12:40:08-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:40:08-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:40:08-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb)" time="2025-08-02T12:40:08-04:00" level=debug msg="exporting opaque data as blob \"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:40:08-04:00" level=debug msg="Pod using bridge network mode" time="2025-08-02T12:40:08-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice for parent machine.slice and name libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d" time="2025-08-02T12:40:08-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice" time="2025-08-02T12:40:08-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice" Error: adding pod to state: name "httpd2" is in use: pod already exists time="2025-08-02T12:40:08-04:00" level=debug msg="Shutting down engines" Aug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125 Aug 02 12:40:09 managed-node2 platform-python[38380]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:11 managed-node2 platform-python[38505]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:12 managed-node2 platform-python[38629]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd3 
state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:12 managed-node2 platform-python[38752]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:14 managed-node2 platform-python[39041]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:14 managed-node2 platform-python[39166]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:15 managed-node2 platform-python[39289]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Aug 02 12:40:15 managed-node2 platform-python[39353]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd3.yml _original_basename=tmp1h9opetg recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd3.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:15 managed-node2 platform-python[39476]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:40:15 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_09fcb92602708a13ddcd36fafab1114193416cfb9688e13dc060dc675a9ebb02.slice. 
-- Subject: Unit machine-libpod_pod_09fcb92602708a13ddcd36fafab1114193416cfb9688e13dc060dc675a9ebb02.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_09fcb92602708a13ddcd36fafab1114193416cfb9688e13dc060dc675a9ebb02.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:40:16 managed-node2 sudo[39637]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqvcznhcsisjvfqgdskaqojrqwhygefl ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152816.415957-20432-182641700984178/AnsiballZ_command.py' Aug 02 12:40:16 managed-node2 sudo[39637]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:16 managed-node2 platform-python[39640]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:16 managed-node2 systemd[25590]: Started podman-39648.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:40:16 managed-node2 sudo[39637]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:17 managed-node2 platform-python[39778]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:17 managed-node2 platform-python[39909]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:17 managed-node2 sudo[40040]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omdhsbbfmibalzpyvmgfxmtrjimxacss ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152817.7911968-20497-78215437807137/AnsiballZ_command.py' Aug 02 12:40:17 managed-node2 sudo[40040]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:18 managed-node2 platform-python[40043]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:18 managed-node2 sudo[40040]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:18 managed-node2 platform-python[40169]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:18 managed-node2 platform-python[40295]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:19 managed-node2 platform-python[40421]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET 
follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:19 managed-node2 platform-python[40545]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:20 managed-node2 platform-python[40669]: ansible-uri Invoked with url=http://localhost:15003/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:23 managed-node2 platform-python[40918]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:24 managed-node2 platform-python[41047]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:27 managed-node2 platform-python[41172]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Aug 02 12:40:28 managed-node2 platform-python[41296]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:28 managed-node2 platform-python[41421]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:29 managed-node2 platform-python[41545]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:30 managed-node2 platform-python[41669]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:31 managed-node2 
platform-python[41793]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:31 managed-node2 sudo[41918]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxfxukflwegbkcgylwjhylqbzyvcoltj ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152831.2720408-21172-116283974621483/AnsiballZ_systemd.py' Aug 02 12:40:31 managed-node2 sudo[41918]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:31 managed-node2 platform-python[41921]: ansible-systemd Invoked with name= scope=user state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Aug 02 12:40:31 managed-node2 systemd[25590]: Reloading. Aug 02 12:40:31 managed-node2 systemd[25590]: Stopping A template for running K8s workloads via podman-kube-play... -- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Aug 02 12:40:31 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered disabled state Aug 02 12:40:31 managed-node2 kernel: device vethfa4f074b left promiscuous mode Aug 02 12:40:31 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered disabled state Aug 02 12:40:32 managed-node2 podman[41937]: Pods stopped: Aug 02 12:40:32 managed-node2 podman[41937]: 5e85626264c7b01bb8a75935ff611e0d589096a98e89e05fd7aa9037e892318a Aug 02 12:40:32 managed-node2 podman[41937]: Pods removed: Aug 02 12:40:32 managed-node2 podman[41937]: 5e85626264c7b01bb8a75935ff611e0d589096a98e89e05fd7aa9037e892318a Aug 02 12:40:32 managed-node2 podman[41937]: Secrets removed: Aug 02 12:40:32 managed-node2 podman[41937]: Volumes removed: Aug 02 12:40:32 managed-node2 systemd[25590]: Stopped A template for running K8s workloads via podman-kube-play. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. 
Aug 02 12:40:32 managed-node2 sudo[41918]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:32 managed-node2 platform-python[42211]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:32 managed-node2 sudo[42336]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvztponjzcqifptzcqpojjxosoczfayg ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152832.762454-21237-209128215005834/AnsiballZ_podman_play.py' Aug 02 12:40:32 managed-node2 sudo[42336]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Aug 02 12:40:33 managed-node2 systemd[25590]: Started podman-42347.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
Aug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman kube play --down /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Aug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped: Pods removed: Secrets removed: Volumes removed: Aug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: Aug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Aug 02 12:40:33 managed-node2 sudo[42336]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:33 managed-node2 platform-python[42476]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:34 managed-node2 platform-python[42599]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Aug 02 12:40:35 managed-node2 platform-python[42723]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:36 managed-node2 platform-python[42848]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:37 managed-node2 platform-python[42972]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Aug 02 12:40:37 managed-node2 systemd[1]: Reloading. Aug 02 12:40:37 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun shutting down. Aug 02 12:40:37 managed-node2 systemd[1]: libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope has successfully entered the 'dead' state. Aug 02 12:40:37 managed-node2 systemd[1]: libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope: Consumed 31ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope completed and consumed the indicated resources. Aug 02 12:40:37 managed-node2 systemd[1]: libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope has successfully entered the 'dead' state. Aug 02 12:40:37 managed-node2 systemd[1]: libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope: Consumed 35ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope completed and consumed the indicated resources. Aug 02 12:40:37 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Aug 02 12:40:37 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state Aug 02 12:40:37 managed-node2 kernel: device vethb3f38e19 left promiscuous mode Aug 02 12:40:37 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state Aug 02 12:40:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay-5a0e125408b3f62c274917d9a997808220e1c2685a2d8ff8405416971a11f6c0-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-5a0e125408b3f62c274917d9a997808220e1c2685a2d8ff8405416971a11f6c0-merged.mount has successfully entered the 'dead' state. Aug 02 12:40:37 managed-node2 systemd[1]: run-netns-netns\x2de8368567\x2d59b6\x2d542f\x2d1a97\x2df2ca68e931e3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2de8368567\x2d59b6\x2d542f\x2d1a97\x2df2ca68e931e3.mount has successfully entered the 'dead' state. Aug 02 12:40:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561-userdata-shm.mount has successfully entered the 'dead' state. Aug 02 12:40:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay-15c93535d4532096f93e6259ac42e3e35cf574dd19355624da5c37ad60d78144-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-15c93535d4532096f93e6259ac42e3e35cf574dd19355624da5c37ad60d78144-merged.mount has successfully entered the 'dead' state. Aug 02 12:40:37 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice. -- Subject: Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has finished shutting down. 
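The sudo wrappers earlier in this teardown show how the test drives rootless Podman: it becomes podman_basic_user and sets XDG_RUNTIME_DIR=/run/user/3001 so podman finds that user's runtime. A sketch of the recorded teardown pair (podman_play arguments copied from the invocation; the become/environment handling is an assumption about how the role reproduces the sudo wrapper; the follow-up file removal runs as root in the journal):

- name: Tear down the rootless kube play workload
  containers.podman.podman_play:
    kube_file: /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
    state: absent
  become: true
  become_user: podman_basic_user
  environment:
    XDG_RUNTIME_DIR: /run/user/3001   # matches the sudo command line in the journal

- name: Remove the kube YAML file itself
  ansible.builtin.file:
    path: /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
    state: absent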
Aug 02 12:40:37 managed-node2 systemd[1]: machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice: Consumed 67ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice completed and consumed the indicated resources. Aug 02 12:40:38 managed-node2 systemd[1]: libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope has successfully entered the 'dead' state. Aug 02 12:40:38 managed-node2 systemd[1]: libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope: Consumed 35ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope completed and consumed the indicated resources. Aug 02 12:40:38 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27-userdata-shm.mount has successfully entered the 'dead' state. Aug 02 12:40:38 managed-node2 podman[43008]: Pods stopped: Aug 02 12:40:38 managed-node2 podman[43008]: 7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3 Aug 02 12:40:38 managed-node2 podman[43008]: Pods removed: Aug 02 12:40:38 managed-node2 podman[43008]: 7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3 Aug 02 12:40:38 managed-node2 podman[43008]: Secrets removed: Aug 02 12:40:38 managed-node2 podman[43008]: Volumes removed: Aug 02 12:40:38 managed-node2 dnsmasq[29863]: exiting on receipt of SIGTERM Aug 02 12:40:38 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has successfully entered the 'dead' state. Aug 02 12:40:38 managed-node2 systemd[1]: Stopped A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished shutting down. Aug 02 12:40:38 managed-node2 platform-python[43285]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:38 managed-node2 systemd[1]: var-lib-containers-storage-overlay-1ad187e54185c2c7cbe64b95feb5c0fbe8c425581baa88a9f71bb6eaaa92a272-merged.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-1ad187e54185c2c7cbe64b95feb5c0fbe8c425581baa88a9f71bb6eaaa92a272-merged.mount has successfully entered the 'dead' state. Aug 02 12:40:38 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd2.yml Aug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman kube play --down /etc/containers/ansible-kubernetes.d/httpd2.yml Aug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped: Pods removed: Secrets removed: Volumes removed: Aug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: Aug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Aug 02 12:40:39 managed-node2 platform-python[43546]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:40 managed-node2 platform-python[43669]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:42 managed-node2 platform-python[43794]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:42 managed-node2 platform-python[43918]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Aug 02 12:40:42 managed-node2 systemd[1]: Reloading. Aug 02 12:40:43 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun shutting down. Aug 02 12:40:43 managed-node2 systemd[1]: libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope: Consumed 32ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope completed and consumed the indicated resources. Aug 02 12:40:43 managed-node2 systemd[1]: libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope: Consumed 34ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope completed and consumed the indicated resources. Aug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay-65e7014be45047102e9045dcdd9345e82206f8672e1c2920f53097bbdf3fcc43-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-65e7014be45047102e9045dcdd9345e82206f8672e1c2920f53097bbdf3fcc43-merged.mount has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered disabled state Aug 02 12:40:43 managed-node2 kernel: device veth69cd15af left promiscuous mode Aug 02 12:40:43 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered disabled state Aug 02 12:40:43 managed-node2 systemd[1]: run-netns-netns\x2dfd171033\x2dc8d0\x2d5ddd\x2d985b\x2d865fa20d123b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2dfd171033\x2dc8d0\x2d5ddd\x2d985b\x2d865fa20d123b.mount has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c-userdata-shm.mount has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay-bd51c7fcb084eb966e3f3b68637936b0c8d50499aba52b4ffce32c4ad877cf0d-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-bd51c7fcb084eb966e3f3b68637936b0c8d50499aba52b4ffce32c4ad877cf0d-merged.mount has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice. 
-- Subject: Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished shutting down. Aug 02 12:40:43 managed-node2 systemd[1]: machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice: Consumed 66ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice completed and consumed the indicated resources. Aug 02 12:40:43 managed-node2 podman[43954]: Pods stopped: Aug 02 12:40:43 managed-node2 podman[43954]: 728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748 Aug 02 12:40:43 managed-node2 podman[43954]: Pods removed: Aug 02 12:40:43 managed-node2 podman[43954]: 728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748 Aug 02 12:40:43 managed-node2 podman[43954]: Secrets removed: Aug 02 12:40:43 managed-node2 podman[43954]: Volumes removed: Aug 02 12:40:43 managed-node2 systemd[1]: libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope: Consumed 35ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope completed and consumed the indicated resources. Aug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124-userdata-shm.mount has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: Stopped A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished shutting down. 
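The unit names in these entries, e.g. podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, are the kube file paths run through systemd's escaping rules ('/' becomes '-', a literal '-' becomes \x2d) and plugged into the podman-kube@.service template. A sketch of deriving and stopping such an instance (hypothetical task names; systemd-escape is the standard tool for this, though the journal does not show which command the role actually uses):

- name: Compute the escaped template instance for a kube file
  ansible.builtin.command:
    cmd: systemd-escape --template=podman-kube@.service /etc/containers/ansible-kubernetes.d/httpd3.yml
  register: kube_unit
  changed_when: false

- name: Stop and disable the per-file service
  ansible.builtin.systemd:
    name: "{{ kube_unit.stdout }}"
    scope: system
    state: stopped
    enabled: false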
Aug 02 12:40:44 managed-node2 platform-python[44224]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:44 managed-node2 systemd[1]: var-lib-containers-storage-overlay-7cab78c749c43bcbce8111bde995b9f5abcdf949ac791b25a3b873fde27f6845-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-7cab78c749c43bcbce8111bde995b9f5abcdf949ac791b25a3b873fde27f6845-merged.mount has successfully entered the 'dead' state. Aug 02 12:40:44 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Aug 02 12:40:44 managed-node2 platform-python[44349]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:40:44 managed-node2 platform-python[44349]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd3.yml Aug 02 12:40:44 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
Aug 02 12:40:44 managed-node2 platform-python[44486]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:45 managed-node2 platform-python[44609]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None Aug 02 12:40:46 managed-node2 platform-python[44733]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:47 managed-node2 sudo[44858]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olrmvkvglxbsttmchncdtptzsqpgbnrc ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152846.6847095-21950-187401977583699/AnsiballZ_podman_container_info.py' Aug 02 12:40:47 managed-node2 sudo[44858]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:47 managed-node2 platform-python[44861]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Aug 02 12:40:47 managed-node2 systemd[25590]: Started podman-44863.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:40:47 managed-node2 sudo[44858]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:47 managed-node2 sudo[44992]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpjmlfvygeamsbgxiibmynquopakmqzw ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152847.3914187-21987-273657899931369/AnsiballZ_command.py' Aug 02 12:40:47 managed-node2 sudo[44992]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:47 managed-node2 platform-python[44995]: ansible-command Invoked with _raw_params=podman network ls -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:47 managed-node2 systemd[25590]: Started podman-44997.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
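After the teardown the test inventories what is left in the user's Podman: container info, then the network and secret listings seen in the surrounding entries, all through the same become/XDG_RUNTIME_DIR wrapper. A compact sketch (commands copied from the journal; the loop is only an illustration):

- name: Gather remaining containers for the rootless user
  containers.podman.podman_container_info:
  become: true
  become_user: podman_basic_user
  environment:
    XDG_RUNTIME_DIR: /run/user/3001

- name: List leftover networks and secrets
  ansible.builtin.command: "{{ item }}"
  loop:
    - podman network ls -q
    - podman secret ls -n -q
  become: true
  become_user: podman_basic_user
  environment:
    XDG_RUNTIME_DIR: /run/user/3001
  changed_when: false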
Aug 02 12:40:47 managed-node2 sudo[44992]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:47 managed-node2 sudo[45152]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkszomlmijhetpvbizxheimorvpsvgdk ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152847.866539-22018-22833626881705/AnsiballZ_command.py' Aug 02 12:40:47 managed-node2 sudo[45152]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:48 managed-node2 platform-python[45155]: ansible-command Invoked with _raw_params=podman secret ls -n -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:48 managed-node2 systemd[25590]: Started podman-45157.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:40:48 managed-node2 sudo[45152]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:48 managed-node2 platform-python[45287]: ansible-command Invoked with removes=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl disable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None stdin=None Aug 02 12:40:48 managed-node2 systemd[1]: Stopping User Manager for UID 3001... -- Subject: Unit user@3001.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@3001.service has begun shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopping podman-pause-9fcbd008.scope. -- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Default. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopping D-Bus User Message Bus... -- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Removed slice podman\x2dkube.slice. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped D-Bus User Message Bus. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Basic System. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Sockets. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Timers. 
-- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped Mark boot as successful after the user session has run 2 minutes. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Paths. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Closed D-Bus User Message Bus Socket. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped podman-pause-9fcbd008.scope. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Removed slice user.slice. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Reached target Shutdown. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:40:48 managed-node2 systemd[25590]: Started Exit the Session. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:40:48 managed-node2 systemd[25590]: Reached target Exit the Session. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:40:48 managed-node2 systemd[1]: user@3001.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit user@3001.service has successfully entered the 'dead' state. Aug 02 12:40:48 managed-node2 systemd[1]: Stopped User Manager for UID 3001. -- Subject: Unit user@3001.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@3001.service has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[1]: Stopping User runtime directory /run/user/3001... -- Subject: Unit user-runtime-dir@3001.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@3001.service has begun shutting down. Aug 02 12:40:48 managed-node2 systemd[1]: run-user-3001.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-user-3001.mount has successfully entered the 'dead' state. Aug 02 12:40:48 managed-node2 systemd[1]: user-runtime-dir@3001.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit user-runtime-dir@3001.service has successfully entered the 'dead' state. 
Aug 02 12:40:48 managed-node2 systemd[1]: Stopped User runtime directory /run/user/3001. -- Subject: Unit user-runtime-dir@3001.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@3001.service has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[1]: Removed slice User Slice of UID 3001. -- Subject: Unit user-3001.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-3001.slice has finished shutting down. Aug 02 12:40:48 managed-node2 platform-python[45419]: ansible-command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:49 managed-node2 sudo[45543]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbqbllrdzcpiqglhdxzdzawivaqzmpyy ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152849.62423-22100-37847572003711/AnsiballZ_command.py' Aug 02 12:40:49 managed-node2 sudo[45543]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:49 managed-node2 platform-python[45546]: ansible-command Invoked with _raw_params=podman pod exists httpd1 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:49 managed-node2 sudo[45543]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:50 managed-node2 platform-python[45676]: ansible-command Invoked with _raw_params=podman pod exists httpd2 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Aug 02 12:40:50 managed-node2 platform-python[45806]: ansible-command Invoked with _raw_params=podman pod exists httpd3 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
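Two verification idioms are visible here: loginctl disable-linger is guarded with removes= so it only runs while the linger file still exists, and podman pod exists is used purely for its return code (0 if the pod exists, 1 if not). A sketch of both (the failed_when handling is an assumption; the journal does not show how the result is consumed):

- name: Cancel lingering only if it is still enabled
  ansible.builtin.command:
    cmd: loginctl disable-linger podman_basic_user
    removes: /var/lib/systemd/linger/podman_basic_user

- name: Assert that the pod is gone
  ansible.builtin.command:
    cmd: podman pod exists httpd1
  register: pod_check
  changed_when: false
  failed_when: pod_check.rc == 0   # rc 0 would mean the pod still exists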
Aug 02 12:40:50 managed-node2 sudo[45937]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kquekbnhuqoxfuhqzmvgbrdfuqcyvrpf ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152850.8293648-22145-217225041127031/AnsiballZ_command.py' Aug 02 12:40:50 managed-node2 sudo[45937]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:51 managed-node2 platform-python[45940]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:51 managed-node2 sudo[45937]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:51 managed-node2 platform-python[46066]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:51 managed-node2 platform-python[46192]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:52 managed-node2 platform-python[46318]: ansible-stat Invoked with path=/var/lib/systemd/linger/podman_basic_user follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:55 managed-node2 platform-python[46566]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:56 managed-node2 platform-python[46695]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Aug 02 12:40:57 managed-node2 platform-python[46819]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:00 managed-node2 platform-python[46944]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Aug 02 12:41:01 managed-node2 platform-python[47068]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:01 managed-node2 platform-python[47193]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:01 managed-node2 platform-python[47317]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:03 managed-node2 platform-python[47441]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:03 managed-node2 platform-python[47565]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:04 managed-node2 platform-python[47688]: ansible-stat Invoked with 
path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:04 managed-node2 platform-python[47811]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:06 managed-node2 platform-python[47934]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Aug 02 12:41:06 managed-node2 platform-python[48058]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:08 managed-node2 platform-python[48183]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:08 managed-node2 platform-python[48307]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Aug 02 12:41:09 managed-node2 platform-python[48434]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:09 managed-node2 platform-python[48557]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:11 managed-node2 platform-python[48680]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:12 managed-node2 platform-python[48805]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:13 managed-node2 platform-python[48929]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Aug 02 12:41:14 managed-node2 platform-python[49056]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:14 managed-node2 platform-python[49179]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:15 managed-node2 platform-python[49302]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None Aug 02 12:41:16 managed-node2 platform-python[49426]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:17 managed-node2 platform-python[49549]: ansible-file Invoked with path=/etc/containers/storage.conf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:17 managed-node2 platform-python[49672]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:18 managed-node2 sshd[49693]: Accepted publickey for root from 10.31.46.71 port 34968 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Aug 02 12:41:18 managed-node2 systemd-logind[591]: New session 9 of user root. -- Subject: A new session 9 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 9 has been created for the user root. -- -- The leading process of the session is 49693. Aug 02 12:41:18 managed-node2 systemd[1]: Started Session 9 of user root. -- Subject: Unit session-9.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-9.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:41:18 managed-node2 sshd[49693]: pam_unix(sshd:session): session opened for user root by (uid=0) Aug 02 12:41:18 managed-node2 sshd[49696]: Received disconnect from 10.31.46.71 port 34968:11: disconnected by user Aug 02 12:41:18 managed-node2 sshd[49696]: Disconnected from user root 10.31.46.71 port 34968 Aug 02 12:41:18 managed-node2 sshd[49693]: pam_unix(sshd:session): session closed for user root Aug 02 12:41:18 managed-node2 systemd[1]: session-9.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-9.scope has successfully entered the 'dead' state. Aug 02 12:41:18 managed-node2 systemd-logind[591]: Session 9 logged out. Waiting for processes to exit. Aug 02 12:41:18 managed-node2 systemd-logind[591]: Removed session 9. 
-- Subject: Session 9 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 9 has been terminated. Aug 02 12:41:20 managed-node2 platform-python[49858]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Aug 02 12:41:21 managed-node2 platform-python[49985]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:21 managed-node2 platform-python[50108]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:24 managed-node2 platform-python[50356]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:25 managed-node2 platform-python[50485]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Aug 02 12:41:26 managed-node2 platform-python[50609]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:27 managed-node2 sshd[50632]: Accepted publickey for root from 10.31.46.71 port 55872 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Aug 02 12:41:27 managed-node2 systemd-logind[591]: New session 10 of user root. -- Subject: A new session 10 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 10 has been created for the user root. -- -- The leading process of the session is 50632. Aug 02 12:41:27 managed-node2 systemd[1]: Started Session 10 of user root. -- Subject: Unit session-10.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-10.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:41:27 managed-node2 sshd[50632]: pam_unix(sshd:session): session opened for user root by (uid=0) Aug 02 12:41:27 managed-node2 sshd[50635]: Received disconnect from 10.31.46.71 port 55872:11: disconnected by user Aug 02 12:41:27 managed-node2 sshd[50635]: Disconnected from user root 10.31.46.71 port 55872 Aug 02 12:41:27 managed-node2 sshd[50632]: pam_unix(sshd:session): session closed for user root Aug 02 12:41:27 managed-node2 systemd[1]: session-10.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-10.scope has successfully entered the 'dead' state. Aug 02 12:41:27 managed-node2 systemd-logind[591]: Session 10 logged out. Waiting for processes to exit. Aug 02 12:41:27 managed-node2 systemd-logind[591]: Removed session 10. -- Subject: Session 10 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 10 has been terminated. 
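The ansible-setup entry above gathers only a handful of distribution facts instead of the full set, which keeps repeated role runs cheap. The equivalent explicit task (subset list copied from the logged parameters):

- name: Gather just the distribution facts
  ansible.builtin.setup:
    gather_subset:
      - '!all'
      - '!min'
      - distribution
      - distribution_major_version
      - distribution_version
      - os_family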
Aug 02 12:41:29 managed-node2 platform-python[50797]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Aug 02 12:41:33 managed-node2 platform-python[50949]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:33 managed-node2 platform-python[51072]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:35 managed-node2 platform-python[51320]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:36 managed-node2 platform-python[51449]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Aug 02 12:41:37 managed-node2 platform-python[51573]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:41 managed-node2 sshd[51596]: Accepted publickey for root from 10.31.46.71 port 38190 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Aug 02 12:41:41 managed-node2 systemd[1]: Started Session 11 of user root. -- Subject: Unit session-11.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-11.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:41:41 managed-node2 systemd-logind[591]: New session 11 of user root. -- Subject: A new session 11 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 11 has been created for the user root. -- -- The leading process of the session is 51596. Aug 02 12:41:41 managed-node2 sshd[51596]: pam_unix(sshd:session): session opened for user root by (uid=0) Aug 02 12:41:41 managed-node2 sshd[51599]: Received disconnect from 10.31.46.71 port 38190:11: disconnected by user Aug 02 12:41:41 managed-node2 sshd[51599]: Disconnected from user root 10.31.46.71 port 38190 Aug 02 12:41:41 managed-node2 sshd[51596]: pam_unix(sshd:session): session closed for user root Aug 02 12:41:41 managed-node2 systemd[1]: session-11.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-11.scope has successfully entered the 'dead' state. Aug 02 12:41:41 managed-node2 systemd-logind[591]: Session 11 logged out. Waiting for processes to exit. Aug 02 12:41:41 managed-node2 systemd-logind[591]: Removed session 11. -- Subject: Session 11 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 11 has been terminated. 
Aug 02 12:41:43 managed-node2 platform-python[51761]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Aug 02 12:41:43 managed-node2 platform-python[51913]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:44 managed-node2 platform-python[52036]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:45 managed-node2 platform-python[52160]: ansible-dnf Invoked with name=['python3-pyasn1', 'python3-cryptography', 'python3-dbus'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:41:48 managed-node2 platform-python[52288]: ansible-dnf Invoked with name=['certmonger'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Aug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Aug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Aug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Aug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Aug 02 12:41:51 managed-node2 systemd[1]: Reloading. Aug 02 12:41:51 managed-node2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. -- Subject: Unit run-r5b158d19759a4bbaa61aee183ab0cad0.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit run-r5b158d19759a4bbaa61aee183ab0cad0.service has finished starting up. -- -- The start-up result is done. Aug 02 12:41:51 managed-node2 systemd[1]: Starting man-db-cache-update.service... -- Subject: Unit man-db-cache-update.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has begun starting up. Aug 02 12:41:52 managed-node2 systemd[1]: Reloading. Aug 02 12:41:52 managed-node2 systemd[1]: man-db-cache-update.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit man-db-cache-update.service has successfully entered the 'dead' state. Aug 02 12:41:52 managed-node2 systemd[1]: Started man-db-cache-update.service. -- Subject: Unit man-db-cache-update.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has finished starting up. -- -- The start-up result is done. 
Aug 02 12:41:52 managed-node2 systemd[1]: run-r5b158d19759a4bbaa61aee183ab0cad0.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-r5b158d19759a4bbaa61aee183ab0cad0.service has successfully entered the 'dead' state. Aug 02 12:41:53 managed-node2 platform-python[52920]: ansible-file Invoked with name=/etc/certmonger//pre-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//pre-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:53 managed-node2 platform-python[53043]: ansible-file Invoked with name=/etc/certmonger//post-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//post-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:54 managed-node2 platform-python[53166]: ansible-systemd Invoked with name=certmonger state=started enabled=True daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Aug 02 12:41:54 managed-node2 systemd[1]: Reloading. Aug 02 12:41:54 managed-node2 systemd[1]: Starting Certificate monitoring and PKI enrollment... -- Subject: Unit certmonger.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit certmonger.service has begun starting up. Aug 02 12:41:54 managed-node2 systemd[1]: Started Certificate monitoring and PKI enrollment. -- Subject: Unit certmonger.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit certmonger.service has finished starting up. -- -- The start-up result is done. 
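The file and systemd invocations above map to tasks of roughly this shape (sketch based on the logged module arguments; the doubled slash in /etc/certmonger//pre-scripts is presumably just path concatenation inside the role):

- name: Create certmonger script directories
  file:
    path: "{{ item }}"
    state: directory
    owner: root
    group: root
    mode: '0700'
  loop:
    - /etc/certmonger/pre-scripts
    - /etc/certmonger/post-scripts

- name: Start and enable certmonger
  systemd:
    name: certmonger
    state: started
    enabled: true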
Aug 02 12:41:55 managed-node2 platform-python[53359]: ansible-fedora.linux_system_roles.certificate_request Invoked with name=quadlet_demo dns=['localhost'] directory=/etc/pki/tls wait=True ca=self-sign __header=# # Ansible managed # # system_role:certificate booted=True provider_config_directory=/etc/certmonger provider=certmonger key_usage=['digitalSignature', 'keyEncipherment'] extended_key_usage=['id-kp-serverAuth', 'id-kp-clientAuth'] auto_renew=True ip=None email=None common_name=None country=None state=None locality=None organization=None organizational_unit=None contact_email=None key_size=None owner=None group=None mode=None principal=None run_before=None run_after=None
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
[the preceding certmonger "Wrote to" message repeats ~27 times at this point; duplicates collapsed]
Aug 02 12:41:55 managed-node2 certmonger[53375]: Certificate in file "/etc/pki/tls/certs/quadlet_demo.crt" issued by CA and saved.
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:56 managed-node2 platform-python[53497]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt
Aug 02 12:41:56 managed-node2 platform-python[53620]: ansible-slurp Invoked with path=/etc/pki/tls/private/quadlet_demo.key src=/etc/pki/tls/private/quadlet_demo.key
Aug 02 12:41:57 managed-node2 platform-python[53743]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt
Aug 02 12:41:57 managed-node2 platform-python[53866]: ansible-command Invoked with _raw_params=getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:41:57 managed-node2 certmonger[53202]: 2025-08-02 12:41:57 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:58 managed-node2 platform-python[53990]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Aug 02 12:41:58 managed-node2 platform-python[54113]: ansible-file Invoked with path=/etc/pki/tls/private/quadlet_demo.key state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Aug 02 12:41:58 managed-node2 platform-python[54236]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
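That sequence — request the certificate, wait for issuance, slurp the cert and key back to the controller, then getcert stop-tracking and delete the files — is the test exercising the certificate role end-to-end and cleaning up after itself. The request corresponds to role input like this (sketch; certificate_requests is the certificate role's documented interface, values taken from the logged invocation):

- name: Issue a self-signed test certificate
  include_role:
    name: fedora.linux_system_roles.certificate
  vars:
    certificate_requests:
      - name: quadlet_demo
        dns: ['localhost']
        ca: self-sign

Stopping tracking with getcert is what keeps certmonger from re-issuing the certificate after the test removes the files.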
Aug 02 12:41:59 managed-node2 platform-python[54359]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:41:59 managed-node2 platform-python[54482]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:42:02 managed-node2 platform-python[54730]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:42:03 managed-node2 platform-python[54859]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None
Aug 02 12:42:03 managed-node2 platform-python[54983]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:42:06 managed-node2 platform-python[55108]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:42:06 managed-node2 platform-python[55231]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:42:07 managed-node2 platform-python[55354]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:42:08 managed-node2 platform-python[55478]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:42:11 managed-node2 platform-python[55601]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None
Aug 02 12:42:11 managed-node2 platform-python[55728]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None
Aug 02 12:42:12 managed-node2 platform-python[55855]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None
Aug 02 12:42:13 managed-node2 platform-python[55978]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0
ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Aug 02 12:42:15 managed-node2 platform-python[56101]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None TASK [Check] ******************************************************************* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:148 Saturday 02 August 2025 12:42:15 -0400 (0:00:00.498) 0:00:33.206 ******* ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "ps", "-a" ], "delta": "0:00:00.075855", "end": "2025-08-02 12:42:15.955093", "rc": 0, "start": "2025-08-02 12:42:15.879238" } STDOUT: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES TASK [Check pods] ************************************************************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:152 Saturday 02 August 2025 12:42:16 -0400 (0:00:00.475) 0:00:33.682 ******* ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "pod", "ps", "--ctr-ids", "--ctr-names", "--ctr-status" ], "delta": "0:00:00.035895", "end": "2025-08-02 12:42:16.420260", "failed_when_result": false, "rc": 0, "start": "2025-08-02 12:42:16.384365" } STDOUT: POD ID NAME STATUS CREATED INFRA ID IDS NAMES STATUS TASK [Check systemd] *********************************************************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:157 Saturday 02 August 2025 12:42:16 -0400 (0:00:00.442) 0:00:34.124 ******* ok: [managed-node2] => { "changed": false, "cmd": "set -euo pipefail; systemctl list-units --all | grep quadlet", "delta": "0:00:00.010289", "end": "2025-08-02 12:42:16.800371", "failed_when_result": false, "rc": 1, "start": "2025-08-02 12:42:16.790082" } MSG: non-zero return code TASK [LS] ********************************************************************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:165 Saturday 02 August 2025 12:42:16 -0400 (0:00:00.456) 0:00:34.580 ******* ok: [managed-node2] => { "changed": false, "cmd": [ "ls", "-alrtF", "/etc/systemd/system" ], "delta": "0:00:00.003681", "end": "2025-08-02 12:42:17.295264", "failed_when_result": false, "rc": 0, "start": "2025-08-02 12:42:17.291583" } STDOUT: total 8 lrwxrwxrwx. 1 root root 9 May 11 2019 systemd-timedated.service -> /dev/null drwxr-xr-x. 4 root root 169 May 29 2024 ../ lrwxrwxrwx. 1 root root 39 May 29 2024 syslog.service -> /usr/lib/systemd/system/rsyslog.service drwxr-xr-x. 2 root root 32 May 29 2024 getty.target.wants/ lrwxrwxrwx. 1 root root 37 May 29 2024 ctrl-alt-del.target -> /usr/lib/systemd/system/reboot.target lrwxrwxrwx. 1 root root 57 May 29 2024 dbus-org.freedesktop.nm-dispatcher.service -> /usr/lib/systemd/system/NetworkManager-dispatcher.service drwxr-xr-x. 2 root root 48 May 29 2024 network-online.target.wants/ lrwxrwxrwx. 1 root root 41 May 29 2024 dbus-org.freedesktop.timedate1.service -> /usr/lib/systemd/system/timedatex.service drwxr-xr-x. 2 root root 61 May 29 2024 timers.target.wants/ drwxr-xr-x. 2 root root 31 May 29 2024 basic.target.wants/ drwxr-xr-x. 
2 root root 38 May 29 2024 dev-virtio\x2dports-org.qemu.guest_agent.0.device.wants/ lrwxrwxrwx. 1 root root 41 May 29 2024 default.target -> /usr/lib/systemd/system/multi-user.target drwxr-xr-x. 2 root root 51 May 29 2024 sockets.target.wants/ drwxr-xr-x. 2 root root 31 May 29 2024 remote-fs.target.wants/ drwxr-xr-x. 2 root root 59 May 29 2024 sshd-keygen@.service.d/ drwxr-xr-x. 2 root root 119 May 29 2024 cloud-init.target.wants/ drwxr-xr-x. 2 root root 181 May 29 2024 sysinit.target.wants/ lrwxrwxrwx. 1 root root 41 Aug 2 12:35 dbus-org.fedoraproject.FirewallD1.service -> /usr/lib/systemd/system/firewalld.service drwxr-xr-x. 13 root root 4096 Aug 2 12:40 ./ drwxr-xr-x. 2 root root 4096 Aug 2 12:41 multi-user.target.wants/ TASK [Cleanup] ***************************************************************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:172 Saturday 02 August 2025 12:42:17 -0400 (0:00:00.446) 0:00:35.027 ******* TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:3 Saturday 02 August 2025 12:42:17 -0400 (0:00:00.145) 0:00:35.172 ******* included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Ensure ansible_facts used by role] **** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:3 Saturday 02 August 2025 12:42:17 -0400 (0:00:00.107) 0:00:35.280 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if system is ostree] ************ task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:11 Saturday 02 August 2025 12:42:17 -0400 (0:00:00.076) 0:00:35.357 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag to indicate system is ostree] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:16 Saturday 02 August 2025 12:42:17 -0400 (0:00:00.060) 0:00:35.417 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check if transactional-update exists in /sbin] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:23 Saturday 02 August 2025 12:42:17 -0400 (0:00:00.060) 0:00:35.479 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set flag if transactional-update exists] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:28 Saturday 02 August 2025 12:42:17 -0400 (0:00:00.060) 0:00:35.539 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set platform/version specific variables] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/set_vars.yml:32 Saturday 02 August 2025 12:42:17 -0400 (0:00:00.062) 0:00:35.601 ******* ok: [managed-node2] => (item=RedHat.yml) => 
{ "ansible_facts": { "__podman_packages": [ "podman", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/vars/RedHat.yml" ], "ansible_loop_var": "item", "changed": false, "item": "RedHat.yml" } skipping: [managed-node2] => (item=CentOS.yml) => { "ansible_loop_var": "item", "changed": false, "item": "CentOS.yml", "skip_reason": "Conditional result was False" } ok: [managed-node2] => (item=CentOS_8.yml) => { "ansible_facts": { "__podman_packages": [ "crun", "podman", "podman-plugins", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/vars/CentOS_8.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_8.yml" } ok: [managed-node2] => (item=CentOS_8.yml) => { "ansible_facts": { "__podman_packages": [ "crun", "podman", "podman-plugins", "shadow-utils-subid" ] }, "ansible_included_var_files": [ "/tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/vars/CentOS_8.yml" ], "ansible_loop_var": "item", "changed": false, "item": "CentOS_8.yml" } TASK [fedora.linux_system_roles.podman : Gather the package facts] ************* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6 Saturday 02 August 2025 12:42:18 -0400 (0:00:00.172) 0:00:35.773 ******* ok: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Enable copr if requested] ************* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:10 Saturday 02 August 2025 12:42:19 -0400 (0:00:01.616) 0:00:37.390 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Ensure required packages are installed] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:14 Saturday 02 August 2025 12:42:19 -0400 (0:00:00.060) 0:00:37.450 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Notify user that reboot is needed to apply changes] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:28 Saturday 02 August 2025 12:42:19 -0400 (0:00:00.075) 0:00:37.525 ******* skipping: [managed-node2] => {} TASK [fedora.linux_system_roles.podman : Reboot transactional update systems] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:33 Saturday 02 August 2025 12:42:19 -0400 (0:00:00.060) 0:00:37.586 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if reboot is needed and not set] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:38 Saturday 02 August 2025 12:42:20 -0400 (0:00:00.065) 0:00:37.651 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get podman version] ******************* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:46 Saturday 02 August 2025 
12:42:20 -0400 (0:00:00.063) 0:00:37.715 ******* ok: [managed-node2] => { "changed": false, "cmd": [ "podman", "--version" ], "delta": "0:00:00.027478", "end": "2025-08-02 12:42:20.461954", "rc": 0, "start": "2025-08-02 12:42:20.434476" } STDOUT: podman version 4.9.4-dev
TASK [fedora.linux_system_roles.podman : Set podman version] ******************* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:52 Saturday 02 August 2025 12:42:20 -0400 (0:00:00.466) 0:00:38.182 ******* ok: [managed-node2] => { "ansible_facts": { "podman_version": "4.9.4-dev" }, "changed": false }
TASK [fedora.linux_system_roles.podman : Podman package version must be 4.2 or later] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:56 Saturday 02 August 2025 12:42:20 -0400 (0:00:00.042) 0:00:38.225 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Podman package version must be 4.4 or later for quadlet, secrets] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:63 Saturday 02 August 2025 12:42:20 -0400 (0:00:00.047) 0:00:38.273 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
META: end_host conditional evaluated to false, continuing execution for managed-node2
TASK [fedora.linux_system_roles.podman : Podman package version must be 5.0 or later for Pod quadlets] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:80 Saturday 02 August 2025 12:42:20 -0400 (0:00:00.085) 0:00:38.359 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
META: end_host conditional evaluated to false, continuing execution for managed-node2
TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:109 Saturday 02 August 2025 12:42:20 -0400 (0:00:00.106) 0:00:38.466 ******* included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2
TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 02 August 2025 12:42:20 -0400 (0:00:00.103) 0:00:38.569 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 02 August 2025 12:42:20 -0400 (0:00:00.043) 0:00:38.612 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 02 August 2025 12:42:21 -0400 (0:00:00.045) 0:00:38.658 ******* ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false }
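Up to this point the role has probed the podman version, gated features on it (all three fail-gates are skipped because 4.9.4-dev satisfies the 4.2 and 4.4 requirements, and the 5.0 gate presumably does not apply since the test uses no Pod quadlets), and resolved the user and group (root here, hence __podman_group "0"). The version-gating pattern reduces to something like this (sketch; names illustrative):

- name: Get podman version
  command: podman --version
  register: __podman_version_out
  changed_when: false

- name: Set podman version
  set_fact:
    podman_version: "{{ __podman_version_out.stdout.split()[-1] }}"

- name: Podman package version must be 4.4 or later for quadlet, secrets
  fail:
    msg: quadlet and secrets support require podman 4.4 or later
  when: podman_version is version('4.4', '<')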
TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 02 August 2025 12:42:21 -0400 (0:00:00.048) 0:00:38.706 ******* ok: [managed-node2] => { "changed": false, "stat": { "atime": 1754152551.4765396, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 32, "charset": "binary", "checksum": "bb5b46ffbafcaa8c4021f3c8b3cb8594f48ef34b", "ctime": 1754152522.3404324, "dev": 51713, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 6986657, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "application/x-sharedlib", "mode": "0755", "mtime": 1700557386.0, "nlink": 1, "path": "/usr/bin/getsubids", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 12640, "uid": 0, "version": "135203907", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }
TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 02 August 2025 12:42:21 -0400 (0:00:00.412) 0:00:39.119 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 02 August 2025 12:42:21 -0400 (0:00:00.039) 0:00:39.158 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 02 August 2025 12:42:21 -0400 (0:00:00.049) 0:00:39.207 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 02 August 2025 12:42:21 -0400 (0:00:00.064) 0:00:39.272 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 02 August 2025 12:42:21 -0400 (0:00:00.061) 0:00:39.333 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 02 August 2025 12:42:21 -0400 (0:00:00.059) 0:00:39.392 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
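All of the subuid/subgid tasks are skipped in this run, consistent with the podman user being root; ID-range validation only matters for rootless users. For a rootless user the role would consult getsubids roughly as below, falling back to reading /etc/subuid and /etc/subgid (the "Get subuid file"/"Get subgid file" tasks) when the binary is missing. A sketch, with __podman_user standing in for the role's user variable:

- name: Check with getsubids for user subuids
  command: getsubids {{ __podman_user | quote }}
  register: __subuid_out
  changed_when: false

- name: Check with getsubids for user subgids
  command: getsubids -g {{ __podman_user | quote }}
  register: __subgid_out
  changed_when: false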
TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 02 August 2025 12:42:21 -0400 (0:00:00.043) 0:00:39.435 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 02 August 2025 12:42:21 -0400 (0:00:00.045) 0:00:39.481 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Set config file paths] **************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:115 Saturday 02 August 2025 12:42:21 -0400 (0:00:00.046) 0:00:39.528 ******* ok: [managed-node2] => { "ansible_facts": { "__podman_container_conf_file": "/etc/containers/containers.conf.d/50-systemroles.conf", "__podman_parent_mode": "0755", "__podman_parent_path": "/etc/containers", "__podman_policy_json_file": "/etc/containers/policy.json", "__podman_registries_conf_file": "/etc/containers/registries.conf.d/50-systemroles.conf", "__podman_storage_conf_file": "/etc/containers/storage.conf" }, "changed": false }
TASK [fedora.linux_system_roles.podman : Handle container.conf.d] ************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:126 Saturday 02 August 2025 12:42:21 -0400 (0:00:00.086) 0:00:39.615 ******* included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml for managed-node2
TASK [fedora.linux_system_roles.podman : Ensure containers.d exists] *********** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:5 Saturday 02 August 2025 12:42:22 -0400 (0:00:00.068) 0:00:39.684 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Update container config file] ********* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_container_conf_d.yml:13 Saturday 02 August 2025 12:42:22 -0400 (0:00:00.041) 0:00:39.725 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Handle registries.conf.d] ************* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:129 Saturday 02 August 2025 12:42:22 -0400 (0:00:00.102) 0:00:39.828 ******* included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml for managed-node2
TASK [fedora.linux_system_roles.podman : Ensure registries.d exists] *********** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:5 Saturday 02 August 2025 12:42:22 -0400 (0:00:00.136) 0:00:39.964 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Update registries config file] ******** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_registries_conf_d.yml:13 Saturday 02 August 2025 12:42:22 -0400 (0:00:00.062) 0:00:40.027 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
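Both drop-in updates are skipped because the test supplies no containers.conf or registries.conf settings. When such settings are given, the role renders them into the 50-systemroles.conf paths listed under "Set config file paths" above; as an assumption-laden illustration only, input of the following shape would end up in the registries drop-in (variable name per the role's README; values invented for the example):

podman_registries_conf:
  unqualified-search-registries:
    - registry.access.redhat.com
    - quay.io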
TASK [fedora.linux_system_roles.podman : Handle storage.conf] ****************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:132 Saturday 02 August 2025 12:42:22 -0400 (0:00:00.054) 0:00:40.081 ******* included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml for managed-node2
TASK [fedora.linux_system_roles.podman : Ensure storage.conf parent dir exists] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:7 Saturday 02 August 2025 12:42:22 -0400 (0:00:00.094) 0:00:40.176 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Update storage config file] *********** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_storage_conf.yml:15 Saturday 02 August 2025 12:42:22 -0400 (0:00:00.042) 0:00:40.218 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Handle policy.json] ******************* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:135 Saturday 02 August 2025 12:42:22 -0400 (0:00:00.040) 0:00:40.258 ******* included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml for managed-node2
TASK [fedora.linux_system_roles.podman : Ensure policy.json parent dir exists] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:8 Saturday 02 August 2025 12:42:22 -0400 (0:00:00.079) 0:00:40.338 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Stat the policy.json file] ************ task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:16 Saturday 02 August 2025 12:42:22 -0400 (0:00:00.036) 0:00:40.374 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Get the existing policy.json] ********* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:21 Saturday 02 August 2025 12:42:22 -0400 (0:00:00.041) 0:00:40.416 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.podman : Write new policy.json file] *********** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_policy_json.yml:27 Saturday 02 August 2025 12:42:22 -0400 (0:00:00.060) 0:00:40.476 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [Manage firewall for specified ports] ************************************* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:141 Saturday 02 August 2025 12:42:22 -0400 (0:00:00.052) 0:00:40.528 *******
TASK [fedora.linux_system_roles.firewall : Setup firewalld] ******************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:2 Saturday 02 August 2025 12:42:23 -0400 (0:00:00.154)
0:00:40.683 ******* included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml for managed-node2
TASK [fedora.linux_system_roles.firewall : Ensure ansible_facts used by role] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:2 Saturday 02 August 2025 12:42:23 -0400 (0:00:00.103) 0:00:40.786 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Check if system is ostree] ********** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:10 Saturday 02 August 2025 12:42:23 -0400 (0:00:00.044) 0:00:40.830 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Set flag to indicate system is ostree] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:15 Saturday 02 August 2025 12:42:23 -0400 (0:00:00.037) 0:00:40.868 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Check if transactional-update exists in /sbin] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:22 Saturday 02 August 2025 12:42:23 -0400 (0:00:00.049) 0:00:40.917 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Set flag if transactional-update exists] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:27 Saturday 02 August 2025 12:42:23 -0400 (0:00:00.038) 0:00:40.956 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Run systemctl] ********************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:34 Saturday 02 August 2025 12:42:23 -0400 (0:00:00.036) 0:00:40.993 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Require installed systemd] ********** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:41 Saturday 02 August 2025 12:42:23 -0400 (0:00:00.036) 0:00:41.029 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Set flag to indicate that systemd runtime operations are available] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:46 Saturday 02 August 2025 12:42:23 -0400 (0:00:00.040) 0:00:41.070 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Install firewalld] ****************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:51 Saturday 02 August 2025 12:42:23 -0400 (0:00:00.053) 0:00:41.124 ******* ok: [managed-node2] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: firewalld
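This firewalld setup and the port configuration that follows are driven from the podman role, which hands the demo's ports to the firewall role. In playbook terms the test amounts to something like this (sketch; podman_firewall is the podman role's documented firewall hand-off, ports taken from the logged items):

- name: Configure podman and open the demo ports
  include_role:
    name: fedora.linux_system_roles.podman
  vars:
    podman_firewall:
      - port: 8000/tcp
        state: enabled
      - port: 9000/tcp
        state: enabled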
TASK [fedora.linux_system_roles.firewall : Notify user that reboot is needed to apply changes] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:63 Saturday 02 August 2025 12:42:26 -0400 (0:00:02.898) 0:00:44.022 ******* skipping: [managed-node2] => {}
TASK [fedora.linux_system_roles.firewall : Reboot transactional update systems] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:68 Saturday 02 August 2025 12:42:26 -0400 (0:00:00.046) 0:00:44.069 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Fail if reboot is needed and not set] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:73 Saturday 02 August 2025 12:42:26 -0400 (0:00:00.043) 0:00:44.112 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Check which conflicting services are enabled] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:5 Saturday 02 August 2025 12:42:26 -0400 (0:00:00.035) 0:00:44.148 ******* skipping: [managed-node2] => (item=nftables) => { "ansible_loop_var": "item", "changed": false, "item": "nftables", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=iptables) => { "ansible_loop_var": "item", "changed": false, "item": "iptables", "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item=ufw) => { "ansible_loop_var": "item", "changed": false, "item": "ufw", "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Attempt to stop and disable conflicting services] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:14 Saturday 02 August 2025 12:42:26 -0400 (0:00:00.048) 0:00:44.196 ******* skipping: [managed-node2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': 'nftables', 'ansible_loop_var': 'item'}) => { "ansible_loop_var": "item", "changed": false, "item": { "ansible_loop_var": "item", "changed": false, "item": "nftables", "skip_reason": "Conditional result was False", "skipped": true }, "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': 'iptables', 'ansible_loop_var': 'item'}) => { "ansible_loop_var": "item", "changed": false, "item": { "ansible_loop_var": "item", "changed": false, "item": "iptables", "skip_reason": "Conditional result was False", "skipped": true }, "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'item': 'ufw', 'ansible_loop_var': 'item'}) => { "ansible_loop_var": "item", "changed": false, "item": { "ansible_loop_var": "item", "changed": false, "item": "ufw", "skip_reason": "Conditional result was False", "skipped": true }, "skip_reason": "Conditional result was False" }
TASK [fedora.linux_system_roles.firewall : Unmask firewalld service] *********** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:24 Saturday 02 August 2025 12:42:26 -0400 (0:00:00.051)
0:00:44.248 ******* ok: [managed-node2] => { "changed": false, "name": "firewalld", "status": { "ActiveEnterTimestamp": "Sat 2025-08-02 12:35:59 EDT", "ActiveEnterTimestampMonotonic": "323943793", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "basic.target sysinit.target dbus.service polkit.service dbus.socket system.slice", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Sat 2025-08-02 12:35:58 EDT", "AssertTimestampMonotonic": "323267235", "Before": "shutdown.target network-pre.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "no", "CPUAffinity": "", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2025-08-02 12:35:58 EDT", "ConditionTimestampMonotonic": "323267233", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target nftables.service ipset.service iptables.service ip6tables.service ebtables.service", "ControlGroup": "/system.slice/firewalld.service", "ControlPID": "0", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "man:firewalld(1)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "12989", "ExecMainStartTimestamp": "Sat 2025-08-02 12:35:58 EDT", "ExecMainStartTimestampMonotonic": "323278597", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": 
"firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2025-08-02 12:35:58 EDT", "InactiveExitTimestampMonotonic": "323278631", "InvocationID": "5c543fdf07b74af08a33ac740bfb5bdf", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "14003", "LimitNPROCSoft": "14003", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14003", "LimitSIGPENDINGSoft": "14003", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "12989", "MemoryAccounting": "yes", "MemoryCurrent": "41484288", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "firewalld.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "sysinit.target system.slice dbus.socket", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2025-08-02 12:35:59 EDT", "StateChangeTimestampMonotonic": "323943793", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", 
"TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22405", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogTimestamp": "Sat 2025-08-02 12:35:59 EDT", "WatchdogTimestampMonotonic": "323943790", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Enable and start firewalld service] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:30 Saturday 02 August 2025 12:42:27 -0400 (0:00:00.511) 0:00:44.759 ******* ok: [managed-node2] => { "changed": false, "enabled": true, "name": "firewalld", "state": "started", "status": { "ActiveEnterTimestamp": "Sat 2025-08-02 12:35:59 EDT", "ActiveEnterTimestampMonotonic": "323943793", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "basic.target sysinit.target dbus.service polkit.service dbus.socket system.slice", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Sat 2025-08-02 12:35:58 EDT", "AssertTimestampMonotonic": "323267235", "Before": "shutdown.target network-pre.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "no", "CPUAffinity": "", "CPUAffinityFromNUMA": "no", "CPUQuotaPerSecUSec": "infinity", "CPUQuotaPeriodUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanFreeze": "yes", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Sat 2025-08-02 12:35:58 EDT", "ConditionTimestampMonotonic": "323267233", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target nftables.service ipset.service iptables.service ip6tables.service ebtables.service", "ControlGroup": "/system.slice/firewalld.service", "ControlPID": "0", "DefaultDependencies": "yes", "DefaultMemoryLow": "0", "DefaultMemoryMin": "0", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "man:firewalld(1)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "12989", "ExecMainStartTimestamp": "Sat 2025-08-02 12:35:58 EDT", "ExecMainStartTimestampMonotonic": "323278597", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; 
start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/firewalld.service", "FreezerState": "running", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Sat 2025-08-02 12:35:58 EDT", "InactiveExitTimestampMonotonic": "323278631", "InvocationID": "5c543fdf07b74af08a33ac740bfb5bdf", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "0", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "14003", "LimitNPROCSoft": "14003", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "14003", "LimitSIGPENDINGSoft": "14003", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "12989", "MemoryAccounting": "yes", "MemoryCurrent": "41484288", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemoryMin": "0", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "firewalld.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "sysinit.target system.slice dbus.socket", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", 
"Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Sat 2025-08-02 12:35:59 EDT", "StateChangeTimestampMonotonic": "323943793", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "22405", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogTimestamp": "Sat 2025-08-02 12:35:59 EDT", "WatchdogTimestampMonotonic": "323943790", "WatchdogUSec": "0" } } TASK [fedora.linux_system_roles.firewall : Check if previous replaced is defined] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:36 Saturday 02 August 2025 12:42:27 -0400 (0:00:00.520) 0:00:45.279 ******* ok: [managed-node2] => { "ansible_facts": { "__firewall_previous_replaced": false, "__firewall_python_cmd": "/usr/libexec/platform-python", "__firewall_report_changed": true }, "changed": false } TASK [fedora.linux_system_roles.firewall : Get config files, checksums before and remove] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:45 Saturday 02 August 2025 12:42:27 -0400 (0:00:00.047) 0:00:45.327 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Tell firewall module it is able to report changed] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:58 Saturday 02 August 2025 12:42:27 -0400 (0:00:00.037) 0:00:45.364 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Configure firewall] ***************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:74 Saturday 02 August 2025 12:42:27 -0400 (0:00:00.036) 0:00:45.400 ******* ok: [managed-node2] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "__firewall_changed": false, "ansible_loop_var": "item", "changed": false, "item": { "port": "8000/tcp", "state": "enabled" } } ok: [managed-node2] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "__firewall_changed": false, "ansible_loop_var": "item", "changed": false, "item": { "port": "9000/tcp", "state": "enabled" } } TASK [fedora.linux_system_roles.firewall : Gather firewall config information] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:126 Saturday 02 August 2025 12:42:28 -0400 (0:00:01.154) 0:00:46.555 ******* skipping: [managed-node2] => (item={'port': '8000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "item": { 
"port": "8000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" } skipping: [managed-node2] => (item={'port': '9000/tcp', 'state': 'enabled'}) => { "ansible_loop_var": "item", "changed": false, "item": { "port": "9000/tcp", "state": "enabled" }, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] ******* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:137 Saturday 02 August 2025 12:42:28 -0400 (0:00:00.058) 0:00:46.613 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Gather firewall config if no arguments] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:146 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.039) 0:00:46.653 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Update firewalld_config fact] ******* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:152 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.037) 0:00:46.690 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Get config files, checksums after] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:161 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.037) 0:00:46.728 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Calculate what has changed] ********* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:172 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.036) 0:00:46.764 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.firewall : Show diffs] ************************* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:178 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.035) 0:00:46.799 ******* skipping: [managed-node2] => {} TASK [Manage selinux for specified ports] ************************************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:148 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.034) 0:00:46.834 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Keep track of users that need to cancel linger] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:155 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.035) 0:00:46.870 ******* ok: [managed-node2] => { "ansible_facts": { "__podman_cancel_user_linger": [] }, "changed": false } TASK [fedora.linux_system_roles.podman : Handle certs.d files - present] ******* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:159 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.033) 0:00:46.903 ******* skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was 
specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle credential files - present] **** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:168 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.037) 0:00:46.941 ******* skipping: [managed-node2] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.podman : Handle secrets] *********************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:177 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.032) 0:00:46.974 ******* included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Set variables part 1] ***************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:3 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.127) 0:00:47.102 ******* ok: [managed-node2] => { "ansible_facts": { "__podman_user": "root" }, "changed": false } TASK [fedora.linux_system_roles.podman : Check user and group information] ***** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:7 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.068) 0:00:47.171 ******* included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Get user information] ***************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.065) 0:00:47.236 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user does not exist] ********** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:17 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.045) 0:00:47.281 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set group for podman user] ************ task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:24 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.043) 0:00:47.325 ******* ok: [managed-node2] => { "ansible_facts": { "__podman_group": "0" }, "changed": false } TASK [fedora.linux_system_roles.podman : See if getsubids exists] ************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:39 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.050) 0:00:47.375 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subuids] *** task path: 
/tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:50 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.034) 0:00:47.410 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Check with getsubids for user subgids] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:55 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.035) 0:00:47.446 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:60 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.034) 0:00:47.481 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subuid file] ********************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:73 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.036) 0:00:47.517 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Get subgid file] ********************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:78 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.036) 0:00:47.554 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set user subuid and subgid info] ****** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:83 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.035) 0:00:47.589 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subuid file] ****** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:93 Saturday 02 August 2025 12:42:29 -0400 (0:00:00.036) 0:00:47.626 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Fail if user not in subgid file] ****** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:100 Saturday 02 August 2025 12:42:30 -0400 (0:00:00.034) 0:00:47.660 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Set variables part 2] ***************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:15 Saturday 02 August 2025 12:42:30 -0400 (0:00:00.035) 0:00:47.695 ******* ok: [managed-node2] => { "ansible_facts": { "__podman_rootless": false, "__podman_xdg_runtime_dir": "/run/user/0" }, "changed": false } TASK [fedora.linux_system_roles.podman : Manage linger] ************************ task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:21 Saturday 02 August 2025 12:42:30 -0400 
(0:00:00.045) 0:00:47.741 ******* included: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml for managed-node2 TASK [fedora.linux_system_roles.podman : Enable linger if needed] ************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:12 Saturday 02 August 2025 12:42:30 -0400 (0:00:00.065) 0:00:47.806 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user as not yet needing to cancel linger] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:18 Saturday 02 August 2025 12:42:30 -0400 (0:00:00.034) 0:00:47.840 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Mark user for possible linger cancel] *** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/manage_linger.yml:22 Saturday 02 August 2025 12:42:30 -0400 (0:00:00.070) 0:00:47.911 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Stat XDG_RUNTIME_DIR] ***************** task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:26 Saturday 02 August 2025 12:42:30 -0400 (0:00:00.036) 0:00:47.947 ******* skipping: [managed-node2] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.podman : Manage each secret] ******************* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42 Saturday 02 August 2025 12:42:30 -0400 (0:00:00.035) 0:00:47.982 ******* fatal: [managed-node2]: FAILED! => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result" } TASK [Debug] ******************************************************************* task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:199 Saturday 02 August 2025 12:42:30 -0400 (0:00:00.041) 0:00:48.024 ******* ok: [managed-node2] => { "changed": false, "cmd": "exec 1>&2\nset -x\nset -o pipefail\nsystemctl list-units --plain -l --all | grep quadlet || :\nsystemctl list-unit-files --all | grep quadlet || :\nsystemctl list-units --plain --failed -l --all | grep quadlet || :\n", "delta": "0:00:00.391237", "end": "2025-08-02 12:42:31.079276", "rc": 0, "start": "2025-08-02 12:42:30.688039" } STDERR: + set -o pipefail + systemctl list-units --plain -l --all + grep quadlet + : + systemctl list-unit-files --all + grep quadlet + : + systemctl list-units --plain --failed -l --all + grep quadlet + : TASK [Get journald] ************************************************************ task path: /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:209 Saturday 02 August 2025 12:42:31 -0400 (0:00:00.757) 0:00:48.781 ******* fatal: [managed-node2]: FAILED! => { "changed": false, "cmd": [ "journalctl", "-ex" ], "delta": "0:00:00.028118", "end": "2025-08-02 12:42:31.468197", "failed_when_result": true, "rc": 0, "start": "2025-08-02 12:42:31.440079" } STDOUT: -- Logs begin at Sat 2025-08-02 12:30:34 EDT, end at Sat 2025-08-02 12:42:31 EDT. 
-- Aug 02 12:36:15 managed-node2 kernel: SELinux: Converting 460 SID table entries... Aug 02 12:36:15 managed-node2 kernel: SELinux: policy capability network_peer_controls=1 Aug 02 12:36:15 managed-node2 kernel: SELinux: policy capability open_perms=1 Aug 02 12:36:15 managed-node2 kernel: SELinux: policy capability extended_socket_class=1 Aug 02 12:36:15 managed-node2 kernel: SELinux: policy capability always_check_network=0 Aug 02 12:36:15 managed-node2 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 02 12:36:15 managed-node2 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 02 12:36:15 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Aug 02 12:36:15 managed-node2 platform-python[14798]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Aug 02 12:36:20 managed-node2 platform-python[14921]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:36:22 managed-node2 platform-python[15046]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:36:23 managed-node2 platform-python[15169]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:36:23 managed-node2 platform-python[15292]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Aug 02 12:36:23 managed-node2 platform-python[15391]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/nopull.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152583.1815786-10272-87697288283027/source _original_basename=tmp987jyyff follow=False checksum=d5dc917e3cae36de03aa971a17ac473f86fdf934 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Aug 02 12:36:24 managed-node2 platform-python[15516]: ansible-containers.podman.podman_play Invoked with state=created kube_file=/etc/containers/ansible-kubernetes.d/nopull.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:36:24 managed-node2 kernel: evm: overlay not supported Aug 02 12:36:24 managed-node2 systemd[1]: Created slice machine.slice. -- Subject: Unit machine.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine.slice has finished starting up. -- -- The start-up result is done. 
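Annotation: the journal entries above show the test driving the containers.podman.podman_play module to deploy a Kubernetes YAML spec. A minimal hand-written equivalent is sketched below; the task name is illustrative, and the parameters are copied from the logged invocation:

    - name: Deploy the kube spec with podman kube play
      containers.podman.podman_play:
        kube_file: /etc/containers/ansible-kubernetes.d/nopull.yml  # path from the log entry
        state: created          # matches state=created in the logged module args
        executable: podman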
Aug 02 12:36:24 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice. -- Subject: Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:36:25 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Aug 02 12:36:29 managed-node2 platform-python[15841]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:36:31 managed-node2 platform-python[15970]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:36:34 managed-node2 platform-python[16095]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:36:37 managed-node2 platform-python[16218]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Aug 02 12:36:38 managed-node2 platform-python[16345]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Aug 02 12:36:38 managed-node2 platform-python[16472]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Aug 02 12:36:41 managed-node2 platform-python[16595]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:36:44 managed-node2 platform-python[16718]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False 
disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:36:47 managed-node2 platform-python[16841]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:36:50 managed-node2 platform-python[16964]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Aug 02 12:36:51 managed-node2 platform-python[17112]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Aug 02 12:36:52 managed-node2 platform-python[17235]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Aug 02 12:36:57 managed-node2 platform-python[17358]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:36:59 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
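Annotation: the recurring firewall_lib invocations in this journal open the 15001-15003/tcp range permanently and at runtime. In playbook form this maps to a sketch like the following, assuming the firewall role's documented `firewall` variable (the task name is illustrative):

    - name: Open the test port range via the firewall role
      include_role:
        name: fedora.linux_system_roles.firewall
      vars:
        firewall:
          - port: 15001-15003/tcp  # range from the firewall_lib log entries
            state: enabled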
Aug 02 12:37:00 managed-node2 platform-python[17620]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:37:00 managed-node2 platform-python[17743]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:37:01 managed-node2 platform-python[17866]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Aug 02 12:37:01 managed-node2 platform-python[17965]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/bogus.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152620.9003708-11840-265132279831358/source _original_basename=tmp6af94dg8 follow=False checksum=f8266a972ed3be7e204d2a67883fe3a22b8dbf18 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Aug 02 12:37:02 managed-node2 platform-python[18090]: ansible-containers.podman.podman_play Invoked with state=created kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:37:02 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice. -- Subject: Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:37:02 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
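Annotation: similarly, the local_seport calls above label the same port range http_port_t. A sketch using the selinux role's `selinux_ports` variable (an assumption inferred from the logged ports/proto/setype/state parameters) would be:

    - name: Label the test ports for SELinux
      include_role:
        name: fedora.linux_system_roles.selinux
      vars:
        selinux_ports:
          - ports: 15001-15003  # values from the local_seport log entry
            proto: tcp
            setype: http_port_t
            state: present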
Aug 02 12:37:05 managed-node2 platform-python[18377]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:37:06 managed-node2 platform-python[18506]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:37:10 managed-node2 platform-python[18631]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:37:13 managed-node2 platform-python[18754]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Aug 02 12:37:14 managed-node2 platform-python[18881]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Aug 02 12:37:14 managed-node2 platform-python[19008]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Aug 02 12:37:16 managed-node2 platform-python[19131]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:37:19 managed-node2 platform-python[19254]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:37:22 managed-node2 platform-python[19377]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False 
validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:37:25 managed-node2 platform-python[19500]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Aug 02 12:37:27 managed-node2 platform-python[19648]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Aug 02 12:37:28 managed-node2 platform-python[19771]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Aug 02 12:37:32 managed-node2 platform-python[19894]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:37:33 managed-node2 platform-python[20019]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:37:34 managed-node2 platform-python[20143]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Aug 02 12:37:35 managed-node2 platform-python[20270]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:37:35 managed-node2 platform-python[20395]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/nopull.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:37:35 managed-node2 platform-python[20395]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/nopull.yml Aug 02 12:37:35 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice. -- Subject: Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished shutting down. Aug 02 12:37:35 managed-node2 systemd[1]: machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice completed and consumed the indicated resources. Aug 02 12:37:35 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
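Annotation: the teardown sequence above (podman_play invoked with state=absent, the pod slice removed, then the spec file deleted) corresponds to a cleanup sketch like the following, with parameters copied from the log:

    - name: Tear down the kube deployment
      containers.podman.podman_play:
        kube_file: /etc/containers/ansible-kubernetes.d/nopull.yml
        state: absent           # matches state=absent in the logged module args

    - name: Remove the kube spec file
      file:
        path: /etc/containers/ansible-kubernetes.d/nopull.yml
        state: absent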
Aug 02 12:37:36 managed-node2 platform-python[20533]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:37:36 managed-node2 platform-python[20656]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:37:39 managed-node2 platform-python[20911]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:37:41 managed-node2 platform-python[21040]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:37:45 managed-node2 platform-python[21165]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:37:48 managed-node2 platform-python[21288]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Aug 02 12:37:48 managed-node2 platform-python[21415]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Aug 02 12:37:49 managed-node2 platform-python[21542]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Aug 02 12:37:51 managed-node2 platform-python[21665]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:37:54 managed-node2 platform-python[21788]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False 
autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:37:57 managed-node2 platform-python[21911]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:38:00 managed-node2 platform-python[22034]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Aug 02 12:38:02 managed-node2 platform-python[22182]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Aug 02 12:38:03 managed-node2 platform-python[22305]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Aug 02 12:38:08 managed-node2 platform-python[22428]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:38:09 managed-node2 platform-python[22553]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:38:10 managed-node2 platform-python[22677]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Aug 02 12:38:10 managed-node2 platform-python[22804]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:38:11 managed-node2 platform-python[22929]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:38:11 managed-node2 platform-python[22929]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/bogus.yml Aug 02 12:38:11 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice. 
-- Subject: Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished shutting down. Aug 02 12:38:11 managed-node2 systemd[1]: machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice: Consumed 0 CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice completed and consumed the indicated resources. Aug 02 12:38:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Aug 02 12:38:12 managed-node2 platform-python[23068]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:38:12 managed-node2 platform-python[23191]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:38:16 managed-node2 platform-python[23446]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:38:17 managed-node2 platform-python[23575]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:38:21 managed-node2 platform-python[23700]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:38:24 managed-node2 platform-python[23823]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Aug 02 12:38:24 managed-node2 platform-python[23950]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Aug 02 12:38:25 managed-node2 platform-python[24077]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] 
source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Aug 02 12:38:27 managed-node2 platform-python[24200]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:38:30 managed-node2 platform-python[24323]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:38:33 managed-node2 platform-python[24446]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:38:36 managed-node2 platform-python[24569]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Aug 02 12:38:38 managed-node2 platform-python[24717]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Aug 02 12:38:38 managed-node2 platform-python[24840]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Aug 02 12:38:43 managed-node2 platform-python[24963]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Aug 02 12:38:43 managed-node2 platform-python[25087]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:38:44 managed-node2 platform-python[25212]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:38:44 managed-node2 platform-python[25336]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:38:46 managed-node2 platform-python[25460]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False 
stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:38:47 managed-node2 platform-python[25584]: ansible-command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Aug 02 12:38:47 managed-node2 systemd[1]: Created slice User Slice of UID 3001. -- Subject: Unit user-3001.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-3001.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:38:47 managed-node2 systemd[1]: Starting User runtime directory /run/user/3001... -- Subject: Unit user-runtime-dir@3001.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@3001.service has begun starting up. Aug 02 12:38:47 managed-node2 systemd[1]: Started User runtime directory /run/user/3001. -- Subject: Unit user-runtime-dir@3001.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@3001.service has finished starting up. -- -- The start-up result is done. Aug 02 12:38:47 managed-node2 systemd[1]: Starting User Manager for UID 3001... -- Subject: Unit user@3001.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@3001.service has begun starting up. Aug 02 12:38:47 managed-node2 systemd[25590]: pam_unix(systemd-user:session): session opened for user podman_basic_user by (uid=0) Aug 02 12:38:47 managed-node2 systemd[25590]: Starting D-Bus User Message Bus Socket. -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. Aug 02 12:38:47 managed-node2 systemd[25590]: Started Mark boot as successful after the user session has run 2 minutes. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:47 managed-node2 systemd[25590]: Reached target Paths. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:47 managed-node2 systemd[25590]: Reached target Timers. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:47 managed-node2 systemd[25590]: Listening on D-Bus User Message Bus Socket. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:47 managed-node2 systemd[25590]: Reached target Sockets. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:47 managed-node2 systemd[25590]: Reached target Basic System. 
-- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:47 managed-node2 systemd[25590]: Reached target Default. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:47 managed-node2 systemd[25590]: Startup finished in 32ms. -- Subject: User manager start-up is now complete -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The user manager instance for user 3001 has been started. All services queued -- for starting have been started. Note that other services might still be starting -- up or be started at any later time. -- -- Startup of the manager took 32456 microseconds. Aug 02 12:38:47 managed-node2 systemd[1]: Started User Manager for UID 3001. -- Subject: Unit user@3001.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@3001.service has finished starting up. -- -- The start-up result is done. Aug 02 12:38:48 managed-node2 platform-python[25725]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:38:48 managed-node2 platform-python[25848]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:38:48 managed-node2 sudo[25971]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsbiyvlbwevndfyvleplnipyfcreleuz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152728.615097-16354-140653512589780/AnsiballZ_podman_image.py' Aug 02 12:38:48 managed-node2 sudo[25971]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:38:49 managed-node2 systemd[25590]: Started D-Bus User Message Bus. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:49 managed-node2 systemd[25590]: Created slice user.slice. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:49 managed-node2 systemd[25590]: Started podman-25984.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. 
-- -- The start-up result is done. Aug 02 12:38:49 managed-node2 systemd[25590]: Started podman-pause-9fcbd008.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:49 managed-node2 systemd[25590]: Started podman-26000.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:49 managed-node2 systemd[25590]: Started podman-26016.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:50 managed-node2 sudo[25971]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:38:50 managed-node2 platform-python[26145]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:38:50 managed-node2 platform-python[26268]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:38:51 managed-node2 platform-python[26391]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Aug 02 12:38:51 managed-node2 platform-python[26490]: ansible-copy Invoked with dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml owner=podman_basic_user group=3001 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152731.0942633-16483-51427114771259/source _original_basename=tmpz7phazza follow=False checksum=41ba442683d49d3571d4ddce7f5dc14c85104270 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Aug 02 12:38:51 managed-node2 sudo[26615]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adtlljryarlxiuspbmjgrszvqdzysmgm ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152731.799995-16513-128710640317424/AnsiballZ_podman_play.py' Aug 02 12:38:51 managed-node2 sudo[26615]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None 
password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:38:52 managed-node2 systemd[25590]: Started podman-26626.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:52 managed-node2 kernel: tun: Universal TUN/TAP device driver, 1.6 Aug 02 12:38:52 managed-node2 systemd[25590]: Started rootless-netns-6da9f76b.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:52 managed-node2 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethff7bc329: link is not ready Aug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered blocking state Aug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered disabled state Aug 02 12:38:52 managed-node2 kernel: device vethff7bc329 entered promiscuous mode Aug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Aug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethff7bc329: link becomes ready Aug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered blocking state Aug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered forwarding state Aug 02 12:38:52 managed-node2 dnsmasq[26814]: listening on cni-podman1(#3): 10.89.0.1 Aug 02 12:38:52 managed-node2 dnsmasq[26816]: started, version 2.79 cachesize 150 Aug 02 12:38:52 managed-node2 dnsmasq[26816]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify Aug 02 12:38:52 managed-node2 dnsmasq[26816]: using local addresses only for domain dns.podman Aug 02 12:38:52 managed-node2 dnsmasq[26816]: reading /etc/resolv.conf Aug 02 12:38:52 managed-node2 dnsmasq[26816]: using local addresses only for domain dns.podman Aug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.0.2.3#53 Aug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.29.169.13#53 Aug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.29.170.12#53 Aug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.2.32.1#53 Aug 02 12:38:52 managed-node2 dnsmasq[26816]: read /run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Aug 02 12:38:52 managed-node2 conmon[26830]: conmon af16b69d72cc4526d63a : failed to write to /proc/self/oom_score_adj: Permission denied Aug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/14/attach} Aug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : terminal_ctrl_fd: 14 Aug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : winsz read side: 17, winsz write side: 18 Aug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : container PID: 26841 Aug 02 12:38:52 managed-node2 conmon[26851]: conmon 
98c476488369c461640e : failed to write to /proc/self/oom_score_adj: Permission denied Aug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach} Aug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : terminal_ctrl_fd: 13 Aug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : winsz read side: 16, winsz write side: 17 Aug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : container PID: 26862 Aug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Aug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod: 191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f Container: 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 Aug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-08-02T12:38:52-04:00" level=info msg="/bin/podman filtering at log level debug" time="2025-08-02T12:38:52-04:00" level=debug msg="Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)" time="2025-08-02T12:38:52-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-08-02T12:38:52-04:00" level=info msg="Using sqlite as database backend" time="2025-08-02T12:38:52-04:00" level=debug msg="Using graph driver overlay" time="2025-08-02T12:38:52-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Using run root /run/user/3001/containers" time="2025-08-02T12:38:52-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" time="2025-08-02T12:38:52-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" time="2025-08-02T12:38:52-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" time="2025-08-02T12:38:52-04:00" level=debug msg="Using transient store: false" time="2025-08-02T12:38:52-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:38:52-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:38:52-04:00" level=debug msg="Cached value indicated that metacopy is not being used" time="2025-08-02T12:38:52-04:00" level=debug msg="Cached value indicated that native-diff is usable" time="2025-08-02T12:38:52-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" time="2025-08-02T12:38:52-04:00" level=debug msg="Initializing event backend file" time="2025-08-02T12:38:52-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2025-08-02T12:38:52-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-08-02T12:38:52-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI 
runtime krun: invalid argument" time="2025-08-02T12:38:52-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-08-02T12:38:52-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-08-02T12:38:52-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2025-08-02T12:38:52-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-08-02T12:38:52-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-08-02T12:38:52-04:00" level=info msg="Setting parallel job count to 7" time="2025-08-02T12:38:52-04:00" level=debug msg="Successfully loaded 1 networks" time="2025-08-02T12:38:52-04:00" level=debug msg="found free device name cni-podman1" time="2025-08-02T12:38:52-04:00" level=debug msg="found free ipv4 network subnet 10.89.0.0/24" time="2025-08-02T12:38:52-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:38:52.156183843 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-08-02T12:38:52-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-08-02T12:38:52-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-08-02T12:38:52-04:00" level=debug msg="reference \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\" does not resolve to an image ID" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-08-02T12:38:52-04:00" level=debug msg="reference \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\" does not resolve to an image ID" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." 
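
The journal entries at 12:38:47 above record lingering being enabled for podman_basic_user, which is what lets the per-user systemd instance (user@3001.service) outlive the login session and host the rootless pod. A minimal sketch of the kind of task that produces that "ansible-command Invoked" entry, using the exact command and creates guard recorded in the log:

- name: Enable lingering for the rootless podman user
  # "creates:" reproduces the idempotency guard logged above: the command
  # is skipped once /var/lib/systemd/linger/podman_basic_user exists.
  ansible.builtin.command:
    cmd: loginctl enable-linger podman_basic_user
    creates: /var/lib/systemd/linger/podman_basic_user
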
time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="FROM \"scratch\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are not supported" time="2025-08-02T12:38:52-04:00" level=debug msg="Check for idmapped mounts support " time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="overlay: test mount indicated that volatile is being used" time="2025-08-02T12:38:52-04:00" level=debug msg="overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/empty,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/work,userxattr,volatile,context=\"system_u:object_r:container_file_t:s0:c153,c335\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Container ID: 0aa3a7343fabfcbf854b2db926e8ef83982bdca7985f430aea1f98b430ae4469" time="2025-08-02T12:38:52-04:00" level=debug msg="Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:copy Args:[/usr/libexec/podman/catatonit /catatonit] Flags:[] Attrs:map[] Message:COPY /usr/libexec/podman/catatonit /catatonit Heredocs:[] Original:COPY /usr/libexec/podman/catatonit /catatonit}" time="2025-08-02T12:38:52-04:00" level=debug msg="COPY []string(nil), imagebuilder.Copy{FromFS:false, From:\"\", Src:[]string{\"/usr/libexec/podman/catatonit\"}, Dest:\"/catatonit\", Download:false, Chown:\"\", Chmod:\"\", Checksum:\"\", Files:[]imagebuilder.File(nil)}" time="2025-08-02T12:38:52-04:00" level=debug msg="added content file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd" time="2025-08-02T12:38:52-04:00" level=debug msg="Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:entrypoint Args:[/catatonit -P] Flags:[] Attrs:map[json:true] Message:ENTRYPOINT /catatonit -P Heredocs:[] Original:ENTRYPOINT [\"/catatonit\", \"-P\"]}" time="2025-08-02T12:38:52-04:00" level=debug msg="COMMIT localhost/podman-pause:4.9.4-dev-1708535009" time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\"" time="2025-08-02T12:38:52-04:00" level=debug msg="COMMIT \"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\"" time="2025-08-02T12:38:52-04:00" level=debug msg="committing image with reference \"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\" is allowed by policy" time="2025-08-02T12:38:52-04:00" level=debug msg="layer list: [\"5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1\"]" time="2025-08-02T12:38:52-04:00" level=debug msg="using \"/var/tmp/buildah3803674644\" to hold temporary data" time="2025-08-02T12:38:52-04:00" level=debug msg="Tar with 
options on /home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/diff" time="2025-08-02T12:38:52-04:00" level=debug msg="layer \"5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1\" size is 767488 bytes, uncompressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690, possibly-compressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690" time="2025-08-02T12:38:52-04:00" level=debug msg="OCIv1 config = {\"created\":\"2025-08-02T16:38:52.296533662Z\",\"architecture\":\"amd64\",\"os\":\"linux\",\"config\":{\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Entrypoint\":[\"/catatonit\",\"-P\"],\"Labels\":{\"io.buildah.version\":\"1.33.5\"}},\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"]},\"history\":[{\"created\":\"2025-08-02T16:38:52.295942274Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd in /catatonit \",\"empty_layer\":true},{\"created\":\"2025-08-02T16:38:52.299650983Z\",\"created_by\":\"/bin/sh -c #(nop) ENTRYPOINT [\\\"/catatonit\\\", \\\"-P\\\"]\"}]}" time="2025-08-02T12:38:52-04:00" level=debug msg="OCIv1 manifest = {\"schemaVersion\":2,\"mediaType\":\"application/vnd.oci.image.manifest.v1+json\",\"config\":{\"mediaType\":\"application/vnd.oci.image.config.v1+json\",\"digest\":\"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\",\"size\":668},\"layers\":[{\"mediaType\":\"application/vnd.oci.image.layer.v1.tar\",\"digest\":\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\",\"size\":767488}],\"annotations\":{\"org.opencontainers.image.base.digest\":\"\",\"org.opencontainers.image.base.name\":\"\"}}" time="2025-08-02T12:38:52-04:00" level=debug msg="Docker v2s2 config = {\"created\":\"2025-08-02T16:38:52.296533662Z\",\"container\":\"0aa3a7343fabfcbf854b2db926e8ef83982bdca7985f430aea1f98b430ae4469\",\"container_config\":{\"Hostname\":\"\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[],\"Image\":\"\",\"Volumes\":{},\"WorkingDir\":\"\",\"Entrypoint\":[\"/catatonit\",\"-P\"],\"OnBuild\":[],\"Labels\":{\"io.buildah.version\":\"1.33.5\"}},\"config\":{\"Hostname\":\"\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[],\"Image\":\"\",\"Volumes\":{},\"WorkingDir\":\"\",\"Entrypoint\":[\"/catatonit\",\"-P\"],\"OnBuild\":[],\"Labels\":{\"io.buildah.version\":\"1.33.5\"}},\"architecture\":\"amd64\",\"os\":\"linux\",\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"]},\"history\":[{\"created\":\"2025-08-02T16:38:52.295942274Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd in /catatonit \",\"empty_layer\":true},{\"created\":\"2025-08-02T16:38:52.299650983Z\",\"created_by\":\"/bin/sh -c #(nop) ENTRYPOINT [\\\"/catatonit\\\", \\\"-P\\\"]\"}]}" time="2025-08-02T12:38:52-04:00" level=debug msg="Docker v2s2 manifest = 
{\"schemaVersion\":2,\"mediaType\":\"application/vnd.docker.distribution.manifest.v2+json\",\"config\":{\"mediaType\":\"application/vnd.docker.container.image.v1+json\",\"size\":1342,\"digest\":\"sha256:69b1a52f65cb5e3fa99e89b61152bda48cb5524edcedfdf2eac76a30c6778813\"},\"layers\":[{\"mediaType\":\"application/vnd.docker.image.rootfs.diff.tar\",\"size\":767488,\"digest\":\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"}]}" time="2025-08-02T12:38:52-04:00" level=debug msg="Using SQLite blob info cache at /home/podman_basic_user/.local/share/containers/cache/blob-info-cache-v1.sqlite" time="2025-08-02T12:38:52-04:00" level=debug msg="IsRunningImageAllowed for image containers-storage:" time="2025-08-02T12:38:52-04:00" level=debug msg=" Using transport \"containers-storage\" policy section " time="2025-08-02T12:38:52-04:00" level=debug msg=" Requirement 0: allowed" time="2025-08-02T12:38:52-04:00" level=debug msg="Overall: allowed" time="2025-08-02T12:38:52-04:00" level=debug msg="start reading config" time="2025-08-02T12:38:52-04:00" level=debug msg="finished reading config" time="2025-08-02T12:38:52-04:00" level=debug msg="Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.docker.distribution.manifest.v1+json]" time="2025-08-02T12:38:52-04:00" level=debug msg="... will first try using the original manifest unmodified" time="2025-08-02T12:38:52-04:00" level=debug msg="Checking if we can reuse blob sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690: general substitution = true, compression for MIME type \"application/vnd.oci.image.layer.v1.tar\" = true" time="2025-08-02T12:38:52-04:00" level=debug msg="reading layer \"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"" time="2025-08-02T12:38:52-04:00" level=debug msg="No compression detected" time="2025-08-02T12:38:52-04:00" level=debug msg="Using original blob without modification" time="2025-08-02T12:38:52-04:00" level=debug msg="Applying tar in /home/podman_basic_user/.local/share/containers/storage/overlay/d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690/diff" time="2025-08-02T12:38:52-04:00" level=debug msg="finished reading layer \"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"" time="2025-08-02T12:38:52-04:00" level=debug msg="No compression detected" time="2025-08-02T12:38:52-04:00" level=debug msg="Compression change for blob sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778 (\"application/vnd.oci.image.config.v1+json\") not supported" time="2025-08-02T12:38:52-04:00" level=debug msg="Using original blob without modification" time="2025-08-02T12:38:52-04:00" level=debug msg="setting image creation date to 2025-08-02 16:38:52.296533662 +0000 UTC" time="2025-08-02T12:38:52-04:00" level=debug msg="created new image ID \"60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\" with metadata \"{}\"" time="2025-08-02T12:38:52-04:00" level=debug msg="added name \"localhost/podman-pause:4.9.4-dev-1708535009\" to image \"60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into 
\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\"" time="2025-08-02T12:38:52-04:00" level=debug msg="printing final image id \"60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Pod using bridge network mode" time="2025-08-02T12:38:52-04:00" level=debug msg="Got pod cgroup as /libpod_parent/191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f" time="2025-08-02T12:38:52-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778)" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Inspecting image 60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Inspecting image 60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778" time="2025-08-02T12:38:52-04:00" level=debug msg="Inspecting image 60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778" time="2025-08-02T12:38:52-04:00" level=debug msg="using systemd mode: false" time="2025-08-02T12:38:52-04:00" level=debug msg="setting container name 191a369333e4-infra" time="2025-08-02T12:38:52-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Allocated lock 1 for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae" time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Created container \"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Container \"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\" has work directory 
\"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Container \"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\" has run directory \"/run/user/3001/containers/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Pulling image quay.io/libpod/testimage:20210610 (policy: missing)" time="2025-08-02T12:38:52-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." 
time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:38:52-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:38:52-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:38:52-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:38:52-04:00" level=debug msg="using systemd mode: false" time="2025-08-02T12:38:52-04:00" level=debug msg="adding container to pod httpd1" time="2025-08-02T12:38:52-04:00" level=debug msg="setting container name httpd1-httpd1" 
time="2025-08-02T12:38:52-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-08-02T12:38:52-04:00" level=info msg="Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding mount /proc" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding mount /dev" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding mount /dev/pts" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding mount /dev/mqueue" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding mount /sys" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding mount /sys/fs/cgroup" time="2025-08-02T12:38:52-04:00" level=debug msg="Allocated lock 2 for container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939" time="2025-08-02T12:38:52-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Created container \"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Container \"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\" has work directory \"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Container \"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\" has run directory \"/run/user/3001/containers/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Strongconnecting node af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae" time="2025-08-02T12:38:52-04:00" level=debug msg="Pushed af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae onto stack" time="2025-08-02T12:38:52-04:00" level=debug msg="Finishing node af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae. Popped af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae off stack" time="2025-08-02T12:38:52-04:00" level=debug msg="Strongconnecting node 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939" time="2025-08-02T12:38:52-04:00" level=debug msg="Pushed 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 onto stack" time="2025-08-02T12:38:52-04:00" level=debug msg="Finishing node 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939. 
Popped 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 off stack" time="2025-08-02T12:38:52-04:00" level=debug msg="overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/4MRAZCR7JRY45YIIWXX5WJJ6A6,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/work,userxattr,context=\"system_u:object_r:container_file_t:s0:c389,c456\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Made network namespace at /run/user/3001/netns/netns-6723b79b-4d64-cc19-6a91-87394c058c53 for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae" time="2025-08-02T12:38:52-04:00" level=debug msg="Mounted container \"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\" at \"/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/merged\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Created root filesystem for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae at /home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/merged" time="2025-08-02T12:38:52-04:00" level=debug msg="creating rootless network namespace with name \"rootless-netns-d22c9f230d0691b8f418\"" time="2025-08-02T12:38:52-04:00" level=debug msg="slirp4netns command: /bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -r 3 --netns-type=path /run/user/3001/netns/rootless-netns-d22c9f230d0691b8f418 tap0" time="2025-08-02T12:38:52-04:00" level=debug msg="The path of /etc/resolv.conf in the mount ns is \"/etc/resolv.conf\"" time="2025-08-02T12:38:52-04:00" level=debug msg="cni result for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:1e:08:d6:95:5e:f1 Sandbox:} {Name:vethff7bc329 Mac:26:19:1e:a6:0a:11 Sandbox:} {Name:eth0 Mac:1e:8a:1a:f5:d1:2a Sandbox:/run/user/3001/netns/netns-6723b79b-4d64-cc19-6a91-87394c058c53}] [{Version:4 Interface:0xc000b96228 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}" time="2025-08-02T12:38:52-04:00" level=debug msg="rootlessport: time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Starting parent driver\"\ntime=\"2025-08-02T12:38:52-04:00\" level=info msg=\"opaque=map[builtin.readypipepath:/run/user/3001/libpod/tmp/rootlessport3029357974/.bp-ready.pipe builtin.socketpath:/run/user/3001/libpod/tmp/rootlessport3029357974/.bp.sock]\"\ntime=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Starting child driver in child netns (\\\"/proc/self/exe\\\" [rootlessport-child])\"\n" time="2025-08-02T12:38:52-04:00" level=debug msg="rootlessport: time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Waiting for initComplete\"\n" time="2025-08-02T12:38:52-04:00" level=debug msg="rootlessport: time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"initComplete is closed; parent and child established the communication channel\"\n" time="2025-08-02T12:38:52-04:00" level=debug msg="rootlessport: time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Exposing ports [{ 80 15001 1 tcp}]\"\n" time="2025-08-02T12:38:52-04:00" level=debug 
msg="rootlessport: time=\"2025-08-02T12:38:52-04:00\" level=info msg=Ready\n" time="2025-08-02T12:38:52-04:00" level=debug msg="rootlessport is ready" time="2025-08-02T12:38:52-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-08-02T12:38:52-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-08-02T12:38:52-04:00" level=debug msg="Workdir \"/\" resolved to host path \"/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/merged\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Created OCI spec for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/config.json" time="2025-08-02T12:38:52-04:00" level=debug msg="Got pod cgroup as " time="2025-08-02T12:38:52-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2025-08-02T12:38:52-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae -u af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata -p /run/user/3001/containers/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/pidfile -n 191a369333e4-infra --exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/3001/containers/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae]" time="2025-08-02T12:38:52-04:00" level=info msg="Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/libpod_parent: permission denied" [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied time="2025-08-02T12:38:52-04:00" level=debug msg="Received: 26841" time="2025-08-02T12:38:52-04:00" level=info msg="Got Conmon PID as 26831" 
time="2025-08-02T12:38:52-04:00" level=debug msg="Created container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae in OCI runtime" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding nameserver(s) from network status of '[\"10.89.0.1\"]'" time="2025-08-02T12:38:52-04:00" level=debug msg="Adding search domain(s) from network status of '[\"dns.podman\"]'" time="2025-08-02T12:38:52-04:00" level=debug msg="Starting container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae with command [/catatonit -P]" time="2025-08-02T12:38:52-04:00" level=debug msg="Started container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae" time="2025-08-02T12:38:52-04:00" level=debug msg="overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/S5QNMEV2IMLZOTXAJ3H4ZQCILN,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/work,userxattr,context=\"system_u:object_r:container_file_t:s0:c389,c456\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Mounted container \"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\" at \"/home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/merged\"" time="2025-08-02T12:38:52-04:00" level=debug msg="Created root filesystem for container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 at /home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/merged" time="2025-08-02T12:38:52-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-08-02T12:38:52-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-08-02T12:38:52-04:00" level=debug msg="Workdir \"/var/www\" resolved to a volume or mount" time="2025-08-02T12:38:52-04:00" level=debug msg="Created OCI spec for container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/config.json" time="2025-08-02T12:38:52-04:00" level=debug msg="Got pod cgroup as " time="2025-08-02T12:38:52-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2025-08-02T12:38:52-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 -u 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata -p /run/user/3001/containers/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/pidfile -n httpd1-httpd1 --exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/3001/containers/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/conmon.pid 
--exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939]" time="2025-08-02T12:38:52-04:00" level=info msg="Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/conmon: permission denied" [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied time="2025-08-02T12:38:52-04:00" level=debug msg="Received: 26862" time="2025-08-02T12:38:52-04:00" level=info msg="Got Conmon PID as 26852" time="2025-08-02T12:38:52-04:00" level=debug msg="Created container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 in OCI runtime" time="2025-08-02T12:38:52-04:00" level=debug msg="Starting container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 with command [/bin/busybox-extras httpd -f -p 80]" time="2025-08-02T12:38:52-04:00" level=debug msg="Started container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939" time="2025-08-02T12:38:52-04:00" level=debug msg="Called kube.PersistentPostRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)" time="2025-08-02T12:38:52-04:00" level=debug msg="Shutting down engines" Aug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Aug 02 12:38:52 managed-node2 sudo[26615]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:38:53 managed-node2 sudo[26993]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbaqkfutgdfbadtjjacfxjqxvpyoigvo ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152733.1876235-16558-123988354219606/AnsiballZ_systemd.py' Aug 02 12:38:53 managed-node2 sudo[26993]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:38:53 managed-node2 platform-python[26996]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None Aug 02 12:38:53 managed-node2 systemd[25590]: Reloading. 
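
The debug output pins down most of what httpd1.yml must contain: a pod named httpd1 with one container httpd1 running /bin/busybox-extras httpd -f -p 80 from quay.io/libpod/testimage:20210610, a workdir /var/www that "resolved to a volume or mount", and port 80 published to host port 15001 ("Exposing ports [{ 80 15001 1 tcp}]"). A plausible reconstruction follows; the volume name and the hostPath linkage to the /tmp/lsr_ga01o8zo_podman/httpd1 directory created at 12:38:48 are assumptions, not recorded values:

apiVersion: v1
kind: Pod
metadata:
  name: httpd1
spec:
  containers:
    - name: httpd1
      image: quay.io/libpod/testimage:20210610
      command: ["/bin/busybox-extras", "httpd", "-f", "-p", "80"]
      workingDir: /var/www
      ports:
        - containerPort: 80
          hostPort: 15001
      volumeMounts:
        - name: www              # illustrative name, not from the log
          mountPath: /var/www
  volumes:
    - name: www
      hostPath:
        path: /tmp/lsr_ga01o8zo_podman/httpd1   # assumed mapping
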
Aug 02 12:38:53 managed-node2 sudo[26993]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:38:53 managed-node2 sudo[27130]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmqxburdibhftsxxfjnfharsaboqrlrj ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152733.8051448-16591-22459124935909/AnsiballZ_systemd.py' Aug 02 12:38:53 managed-node2 sudo[27130]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:38:54 managed-node2 platform-python[27133]: ansible-systemd Invoked with name= scope=user enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None Aug 02 12:38:54 managed-node2 systemd[25590]: Reloading. Aug 02 12:38:54 managed-node2 sudo[27130]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:38:54 managed-node2 dnsmasq[26816]: listening on cni-podman1(#3): fe80::1c08:d6ff:fe95:5ef1%cni-podman1 Aug 02 12:38:54 managed-node2 sudo[27269]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bebktvlckbeotrlmhvsnejtmeicquqjz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152734.429291-16621-12283105958346/AnsiballZ_systemd.py' Aug 02 12:38:54 managed-node2 sudo[27269]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:38:54 managed-node2 platform-python[27272]: ansible-systemd Invoked with name= scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None Aug 02 12:38:54 managed-node2 systemd[25590]: Created slice podman\x2dkube.slice. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:38:54 managed-node2 systemd[25590]: Starting A template for running K8s workloads via podman-kube-play... -- Subject: Unit UNIT has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun starting up. 
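
The three ansible-systemd invocations above (daemon_reload=True, then enabled=True, then state=started, all with scope=user) hand the pod over from the ad-hoc play-kube run to a user-scope systemd unit; the journal describes it as "A template for running K8s workloads via podman-kube-play", i.e. an instance of podman's podman-kube@ template, though the unit name itself is blank (name=) in the logged parameters. A condensed sketch of those calls, with __kube_unit standing in for the elided name:

- name: Reload the user systemd instance
  ansible.builtin.systemd:
    daemon_reload: true
    scope: user
  become: true
  become_user: podman_basic_user
  environment:
    XDG_RUNTIME_DIR: /run/user/3001

- name: Enable and start the kube-play unit (two separate calls in the log)
  ansible.builtin.systemd:
    name: "{{ __kube_unit }}"   # hypothetical variable; the logged value is empty
    scope: user
    enabled: true
    state: started
  become: true
  become_user: podman_basic_user
  environment:
    XDG_RUNTIME_DIR: /run/user/3001

The status-137 exits recorded just below are consistent with this handover: the containers started directly by podman play kube are killed so the systemd unit can replay the kube file under its own supervision.
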
Aug 02 12:38:54 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : container 26841 exited with status 137 Aug 02 12:38:54 managed-node2 conmon[26852]: conmon 98c476488369c461640e : container 26862 exited with status 137 Aug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:54-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae)" Aug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:54-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Aug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:54-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Aug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:54-04:00" level=info msg="Using sqlite as database backend" Aug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:54-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939)" Aug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:54-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Aug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:54-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Aug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:54-04:00" level=info msg="Using sqlite as database backend" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using graph driver overlay" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using run root /run/user/3001/containers" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using transient store: false" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: 
time="2025-08-02T12:38:55-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that metacopy is not being used" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that native-diff is usable" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Initializing event backend file" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=info msg="Setting parallel job count to 7" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using graph driver overlay" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using run root /run/user/3001/containers" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" Aug 02 12:38:55 
managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using transient store: false" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that metacopy is not being used" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Cached value indicated that native-diff is usable" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Initializing event backend file" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=info msg="Setting parallel job count to 7" Aug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered disabled state Aug 02 12:38:55 managed-node2 kernel: device vethff7bc329 left 
promiscuous mode Aug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered disabled state Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939)" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time="2025-08-02T12:38:55-04:00" level=debug msg="Shutting down engines" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae)" Aug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time="2025-08-02T12:38:55-04:00" level=debug msg="Shutting down engines" Aug 02 12:38:55 managed-node2 podman[27278]: Pods stopped: Aug 02 12:38:55 managed-node2 podman[27278]: 191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f Aug 02 12:38:55 managed-node2 podman[27278]: Pods removed: Aug 02 12:38:55 managed-node2 podman[27278]: 191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f Aug 02 12:38:55 managed-node2 podman[27278]: Secrets removed: Aug 02 12:38:55 managed-node2 podman[27278]: Volumes removed: Aug 02 12:38:55 managed-node2 systemd[25590]: Started rootless-netns-dd6b3697.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
Aug 02 12:38:55 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethfa4f074b: link is not ready Aug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered blocking state Aug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered disabled state Aug 02 12:38:55 managed-node2 kernel: device vethfa4f074b entered promiscuous mode Aug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered blocking state Aug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered forwarding state Aug 02 12:38:55 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethfa4f074b: link becomes ready Aug 02 12:38:55 managed-node2 dnsmasq[27525]: listening on cni-podman1(#3): 10.89.0.1 Aug 02 12:38:55 managed-node2 dnsmasq[27527]: started, version 2.79 cachesize 150 Aug 02 12:38:55 managed-node2 dnsmasq[27527]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify Aug 02 12:38:55 managed-node2 dnsmasq[27527]: using local addresses only for domain dns.podman Aug 02 12:38:55 managed-node2 dnsmasq[27527]: reading /etc/resolv.conf Aug 02 12:38:55 managed-node2 dnsmasq[27527]: using local addresses only for domain dns.podman Aug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.0.2.3#53 Aug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.29.169.13#53 Aug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.29.170.12#53 Aug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.2.32.1#53 Aug 02 12:38:55 managed-node2 dnsmasq[27527]: read /run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Aug 02 12:38:55 managed-node2 podman[27278]: Pod: Aug 02 12:38:55 managed-node2 podman[27278]: 5e85626264c7b01bb8a75935ff611e0d589096a98e89e05fd7aa9037e892318a Aug 02 12:38:55 managed-node2 podman[27278]: Container: Aug 02 12:38:55 managed-node2 podman[27278]: bc86eb03c7fb7110b2363dd55ed2866f782f16e8d8374c8a82784079a47558f1 Aug 02 12:38:55 managed-node2 systemd[25590]: Started A template for running K8s workloads via podman-kube-play. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
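[Editor's note] The dnsmasq entries above come from the CNI dnsname plugin: each podman network that enables it gets its own dnsmasq instance, listening on the bridge gateway (10.89.0.1 here), answering for the dns.podman domain from the network's addnhosts file (under /run/user/3001/containers/cni/dnsname/... for this rootless run), and forwarding everything else to the host's resolvers. A quick way to confirm a network's DNS settings is to inspect it; a minimal sketch as an Ansible task (the task name is mine; the network name is taken from the log):

    - name: Inspect the default kube-play network (illustrative)
      ansible.builtin.command:
        cmd: podman network inspect podman-default-kube-network
      register: kube_net
      changed_when: false

For the rootless network shown here this would need to run as podman_basic_user; the rootful network of the same name appears further below.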
Aug 02 12:38:55 managed-node2 sudo[27269]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:38:56 managed-node2 platform-python[27703]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Aug 02 12:38:56 managed-node2 dnsmasq[27527]: listening on cni-podman1(#3): fe80::a0c6:53ff:fed6:1184%cni-podman1 Aug 02 12:38:57 managed-node2 platform-python[27827]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:38:58 managed-node2 platform-python[27952]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:38:59 managed-node2 platform-python[28076]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:00 managed-node2 platform-python[28199]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:00 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Aug 02 12:39:00 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Aug 02 12:39:01 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
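[Editor's note] The two ansible-file invocations above are ordinary file tasks that pre-create the host directories the test later exposes to the pod. Expressed as playbook YAML, roughly (paths, owner, and group copied from the invocations; mode is left unset, as in the log):

    - name: Create host directories for the httpd2 test pod
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        owner: root
        group: root
      loop:
        - /tmp/lsr_ga01o8zo_podman/httpd2-create
        - /tmp/lsr_ga01o8zo_podman/httpd2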
Aug 02 12:39:01 managed-node2 platform-python[28489]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:39:02 managed-node2 platform-python[28612]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:02 managed-node2 platform-python[28735]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Aug 02 12:39:03 managed-node2 platform-python[28834]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd2.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152742.5335138-17006-203119541001881/source _original_basename=tmpvkt7buq9 follow=False checksum=2a8a08ffe6bf0159dd7563e043ed3c303a77cff4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Aug 02 12:39:03 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:39:03 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice. -- Subject: Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.7856] manager: (cni-podman1): new Bridge device (/org/freedesktop/NetworkManager/Devices/3) Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.7870] manager: (veth502e5636): new Veth device (/org/freedesktop/NetworkManager/Devices/4) Aug 02 12:39:03 managed-node2 systemd-udevd[29006]: Using default interface naming scheme 'rhel-8.0'. Aug 02 12:39:03 managed-node2 systemd-udevd[29006]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
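[Editor's note] The podman_play invocation logged above reduces to a single task; the long parameter dump is just the module's defaults being logged alongside it. A minimal equivalent, with values copied from the invocation:

    - name: Play the httpd2 kube file
      containers.podman.podman_play:
        kube_file: /etc/containers/ansible-kubernetes.d/httpd2.yml
        state: started
        executable: podman
        debug: true
        log_level: debug

The machine-libpod_pod_...slice and NetworkManager/udev lines that follow are the direct fallout of this task: podman creates the pod cgroup slice and the CNI bridge/veth pair for it.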
Aug 02 12:39:03 managed-node2 systemd-udevd[29006]: Could not generate persistent MAC address for cni-podman1: No such file or directory Aug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth502e5636: link is not ready Aug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered blocking state Aug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered disabled state Aug 02 12:39:03 managed-node2 kernel: device veth502e5636 entered promiscuous mode Aug 02 12:39:03 managed-node2 systemd-udevd[29007]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Aug 02 12:39:03 managed-node2 systemd-udevd[29007]: Could not generate persistent MAC address for veth502e5636: No such file or directory Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8196] device (cni-podman1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8201] device (cni-podman1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8209] device (cni-podman1): Activation: starting connection 'cni-podman1' (0ddcaf44-4d9a-41cb-acd9-42060ce7dc76) Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8210] device (cni-podman1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8212] device (cni-podman1): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8215] device (cni-podman1): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8217] device (cni-podman1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 dbus-daemon[595]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.5' (uid=0 pid=665 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0") Aug 02 12:39:03 managed-node2 systemd[1]: Starting Network Manager Script Dispatcher Service... -- Subject: Unit NetworkManager-dispatcher.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager-dispatcher.service has begun starting up. 
Aug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Aug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth502e5636: link becomes ready Aug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered blocking state Aug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered forwarding state Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8506] device (veth502e5636): carrier: link connected Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8508] device (cni-podman1): carrier: link connected Aug 02 12:39:03 managed-node2 dbus-daemon[595]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8678] device (cni-podman1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8680] device (cni-podman1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Aug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8684] device (cni-podman1): Activation: successful, device activated. Aug 02 12:39:03 managed-node2 systemd[1]: Started Network Manager Script Dispatcher Service. -- Subject: Unit NetworkManager-dispatcher.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit NetworkManager-dispatcher.service has finished starting up. -- -- The start-up result is done. Aug 02 12:39:03 managed-node2 dnsmasq[29128]: listening on cni-podman1(#3): 10.89.0.1 Aug 02 12:39:03 managed-node2 dnsmasq[29132]: started, version 2.79 cachesize 150 Aug 02 12:39:03 managed-node2 dnsmasq[29132]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify Aug 02 12:39:03 managed-node2 dnsmasq[29132]: using local addresses only for domain dns.podman Aug 02 12:39:03 managed-node2 dnsmasq[29132]: reading /etc/resolv.conf Aug 02 12:39:03 managed-node2 dnsmasq[29132]: using local addresses only for domain dns.podman Aug 02 12:39:03 managed-node2 dnsmasq[29132]: using nameserver 10.29.169.13#53 Aug 02 12:39:03 managed-node2 dnsmasq[29132]: using nameserver 10.29.170.12#53 Aug 02 12:39:03 managed-node2 dnsmasq[29132]: using nameserver 10.2.32.1#53 Aug 02 12:39:03 managed-node2 dnsmasq[29132]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Aug 02 12:39:04 managed-node2 systemd[1]: Started libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope. -- Subject: Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished starting up. -- -- The start-up result is done. 
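[Editor's note] NetworkManager "assumes" the CNI bridge and veth as external connections (sys-iface-state: 'external'), which is harmless but accounts for much of the churn above. If that noise is unwanted, the usual remedy is to mark the CNI interfaces unmanaged; a hedged sketch, not something this test does, and the drop-in file name is arbitrary:

    - name: Leave podman CNI interfaces unmanaged by NetworkManager (optional)
      ansible.builtin.copy:
        dest: /etc/NetworkManager/conf.d/99-podman-unmanaged.conf
        content: |
          [keyfile]
          unmanaged-devices=interface-name:cni-podman*;interface-name:veth*
        # NetworkManager must be reloaded afterwards for this to take effect (not shown)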
Aug 02 12:39:04 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach} Aug 02 12:39:04 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : terminal_ctrl_fd: 13 Aug 02 12:39:04 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : winsz read side: 17, winsz write side: 18 Aug 02 12:39:04 managed-node2 systemd[1]: Started libcontainer container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89. -- Subject: Unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:04 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : container PID: 29144 Aug 02 12:39:04 managed-node2 systemd[1]: Started libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope. -- Subject: Unit libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach} Aug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : terminal_ctrl_fd: 12 Aug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : winsz read side: 16, winsz write side: 17 Aug 02 12:39:04 managed-node2 systemd[1]: Started libcontainer container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb. -- Subject: Unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished starting up. -- -- The start-up result is done. 
Aug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : container PID: 29166 Aug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml Aug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod: 90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496 Container: 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb Aug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-08-02T12:39:03-04:00" level=info msg="/usr/bin/podman filtering at log level debug" time="2025-08-02T12:39:03-04:00" level=debug msg="Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2025-08-02T12:39:03-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-08-02T12:39:03-04:00" level=info msg="Using sqlite as database backend" time="2025-08-02T12:39:03-04:00" level=debug msg="Using graph driver overlay" time="2025-08-02T12:39:03-04:00" level=debug msg="Using graph root /var/lib/containers/storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Using run root /run/containers/storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" time="2025-08-02T12:39:03-04:00" level=debug msg="Using tmp dir /run/libpod" time="2025-08-02T12:39:03-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" time="2025-08-02T12:39:03-04:00" level=debug msg="Using transient store: false" time="2025-08-02T12:39:03-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:39:03-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:39:03-04:00" level=debug msg="Cached value indicated that metacopy is being used" time="2025-08-02T12:39:03-04:00" level=debug msg="Cached value indicated that native-diff is not being used" time="2025-08-02T12:39:03-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" time="2025-08-02T12:39:03-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" time="2025-08-02T12:39:03-04:00" level=debug msg="Initializing event backend file" time="2025-08-02T12:39:03-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-08-02T12:39:03-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-08-02T12:39:03-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2025-08-02T12:39:03-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-08-02T12:39:03-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" 
time="2025-08-02T12:39:03-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-08-02T12:39:03-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2025-08-02T12:39:03-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-08-02T12:39:03-04:00" level=info msg="Setting parallel job count to 7" time="2025-08-02T12:39:03-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:36:24.472660556 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-08-02T12:39:03-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-08-02T12:39:03-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:39:03-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb)" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Pod using bridge network mode" time="2025-08-02T12:39:03-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice for parent machine.slice and name libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496" time="2025-08-02T12:39:03-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice" time="2025-08-02T12:39:03-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice" time="2025-08-02T12:39:03-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:39:03-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." 
time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb)" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Inspecting image e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Inspecting image e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb" time="2025-08-02T12:39:03-04:00" level=debug msg="Inspecting image e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb" time="2025-08-02T12:39:03-04:00" level=debug msg="using systemd mode: false" time="2025-08-02T12:39:03-04:00" level=debug msg="setting container name 90922c8ca930-infra" time="2025-08-02T12:39:03-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Allocated lock 1 for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89" time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are not supported" time="2025-08-02T12:39:03-04:00" level=debug msg="Check for idmapped mounts support " time="2025-08-02T12:39:03-04:00" level=debug msg="Created container \"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Container \"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\" has work directory \"/var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Container \"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\" has run directory \"/run/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:39:03-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" 
..." time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Pulling image quay.io/libpod/testimage:20210610 (policy: missing)" time="2025-08-02T12:39:03-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:39:03-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:39:03-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." 
time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Looking up image \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:39:03-04:00" level=debug msg="Trying \"quay.io/libpod/testimage:20210610\" ..." time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage" time="2025-08-02T12:39:03-04:00" level=debug msg="Found image \"quay.io/libpod/testimage:20210610\" as \"quay.io/libpod/testimage:20210610\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:39:03-04:00" level=debug msg="Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f" time="2025-08-02T12:39:03-04:00" level=debug msg="using systemd mode: false" time="2025-08-02T12:39:03-04:00" level=debug msg="adding container to pod httpd2" time="2025-08-02T12:39:03-04:00" level=debug msg="setting container name httpd2-httpd2" 
time="2025-08-02T12:39:03-04:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\"" time="2025-08-02T12:39:03-04:00" level=info msg="Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host" time="2025-08-02T12:39:03-04:00" level=debug msg="Adding mount /proc" time="2025-08-02T12:39:03-04:00" level=debug msg="Adding mount /dev" time="2025-08-02T12:39:03-04:00" level=debug msg="Adding mount /dev/pts" time="2025-08-02T12:39:03-04:00" level=debug msg="Adding mount /dev/mqueue" time="2025-08-02T12:39:03-04:00" level=debug msg="Adding mount /sys" time="2025-08-02T12:39:03-04:00" level=debug msg="Adding mount /sys/fs/cgroup" time="2025-08-02T12:39:03-04:00" level=debug msg="Allocated lock 2 for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb" time="2025-08-02T12:39:03-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="exporting opaque data as blob \"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Created container \"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Container \"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\" has work directory \"/var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Container \"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\" has run directory \"/run/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Strongconnecting node ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89" time="2025-08-02T12:39:03-04:00" level=debug msg="Pushed ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 onto stack" time="2025-08-02T12:39:03-04:00" level=debug msg="Finishing node ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89. Popped ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 off stack" time="2025-08-02T12:39:03-04:00" level=debug msg="Strongconnecting node 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb" time="2025-08-02T12:39:03-04:00" level=debug msg="Pushed 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb onto stack" time="2025-08-02T12:39:03-04:00" level=debug msg="Finishing node 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb. 
Popped 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb off stack" time="2025-08-02T12:39:03-04:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/ZR5XOSU7O7VXY2BDL65A7UWKU6,upperdir=/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/diff,workdir=/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c784,c888\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Mounted container \"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\" at \"/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/merged\"" time="2025-08-02T12:39:03-04:00" level=debug msg="Created root filesystem for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 at /var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/merged" time="2025-08-02T12:39:03-04:00" level=debug msg="Made network namespace at /run/netns/netns-057bdf77-0e93-7270-6a44-66c62177cd73 for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89" time="2025-08-02T12:39:03-04:00" level=debug msg="cni result for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:96:46:b4:0c:81:50 Sandbox:} {Name:veth502e5636 Mac:4a:ea:32:89:32:4a Sandbox:} {Name:eth0 Mac:ae:ce:ef:99:2c:87 Sandbox:/run/netns/netns-057bdf77-0e93-7270-6a44-66c62177cd73}] [{Version:4 Interface:0xc00087bc58 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}" time="2025-08-02T12:39:04-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-08-02T12:39:04-04:00" level=debug msg="Setting Cgroups for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 to machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice:libpod:ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89" time="2025-08-02T12:39:04-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-08-02T12:39:04-04:00" level=debug msg="Workdir \"/\" resolved to host path \"/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/merged\"" time="2025-08-02T12:39:04-04:00" level=debug msg="Created OCI spec for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 at /var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/config.json" time="2025-08-02T12:39:04-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice for parent machine.slice and name libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496" time="2025-08-02T12:39:04-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice" time="2025-08-02T12:39:04-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice" time="2025-08-02T12:39:04-04:00" level=debug msg="/usr/bin/conmon messages will be logged 
to syslog" time="2025-08-02T12:39:04-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 -u ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata -p /run/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/pidfile -n 90922c8ca930-infra --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89]" time="2025-08-02T12:39:04-04:00" level=info msg="Running conmon under slice machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice and unitName libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope" time="2025-08-02T12:39:04-04:00" level=debug msg="Received: 29144" time="2025-08-02T12:39:04-04:00" level=info msg="Got Conmon PID as 29134" time="2025-08-02T12:39:04-04:00" level=debug msg="Created container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 in OCI runtime" time="2025-08-02T12:39:04-04:00" level=debug msg="Adding nameserver(s) from network status of '[\"10.89.0.1\"]'" time="2025-08-02T12:39:04-04:00" level=debug msg="Adding search domain(s) from network status of '[\"dns.podman\"]'" time="2025-08-02T12:39:04-04:00" level=debug msg="Starting container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 with command [/catatonit -P]" time="2025-08-02T12:39:04-04:00" level=debug msg="Started container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89" time="2025-08-02T12:39:04-04:00" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/HKP6QAO57O46FRNHGFBKAKZZRC,upperdir=/var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/diff,workdir=/var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c784,c888\"" 
time="2025-08-02T12:39:04-04:00" level=debug msg="Mounted container \"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\" at \"/var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/merged\"" time="2025-08-02T12:39:04-04:00" level=debug msg="Created root filesystem for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb at /var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/merged" time="2025-08-02T12:39:04-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" time="2025-08-02T12:39:04-04:00" level=debug msg="Setting Cgroups for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb to machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice:libpod:071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb" time="2025-08-02T12:39:04-04:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d" time="2025-08-02T12:39:04-04:00" level=debug msg="Workdir \"/var/www\" resolved to a volume or mount" time="2025-08-02T12:39:04-04:00" level=debug msg="Created OCI spec for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb at /var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/config.json" time="2025-08-02T12:39:04-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice for parent machine.slice and name libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496" time="2025-08-02T12:39:04-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice" time="2025-08-02T12:39:04-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice" time="2025-08-02T12:39:04-04:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog" time="2025-08-02T12:39:04-04:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb -u 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata -p /run/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/pidfile -n httpd2-httpd2 --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg 
--volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb]" time="2025-08-02T12:39:04-04:00" level=info msg="Running conmon under slice machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice and unitName libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope" time="2025-08-02T12:39:04-04:00" level=debug msg="Received: 29166" time="2025-08-02T12:39:04-04:00" level=info msg="Got Conmon PID as 29155" time="2025-08-02T12:39:04-04:00" level=debug msg="Created container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb in OCI runtime" time="2025-08-02T12:39:04-04:00" level=debug msg="Starting container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb with command [/bin/busybox-extras httpd -f -p 80]" time="2025-08-02T12:39:04-04:00" level=debug msg="Started container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb" time="2025-08-02T12:39:04-04:00" level=debug msg="Called kube.PersistentPostRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2025-08-02T12:39:04-04:00" level=debug msg="Shutting down engines" Aug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Aug 02 12:39:04 managed-node2 platform-python[29297]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None Aug 02 12:39:04 managed-node2 systemd[1]: Reloading. Aug 02 12:39:05 managed-node2 dnsmasq[29132]: listening on cni-podman1(#3): fe80::9446:b4ff:fe0c:8150%cni-podman1 Aug 02 12:39:05 managed-node2 platform-python[29458]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None Aug 02 12:39:05 managed-node2 systemd[1]: Reloading. Aug 02 12:39:06 managed-node2 platform-python[29621]: ansible-systemd Invoked with name= scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None Aug 02 12:39:06 managed-node2 systemd[1]: Created slice system-podman\x2dkube.slice. -- Subject: Unit system-podman\x2dkube.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit system-podman\x2dkube.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:39:06 managed-node2 systemd[1]: Starting A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun starting up. 
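[Editor's note] The PODMAN-PLAY-KUBE debug trace above lets you reconstruct the shape of httpd2.yml: a pod named httpd2 whose single container (podman names it httpd2-httpd2, so the kube container name is httpd2) runs quay.io/libpod/testimage:20210610 with the command /bin/busybox-extras httpd -f -p 80, and whose working directory /var/www "resolved to a volume or mount". A plausible sketch; the volume name and hostPath are assumptions (the /tmp/lsr_ga01o8zo_podman/httpd2 directory created earlier fits):

    apiVersion: v1
    kind: Pod
    metadata:
      name: httpd2
    spec:
      containers:
        - name: httpd2
          image: quay.io/libpod/testimage:20210610
          command: ["/bin/busybox-extras", "httpd", "-f", "-p", "80"]
          workingDir: /var/www
          volumeMounts:
            - name: www                # volume name is an assumption
              mountPath: /var/www
      volumes:
        - name: www
          hostPath:
            path: /tmp/lsr_ga01o8zo_podman/httpd2   # assumed from the earlier file task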
Aug 02 12:39:06 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : container 29144 exited with status 137 Aug 02 12:39:06 managed-node2 systemd[1]: libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has successfully entered the 'dead' state. Aug 02 12:39:06 managed-node2 systemd[1]: libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope: Consumed 32ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope completed and consumed the indicated resources. Aug 02 12:39:06 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : container 29166 exited with status 137 Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89)" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Aug 02 12:39:06 managed-node2 systemd[1]: libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has successfully entered the 'dead' state. Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=info msg="Using sqlite as database backend" Aug 02 12:39:06 managed-node2 systemd[1]: libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope: Consumed 33ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope completed and consumed the indicated resources. 
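[Editor's note] The "exited with status 137" messages here are expected rather than a failure: 137 is 128+9, i.e. the processes were SIGKILLed, apparently because starting the podman-kube@... service replaced the pod that podman_play had created directly two seconds earlier. The three ansible-systemd invocations above (daemon_reload, then enabled, then started; the name= value is elided in the journal) manage the templated unit whose instance string is the systemd-escaped path of the kube file — what systemd-escape /etc/containers/ansible-kubernetes.d/httpd2.yml should produce. Collapsed into one sketch task, with the unit name copied verbatim from the journal:

    - name: Enable and start the kube-play unit for httpd2.yml
      ansible.builtin.systemd:
        name: 'podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service'
        enabled: true
        state: started
        daemon_reload: true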
Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb)" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Setting custom database backend: \"sqlite\"" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=info msg="Using sqlite as database backend" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using graph driver overlay" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using graph root /var/lib/containers/storage" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using run root /run/containers/storage" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using tmp dir /run/libpod" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using transient store: false" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that metacopy is being used" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that native-diff is not being used" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Initializing event backend file" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime runsc 
initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=info msg="Setting parallel job count to 7" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using graph driver overlay" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using graph root /var/lib/containers/storage" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using run root /run/containers/storage" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using tmp dir /run/libpod" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using transient store: false" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that overlay is supported" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that metacopy is being used" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Cached value indicated that native-diff is not being used" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: 
time="2025-08-02T12:39:06-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Initializing event backend file" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=info msg="Setting parallel job count to 7" Aug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay-7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767-merged.mount has successfully entered the 'dead' state. 
Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb)"
Aug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time="2025-08-02T12:39:06-04:00" level=debug msg="Shutting down engines"
Aug 02 12:39:06 managed-node2 systemd[1]: libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has successfully entered the 'dead' state.
Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered disabled state
Aug 02 12:39:06 managed-node2 kernel: device veth502e5636 left promiscuous mode
Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered disabled state
Aug 02 12:39:06 managed-node2 systemd[1]: run-netns-netns\x2d057bdf77\x2d0e93\x2d7270\x2d6a44\x2d66c62177cd73.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2d057bdf77\x2d0e93\x2d7270\x2d6a44\x2d66c62177cd73.mount has successfully entered the 'dead' state.
Aug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89-userdata-shm.mount has successfully entered the 'dead' state.
Aug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay-8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1-merged.mount has successfully entered the 'dead' state.
Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89)"
Aug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Aug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time="2025-08-02T12:39:06-04:00" level=debug msg="Shutting down engines"
Aug 02 12:39:06 managed-node2 systemd[1]: libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has successfully entered the 'dead' state.
Aug 02 12:39:06 managed-node2 systemd[1]: Stopped libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope. -- Subject: Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished shutting down.
Aug 02 12:39:06 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice. -- Subject: Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished shutting down.
Aug 02 12:39:06 managed-node2 systemd[1]: machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice: Consumed 209ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice completed and consumed the indicated resources.
Aug 02 12:39:06 managed-node2 podman[29628]: Pods stopped:
Aug 02 12:39:06 managed-node2 podman[29628]: 90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496
Aug 02 12:39:06 managed-node2 podman[29628]: Pods removed:
Aug 02 12:39:06 managed-node2 podman[29628]: 90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496
Aug 02 12:39:06 managed-node2 podman[29628]: Secrets removed:
Aug 02 12:39:06 managed-node2 podman[29628]: Volumes removed:
Aug 02 12:39:06 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice. -- Subject: Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has finished starting up. -- -- The start-up result is done.
Aug 02 12:39:06 managed-node2 systemd[1]: Started libcontainer container 1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27. -- Subject: Unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope has finished starting up. -- -- The start-up result is done.
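The "Pods stopped / Pods removed" summary followed immediately by a fresh pod slice shows the podman-kube@ unit replacing the pod that the earlier ad-hoc play created: the template tears the old pod down and replays the YAML on start. Sketched manually with flags from the podman 4.x in use here (shown for illustration, not quoted from the unit file):

  podman play kube --replace /etc/containers/ansible-kubernetes.d/httpd2.yml   # start: replace any same-name pod
  podman play kube --down /etc/containers/ansible-kubernetes.d/httpd2.yml      # stop: stop and remove the pod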
Aug 02 12:39:06 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethb3f38e19: link is not ready Aug 02 12:39:06 managed-node2 NetworkManager[665]: [1754152746.7477] manager: (vethb3f38e19): new Veth device (/org/freedesktop/NetworkManager/Devices/5) Aug 02 12:39:06 managed-node2 systemd-udevd[29789]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Aug 02 12:39:06 managed-node2 systemd-udevd[29789]: Could not generate persistent MAC address for vethb3f38e19: No such file or directory Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered blocking state Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state Aug 02 12:39:06 managed-node2 kernel: device vethb3f38e19 entered promiscuous mode Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered blocking state Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered forwarding state Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state Aug 02 12:39:06 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb3f38e19: link becomes ready Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered blocking state Aug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered forwarding state Aug 02 12:39:06 managed-node2 NetworkManager[665]: [1754152746.7761] device (vethb3f38e19): carrier: link connected Aug 02 12:39:06 managed-node2 NetworkManager[665]: [1754152746.7763] device (cni-podman1): carrier: link connected Aug 02 12:39:06 managed-node2 dnsmasq[29859]: listening on cni-podman1(#3): 10.89.0.1 Aug 02 12:39:06 managed-node2 dnsmasq[29859]: listening on cni-podman1(#3): fe80::9446:b4ff:fe0c:8150%cni-podman1 Aug 02 12:39:06 managed-node2 dnsmasq[29863]: started, version 2.79 cachesize 150 Aug 02 12:39:06 managed-node2 dnsmasq[29863]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify Aug 02 12:39:06 managed-node2 dnsmasq[29863]: using local addresses only for domain dns.podman Aug 02 12:39:06 managed-node2 dnsmasq[29863]: reading /etc/resolv.conf Aug 02 12:39:06 managed-node2 dnsmasq[29863]: using local addresses only for domain dns.podman Aug 02 12:39:06 managed-node2 dnsmasq[29863]: using nameserver 10.29.169.13#53 Aug 02 12:39:06 managed-node2 dnsmasq[29863]: using nameserver 10.29.170.12#53 Aug 02 12:39:06 managed-node2 dnsmasq[29863]: using nameserver 10.2.32.1#53 Aug 02 12:39:06 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Aug 02 12:39:06 managed-node2 systemd[1]: Started libcontainer container 36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561. -- Subject: Unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:07 managed-node2 systemd[1]: Started libcontainer container 58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5. 
-- Subject: Unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:07 managed-node2 podman[29628]: Pod: Aug 02 12:39:07 managed-node2 podman[29628]: 7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3 Aug 02 12:39:07 managed-node2 podman[29628]: Container: Aug 02 12:39:07 managed-node2 podman[29628]: 58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5 Aug 02 12:39:07 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished starting up. -- -- The start-up result is done. Aug 02 12:39:08 managed-node2 platform-python[30036]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:39:09 managed-node2 platform-python[30161]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:10 managed-node2 platform-python[30285]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:11 managed-node2 platform-python[30408]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:12 managed-node2 platform-python[30696]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:39:13 managed-node2 platform-python[30819]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:13 managed-node2 platform-python[30942]: ansible-stat Invoked with 
path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Aug 02 12:39:13 managed-node2 platform-python[31041]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd3.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152753.1569695-17471-186787888155164/source _original_basename=tmpca25d1vk follow=False checksum=0ee95d54856ad9dce4aa168ba4cfda0f7aaf74cc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None Aug 02 12:39:13 managed-node2 systemd[1]: NetworkManager-dispatcher.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state. Aug 02 12:39:14 managed-node2 platform-python[31167]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:39:14 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice. -- Subject: Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethe290c1c0: link is not ready Aug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered blocking state Aug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state Aug 02 12:39:14 managed-node2 kernel: device vethe290c1c0 entered promiscuous mode Aug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered blocking state Aug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered forwarding state Aug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state Aug 02 12:39:14 managed-node2 NetworkManager[665]: [1754152754.3788] manager: (vethe290c1c0): new Veth device (/org/freedesktop/NetworkManager/Devices/6) Aug 02 12:39:14 managed-node2 systemd-udevd[31214]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
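Each pod gets a veth pair attached to the cni-podman1 bridge, and the dnsname CNI plugin keeps one dnsmasq instance per network, rewriting its host list as pods come and go (the "addnhosts - N addresses" lines). Two quick ways to inspect that state on the host (a sketch; the network name and path are the ones in the log):

  podman network inspect podman-default-kube-network
  cat /run/containers/cni/dnsname/podman-default-kube-network/addnhosts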
Aug 02 12:39:14 managed-node2 systemd-udevd[31214]: Could not generate persistent MAC address for vethe290c1c0: No such file or directory Aug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Aug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe290c1c0: link becomes ready Aug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered blocking state Aug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered forwarding state Aug 02 12:39:14 managed-node2 NetworkManager[665]: [1754152754.3907] device (vethe290c1c0): carrier: link connected Aug 02 12:39:14 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses Aug 02 12:39:14 managed-node2 systemd[1]: Started libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope. -- Subject: Unit libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:14 managed-node2 systemd[1]: Started libcontainer container 757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b. -- Subject: Unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:14 managed-node2 systemd[1]: Started libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope. -- Subject: Unit libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:14 managed-node2 systemd[1]: Started libcontainer container c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8. -- Subject: Unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:15 managed-node2 platform-python[31446]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None Aug 02 12:39:15 managed-node2 systemd[1]: Reloading. Aug 02 12:39:15 managed-node2 platform-python[31599]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None Aug 02 12:39:16 managed-node2 systemd[1]: Reloading. 
Aug 02 12:39:16 managed-node2 platform-python[31762]: ansible-systemd Invoked with name= scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None Aug 02 12:39:16 managed-node2 systemd[1]: Starting A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun starting up. Aug 02 12:39:16 managed-node2 systemd[1]: libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 systemd[1]: libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope: Consumed 33ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope completed and consumed the indicated resources. Aug 02 12:39:16 managed-node2 systemd[1]: libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 systemd[1]: libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope: Consumed 35ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope completed and consumed the indicated resources. Aug 02 12:39:16 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Aug 02 12:39:16 managed-node2 systemd[1]: var-lib-containers-storage-overlay-560605836dd57768a1625bf83fb2efde4d0b4be2bd75173250f2c981226dcfec-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-560605836dd57768a1625bf83fb2efde4d0b4be2bd75173250f2c981226dcfec-merged.mount has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 systemd[1]: libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state Aug 02 12:39:16 managed-node2 kernel: device vethe290c1c0 left promiscuous mode Aug 02 12:39:16 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state Aug 02 12:39:16 managed-node2 systemd[1]: run-netns-netns\x2d925a2bce\x2dbdb1\x2deec4\x2d32ca\x2dde6b846181b3.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2d925a2bce\x2dbdb1\x2deec4\x2d32ca\x2dde6b846181b3.mount has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b-userdata-shm.mount has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 systemd[1]: var-lib-containers-storage-overlay-aecbd6406a5b43a9f34c80dc98df8e948a604d44b70ab60cb4952ed4aea64143-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-aecbd6406a5b43a9f34c80dc98df8e948a604d44b70ab60cb4952ed4aea64143-merged.mount has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 systemd[1]: libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has successfully entered the 'dead' state. Aug 02 12:39:16 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice. -- Subject: Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished shutting down. Aug 02 12:39:16 managed-node2 systemd[1]: machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice: Consumed 199ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice completed and consumed the indicated resources. Aug 02 12:39:17 managed-node2 podman[31769]: Pods stopped: Aug 02 12:39:17 managed-node2 podman[31769]: c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd Aug 02 12:39:17 managed-node2 podman[31769]: Pods removed: Aug 02 12:39:17 managed-node2 podman[31769]: c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd Aug 02 12:39:17 managed-node2 podman[31769]: Secrets removed: Aug 02 12:39:17 managed-node2 podman[31769]: Volumes removed: Aug 02 12:39:17 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice. -- Subject: Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:39:17 managed-node2 systemd[1]: Started libcontainer container c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124. 
-- Subject: Unit libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:17 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth69cd15af: link is not ready Aug 02 12:39:17 managed-node2 NetworkManager[665]: [1754152757.2610] manager: (veth69cd15af): new Veth device (/org/freedesktop/NetworkManager/Devices/7) Aug 02 12:39:17 managed-node2 systemd-udevd[31935]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. Aug 02 12:39:17 managed-node2 systemd-udevd[31935]: Could not generate persistent MAC address for veth69cd15af: No such file or directory Aug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered blocking state Aug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered disabled state Aug 02 12:39:17 managed-node2 kernel: device veth69cd15af entered promiscuous mode Aug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered blocking state Aug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered forwarding state Aug 02 12:39:17 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth69cd15af: link becomes ready Aug 02 12:39:17 managed-node2 NetworkManager[665]: [1754152757.2829] device (veth69cd15af): carrier: link connected Aug 02 12:39:17 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses Aug 02 12:39:17 managed-node2 systemd[1]: Started libcontainer container 686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c. -- Subject: Unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:17 managed-node2 systemd[1]: Started libcontainer container 42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c. -- Subject: Unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:39:17 managed-node2 podman[31769]: Pod: Aug 02 12:39:17 managed-node2 podman[31769]: 728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748 Aug 02 12:39:17 managed-node2 podman[31769]: Container: Aug 02 12:39:17 managed-node2 podman[31769]: 42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c Aug 02 12:39:17 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished starting up. -- -- The start-up result is done. 
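With both template instances up, a minimal verification from the shell looks like this (a sketch; the instance name is single-quoted because of the \x2d escapes):

  systemctl is-active 'podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service'
  podman pod ps --filter name=httpd3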
Aug 02 12:39:18 managed-node2 sudo[32165]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcqlhppzgtldczaoizfnuaorgtcfrvcv ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152758.2331746-17700-125571240428666/AnsiballZ_command.py' Aug 02 12:39:18 managed-node2 sudo[32165]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:39:18 managed-node2 platform-python[32168]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:18 managed-node2 systemd[25590]: Started podman-32177.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:39:18 managed-node2 sudo[32165]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:39:18 managed-node2 platform-python[32306]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:19 managed-node2 platform-python[32437]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:19 managed-node2 sudo[32575]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fljlyrnggisgqfazmxwyyzqdsrmpnozw ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152759.5230556-17753-36771943884398/AnsiballZ_command.py' Aug 02 12:39:19 managed-node2 sudo[32575]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:39:19 managed-node2 platform-python[32578]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:19 managed-node2 sudo[32575]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:39:20 managed-node2 platform-python[32704]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:20 managed-node2 platform-python[32830]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:21 managed-node2 platform-python[32956]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None 
content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:21 managed-node2 platform-python[33080]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:21 managed-node2 rsyslogd[1025]: imjournal: journal files changed, reloading... [v8.2102.0-15.el8 try https://www.rsyslog.com/e/0 ] Aug 02 12:39:22 managed-node2 platform-python[33205]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_ga01o8zo_podman/httpd1-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:22 managed-node2 platform-python[33329]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_ga01o8zo_podman/httpd2-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:22 managed-node2 platform-python[33453]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_ga01o8zo_podman/httpd3-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:25 managed-node2 platform-python[33702]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:26 managed-node2 platform-python[33831]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:39:30 managed-node2 platform-python[33956]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:39:33 managed-node2 platform-python[34079]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Aug 02 12:39:33 managed-node2 platform-python[34206]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Aug 02 12:39:34 managed-node2 platform-python[34333]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] 
source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Aug 02 12:39:36 managed-node2 platform-python[34456]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:39:39 managed-node2 platform-python[34579]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:39:42 managed-node2 platform-python[34702]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:39:45 managed-node2 platform-python[34825]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Aug 02 12:39:47 managed-node2 platform-python[34986]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True Aug 02 12:39:48 managed-node2 platform-python[35109]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked Aug 02 12:39:53 managed-node2 platform-python[35232]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Aug 02 12:39:53 managed-node2 platform-python[35356]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:39:54 managed-node2 platform-python[35481]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:54 managed-node2 platform-python[35605]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:55 managed-node2 platform-python[35729]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None 
chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:39:56 managed-node2 platform-python[35853]: ansible-command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None Aug 02 12:39:57 managed-node2 platform-python[35976]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:57 managed-node2 platform-python[36099]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:39:58 managed-node2 sudo[36222]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgimaanjcdkkcebhasfqpdcfpgwkfami ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152797.8648844-19514-81631354606804/AnsiballZ_podman_image.py' Aug 02 12:39:58 managed-node2 sudo[36222]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36227.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36235.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36243.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36251.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36259.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36268.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
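The getsubids and loginctl calls above are the standard rootless-podman preconditions: the user needs subordinate UID/GID ranges and session lingering so its services outlive logouts. As plain commands (user name from the log):

  getsubids podman_basic_user       # subordinate UIDs, from /etc/subuid
  getsubids -g podman_basic_user    # subordinate GIDs, from /etc/subgid
  loginctl enable-linger podman_basic_user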
Aug 02 12:39:58 managed-node2 sudo[36222]: pam_unix(sudo:session): session closed for user podman_basic_user
Aug 02 12:39:59 managed-node2 platform-python[36397]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:39:59 managed-node2 platform-python[36522]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Aug 02 12:40:00 managed-node2 platform-python[36645]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Aug 02 12:40:00 managed-node2 platform-python[36709]: ansible-file Invoked with owner=podman_basic_user group=3001 mode=0644 dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _original_basename=tmp5c1b5ldh recurse=False state=file path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Aug 02 12:40:00 managed-node2 sudo[36832]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkfbzsoxqmfspcyzxykzglzhyzsybbor ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152800.4994206-19649-125491116375657/AnsiballZ_podman_play.py'
Aug 02 12:40:00 managed-node2 sudo[36832]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)
Aug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Aug 02 12:40:00 managed-node2 systemd[25590]: Started podman-36843.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done.
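The replay that follows runs as podman_basic_user with XDG_RUNTIME_DIR pointed at the user's runtime directory, and it fails with rc 125 because a pod named httpd1 already exists: podman play kube is not idempotent unless told to replace. A sketch of the invocation pattern the log shows, plus the variant that avoids the clash (--replace exists in the podman 4.x used here):

  sudo -u podman_basic_user sh -c 'XDG_RUNTIME_DIR=/run/user/3001 \
    podman play kube --start=true /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml'
  # Adding --replace would tear down the existing httpd1 pod first:
  #   podman play kube --replace --start=true <file>.yml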
Aug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Aug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Aug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-08-02T12:40:00-04:00" level=info msg="/bin/podman filtering at log level debug" time="2025-08-02T12:40:00-04:00" level=debug msg="Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)" time="2025-08-02T12:40:00-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-08-02T12:40:00-04:00" level=info msg="Using sqlite as database backend" time="2025-08-02T12:40:00-04:00" level=debug msg="Using graph driver overlay" time="2025-08-02T12:40:00-04:00" level=debug msg="Using graph root /home/podman_basic_user/.local/share/containers/storage" time="2025-08-02T12:40:00-04:00" level=debug msg="Using run root /run/user/3001/containers" time="2025-08-02T12:40:00-04:00" level=debug msg="Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod" time="2025-08-02T12:40:00-04:00" level=debug msg="Using tmp dir /run/user/3001/libpod/tmp" time="2025-08-02T12:40:00-04:00" level=debug msg="Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes" time="2025-08-02T12:40:00-04:00" level=debug msg="Using transient store: false" time="2025-08-02T12:40:00-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-08-02T12:40:00-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:40:00-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:40:00-04:00" level=debug msg="Cached value indicated that metacopy is not being used" time="2025-08-02T12:40:00-04:00" level=debug msg="Cached value indicated that native-diff is usable" time="2025-08-02T12:40:00-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false" time="2025-08-02T12:40:00-04:00" level=debug msg="Initializing event backend file" time="2025-08-02T12:40:00-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-08-02T12:40:00-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-08-02T12:40:00-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2025-08-02T12:40:00-04:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-08-02T12:40:00-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2025-08-02T12:40:00-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-08-02T12:40:00-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found 
for OCI runtime kata: invalid argument" time="2025-08-02T12:40:00-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-08-02T12:40:00-04:00" level=info msg="Setting parallel job count to 7" time="2025-08-02T12:40:00-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:38:52.156183843 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-08-02T12:40:00-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-08-02T12:40:00-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:40:00-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:40:00-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-08-02T12:40:00-04:00" level=debug msg="parsed reference into \"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:40:00-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:40:00-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778)" time="2025-08-02T12:40:00-04:00" level=debug msg="exporting opaque data as blob \"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"" time="2025-08-02T12:40:00-04:00" level=debug msg="Pod using bridge network mode" time="2025-08-02T12:40:00-04:00" level=debug msg="Got pod cgroup as /libpod_parent/af868cea690b52212d50213e7cf00f2f99a7e0af0fbb1c22376a1c8272177aef" Error: adding pod to state: name "httpd1" is in use: pod already exists time="2025-08-02T12:40:00-04:00" level=debug msg="Shutting down engines" Aug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125 Aug 02 12:40:00 managed-node2 sudo[36832]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:01 managed-node2 platform-python[36997]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Aug 02 12:40:02 managed-node2 platform-python[37121]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:03 managed-node2 platform-python[37246]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:04 managed-node2 platform-python[37370]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None 
selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:05 managed-node2 platform-python[37493]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:06 managed-node2 platform-python[37784]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:07 managed-node2 platform-python[37909]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:07 managed-node2 platform-python[38032]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Aug 02 12:40:07 managed-node2 platform-python[38096]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd2.yml _original_basename=tmp582cc1u4 recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd2.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:40:08 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice. -- Subject: Unit machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice has finished starting up. -- -- The start-up result is done. 
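The httpd1 run above exited 125 with 'name "httpd1" is in use: pod already exists', and the httpd2 run below fails the same way: podman play kube refuses to reuse a live pod name. The module's recreate option (logged as recreate=None above, i.e. unset) is the knob for replacing an existing pod instead of erroring; a hedged sketch:

- name: Replace an existing pod rather than failing on a name clash
  containers.podman.podman_play:
    kube_file: /etc/containers/ansible-kubernetes.d/httpd2.yml
    state: started
    # recreate tears down a same-named pod before playing the spec again;
    # it is left unset (None) in the runs logged here
    recreate: true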
Aug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml Aug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Aug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time="2025-08-02T12:40:08-04:00" level=info msg="/usr/bin/podman filtering at log level debug" time="2025-08-02T12:40:08-04:00" level=debug msg="Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)" time="2025-08-02T12:40:08-04:00" level=debug msg="Using conmon: \"/usr/bin/conmon\"" time="2025-08-02T12:40:08-04:00" level=info msg="Using sqlite as database backend" time="2025-08-02T12:40:08-04:00" level=debug msg="Using graph driver overlay" time="2025-08-02T12:40:08-04:00" level=debug msg="Using graph root /var/lib/containers/storage" time="2025-08-02T12:40:08-04:00" level=debug msg="Using run root /run/containers/storage" time="2025-08-02T12:40:08-04:00" level=debug msg="Using static dir /var/lib/containers/storage/libpod" time="2025-08-02T12:40:08-04:00" level=debug msg="Using tmp dir /run/libpod" time="2025-08-02T12:40:08-04:00" level=debug msg="Using volume path /var/lib/containers/storage/volumes" time="2025-08-02T12:40:08-04:00" level=debug msg="Using transient store: false" time="2025-08-02T12:40:08-04:00" level=debug msg="[graphdriver] trying provided driver \"overlay\"" time="2025-08-02T12:40:08-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:40:08-04:00" level=debug msg="Cached value indicated that overlay is supported" time="2025-08-02T12:40:08-04:00" level=debug msg="Cached value indicated that metacopy is being used" time="2025-08-02T12:40:08-04:00" level=debug msg="Cached value indicated that native-diff is not being used" time="2025-08-02T12:40:08-04:00" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" time="2025-08-02T12:40:08-04:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true" time="2025-08-02T12:40:08-04:00" level=debug msg="Initializing event backend file" time="2025-08-02T12:40:08-04:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument" time="2025-08-02T12:40:08-04:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument" time="2025-08-02T12:40:08-04:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument" time="2025-08-02T12:40:08-04:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument" time="2025-08-02T12:40:08-04:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument" time="2025-08-02T12:40:08-04:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument" time="2025-08-02T12:40:08-04:00" level=debug msg="Configured OCI runtime 
youki initialization failed: no valid executable found for OCI runtime youki: invalid argument" time="2025-08-02T12:40:08-04:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\"" time="2025-08-02T12:40:08-04:00" level=info msg="Setting parallel job count to 7" time="2025-08-02T12:40:08-04:00" level=debug msg="Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:36:24.472660556 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}" time="2025-08-02T12:40:08-04:00" level=debug msg="Successfully loaded 2 networks" time="2025-08-02T12:40:08-04:00" level=debug msg="Looking up image \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:40:08-04:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux [] }" time="2025-08-02T12:40:08-04:00" level=debug msg="Trying \"localhost/podman-pause:4.9.4-dev-1708535009\" ..." time="2025-08-02T12:40:08-04:00" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:40:08-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage" time="2025-08-02T12:40:08-04:00" level=debug msg="Found image \"localhost/podman-pause:4.9.4-dev-1708535009\" as \"localhost/podman-pause:4.9.4-dev-1708535009\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb)" time="2025-08-02T12:40:08-04:00" level=debug msg="exporting opaque data as blob \"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"" time="2025-08-02T12:40:08-04:00" level=debug msg="Pod using bridge network mode" time="2025-08-02T12:40:08-04:00" level=debug msg="Created cgroup path machine.slice/machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice for parent machine.slice and name libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d" time="2025-08-02T12:40:08-04:00" level=debug msg="Created cgroup machine.slice/machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice" time="2025-08-02T12:40:08-04:00" level=debug msg="Got pod cgroup as machine.slice/machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice" Error: adding pod to state: name "httpd2" is in use: pod already exists time="2025-08-02T12:40:08-04:00" level=debug msg="Shutting down engines" Aug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125 Aug 02 12:40:09 managed-node2 platform-python[38380]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:11 managed-node2 platform-python[38505]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:12 managed-node2 platform-python[38629]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd3 
state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:12 managed-node2 platform-python[38752]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:14 managed-node2 platform-python[39041]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:14 managed-node2 platform-python[39166]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:15 managed-node2 platform-python[39289]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Aug 02 12:40:15 managed-node2 platform-python[39353]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd3.yml _original_basename=tmp1h9opetg recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd3.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:15 managed-node2 platform-python[39476]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:40:15 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_09fcb92602708a13ddcd36fafab1114193416cfb9688e13dc060dc675a9ebb02.slice. 
-- Subject: Unit machine-libpod_pod_09fcb92602708a13ddcd36fafab1114193416cfb9688e13dc060dc675a9ebb02.slice has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_09fcb92602708a13ddcd36fafab1114193416cfb9688e13dc060dc675a9ebb02.slice has finished starting up. -- -- The start-up result is done. Aug 02 12:40:16 managed-node2 sudo[39637]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqvcznhcsisjvfqgdskaqojrqwhygefl ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152816.415957-20432-182641700984178/AnsiballZ_command.py' Aug 02 12:40:16 managed-node2 sudo[39637]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:16 managed-node2 platform-python[39640]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:16 managed-node2 systemd[25590]: Started podman-39648.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:40:16 managed-node2 sudo[39637]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:17 managed-node2 platform-python[39778]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:17 managed-node2 platform-python[39909]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:17 managed-node2 sudo[40040]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omdhsbbfmibalzpyvmgfxmtrjimxacss ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152817.7911968-20497-78215437807137/AnsiballZ_command.py' Aug 02 12:40:17 managed-node2 sudo[40040]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:18 managed-node2 platform-python[40043]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:18 managed-node2 sudo[40040]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:18 managed-node2 platform-python[40169]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:18 managed-node2 platform-python[40295]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:19 managed-node2 platform-python[40421]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET 
follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:19 managed-node2 platform-python[40545]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:20 managed-node2 platform-python[40669]: ansible-uri Invoked with url=http://localhost:15003/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:23 managed-node2 platform-python[40918]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:24 managed-node2 platform-python[41047]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:27 managed-node2 platform-python[41172]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Aug 02 12:40:28 managed-node2 platform-python[41296]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:28 managed-node2 platform-python[41421]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:29 managed-node2 platform-python[41545]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:30 managed-node2 platform-python[41669]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:31 managed-node2 
platform-python[41793]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:31 managed-node2 sudo[41918]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxfxukflwegbkcgylwjhylqbzyvcoltj ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152831.2720408-21172-116283974621483/AnsiballZ_systemd.py' Aug 02 12:40:31 managed-node2 sudo[41918]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:31 managed-node2 platform-python[41921]: ansible-systemd Invoked with name= scope=user state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Aug 02 12:40:31 managed-node2 systemd[25590]: Reloading. Aug 02 12:40:31 managed-node2 systemd[25590]: Stopping A template for running K8s workloads via podman-kube-play... -- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Aug 02 12:40:31 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered disabled state Aug 02 12:40:31 managed-node2 kernel: device vethfa4f074b left promiscuous mode Aug 02 12:40:31 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered disabled state Aug 02 12:40:32 managed-node2 podman[41937]: Pods stopped: Aug 02 12:40:32 managed-node2 podman[41937]: 5e85626264c7b01bb8a75935ff611e0d589096a98e89e05fd7aa9037e892318a Aug 02 12:40:32 managed-node2 podman[41937]: Pods removed: Aug 02 12:40:32 managed-node2 podman[41937]: 5e85626264c7b01bb8a75935ff611e0d589096a98e89e05fd7aa9037e892318a Aug 02 12:40:32 managed-node2 podman[41937]: Secrets removed: Aug 02 12:40:32 managed-node2 podman[41937]: Volumes removed: Aug 02 12:40:32 managed-node2 systemd[25590]: Stopped A template for running K8s workloads via podman-kube-play. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. 
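Before teardown, the test verified each deployment two ways, as the entries above show: podman pod inspect --format '{{.State}}' per pod (through sudo with XDG_RUNTIME_DIR for the rootless httpd1 pod) and ansible-uri GETs against http://localhost:15001-15003/index.txt expecting 200. A sketch of those checks; the expected 'Running' value is an assumption, since the command's stdout is not in this journal:

- name: Check the httpd1 pod state
  ansible.builtin.command:
    argv:
      - podman
      - pod
      - inspect
      - httpd1
      - --format
      # !unsafe keeps Ansible from treating the Go template as Jinja2
      - !unsafe '{{.State}}'
  register: pod_state
  changed_when: false
  # assumed expected value; the journal does not show the command output
  failed_when: "'Running' not in pod_state.stdout"
  become: true
  become_user: podman_basic_user
  environment:
    XDG_RUNTIME_DIR: /run/user/3001

- name: Fetch the test page from each published port
  ansible.builtin.uri:
    url: "http://localhost:{{ item }}/index.txt"
    return_content: true
    status_code: 200
  loop: [15001, 15002, 15003]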
Aug 02 12:40:32 managed-node2 sudo[41918]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:32 managed-node2 platform-python[42211]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:32 managed-node2 sudo[42336]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvztponjzcqifptzcqpojjxosoczfayg ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152832.762454-21237-209128215005834/AnsiballZ_podman_play.py' Aug 02 12:40:32 managed-node2 sudo[42336]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Aug 02 12:40:33 managed-node2 systemd[25590]: Started podman-42347.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
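The 'Stopping/Stopped A template for running K8s workloads…' sequence that just finished was driven by the ansible-systemd call above with scope=user (the unit name itself is redacted as name= in the journal): scope: user makes the module address podman_basic_user's systemd instance (systemd[25590]) rather than PID 1. A sketch with the unit name left as a placeholder, since the log elides it:

- name: Stop and disable the rootless podman-kube unit
  ansible.builtin.systemd:
    # placeholder: the actual unit name is redacted (name=) in the journal
    name: "{{ kube_unit_name }}"
    scope: user
    state: stopped
    enabled: false
  become: true
  become_user: podman_basic_user
  environment:
    XDG_RUNTIME_DIR: /run/user/3001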
Aug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman kube play --down /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml Aug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped: Pods removed: Secrets removed: Volumes removed: Aug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: Aug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Aug 02 12:40:33 managed-node2 sudo[42336]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:33 managed-node2 platform-python[42476]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:34 managed-node2 platform-python[42599]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Aug 02 12:40:35 managed-node2 platform-python[42723]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:36 managed-node2 platform-python[42848]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:37 managed-node2 platform-python[42972]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Aug 02 12:40:37 managed-node2 systemd[1]: Reloading. Aug 02 12:40:37 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has begun shutting down. Aug 02 12:40:37 managed-node2 systemd[1]: libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope has successfully entered the 'dead' state. Aug 02 12:40:37 managed-node2 systemd[1]: libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope: Consumed 31ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope completed and consumed the indicated resources. Aug 02 12:40:37 managed-node2 systemd[1]: libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope has successfully entered the 'dead' state. Aug 02 12:40:37 managed-node2 systemd[1]: libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope: Consumed 35ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope completed and consumed the indicated resources. Aug 02 12:40:37 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses Aug 02 12:40:37 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state Aug 02 12:40:37 managed-node2 kernel: device vethb3f38e19 left promiscuous mode Aug 02 12:40:37 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state Aug 02 12:40:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay-5a0e125408b3f62c274917d9a997808220e1c2685a2d8ff8405416971a11f6c0-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-5a0e125408b3f62c274917d9a997808220e1c2685a2d8ff8405416971a11f6c0-merged.mount has successfully entered the 'dead' state. Aug 02 12:40:37 managed-node2 systemd[1]: run-netns-netns\x2de8368567\x2d59b6\x2d542f\x2d1a97\x2df2ca68e931e3.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2de8368567\x2d59b6\x2d542f\x2d1a97\x2df2ca68e931e3.mount has successfully entered the 'dead' state. Aug 02 12:40:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561-userdata-shm.mount has successfully entered the 'dead' state. Aug 02 12:40:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay-15c93535d4532096f93e6259ac42e3e35cf574dd19355624da5c37ad60d78144-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-15c93535d4532096f93e6259ac42e3e35cf574dd19355624da5c37ad60d78144-merged.mount has successfully entered the 'dead' state. Aug 02 12:40:37 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice. -- Subject: Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has finished shutting down. 
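Teardown follows one pattern at both privilege levels, as the httpd1 (rootless) and httpd2 (system) sequences above show: podman_play with state=absent runs podman kube play --down against the spec, then the kube file itself is removed. A sketch of the pair for the system-level file:

- name: Take the pod down
  containers.podman.podman_play:
    kube_file: /etc/containers/ansible-kubernetes.d/httpd2.yml
    state: absent

- name: Remove the kube spec once the pod is gone
  ansible.builtin.file:
    path: /etc/containers/ansible-kubernetes.d/httpd2.yml
    state: absent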
Aug 02 12:40:37 managed-node2 systemd[1]: machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice: Consumed 67ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice completed and consumed the indicated resources. Aug 02 12:40:38 managed-node2 systemd[1]: libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope has successfully entered the 'dead' state. Aug 02 12:40:38 managed-node2 systemd[1]: libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope: Consumed 35ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope completed and consumed the indicated resources. Aug 02 12:40:38 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27-userdata-shm.mount has successfully entered the 'dead' state. Aug 02 12:40:38 managed-node2 podman[43008]: Pods stopped: Aug 02 12:40:38 managed-node2 podman[43008]: 7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3 Aug 02 12:40:38 managed-node2 podman[43008]: Pods removed: Aug 02 12:40:38 managed-node2 podman[43008]: 7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3 Aug 02 12:40:38 managed-node2 podman[43008]: Secrets removed: Aug 02 12:40:38 managed-node2 podman[43008]: Volumes removed: Aug 02 12:40:38 managed-node2 dnsmasq[29863]: exiting on receipt of SIGTERM Aug 02 12:40:38 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has successfully entered the 'dead' state. Aug 02 12:40:38 managed-node2 systemd[1]: Stopped A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd2.yml.service has finished shutting down. Aug 02 12:40:38 managed-node2 platform-python[43285]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:38 managed-node2 systemd[1]: var-lib-containers-storage-overlay-1ad187e54185c2c7cbe64b95feb5c0fbe8c425581baa88a9f71bb6eaaa92a272-merged.mount: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-1ad187e54185c2c7cbe64b95feb5c0fbe8c425581baa88a9f71bb6eaaa92a272-merged.mount has successfully entered the 'dead' state. Aug 02 12:40:38 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd2.yml Aug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman kube play --down /etc/containers/ansible-kubernetes.d/httpd2.yml Aug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped: Pods removed: Secrets removed: Volumes removed: Aug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: Aug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0 Aug 02 12:40:39 managed-node2 platform-python[43546]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:40 managed-node2 platform-python[43669]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:42 managed-node2 platform-python[43794]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:42 managed-node2 platform-python[43918]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Aug 02 12:40:42 managed-node2 systemd[1]: Reloading. Aug 02 12:40:43 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play... -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has begun shutting down. Aug 02 12:40:43 managed-node2 systemd[1]: libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope: Succeeded. 
-- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope: Consumed 32ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope completed and consumed the indicated resources. Aug 02 12:40:43 managed-node2 systemd[1]: libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope: Consumed 34ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope completed and consumed the indicated resources. Aug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay-65e7014be45047102e9045dcdd9345e82206f8672e1c2920f53097bbdf3fcc43-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-65e7014be45047102e9045dcdd9345e82206f8672e1c2920f53097bbdf3fcc43-merged.mount has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered disabled state Aug 02 12:40:43 managed-node2 kernel: device veth69cd15af left promiscuous mode Aug 02 12:40:43 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered disabled state Aug 02 12:40:43 managed-node2 systemd[1]: run-netns-netns\x2dfd171033\x2dc8d0\x2d5ddd\x2d985b\x2d865fa20d123b.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-netns-netns\x2dfd171033\x2dc8d0\x2d5ddd\x2d985b\x2d865fa20d123b.mount has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c-userdata-shm.mount has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay-bd51c7fcb084eb966e3f3b68637936b0c8d50499aba52b4ffce32c4ad877cf0d-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-bd51c7fcb084eb966e3f3b68637936b0c8d50499aba52b4ffce32c4ad877cf0d-merged.mount has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice. 
-- Subject: Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished shutting down. Aug 02 12:40:43 managed-node2 systemd[1]: machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice: Consumed 66ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice completed and consumed the indicated resources. Aug 02 12:40:43 managed-node2 podman[43954]: Pods stopped: Aug 02 12:40:43 managed-node2 podman[43954]: 728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748 Aug 02 12:40:43 managed-node2 podman[43954]: Pods removed: Aug 02 12:40:43 managed-node2 podman[43954]: 728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748 Aug 02 12:40:43 managed-node2 podman[43954]: Secrets removed: Aug 02 12:40:43 managed-node2 podman[43954]: Volumes removed: Aug 02 12:40:43 managed-node2 systemd[1]: libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope: Consumed 35ms CPU time -- Subject: Resources consumed by unit runtime -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope completed and consumed the indicated resources. Aug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124-userdata-shm.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay\x2dcontainers-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124-userdata-shm.mount has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has successfully entered the 'dead' state. Aug 02 12:40:43 managed-node2 systemd[1]: Stopped A template for running K8s workloads via podman-kube-play. -- Subject: Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service has finished shutting down. 
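The unit names in these shutdown entries, e.g. podman-kube@-etc-containers-ansible\x2dkubernetes.d-httpd3.yml.service, are template instances whose instance string is the systemd-escaped kube file path ('/' becomes '-', '-' becomes \x2d). One way to compute that name in a play (an illustration; the role's own mechanism is not visible in this journal):

- name: Escape the kube file path into a template instance name
  ansible.builtin.command:
    argv: [systemd-escape, /etc/containers/ansible-kubernetes.d/httpd3.yml]
  register: escaped
  changed_when: false

- name: Stop the matching podman-kube instance
  ansible.builtin.systemd:
    name: "podman-kube@{{ escaped.stdout }}.service"
    state: stopped
    enabled: false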
Aug 02 12:40:44 managed-node2 platform-python[44224]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:44 managed-node2 systemd[1]: var-lib-containers-storage-overlay-7cab78c749c43bcbce8111bde995b9f5abcdf949ac791b25a3b873fde27f6845-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-7cab78c749c43bcbce8111bde995b9f5abcdf949ac791b25a3b873fde27f6845-merged.mount has successfully entered the 'dead' state. Aug 02 12:40:44 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Aug 02 12:40:44 managed-node2 platform-python[44349]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Aug 02 12:40:44 managed-node2 platform-python[44349]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd3.yml Aug 02 12:40:44 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
Aug 02 12:40:44 managed-node2 platform-python[44486]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:40:45 managed-node2 platform-python[44609]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None Aug 02 12:40:46 managed-node2 platform-python[44733]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:47 managed-node2 sudo[44858]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olrmvkvglxbsttmchncdtptzsqpgbnrc ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152846.6847095-21950-187401977583699/AnsiballZ_podman_container_info.py' Aug 02 12:40:47 managed-node2 sudo[44858]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:47 managed-node2 platform-python[44861]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Aug 02 12:40:47 managed-node2 systemd[25590]: Started podman-44863.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:40:47 managed-node2 sudo[44858]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:47 managed-node2 sudo[44992]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpjmlfvygeamsbgxiibmynquopakmqzw ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152847.3914187-21987-273657899931369/AnsiballZ_command.py' Aug 02 12:40:47 managed-node2 sudo[44992]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:47 managed-node2 platform-python[44995]: ansible-command Invoked with _raw_params=podman network ls -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:47 managed-node2 systemd[25590]: Started podman-44997.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
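After removing the workloads, the test audits the rootless user's state for leftovers: podman_container_info with no name (dump every container), podman network ls -q above, and podman secret ls -n -q just below. A sketch of that audit:

- name: List any containers the user still owns
  containers.podman.podman_container_info:
  # no name given, so all containers are reported, as in the log above
  register: remaining
  become: true
  become_user: podman_basic_user
  environment:
    XDG_RUNTIME_DIR: /run/user/3001

- name: List leftover networks and secrets
  ansible.builtin.command:
    cmd: "{{ item }}"
  loop:
    - podman network ls -q
    - podman secret ls -n -q
  register: leftovers
  changed_when: false
  become: true
  become_user: podman_basic_user
  environment:
    XDG_RUNTIME_DIR: /run/user/3001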
Aug 02 12:40:47 managed-node2 sudo[44992]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:47 managed-node2 sudo[45152]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkszomlmijhetpvbizxheimorvpsvgdk ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152847.866539-22018-22833626881705/AnsiballZ_command.py' Aug 02 12:40:47 managed-node2 sudo[45152]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:48 managed-node2 platform-python[45155]: ansible-command Invoked with _raw_params=podman secret ls -n -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:48 managed-node2 systemd[25590]: Started podman-45157.scope. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:40:48 managed-node2 sudo[45152]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:48 managed-node2 platform-python[45287]: ansible-command Invoked with removes=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl disable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None stdin=None Aug 02 12:40:48 managed-node2 systemd[1]: Stopping User Manager for UID 3001... -- Subject: Unit user@3001.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@3001.service has begun shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopping podman-pause-9fcbd008.scope. -- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Default. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopping D-Bus User Message Bus... -- Subject: Unit UNIT has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has begun shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Removed slice podman\x2dkube.slice. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped D-Bus User Message Bus. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Basic System. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Sockets. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Timers. 
-- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped Mark boot as successful after the user session has run 2 minutes. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Paths. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Closed D-Bus User Message Bus Socket. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Stopped podman-pause-9fcbd008.scope. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Removed slice user.slice. -- Subject: Unit UNIT has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[25590]: Reached target Shutdown. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:40:48 managed-node2 systemd[25590]: Started Exit the Session. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:40:48 managed-node2 systemd[25590]: Reached target Exit the Session. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. Aug 02 12:40:48 managed-node2 systemd[1]: user@3001.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit user@3001.service has successfully entered the 'dead' state. Aug 02 12:40:48 managed-node2 systemd[1]: Stopped User Manager for UID 3001. -- Subject: Unit user@3001.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user@3001.service has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[1]: Stopping User runtime directory /run/user/3001... -- Subject: Unit user-runtime-dir@3001.service has begun shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@3001.service has begun shutting down. Aug 02 12:40:48 managed-node2 systemd[1]: run-user-3001.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-user-3001.mount has successfully entered the 'dead' state. Aug 02 12:40:48 managed-node2 systemd[1]: user-runtime-dir@3001.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit user-runtime-dir@3001.service has successfully entered the 'dead' state. 
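The user-manager shutdown cascade above is the direct result of the loginctl disable-linger call logged earlier. That call uses the command module's removes guard, so it only runs while the linger file still exists; roughly:

- name: Disable linger for the rootless user (skipped once already disabled)
  command: loginctl disable-linger podman_basic_user
  args:
    removes: /var/lib/systemd/linger/podman_basic_user

With linger gone, systemd stops user@3001.service and unmounts /run/user/3001, exactly the sequence recorded above.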
Aug 02 12:40:48 managed-node2 systemd[1]: Stopped User runtime directory /run/user/3001. -- Subject: Unit user-runtime-dir@3001.service has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-runtime-dir@3001.service has finished shutting down. Aug 02 12:40:48 managed-node2 systemd[1]: Removed slice User Slice of UID 3001. -- Subject: Unit user-3001.slice has finished shutting down -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit user-3001.slice has finished shutting down. Aug 02 12:40:48 managed-node2 platform-python[45419]: ansible-command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:49 managed-node2 sudo[45543]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbqbllrdzcpiqglhdxzdzawivaqzmpyy ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152849.62423-22100-37847572003711/AnsiballZ_command.py' Aug 02 12:40:49 managed-node2 sudo[45543]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:49 managed-node2 platform-python[45546]: ansible-command Invoked with _raw_params=podman pod exists httpd1 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:49 managed-node2 sudo[45543]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:50 managed-node2 platform-python[45676]: ansible-command Invoked with _raw_params=podman pod exists httpd2 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Aug 02 12:40:50 managed-node2 platform-python[45806]: ansible-command Invoked with _raw_params=podman pod exists httpd3 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
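The podman pod exists probes above are pure checks: the command exits 0 when the pod is present and 1 when it is not, so the test can assert removal without parsing output. A hedged sketch (the register name is illustrative, not from the role):

- name: Assert that pod httpd1 has been removed
  command: podman pod exists httpd1
  register: __pod_exists  # illustrative name
  failed_when: __pod_exists.rc == 0
  changed_when: false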
Aug 02 12:40:50 managed-node2 sudo[45937]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kquekbnhuqoxfuhqzmvgbrdfuqcyvrpf ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152850.8293648-22145-217225041127031/AnsiballZ_command.py' Aug 02 12:40:50 managed-node2 sudo[45937]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0) Aug 02 12:40:51 managed-node2 platform-python[45940]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:51 managed-node2 sudo[45937]: pam_unix(sudo:session): session closed for user podman_basic_user Aug 02 12:40:51 managed-node2 platform-python[46066]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:51 managed-node2 platform-python[46192]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:52 managed-node2 platform-python[46318]: ansible-stat Invoked with path=/var/lib/systemd/linger/podman_basic_user follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:40:55 managed-node2 platform-python[46566]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:40:56 managed-node2 platform-python[46695]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Aug 02 12:40:57 managed-node2 platform-python[46819]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:00 managed-node2 platform-python[46944]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None Aug 02 12:41:01 managed-node2 platform-python[47068]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:01 managed-node2 platform-python[47193]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:01 managed-node2 platform-python[47317]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:03 managed-node2 platform-python[47441]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:03 managed-node2 platform-python[47565]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:04 managed-node2 platform-python[47688]: ansible-stat Invoked with 
path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:04 managed-node2 platform-python[47811]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:06 managed-node2 platform-python[47934]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Aug 02 12:41:06 managed-node2 platform-python[48058]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:08 managed-node2 platform-python[48183]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:08 managed-node2 platform-python[48307]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Aug 02 12:41:09 managed-node2 platform-python[48434]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:09 managed-node2 platform-python[48557]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:11 managed-node2 platform-python[48680]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:12 managed-node2 platform-python[48805]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:13 managed-node2 platform-python[48929]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None Aug 02 12:41:14 managed-node2 platform-python[49056]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:14 managed-node2 platform-python[49179]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:15 managed-node2 platform-python[49302]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None Aug 02 12:41:16 managed-node2 platform-python[49426]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:17 managed-node2 platform-python[49549]: ansible-file Invoked with path=/etc/containers/storage.conf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:17 managed-node2 platform-python[49672]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:18 managed-node2 sshd[49693]: Accepted publickey for root from 10.31.46.71 port 34968 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Aug 02 12:41:18 managed-node2 systemd-logind[591]: New session 9 of user root. -- Subject: A new session 9 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 9 has been created for the user root. -- -- The leading process of the session is 49693. Aug 02 12:41:18 managed-node2 systemd[1]: Started Session 9 of user root. -- Subject: Unit session-9.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-9.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:41:18 managed-node2 sshd[49693]: pam_unix(sshd:session): session opened for user root by (uid=0) Aug 02 12:41:18 managed-node2 sshd[49696]: Received disconnect from 10.31.46.71 port 34968:11: disconnected by user Aug 02 12:41:18 managed-node2 sshd[49696]: Disconnected from user root 10.31.46.71 port 34968 Aug 02 12:41:18 managed-node2 sshd[49693]: pam_unix(sshd:session): session closed for user root Aug 02 12:41:18 managed-node2 systemd[1]: session-9.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-9.scope has successfully entered the 'dead' state. Aug 02 12:41:18 managed-node2 systemd-logind[591]: Session 9 logged out. Waiting for processes to exit. Aug 02 12:41:18 managed-node2 systemd-logind[591]: Removed session 9. 
-- Subject: Session 9 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 9 has been terminated. Aug 02 12:41:20 managed-node2 platform-python[49858]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Aug 02 12:41:21 managed-node2 platform-python[49985]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:21 managed-node2 platform-python[50108]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:24 managed-node2 platform-python[50356]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:25 managed-node2 platform-python[50485]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Aug 02 12:41:26 managed-node2 platform-python[50609]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:27 managed-node2 sshd[50632]: Accepted publickey for root from 10.31.46.71 port 55872 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Aug 02 12:41:27 managed-node2 systemd-logind[591]: New session 10 of user root. -- Subject: A new session 10 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 10 has been created for the user root. -- -- The leading process of the session is 50632. Aug 02 12:41:27 managed-node2 systemd[1]: Started Session 10 of user root. -- Subject: Unit session-10.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-10.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:41:27 managed-node2 sshd[50632]: pam_unix(sshd:session): session opened for user root by (uid=0) Aug 02 12:41:27 managed-node2 sshd[50635]: Received disconnect from 10.31.46.71 port 55872:11: disconnected by user Aug 02 12:41:27 managed-node2 sshd[50635]: Disconnected from user root 10.31.46.71 port 55872 Aug 02 12:41:27 managed-node2 sshd[50632]: pam_unix(sshd:session): session closed for user root Aug 02 12:41:27 managed-node2 systemd[1]: session-10.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-10.scope has successfully entered the 'dead' state. Aug 02 12:41:27 managed-node2 systemd-logind[591]: Session 10 logged out. Waiting for processes to exit. Aug 02 12:41:27 managed-node2 systemd-logind[591]: Removed session 10. -- Subject: Session 10 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 10 has been terminated. 
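The ansible-setup call above limits fact gathering to the distribution facts the roles actually consult, rather than the full fact set; as a sketch:

- name: Gather a minimal fact subset
  setup:
    gather_subset:
      - '!all'
      - '!min'
      - distribution
      - distribution_major_version
      - distribution_version
      - os_family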
Aug 02 12:41:29 managed-node2 platform-python[50797]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Aug 02 12:41:33 managed-node2 platform-python[50949]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:33 managed-node2 platform-python[51072]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:35 managed-node2 platform-python[51320]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:36 managed-node2 platform-python[51449]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Aug 02 12:41:37 managed-node2 platform-python[51573]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:41 managed-node2 sshd[51596]: Accepted publickey for root from 10.31.46.71 port 38190 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Aug 02 12:41:41 managed-node2 systemd[1]: Started Session 11 of user root. -- Subject: Unit session-11.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-11.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:41:41 managed-node2 systemd-logind[591]: New session 11 of user root. -- Subject: A new session 11 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 11 has been created for the user root. -- -- The leading process of the session is 51596. Aug 02 12:41:41 managed-node2 sshd[51596]: pam_unix(sshd:session): session opened for user root by (uid=0) Aug 02 12:41:41 managed-node2 sshd[51599]: Received disconnect from 10.31.46.71 port 38190:11: disconnected by user Aug 02 12:41:41 managed-node2 sshd[51599]: Disconnected from user root 10.31.46.71 port 38190 Aug 02 12:41:41 managed-node2 sshd[51596]: pam_unix(sshd:session): session closed for user root Aug 02 12:41:41 managed-node2 systemd[1]: session-11.scope: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit session-11.scope has successfully entered the 'dead' state. Aug 02 12:41:41 managed-node2 systemd-logind[591]: Session 11 logged out. Waiting for processes to exit. Aug 02 12:41:41 managed-node2 systemd-logind[591]: Removed session 11. -- Subject: Session 11 has been terminated -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A session with the ID 11 has been terminated. 
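The recurring stat calls against /run/ostree-booted and /sbin/transactional-update are how the roles detect image-based systems (rpm-ostree, transactional-update) before deciding how to manage packages. Roughly (register names are illustrative):

- name: Check whether the system is ostree-booted
  stat:
    path: /run/ostree-booted
  register: __ostree_booted  # illustrative name

- name: Check for transactional-update
  stat:
    path: /sbin/transactional-update
  register: __transactional  # illustrative name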
Aug 02 12:41:43 managed-node2 platform-python[51761]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d Aug 02 12:41:43 managed-node2 platform-python[51913]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:44 managed-node2 platform-python[52036]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:45 managed-node2 platform-python[52160]: ansible-dnf Invoked with name=['python3-pyasn1', 'python3-cryptography', 'python3-dbus'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:41:48 managed-node2 platform-python[52288]: ansible-dnf Invoked with name=['certmonger'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Aug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Aug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Aug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Aug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration Aug 02 12:41:51 managed-node2 systemd[1]: Reloading. Aug 02 12:41:51 managed-node2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. -- Subject: Unit run-r5b158d19759a4bbaa61aee183ab0cad0.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit run-r5b158d19759a4bbaa61aee183ab0cad0.service has finished starting up. -- -- The start-up result is done. Aug 02 12:41:51 managed-node2 systemd[1]: Starting man-db-cache-update.service... -- Subject: Unit man-db-cache-update.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has begun starting up. Aug 02 12:41:52 managed-node2 systemd[1]: Reloading. Aug 02 12:41:52 managed-node2 systemd[1]: man-db-cache-update.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit man-db-cache-update.service has successfully entered the 'dead' state. Aug 02 12:41:52 managed-node2 systemd[1]: Started man-db-cache-update.service. -- Subject: Unit man-db-cache-update.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit man-db-cache-update.service has finished starting up. -- -- The start-up result is done. 
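The package installs logged above are plain dnf tasks; the man-db-cache-update unit that fires afterwards is the usual side effect of installing packages that ship man pages. For example:

- name: Install certmonger and its python dependencies
  dnf:
    name:
      - certmonger
      - python3-pyasn1
      - python3-cryptography
      - python3-dbus
    state: present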
Aug 02 12:41:52 managed-node2 systemd[1]: run-r5b158d19759a4bbaa61aee183ab0cad0.service: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit run-r5b158d19759a4bbaa61aee183ab0cad0.service has successfully entered the 'dead' state. Aug 02 12:41:53 managed-node2 platform-python[52920]: ansible-file Invoked with name=/etc/certmonger//pre-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//pre-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:53 managed-node2 platform-python[53043]: ansible-file Invoked with name=/etc/certmonger//post-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//post-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:54 managed-node2 platform-python[53166]: ansible-systemd Invoked with name=certmonger state=started enabled=True daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Aug 02 12:41:54 managed-node2 systemd[1]: Reloading. Aug 02 12:41:54 managed-node2 systemd[1]: Starting Certificate monitoring and PKI enrollment... -- Subject: Unit certmonger.service has begun start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit certmonger.service has begun starting up. Aug 02 12:41:54 managed-node2 systemd[1]: Started Certificate monitoring and PKI enrollment. -- Subject: Unit certmonger.service has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit certmonger.service has finished starting up. -- -- The start-up result is done. 
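With certmonger enabled and started, the certificate_request recorded next issues a self-signed certificate for localhost. At the role level this corresponds roughly to the following play (a sketch, assuming the fedora.linux_system_roles.certificate role; the request parameters are taken from the logged invocation):

- hosts: all
  vars:
    certificate_requests:
      - name: quadlet_demo
        dns: localhost
        ca: self-sign
  roles:
    - fedora.linux_system_roles.certificate

The issued pair lands in /etc/pki/tls/certs/quadlet_demo.crt and /etc/pki/tls/private/quadlet_demo.key; the cleanup that follows untracks the certificate with getcert stop-tracking and deletes both files, as the subsequent entries show.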
Aug 02 12:41:55 managed-node2 platform-python[53359]: ansible-fedora.linux_system_roles.certificate_request Invoked with name=quadlet_demo dns=['localhost'] directory=/etc/pki/tls wait=True ca=self-sign __header=# # Ansible managed # # system_role:certificate booted=True provider_config_directory=/etc/certmonger provider=certmonger key_usage=['digitalSignature', 'keyEncipherment'] extended_key_usage=['id-kp-serverAuth', 'id-kp-clientAuth'] auto_renew=True ip=None email=None common_name=None country=None state=None locality=None organization=None organizational_unit=None contact_email=None key_size=None owner=None group=None mode=None principal=None run_before=None run_after=None Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 
12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:55 managed-node2 certmonger[53375]: Certificate in file "/etc/pki/tls/certs/quadlet_demo.crt" issued by CA and saved. Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:56 managed-node2 platform-python[53497]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt Aug 02 12:41:56 managed-node2 platform-python[53620]: ansible-slurp Invoked with path=/etc/pki/tls/private/quadlet_demo.key src=/etc/pki/tls/private/quadlet_demo.key Aug 02 12:41:57 managed-node2 platform-python[53743]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt Aug 02 12:41:57 managed-node2 platform-python[53866]: ansible-command Invoked with _raw_params=getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:41:57 managed-node2 certmonger[53202]: 2025-08-02 12:41:57 [53202] Wrote to /var/lib/certmonger/requests/20250802164155 Aug 02 12:41:58 managed-node2 platform-python[53990]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:58 managed-node2 platform-python[54113]: ansible-file Invoked with path=/etc/pki/tls/private/quadlet_demo.key state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:58 managed-node2 platform-python[54236]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None Aug 02 12:41:59 managed-node2 platform-python[54359]: 
ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:41:59 managed-node2 platform-python[54482]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:42:02 managed-node2 platform-python[54730]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:42:03 managed-node2 platform-python[54859]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None Aug 02 12:42:03 managed-node2 platform-python[54983]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:42:06 managed-node2 platform-python[55108]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:42:06 managed-node2 platform-python[55231]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:42:07 managed-node2 platform-python[55354]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:42:08 managed-node2 platform-python[55478]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:42:11 managed-node2 platform-python[55601]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Aug 02 12:42:11 managed-node2 platform-python[55728]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Aug 02 12:42:12 managed-node2 platform-python[55855]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Aug 02 12:42:13 managed-node2 platform-python[55978]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 
ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Aug 02 12:42:15 managed-node2 platform-python[56101]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:42:15 managed-node2 platform-python[56225]: ansible-command Invoked with _raw_params=podman ps -a warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:42:15 managed-node2 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck89620470-merged.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay-metacopy\x2dcheck89620470-merged.mount has successfully entered the 'dead' state. Aug 02 12:42:15 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. Aug 02 12:42:16 managed-node2 platform-python[56356]: ansible-command Invoked with _raw_params=podman pod ps --ctr-ids --ctr-names --ctr-status warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:42:16 managed-node2 platform-python[56486]: ansible-command Invoked with _raw_params=set -euo pipefail; systemctl list-units --all | grep quadlet _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:42:16 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded. -- Subject: Unit succeeded -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state. 
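The firewall_lib invocations above open 8000/tcp and 9000/tcp both permanently and in the running configuration. As a role-level sketch (assuming the fedora.linux_system_roles.firewall role):

- hosts: all
  vars:
    firewall:
      - port: 8000/tcp
        state: enabled
        permanent: true
        runtime: true
      - port: 9000/tcp
        state: enabled
        permanent: true
        runtime: true
  roles:
    - fedora.linux_system_roles.firewall

Note also the set -euo pipefail prefix on the quadlet grep above: with pipefail, a failure of either systemctl or grep fails the whole probe instead of being masked by the pipe.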
Aug 02 12:42:17 managed-node2 platform-python[56612]: ansible-command Invoked with _raw_params=ls -alrtF /etc/systemd/system warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:42:20 managed-node2 platform-python[56861]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:42:21 managed-node2 platform-python[56990]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Aug 02 12:42:23 managed-node2 platform-python[57115]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None Aug 02 12:42:27 managed-node2 platform-python[57238]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None Aug 02 12:42:27 managed-node2 platform-python[57365]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None Aug 02 12:42:28 managed-node2 platform-python[57492]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Aug 02 12:42:28 managed-node2 platform-python[57615]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None Aug 02 12:42:30 managed-node2 platform-python[57738]: ansible-command Invoked with _raw_params=exec 1>&2 set -x set -o pipefail systemctl list-units --plain -l --all | grep quadlet || : systemctl list-unit-files --all | grep quadlet || : systemctl list-units --plain --failed -l --all | grep quadlet || : _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Aug 02 12:42:31 managed-node2 platform-python[57868]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None PLAY 
RECAP ********************************************************************* managed-node2 : ok=90 changed=8 unreachable=0 failed=2 skipped=140 rescued=2 ignored=0 SYSTEM ROLES ERRORS BEGIN v1 [ { "ansible_version": "2.9.27", "end_time": "2025-08-02T16:42:15.030634+00:00Z", "host": "managed-node2", "message": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "start_time": "2025-08-02T16:42:15.004906+00:00Z", "task_name": "Manage each secret", "task_path": "/tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42" }, { "ansible_version": "2.9.27", "delta": "0:00:00.028703", "end_time": "2025-08-02 12:42:15.455975", "host": "managed-node2", "message": "No message could be found", "rc": 0, "start_time": "2025-08-02 12:42:15.427272", "stdout": "-- Logs begin at Sat 2025-08-02 12:30:34 EDT, end at Sat 2025-08-02 12:42:15 EDT. --\nAug 02 12:36:00 managed-node2 platform-python[13179]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:36:01 managed-node2 platform-python[13302]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:36:01 managed-node2 platform-python[13425]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:36:02 managed-node2 platform-python[13548]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:36:05 managed-node2 platform-python[13671]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:36:08 managed-node2 platform-python[13794]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None 
list=None releasever=None\nAug 02 12:36:10 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nAug 02 12:36:10 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nAug 02 12:36:10 managed-node2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.\n-- Subject: Unit run-ra2afdf9e5f5f4df293ca569a6cdf6359.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit run-ra2afdf9e5f5f4df293ca569a6cdf6359.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:36:10 managed-node2 systemd[1]: Starting man-db-cache-update.service...\n-- Subject: Unit man-db-cache-update.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit man-db-cache-update.service has begun starting up.\nAug 02 12:36:11 managed-node2 systemd[1]: man-db-cache-update.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit man-db-cache-update.service has successfully entered the 'dead' state.\nAug 02 12:36:11 managed-node2 systemd[1]: Started man-db-cache-update.service.\n-- Subject: Unit man-db-cache-update.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit man-db-cache-update.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:36:11 managed-node2 systemd[1]: run-ra2afdf9e5f5f4df293ca569a6cdf6359.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-ra2afdf9e5f5f4df293ca569a6cdf6359.service has successfully entered the 'dead' state.\nAug 02 12:36:11 managed-node2 platform-python[14399]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nAug 02 12:36:12 managed-node2 platform-python[14547]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:36:13 managed-node2 platform-python[14671]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nAug 02 12:36:15 managed-node2 kernel: SELinux: Converting 460 SID table entries...\nAug 02 12:36:15 managed-node2 kernel: SELinux: policy capability network_peer_controls=1\nAug 02 12:36:15 managed-node2 kernel: SELinux: policy capability open_perms=1\nAug 02 12:36:15 managed-node2 kernel: SELinux: policy capability extended_socket_class=1\nAug 02 12:36:15 managed-node2 kernel: SELinux: policy capability always_check_network=0\nAug 02 12:36:15 managed-node2 kernel: SELinux: policy capability cgroup_seclabel=1\nAug 02 12:36:15 managed-node2 kernel: SELinux: policy capability nnp_nosuid_transition=1\nAug 02 12:36:15 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nAug 02 12:36:15 managed-node2 platform-python[14798]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nAug 02 12:36:20 managed-node2 platform-python[14921]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:36:22 managed-node2 platform-python[15046]: ansible-stat Invoked with 
path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:36:23 managed-node2 platform-python[15169]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:36:23 managed-node2 platform-python[15292]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nAug 02 12:36:23 managed-node2 platform-python[15391]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/nopull.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152583.1815786-10272-87697288283027/source _original_basename=tmp987jyyff follow=False checksum=d5dc917e3cae36de03aa971a17ac473f86fdf934 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nAug 02 12:36:24 managed-node2 platform-python[15516]: ansible-containers.podman.podman_play Invoked with state=created kube_file=/etc/containers/ansible-kubernetes.d/nopull.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:36:24 managed-node2 kernel: evm: overlay not supported\nAug 02 12:36:24 managed-node2 systemd[1]: Created slice machine.slice.\n-- Subject: Unit machine.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:36:24 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice.\n-- Subject: Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:36:25 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:36:29 managed-node2 platform-python[15841]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 
12:36:31 managed-node2 platform-python[15970]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:36:34 managed-node2 platform-python[16095]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:36:37 managed-node2 platform-python[16218]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nAug 02 12:36:38 managed-node2 platform-python[16345]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nAug 02 12:36:38 managed-node2 platform-python[16472]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:36:41 managed-node2 platform-python[16595]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:36:44 managed-node2 platform-python[16718]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:36:47 managed-node2 platform-python[16841]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:36:50 managed-node2 platform-python[16964]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 
fact_path=/etc/ansible/facts.d\nAug 02 12:36:51 managed-node2 platform-python[17112]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nAug 02 12:36:52 managed-node2 platform-python[17235]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nAug 02 12:36:57 managed-node2 platform-python[17358]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:36:59 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:37:00 managed-node2 platform-python[17620]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:37:00 managed-node2 platform-python[17743]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:37:01 managed-node2 platform-python[17866]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nAug 02 12:37:01 managed-node2 platform-python[17965]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/bogus.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152620.9003708-11840-265132279831358/source _original_basename=tmp6af94dg8 follow=False checksum=f8266a972ed3be7e204d2a67883fe3a22b8dbf18 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nAug 02 12:37:02 managed-node2 platform-python[18090]: ansible-containers.podman.podman_play Invoked with state=created kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:37:02 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice.\n-- Subject: Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished starting 
up.\n-- \n-- The start-up result is done.\nAug 02 12:37:02 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:37:05 managed-node2 platform-python[18377]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:37:06 managed-node2 platform-python[18506]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:37:10 managed-node2 platform-python[18631]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:37:13 managed-node2 platform-python[18754]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nAug 02 12:37:14 managed-node2 platform-python[18881]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nAug 02 12:37:14 managed-node2 platform-python[19008]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:37:16 managed-node2 platform-python[19131]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:37:19 managed-node2 platform-python[19254]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:37:22 managed-node2 platform-python[19377]: ansible-dnf Invoked with 
name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:37:25 managed-node2 platform-python[19500]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nAug 02 12:37:27 managed-node2 platform-python[19648]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nAug 02 12:37:28 managed-node2 platform-python[19771]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nAug 02 12:37:32 managed-node2 platform-python[19894]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:37:33 managed-node2 platform-python[20019]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:37:34 managed-node2 platform-python[20143]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nAug 02 12:37:35 managed-node2 platform-python[20270]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:37:35 managed-node2 platform-python[20395]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/nopull.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:37:35 managed-node2 platform-python[20395]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/nopull.yml\nAug 02 12:37:35 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice.\n-- Subject: Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished shutting down.\nAug 02 12:37:35 managed-node2 systemd[1]: machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice: Consumed 0 CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice completed and consumed 
the indicated resources.\nAug 02 12:37:35 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:37:36 managed-node2 platform-python[20533]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:37:36 managed-node2 platform-python[20656]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:37:39 managed-node2 platform-python[20911]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:37:41 managed-node2 platform-python[21040]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:37:45 managed-node2 platform-python[21165]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:37:48 managed-node2 platform-python[21288]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nAug 02 12:37:48 managed-node2 platform-python[21415]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nAug 02 12:37:49 managed-node2 platform-python[21542]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:37:51 managed-node2 platform-python[21665]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True 
security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:37:54 managed-node2 platform-python[21788]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:37:57 managed-node2 platform-python[21911]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:38:00 managed-node2 platform-python[22034]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nAug 02 12:38:02 managed-node2 platform-python[22182]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nAug 02 12:38:03 managed-node2 platform-python[22305]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nAug 02 12:38:08 managed-node2 platform-python[22428]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:38:09 managed-node2 platform-python[22553]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:38:10 managed-node2 platform-python[22677]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nAug 02 12:38:10 managed-node2 platform-python[22804]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:38:11 managed-node2 platform-python[22929]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:38:11 managed-node2 platform-python[22929]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/bogus.yml\nAug 02 12:38:11 managed-node2 systemd[1]: Removed slice cgroup 
machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice.\n-- Subject: Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished shutting down.\nAug 02 12:38:11 managed-node2 systemd[1]: machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice: Consumed 0 CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice completed and consumed the indicated resources.\nAug 02 12:38:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:38:12 managed-node2 platform-python[23068]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:38:12 managed-node2 platform-python[23191]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:38:16 managed-node2 platform-python[23446]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:38:17 managed-node2 platform-python[23575]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:38:21 managed-node2 platform-python[23700]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:38:24 managed-node2 platform-python[23823]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nAug 02 12:38:24 managed-node2 platform-python[23950]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nAug 02 12:38:25 managed-node2 platform-python[24077]: ansible-fedora.linux_system_roles.firewall_lib Invoked with 
port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:38:27 managed-node2 platform-python[24200]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:38:30 managed-node2 platform-python[24323]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:38:33 managed-node2 platform-python[24446]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:38:36 managed-node2 platform-python[24569]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nAug 02 12:38:38 managed-node2 platform-python[24717]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nAug 02 12:38:38 managed-node2 platform-python[24840]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nAug 02 12:38:43 managed-node2 platform-python[24963]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nAug 02 12:38:43 managed-node2 platform-python[25087]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:38:44 managed-node2 platform-python[25212]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:38:44 managed-node2 platform-python[25336]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 
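
[Editor's note] The getent and getsubids calls above are the role checking the prerequisites for rootless podman: the user must exist and must have subordinate UID/GID ranges (normally in /etc/subuid and /etc/subgid) for user-namespace mappings. A sketch of the equivalent checks, using the exact commands logged; changed_when is added here so the read-only probes do not report changes:

    - name: Confirm subordinate UID ranges exist for the rootless user
      ansible.builtin.command:
        cmd: getsubids podman_basic_user
      changed_when: false

    - name: Confirm subordinate GID ranges as well (-g reads /etc/subgid)
      ansible.builtin.command:
        cmd: getsubids -g podman_basic_user
      changed_when: false
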
12:38:46 managed-node2 platform-python[25460]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:38:47 managed-node2 platform-python[25584]: ansible-command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nAug 02 12:38:47 managed-node2 systemd[1]: Created slice User Slice of UID 3001.\n-- Subject: Unit user-3001.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-3001.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[1]: Starting User runtime directory /run/user/3001...\n-- Subject: Unit user-runtime-dir@3001.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has begun starting up.\nAug 02 12:38:47 managed-node2 systemd[1]: Started User runtime directory /run/user/3001.\n-- Subject: Unit user-runtime-dir@3001.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[1]: Starting User Manager for UID 3001...\n-- Subject: Unit user@3001.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user@3001.service has begun starting up.\nAug 02 12:38:47 managed-node2 systemd[25590]: pam_unix(systemd-user:session): session opened for user podman_basic_user by (uid=0)\nAug 02 12:38:47 managed-node2 systemd[25590]: Starting D-Bus User Message Bus Socket.\n-- Subject: Unit UNIT has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun starting up.\nAug 02 12:38:47 managed-node2 systemd[25590]: Started Mark boot as successful after the user session has run 2 minutes.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[25590]: Reached target Paths.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[25590]: Reached target Timers.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[25590]: Listening on D-Bus User Message Bus Socket.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[25590]: Reached target Sockets.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 
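
[Editor's note] Lingering is what lets the rootless user's systemd instance (user@3001.service, seen starting above) outlive interactive sessions, so the pod keeps running after logout. The role guards the command with creates= so it only fires once; the task below reproduces the invocation exactly as logged:

    - name: Enable lingering so the user's services survive logout
      ansible.builtin.command:
        cmd: loginctl enable-linger podman_basic_user
        creates: /var/lib/systemd/linger/podman_basic_user
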
02 12:38:47 managed-node2 systemd[25590]: Reached target Basic System.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[25590]: Reached target Default.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[25590]: Startup finished in 32ms.\n-- Subject: User manager start-up is now complete\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The user manager instance for user 3001 has been started. All services queued\n-- for starting have been started. Note that other services might still be starting\n-- up or be started at any later time.\n-- \n-- Startup of the manager took 32456 microseconds.\nAug 02 12:38:47 managed-node2 systemd[1]: Started User Manager for UID 3001.\n-- Subject: Unit user@3001.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user@3001.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:48 managed-node2 platform-python[25725]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:38:48 managed-node2 platform-python[25848]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:38:48 managed-node2 sudo[25971]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsbiyvlbwevndfyvleplnipyfcreleuz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152728.615097-16354-140653512589780/AnsiballZ_podman_image.py'\nAug 02 12:38:48 managed-node2 sudo[25971]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:38:49 managed-node2 systemd[25590]: Started D-Bus User Message Bus.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:49 managed-node2 systemd[25590]: Created slice user.slice.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:49 managed-node2 systemd[25590]: Started podman-25984.scope.\n-- Subject: Unit UNIT has 
finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:49 managed-node2 systemd[25590]: Started podman-pause-9fcbd008.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:49 managed-node2 systemd[25590]: Started podman-26000.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:49 managed-node2 systemd[25590]: Started podman-26016.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:50 managed-node2 sudo[25971]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:38:50 managed-node2 platform-python[26145]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:38:50 managed-node2 platform-python[26268]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:38:51 managed-node2 platform-python[26391]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nAug 02 12:38:51 managed-node2 platform-python[26490]: ansible-copy Invoked with dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml owner=podman_basic_user group=3001 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152731.0942633-16483-51427114771259/source _original_basename=tmpz7phazza follow=False checksum=41ba442683d49d3571d4ddce7f5dc14c85104270 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nAug 02 12:38:51 managed-node2 sudo[26615]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adtlljryarlxiuspbmjgrszvqdzysmgm ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152731.799995-16513-128710640317424/AnsiballZ_podman_play.py'\nAug 02 12:38:51 managed-node2 sudo[26615]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman 
annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:38:52 managed-node2 systemd[25590]: Started podman-26626.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:52 managed-node2 kernel: tun: Universal TUN/TAP device driver, 1.6\nAug 02 12:38:52 managed-node2 systemd[25590]: Started rootless-netns-6da9f76b.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:52 managed-node2 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.\nAug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethff7bc329: link is not ready\nAug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered blocking state\nAug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered disabled state\nAug 02 12:38:52 managed-node2 kernel: device vethff7bc329 entered promiscuous mode\nAug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready\nAug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready\nAug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethff7bc329: link becomes ready\nAug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered blocking state\nAug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered forwarding state\nAug 02 12:38:52 managed-node2 dnsmasq[26814]: listening on cni-podman1(#3): 10.89.0.1\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: started, version 2.79 cachesize 150\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: using local addresses only for domain dns.podman\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: reading /etc/resolv.conf\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: using local addresses only for domain dns.podman\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.0.2.3#53\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.29.169.13#53\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.29.170.12#53\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.2.32.1#53\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: read /run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nAug 02 12:38:52 managed-node2 conmon[26830]: conmon af16b69d72cc4526d63a : failed to write to /proc/self/oom_score_adj: Permission denied\nAug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/14/attach}\nAug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : terminal_ctrl_fd: 14\nAug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : winsz read 
side: 17, winsz write side: 18\nAug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : container PID: 26841\nAug 02 12:38:52 managed-node2 conmon[26851]: conmon 98c476488369c461640e : failed to write to /proc/self/oom_score_adj: Permission denied\nAug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}\nAug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : terminal_ctrl_fd: 13\nAug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : winsz read side: 16, winsz write side: 17\nAug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : container PID: 26862\nAug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nAug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod:\n 191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f\n Container:\n 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\n \nAug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"/bin/podman filtering at log level debug\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Cached value indicated that metacopy is not being used\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid 
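
[Editor's note] For reproducing this step outside the role, the podman_play module logs the exact CLI it executed (the PODMAN-PLAY-KUBE line above). Wrapped back into a task with the same rootless environment the sudo lines show (XDG_RUNTIME_DIR=/run/user/3001), a reproduction sketch would look roughly like:

    - name: Re-run the logged play kube command as the rootless user (reproduction aid)
      ansible.builtin.command:
        cmd: podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml
      become: true
      become_user: podman_basic_user
      environment:
        XDG_RUNTIME_DIR: /run/user/3001
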
argument\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Successfully loaded 1 networks\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"found free device name cni-podman1\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"found free ipv4 network subnet 10.89.0.0/24\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:38:52.156183843 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"reference \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\" does not resolve to an image ID\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"reference \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\" does not resolve to an image ID\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"FROM \\\"scratch\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Cached value indicated that idmapped mounts for overlay are not supported\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Check for idmapped mounts support \"\n time=\"2025-08-02T12:38:52-04:00\" level=debug 
msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"overlay: test mount indicated that volatile is being used\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/empty,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/work,userxattr,volatile,context=\\\"system_u:object_r:container_file_t:s0:c153,c335\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Container ID: 0aa3a7343fabfcbf854b2db926e8ef83982bdca7985f430aea1f98b430ae4469\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:copy Args:[/usr/libexec/podman/catatonit /catatonit] Flags:[] Attrs:map[] Message:COPY /usr/libexec/podman/catatonit /catatonit Heredocs:[] Original:COPY /usr/libexec/podman/catatonit /catatonit}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"COPY []string(nil), imagebuilder.Copy{FromFS:false, From:\\\"\\\", Src:[]string{\\\"/usr/libexec/podman/catatonit\\\"}, Dest:\\\"/catatonit\\\", Download:false, Chown:\\\"\\\", Chmod:\\\"\\\", Checksum:\\\"\\\", Files:[]imagebuilder.File(nil)}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"added content file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:entrypoint Args:[/catatonit -P] Flags:[] Attrs:map[json:true] Message:ENTRYPOINT /catatonit -P Heredocs:[] Original:ENTRYPOINT [\\\"/catatonit\\\", \\\"-P\\\"]}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"COMMIT localhost/podman-pause:4.9.4-dev-1708535009\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"COMMIT \\\"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"committing image with reference \\\"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\" is allowed by policy\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"layer list: [\\\"5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1\\\"]\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"using \\\"/var/tmp/buildah3803674644\\\" to hold temporary data\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Tar with options on /home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/diff\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"layer \\\"5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1\\\" size 
is 767488 bytes, uncompressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690, possibly-compressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"OCIv1 config = {\\\"created\\\":\\\"2025-08-02T16:38:52.296533662Z\\\",\\\"architecture\\\":\\\"amd64\\\",\\\"os\\\":\\\"linux\\\",\\\"config\\\":{\\\"Env\\\":[\\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\"],\\\"Entrypoint\\\":[\\\"/catatonit\\\",\\\"-P\\\"],\\\"Labels\\\":{\\\"io.buildah.version\\\":\\\"1.33.5\\\"}},\\\"rootfs\\\":{\\\"type\\\":\\\"layers\\\",\\\"diff_ids\\\":[\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"]},\\\"history\\\":[{\\\"created\\\":\\\"2025-08-02T16:38:52.295942274Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) COPY file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd in /catatonit \\\",\\\"empty_layer\\\":true},{\\\"created\\\":\\\"2025-08-02T16:38:52.299650983Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) ENTRYPOINT [\\\\\\\"/catatonit\\\\\\\", \\\\\\\"-P\\\\\\\"]\\\"}]}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"OCIv1 manifest = {\\\"schemaVersion\\\":2,\\\"mediaType\\\":\\\"application/vnd.oci.image.manifest.v1+json\\\",\\\"config\\\":{\\\"mediaType\\\":\\\"application/vnd.oci.image.config.v1+json\\\",\\\"digest\\\":\\\"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\",\\\"size\\\":668},\\\"layers\\\":[{\\\"mediaType\\\":\\\"application/vnd.oci.image.layer.v1.tar\\\",\\\"digest\\\":\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\",\\\"size\\\":767488}],\\\"annotations\\\":{\\\"org.opencontainers.image.base.digest\\\":\\\"\\\",\\\"org.opencontainers.image.base.name\\\":\\\"\\\"}}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Docker v2s2 config = {\\\"created\\\":\\\"2025-08-02T16:38:52.296533662Z\\\",\\\"container\\\":\\\"0aa3a7343fabfcbf854b2db926e8ef83982bdca7985f430aea1f98b430ae4469\\\",\\\"container_config\\\":{\\\"Hostname\\\":\\\"\\\",\\\"Domainname\\\":\\\"\\\",\\\"User\\\":\\\"\\\",\\\"AttachStdin\\\":false,\\\"AttachStdout\\\":false,\\\"AttachStderr\\\":false,\\\"Tty\\\":false,\\\"OpenStdin\\\":false,\\\"StdinOnce\\\":false,\\\"Env\\\":[\\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\"],\\\"Cmd\\\":[],\\\"Image\\\":\\\"\\\",\\\"Volumes\\\":{},\\\"WorkingDir\\\":\\\"\\\",\\\"Entrypoint\\\":[\\\"/catatonit\\\",\\\"-P\\\"],\\\"OnBuild\\\":[],\\\"Labels\\\":{\\\"io.buildah.version\\\":\\\"1.33.5\\\"}},\\\"config\\\":{\\\"Hostname\\\":\\\"\\\",\\\"Domainname\\\":\\\"\\\",\\\"User\\\":\\\"\\\",\\\"AttachStdin\\\":false,\\\"AttachStdout\\\":false,\\\"AttachStderr\\\":false,\\\"Tty\\\":false,\\\"OpenStdin\\\":false,\\\"StdinOnce\\\":false,\\\"Env\\\":[\\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\"],\\\"Cmd\\\":[],\\\"Image\\\":\\\"\\\",\\\"Volumes\\\":{},\\\"WorkingDir\\\":\\\"\\\",\\\"Entrypoint\\\":[\\\"/catatonit\\\",\\\"-P\\\"],\\\"OnBuild\\\":[],\\\"Labels\\\":{\\\"io.buildah.version\\\":\\\"1.33.5\\\"}},\\\"architecture\\\":\\\"amd64\\\",\\\"os\\\":\\\"linux\\\",\\\"rootfs\\\":{\\\"type\\\":\\\"layers\\\",\\\"diff_ids\\\":[\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"]},\\\"history\\\":[{\\\"created\\\":\\\"2025-08-02T16:38:52.295942274Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) COPY 
file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd in /catatonit \\\",\\\"empty_layer\\\":true},{\\\"created\\\":\\\"2025-08-02T16:38:52.299650983Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) ENTRYPOINT [\\\\\\\"/catatonit\\\\\\\", \\\\\\\"-P\\\\\\\"]\\\"}]}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Docker v2s2 manifest = {\\\"schemaVersion\\\":2,\\\"mediaType\\\":\\\"application/vnd.docker.distribution.manifest.v2+json\\\",\\\"config\\\":{\\\"mediaType\\\":\\\"application/vnd.docker.container.image.v1+json\\\",\\\"size\\\":1342,\\\"digest\\\":\\\"sha256:69b1a52f65cb5e3fa99e89b61152bda48cb5524edcedfdf2eac76a30c6778813\\\"},\\\"layers\\\":[{\\\"mediaType\\\":\\\"application/vnd.docker.image.rootfs.diff.tar\\\",\\\"size\\\":767488,\\\"digest\\\":\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"}]}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using SQLite blob info cache at /home/podman_basic_user/.local/share/containers/cache/blob-info-cache-v1.sqlite\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"IsRunningImageAllowed for image containers-storage:\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\" Using transport \\\"containers-storage\\\" policy section \"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\" Requirement 0: allowed\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Overall: allowed\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"start reading config\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"finished reading config\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.docker.distribution.manifest.v1+json]\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"... 
will first try using the original manifest unmodified\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Checking if we can reuse blob sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690: general substitution = true, compression for MIME type \\\"application/vnd.oci.image.layer.v1.tar\\\" = true\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"reading layer \\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"No compression detected\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using original blob without modification\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Applying tar in /home/podman_basic_user/.local/share/containers/storage/overlay/d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690/diff\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"finished reading layer \\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"No compression detected\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Compression change for blob sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778 (\\\"application/vnd.oci.image.config.v1+json\\\") not supported\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using original blob without modification\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"setting image creation date to 2025-08-02 16:38:52.296533662 +0000 UTC\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"created new image ID \\\"60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\" with metadata \\\"{}\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"added name \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" to image \\\"60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"printing final image id \\\"60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Got pod cgroup as /libpod_parent/191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" 
in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Inspecting image 60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Inspecting image 60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Inspecting image 60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"setting container name 191a369333e4-infra\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Allocated lock 1 for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Created container \\\"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Container \\\"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\\\" has work directory \\\"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Container \\\"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\\\" has run directory \\\"/run/user/3001/containers/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage 
([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Pulling image quay.io/libpod/testimage:20210610 (policy: missing)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n 
time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"adding container to pod httpd1\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"setting container name httpd1-httpd1\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding mount /proc\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding mount /dev\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding mount /dev/pts\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding mount /dev/mqueue\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding mount /sys\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding mount /sys/fs/cgroup\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Allocated lock 2 for container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as 
blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Created container \\\"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Container \\\"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\\\" has work directory \\\"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Container \\\"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\\\" has run directory \\\"/run/user/3001/containers/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Strongconnecting node af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Pushed af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae onto stack\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Finishing node af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae. Popped af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae off stack\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Strongconnecting node 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Pushed 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 onto stack\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Finishing node 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939. Popped 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 off stack\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/4MRAZCR7JRY45YIIWXX5WJJ6A6,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/work,userxattr,context=\\\"system_u:object_r:container_file_t:s0:c389,c456\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Made network namespace at /run/user/3001/netns/netns-6723b79b-4d64-cc19-6a91-87394c058c53 for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Mounted container \\\"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\\\" at \\\"/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/merged\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Created root filesystem for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae at /home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/merged\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"creating rootless network namespace with name \\\"rootless-netns-d22c9f230d0691b8f418\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"slirp4netns command: /bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -r 3 --netns-type=path 
/run/user/3001/netns/rootless-netns-d22c9f230d0691b8f418 tap0\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"The path of /etc/resolv.conf in the mount ns is \\\"/etc/resolv.conf\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"cni result for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:1e:08:d6:95:5e:f1 Sandbox:} {Name:vethff7bc329 Mac:26:19:1e:a6:0a:11 Sandbox:} {Name:eth0 Mac:1e:8a:1a:f5:d1:2a Sandbox:/run/user/3001/netns/netns-6723b79b-4d64-cc19-6a91-87394c058c53}] [{Version:4 Interface:0xc000b96228 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-08-02T12:38:52-04:00\\\" level=info msg=\\\"Starting parent driver\\\"\\ntime=\\\"2025-08-02T12:38:52-04:00\\\" level=info msg=\\\"opaque=map[builtin.readypipepath:/run/user/3001/libpod/tmp/rootlessport3029357974/.bp-ready.pipe builtin.socketpath:/run/user/3001/libpod/tmp/rootlessport3029357974/.bp.sock]\\\"\\ntime=\\\"2025-08-02T12:38:52-04:00\\\" level=info msg=\\\"Starting child driver in child netns (\\\\\\\"/proc/self/exe\\\\\\\" [rootlessport-child])\\\"\\n\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-08-02T12:38:52-04:00\\\" level=info msg=\\\"Waiting for initComplete\\\"\\n\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-08-02T12:38:52-04:00\\\" level=info msg=\\\"initComplete is closed; parent and child established the communication channel\\\"\\n\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-08-02T12:38:52-04:00\\\" level=info msg=\\\"Exposing ports [{ 80 15001 1 tcp}]\\\"\\n\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-08-02T12:38:52-04:00\\\" level=info msg=Ready\\n\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"rootlessport is ready\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Workdir \\\"/\\\" resolved to host path \\\"/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/merged\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Created OCI spec for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/config.json\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Got pod cgroup as \"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae -u af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata -p 
/run/user/3001/containers/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/pidfile -n 191a369333e4-infra --exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/3001/containers/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae]\"\n time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/libpod_parent: permission denied\"\n [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied\n \n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Received: 26841\"\n time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Got Conmon PID as 26831\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Created container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae in OCI runtime\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding nameserver(s) from network status of '[\\\"10.89.0.1\\\"]'\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding search domain(s) from network status of '[\\\"dns.podman\\\"]'\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Starting container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae with command [/catatonit -P]\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Started container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/S5QNMEV2IMLZOTXAJ3H4ZQCILN,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/work,userxattr,context=\\\"system_u:object_r:container_file_t:s0:c389,c456\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Mounted container \\\"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\\\" at 
\\\"/home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/merged\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Created root filesystem for container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 at /home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/merged\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Workdir \\\"/var/www\\\" resolved to a volume or mount\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Created OCI spec for container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/config.json\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Got pod cgroup as \"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 -u 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata -p /run/user/3001/containers/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/pidfile -n httpd1-httpd1 --exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/3001/containers/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939]\"\n time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/conmon: permission denied\"\n [conmon:d]: failed to 
write to /proc/self/oom_score_adj: Permission denied\n \n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Received: 26862\"\n time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Got Conmon PID as 26852\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Created container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 in OCI runtime\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Starting container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 with command [/bin/busybox-extras httpd -f -p 80]\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Started container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Called kube.PersistentPostRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Shutting down engines\"\nAug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nAug 02 12:38:52 managed-node2 sudo[26615]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:38:53 managed-node2 sudo[26993]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbaqkfutgdfbadtjjacfxjqxvpyoigvo ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152733.1876235-16558-123988354219606/AnsiballZ_systemd.py'\nAug 02 12:38:53 managed-node2 sudo[26993]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:38:53 managed-node2 platform-python[26996]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None\nAug 02 12:38:53 managed-node2 systemd[25590]: Reloading.\nAug 02 12:38:53 managed-node2 sudo[26993]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:38:53 managed-node2 sudo[27130]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmqxburdibhftsxxfjnfharsaboqrlrj ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152733.8051448-16591-22459124935909/AnsiballZ_systemd.py'\nAug 02 12:38:53 managed-node2 sudo[27130]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:38:54 managed-node2 platform-python[27133]: ansible-systemd Invoked with name= scope=user enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None\nAug 02 12:38:54 managed-node2 systemd[25590]: Reloading.\nAug 02 12:38:54 managed-node2 sudo[27130]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:38:54 managed-node2 dnsmasq[26816]: listening on cni-podman1(#3): fe80::1c08:d6ff:fe95:5ef1%cni-podman1\nAug 02 12:38:54 managed-node2 sudo[27269]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bebktvlckbeotrlmhvsnejtmeicquqjz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152734.429291-16621-12283105958346/AnsiballZ_systemd.py'\nAug 02 12:38:54 managed-node2 sudo[27269]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:38:54 managed-node2 platform-python[27272]: ansible-systemd Invoked with name= scope=user state=started 
daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None\nAug 02 12:38:54 managed-node2 systemd[25590]: Created slice podman\\x2dkube.slice.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:54 managed-node2 systemd[25590]: Starting A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit UNIT has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun starting up.\nAug 02 12:38:54 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : container 26841 exited with status 137\nAug 02 12:38:54 managed-node2 conmon[26852]: conmon 98c476488369c461640e : container 26862 exited with status 137\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:54-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae)\"\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:54-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:54-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:54-04:00\" level=info msg=\"Using sqlite as database backend\"\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:54-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939)\"\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:54-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:54-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:54-04:00\" level=info msg=\"Using sqlite as database backend\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using graph driver overlay\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using run root 
/run/user/3001/containers\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using transient store: false\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that metacopy is not being used\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Initializing event backend file\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug 
msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=info msg=\"Setting parallel job count to 7\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using graph driver overlay\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using transient store: false\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that metacopy is not being used\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Initializing event backend file\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: 
no valid executable found for OCI runtime youki: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=info msg=\"Setting parallel job count to 7\"\nAug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered disabled state\nAug 02 12:38:55 managed-node2 kernel: device vethff7bc329 left promiscuous mode\nAug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered disabled state\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939)\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Shutting down engines\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae)\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Shutting down engines\"\nAug 02 12:38:55 managed-node2 podman[27278]: Pods stopped:\nAug 02 12:38:55 managed-node2 podman[27278]: 191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f\nAug 02 12:38:55 managed-node2 podman[27278]: Pods removed:\nAug 02 12:38:55 managed-node2 podman[27278]: 191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f\nAug 02 12:38:55 managed-node2 podman[27278]: Secrets removed:\nAug 02 12:38:55 managed-node2 podman[27278]: Volumes removed:\nAug 02 12:38:55 managed-node2 systemd[25590]: Started rootless-netns-dd6b3697.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:55 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethfa4f074b: link is not ready\nAug 02 12:38:55 managed-node2 kernel: 
cni-podman1: port 1(vethfa4f074b) entered blocking state\nAug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered disabled state\nAug 02 12:38:55 managed-node2 kernel: device vethfa4f074b entered promiscuous mode\nAug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered blocking state\nAug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered forwarding state\nAug 02 12:38:55 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethfa4f074b: link becomes ready\nAug 02 12:38:55 managed-node2 dnsmasq[27525]: listening on cni-podman1(#3): 10.89.0.1\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: started, version 2.79 cachesize 150\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: using local addresses only for domain dns.podman\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: reading /etc/resolv.conf\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: using local addresses only for domain dns.podman\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.0.2.3#53\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.29.169.13#53\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.29.170.12#53\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.2.32.1#53\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: read /run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nAug 02 12:38:55 managed-node2 podman[27278]: Pod:\nAug 02 12:38:55 managed-node2 podman[27278]: 5e85626264c7b01bb8a75935ff611e0d589096a98e89e05fd7aa9037e892318a\nAug 02 12:38:55 managed-node2 podman[27278]: Container:\nAug 02 12:38:55 managed-node2 podman[27278]: bc86eb03c7fb7110b2363dd55ed2866f782f16e8d8374c8a82784079a47558f1\nAug 02 12:38:55 managed-node2 systemd[25590]: Started A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:55 managed-node2 sudo[27269]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:38:56 managed-node2 platform-python[27703]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:38:56 managed-node2 dnsmasq[27527]: listening on cni-podman1(#3): fe80::a0c6:53ff:fed6:1184%cni-podman1\nAug 02 12:38:57 managed-node2 platform-python[27827]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:38:58 managed-node2 platform-python[27952]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:38:59 managed-node2 platform-python[28076]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER 
backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:00 managed-node2 platform-python[28199]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:00 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:39:00 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:39:01 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:39:01 managed-node2 platform-python[28489]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:39:02 managed-node2 platform-python[28612]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:02 managed-node2 platform-python[28735]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nAug 02 12:39:03 managed-node2 platform-python[28834]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd2.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152742.5335138-17006-203119541001881/source _original_basename=tmpvkt7buq9 follow=False checksum=2a8a08ffe6bf0159dd7563e043ed3c303a77cff4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nAug 02 12:39:03 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None 
log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:39:03 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice.\n-- Subject: Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.7856] manager: (cni-podman1): new Bridge device (/org/freedesktop/NetworkManager/Devices/3)\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.7870] manager: (veth502e5636): new Veth device (/org/freedesktop/NetworkManager/Devices/4)\nAug 02 12:39:03 managed-node2 systemd-udevd[29006]: Using default interface naming scheme 'rhel-8.0'.\nAug 02 12:39:03 managed-node2 systemd-udevd[29006]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nAug 02 12:39:03 managed-node2 systemd-udevd[29006]: Could not generate persistent MAC address for cni-podman1: No such file or directory\nAug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth502e5636: link is not ready\nAug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered blocking state\nAug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered disabled state\nAug 02 12:39:03 managed-node2 kernel: device veth502e5636 entered promiscuous mode\nAug 02 12:39:03 managed-node2 systemd-udevd[29007]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nAug 02 12:39:03 managed-node2 systemd-udevd[29007]: Could not generate persistent MAC address for veth502e5636: No such file or directory\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8196] device (cni-podman1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8201] device (cni-podman1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8209] device (cni-podman1): Activation: starting connection 'cni-podman1' (0ddcaf44-4d9a-41cb-acd9-42060ce7dc76)\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8210] device (cni-podman1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8212] device (cni-podman1): state change: prepare -> config (reason 'none', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8215] device (cni-podman1): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8217] device (cni-podman1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 dbus-daemon[595]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.5' (uid=0 pid=665 comm=\"/usr/sbin/NetworkManager --no-daemon \" 
label=\"system_u:system_r:NetworkManager_t:s0\")\nAug 02 12:39:03 managed-node2 systemd[1]: Starting Network Manager Script Dispatcher Service...\n-- Subject: Unit NetworkManager-dispatcher.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit NetworkManager-dispatcher.service has begun starting up.\nAug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready\nAug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready\nAug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth502e5636: link becomes ready\nAug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered blocking state\nAug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered forwarding state\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8506] device (veth502e5636): carrier: link connected\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8508] device (cni-podman1): carrier: link connected\nAug 02 12:39:03 managed-node2 dbus-daemon[595]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8678] device (cni-podman1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8680] device (cni-podman1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8684] device (cni-podman1): Activation: successful, device activated.\nAug 02 12:39:03 managed-node2 systemd[1]: Started Network Manager Script Dispatcher Service.\n-- Subject: Unit NetworkManager-dispatcher.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit NetworkManager-dispatcher.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:03 managed-node2 dnsmasq[29128]: listening on cni-podman1(#3): 10.89.0.1\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: started, version 2.79 cachesize 150\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: using local addresses only for domain dns.podman\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: reading /etc/resolv.conf\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: using local addresses only for domain dns.podman\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: using nameserver 10.29.169.13#53\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: using nameserver 10.29.170.12#53\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: using nameserver 10.2.32.1#53\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nAug 02 12:39:04 managed-node2 systemd[1]: Started libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope.\n-- Subject: Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:04 managed-node2 conmon[29134]: 
conmon ef1687323c945d3eead4 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}\nAug 02 12:39:04 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : terminal_ctrl_fd: 13\nAug 02 12:39:04 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : winsz read side: 17, winsz write side: 18\nAug 02 12:39:04 managed-node2 systemd[1]: Started libcontainer container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.\n-- Subject: Unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:04 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : container PID: 29144\nAug 02 12:39:04 managed-node2 systemd[1]: Started libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope.\n-- Subject: Unit libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}\nAug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : terminal_ctrl_fd: 12\nAug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : winsz read side: 16, winsz write side: 17\nAug 02 12:39:04 managed-node2 systemd[1]: Started libcontainer container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.\n-- Subject: Unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : container PID: 29166\nAug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml\nAug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod:\n 90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496\n Container:\n 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\n \nAug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2025-08-02T12:39:03-04:00\" level=info msg=\"/usr/bin/podman filtering at log level debug\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using 
graph root /var/lib/containers/storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\n time=\"2025-08-02T12:39:03-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:36:24.472660556 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n 
time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice for parent machine.slice and name libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Inspecting image e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob 
\\\"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Inspecting image e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Inspecting image e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"setting container name 90922c8ca930-infra\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Allocated lock 1 for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Cached value indicated that idmapped mounts for overlay are not supported\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Check for idmapped mounts support \"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Created container \\\"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Container \\\"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\\\" has work directory \\\"/var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Container \\\"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\\\" has run directory \\\"/run/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug 
msg=\"Pulling image quay.io/libpod/testimage:20210610 (policy: missing)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n 
time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"adding container to pod httpd2\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"setting container name httpd2-httpd2\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=info msg=\"Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Adding mount /proc\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Adding mount /dev\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Adding mount /dev/pts\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Adding mount /dev/mqueue\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Adding mount /sys\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Adding mount /sys/fs/cgroup\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Allocated lock 2 for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Created container \\\"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Container 
\\\"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\\\" has work directory \\\"/var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Container \\\"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\\\" has run directory \\\"/run/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Strongconnecting node ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Pushed ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 onto stack\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Finishing node ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89. Popped ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 off stack\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Strongconnecting node 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Pushed 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb onto stack\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Finishing node 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb. Popped 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb off stack\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/ZR5XOSU7O7VXY2BDL65A7UWKU6,upperdir=/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/diff,workdir=/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/work,nodev,metacopy=on,context=\\\"system_u:object_r:container_file_t:s0:c784,c888\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Mounted container \\\"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\\\" at \\\"/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/merged\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Created root filesystem for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 at /var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/merged\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Made network namespace at /run/netns/netns-057bdf77-0e93-7270-6a44-66c62177cd73 for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"cni result for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:96:46:b4:0c:81:50 Sandbox:} {Name:veth502e5636 Mac:4a:ea:32:89:32:4a Sandbox:} {Name:eth0 Mac:ae:ce:ef:99:2c:87 Sandbox:/run/netns/netns-057bdf77-0e93-7270-6a44-66c62177cd73}] [{Version:4 Interface:0xc00087bc58 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Setting Cgroups for container 
ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 to machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice:libpod:ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Workdir \\\"/\\\" resolved to host path \\\"/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/merged\\\"\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created OCI spec for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 at /var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/config.json\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice for parent machine.slice and name libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 -u ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata -p /run/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/pidfile -n 90922c8ca930-infra --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 
ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89]\"\n time=\"2025-08-02T12:39:04-04:00\" level=info msg=\"Running conmon under slice machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice and unitName libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Received: 29144\"\n time=\"2025-08-02T12:39:04-04:00\" level=info msg=\"Got Conmon PID as 29134\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 in OCI runtime\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Adding nameserver(s) from network status of '[\\\"10.89.0.1\\\"]'\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Adding search domain(s) from network status of '[\\\"dns.podman\\\"]'\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Starting container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 with command [/catatonit -P]\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Started container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/HKP6QAO57O46FRNHGFBKAKZZRC,upperdir=/var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/diff,workdir=/var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/work,nodev,metacopy=on,context=\\\"system_u:object_r:container_file_t:s0:c784,c888\\\"\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Mounted container \\\"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\\\" at \\\"/var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/merged\\\"\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created root filesystem for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb at /var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/merged\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Setting Cgroups for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb to machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice:libpod:071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Workdir \\\"/var/www\\\" resolved to a volume or mount\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created OCI spec for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb at /var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/config.json\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice for parent machine.slice and name libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created cgroup 
machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb -u 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata -p /run/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/pidfile -n httpd2-httpd2 --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb]\"\n time=\"2025-08-02T12:39:04-04:00\" level=info msg=\"Running conmon under slice machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice and unitName libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Received: 29166\"\n time=\"2025-08-02T12:39:04-04:00\" level=info msg=\"Got Conmon PID as 29155\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb in OCI runtime\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Starting container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb with command [/bin/busybox-extras httpd -f -p 80]\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Started container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Called kube.PersistentPostRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Shutting down 
engines\"\nAug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nAug 02 12:39:04 managed-node2 platform-python[29297]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None\nAug 02 12:39:04 managed-node2 systemd[1]: Reloading.\nAug 02 12:39:05 managed-node2 dnsmasq[29132]: listening on cni-podman1(#3): fe80::9446:b4ff:fe0c:8150%cni-podman1\nAug 02 12:39:05 managed-node2 platform-python[29458]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None\nAug 02 12:39:05 managed-node2 systemd[1]: Reloading.\nAug 02 12:39:06 managed-node2 platform-python[29621]: ansible-systemd Invoked with name= scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None\nAug 02 12:39:06 managed-node2 systemd[1]: Created slice system-podman\\x2dkube.slice.\n-- Subject: Unit system-podman\\x2dkube.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit system-podman\\x2dkube.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:06 managed-node2 systemd[1]: Starting A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun starting up.\nAug 02 12:39:06 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : container 29144 exited with status 137\nAug 02 12:39:06 managed-node2 systemd[1]: libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 systemd[1]: libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope: Consumed 32ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope completed and consumed the indicated resources.\nAug 02 12:39:06 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : container 29166 exited with status 137\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89)\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nAug 02 12:39:06 
managed-node2 systemd[1]: libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=info msg=\"Using sqlite as database backend\"\nAug 02 12:39:06 managed-node2 systemd[1]: libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope: Consumed 33ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope completed and consumed the indicated resources.\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb)\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=info msg=\"Using sqlite as database backend\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using graph driver overlay\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using transient store: false\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: 
time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Initializing event backend file\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=info msg=\"Setting parallel job count to 7\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using graph driver overlay\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\nAug 02 12:39:06 managed-node2 
/usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using transient store: false\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Initializing event backend file\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using OCI runtime 
\\\"/usr/bin/runc\\\"\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=info msg=\"Setting parallel job count to 7\"\nAug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay-7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767-merged.mount has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb)\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Shutting down engines\"\nAug 02 12:39:06 managed-node2 systemd[1]: libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered disabled state\nAug 02 12:39:06 managed-node2 kernel: device veth502e5636 left promiscuous mode\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered disabled state\nAug 02 12:39:06 managed-node2 systemd[1]: run-netns-netns\\x2d057bdf77\\x2d0e93\\x2d7270\\x2d6a44\\x2d66c62177cd73.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-netns-netns\\x2d057bdf77\\x2d0e93\\x2d7270\\x2d6a44\\x2d66c62177cd73.mount has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89-userdata-shm.mount has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay-8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1-merged.mount has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage 
--log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89)\"\nAug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Shutting down engines\"\nAug 02 12:39:06 managed-node2 systemd[1]: libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 systemd[1]: Stopped libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope.\n-- Subject: Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished shutting down.\nAug 02 12:39:06 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice.\n-- Subject: Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished shutting down.\nAug 02 12:39:06 managed-node2 systemd[1]: machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice: Consumed 209ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice completed and consumed the indicated resources.\nAug 02 12:39:06 managed-node2 podman[29628]: Pods stopped:\nAug 02 12:39:06 managed-node2 podman[29628]: 90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496\nAug 02 12:39:06 managed-node2 podman[29628]: Pods removed:\nAug 02 12:39:06 managed-node2 podman[29628]: 90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496\nAug 02 12:39:06 managed-node2 podman[29628]: Secrets removed:\nAug 02 12:39:06 managed-node2 podman[29628]: Volumes removed:\nAug 02 12:39:06 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice.\n-- Subject: Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has finished 
starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:06 managed-node2 systemd[1]: Started libcontainer container 1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.\n-- Subject: Unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:06 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethb3f38e19: link is not ready\nAug 02 12:39:06 managed-node2 NetworkManager[665]: [1754152746.7477] manager: (vethb3f38e19): new Veth device (/org/freedesktop/NetworkManager/Devices/5)\nAug 02 12:39:06 managed-node2 systemd-udevd[29789]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nAug 02 12:39:06 managed-node2 systemd-udevd[29789]: Could not generate persistent MAC address for vethb3f38e19: No such file or directory\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered blocking state\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state\nAug 02 12:39:06 managed-node2 kernel: device vethb3f38e19 entered promiscuous mode\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered blocking state\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered forwarding state\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state\nAug 02 12:39:06 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb3f38e19: link becomes ready\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered blocking state\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered forwarding state\nAug 02 12:39:06 managed-node2 NetworkManager[665]: [1754152746.7761] device (vethb3f38e19): carrier: link connected\nAug 02 12:39:06 managed-node2 NetworkManager[665]: [1754152746.7763] device (cni-podman1): carrier: link connected\nAug 02 12:39:06 managed-node2 dnsmasq[29859]: listening on cni-podman1(#3): 10.89.0.1\nAug 02 12:39:06 managed-node2 dnsmasq[29859]: listening on cni-podman1(#3): fe80::9446:b4ff:fe0c:8150%cni-podman1\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: started, version 2.79 cachesize 150\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: using local addresses only for domain dns.podman\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: reading /etc/resolv.conf\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: using local addresses only for domain dns.podman\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: using nameserver 10.29.169.13#53\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: using nameserver 10.29.170.12#53\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: using nameserver 10.2.32.1#53\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nAug 02 12:39:06 managed-node2 systemd[1]: Started libcontainer container 36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.\n-- Subject: Unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope has finished start-up\n-- Defined-By: systemd\n-- 
Support: https://access.redhat.com/support\n-- \n-- Unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:07 managed-node2 systemd[1]: Started libcontainer container 58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.\n-- Subject: Unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:07 managed-node2 podman[29628]: Pod:\nAug 02 12:39:07 managed-node2 podman[29628]: 7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3\nAug 02 12:39:07 managed-node2 podman[29628]: Container:\nAug 02 12:39:07 managed-node2 podman[29628]: 58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5\nAug 02 12:39:07 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:08 managed-node2 platform-python[30036]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:39:09 managed-node2 platform-python[30161]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:10 managed-node2 platform-python[30285]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:11 managed-node2 platform-python[30408]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:12 managed-node2 platform-python[30696]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:39:13 managed-node2 platform-python[30819]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:13 managed-node2 platform-python[30942]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nAug 02 12:39:13 managed-node2 platform-python[31041]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd3.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152753.1569695-17471-186787888155164/source _original_basename=tmpca25d1vk follow=False checksum=0ee95d54856ad9dce4aa168ba4cfda0f7aaf74cc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nAug 02 12:39:13 managed-node2 systemd[1]: NetworkManager-dispatcher.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state.\nAug 02 12:39:14 managed-node2 platform-python[31167]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:39:14 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice.\n-- Subject: Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethe290c1c0: link is not ready\nAug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered blocking state\nAug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state\nAug 02 12:39:14 managed-node2 kernel: device vethe290c1c0 entered promiscuous mode\nAug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered blocking state\nAug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered forwarding state\nAug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state\nAug 02 12:39:14 managed-node2 NetworkManager[665]: [1754152754.3788] manager: (vethe290c1c0): new Veth device (/org/freedesktop/NetworkManager/Devices/6)\nAug 02 12:39:14 managed-node2 systemd-udevd[31214]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nAug 02 12:39:14 managed-node2 systemd-udevd[31214]: Could not generate persistent MAC address for vethe290c1c0: No 
such file or directory\nAug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready\nAug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready\nAug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe290c1c0: link becomes ready\nAug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered blocking state\nAug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered forwarding state\nAug 02 12:39:14 managed-node2 NetworkManager[665]: [1754152754.3907] device (vethe290c1c0): carrier: link connected\nAug 02 12:39:14 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses\nAug 02 12:39:14 managed-node2 systemd[1]: Started libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope.\n-- Subject: Unit libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:14 managed-node2 systemd[1]: Started libcontainer container 757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.\n-- Subject: Unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:14 managed-node2 systemd[1]: Started libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope.\n-- Subject: Unit libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:14 managed-node2 systemd[1]: Started libcontainer container c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.\n-- Subject: Unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:15 managed-node2 platform-python[31446]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None\nAug 02 12:39:15 managed-node2 systemd[1]: Reloading.\nAug 02 12:39:15 managed-node2 platform-python[31599]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None\nAug 02 12:39:16 managed-node2 systemd[1]: Reloading.\nAug 02 12:39:16 managed-node2 platform-python[31762]: ansible-systemd Invoked with name= scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None\nAug 02 12:39:16 managed-node2 systemd[1]: Starting A template for running K8s 
workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun starting up.\nAug 02 12:39:16 managed-node2 systemd[1]: libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 systemd[1]: libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope: Consumed 33ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope completed and consumed the indicated resources.\nAug 02 12:39:16 managed-node2 systemd[1]: libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 systemd[1]: libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope: Consumed 35ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope completed and consumed the indicated resources.\nAug 02 12:39:16 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nAug 02 12:39:16 managed-node2 systemd[1]: var-lib-containers-storage-overlay-560605836dd57768a1625bf83fb2efde4d0b4be2bd75173250f2c981226dcfec-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-560605836dd57768a1625bf83fb2efde4d0b4be2bd75173250f2c981226dcfec-merged.mount has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 systemd[1]: libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state\nAug 02 12:39:16 managed-node2 kernel: device vethe290c1c0 left promiscuous mode\nAug 02 12:39:16 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state\nAug 02 12:39:16 managed-node2 systemd[1]: run-netns-netns\\x2d925a2bce\\x2dbdb1\\x2deec4\\x2d32ca\\x2dde6b846181b3.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-netns-netns\\x2d925a2bce\\x2dbdb1\\x2deec4\\x2d32ca\\x2dde6b846181b3.mount has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 systemd[1]: 
var-lib-containers-storage-overlay\\x2dcontainers-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b-userdata-shm.mount has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 systemd[1]: var-lib-containers-storage-overlay-aecbd6406a5b43a9f34c80dc98df8e948a604d44b70ab60cb4952ed4aea64143-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-aecbd6406a5b43a9f34c80dc98df8e948a604d44b70ab60cb4952ed4aea64143-merged.mount has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 systemd[1]: libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice.\n-- Subject: Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished shutting down.\nAug 02 12:39:16 managed-node2 systemd[1]: machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice: Consumed 199ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice completed and consumed the indicated resources.\nAug 02 12:39:17 managed-node2 podman[31769]: Pods stopped:\nAug 02 12:39:17 managed-node2 podman[31769]: c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd\nAug 02 12:39:17 managed-node2 podman[31769]: Pods removed:\nAug 02 12:39:17 managed-node2 podman[31769]: c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd\nAug 02 12:39:17 managed-node2 podman[31769]: Secrets removed:\nAug 02 12:39:17 managed-node2 podman[31769]: Volumes removed:\nAug 02 12:39:17 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice.\n-- Subject: Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:17 managed-node2 systemd[1]: Started libcontainer container c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.\n-- Subject: Unit libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit 
libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:17 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth69cd15af: link is not ready\nAug 02 12:39:17 managed-node2 NetworkManager[665]: [1754152757.2610] manager: (veth69cd15af): new Veth device (/org/freedesktop/NetworkManager/Devices/7)\nAug 02 12:39:17 managed-node2 systemd-udevd[31935]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nAug 02 12:39:17 managed-node2 systemd-udevd[31935]: Could not generate persistent MAC address for veth69cd15af: No such file or directory\nAug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered blocking state\nAug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered disabled state\nAug 02 12:39:17 managed-node2 kernel: device veth69cd15af entered promiscuous mode\nAug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered blocking state\nAug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered forwarding state\nAug 02 12:39:17 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth69cd15af: link becomes ready\nAug 02 12:39:17 managed-node2 NetworkManager[665]: [1754152757.2829] device (veth69cd15af): carrier: link connected\nAug 02 12:39:17 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses\nAug 02 12:39:17 managed-node2 systemd[1]: Started libcontainer container 686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.\n-- Subject: Unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:17 managed-node2 systemd[1]: Started libcontainer container 42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.\n-- Subject: Unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:17 managed-node2 podman[31769]: Pod:\nAug 02 12:39:17 managed-node2 podman[31769]: 728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748\nAug 02 12:39:17 managed-node2 podman[31769]: Container:\nAug 02 12:39:17 managed-node2 podman[31769]: 42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c\nAug 02 12:39:17 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:18 managed-node2 sudo[32165]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcqlhppzgtldczaoizfnuaorgtcfrvcv ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152758.2331746-17700-125571240428666/AnsiballZ_command.py'\nAug 02 12:39:18 
managed-node2 sudo[32165]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:39:18 managed-node2 platform-python[32168]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:18 managed-node2 systemd[25590]: Started podman-32177.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:18 managed-node2 sudo[32165]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:39:18 managed-node2 platform-python[32306]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:19 managed-node2 platform-python[32437]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:19 managed-node2 sudo[32575]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fljlyrnggisgqfazmxwyyzqdsrmpnozw ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152759.5230556-17753-36771943884398/AnsiballZ_command.py'\nAug 02 12:39:19 managed-node2 sudo[32575]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:39:19 managed-node2 platform-python[32578]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:19 managed-node2 sudo[32575]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:39:20 managed-node2 platform-python[32704]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:20 managed-node2 platform-python[32830]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:21 managed-node2 platform-python[32956]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:21 managed-node2 platform-python[33080]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True 
validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:21 managed-node2 rsyslogd[1025]: imjournal: journal files changed, reloading... [v8.2102.0-15.el8 try https://www.rsyslog.com/e/0 ]\nAug 02 12:39:22 managed-node2 platform-python[33205]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_ga01o8zo_podman/httpd1-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:22 managed-node2 platform-python[33329]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_ga01o8zo_podman/httpd2-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:22 managed-node2 platform-python[33453]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_ga01o8zo_podman/httpd3-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:25 managed-node2 platform-python[33702]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:26 managed-node2 platform-python[33831]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:39:30 managed-node2 platform-python[33956]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:39:33 managed-node2 platform-python[34079]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nAug 02 12:39:33 managed-node2 platform-python[34206]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nAug 02 12:39:34 managed-node2 platform-python[34333]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None 
short=None\nAug 02 12:39:36 managed-node2 platform-python[34456]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:39:39 managed-node2 platform-python[34579]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:39:42 managed-node2 platform-python[34702]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:39:45 managed-node2 platform-python[34825]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nAug 02 12:39:47 managed-node2 platform-python[34986]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nAug 02 12:39:48 managed-node2 platform-python[35109]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nAug 02 12:39:53 managed-node2 platform-python[35232]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nAug 02 12:39:53 managed-node2 platform-python[35356]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:39:54 managed-node2 platform-python[35481]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:54 managed-node2 platform-python[35605]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:55 managed-node2 platform-python[35729]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:56 managed-node2 platform-python[35853]: ansible-command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False 
stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nAug 02 12:39:57 managed-node2 platform-python[35976]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:57 managed-node2 platform-python[36099]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:58 managed-node2 sudo[36222]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgimaanjcdkkcebhasfqpdcfpgwkfami ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152797.8648844-19514-81631354606804/AnsiballZ_podman_image.py'\nAug 02 12:39:58 managed-node2 sudo[36222]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36227.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36235.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36243.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36251.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36259.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36268.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:58 managed-node2 sudo[36222]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:39:59 managed-node2 platform-python[36397]: ansible-stat Invoked with 
path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:39:59 managed-node2 platform-python[36522]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:00 managed-node2 platform-python[36645]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nAug 02 12:40:00 managed-node2 platform-python[36709]: ansible-file Invoked with owner=podman_basic_user group=3001 mode=0644 dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _original_basename=tmp5c1b5ldh recurse=False state=file path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:00 managed-node2 sudo[36832]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkfbzsoxqmfspcyzxykzglzhyzsybbor ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152800.4994206-19649-125491116375657/AnsiballZ_podman_play.py'\nAug 02 12:40:00 managed-node2 sudo[36832]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:40:00 managed-node2 systemd[25590]: Started podman-36843.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nAug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: \nAug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: 
time=\"2025-08-02T12:40:00-04:00\" level=info msg=\"/bin/podman filtering at log level debug\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2025-08-02T12:40:00-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Cached value indicated that metacopy is not being used\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-08-02T12:40:00-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Successfully loaded network 
podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:38:52.156183843 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778)\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Got pod cgroup as /libpod_parent/af868cea690b52212d50213e7cf00f2f99a7e0af0fbb1c22376a1c8272177aef\"\n Error: adding pod to state: name \"httpd1\" is in use: pod already exists\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Shutting down engines\"\nAug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125\nAug 02 12:40:00 managed-node2 sudo[36832]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:01 managed-node2 platform-python[36997]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:40:02 managed-node2 platform-python[37121]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:03 managed-node2 platform-python[37246]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:04 managed-node2 platform-python[37370]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:05 managed-node2 
platform-python[37493]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:06 managed-node2 platform-python[37784]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:07 managed-node2 platform-python[37909]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:07 managed-node2 platform-python[38032]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nAug 02 12:40:07 managed-node2 platform-python[38096]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd2.yml _original_basename=tmp582cc1u4 recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd2.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:40:08 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice.\n-- Subject: Unit machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml\nAug 02 12:40:08 managed-node2 
platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: \nAug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2025-08-02T12:40:08-04:00\" level=info msg=\"/usr/bin/podman filtering at log level debug\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2025-08-02T12:40:08-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\n time=\"2025-08-02T12:40:08-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid 
argument\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-08-02T12:40:08-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:36:24.472660556 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb)\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice for parent machine.slice and name libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice\"\n Error: adding pod to state: name \"httpd2\" is in use: pod already exists\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Shutting down engines\"\nAug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125\nAug 02 12:40:09 managed-node2 platform-python[38380]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:11 managed-node2 platform-python[38505]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:12 managed-node2 platform-python[38629]: ansible-file Invoked 
with path=/tmp/lsr_ga01o8zo_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:12 managed-node2 platform-python[38752]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:14 managed-node2 platform-python[39041]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:14 managed-node2 platform-python[39166]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:15 managed-node2 platform-python[39289]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nAug 02 12:40:15 managed-node2 platform-python[39353]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd3.yml _original_basename=tmp1h9opetg recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd3.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:15 managed-node2 platform-python[39476]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:40:15 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_09fcb92602708a13ddcd36fafab1114193416cfb9688e13dc060dc675a9ebb02.slice.\n-- Subject: Unit machine-libpod_pod_09fcb92602708a13ddcd36fafab1114193416cfb9688e13dc060dc675a9ebb02.slice has finished 
start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_09fcb92602708a13ddcd36fafab1114193416cfb9688e13dc060dc675a9ebb02.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:16 managed-node2 sudo[39637]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqvcznhcsisjvfqgdskaqojrqwhygefl ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152816.415957-20432-182641700984178/AnsiballZ_command.py'\nAug 02 12:40:16 managed-node2 sudo[39637]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:16 managed-node2 platform-python[39640]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:16 managed-node2 systemd[25590]: Started podman-39648.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:16 managed-node2 sudo[39637]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:17 managed-node2 platform-python[39778]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:17 managed-node2 platform-python[39909]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:17 managed-node2 sudo[40040]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omdhsbbfmibalzpyvmgfxmtrjimxacss ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152817.7911968-20497-78215437807137/AnsiballZ_command.py'\nAug 02 12:40:17 managed-node2 sudo[40040]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:18 managed-node2 platform-python[40043]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:18 managed-node2 sudo[40040]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:18 managed-node2 platform-python[40169]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:18 managed-node2 platform-python[40295]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:19 managed-node2 platform-python[40421]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False 
url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:19 managed-node2 platform-python[40545]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:20 managed-node2 platform-python[40669]: ansible-uri Invoked with url=http://localhost:15003/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:23 managed-node2 platform-python[40918]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:24 managed-node2 platform-python[41047]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:27 managed-node2 platform-python[41172]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nAug 02 12:40:28 managed-node2 platform-python[41296]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:28 managed-node2 platform-python[41421]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:29 managed-node2 platform-python[41545]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:30 managed-node2 platform-python[41669]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:31 managed-node2 platform-python[41793]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False 
get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:31 managed-node2 sudo[41918]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxfxukflwegbkcgylwjhylqbzyvcoltj ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152831.2720408-21172-116283974621483/AnsiballZ_systemd.py'\nAug 02 12:40:31 managed-node2 sudo[41918]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:31 managed-node2 platform-python[41921]: ansible-systemd Invoked with name= scope=user state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nAug 02 12:40:31 managed-node2 systemd[25590]: Reloading.\nAug 02 12:40:31 managed-node2 systemd[25590]: Stopping A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit UNIT has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun shutting down.\nAug 02 12:40:31 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered disabled state\nAug 02 12:40:31 managed-node2 kernel: device vethfa4f074b left promiscuous mode\nAug 02 12:40:31 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered disabled state\nAug 02 12:40:32 managed-node2 podman[41937]: Pods stopped:\nAug 02 12:40:32 managed-node2 podman[41937]: 5e85626264c7b01bb8a75935ff611e0d589096a98e89e05fd7aa9037e892318a\nAug 02 12:40:32 managed-node2 podman[41937]: Pods removed:\nAug 02 12:40:32 managed-node2 podman[41937]: 5e85626264c7b01bb8a75935ff611e0d589096a98e89e05fd7aa9037e892318a\nAug 02 12:40:32 managed-node2 podman[41937]: Secrets removed:\nAug 02 12:40:32 managed-node2 podman[41937]: Volumes removed:\nAug 02 12:40:32 managed-node2 systemd[25590]: Stopped A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:32 managed-node2 sudo[41918]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:32 managed-node2 platform-python[42211]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:32 managed-node2 sudo[42336]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvztponjzcqifptzcqpojjxosoczfayg ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152832.762454-21237-209128215005834/AnsiballZ_podman_play.py'\nAug 02 12:40:32 managed-node2 sudo[42336]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:40:33 
managed-node2 platform-python[42339]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nAug 02 12:40:33 managed-node2 systemd[25590]: Started podman-42347.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman kube play --down /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nAug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped:\n Pods removed:\n Secrets removed:\n Volumes removed:\nAug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: \nAug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nAug 02 12:40:33 managed-node2 sudo[42336]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:33 managed-node2 platform-python[42476]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:34 managed-node2 platform-python[42599]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:40:35 managed-node2 platform-python[42723]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:36 managed-node2 platform-python[42848]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:37 managed-node2 platform-python[42972]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nAug 02 12:40:37 managed-node2 systemd[1]: Reloading.\nAug 02 12:40:37 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun shutting down.\nAug 02 12:40:37 managed-node2 systemd[1]: libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope has successfully entered the 'dead' state.\nAug 02 12:40:37 managed-node2 systemd[1]: 
libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope: Consumed 31ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope completed and consumed the indicated resources.\nAug 02 12:40:37 managed-node2 systemd[1]: libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope has successfully entered the 'dead' state.\nAug 02 12:40:37 managed-node2 systemd[1]: libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope: Consumed 35ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope completed and consumed the indicated resources.\nAug 02 12:40:37 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nAug 02 12:40:37 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state\nAug 02 12:40:37 managed-node2 kernel: device vethb3f38e19 left promiscuous mode\nAug 02 12:40:37 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state\nAug 02 12:40:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay-5a0e125408b3f62c274917d9a997808220e1c2685a2d8ff8405416971a11f6c0-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-5a0e125408b3f62c274917d9a997808220e1c2685a2d8ff8405416971a11f6c0-merged.mount has successfully entered the 'dead' state.\nAug 02 12:40:37 managed-node2 systemd[1]: run-netns-netns\\x2de8368567\\x2d59b6\\x2d542f\\x2d1a97\\x2df2ca68e931e3.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-netns-netns\\x2de8368567\\x2d59b6\\x2d542f\\x2d1a97\\x2df2ca68e931e3.mount has successfully entered the 'dead' state.\nAug 02 12:40:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561-userdata-shm.mount has successfully entered the 'dead' state.\nAug 02 12:40:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay-15c93535d4532096f93e6259ac42e3e35cf574dd19355624da5c37ad60d78144-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-15c93535d4532096f93e6259ac42e3e35cf574dd19355624da5c37ad60d78144-merged.mount has successfully entered the 'dead' state.\nAug 02 12:40:37 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice.\n-- Subject: Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has 
finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has finished shutting down.\nAug 02 12:40:37 managed-node2 systemd[1]: machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice: Consumed 67ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice completed and consumed the indicated resources.\nAug 02 12:40:38 managed-node2 systemd[1]: libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope has successfully entered the 'dead' state.\nAug 02 12:40:38 managed-node2 systemd[1]: libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope: Consumed 35ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope completed and consumed the indicated resources.\nAug 02 12:40:38 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27-userdata-shm.mount has successfully entered the 'dead' state.\nAug 02 12:40:38 managed-node2 podman[43008]: Pods stopped:\nAug 02 12:40:38 managed-node2 podman[43008]: 7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3\nAug 02 12:40:38 managed-node2 podman[43008]: Pods removed:\nAug 02 12:40:38 managed-node2 podman[43008]: 7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3\nAug 02 12:40:38 managed-node2 podman[43008]: Secrets removed:\nAug 02 12:40:38 managed-node2 podman[43008]: Volumes removed:\nAug 02 12:40:38 managed-node2 dnsmasq[29863]: exiting on receipt of SIGTERM\nAug 02 12:40:38 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has successfully entered the 'dead' state.\nAug 02 12:40:38 managed-node2 systemd[1]: Stopped A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished shutting down.\nAug 02 12:40:38 managed-node2 platform-python[43285]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:38 managed-node2 systemd[1]: 
var-lib-containers-storage-overlay-1ad187e54185c2c7cbe64b95feb5c0fbe8c425581baa88a9f71bb6eaaa92a272-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-1ad187e54185c2c7cbe64b95feb5c0fbe8c425581baa88a9f71bb6eaaa92a272-merged.mount has successfully entered the 'dead' state.\nAug 02 12:40:38 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd2.yml\nAug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman kube play --down /etc/containers/ansible-kubernetes.d/httpd2.yml\nAug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped:\n Pods removed:\n Secrets removed:\n Volumes removed:\nAug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: \nAug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nAug 02 12:40:39 managed-node2 platform-python[43546]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:40 managed-node2 platform-python[43669]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:42 managed-node2 platform-python[43794]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:42 managed-node2 platform-python[43918]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nAug 02 12:40:42 managed-node2 systemd[1]: Reloading.\nAug 02 12:40:43 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun shutting down.\nAug 02 12:40:43 managed-node2 systemd[1]: 
libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 systemd[1]: libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope: Consumed 32ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope completed and consumed the indicated resources.\nAug 02 12:40:43 managed-node2 systemd[1]: libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 systemd[1]: libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope: Consumed 34ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope completed and consumed the indicated resources.\nAug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay-65e7014be45047102e9045dcdd9345e82206f8672e1c2920f53097bbdf3fcc43-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-65e7014be45047102e9045dcdd9345e82206f8672e1c2920f53097bbdf3fcc43-merged.mount has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered disabled state\nAug 02 12:40:43 managed-node2 kernel: device veth69cd15af left promiscuous mode\nAug 02 12:40:43 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered disabled state\nAug 02 12:40:43 managed-node2 systemd[1]: run-netns-netns\\x2dfd171033\\x2dc8d0\\x2d5ddd\\x2d985b\\x2d865fa20d123b.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-netns-netns\\x2dfd171033\\x2dc8d0\\x2d5ddd\\x2d985b\\x2d865fa20d123b.mount has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c-userdata-shm.mount has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay-bd51c7fcb084eb966e3f3b68637936b0c8d50499aba52b4ffce32c4ad877cf0d-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-bd51c7fcb084eb966e3f3b68637936b0c8d50499aba52b4ffce32c4ad877cf0d-merged.mount has successfully entered the 'dead' state.\nAug 02 12:40:43 
managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice.\n-- Subject: Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished shutting down.\nAug 02 12:40:43 managed-node2 systemd[1]: machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice: Consumed 66ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice completed and consumed the indicated resources.\nAug 02 12:40:43 managed-node2 podman[43954]: Pods stopped:\nAug 02 12:40:43 managed-node2 podman[43954]: 728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748\nAug 02 12:40:43 managed-node2 podman[43954]: Pods removed:\nAug 02 12:40:43 managed-node2 podman[43954]: 728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748\nAug 02 12:40:43 managed-node2 podman[43954]: Secrets removed:\nAug 02 12:40:43 managed-node2 podman[43954]: Volumes removed:\nAug 02 12:40:43 managed-node2 systemd[1]: libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 systemd[1]: libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope: Consumed 35ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope completed and consumed the indicated resources.\nAug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124-userdata-shm.mount has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 systemd[1]: Stopped A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished shutting down.\nAug 02 12:40:44 managed-node2 platform-python[44224]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml 
follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:44 managed-node2 systemd[1]: var-lib-containers-storage-overlay-7cab78c749c43bcbce8111bde995b9f5abcdf949ac791b25a3b873fde27f6845-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-7cab78c749c43bcbce8111bde995b9f5abcdf949ac791b25a3b873fde27f6845-merged.mount has successfully entered the 'dead' state.\nAug 02 12:40:44 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:40:44 managed-node2 platform-python[44349]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:40:44 managed-node2 platform-python[44349]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd3.yml\nAug 02 12:40:44 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:40:44 managed-node2 platform-python[44486]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:45 managed-node2 platform-python[44609]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None\nAug 02 12:40:46 managed-node2 platform-python[44733]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:47 managed-node2 sudo[44858]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olrmvkvglxbsttmchncdtptzsqpgbnrc ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152846.6847095-21950-187401977583699/AnsiballZ_podman_container_info.py'\nAug 02 12:40:47 managed-node2 sudo[44858]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:47 managed-node2 platform-python[44861]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None\nAug 02 12:40:47 managed-node2 systemd[25590]: Started podman-44863.scope.\n-- Subject: Unit UNIT has finished start-up\n-- 
Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:47 managed-node2 sudo[44858]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:47 managed-node2 sudo[44992]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpjmlfvygeamsbgxiibmynquopakmqzw ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152847.3914187-21987-273657899931369/AnsiballZ_command.py'\nAug 02 12:40:47 managed-node2 sudo[44992]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:47 managed-node2 platform-python[44995]: ansible-command Invoked with _raw_params=podman network ls -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:47 managed-node2 systemd[25590]: Started podman-44997.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:47 managed-node2 sudo[44992]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:47 managed-node2 sudo[45152]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkszomlmijhetpvbizxheimorvpsvgdk ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152847.866539-22018-22833626881705/AnsiballZ_command.py'\nAug 02 12:40:47 managed-node2 sudo[45152]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:48 managed-node2 platform-python[45155]: ansible-command Invoked with _raw_params=podman secret ls -n -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:48 managed-node2 systemd[25590]: Started podman-45157.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:48 managed-node2 sudo[45152]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:48 managed-node2 platform-python[45287]: ansible-command Invoked with removes=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl disable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None stdin=None\nAug 02 12:40:48 managed-node2 systemd[1]: Stopping User Manager for UID 3001...\n-- Subject: Unit user@3001.service has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user@3001.service has begun shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopping podman-pause-9fcbd008.scope.\n-- Subject: Unit UNIT has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Default.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 
systemd[25590]: Stopping D-Bus User Message Bus...\n-- Subject: Unit UNIT has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Removed slice podman\\x2dkube.slice.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopped D-Bus User Message Bus.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Basic System.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Sockets.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Timers.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopped Mark boot as successful after the user session has run 2 minutes.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Paths.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Closed D-Bus User Message Bus Socket.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopped podman-pause-9fcbd008.scope.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Removed slice user.slice.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Reached target Shutdown.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:48 managed-node2 systemd[25590]: Started Exit the Session.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:48 managed-node2 systemd[25590]: Reached target Exit the Session.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:48 
managed-node2 systemd[1]: user@3001.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit user@3001.service has successfully entered the 'dead' state.\nAug 02 12:40:48 managed-node2 systemd[1]: Stopped User Manager for UID 3001.\n-- Subject: Unit user@3001.service has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user@3001.service has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[1]: Stopping User runtime directory /run/user/3001...\n-- Subject: Unit user-runtime-dir@3001.service has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has begun shutting down.\nAug 02 12:40:48 managed-node2 systemd[1]: run-user-3001.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-user-3001.mount has successfully entered the 'dead' state.\nAug 02 12:40:48 managed-node2 systemd[1]: user-runtime-dir@3001.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit user-runtime-dir@3001.service has successfully entered the 'dead' state.\nAug 02 12:40:48 managed-node2 systemd[1]: Stopped User runtime directory /run/user/3001.\n-- Subject: Unit user-runtime-dir@3001.service has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[1]: Removed slice User Slice of UID 3001.\n-- Subject: Unit user-3001.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-3001.slice has finished shutting down.\nAug 02 12:40:48 managed-node2 platform-python[45419]: ansible-command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:49 managed-node2 sudo[45543]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbqbllrdzcpiqglhdxzdzawivaqzmpyy ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152849.62423-22100-37847572003711/AnsiballZ_command.py'\nAug 02 12:40:49 managed-node2 sudo[45543]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:49 managed-node2 platform-python[45546]: ansible-command Invoked with _raw_params=podman pod exists httpd1 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:49 managed-node2 sudo[45543]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:50 managed-node2 platform-python[45676]: ansible-command Invoked with _raw_params=podman pod exists httpd2 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered 
the 'dead' state.\nAug 02 12:40:50 managed-node2 platform-python[45806]: ansible-command Invoked with _raw_params=podman pod exists httpd3 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:40:50 managed-node2 sudo[45937]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kquekbnhuqoxfuhqzmvgbrdfuqcyvrpf ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152850.8293648-22145-217225041127031/AnsiballZ_command.py'\nAug 02 12:40:50 managed-node2 sudo[45937]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:51 managed-node2 platform-python[45940]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:51 managed-node2 sudo[45937]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:51 managed-node2 platform-python[46066]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:51 managed-node2 platform-python[46192]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:52 managed-node2 platform-python[46318]: ansible-stat Invoked with path=/var/lib/systemd/linger/podman_basic_user follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:55 managed-node2 platform-python[46566]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:56 managed-node2 platform-python[46695]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:40:57 managed-node2 platform-python[46819]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:00 managed-node2 platform-python[46944]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nAug 02 12:41:01 managed-node2 platform-python[47068]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:01 managed-node2 platform-python[47193]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:01 managed-node2 platform-python[47317]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True 
argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:03 managed-node2 platform-python[47441]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:03 managed-node2 platform-python[47565]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:04 managed-node2 platform-python[47688]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:04 managed-node2 platform-python[47811]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:06 managed-node2 platform-python[47934]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:41:06 managed-node2 platform-python[48058]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:08 managed-node2 platform-python[48183]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:08 managed-node2 platform-python[48307]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nAug 02 12:41:09 managed-node2 platform-python[48434]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:09 managed-node2 platform-python[48557]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:11 managed-node2 platform-python[48680]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:12 managed-node2 platform-python[48805]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:13 managed-node2 platform-python[48929]: ansible-systemd Invoked with name= 
scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nAug 02 12:41:14 managed-node2 platform-python[49056]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:14 managed-node2 platform-python[49179]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:15 managed-node2 platform-python[49302]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None\nAug 02 12:41:16 managed-node2 platform-python[49426]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:17 managed-node2 platform-python[49549]: ansible-file Invoked with path=/etc/containers/storage.conf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:17 managed-node2 platform-python[49672]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:18 managed-node2 sshd[49693]: Accepted publickey for root from 10.31.46.71 port 34968 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nAug 02 12:41:18 managed-node2 systemd-logind[591]: New session 9 of user root.\n-- Subject: A new session 9 has been created for user root\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A new session with the ID 9 has been created for the user root.\n-- \n-- The leading process of the session is 49693.\nAug 02 12:41:18 managed-node2 systemd[1]: Started Session 9 of user root.\n-- Subject: Unit session-9.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit session-9.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:41:18 managed-node2 sshd[49693]: pam_unix(sshd:session): session opened for user root by (uid=0)\nAug 02 12:41:18 managed-node2 sshd[49696]: Received disconnect from 10.31.46.71 port 34968:11: disconnected by user\nAug 02 12:41:18 managed-node2 sshd[49696]: Disconnected from 
user root 10.31.46.71 port 34968\nAug 02 12:41:18 managed-node2 sshd[49693]: pam_unix(sshd:session): session closed for user root\nAug 02 12:41:18 managed-node2 systemd[1]: session-9.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit session-9.scope has successfully entered the 'dead' state.\nAug 02 12:41:18 managed-node2 systemd-logind[591]: Session 9 logged out. Waiting for processes to exit.\nAug 02 12:41:18 managed-node2 systemd-logind[591]: Removed session 9.\n-- Subject: Session 9 has been terminated\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A session with the ID 9 has been terminated.\nAug 02 12:41:20 managed-node2 platform-python[49858]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d\nAug 02 12:41:21 managed-node2 platform-python[49985]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:21 managed-node2 platform-python[50108]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:24 managed-node2 platform-python[50356]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:25 managed-node2 platform-python[50485]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:41:26 managed-node2 platform-python[50609]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:27 managed-node2 sshd[50632]: Accepted publickey for root from 10.31.46.71 port 55872 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nAug 02 12:41:27 managed-node2 systemd-logind[591]: New session 10 of user root.\n-- Subject: A new session 10 has been created for user root\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A new session with the ID 10 has been created for the user root.\n-- \n-- The leading process of the session is 50632.\nAug 02 12:41:27 managed-node2 systemd[1]: Started Session 10 of user root.\n-- Subject: Unit session-10.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit session-10.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:41:27 managed-node2 sshd[50632]: pam_unix(sshd:session): session opened for user root by (uid=0)\nAug 02 12:41:27 managed-node2 sshd[50635]: Received disconnect from 10.31.46.71 port 55872:11: disconnected by user\nAug 02 12:41:27 managed-node2 sshd[50635]: Disconnected from user root 10.31.46.71 port 55872\nAug 02 12:41:27 managed-node2 sshd[50632]: pam_unix(sshd:session): session closed for user root\nAug 02 12:41:27 managed-node2 systemd[1]: session-10.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: 
https://access.redhat.com/support\n-- \n-- The unit session-10.scope has successfully entered the 'dead' state.\nAug 02 12:41:27 managed-node2 systemd-logind[591]: Session 10 logged out. Waiting for processes to exit.\nAug 02 12:41:27 managed-node2 systemd-logind[591]: Removed session 10.\n-- Subject: Session 10 has been terminated\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A session with the ID 10 has been terminated.\nAug 02 12:41:29 managed-node2 platform-python[50797]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d\nAug 02 12:41:33 managed-node2 platform-python[50949]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:33 managed-node2 platform-python[51072]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:35 managed-node2 platform-python[51320]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:36 managed-node2 platform-python[51449]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:41:37 managed-node2 platform-python[51573]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:41 managed-node2 sshd[51596]: Accepted publickey for root from 10.31.46.71 port 38190 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nAug 02 12:41:41 managed-node2 systemd[1]: Started Session 11 of user root.\n-- Subject: Unit session-11.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit session-11.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:41:41 managed-node2 systemd-logind[591]: New session 11 of user root.\n-- Subject: A new session 11 has been created for user root\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A new session with the ID 11 has been created for the user root.\n-- \n-- The leading process of the session is 51596.\nAug 02 12:41:41 managed-node2 sshd[51596]: pam_unix(sshd:session): session opened for user root by (uid=0)\nAug 02 12:41:41 managed-node2 sshd[51599]: Received disconnect from 10.31.46.71 port 38190:11: disconnected by user\nAug 02 12:41:41 managed-node2 sshd[51599]: Disconnected from user root 10.31.46.71 port 38190\nAug 02 12:41:41 managed-node2 sshd[51596]: pam_unix(sshd:session): session closed for user root\nAug 02 12:41:41 managed-node2 systemd[1]: session-11.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit session-11.scope has successfully entered the 'dead' state.\nAug 02 12:41:41 managed-node2 systemd-logind[591]: Session 11 logged out. 
Waiting for processes to exit.\nAug 02 12:41:41 managed-node2 systemd-logind[591]: Removed session 11.\n-- Subject: Session 11 has been terminated\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A session with the ID 11 has been terminated.\nAug 02 12:41:43 managed-node2 platform-python[51761]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d\nAug 02 12:41:43 managed-node2 platform-python[51913]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:44 managed-node2 platform-python[52036]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:45 managed-node2 platform-python[52160]: ansible-dnf Invoked with name=['python3-pyasn1', 'python3-cryptography', 'python3-dbus'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:41:48 managed-node2 platform-python[52288]: ansible-dnf Invoked with name=['certmonger'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nAug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nAug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nAug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nAug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nAug 02 12:41:51 managed-node2 systemd[1]: Reloading.\nAug 02 12:41:51 managed-node2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.\n-- Subject: Unit run-r5b158d19759a4bbaa61aee183ab0cad0.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit run-r5b158d19759a4bbaa61aee183ab0cad0.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:41:51 managed-node2 systemd[1]: Starting man-db-cache-update.service...\n-- Subject: Unit man-db-cache-update.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit man-db-cache-update.service has begun starting up.\nAug 02 12:41:52 managed-node2 systemd[1]: Reloading.\nAug 02 12:41:52 managed-node2 systemd[1]: man-db-cache-update.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit man-db-cache-update.service has successfully entered the 'dead' state.\nAug 02 
12:41:52 managed-node2 systemd[1]: Started man-db-cache-update.service.\n-- Subject: Unit man-db-cache-update.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit man-db-cache-update.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:41:52 managed-node2 systemd[1]: run-r5b158d19759a4bbaa61aee183ab0cad0.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-r5b158d19759a4bbaa61aee183ab0cad0.service has successfully entered the 'dead' state.\nAug 02 12:41:53 managed-node2 platform-python[52920]: ansible-file Invoked with name=/etc/certmonger//pre-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//pre-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:53 managed-node2 platform-python[53043]: ansible-file Invoked with name=/etc/certmonger//post-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//post-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:54 managed-node2 platform-python[53166]: ansible-systemd Invoked with name=certmonger state=started enabled=True daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nAug 02 12:41:54 managed-node2 systemd[1]: Reloading.\nAug 02 12:41:54 managed-node2 systemd[1]: Starting Certificate monitoring and PKI enrollment...\n-- Subject: Unit certmonger.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit certmonger.service has begun starting up.\nAug 02 12:41:54 managed-node2 systemd[1]: Started Certificate monitoring and PKI enrollment.\n-- Subject: Unit certmonger.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit certmonger.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:41:55 managed-node2 platform-python[53359]: ansible-fedora.linux_system_roles.certificate_request Invoked with name=quadlet_demo dns=['localhost'] directory=/etc/pki/tls wait=True ca=self-sign __header=#\n # Ansible managed\n #\n # system_role:certificate\n booted=True provider_config_directory=/etc/certmonger provider=certmonger key_usage=['digitalSignature', 'keyEncipherment'] extended_key_usage=['id-kp-serverAuth', 'id-kp-clientAuth'] auto_renew=True ip=None email=None common_name=None country=None state=None locality=None organization=None organizational_unit=None contact_email=None key_size=None owner=None group=None mode=None principal=None run_before=None run_after=None\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 
certmonger[53375]: Certificate in file \"/etc/pki/tls/certs/quadlet_demo.crt\" issued by CA and saved.\nAug 02 12:41:55 managed-node2 
certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:56 managed-node2 platform-python[53497]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt\nAug 02 12:41:56 managed-node2 platform-python[53620]: ansible-slurp Invoked with path=/etc/pki/tls/private/quadlet_demo.key src=/etc/pki/tls/private/quadlet_demo.key\nAug 02 12:41:57 managed-node2 platform-python[53743]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt\nAug 02 12:41:57 managed-node2 platform-python[53866]: ansible-command Invoked with _raw_params=getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:57 managed-node2 certmonger[53202]: 2025-08-02 12:41:57 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:58 managed-node2 platform-python[53990]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:58 managed-node2 platform-python[54113]: ansible-file Invoked with path=/etc/pki/tls/private/quadlet_demo.key state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:58 managed-node2 platform-python[54236]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:59 managed-node2 platform-python[54359]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:59 managed-node2 platform-python[54482]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:42:02 managed-node2 platform-python[54730]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:42:03 managed-node2 platform-python[54859]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:42:03 
managed-node2 platform-python[54983]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:42:06 managed-node2 platform-python[55108]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:42:06 managed-node2 platform-python[55231]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:42:07 managed-node2 platform-python[55354]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:42:08 managed-node2 platform-python[55478]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:42:11 managed-node2 platform-python[55601]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nAug 02 12:42:11 managed-node2 platform-python[55728]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nAug 02 12:42:12 managed-node2 platform-python[55855]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:42:13 managed-node2 platform-python[55978]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:42:15 managed-node2 platform-python[56101]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None", "task_name": "Dump journal", "task_path": "/tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:142" }, { "ansible_version": "2.9.27", "end_time": "2025-08-02T16:42:30.367950+00:00Z", "host": "managed-node2", "message": "the output has been hidden 
due to the fact that 'no_log: true' was specified for this result", "start_time": "2025-08-02T16:42:30.349422+00:00Z", "task_name": "Manage each secret", "task_path": "/tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_secret.yml:42" }, { "ansible_version": "2.9.27", "delta": "0:00:00.028118", "end_time": "2025-08-02 12:42:31.468197", "host": "managed-node2", "message": "No message could be found", "rc": 0, "start_time": "2025-08-02 12:42:31.440079", "stdout": "-- Logs begin at Sat 2025-08-02 12:30:34 EDT, end at Sat 2025-08-02 12:42:31 EDT. --\nAug 02 12:36:15 managed-node2 kernel: SELinux: Converting 460 SID table entries...\nAug 02 12:36:15 managed-node2 kernel: SELinux: policy capability network_peer_controls=1\nAug 02 12:36:15 managed-node2 kernel: SELinux: policy capability open_perms=1\nAug 02 12:36:15 managed-node2 kernel: SELinux: policy capability extended_socket_class=1\nAug 02 12:36:15 managed-node2 kernel: SELinux: policy capability always_check_network=0\nAug 02 12:36:15 managed-node2 kernel: SELinux: policy capability cgroup_seclabel=1\nAug 02 12:36:15 managed-node2 kernel: SELinux: policy capability nnp_nosuid_transition=1\nAug 02 12:36:15 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nAug 02 12:36:15 managed-node2 platform-python[14798]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nAug 02 12:36:20 managed-node2 platform-python[14921]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:36:22 managed-node2 platform-python[15046]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:36:23 managed-node2 platform-python[15169]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:36:23 managed-node2 platform-python[15292]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nAug 02 12:36:23 managed-node2 platform-python[15391]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/nopull.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152583.1815786-10272-87697288283027/source _original_basename=tmp987jyyff follow=False checksum=d5dc917e3cae36de03aa971a17ac473f86fdf934 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nAug 02 12:36:24 managed-node2 platform-python[15516]: ansible-containers.podman.podman_play Invoked with state=created kube_file=/etc/containers/ansible-kubernetes.d/nopull.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None 
username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:36:24 managed-node2 kernel: evm: overlay not supported\nAug 02 12:36:24 managed-node2 systemd[1]: Created slice machine.slice.\n-- Subject: Unit machine.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:36:24 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice.\n-- Subject: Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:36:25 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:36:29 managed-node2 platform-python[15841]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:36:31 managed-node2 platform-python[15970]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:36:34 managed-node2 platform-python[16095]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:36:37 managed-node2 platform-python[16218]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nAug 02 12:36:38 managed-node2 platform-python[16345]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nAug 02 12:36:38 managed-node2 platform-python[16472]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:36:41 managed-node2 platform-python[16595]: ansible-dnf Invoked with name=['python3-libselinux', 
'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:36:44 managed-node2 platform-python[16718]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:36:47 managed-node2 platform-python[16841]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:36:50 managed-node2 platform-python[16964]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nAug 02 12:36:51 managed-node2 platform-python[17112]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nAug 02 12:36:52 managed-node2 platform-python[17235]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nAug 02 12:36:57 managed-node2 platform-python[17358]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:36:59 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:37:00 managed-node2 platform-python[17620]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:37:00 managed-node2 platform-python[17743]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:37:01 managed-node2 platform-python[17866]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_checksum=True checksum_algorithm=sha1 
get_md5=False get_mime=True get_attributes=True\nAug 02 12:37:01 managed-node2 platform-python[17965]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/bogus.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152620.9003708-11840-265132279831358/source _original_basename=tmp6af94dg8 follow=False checksum=f8266a972ed3be7e204d2a67883fe3a22b8dbf18 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nAug 02 12:37:02 managed-node2 platform-python[18090]: ansible-containers.podman.podman_play Invoked with state=created kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:37:02 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice.\n-- Subject: Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:37:02 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:37:05 managed-node2 platform-python[18377]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:37:06 managed-node2 platform-python[18506]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:37:10 managed-node2 platform-python[18631]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:37:13 managed-node2 platform-python[18754]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nAug 02 12:37:14 managed-node2 platform-python[18881]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nAug 02 12:37:14 managed-node2 platform-python[19008]: 
ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:37:16 managed-node2 platform-python[19131]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:37:19 managed-node2 platform-python[19254]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:37:22 managed-node2 platform-python[19377]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:37:25 managed-node2 platform-python[19500]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nAug 02 12:37:27 managed-node2 platform-python[19648]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nAug 02 12:37:28 managed-node2 platform-python[19771]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nAug 02 12:37:32 managed-node2 platform-python[19894]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:37:33 managed-node2 platform-python[20019]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:37:34 managed-node2 platform-python[20143]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nAug 02 12:37:35 managed-node2 platform-python[20270]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml follow=False get_md5=False get_checksum=True get_mime=True 
get_attributes=True checksum_algorithm=sha1\nAug 02 12:37:35 managed-node2 platform-python[20395]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/nopull.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:37:35 managed-node2 platform-python[20395]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/nopull.yml\nAug 02 12:37:35 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice.\n-- Subject: Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice has finished shutting down.\nAug 02 12:37:35 managed-node2 systemd[1]: machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice: Consumed 0 CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_529d5729a8a1a5d9309b23c7023dc67a24ef86519f36f0ece8c317334447c9df.slice completed and consumed the indicated resources.\nAug 02 12:37:35 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:37:36 managed-node2 platform-python[20533]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/nopull.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:37:36 managed-node2 platform-python[20656]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:37:39 managed-node2 platform-python[20911]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:37:41 managed-node2 platform-python[21040]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:37:45 managed-node2 platform-python[21165]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] 
download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:37:48 managed-node2 platform-python[21288]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nAug 02 12:37:48 managed-node2 platform-python[21415]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nAug 02 12:37:49 managed-node2 platform-python[21542]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:37:51 managed-node2 platform-python[21665]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:37:54 managed-node2 platform-python[21788]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:37:57 managed-node2 platform-python[21911]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:38:00 managed-node2 platform-python[22034]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nAug 02 12:38:02 managed-node2 platform-python[22182]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nAug 02 12:38:03 managed-node2 platform-python[22305]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nAug 02 12:38:08 managed-node2 platform-python[22428]: 
ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:38:09 managed-node2 platform-python[22553]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:38:10 managed-node2 platform-python[22677]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nAug 02 12:38:10 managed-node2 platform-python[22804]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:38:11 managed-node2 platform-python[22929]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/bogus.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:38:11 managed-node2 platform-python[22929]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/bogus.yml\nAug 02 12:38:11 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice.\n-- Subject: Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice has finished shutting down.\nAug 02 12:38:11 managed-node2 systemd[1]: machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice: Consumed 0 CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_39a7481b612649d9df6c26df41e78899d4ab2912170b60fc752ba1f008507a78.slice completed and consumed the indicated resources.\nAug 02 12:38:11 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:38:12 managed-node2 platform-python[23068]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/bogus.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:38:12 managed-node2 platform-python[23191]: ansible-command Invoked with _raw_params=podman image prune -f warn=True _uses_shell=False 
stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:38:16 managed-node2 platform-python[23446]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:38:17 managed-node2 platform-python[23575]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:38:21 managed-node2 platform-python[23700]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:38:24 managed-node2 platform-python[23823]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nAug 02 12:38:24 managed-node2 platform-python[23950]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nAug 02 12:38:25 managed-node2 platform-python[24077]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:38:27 managed-node2 platform-python[24200]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:38:30 managed-node2 platform-python[24323]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:38:33 managed-node2 platform-python[24446]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ 
install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:38:36 managed-node2 platform-python[24569]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nAug 02 12:38:38 managed-node2 platform-python[24717]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nAug 02 12:38:38 managed-node2 platform-python[24840]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nAug 02 12:38:43 managed-node2 platform-python[24963]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nAug 02 12:38:43 managed-node2 platform-python[25087]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:38:44 managed-node2 platform-python[25212]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:38:44 managed-node2 platform-python[25336]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:38:46 managed-node2 platform-python[25460]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:38:47 managed-node2 platform-python[25584]: ansible-command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nAug 02 12:38:47 managed-node2 systemd[1]: Created slice User Slice of UID 3001.\n-- Subject: Unit user-3001.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-3001.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[1]: Starting User runtime directory /run/user/3001...\n-- Subject: Unit user-runtime-dir@3001.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has begun starting up.\nAug 02 12:38:47 managed-node2 systemd[1]: Started User runtime directory /run/user/3001.\n-- Subject: Unit user-runtime-dir@3001.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[1]: Starting User Manager for UID 3001...\n-- Subject: Unit user@3001.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user@3001.service has begun starting up.\nAug 02 12:38:47 managed-node2 systemd[25590]: pam_unix(systemd-user:session): session opened for user 
podman_basic_user by (uid=0)\nAug 02 12:38:47 managed-node2 systemd[25590]: Starting D-Bus User Message Bus Socket.\n-- Subject: Unit UNIT has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun starting up.\nAug 02 12:38:47 managed-node2 systemd[25590]: Started Mark boot as successful after the user session has run 2 minutes.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[25590]: Reached target Paths.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[25590]: Reached target Timers.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[25590]: Listening on D-Bus User Message Bus Socket.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[25590]: Reached target Sockets.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[25590]: Reached target Basic System.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[25590]: Reached target Default.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:47 managed-node2 systemd[25590]: Startup finished in 32ms.\n-- Subject: User manager start-up is now complete\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The user manager instance for user 3001 has been started. All services queued\n-- for starting have been started. 
Note that other services might still be starting\n-- up or be started at any later time.\n-- \n-- Startup of the manager took 32456 microseconds.\nAug 02 12:38:47 managed-node2 systemd[1]: Started User Manager for UID 3001.\n-- Subject: Unit user@3001.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user@3001.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:48 managed-node2 platform-python[25725]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:38:48 managed-node2 platform-python[25848]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:38:48 managed-node2 sudo[25971]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsbiyvlbwevndfyvleplnipyfcreleuz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152728.615097-16354-140653512589780/AnsiballZ_podman_image.py'\nAug 02 12:38:48 managed-node2 sudo[25971]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:38:49 managed-node2 systemd[25590]: Started D-Bus User Message Bus.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:49 managed-node2 systemd[25590]: Created slice user.slice.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:49 managed-node2 systemd[25590]: Started podman-25984.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:49 managed-node2 systemd[25590]: Started podman-pause-9fcbd008.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:49 managed-node2 systemd[25590]: Started podman-26000.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:49 managed-node2 systemd[25590]: Started podman-26016.scope.\n-- Subject: Unit UNIT has finished start-up\n-- 
Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:50 managed-node2 sudo[25971]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:38:50 managed-node2 platform-python[26145]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:38:50 managed-node2 platform-python[26268]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:38:51 managed-node2 platform-python[26391]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nAug 02 12:38:51 managed-node2 platform-python[26490]: ansible-copy Invoked with dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml owner=podman_basic_user group=3001 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152731.0942633-16483-51427114771259/source _original_basename=tmpz7phazza follow=False checksum=41ba442683d49d3571d4ddce7f5dc14c85104270 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nAug 02 12:38:51 managed-node2 sudo[26615]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adtlljryarlxiuspbmjgrszvqdzysmgm ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152731.799995-16513-128710640317424/AnsiballZ_podman_play.py'\nAug 02 12:38:51 managed-node2 sudo[26615]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:38:52 managed-node2 systemd[25590]: Started podman-26626.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:52 managed-node2 kernel: tun: Universal TUN/TAP device driver, 1.6\nAug 02 12:38:52 managed-node2 systemd[25590]: Started rootless-netns-6da9f76b.scope.\n-- Subject: Unit UNIT has finished start-up\n-- 
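At this point ansible-copy has placed the generated Kubernetes YAML under ~/.config/containers/ansible-kubernetes.d/ and containers.podman.podman_play launches it rootless as podman_basic_user. The exact command the module assembles is echoed in the PODMAN-PLAY-KUBE records that follow; as a standalone sketch, with XDG_RUNTIME_DIR set the same way the become wrapper sets it:

    # Rootless equivalent of the podman_play task (paths taken from the log)
    XDG_RUNTIME_DIR=/run/user/3001 \
    podman play kube --start=true --log-level=debug \
        /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml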
Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:52 managed-node2 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.\nAug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethff7bc329: link is not ready\nAug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered blocking state\nAug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered disabled state\nAug 02 12:38:52 managed-node2 kernel: device vethff7bc329 entered promiscuous mode\nAug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready\nAug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready\nAug 02 12:38:52 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethff7bc329: link becomes ready\nAug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered blocking state\nAug 02 12:38:52 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered forwarding state\nAug 02 12:38:52 managed-node2 dnsmasq[26814]: listening on cni-podman1(#3): 10.89.0.1\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: started, version 2.79 cachesize 150\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: using local addresses only for domain dns.podman\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: reading /etc/resolv.conf\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: using local addresses only for domain dns.podman\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.0.2.3#53\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.29.169.13#53\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.29.170.12#53\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: using nameserver 10.2.32.1#53\nAug 02 12:38:52 managed-node2 dnsmasq[26816]: read /run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nAug 02 12:38:52 managed-node2 conmon[26830]: conmon af16b69d72cc4526d63a : failed to write to /proc/self/oom_score_adj: Permission denied\nAug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/14/attach}\nAug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : terminal_ctrl_fd: 14\nAug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : winsz read side: 17, winsz write side: 18\nAug 02 12:38:52 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : container PID: 26841\nAug 02 12:38:52 managed-node2 conmon[26851]: conmon 98c476488369c461640e : failed to write to /proc/self/oom_score_adj: Permission denied\nAug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}\nAug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : terminal_ctrl_fd: 13\nAug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : winsz read side: 16, winsz write side: 17\nAug 02 12:38:52 managed-node2 conmon[26852]: conmon 98c476488369c461640e : container PID: 26862\nAug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play 
kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nAug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod:\n 191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f\n Container:\n 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\n \nAug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"/bin/podman filtering at log level debug\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Cached value indicated that metacopy is not being used\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for 
OCI runtime kata: invalid argument\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Successfully loaded 1 networks\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"found free device name cni-podman1\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"found free ipv4 network subnet 10.89.0.0/24\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:38:52.156183843 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"reference \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\" does not resolve to an image ID\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"reference \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\" does not resolve to an image ID\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"FROM \\\"scratch\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Cached value indicated that idmapped mounts for overlay are not supported\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Check for idmapped mounts support \"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"overlay: test mount indicated that volatile is being used\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"overlay: 
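The runtime-initialization failures above are expected noise rather than errors: podman probes every OCI runtime listed in its configuration (runsc, youki, krun, crun-wasm, runj, kata, ocijail) and settles on the first one whose executable actually exists, /usr/bin/runc here. One way to confirm which runtime a host resolved to:

    # Print the OCI runtime podman selected (matches the log's /usr/bin/runc)
    podman info --format '{{.Host.OCIRuntime.Path}}'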
mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/empty,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/work,userxattr,volatile,context=\\\"system_u:object_r:container_file_t:s0:c153,c335\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Container ID: 0aa3a7343fabfcbf854b2db926e8ef83982bdca7985f430aea1f98b430ae4469\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:copy Args:[/usr/libexec/podman/catatonit /catatonit] Flags:[] Attrs:map[] Message:COPY /usr/libexec/podman/catatonit /catatonit Heredocs:[] Original:COPY /usr/libexec/podman/catatonit /catatonit}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"COPY []string(nil), imagebuilder.Copy{FromFS:false, From:\\\"\\\", Src:[]string{\\\"/usr/libexec/podman/catatonit\\\"}, Dest:\\\"/catatonit\\\", Download:false, Chown:\\\"\\\", Chmod:\\\"\\\", Checksum:\\\"\\\", Files:[]imagebuilder.File(nil)}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"added content file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Parsed Step: {Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] Command:entrypoint Args:[/catatonit -P] Flags:[] Attrs:map[json:true] Message:ENTRYPOINT /catatonit -P Heredocs:[] Original:ENTRYPOINT [\\\"/catatonit\\\", \\\"-P\\\"]}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"COMMIT localhost/podman-pause:4.9.4-dev-1708535009\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"COMMIT \\\"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"committing image with reference \\\"containers-storage:[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\" is allowed by policy\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"layer list: [\\\"5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1\\\"]\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"using \\\"/var/tmp/buildah3803674644\\\" to hold temporary data\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Tar with options on /home/podman_basic_user/.local/share/containers/storage/overlay/5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1/diff\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"layer \\\"5e0fb6bd2b80ddb4db7632fae1b286af2cecc3e996bb17e16cfd3ee8213d50a1\\\" size is 767488 bytes, uncompressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690, possibly-compressed digest sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"OCIv1 config = 
{\\\"created\\\":\\\"2025-08-02T16:38:52.296533662Z\\\",\\\"architecture\\\":\\\"amd64\\\",\\\"os\\\":\\\"linux\\\",\\\"config\\\":{\\\"Env\\\":[\\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\"],\\\"Entrypoint\\\":[\\\"/catatonit\\\",\\\"-P\\\"],\\\"Labels\\\":{\\\"io.buildah.version\\\":\\\"1.33.5\\\"}},\\\"rootfs\\\":{\\\"type\\\":\\\"layers\\\",\\\"diff_ids\\\":[\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"]},\\\"history\\\":[{\\\"created\\\":\\\"2025-08-02T16:38:52.295942274Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) COPY file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd in /catatonit \\\",\\\"empty_layer\\\":true},{\\\"created\\\":\\\"2025-08-02T16:38:52.299650983Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) ENTRYPOINT [\\\\\\\"/catatonit\\\\\\\", \\\\\\\"-P\\\\\\\"]\\\"}]}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"OCIv1 manifest = {\\\"schemaVersion\\\":2,\\\"mediaType\\\":\\\"application/vnd.oci.image.manifest.v1+json\\\",\\\"config\\\":{\\\"mediaType\\\":\\\"application/vnd.oci.image.config.v1+json\\\",\\\"digest\\\":\\\"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\",\\\"size\\\":668},\\\"layers\\\":[{\\\"mediaType\\\":\\\"application/vnd.oci.image.layer.v1.tar\\\",\\\"digest\\\":\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\",\\\"size\\\":767488}],\\\"annotations\\\":{\\\"org.opencontainers.image.base.digest\\\":\\\"\\\",\\\"org.opencontainers.image.base.name\\\":\\\"\\\"}}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Docker v2s2 config = {\\\"created\\\":\\\"2025-08-02T16:38:52.296533662Z\\\",\\\"container\\\":\\\"0aa3a7343fabfcbf854b2db926e8ef83982bdca7985f430aea1f98b430ae4469\\\",\\\"container_config\\\":{\\\"Hostname\\\":\\\"\\\",\\\"Domainname\\\":\\\"\\\",\\\"User\\\":\\\"\\\",\\\"AttachStdin\\\":false,\\\"AttachStdout\\\":false,\\\"AttachStderr\\\":false,\\\"Tty\\\":false,\\\"OpenStdin\\\":false,\\\"StdinOnce\\\":false,\\\"Env\\\":[\\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\"],\\\"Cmd\\\":[],\\\"Image\\\":\\\"\\\",\\\"Volumes\\\":{},\\\"WorkingDir\\\":\\\"\\\",\\\"Entrypoint\\\":[\\\"/catatonit\\\",\\\"-P\\\"],\\\"OnBuild\\\":[],\\\"Labels\\\":{\\\"io.buildah.version\\\":\\\"1.33.5\\\"}},\\\"config\\\":{\\\"Hostname\\\":\\\"\\\",\\\"Domainname\\\":\\\"\\\",\\\"User\\\":\\\"\\\",\\\"AttachStdin\\\":false,\\\"AttachStdout\\\":false,\\\"AttachStderr\\\":false,\\\"Tty\\\":false,\\\"OpenStdin\\\":false,\\\"StdinOnce\\\":false,\\\"Env\\\":[\\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\"],\\\"Cmd\\\":[],\\\"Image\\\":\\\"\\\",\\\"Volumes\\\":{},\\\"WorkingDir\\\":\\\"\\\",\\\"Entrypoint\\\":[\\\"/catatonit\\\",\\\"-P\\\"],\\\"OnBuild\\\":[],\\\"Labels\\\":{\\\"io.buildah.version\\\":\\\"1.33.5\\\"}},\\\"architecture\\\":\\\"amd64\\\",\\\"os\\\":\\\"linux\\\",\\\"rootfs\\\":{\\\"type\\\":\\\"layers\\\",\\\"diff_ids\\\":[\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"]},\\\"history\\\":[{\\\"created\\\":\\\"2025-08-02T16:38:52.295942274Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) COPY file:b0770577934d9536a010638e2bd49b7571c5d0a878a528b9fdba01abe9f2d5dd in /catatonit \\\",\\\"empty_layer\\\":true},{\\\"created\\\":\\\"2025-08-02T16:38:52.299650983Z\\\",\\\"created_by\\\":\\\"/bin/sh -c #(nop) ENTRYPOINT [\\\\\\\"/catatonit\\\\\\\", \\\\\\\"-P\\\\\\\"]\\\"}]}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Docker v2s2 
manifest = {\\\"schemaVersion\\\":2,\\\"mediaType\\\":\\\"application/vnd.docker.distribution.manifest.v2+json\\\",\\\"config\\\":{\\\"mediaType\\\":\\\"application/vnd.docker.container.image.v1+json\\\",\\\"size\\\":1342,\\\"digest\\\":\\\"sha256:69b1a52f65cb5e3fa99e89b61152bda48cb5524edcedfdf2eac76a30c6778813\\\"},\\\"layers\\\":[{\\\"mediaType\\\":\\\"application/vnd.docker.image.rootfs.diff.tar\\\",\\\"size\\\":767488,\\\"digest\\\":\\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"}]}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using SQLite blob info cache at /home/podman_basic_user/.local/share/containers/cache/blob-info-cache-v1.sqlite\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"IsRunningImageAllowed for image containers-storage:\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\" Using transport \\\"containers-storage\\\" policy section \"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\" Requirement 0: allowed\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Overall: allowed\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"start reading config\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"finished reading config\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.docker.distribution.manifest.v1+json]\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"... will first try using the original manifest unmodified\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Checking if we can reuse blob sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690: general substitution = true, compression for MIME type \\\"application/vnd.oci.image.layer.v1.tar\\\" = true\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"reading layer \\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"No compression detected\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using original blob without modification\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Applying tar in /home/podman_basic_user/.local/share/containers/storage/overlay/d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690/diff\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"finished reading layer \\\"sha256:d2d0eb8a68f8cf95b9c7068be2f59961cd9dc579139bd79dee5eb65ea6de5690\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"No compression detected\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Compression change for blob sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778 (\\\"application/vnd.oci.image.config.v1+json\\\") not supported\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Using original blob without modification\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"setting image creation date to 2025-08-02 16:38:52.296533662 +0000 UTC\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"created new image ID \\\"60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\" with metadata \\\"{}\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"added name \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" to image 
\\\"60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]localhost/podman-pause:4.9.4-dev-1708535009\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"printing final image id \\\"60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Got pod cgroup as /libpod_parent/191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Inspecting image 60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Inspecting image 60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Inspecting image 60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"setting container name 191a369333e4-infra\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Allocated lock 1 for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" 
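Because kube play needs an infra container and no localhost/podman-pause image exists yet in the user's storage, podman synthesizes one on the fly, as the build steps above show: FROM scratch, COPY the host's catatonit binary, ENTRYPOINT ["/catatonit", "-P"], then commit and tag. A hand-built equivalent, assuming catatonit sits at the path the log shows (the tag is illustrative):

    # Rebuild the infra ("pause") image by hand
    ctx=$(mktemp -d)
    cp /usr/libexec/podman/catatonit "$ctx"/catatonit
    printf 'FROM scratch\nCOPY catatonit /catatonit\nENTRYPOINT ["/catatonit", "-P"]\n' > "$ctx"/Containerfile
    podman build -t localhost/podman-pause:demo "$ctx"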
level=debug msg=\"Created container \\\"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Container \\\"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\\\" has work directory \\\"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Container \\\"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\\\" has run directory \\\"/run/user/3001/containers/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Pulling image quay.io/libpod/testimage:20210610 (policy: missing)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" 
in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Inspecting image 
9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"adding container to pod httpd1\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"setting container name httpd1-httpd1\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding mount /proc\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding mount /dev\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding mount /dev/pts\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding mount /dev/mqueue\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding mount /sys\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding mount /sys/fs/cgroup\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Allocated lock 2 for container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Created container \\\"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Container \\\"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\\\" has work directory \\\"/home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Container \\\"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\\\" has run directory \\\"/run/user/3001/containers/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Strongconnecting node af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Pushed af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae onto stack\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Finishing node af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae. Popped af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae off stack\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Strongconnecting node 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Pushed 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 onto stack\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Finishing node 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939. 
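The repeated lookups of quay.io/libpod/testimage:20210610 above all resolve from local storage, so the pull with policy "missing" is a no-op. The same pull-if-absent behaviour can be written as:

    # Pull only when the image is not already present locally
    podman image exists quay.io/libpod/testimage:20210610 \
        || podman pull quay.io/libpod/testimage:20210610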
Popped 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 off stack\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/4MRAZCR7JRY45YIIWXX5WJJ6A6,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/work,userxattr,context=\\\"system_u:object_r:container_file_t:s0:c389,c456\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Made network namespace at /run/user/3001/netns/netns-6723b79b-4d64-cc19-6a91-87394c058c53 for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Mounted container \\\"af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\\\" at \\\"/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/merged\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Created root filesystem for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae at /home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/merged\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"creating rootless network namespace with name \\\"rootless-netns-d22c9f230d0691b8f418\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"slirp4netns command: /bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -r 3 --netns-type=path /run/user/3001/netns/rootless-netns-d22c9f230d0691b8f418 tap0\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"The path of /etc/resolv.conf in the mount ns is \\\"/etc/resolv.conf\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"cni result for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:1e:08:d6:95:5e:f1 Sandbox:} {Name:vethff7bc329 Mac:26:19:1e:a6:0a:11 Sandbox:} {Name:eth0 Mac:1e:8a:1a:f5:d1:2a Sandbox:/run/user/3001/netns/netns-6723b79b-4d64-cc19-6a91-87394c058c53}] [{Version:4 Interface:0xc000b96228 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-08-02T12:38:52-04:00\\\" level=info msg=\\\"Starting parent driver\\\"\\ntime=\\\"2025-08-02T12:38:52-04:00\\\" level=info msg=\\\"opaque=map[builtin.readypipepath:/run/user/3001/libpod/tmp/rootlessport3029357974/.bp-ready.pipe builtin.socketpath:/run/user/3001/libpod/tmp/rootlessport3029357974/.bp.sock]\\\"\\ntime=\\\"2025-08-02T12:38:52-04:00\\\" level=info msg=\\\"Starting child driver in child netns (\\\\\\\"/proc/self/exe\\\\\\\" [rootlessport-child])\\\"\\n\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-08-02T12:38:52-04:00\\\" level=info msg=\\\"Waiting for initComplete\\\"\\n\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-08-02T12:38:52-04:00\\\" level=info msg=\\\"initComplete is closed; parent and child established the communication channel\\\"\\n\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"rootlessport: 
time=\\\"2025-08-02T12:38:52-04:00\\\" level=info msg=\\\"Exposing ports [{ 80 15001 1 tcp}]\\\"\\n\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"rootlessport: time=\\\"2025-08-02T12:38:52-04:00\\\" level=info msg=Ready\\n\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"rootlessport is ready\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Workdir \\\"/\\\" resolved to host path \\\"/home/podman_basic_user/.local/share/containers/storage/overlay/7baaa5ea30ba9041c549e3a41dd506ca5108434c6c411af487419b60ba054a90/merged\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Created OCI spec for container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/config.json\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Got pod cgroup as \"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae -u af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata -p /run/user/3001/containers/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/pidfile -n 191a369333e4-infra --exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/3001/containers/overlay-containers/af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae]\"\n time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/libpod_parent: permission denied\"\n [conmon:d]: 
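Networking for this rootless pod is layered, as the slirp4netns and rootlessport records show: slirp4netns gives the rootless network namespace outbound connectivity, rootlessport forwards host TCP port 15001 (one of the ports labelled http_port_t earlier) to port 80 inside the pod, and the cni-podman1 bridge (10.89.0.0/24) plus the dnsname plugin's dnsmasq handle pod-side addressing and DNS. A hypothetical single-container equivalent of the same port mapping, reusing the image and command from this pod:

    # Illustrative only: mirrors the 15001->80 mapping of the played pod
    podman run -d --name httpd1-demo -p 15001:80 \
        quay.io/libpod/testimage:20210610 /bin/busybox-extras httpd -f -p 80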
failed to write to /proc/self/oom_score_adj: Permission denied\n \n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Received: 26841\"\n time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Got Conmon PID as 26831\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Created container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae in OCI runtime\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding nameserver(s) from network status of '[\\\"10.89.0.1\\\"]'\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Adding search domain(s) from network status of '[\\\"dns.podman\\\"]'\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Starting container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae with command [/catatonit -P]\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Started container af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/home/podman_basic_user/.local/share/containers/storage/overlay/l/S5QNMEV2IMLZOTXAJ3H4ZQCILN,upperdir=/home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/diff,workdir=/home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/work,userxattr,context=\\\"system_u:object_r:container_file_t:s0:c389,c456\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Mounted container \\\"98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\\\" at \\\"/home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/merged\\\"\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Created root filesystem for container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 at /home/podman_basic_user/.local/share/containers/storage/overlay/fc398049350e0c400b07ddfc62c1a479c9944e65a0e865ca93759d7a8a2d317e/merged\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Workdir \\\"/var/www\\\" resolved to a volume or mount\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Created OCI spec for container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 at /home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/config.json\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Got pod cgroup as \"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 -u 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 -r /usr/bin/runc -b /home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata -p /run/user/3001/containers/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/pidfile -n httpd1-httpd1 --exit-dir /run/user/3001/libpod/tmp/exits --full-attach -l 
k8s-file:/home/podman_basic_user/.local/share/containers/storage/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/3001/containers/overlay-containers/98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/podman_basic_user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/3001/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/3001/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /home/podman_basic_user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939]\"\n time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/conmon: permission denied\"\n [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied\n \n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Received: 26862\"\n time=\"2025-08-02T12:38:52-04:00\" level=info msg=\"Got Conmon PID as 26852\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Created container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 in OCI runtime\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Starting container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939 with command [/bin/busybox-extras httpd -f -p 80]\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Started container 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Called kube.PersistentPostRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)\"\n time=\"2025-08-02T12:38:52-04:00\" level=debug msg=\"Shutting down engines\"\nAug 02 12:38:52 managed-node2 platform-python[26618]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nAug 02 12:38:52 managed-node2 sudo[26615]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:38:53 managed-node2 sudo[26993]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbaqkfutgdfbadtjjacfxjqxvpyoigvo ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152733.1876235-16558-123988354219606/AnsiballZ_systemd.py'\nAug 02 12:38:53 managed-node2 sudo[26993]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:38:53 managed-node2 platform-python[26996]: ansible-systemd Invoked with daemon_reload=True scope=user daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None\nAug 02 12:38:53 managed-node2 systemd[25590]: 
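With the pod running once, the role hands it over to systemd: a user-scope daemon-reload above, then enabling and starting a templated unit below. The unit name is elided in this capture (name= is blank in the ansible-systemd records), but the description "A template for running K8s workloads via podman-kube-play" identifies podman's podman-kube@.service template, whose instance name is the systemd-escaped path of the kube file. A sketch of the same enablement under that assumption:

    # Enable the kube workload as a user service; instance = escaped kube-file path
    esc=$(systemd-escape /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)
    XDG_RUNTIME_DIR=/run/user/3001 systemctl --user daemon-reload
    XDG_RUNTIME_DIR=/run/user/3001 systemctl --user enable --now "podman-kube@${esc}.service"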
Reloading.\nAug 02 12:38:53 managed-node2 sudo[26993]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:38:53 managed-node2 sudo[27130]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmqxburdibhftsxxfjnfharsaboqrlrj ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152733.8051448-16591-22459124935909/AnsiballZ_systemd.py'\nAug 02 12:38:53 managed-node2 sudo[27130]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:38:54 managed-node2 platform-python[27133]: ansible-systemd Invoked with name= scope=user enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None\nAug 02 12:38:54 managed-node2 systemd[25590]: Reloading.\nAug 02 12:38:54 managed-node2 sudo[27130]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:38:54 managed-node2 dnsmasq[26816]: listening on cni-podman1(#3): fe80::1c08:d6ff:fe95:5ef1%cni-podman1\nAug 02 12:38:54 managed-node2 sudo[27269]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bebktvlckbeotrlmhvsnejtmeicquqjz ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152734.429291-16621-12283105958346/AnsiballZ_systemd.py'\nAug 02 12:38:54 managed-node2 sudo[27269]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:38:54 managed-node2 platform-python[27272]: ansible-systemd Invoked with name= scope=user state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None\nAug 02 12:38:54 managed-node2 systemd[25590]: Created slice podman\\x2dkube.slice.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:54 managed-node2 systemd[25590]: Starting A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit UNIT has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun starting up.\nAug 02 12:38:54 managed-node2 conmon[26831]: conmon af16b69d72cc4526d63a : container 26841 exited with status 137\nAug 02 12:38:54 managed-node2 conmon[26852]: conmon 98c476488369c461640e : container 26862 exited with status 137\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:54-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae)\"\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:54-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:54-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:54-04:00\" 
level=info msg=\"Using sqlite as database backend\"\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:54-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939)\"\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:54-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:54-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nAug 02 12:38:54 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:54-04:00\" level=info msg=\"Using sqlite as database backend\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using graph driver overlay\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using transient store: false\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that metacopy is not being used\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Initializing event backend file\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug 
msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=info msg=\"Setting parallel job count to 7\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using graph driver overlay\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using transient store: false\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that 
metacopy is not being used\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Initializing event backend file\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=info msg=\"Setting parallel job count to 7\"\nAug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered disabled state\nAug 02 12:38:55 managed-node2 kernel: device vethff7bc329 left promiscuous mode\nAug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethff7bc329) entered disabled state\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup 98c476488369c461640e88473117b57874e56e25a2772fd5e4c683ac7bb56939)\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27310]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Shutting down engines\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: 
time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /home/podman_basic_user/.local/share/containers/storage --runroot /run/user/3001/containers --log-level debug --cgroup-manager cgroupfs --tmpdir /run/user/3001/libpod/tmp --network-config-dir --network-backend cni --volumepath /home/podman_basic_user/.local/share/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --events-backend file --syslog container cleanup af16b69d72cc4526d63a51167a73dfefa453133cd58943e138ceb22870dc35ae)\"\nAug 02 12:38:55 managed-node2 /usr/bin/podman[27294]: time=\"2025-08-02T12:38:55-04:00\" level=debug msg=\"Shutting down engines\"\nAug 02 12:38:55 managed-node2 podman[27278]: Pods stopped:\nAug 02 12:38:55 managed-node2 podman[27278]: 191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f\nAug 02 12:38:55 managed-node2 podman[27278]: Pods removed:\nAug 02 12:38:55 managed-node2 podman[27278]: 191a369333e43e0cc3c4cb3c211b5270b06c4cc24ba67c05b75efe9cee70907f\nAug 02 12:38:55 managed-node2 podman[27278]: Secrets removed:\nAug 02 12:38:55 managed-node2 podman[27278]: Volumes removed:\nAug 02 12:38:55 managed-node2 systemd[25590]: Started rootless-netns-dd6b3697.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:55 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethfa4f074b: link is not ready\nAug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered blocking state\nAug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered disabled state\nAug 02 12:38:55 managed-node2 kernel: device vethfa4f074b entered promiscuous mode\nAug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered blocking state\nAug 02 12:38:55 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered forwarding state\nAug 02 12:38:55 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethfa4f074b: link becomes ready\nAug 02 12:38:55 managed-node2 dnsmasq[27525]: listening on cni-podman1(#3): 10.89.0.1\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: started, version 2.79 cachesize 150\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: using local addresses only for domain dns.podman\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: reading /etc/resolv.conf\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: using local addresses only for domain dns.podman\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.0.2.3#53\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.29.169.13#53\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.29.170.12#53\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: using nameserver 10.2.32.1#53\nAug 02 12:38:55 managed-node2 dnsmasq[27527]: read /run/user/3001/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nAug 02 12:38:55 managed-node2 podman[27278]: Pod:\nAug 02 12:38:55 managed-node2 podman[27278]: 5e85626264c7b01bb8a75935ff611e0d589096a98e89e05fd7aa9037e892318a\nAug 02 12:38:55 managed-node2 podman[27278]: Container:\nAug 02 12:38:55 managed-node2 podman[27278]: 
bc86eb03c7fb7110b2363dd55ed2866f782f16e8d8374c8a82784079a47558f1\nAug 02 12:38:55 managed-node2 systemd[25590]: Started A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:38:55 managed-node2 sudo[27269]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:38:56 managed-node2 platform-python[27703]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:38:56 managed-node2 dnsmasq[27527]: listening on cni-podman1(#3): fe80::a0c6:53ff:fed6:1184%cni-podman1\nAug 02 12:38:57 managed-node2 platform-python[27827]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:38:58 managed-node2 platform-python[27952]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:38:59 managed-node2 platform-python[28076]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:00 managed-node2 platform-python[28199]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:00 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:39:00 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:39:01 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:39:01 managed-node2 platform-python[28489]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:39:02 managed-node2 platform-python[28612]: ansible-file Invoked with 
path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:02 managed-node2 platform-python[28735]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nAug 02 12:39:03 managed-node2 platform-python[28834]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd2.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152742.5335138-17006-203119541001881/source _original_basename=tmpvkt7buq9 follow=False checksum=2a8a08ffe6bf0159dd7563e043ed3c303a77cff4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nAug 02 12:39:03 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:39:03 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice.\n-- Subject: Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.7856] manager: (cni-podman1): new Bridge device (/org/freedesktop/NetworkManager/Devices/3)\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.7870] manager: (veth502e5636): new Veth device (/org/freedesktop/NetworkManager/Devices/4)\nAug 02 12:39:03 managed-node2 systemd-udevd[29006]: Using default interface naming scheme 'rhel-8.0'.\nAug 02 12:39:03 managed-node2 systemd-udevd[29006]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nAug 02 12:39:03 managed-node2 systemd-udevd[29006]: Could not generate persistent MAC address for cni-podman1: No such file or directory\nAug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth502e5636: link is not ready\nAug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered blocking state\nAug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered disabled state\nAug 02 12:39:03 managed-node2 kernel: device veth502e5636 entered promiscuous mode\nAug 02 12:39:03 managed-node2 systemd-udevd[29007]: link_config: autonegotiation is unset or 
enabled, the speed and duplex are not writable.\nAug 02 12:39:03 managed-node2 systemd-udevd[29007]: Could not generate persistent MAC address for veth502e5636: No such file or directory\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8196] device (cni-podman1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8201] device (cni-podman1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8209] device (cni-podman1): Activation: starting connection 'cni-podman1' (0ddcaf44-4d9a-41cb-acd9-42060ce7dc76)\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8210] device (cni-podman1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8212] device (cni-podman1): state change: prepare -> config (reason 'none', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8215] device (cni-podman1): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8217] device (cni-podman1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 dbus-daemon[595]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.5' (uid=0 pid=665 comm=\"/usr/sbin/NetworkManager --no-daemon \" label=\"system_u:system_r:NetworkManager_t:s0\")\nAug 02 12:39:03 managed-node2 systemd[1]: Starting Network Manager Script Dispatcher Service...\n-- Subject: Unit NetworkManager-dispatcher.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit NetworkManager-dispatcher.service has begun starting up.\nAug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready\nAug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready\nAug 02 12:39:03 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth502e5636: link becomes ready\nAug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered blocking state\nAug 02 12:39:03 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered forwarding state\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8506] device (veth502e5636): carrier: link connected\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8508] device (cni-podman1): carrier: link connected\nAug 02 12:39:03 managed-node2 dbus-daemon[595]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8678] device (cni-podman1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8680] device (cni-podman1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')\nAug 02 12:39:03 managed-node2 NetworkManager[665]: [1754152743.8684] device (cni-podman1): Activation: successful, device activated.\nAug 02 12:39:03 managed-node2 systemd[1]: Started Network Manager Script Dispatcher Service.\n-- Subject: Unit NetworkManager-dispatcher.service has finished start-up\n-- 
Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit NetworkManager-dispatcher.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:03 managed-node2 dnsmasq[29128]: listening on cni-podman1(#3): 10.89.0.1\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: started, version 2.79 cachesize 150\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: using local addresses only for domain dns.podman\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: reading /etc/resolv.conf\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: using local addresses only for domain dns.podman\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: using nameserver 10.29.169.13#53\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: using nameserver 10.29.170.12#53\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: using nameserver 10.2.32.1#53\nAug 02 12:39:03 managed-node2 dnsmasq[29132]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nAug 02 12:39:04 managed-node2 systemd[1]: Started libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope.\n-- Subject: Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:04 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/13/attach}\nAug 02 12:39:04 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : terminal_ctrl_fd: 13\nAug 02 12:39:04 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : winsz read side: 17, winsz write side: 18\nAug 02 12:39:04 managed-node2 systemd[1]: Started libcontainer container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.\n-- Subject: Unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:04 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : container PID: 29144\nAug 02 12:39:04 managed-node2 systemd[1]: Started libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope.\n-- Subject: Unit libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}\nAug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : terminal_ctrl_fd: 12\nAug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : winsz read side: 16, winsz write side: 17\nAug 02 12:39:04 managed-node2 systemd[1]: Started libcontainer container 
071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.\n-- Subject: Unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:04 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : container PID: 29166\nAug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml\nAug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pod:\n 90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496\n Container:\n 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\n \nAug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2025-08-02T12:39:03-04:00\" level=info msg=\"/usr/bin/podman filtering at log level debug\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\n time=\"2025-08-02T12:39:03-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable 
found for OCI runtime runj: invalid argument\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:36:24.472660556 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice for parent machine.slice and name libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Got pod 
cgroup as machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Inspecting image e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Inspecting image e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Inspecting image e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"using systemd mode: false\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"setting container name 90922c8ca930-infra\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Allocated lock 1 for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Cached value indicated that idmapped mounts for overlay are not supported\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Check for idmapped mounts support \"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Created container \\\"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Container \\\"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\\\" has work directory 
\\\"/var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Container \\\"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\\\" has run directory \\\"/run/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Pulling image quay.io/libpod/testimage:20210610 (policy: missing)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n 
time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Looking up image \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Trying \\\"quay.io/libpod/testimage:20210610\\\" ...\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Found image \\\"quay.io/libpod/testimage:20210610\\\" as \\\"quay.io/libpod/testimage:20210610\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f)\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Inspecting image 9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"using systemd 
mode: false\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"adding container to pod httpd2\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"setting container name httpd2-httpd2\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Loading seccomp profile from \\\"/usr/share/containers/seccomp.json\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=info msg=\"Sysctl net.ipv4.ping_group_range=0 0 ignored in containers.conf, since Network Namespace set to host\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Adding mount /proc\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Adding mount /dev\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Adding mount /dev/pts\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Adding mount /dev/mqueue\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Adding mount /sys\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Adding mount /sys/fs/cgroup\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Allocated lock 2 for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Created container \\\"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Container \\\"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\\\" has work directory \\\"/var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Container \\\"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\\\" has run directory \\\"/run/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Strongconnecting node ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Pushed ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 onto stack\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Finishing node ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89. Popped ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 off stack\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Strongconnecting node 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Pushed 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb onto stack\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Finishing node 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb. 
Popped 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb off stack\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/ZR5XOSU7O7VXY2BDL65A7UWKU6,upperdir=/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/diff,workdir=/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/work,nodev,metacopy=on,context=\\\"system_u:object_r:container_file_t:s0:c784,c888\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Mounted container \\\"ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\\\" at \\\"/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/merged\\\"\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Created root filesystem for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 at /var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/merged\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"Made network namespace at /run/netns/netns-057bdf77-0e93-7270-6a44-66c62177cd73 for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\"\n time=\"2025-08-02T12:39:03-04:00\" level=debug msg=\"cni result for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 network podman-default-kube-network: &{0.4.0 [{Name:cni-podman1 Mac:96:46:b4:0c:81:50 Sandbox:} {Name:veth502e5636 Mac:4a:ea:32:89:32:4a Sandbox:} {Name:eth0 Mac:ae:ce:ef:99:2c:87 Sandbox:/run/netns/netns-057bdf77-0e93-7270-6a44-66c62177cd73}] [{Version:4 Interface:0xc00087bc58 Address:{IP:10.89.0.2 Mask:ffffff00} Gateway:10.89.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:}] {[10.89.0.1] [dns.podman] []}}\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Setting Cgroups for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 to machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice:libpod:ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Workdir \\\"/\\\" resolved to host path \\\"/var/lib/containers/storage/overlay/8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1/merged\\\"\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created OCI spec for container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 at /var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/config.json\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice for parent machine.slice and name libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Got pod cgroup as 
machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 -u ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata -p /run/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/pidfile -n 90922c8ca930-infra --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89]\"\n time=\"2025-08-02T12:39:04-04:00\" level=info msg=\"Running conmon under slice machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice and unitName libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Received: 29144\"\n time=\"2025-08-02T12:39:04-04:00\" level=info msg=\"Got Conmon PID as 29134\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 in OCI runtime\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Adding nameserver(s) from network status of '[\\\"10.89.0.1\\\"]'\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Adding search domain(s) from network status of '[\\\"dns.podman\\\"]'\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Starting container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89 with command [/catatonit -P]\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Started container ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"overlay: 
mount_data=lowerdir=/var/lib/containers/storage/overlay/l/HKP6QAO57O46FRNHGFBKAKZZRC,upperdir=/var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/diff,workdir=/var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/work,nodev,metacopy=on,context=\\\"system_u:object_r:container_file_t:s0:c784,c888\\\"\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Mounted container \\\"071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\\\" at \\\"/var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/merged\\\"\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created root filesystem for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb at /var/lib/containers/storage/overlay/7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767/merged\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"/etc/system-fips does not exist on host, not mounting FIPS mode subscription\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Setting Cgroups for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb to machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice:libpod:071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"reading hooks from /usr/share/containers/oci/hooks.d\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Workdir \\\"/var/www\\\" resolved to a volume or mount\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created OCI spec for container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb at /var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/config.json\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice for parent machine.slice and name libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"/usr/bin/conmon messages will be logged to syslog\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"running conmon: /usr/bin/conmon\" args=\"[--api-version 1 -c 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb -u 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata -p /run/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/pidfile -n httpd2-httpd2 --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb/userdata/conmon.pid 
--exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg cni --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb]\"\n time=\"2025-08-02T12:39:04-04:00\" level=info msg=\"Running conmon under slice machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice and unitName libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Received: 29166\"\n time=\"2025-08-02T12:39:04-04:00\" level=info msg=\"Got Conmon PID as 29155\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Created container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb in OCI runtime\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Starting container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb with command [/bin/busybox-extras httpd -f -p 80]\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Started container 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Called kube.PersistentPostRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)\"\n time=\"2025-08-02T12:39:04-04:00\" level=debug msg=\"Shutting down engines\"\nAug 02 12:39:04 managed-node2 platform-python[28959]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nAug 02 12:39:04 managed-node2 platform-python[29297]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None\nAug 02 12:39:04 managed-node2 systemd[1]: Reloading.\nAug 02 12:39:05 managed-node2 dnsmasq[29132]: listening on cni-podman1(#3): fe80::9446:b4ff:fe0c:8150%cni-podman1\nAug 02 12:39:05 managed-node2 platform-python[29458]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None\nAug 02 12:39:05 managed-node2 systemd[1]: Reloading.\nAug 02 12:39:06 managed-node2 platform-python[29621]: ansible-systemd Invoked with name= scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None\nAug 02 12:39:06 managed-node2 systemd[1]: Created slice system-podman\\x2dkube.slice.\n-- Subject: Unit system-podman\\x2dkube.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit system-podman\\x2dkube.slice has finished 
starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:06 managed-node2 systemd[1]: Starting A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun starting up.\nAug 02 12:39:06 managed-node2 conmon[29134]: conmon ef1687323c945d3eead4 : container 29144 exited with status 137\nAug 02 12:39:06 managed-node2 systemd[1]: libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 systemd[1]: libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope: Consumed 32ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope completed and consumed the indicated resources.\nAug 02 12:39:06 managed-node2 conmon[29155]: conmon 071b72fb9b8953f4c690 : container 29166 exited with status 137\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89)\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nAug 02 12:39:06 managed-node2 systemd[1]: libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=info msg=\"Using sqlite as database backend\"\nAug 02 12:39:06 managed-node2 systemd[1]: libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope: Consumed 33ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope completed and consumed the indicated resources.\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Called cleanup.PersistentPreRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage 
--log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb)\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Setting custom database backend: \\\"sqlite\\\"\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=info msg=\"Using sqlite as database backend\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using graph driver overlay\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using transient store: false\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Initializing event backend file\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nAug 02 12:39:06 managed-node2 
/usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=info msg=\"Setting parallel job count to 7\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using graph driver overlay\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using transient store: false\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: 
time=\"2025-08-02T12:39:06-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Initializing event backend file\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=info msg=\"Setting parallel job count to 7\"\nAug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay-7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-7dc51b90cbe66e1101e8a4deef1e2c7564683b0f26cdb1c61be46e2691ec9767-merged.mount has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup 071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb)\"\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29653]: 
time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Shutting down engines\"\nAug 02 12:39:06 managed-node2 systemd[1]: libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-071b72fb9b8953f4c6902c265cbc0c8c6197bb6b0029ec39ad2de73b828a51cb.scope has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered disabled state\nAug 02 12:39:06 managed-node2 kernel: device veth502e5636 left promiscuous mode\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(veth502e5636) entered disabled state\nAug 02 12:39:06 managed-node2 systemd[1]: run-netns-netns\\x2d057bdf77\\x2d0e93\\x2d7270\\x2d6a44\\x2d66c62177cd73.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-netns-netns\\x2d057bdf77\\x2d0e93\\x2d7270\\x2d6a44\\x2d66c62177cd73.mount has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89-userdata-shm.mount has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay-8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-8359c6ef34c10a4b08d16a1e6dcc5c06c9c16194822b900ef3c96770dde0b7c1-merged.mount has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Called cleanup.PersistentPostRunE(/usr/bin/podman --root /var/lib/containers/storage --runroot /run/containers/storage --log-level debug --cgroup-manager systemd --tmpdir /run/libpod --network-config-dir --network-backend cni --volumepath /var/lib/containers/storage/volumes --db-backend sqlite --transient-store=false --runtime runc --storage-driver overlay --storage-opt overlay.mountopt=nodev,metacopy=on --events-backend file --syslog container cleanup ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89)\"\nAug 02 12:39:06 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 /usr/bin/podman[29646]: time=\"2025-08-02T12:39:06-04:00\" level=debug msg=\"Shutting down engines\"\nAug 02 12:39:06 managed-node2 systemd[1]: libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has successfully entered the 'dead' state.\nAug 02 12:39:06 managed-node2 systemd[1]: Stopped 
libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope.\n-- Subject: Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-ef1687323c945d3eead48cef55cd6938131467b3e2011ef9db2e83a2e69dcc89.scope has finished shutting down.\nAug 02 12:39:06 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice.\n-- Subject: Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice has finished shutting down.\nAug 02 12:39:06 managed-node2 systemd[1]: machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice: Consumed 209ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496.slice completed and consumed the indicated resources.\nAug 02 12:39:06 managed-node2 podman[29628]: Pods stopped:\nAug 02 12:39:06 managed-node2 podman[29628]: 90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496\nAug 02 12:39:06 managed-node2 podman[29628]: Pods removed:\nAug 02 12:39:06 managed-node2 podman[29628]: 90922c8ca930e722bb0bbd3a647bdae3a2e810f555036c6daf6c9db928410496\nAug 02 12:39:06 managed-node2 podman[29628]: Secrets removed:\nAug 02 12:39:06 managed-node2 podman[29628]: Volumes removed:\nAug 02 12:39:06 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice.\n-- Subject: Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:06 managed-node2 systemd[1]: Started libcontainer container 1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.\n-- Subject: Unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:06 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethb3f38e19: link is not ready\nAug 02 12:39:06 managed-node2 NetworkManager[665]: [1754152746.7477] manager: (vethb3f38e19): new Veth device (/org/freedesktop/NetworkManager/Devices/5)\nAug 02 12:39:06 managed-node2 systemd-udevd[29789]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nAug 02 12:39:06 managed-node2 systemd-udevd[29789]: Could not generate persistent MAC address for vethb3f38e19: No such file or directory\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered blocking state\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state\nAug 02 
12:39:06 managed-node2 kernel: device vethb3f38e19 entered promiscuous mode\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered blocking state\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered forwarding state\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state\nAug 02 12:39:06 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb3f38e19: link becomes ready\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered blocking state\nAug 02 12:39:06 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered forwarding state\nAug 02 12:39:06 managed-node2 NetworkManager[665]: [1754152746.7761] device (vethb3f38e19): carrier: link connected\nAug 02 12:39:06 managed-node2 NetworkManager[665]: [1754152746.7763] device (cni-podman1): carrier: link connected\nAug 02 12:39:06 managed-node2 dnsmasq[29859]: listening on cni-podman1(#3): 10.89.0.1\nAug 02 12:39:06 managed-node2 dnsmasq[29859]: listening on cni-podman1(#3): fe80::9446:b4ff:fe0c:8150%cni-podman1\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: started, version 2.79 cachesize 150\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: using local addresses only for domain dns.podman\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: reading /etc/resolv.conf\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: using local addresses only for domain dns.podman\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: using nameserver 10.29.169.13#53\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: using nameserver 10.29.170.12#53\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: using nameserver 10.2.32.1#53\nAug 02 12:39:06 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nAug 02 12:39:06 managed-node2 systemd[1]: Started libcontainer container 36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.\n-- Subject: Unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:07 managed-node2 systemd[1]: Started libcontainer container 58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.\n-- Subject: Unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:07 managed-node2 podman[29628]: Pod:\nAug 02 12:39:07 managed-node2 podman[29628]: 7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3\nAug 02 12:39:07 managed-node2 podman[29628]: Container:\nAug 02 12:39:07 managed-node2 podman[29628]: 58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5\nAug 02 12:39:07 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished start-up\n-- Defined-By: systemd\n-- Support: 
https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:08 managed-node2 platform-python[30036]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:39:09 managed-node2 platform-python[30161]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:10 managed-node2 platform-python[30285]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:11 managed-node2 platform-python[30408]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:12 managed-node2 platform-python[30696]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:39:13 managed-node2 platform-python[30819]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:13 managed-node2 platform-python[30942]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nAug 02 12:39:13 managed-node2 platform-python[31041]: ansible-copy Invoked with dest=/etc/containers/ansible-kubernetes.d/httpd3.yml owner=root group=0 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1754152753.1569695-17471-186787888155164/source _original_basename=tmpca25d1vk follow=False checksum=0ee95d54856ad9dce4aa168ba4cfda0f7aaf74cc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None regexp=None delimiter=None\nAug 02 12:39:13 managed-node2 systemd[1]: NetworkManager-dispatcher.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- 
The unit NetworkManager-dispatcher.service has successfully entered the 'dead' state.\nAug 02 12:39:14 managed-node2 platform-python[31167]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:39:14 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice.\n-- Subject: Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): vethe290c1c0: link is not ready\nAug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered blocking state\nAug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state\nAug 02 12:39:14 managed-node2 kernel: device vethe290c1c0 entered promiscuous mode\nAug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered blocking state\nAug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered forwarding state\nAug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state\nAug 02 12:39:14 managed-node2 NetworkManager[665]: [1754152754.3788] manager: (vethe290c1c0): new Veth device (/org/freedesktop/NetworkManager/Devices/6)\nAug 02 12:39:14 managed-node2 systemd-udevd[31214]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nAug 02 12:39:14 managed-node2 systemd-udevd[31214]: Could not generate persistent MAC address for vethe290c1c0: No such file or directory\nAug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready\nAug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready\nAug 02 12:39:14 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe290c1c0: link becomes ready\nAug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered blocking state\nAug 02 12:39:14 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered forwarding state\nAug 02 12:39:14 managed-node2 NetworkManager[665]: [1754152754.3907] device (vethe290c1c0): carrier: link connected\nAug 02 12:39:14 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses\nAug 02 12:39:14 managed-node2 systemd[1]: Started libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope.\n-- Subject: Unit libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:14 managed-node2 
systemd[1]: Started libcontainer container 757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.\n-- Subject: Unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:14 managed-node2 systemd[1]: Started libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope.\n-- Subject: Unit libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:14 managed-node2 systemd[1]: Started libcontainer container c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.\n-- Subject: Unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:15 managed-node2 platform-python[31446]: ansible-systemd Invoked with daemon_reload=True scope=system daemon_reexec=False no_block=False name=None state=None enabled=None force=None masked=None user=None\nAug 02 12:39:15 managed-node2 systemd[1]: Reloading.\nAug 02 12:39:15 managed-node2 platform-python[31599]: ansible-systemd Invoked with name= scope=system enabled=True daemon_reload=False daemon_reexec=False no_block=False state=None force=None masked=None user=None\nAug 02 12:39:16 managed-node2 systemd[1]: Reloading.\nAug 02 12:39:16 managed-node2 platform-python[31762]: ansible-systemd Invoked with name= scope=system state=started daemon_reload=False daemon_reexec=False no_block=False enabled=None force=None masked=None user=None\nAug 02 12:39:16 managed-node2 systemd[1]: Starting A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun starting up.\nAug 02 12:39:16 managed-node2 systemd[1]: libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 systemd[1]: libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope: Consumed 33ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope completed and consumed the indicated resources.\nAug 02 12:39:16 managed-node2 systemd[1]: libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: 
systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 systemd[1]: libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope: Consumed 35ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope completed and consumed the indicated resources.\nAug 02 12:39:16 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nAug 02 12:39:16 managed-node2 systemd[1]: var-lib-containers-storage-overlay-560605836dd57768a1625bf83fb2efde4d0b4be2bd75173250f2c981226dcfec-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-560605836dd57768a1625bf83fb2efde4d0b4be2bd75173250f2c981226dcfec-merged.mount has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 systemd[1]: libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-c91c1dc0fc3844d2d0646b20988d801db54cc522691363260a59fa62e7f00bb8.scope has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state\nAug 02 12:39:16 managed-node2 kernel: device vethe290c1c0 left promiscuous mode\nAug 02 12:39:16 managed-node2 kernel: cni-podman1: port 2(vethe290c1c0) entered disabled state\nAug 02 12:39:16 managed-node2 systemd[1]: run-netns-netns\\x2d925a2bce\\x2dbdb1\\x2deec4\\x2d32ca\\x2dde6b846181b3.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-netns-netns\\x2d925a2bce\\x2dbdb1\\x2deec4\\x2d32ca\\x2dde6b846181b3.mount has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b-userdata-shm.mount has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 systemd[1]: var-lib-containers-storage-overlay-aecbd6406a5b43a9f34c80dc98df8e948a604d44b70ab60cb4952ed4aea64143-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-aecbd6406a5b43a9f34c80dc98df8e948a604d44b70ab60cb4952ed4aea64143-merged.mount has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 systemd[1]: libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-conmon-757a27c752169bcc239c71a97efc4bdae91365f5c1f07ba574bb9db3168f980b.scope has successfully entered the 'dead' state.\nAug 02 12:39:16 managed-node2 
systemd[1]: Removed slice cgroup machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice.\n-- Subject: Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice has finished shutting down.\nAug 02 12:39:16 managed-node2 systemd[1]: machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice: Consumed 199ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd.slice completed and consumed the indicated resources.\nAug 02 12:39:17 managed-node2 podman[31769]: Pods stopped:\nAug 02 12:39:17 managed-node2 podman[31769]: c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd\nAug 02 12:39:17 managed-node2 podman[31769]: Pods removed:\nAug 02 12:39:17 managed-node2 podman[31769]: c1cc1d3a8c218c834958ac986285b0680d72018f6c1199f564653a3c6af0b1bd\nAug 02 12:39:17 managed-node2 podman[31769]: Secrets removed:\nAug 02 12:39:17 managed-node2 podman[31769]: Volumes removed:\nAug 02 12:39:17 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice.\n-- Subject: Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:17 managed-node2 systemd[1]: Started libcontainer container c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.\n-- Subject: Unit libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:17 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_UP): veth69cd15af: link is not ready\nAug 02 12:39:17 managed-node2 NetworkManager[665]: [1754152757.2610] manager: (veth69cd15af): new Veth device (/org/freedesktop/NetworkManager/Devices/7)\nAug 02 12:39:17 managed-node2 systemd-udevd[31935]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.\nAug 02 12:39:17 managed-node2 systemd-udevd[31935]: Could not generate persistent MAC address for veth69cd15af: No such file or directory\nAug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered blocking state\nAug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered disabled state\nAug 02 12:39:17 managed-node2 kernel: device veth69cd15af entered promiscuous mode\nAug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered blocking state\nAug 02 12:39:17 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered forwarding state\nAug 02 12:39:17 managed-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth69cd15af: link becomes ready\nAug 02 12:39:17 managed-node2 NetworkManager[665]: [1754152757.2829] 
device (veth69cd15af): carrier: link connected\nAug 02 12:39:17 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 2 addresses\nAug 02 12:39:17 managed-node2 systemd[1]: Started libcontainer container 686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.\n-- Subject: Unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:17 managed-node2 systemd[1]: Started libcontainer container 42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.\n-- Subject: Unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:17 managed-node2 podman[31769]: Pod:\nAug 02 12:39:17 managed-node2 podman[31769]: 728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748\nAug 02 12:39:17 managed-node2 podman[31769]: Container:\nAug 02 12:39:17 managed-node2 podman[31769]: 42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c\nAug 02 12:39:17 managed-node2 systemd[1]: Started A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:18 managed-node2 sudo[32165]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcqlhppzgtldczaoizfnuaorgtcfrvcv ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152758.2331746-17700-125571240428666/AnsiballZ_command.py'\nAug 02 12:39:18 managed-node2 sudo[32165]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:39:18 managed-node2 platform-python[32168]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:18 managed-node2 systemd[25590]: Started podman-32177.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:18 managed-node2 sudo[32165]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:39:18 managed-node2 platform-python[32306]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:19 managed-node2 platform-python[32437]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None 
creates=None removes=None stdin=None\nAug 02 12:39:19 managed-node2 sudo[32575]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fljlyrnggisgqfazmxwyyzqdsrmpnozw ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152759.5230556-17753-36771943884398/AnsiballZ_command.py'\nAug 02 12:39:19 managed-node2 sudo[32575]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:39:19 managed-node2 platform-python[32578]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:19 managed-node2 sudo[32575]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:39:20 managed-node2 platform-python[32704]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:20 managed-node2 platform-python[32830]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:21 managed-node2 platform-python[32956]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:21 managed-node2 platform-python[33080]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:21 managed-node2 rsyslogd[1025]: imjournal: journal files changed, reloading... 
[v8.2102.0-15.el8 try https://www.rsyslog.com/e/0 ]\nAug 02 12:39:22 managed-node2 platform-python[33205]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_ga01o8zo_podman/httpd1-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:22 managed-node2 platform-python[33329]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_ga01o8zo_podman/httpd2-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:22 managed-node2 platform-python[33453]: ansible-command Invoked with _raw_params=ls -alrtF /tmp/lsr_ga01o8zo_podman/httpd3-create warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:25 managed-node2 platform-python[33702]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:26 managed-node2 platform-python[33831]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:39:30 managed-node2 platform-python[33956]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:39:33 managed-node2 platform-python[34079]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nAug 02 12:39:33 managed-node2 platform-python[34206]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nAug 02 12:39:34 managed-node2 platform-python[34333]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['15001-15003/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:39:36 managed-node2 platform-python[34456]: ansible-dnf Invoked with name=['python3-libselinux', 'python3-policycoreutils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:39:39 managed-node2 
platform-python[34579]: ansible-dnf Invoked with name=['grubby'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:39:42 managed-node2 platform-python[34702]: ansible-dnf Invoked with name=['policycoreutils-python-utils'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:39:45 managed-node2 platform-python[34825]: ansible-setup Invoked with filter=ansible_selinux gather_subset=['all'] gather_timeout=10 fact_path=/etc/ansible/facts.d\nAug 02 12:39:47 managed-node2 platform-python[34986]: ansible-fedora.linux_system_roles.local_seport Invoked with ports=['15001-15003'] proto=tcp setype=http_port_t state=present local=False ignore_selinux_state=False reload=True\nAug 02 12:39:48 managed-node2 platform-python[35109]: ansible-fedora.linux_system_roles.selinux_modules_facts Invoked\nAug 02 12:39:53 managed-node2 platform-python[35232]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nAug 02 12:39:53 managed-node2 platform-python[35356]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:39:54 managed-node2 platform-python[35481]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:54 managed-node2 platform-python[35605]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:55 managed-node2 platform-python[35729]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:39:56 managed-node2 platform-python[35853]: ansible-command Invoked with creates=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl enable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None\nAug 02 12:39:57 managed-node2 platform-python[35976]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1 state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER 
backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:57 managed-node2 platform-python[36099]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd1-create state=directory owner=podman_basic_user group=3001 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:39:58 managed-node2 sudo[36222]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgimaanjcdkkcebhasfqpdcfpgwkfami ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152797.8648844-19514-81631354606804/AnsiballZ_podman_image.py'\nAug 02 12:39:58 managed-node2 sudo[36222]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36227.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36235.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36243.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36251.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36259.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:58 managed-node2 systemd[25590]: Started podman-36268.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:39:58 managed-node2 sudo[36222]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:39:59 managed-node2 platform-python[36397]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:39:59 managed-node2 platform-python[36522]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d state=directory owner=podman_basic_user group=3001 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None 
seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:00 managed-node2 platform-python[36645]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nAug 02 12:40:00 managed-node2 platform-python[36709]: ansible-file Invoked with owner=podman_basic_user group=3001 mode=0644 dest=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml _original_basename=tmp5c1b5ldh recurse=False state=file path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:00 managed-node2 sudo[36832]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkfbzsoxqmfspcyzxykzglzhyzsybbor ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152800.4994206-19649-125491116375657/AnsiballZ_podman_play.py'\nAug 02 12:40:00 managed-node2 sudo[36832]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:40:00 managed-node2 systemd[25590]: Started podman-36843.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nAug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: \nAug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2025-08-02T12:40:00-04:00\" level=info msg=\"/bin/podman filtering at log level debug\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Called kube.PersistentPreRunE(/bin/podman play kube --start=true --log-level=debug /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml)\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n time=\"2025-08-02T12:40:00-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using graph driver overlay\"\n 
time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using graph root /home/podman_basic_user/.local/share/containers/storage\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using run root /run/user/3001/containers\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using static dir /home/podman_basic_user/.local/share/containers/storage/libpod\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using tmp dir /run/user/3001/libpod/tmp\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using volume path /home/podman_basic_user/.local/share/containers/storage/volumes\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Cached value indicated that metacopy is not being used\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Cached value indicated that native-diff is usable\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-08-02T12:40:00-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:38:52.156183843 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Normalized platform 
linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/home/podman_basic_user/.local/share/containers/storage+/run/user/3001/containers]@60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778)\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:60bca81c659f869d46895534969e53b947722cd9ad8dfd041bbfe1b299f10778\\\"\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Got pod cgroup as /libpod_parent/af868cea690b52212d50213e7cf00f2f99a7e0af0fbb1c22376a1c8272177aef\"\n Error: adding pod to state: name \"httpd1\" is in use: pod already exists\n time=\"2025-08-02T12:40:00-04:00\" level=debug msg=\"Shutting down engines\"\nAug 02 12:40:00 managed-node2 platform-python[36835]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125\nAug 02 12:40:00 managed-node2 sudo[36832]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:01 managed-node2 platform-python[36997]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:40:02 managed-node2 platform-python[37121]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:03 managed-node2 platform-python[37246]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:04 managed-node2 platform-python[37370]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:05 managed-node2 platform-python[37493]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd2 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:06 managed-node2 platform-python[37784]: ansible-stat Invoked with 
path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:07 managed-node2 platform-python[37909]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:07 managed-node2 platform-python[38032]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nAug 02 12:40:07 managed-node2 platform-python[38096]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd2.yml _original_basename=tmp582cc1u4 recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd2.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play Invoked with state=started debug=True log_level=debug kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:40:08 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice.\n-- Subject: Unit machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml\nAug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: \nAug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: time=\"2025-08-02T12:40:08-04:00\" level=info msg=\"/usr/bin/podman filtering at log level debug\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Called kube.PersistentPreRunE(/usr/bin/podman play kube --start=true --log-level=debug /etc/containers/ansible-kubernetes.d/httpd2.yml)\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using conmon: \\\"/usr/bin/conmon\\\"\"\n 
time=\"2025-08-02T12:40:08-04:00\" level=info msg=\"Using sqlite as database backend\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using graph driver overlay\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using graph root /var/lib/containers/storage\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using run root /run/containers/storage\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using static dir /var/lib/containers/storage/libpod\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using tmp dir /run/libpod\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using volume path /var/lib/containers/storage/volumes\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using transient store: false\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"[graphdriver] trying provided driver \\\"overlay\\\"\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Cached value indicated that overlay is supported\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Cached value indicated that metacopy is being used\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Cached value indicated that native-diff is not being used\"\n time=\"2025-08-02T12:40:08-04:00\" level=info msg=\"Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Initializing event backend file\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Using OCI runtime \\\"/usr/bin/runc\\\"\"\n time=\"2025-08-02T12:40:08-04:00\" level=info msg=\"Setting parallel job count to 7\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Successfully loaded network podman-default-kube-network: &{podman-default-kube-network a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc bridge cni-podman1 2025-08-02 12:36:24.472660556 -0400 EDT [{{{10.89.0.0 ffffff00}} 10.89.0.1 }] [] false false true [] map[] map[] map[driver:host-local]}\"\n time=\"2025-08-02T12:40:08-04:00\" 
level=debug msg=\"Successfully loaded 2 networks\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Looking up image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Normalized platform linux/amd64 to {amd64 linux [] }\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Trying \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" ...\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"parsed reference into \\\"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Found image \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" as \\\"localhost/podman-pause:4.9.4-dev-1708535009\\\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb)\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"exporting opaque data as blob \\\"sha256:e0389c54b0bc667ef53ffdb61e8f9857c4cf16a18f80e2fc739df5abab956efb\\\"\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Pod using bridge network mode\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Created cgroup path machine.slice/machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice for parent machine.slice and name libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Created cgroup machine.slice/machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice\"\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Got pod cgroup as machine.slice/machine-libpod_pod_7a12242e8257e70e6da2b9b5a86bb75019710b9f3951bda738f6e64b54b4ce8d.slice\"\n Error: adding pod to state: name \"httpd2\" is in use: pod already exists\n time=\"2025-08-02T12:40:08-04:00\" level=debug msg=\"Shutting down engines\"\nAug 02 12:40:08 managed-node2 platform-python[38219]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 125\nAug 02 12:40:09 managed-node2 platform-python[38380]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:11 managed-node2 platform-python[38505]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:12 managed-node2 platform-python[38629]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman/httpd3 state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:12 managed-node2 platform-python[38752]: ansible-file Invoked with 
path=/tmp/lsr_ga01o8zo_podman/httpd3-create state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:14 managed-node2 platform-python[39041]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:14 managed-node2 platform-python[39166]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d state=directory owner=root group=0 mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:15 managed-node2 platform-python[39289]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True\nAug 02 12:40:15 managed-node2 platform-python[39353]: ansible-file Invoked with owner=root group=0 mode=0644 dest=/etc/containers/ansible-kubernetes.d/httpd3.yml _original_basename=tmp1h9opetg recurse=False state=file path=/etc/containers/ansible-kubernetes.d/httpd3.yml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:15 managed-node2 platform-python[39476]: ansible-containers.podman.podman_play Invoked with state=started kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:40:15 managed-node2 systemd[1]: Created slice cgroup machine-libpod_pod_09fcb92602708a13ddcd36fafab1114193416cfb9688e13dc060dc675a9ebb02.slice.\n-- Subject: Unit machine-libpod_pod_09fcb92602708a13ddcd36fafab1114193416cfb9688e13dc060dc675a9ebb02.slice has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_09fcb92602708a13ddcd36fafab1114193416cfb9688e13dc060dc675a9ebb02.slice has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:16 managed-node2 sudo[39637]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqvcznhcsisjvfqgdskaqojrqwhygefl ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152816.415957-20432-182641700984178/AnsiballZ_command.py'\nAug 02 12:40:16 
managed-node2 sudo[39637]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:16 managed-node2 platform-python[39640]: ansible-command Invoked with _raw_params=podman pod inspect httpd1 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:16 managed-node2 systemd[25590]: Started podman-39648.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:16 managed-node2 sudo[39637]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:17 managed-node2 platform-python[39778]: ansible-command Invoked with _raw_params=podman pod inspect httpd2 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:17 managed-node2 platform-python[39909]: ansible-command Invoked with _raw_params=podman pod inspect httpd3 --format '{{.State}}' warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:17 managed-node2 sudo[40040]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omdhsbbfmibalzpyvmgfxmtrjimxacss ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152817.7911968-20497-78215437807137/AnsiballZ_command.py'\nAug 02 12:40:17 managed-node2 sudo[40040]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:18 managed-node2 platform-python[40043]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:18 managed-node2 sudo[40040]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:18 managed-node2 platform-python[40169]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:18 managed-node2 platform-python[40295]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:19 managed-node2 platform-python[40421]: ansible-uri Invoked with url=http://localhost:15001/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:19 managed-node2 platform-python[40545]: ansible-uri Invoked with url=http://localhost:15002/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True 
validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:20 managed-node2 platform-python[40669]: ansible-uri Invoked with url=http://localhost:15003/index.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} follow=False unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:23 managed-node2 platform-python[40918]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:24 managed-node2 platform-python[41047]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:27 managed-node2 platform-python[41172]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nAug 02 12:40:28 managed-node2 platform-python[41296]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:28 managed-node2 platform-python[41421]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:29 managed-node2 platform-python[41545]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:30 managed-node2 platform-python[41669]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:31 managed-node2 platform-python[41793]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:31 managed-node2 sudo[41918]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxfxukflwegbkcgylwjhylqbzyvcoltj ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152831.2720408-21172-116283974621483/AnsiballZ_systemd.py'\nAug 02 12:40:31 managed-node2 sudo[41918]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:31 managed-node2 
platform-python[41921]: ansible-systemd Invoked with name= scope=user state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nAug 02 12:40:31 managed-node2 systemd[25590]: Reloading.\nAug 02 12:40:31 managed-node2 systemd[25590]: Stopping A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit UNIT has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun shutting down.\nAug 02 12:40:31 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered disabled state\nAug 02 12:40:31 managed-node2 kernel: device vethfa4f074b left promiscuous mode\nAug 02 12:40:31 managed-node2 kernel: cni-podman1: port 1(vethfa4f074b) entered disabled state\nAug 02 12:40:32 managed-node2 podman[41937]: Pods stopped:\nAug 02 12:40:32 managed-node2 podman[41937]: 5e85626264c7b01bb8a75935ff611e0d589096a98e89e05fd7aa9037e892318a\nAug 02 12:40:32 managed-node2 podman[41937]: Pods removed:\nAug 02 12:40:32 managed-node2 podman[41937]: 5e85626264c7b01bb8a75935ff611e0d589096a98e89e05fd7aa9037e892318a\nAug 02 12:40:32 managed-node2 podman[41937]: Secrets removed:\nAug 02 12:40:32 managed-node2 podman[41937]: Volumes removed:\nAug 02 12:40:32 managed-node2 systemd[25590]: Stopped A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:32 managed-node2 sudo[41918]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:32 managed-node2 platform-python[42211]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:32 managed-node2 sudo[42336]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvztponjzcqifptzcqpojjxosoczfayg ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152832.762454-21237-209128215005834/AnsiballZ_podman_play.py'\nAug 02 12:40:32 managed-node2 sudo[42336]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug kube_file=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nAug 02 12:40:33 managed-node2 systemd[25590]: Started podman-42347.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play 
PODMAN-PLAY-KUBE command: /bin/podman kube play --down /home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml\nAug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped:\n Pods removed:\n Secrets removed:\n Volumes removed:\nAug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: \nAug 02 12:40:33 managed-node2 platform-python[42339]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nAug 02 12:40:33 managed-node2 sudo[42336]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:33 managed-node2 platform-python[42476]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:34 managed-node2 platform-python[42599]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:40:35 managed-node2 platform-python[42723]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:36 managed-node2 platform-python[42848]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:37 managed-node2 platform-python[42972]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nAug 02 12:40:37 managed-node2 systemd[1]: Reloading.\nAug 02 12:40:37 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has begun shutting down.\nAug 02 12:40:37 managed-node2 systemd[1]: libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope has successfully entered the 'dead' state.\nAug 02 12:40:37 managed-node2 systemd[1]: libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope: Consumed 31ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561.scope completed and consumed the indicated resources.\nAug 02 12:40:37 managed-node2 systemd[1]: libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- 
\n-- The unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope has successfully entered the 'dead' state.\nAug 02 12:40:37 managed-node2 systemd[1]: libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope: Consumed 35ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-58979bb0ea7897cf20cecbde7b235678afd1011553726c8cf270c8d4515b4ec5.scope completed and consumed the indicated resources.\nAug 02 12:40:37 managed-node2 dnsmasq[29863]: read /run/containers/cni/dnsname/podman-default-kube-network/addnhosts - 1 addresses\nAug 02 12:40:37 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state\nAug 02 12:40:37 managed-node2 kernel: device vethb3f38e19 left promiscuous mode\nAug 02 12:40:37 managed-node2 kernel: cni-podman1: port 1(vethb3f38e19) entered disabled state\nAug 02 12:40:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay-5a0e125408b3f62c274917d9a997808220e1c2685a2d8ff8405416971a11f6c0-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-5a0e125408b3f62c274917d9a997808220e1c2685a2d8ff8405416971a11f6c0-merged.mount has successfully entered the 'dead' state.\nAug 02 12:40:37 managed-node2 systemd[1]: run-netns-netns\\x2de8368567\\x2d59b6\\x2d542f\\x2d1a97\\x2df2ca68e931e3.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-netns-netns\\x2de8368567\\x2d59b6\\x2d542f\\x2d1a97\\x2df2ca68e931e3.mount has successfully entered the 'dead' state.\nAug 02 12:40:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-36356d248f8cff318780fef08ed41c8370d8e10467136bf475d246e3c43a2561-userdata-shm.mount has successfully entered the 'dead' state.\nAug 02 12:40:37 managed-node2 systemd[1]: var-lib-containers-storage-overlay-15c93535d4532096f93e6259ac42e3e35cf574dd19355624da5c37ad60d78144-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-15c93535d4532096f93e6259ac42e3e35cf574dd19355624da5c37ad60d78144-merged.mount has successfully entered the 'dead' state.\nAug 02 12:40:37 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice.\n-- Subject: Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice has finished shutting down.\nAug 02 12:40:37 managed-node2 systemd[1]: machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice: Consumed 67ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit 
machine-libpod_pod_7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3.slice completed and consumed the indicated resources.\nAug 02 12:40:38 managed-node2 systemd[1]: libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope has successfully entered the 'dead' state.\nAug 02 12:40:38 managed-node2 systemd[1]: libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope: Consumed 35ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27.scope completed and consumed the indicated resources.\nAug 02 12:40:38 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-1b937bf96e254586b6101be9079bb1aa7881bf37d608e950b4de5c1cfc4b7f27-userdata-shm.mount has successfully entered the 'dead' state.\nAug 02 12:40:38 managed-node2 podman[43008]: Pods stopped:\nAug 02 12:40:38 managed-node2 podman[43008]: 7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3\nAug 02 12:40:38 managed-node2 podman[43008]: Pods removed:\nAug 02 12:40:38 managed-node2 podman[43008]: 7f2562380f2fb5e7151f9201b8f95d3fe8364b3a5cb9e0a571d6590566f058f3\nAug 02 12:40:38 managed-node2 podman[43008]: Secrets removed:\nAug 02 12:40:38 managed-node2 podman[43008]: Volumes removed:\nAug 02 12:40:38 managed-node2 dnsmasq[29863]: exiting on receipt of SIGTERM\nAug 02 12:40:38 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has successfully entered the 'dead' state.\nAug 02 12:40:38 managed-node2 systemd[1]: Stopped A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd2.yml.service has finished shutting down.\nAug 02 12:40:38 managed-node2 platform-python[43285]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:38 managed-node2 systemd[1]: var-lib-containers-storage-overlay-1ad187e54185c2c7cbe64b95feb5c0fbe8c425581baa88a9f71bb6eaaa92a272-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-1ad187e54185c2c7cbe64b95feb5c0fbe8c425581baa88a9f71bb6eaaa92a272-merged.mount has successfully entered the 'dead' state.\nAug 02 12:40:38 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play Invoked with state=absent debug=True log_level=debug 
kube_file=/etc/containers/ansible-kubernetes.d/httpd2.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None quiet=None recreate=None userns=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd2.yml\nAug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE command: /usr/bin/podman kube play --down /etc/containers/ansible-kubernetes.d/httpd2.yml\nAug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stdout: Pods stopped:\n Pods removed:\n Secrets removed:\n Volumes removed:\nAug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE stderr: \nAug 02 12:40:39 managed-node2 platform-python[43410]: ansible-containers.podman.podman_play PODMAN-PLAY-KUBE rc: 0\nAug 02 12:40:39 managed-node2 platform-python[43546]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:40 managed-node2 platform-python[43669]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:42 managed-node2 platform-python[43794]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:42 managed-node2 platform-python[43918]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nAug 02 12:40:42 managed-node2 systemd[1]: Reloading.\nAug 02 12:40:43 managed-node2 systemd[1]: Stopping A template for running K8s workloads via podman-kube-play...\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has begun shutting down.\nAug 02 12:40:43 managed-node2 systemd[1]: libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 systemd[1]: libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope: Consumed 32ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: 
https://access.redhat.com/support\n-- \n-- The unit libpod-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c.scope completed and consumed the indicated resources.\nAug 02 12:40:43 managed-node2 systemd[1]: libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 systemd[1]: libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope: Consumed 34ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-42dc9c7d692e57d23e907db57fa18f4d96597863ff5dcb18ed9ec6379ff6707c.scope completed and consumed the indicated resources.\nAug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay-65e7014be45047102e9045dcdd9345e82206f8672e1c2920f53097bbdf3fcc43-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-65e7014be45047102e9045dcdd9345e82206f8672e1c2920f53097bbdf3fcc43-merged.mount has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered disabled state\nAug 02 12:40:43 managed-node2 kernel: device veth69cd15af left promiscuous mode\nAug 02 12:40:43 managed-node2 kernel: cni-podman1: port 2(veth69cd15af) entered disabled state\nAug 02 12:40:43 managed-node2 systemd[1]: run-netns-netns\\x2dfd171033\\x2dc8d0\\x2d5ddd\\x2d985b\\x2d865fa20d123b.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-netns-netns\\x2dfd171033\\x2dc8d0\\x2d5ddd\\x2d985b\\x2d865fa20d123b.mount has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-686ac85bff102454f287117c77be26468e0b4318b86946889dec7efd392dd57c-userdata-shm.mount has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay-bd51c7fcb084eb966e3f3b68637936b0c8d50499aba52b4ffce32c4ad877cf0d-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-bd51c7fcb084eb966e3f3b68637936b0c8d50499aba52b4ffce32c4ad877cf0d-merged.mount has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 systemd[1]: Removed slice cgroup machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice.\n-- Subject: Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice has finished shutting down.\nAug 02 12:40:43 managed-node2 systemd[1]: 
machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice: Consumed 66ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit machine-libpod_pod_728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748.slice completed and consumed the indicated resources.\nAug 02 12:40:43 managed-node2 podman[43954]: Pods stopped:\nAug 02 12:40:43 managed-node2 podman[43954]: 728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748\nAug 02 12:40:43 managed-node2 podman[43954]: Pods removed:\nAug 02 12:40:43 managed-node2 podman[43954]: 728dad2a9b09f236b206b93b4a41b9319629f70ca5ea7a2d108706836b531748\nAug 02 12:40:43 managed-node2 podman[43954]: Secrets removed:\nAug 02 12:40:43 managed-node2 podman[43954]: Volumes removed:\nAug 02 12:40:43 managed-node2 systemd[1]: libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 systemd[1]: libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope: Consumed 35ms CPU time\n-- Subject: Resources consumed by unit runtime\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit libpod-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124.scope completed and consumed the indicated resources.\nAug 02 12:40:43 managed-node2 systemd[1]: var-lib-containers-storage-overlay\\x2dcontainers-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124-userdata-shm.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay\\x2dcontainers-c7345a83202129d9fa9036452e4cb3d01d6f9a4c20ef28fa51f738dffee12124-userdata-shm.mount has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 systemd[1]: podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has successfully entered the 'dead' state.\nAug 02 12:40:43 managed-node2 systemd[1]: Stopped A template for running K8s workloads via podman-kube-play.\n-- Subject: Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit podman-kube@-etc-containers-ansible\\x2dkubernetes.d-httpd3.yml.service has finished shutting down.\nAug 02 12:40:44 managed-node2 platform-python[44224]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:44 managed-node2 systemd[1]: var-lib-containers-storage-overlay-7cab78c749c43bcbce8111bde995b9f5abcdf949ac791b25a3b873fde27f6845-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-7cab78c749c43bcbce8111bde995b9f5abcdf949ac791b25a3b873fde27f6845-merged.mount has successfully entered the 
'dead' state.\nAug 02 12:40:44 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:40:44 managed-node2 platform-python[44349]: ansible-containers.podman.podman_play Invoked with state=absent kube_file=/etc/containers/ansible-kubernetes.d/httpd3.yml executable=podman annotation=None kube_file_content=None authfile=None build=None cert_dir=None configmap=None context_dir=None seccomp_profile_root=None username=None password=NOT_LOGGING_PARAMETER log_driver=None log_opt=None network=None tls_verify=None debug=None quiet=None recreate=None userns=None log_level=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None\nAug 02 12:40:44 managed-node2 platform-python[44349]: ansible-containers.podman.podman_play version: 4.9.4-dev, kube file /etc/containers/ansible-kubernetes.d/httpd3.yml\nAug 02 12:40:44 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:40:44 managed-node2 platform-python[44486]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:40:45 managed-node2 platform-python[44609]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None\nAug 02 12:40:46 managed-node2 platform-python[44733]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:47 managed-node2 sudo[44858]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olrmvkvglxbsttmchncdtptzsqpgbnrc ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152846.6847095-21950-187401977583699/AnsiballZ_podman_container_info.py'\nAug 02 12:40:47 managed-node2 sudo[44858]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:47 managed-node2 platform-python[44861]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None\nAug 02 12:40:47 managed-node2 systemd[25590]: Started podman-44863.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:47 managed-node2 sudo[44858]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:47 managed-node2 sudo[44992]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpjmlfvygeamsbgxiibmynquopakmqzw ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python 
/var/tmp/ansible-tmp-1754152847.3914187-21987-273657899931369/AnsiballZ_command.py'\nAug 02 12:40:47 managed-node2 sudo[44992]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:47 managed-node2 platform-python[44995]: ansible-command Invoked with _raw_params=podman network ls -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:47 managed-node2 systemd[25590]: Started podman-44997.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:47 managed-node2 sudo[44992]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:47 managed-node2 sudo[45152]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkszomlmijhetpvbizxheimorvpsvgdk ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152847.866539-22018-22833626881705/AnsiballZ_command.py'\nAug 02 12:40:47 managed-node2 sudo[45152]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:48 managed-node2 platform-python[45155]: ansible-command Invoked with _raw_params=podman secret ls -n -q warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:48 managed-node2 systemd[25590]: Started podman-45157.scope.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:48 managed-node2 sudo[45152]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:48 managed-node2 platform-python[45287]: ansible-command Invoked with removes=/var/lib/systemd/linger/podman_basic_user _raw_params=loginctl disable-linger podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None stdin=None\nAug 02 12:40:48 managed-node2 systemd[1]: Stopping User Manager for UID 3001...\n-- Subject: Unit user@3001.service has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user@3001.service has begun shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopping podman-pause-9fcbd008.scope.\n-- Subject: Unit UNIT has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Default.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopping D-Bus User Message Bus...\n-- Subject: Unit UNIT has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has begun shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Removed slice podman\\x2dkube.slice.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 
systemd[25590]: Stopped D-Bus User Message Bus.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Basic System.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Sockets.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Timers.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopped Mark boot as successful after the user session has run 2 minutes.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopped target Paths.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Closed D-Bus User Message Bus Socket.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Stopped podman-pause-9fcbd008.scope.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Removed slice user.slice.\n-- Subject: Unit UNIT has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[25590]: Reached target Shutdown.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:48 managed-node2 systemd[25590]: Started Exit the Session.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:48 managed-node2 systemd[25590]: Reached target Exit the Session.\n-- Subject: Unit UNIT has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit UNIT has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:40:48 managed-node2 systemd[1]: user@3001.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit user@3001.service has successfully entered the 'dead' state.\nAug 02 12:40:48 managed-node2 systemd[1]: Stopped User Manager for UID 3001.\n-- Subject: Unit user@3001.service has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user@3001.service has finished shutting 
down.\nAug 02 12:40:48 managed-node2 systemd[1]: Stopping User runtime directory /run/user/3001...\n-- Subject: Unit user-runtime-dir@3001.service has begun shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has begun shutting down.\nAug 02 12:40:48 managed-node2 systemd[1]: run-user-3001.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-user-3001.mount has successfully entered the 'dead' state.\nAug 02 12:40:48 managed-node2 systemd[1]: user-runtime-dir@3001.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit user-runtime-dir@3001.service has successfully entered the 'dead' state.\nAug 02 12:40:48 managed-node2 systemd[1]: Stopped User runtime directory /run/user/3001.\n-- Subject: Unit user-runtime-dir@3001.service has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-runtime-dir@3001.service has finished shutting down.\nAug 02 12:40:48 managed-node2 systemd[1]: Removed slice User Slice of UID 3001.\n-- Subject: Unit user-3001.slice has finished shutting down\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit user-3001.slice has finished shutting down.\nAug 02 12:40:48 managed-node2 platform-python[45419]: ansible-command Invoked with _raw_params=loginctl show-user --value -p State podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:49 managed-node2 sudo[45543]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbqbllrdzcpiqglhdxzdzawivaqzmpyy ; /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152849.62423-22100-37847572003711/AnsiballZ_command.py'\nAug 02 12:40:49 managed-node2 sudo[45543]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:49 managed-node2 platform-python[45546]: ansible-command Invoked with _raw_params=podman pod exists httpd1 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:49 managed-node2 sudo[45543]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:50 managed-node2 platform-python[45676]: ansible-command Invoked with _raw_params=podman pod exists httpd2 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:40:50 managed-node2 platform-python[45806]: ansible-command Invoked with _raw_params=podman pod exists httpd3 warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:50 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit 
var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:40:50 managed-node2 sudo[45937]: root : TTY=pts/0 ; PWD=/root ; USER=podman_basic_user ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kquekbnhuqoxfuhqzmvgbrdfuqcyvrpf ; XDG_RUNTIME_DIR=/run/user/3001 /usr/libexec/platform-python /var/tmp/ansible-tmp-1754152850.8293648-22145-217225041127031/AnsiballZ_command.py'\nAug 02 12:40:50 managed-node2 sudo[45937]: pam_unix(sudo:session): session opened for user podman_basic_user by root(uid=0)\nAug 02 12:40:51 managed-node2 platform-python[45940]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:51 managed-node2 sudo[45937]: pam_unix(sudo:session): session closed for user podman_basic_user\nAug 02 12:40:51 managed-node2 platform-python[46066]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:51 managed-node2 platform-python[46192]: ansible-command Invoked with _raw_params= _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:52 managed-node2 platform-python[46318]: ansible-stat Invoked with path=/var/lib/systemd/linger/podman_basic_user follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:40:55 managed-node2 platform-python[46566]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:40:56 managed-node2 platform-python[46695]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:40:57 managed-node2 platform-python[46819]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:00 managed-node2 platform-python[46944]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=False service=None split=None\nAug 02 12:41:01 managed-node2 platform-python[47068]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:01 managed-node2 platform-python[47193]: ansible-command Invoked with _raw_params=getsubids podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:01 managed-node2 platform-python[47317]: ansible-command Invoked with _raw_params=getsubids -g podman_basic_user warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:03 managed-node2 platform-python[47441]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:03 managed-node2 platform-python[47565]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True 
checksum_algorithm=sha1\nAug 02 12:41:04 managed-node2 platform-python[47688]: ansible-stat Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:04 managed-node2 platform-python[47811]: ansible-file Invoked with path=/home/podman_basic_user/.config/containers/ansible-kubernetes.d/httpd1.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:06 managed-node2 platform-python[47934]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:41:06 managed-node2 platform-python[48058]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:08 managed-node2 platform-python[48183]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:08 managed-node2 platform-python[48307]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nAug 02 12:41:09 managed-node2 platform-python[48434]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:09 managed-node2 platform-python[48557]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd2.yml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:11 managed-node2 platform-python[48680]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:12 managed-node2 platform-python[48805]: ansible-command Invoked with _raw_params= warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:13 managed-node2 platform-python[48929]: ansible-systemd Invoked with name= scope=system state=stopped enabled=False daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None\nAug 02 12:41:14 managed-node2 platform-python[49056]: ansible-stat Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:14 managed-node2 platform-python[49179]: ansible-file Invoked with path=/etc/containers/ansible-kubernetes.d/httpd3.yml 
state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:15 managed-node2 platform-python[49302]: ansible-getent Invoked with database=passwd key=podman_basic_user fail_key=True service=None split=None\nAug 02 12:41:16 managed-node2 platform-python[49426]: ansible-stat Invoked with path=/run/user/3001 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:17 managed-node2 platform-python[49549]: ansible-file Invoked with path=/etc/containers/storage.conf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:17 managed-node2 platform-python[49672]: ansible-file Invoked with path=/tmp/lsr_ga01o8zo_podman state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:18 managed-node2 sshd[49693]: Accepted publickey for root from 10.31.46.71 port 34968 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nAug 02 12:41:18 managed-node2 systemd-logind[591]: New session 9 of user root.\n-- Subject: A new session 9 has been created for user root\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A new session with the ID 9 has been created for the user root.\n-- \n-- The leading process of the session is 49693.\nAug 02 12:41:18 managed-node2 systemd[1]: Started Session 9 of user root.\n-- Subject: Unit session-9.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit session-9.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:41:18 managed-node2 sshd[49693]: pam_unix(sshd:session): session opened for user root by (uid=0)\nAug 02 12:41:18 managed-node2 sshd[49696]: Received disconnect from 10.31.46.71 port 34968:11: disconnected by user\nAug 02 12:41:18 managed-node2 sshd[49696]: Disconnected from user root 10.31.46.71 port 34968\nAug 02 12:41:18 managed-node2 sshd[49693]: pam_unix(sshd:session): session closed for user root\nAug 02 12:41:18 managed-node2 systemd[1]: session-9.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit session-9.scope has successfully entered the 'dead' state.\nAug 02 12:41:18 managed-node2 systemd-logind[591]: Session 9 logged out. 
Waiting for processes to exit.\nAug 02 12:41:18 managed-node2 systemd-logind[591]: Removed session 9.\n-- Subject: Session 9 has been terminated\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A session with the ID 9 has been terminated.\nAug 02 12:41:20 managed-node2 platform-python[49858]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d\nAug 02 12:41:21 managed-node2 platform-python[49985]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:21 managed-node2 platform-python[50108]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:24 managed-node2 platform-python[50356]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:25 managed-node2 platform-python[50485]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:41:26 managed-node2 platform-python[50609]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:27 managed-node2 sshd[50632]: Accepted publickey for root from 10.31.46.71 port 55872 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nAug 02 12:41:27 managed-node2 systemd-logind[591]: New session 10 of user root.\n-- Subject: A new session 10 has been created for user root\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A new session with the ID 10 has been created for the user root.\n-- \n-- The leading process of the session is 50632.\nAug 02 12:41:27 managed-node2 systemd[1]: Started Session 10 of user root.\n-- Subject: Unit session-10.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit session-10.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:41:27 managed-node2 sshd[50632]: pam_unix(sshd:session): session opened for user root by (uid=0)\nAug 02 12:41:27 managed-node2 sshd[50635]: Received disconnect from 10.31.46.71 port 55872:11: disconnected by user\nAug 02 12:41:27 managed-node2 sshd[50635]: Disconnected from user root 10.31.46.71 port 55872\nAug 02 12:41:27 managed-node2 sshd[50632]: pam_unix(sshd:session): session closed for user root\nAug 02 12:41:27 managed-node2 systemd[1]: session-10.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit session-10.scope has successfully entered the 'dead' state.\nAug 02 12:41:27 managed-node2 systemd-logind[591]: Session 10 logged out. 
Waiting for processes to exit.\nAug 02 12:41:27 managed-node2 systemd-logind[591]: Removed session 10.\n-- Subject: Session 10 has been terminated\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A session with the ID 10 has been terminated.\nAug 02 12:41:29 managed-node2 platform-python[50797]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d\nAug 02 12:41:33 managed-node2 platform-python[50949]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:33 managed-node2 platform-python[51072]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:35 managed-node2 platform-python[51320]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:36 managed-node2 platform-python[51449]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:41:37 managed-node2 platform-python[51573]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:41 managed-node2 sshd[51596]: Accepted publickey for root from 10.31.46.71 port 38190 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE\nAug 02 12:41:41 managed-node2 systemd[1]: Started Session 11 of user root.\n-- Subject: Unit session-11.scope has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit session-11.scope has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:41:41 managed-node2 systemd-logind[591]: New session 11 of user root.\n-- Subject: A new session 11 has been created for user root\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A new session with the ID 11 has been created for the user root.\n-- \n-- The leading process of the session is 51596.\nAug 02 12:41:41 managed-node2 sshd[51596]: pam_unix(sshd:session): session opened for user root by (uid=0)\nAug 02 12:41:41 managed-node2 sshd[51599]: Received disconnect from 10.31.46.71 port 38190:11: disconnected by user\nAug 02 12:41:41 managed-node2 sshd[51599]: Disconnected from user root 10.31.46.71 port 38190\nAug 02 12:41:41 managed-node2 sshd[51596]: pam_unix(sshd:session): session closed for user root\nAug 02 12:41:41 managed-node2 systemd[1]: session-11.scope: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit session-11.scope has successfully entered the 'dead' state.\nAug 02 12:41:41 managed-node2 systemd-logind[591]: Session 11 logged out. 
Waiting for processes to exit.\nAug 02 12:41:41 managed-node2 systemd-logind[591]: Removed session 11.\n-- Subject: Session 11 has been terminated\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat\n-- \n-- A session with the ID 11 has been terminated.\nAug 02 12:41:43 managed-node2 platform-python[51761]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d\nAug 02 12:41:43 managed-node2 platform-python[51913]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:44 managed-node2 platform-python[52036]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:45 managed-node2 platform-python[52160]: ansible-dnf Invoked with name=['python3-pyasn1', 'python3-cryptography', 'python3-dbus'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:41:48 managed-node2 platform-python[52288]: ansible-dnf Invoked with name=['certmonger'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nAug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nAug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nAug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nAug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration\nAug 02 12:41:51 managed-node2 systemd[1]: Reloading.\nAug 02 12:41:51 managed-node2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.\n-- Subject: Unit run-r5b158d19759a4bbaa61aee183ab0cad0.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit run-r5b158d19759a4bbaa61aee183ab0cad0.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:41:51 managed-node2 systemd[1]: Starting man-db-cache-update.service...\n-- Subject: Unit man-db-cache-update.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit man-db-cache-update.service has begun starting up.\nAug 02 12:41:52 managed-node2 systemd[1]: Reloading.\nAug 02 12:41:52 managed-node2 systemd[1]: man-db-cache-update.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit man-db-cache-update.service has successfully entered the 'dead' state.\nAug 02 
12:41:52 managed-node2 systemd[1]: Started man-db-cache-update.service.\n-- Subject: Unit man-db-cache-update.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit man-db-cache-update.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:41:52 managed-node2 systemd[1]: run-r5b158d19759a4bbaa61aee183ab0cad0.service: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit run-r5b158d19759a4bbaa61aee183ab0cad0.service has successfully entered the 'dead' state.\nAug 02 12:41:53 managed-node2 platform-python[52920]: ansible-file Invoked with name=/etc/certmonger//pre-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//pre-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:53 managed-node2 platform-python[53043]: ansible-file Invoked with name=/etc/certmonger//post-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//post-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:54 managed-node2 platform-python[53166]: ansible-systemd Invoked with name=certmonger state=started enabled=True daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nAug 02 12:41:54 managed-node2 systemd[1]: Reloading.\nAug 02 12:41:54 managed-node2 systemd[1]: Starting Certificate monitoring and PKI enrollment...\n-- Subject: Unit certmonger.service has begun start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit certmonger.service has begun starting up.\nAug 02 12:41:54 managed-node2 systemd[1]: Started Certificate monitoring and PKI enrollment.\n-- Subject: Unit certmonger.service has finished start-up\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- Unit certmonger.service has finished starting up.\n-- \n-- The start-up result is done.\nAug 02 12:41:55 managed-node2 platform-python[53359]: ansible-fedora.linux_system_roles.certificate_request Invoked with name=quadlet_demo dns=['localhost'] directory=/etc/pki/tls wait=True ca=self-sign __header=#\n # Ansible managed\n #\n # system_role:certificate\n booted=True provider_config_directory=/etc/certmonger provider=certmonger key_usage=['digitalSignature', 'keyEncipherment'] extended_key_usage=['id-kp-serverAuth', 'id-kp-clientAuth'] auto_renew=True ip=None email=None common_name=None country=None state=None locality=None organization=None organizational_unit=None contact_email=None key_size=None owner=None group=None mode=None principal=None run_before=None run_after=None\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 
certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:55 managed-node2 certmonger[53375]: Certificate in file \"/etc/pki/tls/certs/quadlet_demo.crt\" issued by CA and saved.\nAug 02 12:41:55 managed-node2 
certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:56 managed-node2 platform-python[53497]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt\nAug 02 12:41:56 managed-node2 platform-python[53620]: ansible-slurp Invoked with path=/etc/pki/tls/private/quadlet_demo.key src=/etc/pki/tls/private/quadlet_demo.key\nAug 02 12:41:57 managed-node2 platform-python[53743]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt\nAug 02 12:41:57 managed-node2 platform-python[53866]: ansible-command Invoked with _raw_params=getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:41:57 managed-node2 certmonger[53202]: 2025-08-02 12:41:57 [53202] Wrote to /var/lib/certmonger/requests/20250802164155\nAug 02 12:41:58 managed-node2 platform-python[53990]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:58 managed-node2 platform-python[54113]: ansible-file Invoked with path=/etc/pki/tls/private/quadlet_demo.key state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:58 managed-node2 platform-python[54236]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None\nAug 02 12:41:59 managed-node2 platform-python[54359]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:41:59 managed-node2 platform-python[54482]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:42:02 managed-node2 platform-python[54730]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:42:03 managed-node2 platform-python[54859]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None\nAug 02 12:42:03 
managed-node2 platform-python[54983]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:42:06 managed-node2 platform-python[55108]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:42:06 managed-node2 platform-python[55231]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:42:07 managed-node2 platform-python[55354]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:42:08 managed-node2 platform-python[55478]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:42:11 managed-node2 platform-python[55601]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nAug 02 12:42:11 managed-node2 platform-python[55728]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nAug 02 12:42:12 managed-node2 platform-python[55855]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:42:13 managed-node2 platform-python[55978]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:42:15 managed-node2 platform-python[56101]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:42:15 managed-node2 platform-python[56225]: ansible-command Invoked with _raw_params=podman ps -a warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:42:15 managed-node2 systemd[1]: 
var-lib-containers-storage-overlay-metacopy\\x2dcheck89620470-merged.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay-metacopy\\x2dcheck89620470-merged.mount has successfully entered the 'dead' state.\nAug 02 12:42:15 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:42:16 managed-node2 platform-python[56356]: ansible-command Invoked with _raw_params=podman pod ps --ctr-ids --ctr-names --ctr-status warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:42:16 managed-node2 platform-python[56486]: ansible-command Invoked with _raw_params=set -euo pipefail; systemctl list-units --all | grep quadlet _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:42:16 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.\n-- Subject: Unit succeeded\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n-- \n-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.\nAug 02 12:42:17 managed-node2 platform-python[56612]: ansible-command Invoked with _raw_params=ls -alrtF /etc/systemd/system warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:42:20 managed-node2 platform-python[56861]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:42:21 managed-node2 platform-python[56990]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1\nAug 02 12:42:23 managed-node2 platform-python[57115]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None\nAug 02 12:42:27 managed-node2 platform-python[57238]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None\nAug 02 12:42:27 managed-node2 platform-python[57365]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None\nAug 02 12:42:28 managed-node2 platform-python[57492]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] 
timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:42:28 managed-node2 platform-python[57615]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None\nAug 02 12:42:30 managed-node2 platform-python[57738]: ansible-command Invoked with _raw_params=exec 1>&2\n set -x\n set -o pipefail\n systemctl list-units --plain -l --all | grep quadlet || :\n systemctl list-unit-files --all | grep quadlet || :\n systemctl list-units --plain --failed -l --all | grep quadlet || :\n _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None\nAug 02 12:42:31 managed-node2 platform-python[57868]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None", "task_name": "Get journald", "task_path": "/tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:209" } ] SYSTEM ROLES ERRORS END v1 TASKS RECAP ******************************************************************** Saturday 02 August 2025 12:42:31 -0400 (0:00:00.415) 0:00:49.197 ******* =============================================================================== fedora.linux_system_roles.certificate : Ensure provider packages are installed --- 4.46s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:15 fedora.linux_system_roles.certificate : Ensure certificate role dependencies are installed --- 3.72s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:5 fedora.linux_system_roles.firewall : Install firewalld ------------------ 2.91s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:51 fedora.linux_system_roles.firewall : Install firewalld ------------------ 2.90s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/firewalld.yml:51 fedora.linux_system_roles.firewall : Configure firewall ----------------- 2.57s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:74 fedora.linux_system_roles.podman : Gather the package facts ------------- 1.82s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6 fedora.linux_system_roles.podman : Gather the package facts ------------- 1.62s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/main.yml:6 fedora.linux_system_roles.certificate : Slurp the contents of the files --- 1.27s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:143 fedora.linux_system_roles.certificate : Remove files -------------------- 1.15s 
/tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:174 fedora.linux_system_roles.certificate : Ensure provider service is running --- 1.08s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:76 fedora.linux_system_roles.firewall : Enable and start firewalld service --- 1.06s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:30 fedora.linux_system_roles.certificate : Ensure certificate requests ----- 1.05s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:86 fedora.linux_system_roles.firewall : Unmask firewalld service ----------- 1.03s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/firewall/tasks/main.yml:24 Gathering Facts --------------------------------------------------------- 0.97s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:9 Debug ------------------------------------------------------------------- 0.76s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:199 fedora.linux_system_roles.certificate : Ensure pre-scripts hooks directory exists --- 0.58s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/main.yml:25 fedora.linux_system_roles.podman : Get user information ----------------- 0.51s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/podman/tasks/handle_user_group.yml:10 Dump journal ------------------------------------------------------------ 0.50s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/tests/podman/tests_quadlet_demo.yml:142 fedora.linux_system_roles.certificate : Run systemctl ------------------- 0.49s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:22 fedora.linux_system_roles.certificate : Check if system is ostree ------- 0.48s /tmp/collections-zg2/ansible_collections/fedora/linux_system_roles/roles/certificate/tasks/set_vars.yml:10 -- Logs begin at Sat 2025-08-02 12:30:34 EDT, end at Sat 2025-08-02 12:42:32 EDT. -- Aug 02 12:41:41 managed-node2 sshd[51596]: Accepted publickey for root from 10.31.46.71 port 38190 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE Aug 02 12:41:41 managed-node2 systemd[1]: Started Session 11 of user root. -- Subject: Unit session-11.scope has finished start-up -- Defined-By: systemd -- Support: https://access.redhat.com/support -- -- Unit session-11.scope has finished starting up. -- -- The start-up result is done. Aug 02 12:41:41 managed-node2 systemd-logind[591]: New session 11 of user root. -- Subject: A new session 11 has been created for user root -- Defined-By: systemd -- Support: https://access.redhat.com/support -- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat -- -- A new session with the ID 11 has been created for the user root. -- -- The leading process of the session is 51596. 
Aug 02 12:41:41 managed-node2 sshd[51596]: pam_unix(sshd:session): session opened for user root by (uid=0)
Aug 02 12:41:41 managed-node2 sshd[51599]: Received disconnect from 10.31.46.71 port 38190:11: disconnected by user
Aug 02 12:41:41 managed-node2 sshd[51599]: Disconnected from user root 10.31.46.71 port 38190
Aug 02 12:41:41 managed-node2 sshd[51596]: pam_unix(sshd:session): session closed for user root
Aug 02 12:41:41 managed-node2 systemd[1]: session-11.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit session-11.scope has successfully entered the 'dead' state.
Aug 02 12:41:41 managed-node2 systemd-logind[591]: Session 11 logged out. Waiting for processes to exit.
Aug 02 12:41:41 managed-node2 systemd-logind[591]: Removed session 11.
-- Subject: Session 11 has been terminated
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A session with the ID 11 has been terminated.
Aug 02 12:41:43 managed-node2 platform-python[51761]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=* fact_path=/etc/ansible/facts.d
Aug 02 12:41:43 managed-node2 platform-python[51913]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:41:44 managed-node2 platform-python[52036]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:41:45 managed-node2 platform-python[52160]: ansible-dnf Invoked with name=['python3-pyasn1', 'python3-cryptography', 'python3-dbus'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:41:48 managed-node2 platform-python[52288]: ansible-dnf Invoked with name=['certmonger'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration
Aug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration
Aug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration
Aug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration
Aug 02 12:41:51 managed-node2 dbus-daemon[595]: [system] Reloaded configuration
Aug 02 12:41:51 managed-node2 systemd[1]: Reloading.
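The two ansible-dnf invocations above are the certificate role installing its Python dependencies and then the certmonger provider package. A sketch of the equivalent tasks, reconstructed from the logged module arguments, with task names taken from the recap above:

  - name: Ensure certificate role dependencies are installed
    dnf:
      name:
        - python3-pyasn1
        - python3-cryptography
        - python3-dbus
      state: present

  - name: Ensure provider packages are installed
    dnf:
      name: certmonger
      state: present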
Aug 02 12:41:51 managed-node2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
-- Subject: Unit run-r5b158d19759a4bbaa61aee183ab0cad0.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit run-r5b158d19759a4bbaa61aee183ab0cad0.service has finished starting up.
--
-- The start-up result is done.
Aug 02 12:41:51 managed-node2 systemd[1]: Starting man-db-cache-update.service...
-- Subject: Unit man-db-cache-update.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit man-db-cache-update.service has begun starting up.
Aug 02 12:41:52 managed-node2 systemd[1]: Reloading.
Aug 02 12:41:52 managed-node2 systemd[1]: man-db-cache-update.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit man-db-cache-update.service has successfully entered the 'dead' state.
Aug 02 12:41:52 managed-node2 systemd[1]: Started man-db-cache-update.service.
-- Subject: Unit man-db-cache-update.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit man-db-cache-update.service has finished starting up.
--
-- The start-up result is done.
Aug 02 12:41:52 managed-node2 systemd[1]: run-r5b158d19759a4bbaa61aee183ab0cad0.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit run-r5b158d19759a4bbaa61aee183ab0cad0.service has successfully entered the 'dead' state.
Aug 02 12:41:53 managed-node2 platform-python[52920]: ansible-file Invoked with name=/etc/certmonger//pre-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//pre-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Aug 02 12:41:53 managed-node2 platform-python[53043]: ansible-file Invoked with name=/etc/certmonger//post-scripts owner=root group=root mode=0700 state=directory path=/etc/certmonger//post-scripts recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Aug 02 12:41:54 managed-node2 platform-python[53166]: ansible-systemd Invoked with name=certmonger state=started enabled=True daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None
Aug 02 12:41:54 managed-node2 systemd[1]: Reloading.
Aug 02 12:41:54 managed-node2 systemd[1]: Starting Certificate monitoring and PKI enrollment...
-- Subject: Unit certmonger.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit certmonger.service has begun starting up.
Aug 02 12:41:54 managed-node2 systemd[1]: Started Certificate monitoring and PKI enrollment.
-- Subject: Unit certmonger.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit certmonger.service has finished starting up.
--
-- The start-up result is done.
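The ansible-file and ansible-systemd invocations above prepare certmonger: create its pre/post hook directories (mode 0700, owned by root) and enable and start the daemon. A sketch of the equivalent tasks, reconstructed from the logged arguments (the doubled slash in /etc/certmonger//pre-scripts suggests the role joins a configurable provider config directory with a fixed subpath):

  - name: Ensure pre-scripts hooks directory exists
    file:
      path: /etc/certmonger/pre-scripts
      state: directory
      owner: root
      group: root
      mode: '0700'

  - name: Ensure provider service is running
    systemd:
      name: certmonger
      state: started
      enabled: true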
Aug 02 12:41:55 managed-node2 platform-python[53359]: ansible-fedora.linux_system_roles.certificate_request Invoked with name=quadlet_demo dns=['localhost'] directory=/etc/pki/tls wait=True ca=self-sign __header=# # Ansible managed # # system_role:certificate booted=True provider_config_directory=/etc/certmonger provider=certmonger key_usage=['digitalSignature', 'keyEncipherment'] extended_key_usage=['id-kp-serverAuth', 'id-kp-clientAuth'] auto_renew=True ip=None email=None common_name=None country=None state=None locality=None organization=None organizational_unit=None contact_email=None key_size=None owner=None group=None mode=None principal=None run_before=None run_after=None
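The certificate_request invocation above (name=quadlet_demo, dns=['localhost'], ca=self-sign, auto_renew=True) is what the certificate role generates from its certificate_requests variable. A sketch of a role call that would produce this request; the exact wiring inside the test playbook is an assumption:

  - name: Set up a self-signed certificate for the demo
    include_role:
      name: fedora.linux_system_roles.certificate
    vars:
      certificate_requests:
        - name: quadlet_demo
          dns: ['localhost']
          ca: self-sign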
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:55 managed-node2 certmonger[53375]: Certificate in file "/etc/pki/tls/certs/quadlet_demo.crt" issued by CA and saved.
Aug 02 12:41:55 managed-node2 certmonger[53202]: 2025-08-02 12:41:55 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:56 managed-node2 platform-python[53497]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt
Aug 02 12:41:56 managed-node2 platform-python[53620]: ansible-slurp Invoked with path=/etc/pki/tls/private/quadlet_demo.key src=/etc/pki/tls/private/quadlet_demo.key
Aug 02 12:41:57 managed-node2 platform-python[53743]: ansible-slurp Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt src=/etc/pki/tls/certs/quadlet_demo.crt
Aug 02 12:41:57 managed-node2 platform-python[53866]: ansible-command Invoked with _raw_params=getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:41:57 managed-node2 certmonger[53202]: 2025-08-02 12:41:57 [53202] Wrote to /var/lib/certmonger/requests/20250802164155
Aug 02 12:41:58 managed-node2 platform-python[53990]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Aug 02 12:41:58 managed-node2 platform-python[54113]: ansible-file Invoked with path=/etc/pki/tls/private/quadlet_demo.key state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
Aug 02 12:41:58 managed-node2 platform-python[54236]: ansible-file Invoked with path=/etc/pki/tls/certs/quadlet_demo.crt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None content=NOT_LOGGING_PARAMETER backup=None remote_src=None regexp=None delimiter=None directory_mode=None
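The getcert stop-tracking command and the ansible-file state=absent invocations above are the certificate cleanup: certmonger must stop tracking the certificate first, otherwise it could re-issue the files after they are deleted. A sketch of equivalent cleanup tasks, reconstructed from the logged arguments (task names are placeholders except "Remove files", which appears in the recap):

  - name: Stop tracking the test certificate
    command: getcert stop-tracking -f /etc/pki/tls/certs/quadlet_demo.crt

  - name: Remove files
    file:
      path: "{{ item }}"
      state: absent
    loop:
      - /etc/pki/tls/certs/quadlet_demo.crt
      - /etc/pki/tls/private/quadlet_demo.key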
Aug 02 12:41:59 managed-node2 platform-python[54359]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:41:59 managed-node2 platform-python[54482]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:42:02 managed-node2 platform-python[54730]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:42:03 managed-node2 platform-python[54859]: ansible-getent Invoked with database=passwd key=root fail_key=False service=None split=None
Aug 02 12:42:03 managed-node2 platform-python[54983]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:42:06 managed-node2 platform-python[55108]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:42:06 managed-node2 platform-python[55231]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:42:07 managed-node2 platform-python[55354]: ansible-command Invoked with _raw_params=systemctl is-system-running warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:42:08 managed-node2 platform-python[55478]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:42:11 managed-node2 platform-python[55601]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None
Aug 02 12:42:11 managed-node2 platform-python[55728]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None
Aug 02 12:42:12 managed-node2 platform-python[55855]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None
Aug 02 12:42:13 managed-node2 platform-python[55978]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None
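The pair of firewall_lib invocations above opens 8000/tcp and 9000/tcp, both permanently and in the runtime configuration. In terms of the firewall role's variable interface this corresponds roughly to the sketch below; the surrounding playbook wiring is an assumption:

  - name: Open the demo ports
    include_role:
      name: fedora.linux_system_roles.firewall
    vars:
      firewall:
        - port: 8000/tcp
          state: enabled
          permanent: true
          runtime: true
        - port: 9000/tcp
          state: enabled
          permanent: true
          runtime: true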
Aug 02 12:42:15 managed-node2 platform-python[56101]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:42:15 managed-node2 platform-python[56225]: ansible-command Invoked with _raw_params=podman ps -a warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:42:15 managed-node2 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck89620470-merged.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay-metacopy\x2dcheck89620470-merged.mount has successfully entered the 'dead' state.
Aug 02 12:42:15 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
Aug 02 12:42:16 managed-node2 platform-python[56356]: ansible-command Invoked with _raw_params=podman pod ps --ctr-ids --ctr-names --ctr-status warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:42:16 managed-node2 platform-python[56486]: ansible-command Invoked with _raw_params=set -euo pipefail; systemctl list-units --all | grep quadlet _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:42:16 managed-node2 systemd[1]: var-lib-containers-storage-overlay.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit var-lib-containers-storage-overlay.mount has successfully entered the 'dead' state.
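The probes above (podman ps -a, podman pod ps, and the grep over systemctl list-units) check whether any quadlet-generated units exist. The grep check is logged verbatim; as a task it would look roughly like:

  - name: Check quadlet systemd units
    shell: set -euo pipefail; systemctl list-units --all | grep quadlet
    changed_when: false  # assumption: read-only check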
Aug 02 12:42:17 managed-node2 platform-python[56612]: ansible-command Invoked with _raw_params=ls -alrtF /etc/systemd/system warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:42:20 managed-node2 platform-python[56861]: ansible-command Invoked with _raw_params=podman --version warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:42:21 managed-node2 platform-python[56990]: ansible-stat Invoked with path=/usr/bin/getsubids follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Aug 02 12:42:23 managed-node2 platform-python[57115]: ansible-dnf Invoked with name=['firewalld'] state=present allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True lock_timeout=30 conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Aug 02 12:42:27 managed-node2 platform-python[57238]: ansible-systemd Invoked with name=firewalld masked=False daemon_reload=False daemon_reexec=False no_block=False state=None enabled=None force=None user=None scope=None
Aug 02 12:42:27 managed-node2 platform-python[57365]: ansible-systemd Invoked with name=firewalld enabled=True state=started daemon_reload=False daemon_reexec=False no_block=False force=None masked=None user=None scope=None
Aug 02 12:42:28 managed-node2 platform-python[57492]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['8000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None
Aug 02 12:42:28 managed-node2 platform-python[57615]: ansible-fedora.linux_system_roles.firewall_lib Invoked with port=['9000/tcp'] permanent=True runtime=True state=enabled __report_changed=True online=True service=[] source_port=[] forward_port=[] rich_rule=[] source=[] interface=[] interface_pci_id=[] icmp_block=[] timeout=0 ipset_entries=[] protocol=[] helper_module=[] destination=[] includes=[] firewalld_conf=None masquerade=None icmp_block_inversion=None target=None zone=None set_default_zone=None ipset=None ipset_type=None description=None short=None
Aug 02 12:42:30 managed-node2 platform-python[57738]: ansible-command Invoked with _raw_params=exec 1>&2 set -x set -o pipefail systemctl list-units --plain -l --all | grep quadlet || : systemctl list-unit-files --all | grep quadlet || : systemctl list-units --plain --failed -l --all | grep quadlet || : _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Aug 02 12:42:31 managed-node2 platform-python[57868]: ansible-command Invoked with _raw_params=journalctl -ex warn=True _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
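The multi-line shell invocation above is a verbose debugging dump: exec 1>&2 routes output to stderr, set -x traces each command, and the trailing || : keeps a non-matching grep from failing the task. Its _raw_params are logged verbatim; reconstructed as a task:

  - name: List quadlet units for debugging
    shell: |
      exec 1>&2
      set -x
      set -o pipefail
      systemctl list-units --plain -l --all | grep quadlet || :
      systemctl list-unit-files --all | grep quadlet || :
      systemctl list-units --plain --failed -l --all | grep quadlet || :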
Aug 02 12:42:31 managed-node2 sshd[57890]: Accepted publickey for root from 10.31.46.71 port 57122 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Aug 02 12:42:31 managed-node2 systemd[1]: Started Session 12 of user root.
-- Subject: Unit session-12.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit session-12.scope has finished starting up.
--
-- The start-up result is done.
Aug 02 12:42:31 managed-node2 systemd-logind[591]: New session 12 of user root.
-- Subject: A new session 12 has been created for user root
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID 12 has been created for the user root.
--
-- The leading process of the session is 57890.
Aug 02 12:42:31 managed-node2 sshd[57890]: pam_unix(sshd:session): session opened for user root by (uid=0)
Aug 02 12:42:31 managed-node2 sshd[57893]: Received disconnect from 10.31.46.71 port 57122:11: disconnected by user
Aug 02 12:42:31 managed-node2 sshd[57893]: Disconnected from user root 10.31.46.71 port 57122
Aug 02 12:42:31 managed-node2 sshd[57890]: pam_unix(sshd:session): session closed for user root
Aug 02 12:42:31 managed-node2 systemd[1]: session-12.scope: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit session-12.scope has successfully entered the 'dead' state.
Aug 02 12:42:31 managed-node2 systemd-logind[591]: Session 12 logged out. Waiting for processes to exit.
Aug 02 12:42:31 managed-node2 systemd-logind[591]: Removed session 12.
-- Subject: Session 12 has been terminated
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A session with the ID 12 has been terminated.
Aug 02 12:42:32 managed-node2 sshd[57914]: Accepted publickey for root from 10.31.46.71 port 57126 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Aug 02 12:42:32 managed-node2 systemd[1]: Started Session 13 of user root.
-- Subject: Unit session-13.scope has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit session-13.scope has finished starting up.
--
-- The start-up result is done.
Aug 02 12:42:32 managed-node2 systemd-logind[591]: New session 13 of user root.
-- Subject: A new session 13 has been created for user root
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: https://www.freedesktop.org/wiki/Software/systemd/multiseat
--
-- A new session with the ID 13 has been created for the user root.
--
-- The leading process of the session is 57914.
Aug 02 12:42:32 managed-node2 sshd[57914]: pam_unix(sshd:session): session opened for user root by (uid=0)