Trouble running containers with podman

Hey everyone.
First of all: this is a repost of a post I made on the OpenWrt forum. They refused to help because Turris OS has some modifications, so please don’t be confused if you see this thread both over there and here.
I recently purchased a Turris MOX as my new router. It works fine so far and I’m happy with it.
However, I also want it to perform some basic server tasks (e.g. running a Pi-hole) in containers.
That’s where I run into problems.
Docker is not cutting it for me because it does too much network magic, meaning I ended up with a Pi-hole exposed to the entire internet.
So my new plan is to use podman with a dedicated network namespace, but I can’t get podman to run containers. I’ve installed it along with crun, but when I run
podman run --rm -it ubuntu:rolling
it fails with
Error: container create failed (no logs from conmon): EOF

I’d be grateful for any hints on how to fix this.
Some more logs/versions:

DISTRIB_ID='TurrisOS'
DISTRIB_RELEASE='6.4.4'
DISTRIB_REVISION='r16872+128-42374bcee6'
DISTRIB_TARGET='mvebu/cortexa53'
DISTRIB_ARCH='aarch64_cortex-a53'
DISTRIB_DESCRIPTION='TurrisOS 6.4.4 42374bcee6b51b78848c7031900e908d2d7fe74d'
DISTRIB_TAINTS='busybox'
root@turris:~# podman -v
podman version 3.4.4
root@turris:~# uname -r
5.15.135

Debug log:

INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman run --rm --runtime crun --log-level debug -it ubuntu:rolling)
DEBU[0000] Merged system config "/etc/containers/containers.conf"
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /tmp/lib/containers/storage/libpod/bolt_state.db
DEBU[0000] Overriding graph root "/tmp/lib/containers/storage" with "/var/lib/containers/storage" from database
DEBU[0000] Overriding static dir "/tmp/lib/containers/storage/libpod" with "/var/lib/containers/storage/libpod" from database
DEBU[0000] Overriding volume path "/tmp/lib/containers/storage/volumes" with "/var/lib/containers/storage/volumes" from database
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=tmpfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend none
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] configured OCI runtime uxc initialization failed: no valid executable found for OCI runtime uxc: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
INFO[0000] Found CNI network cni-podman1 (type=macvlan) at /etc/cni/net.d/cni-podman1.conflist
INFO[0000] Found CNI network lan (type=macvlan) at /etc/cni/net.d/lan.conflist
DEBU[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 7
DEBU[0000] Pulling image ubuntu:rolling (policy: missing)
DEBU[0000] Looking up image "ubuntu:rolling" in local containers storage
DEBU[0000] Trying "ubuntu:rolling" ...
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Trying "docker.io/library/ubuntu:rolling" ...
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82"
DEBU[0000] Found image "ubuntu:rolling" as "docker.io/library/ubuntu:rolling" in local containers storage
DEBU[0000] Found image "ubuntu:rolling" as "docker.io/library/ubuntu:rolling" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82)
DEBU[0000] Looking up image "docker.io/library/ubuntu:rolling" in local containers storage
DEBU[0000] Trying "docker.io/library/ubuntu:rolling" ...
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82"
DEBU[0000] Found image "docker.io/library/ubuntu:rolling" as "docker.io/library/ubuntu:rolling" in local containers storage
DEBU[0000] Found image "docker.io/library/ubuntu:rolling" as "docker.io/library/ubuntu:rolling" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82)
DEBU[0000] Looking up image "ubuntu:rolling" in local containers storage
DEBU[0000] Trying "ubuntu:rolling" ...
DEBU[0000] Trying "docker.io/library/ubuntu:rolling" ...
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82"
DEBU[0000] Found image "ubuntu:rolling" as "docker.io/library/ubuntu:rolling" in local containers storage
DEBU[0000] Found image "ubuntu:rolling" as "docker.io/library/ubuntu:rolling" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82)
DEBU[0000] Inspecting image 3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82
DEBU[0000] exporting opaque data as blob "sha256:3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82"
DEBU[0000] exporting opaque data as blob "sha256:3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82"
DEBU[0000] exporting opaque data as blob "sha256:3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82"
DEBU[0000] Looking up image "ubuntu:rolling" in local containers storage
DEBU[0000] Trying "ubuntu:rolling" ...
DEBU[0000] Trying "docker.io/library/ubuntu:rolling" ...
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82"
DEBU[0000] Found image "ubuntu:rolling" as "docker.io/library/ubuntu:rolling" in local containers storage
DEBU[0000] Found image "ubuntu:rolling" as "docker.io/library/ubuntu:rolling" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82)
DEBU[0000] Inspecting image 3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82
DEBU[0000] exporting opaque data as blob "sha256:3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82"
DEBU[0000] exporting opaque data as blob "sha256:3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82"
DEBU[0000] exporting opaque data as blob "sha256:3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82"
DEBU[0000] Inspecting image 3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82
DEBU[0000] using systemd mode: false
DEBU[0000] Adding exposed ports
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] Allocated lock 7 for container 6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82"
DEBU[0000] exporting opaque data as blob "sha256:3f9cf3a31fbf871322b774402c7c25b800d0e93c222d5a44339f2d3a99dede82"
DEBU[0000] created container "6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828"
DEBU[0000] container "6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828" has work directory "/var/lib/containers/storage/overlay-containers/6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828/userdata"
DEBU[0000] container "6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828" has run directory "/run/containers/storage/overlay-containers/6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828/userdata"
DEBU[0000] Handling terminal attach
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] backingFs=tmpfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Cached value indicated that volatile is being used
DEBU[0000] overlay: mount_data=nodev,lowerdir=/var/lib/containers/storage/overlay/l/XVPUQJM6BARDLHQMLLG7NESBOI,upperdir=/var/lib/containers/storage/overlay/ef20f93c3abc8bd27411e2c4eae424b626539a69c9512fc84e732360cd1f4abb/diff,workdir=/var/lib/containers/storage/overlay/ef20f93c3abc8bd27411e2c4eae424b626539a69c9512fc84e732360cd1f4abb/work,volatile
DEBU[0000] Made network namespace at /run/netns/cni-959ccb76-2477-4d1f-ece3-ee2b603c1ec6 for container 6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828
INFO[0000] Got pod network &{Name:bold_sanderson Namespace:bold_sanderson ID:6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828 NetNS:/run/netns/cni-959ccb76-2477-4d1f-ece3-ee2b603c1ec6 Networks:[{Name:podman Ifname:eth0}] RuntimeConfig:map[podman:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}
INFO[0000] Adding pod bold_sanderson_bold_sanderson to CNI network "podman" (type=bridge)
DEBU[0000] mounted container "6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828" at "/var/lib/containers/storage/overlay/ef20f93c3abc8bd27411e2c4eae424b626539a69c9512fc84e732360cd1f4abb/merged"
DEBU[0000] Created root filesystem for container 6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828 at /tmp/lib/containers/storage/overlay/ef20f93c3abc8bd27411e2c4eae424b626539a69c9512fc84e732360cd1f4abb/merged
DEBU[0000] [0] CNI result: &{0.4.0 [{Name:cni-podman0 Mac:b6:a1:3d:75:d8:3f Sandbox:} {Name:vethdf9731d0 Mac:e6:02:4b:98:83:84 Sandbox:} {Name:eth0 Mac:d6:d1:43:ab:e6:f1 Sandbox:/run/netns/cni-959ccb76-2477-4d1f-ece3-ee2b603c1ec6}] [{Version:4 Interface:0x400020a678 Address:{IP:10.88.0.15 Mask:ffff0000} Gateway:10.88.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:<nil>}] {[]  [] []}}
INFO[0000] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription
DEBU[0000] Setting CGroup path for container 6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828 to /libpod_parent/libpod-6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] Workdir "/" resolved to host path "/tmp/lib/containers/storage/overlay/ef20f93c3abc8bd27411e2c4eae424b626539a69c9512fc84e732360cd1f4abb/merged"
DEBU[0000] Created OCI spec for container 6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828 at /var/lib/containers/storage/overlay-containers/6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828 -u 6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828/userdata -p /run/containers/storage/overlay-containers/6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828/userdata/pidfile -n bold_sanderson --exit-dir /run/libpod/exits --full-attach -l k8s-file:/var/lib/containers/storage/overlay-containers/6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828/userdata/ctr.log --log-level debug --syslog -t --conmon-pidfile /run/containers/storage/overlay-containers/6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev --exit-command-arg --events-backend --exit-command-arg none --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828]"
DEBU[0000] Cleaning up container 6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828
DEBU[0000] Tearing down network namespace at /run/netns/cni-959ccb76-2477-4d1f-ece3-ee2b603c1ec6 for container 6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828
INFO[0000] Got pod network &{Name:bold_sanderson Namespace:bold_sanderson ID:6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828 NetNS:/run/netns/cni-959ccb76-2477-4d1f-ece3-ee2b603c1ec6 Networks:[{Name:podman Ifname:eth0}] RuntimeConfig:map[podman:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}
INFO[0000] Deleting pod bold_sanderson_bold_sanderson from CNI network "podman" (type=bridge)
DEBU[0000] unmounted container "6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828"
DEBU[0001] Removing container 6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828
DEBU[0001] Removing all exec sessions for container 6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828
DEBU[0001] Cleaning up container 6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828
DEBU[0001] Network is already cleaned up, skipping...
DEBU[0001] Container 6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828 storage is already unmounted, skipping...
DEBU[0001] Container 6fc414d98a435e55c461a15ee9ae9bc5279c223de9f6e00612643434aa785828 storage is already unmounted, skipping...
DEBU[0001] ExitCode msg: "container create failed (no logs from conmon): eof"
Error: container create failed (no logs from conmon): EOF

First, I would make sure to edit /etc/containers/containers.conf

and point the storage at a different drive (your mSATA, say), because as you can see in the log the backing storage is tmpfs: /var/lib is mounted in your RAM, which may be filling up, or it may be lacking some mount options podman needs.

Just a lucky guess, I’ve never used podman.

tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noatime)

And /var is a symlink to /tmp

Did you ever succeed in running podman on Turris OS?
Am I mistaken or is there no podman package available for Turris OS?

opkg search podman

No results.

Rg,

Arnaud

opkg update > /dev/null && opkg list | grep podman

Thanks for that, I have it installed now. One needs to install crun as well to get it to work. The storage backend and location are best changed too. I’m now figuring out how to let podman use the existing bridge instead of creating its own network. However, the packaged version (3.4) is somewhat old, so the current docs describe a different network backend than CNI.
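For reference, this is roughly what I changed for the storage (settings live in /etc/containers/storage.conf; I moved everything onto the disk mounted under /srv, so adjust the paths and driver to your own setup):

# /etc/containers/storage.conf
[storage]
# "overlay" also works; btrfs here because /srv is a btrfs filesystem
driver = "btrfs"
graphroot = "/srv/containers/graphroot"
runroot = "/srv/containers/runroot"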

I’m curious whether anyone has experience running containers this way. I was looking into getting Home Assistant to run like this.

podman run --network=host ...
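For a Pi-hole that could look something like this (image name and the TZ variable are from the Pi-hole docs, untested on Turris; with host networking the container binds directly to the host’s interfaces, so your firewall decides what is exposed, and note the router’s own resolver may already occupy port 53):

podman run -d --name pihole --network=host \
  -e TZ=Europe/Amsterdam \
  docker.io/pihole/pihole:latest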

Does the hello-world container work for you?

We can team up, I am also trying to make HA work. I did some work on this, but in an unprivileged LXC container, and there HA complains about missing AppArmor in the kernel, among other things. I did manage to run Docker in an unprivileged LXC container: HA installs and starts two containers (hassio-supervised and hassio-multicast), but they throw lots of errors.

Podman is designed to run unprivileged, so maybe it will be easier.

I’m testing now, without success so far. I’ve configured the CNI network to just use a layer 2 bridge attached to the existing bridge:

# cat /etc/cni/net.d/87-podman-bridge.conflist 
{
  "cniVersion": "0.4.0",
  "name": "podman",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "br24",
      "isGateway": false,
      "ipMasq": false,
      "hairpinMode": false,
      "ipam": {}
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": false
      }
    },
    {
      "type": "firewall"
    },
    {
      "type": "tuning"
    }
  ]
}

But when I now run an alpine image like this: podman run --log-level debug --network podman -it alpine

I get:

Error: container create failed (no logs from conmon): EOF

When I run in detached mode: podman run --log-level debug --network podman -d alpine

I get:

DEBU[0000] ExitCode msg: "prctl: invalid argument: oci runtime error" 
Error: OCI runtime error: prctl: Invalid argument

So no luck yet. I’m also quite new to the whole podman/docker thing and still reading my way into it. What’s the helloworld container?

podman run helloworld also fails:

DEBU[0000] Called run.PersistentPreRunE(podman run --log-level=debug helloworld) 
DEBU[0000] Merged system config "/etc/containers/containers.conf" 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /srv/containers/graphroot/libpod/bolt_state.db 
DEBU[0000] Using graph driver btrfs                     
DEBU[0000] Using graph root /srv/containers/graphroot   
DEBU[0000] Using run root /srv/containers/runroot       
DEBU[0000] Using static dir /srv/containers/graphroot/libpod 
DEBU[0000] Using tmp dir /run/libpod                    
DEBU[0000] Using volume path /srv/containers/graphroot/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "btrfs" 
DEBU[0000] Initializing event backend none              
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] configured OCI runtime uxc initialization failed: no valid executable found for OCI runtime uxc: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
DEBU[0000] Default CNI network name podman is unchangeable 
INFO[0000] Setting parallel job count to 7              
DEBU[0000] Pulling image helloworld (policy: missing)   
DEBU[0000] Looking up image "helloworld" in local containers storage 
DEBU[0000] Trying "helloworld" ...                      
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf" 
DEBU[0000] Trying "localhost/helloworld:latest" ...     
DEBU[0000] Trying "docker.io/library/helloworld:latest" ... 
DEBU[0000] Trying "registry.fedoraproject.org/helloworld:latest" ... 
DEBU[0000] Trying "registry.access.redhat.com/helloworld:latest" ... 
DEBU[0000] Trying "docker.io/library/helloworld:latest" ... 
✔ docker.io/library/helloworld:latest
DEBU[0003] Attempting to pull candidate docker.io/library/helloworld:latest for helloworld 
DEBU[0003] parsed reference into "[btrfs@/srv/containers/graphroot+/srv/containers/runroot]docker.io/library/helloworld:latest" 
Trying to pull docker.io/library/helloworld:latest...
DEBU[0003] Copying source image //helloworld:latest to destination image [btrfs@/srv/containers/graphroot+/srv/containers/runroot]docker.io/library/helloworld:latest 
DEBU[0003] Trying to access "docker.io/library/helloworld:latest" 
DEBU[0003] No credentials for docker.io found           
DEBU[0003] Using registries.d directory /etc/containers/registries.d for sigstore configuration 
DEBU[0003]  No signature storage configuration found for docker.io/library/helloworld:latest, using built-in default file:///var/lib/containers/sigstore 
DEBU[0003] Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io 
DEBU[0003] GET https://registry-1.docker.io/v2/         
DEBU[0003] Ping https://registry-1.docker.io/v2/ status 401 
DEBU[0003] GET https://auth.docker.io/token?scope=repository%3Alibrary%2Fhelloworld%3Apull&service=registry.docker.io 
DEBU[0004] GET https://registry-1.docker.io/v2/library/helloworld/manifests/latest 
DEBU[0004] Content-Type from manifest GET is "application/json" 
DEBU[0004] Accessing "docker.io/library/helloworld:latest" failed: reading manifest latest in docker.io/library/helloworld: errors:
denied: requested access to the resource is denied
unauthorized: authentication required 
DEBU[0004] Error pulling candidate docker.io/library/helloworld:latest: initializing source docker://helloworld:latest: reading manifest latest in docker.io/library/helloworld: errors:
denied: requested access to the resource is denied
unauthorized: authentication required 
Error: initializing source docker://helloworld:latest: reading manifest latest in docker.io/library/helloworld: errors:
denied: requested access to the resource is denied
unauthorized: authentication required
podman run hello-world
Error: OCI runtime error: prctl: Invalid argument

So we got stuck.

Switching to runc instead of crun, I get:

# podman run hello-world
Error: OCI runtime error: runc create failed: unable to start container process: error loading seccomp filter into kernel: error loading seccomp filter: invalid argument
root@turris:~# podman run --security-opt="seccomp=unconfined" hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (arm32v7)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Nice. Seccomp is disabled in the kernel for now, so the error message is expected.

I already reported it some time ago: [Kernel] [TOS 6.4.1] Seccomp is not enabled in the kernel (#416) · Issues · Turris / Turris OS / Turris Build · GitLab. So if you feel like it, go and complain there as well.
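If you want to check a kernel for seccomp support yourself, something like this should do (the Seccomp line in /proc/self/status only appears when the kernel was built with CONFIG_SECCOMP; the second command works only if the kernel exposes its config):

grep Seccomp /proc/self/status    # no Seccomp line at all => kernel built without CONFIG_SECCOMP
zcat /proc/config.gz 2>/dev/null | grep SECCOMP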

So now we need to figure out nested Docker-in-podman containers. I believe there will be problems, but we can start building Dockerfiles already :slight_smile:

And wait for a newer TOS version to use:

--net=slirp4netns
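Usage would then be something like this (untested here, since the slirp4netns package isn’t available yet):

podman run --rm --net=slirp4netns docker.io/library/alpine wget -qO- http://example.com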

I just tested with crun again, and hello-world also works with crun when you disable seccomp, so it’s nothing to do with the runtime (crun vs. runc).

However the alpine container still fails at conmon:

Jan  6 17:02:46 turris : conmon 2c6806dc1fb2753e9a88 <ninfo>: addr{sun_family=AF_UNIX, sun_path=/tmp/conmon-term.YG5BH2} 
Jan  6 17:02:46 turris : conmon 2c6806dc1fb2753e9a88 <ninfo>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach} 
Jan  6 17:02:46 turris : conmon 2c6806dc1fb2753e9a88 <ninfo>: terminal_ctrl_fd: 12 
Jan  6 17:02:46 turris : conmon 2c6806dc1fb2753e9a88 <ninfo>: winsz read side: 15, winsz write side: 15 
Jan  6 17:02:46 turris : conmon 2c6806dc1fb2753e9a88 <ninfo>: about to accept from console_socket_fd: 9 
Jan  6 17:02:46 turris : conmon 2c6806dc1fb2753e9a88 <ninfo>: about to recvfd from connfd: 11 
Jan  6 17:02:46 turris : conmon 2c6806dc1fb2753e9a88 <ninfo>: console = {.name = ' 
                                                                                   ڶ��'; .fd = 9} 

Definitely something wrong there… But when I run it in the background with podman run --security-opt="seccomp=unconfined" --log-level=debug -d alpine, it does seem to start now. But I can’t attach to it:

# podman container ps
ERRO[0000] OCI Runtime runc is in use by a container, but is not available (not in configuration file or not installed) 
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

Some more testing tells me there is something wrong with assigning a tty to the container (-t). If I run it on my laptop it runs fine as long as I use the -t switch; if I don’t, the alpine container stops immediately.

So running:

podman run --security-opt="seccomp=unconfined" --log-level=debug -dt alpine

Will give this error in syslog:

Jan  6 17:44:34 turris : conmon e00462617db39f7468e0 <ninfo>: console = {.name = ' �����'; .fd = 9}

Running:

podman run --security-opt="seccomp=unconfined" --log-level=debug -d alpine

Will result in the container stopping immediately.

That’s expected: if a container has no job to do, it stops.
The same happens with Docker.

Try running podman ps -a and see how many stopped containers there are.
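If you just want a long-running container you can poke at, give it a job that never exits and then exec into it, e.g. (the container name is arbitrary):

podman run --security-opt="seccomp=unconfined" -d --name testbox docker.io/library/alpine tail -f /dev/null
podman exec -it testbox /bin/sh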

# podman ps -a
CONTAINER ID  IMAGE                            COMMAND     CREATED       STATUS                   PORTS       NAMES
afae16cc79e7  docker.io/library/alpine:latest  /bin/sh     17 hours ago  Created                              laughing_wiles
ff60da850095  docker.io/library/alpine:latest  /bin/sh     17 hours ago  Exited (0) 17 hours ago              elegant_ellis
66f96d3bc7f9  docker.io/library/alpine:latest  /bin/sh     16 hours ago  Created                              dazzling_ganguly
5bf3706e6330  docker.io/library/alpine:latest  /bin/sh     16 hours ago  Created                              pensive_shamir
e00462617db3  docker.io/library/alpine:latest  /bin/sh     16 hours ago  Created                              jovial_lewin

I’m also trying to run a command in the alpine container, but that does not prevent the container from terminating:

# podman ps -a
CONTAINER ID  IMAGE                            COMMAND     CREATED         STATUS                     PORTS       NAMES
afae16cc79e7  docker.io/library/alpine:latest  /bin/sh     17 hours ago    Created                                laughing_wiles
ff60da850095  docker.io/library/alpine:latest  /bin/sh     17 hours ago    Exited (0) 17 hours ago                elegant_ellis
66f96d3bc7f9  docker.io/library/alpine:latest  /bin/sh     16 hours ago    Created                                dazzling_ganguly
5bf3706e6330  docker.io/library/alpine:latest  /bin/sh     16 hours ago    Created                                pensive_shamir
e00462617db3  docker.io/library/alpine:latest  /bin/sh     16 hours ago    Created                                jovial_lewin
541f83ca43af  docker.io/library/alpine:latest  /bin/sh     37 seconds ago  Exited (0) 37 seconds ago              fervent_gould
b99216e01ad3  docker.io/library/alpine:latest  ls -l /etc  11 seconds ago  Exited (0) 11 seconds ago              friendly_gauss

I’m just wondering whether we have a working setup or not. I wanted to test a simple container and access it over the network to see whether the networking setup works. If that does, I could start testing a Home Assistant container.
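Something like this is what I had in mind for the test (untested; busybox nc syntax as shipped in alpine, and the container’s address depends on your CNI config):

podman run --rm --network podman --security-opt="seccomp=unconfined" docker.io/library/alpine \
  /bin/sh -c 'echo hello from the container | nc -l -p 8080'

and then from another machine on the LAN: nc <container-ip> 8080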

FYI: I’m currently running the Home Assistant container on podman on my Turris. It worked out of the box, but I had to fiddle a lot with the CNI network config to get it right. It is now at:

{
  "cniVersion": "0.4.0",
  "name": "podman",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "br-lan",
      "isGateway": false,
      "ipMasq": false,
      "hairpinMode": false,
      "ipam": {
        "type": "static",
        "addresses": [
          {
            "address": "192.168.18.4/24",
	    "gateway": "192.168.18.254"
          }
        ],
	"routes": [
	  { "dst": "0.0.0.0/0" }
	],
        "dns": {
	  "nameservers": ["192.168.18.254"],
	  "domain": "dizzle.dom",
	  "search": ["dizzle.dom"]
	}
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": false
      }
    },
    {
      "type": "firewall"
    },
    {
      "type": "tuning"
    }
  ]
}

This is only valid for a single container; I have no idea yet how you can assign a specific address to a specific container (see the sketch at the end of this post). At least it runs and I can access the web interface. The only error I got from the logs was:

[homeassistant.components.dhcp] Cannot watch for dhcp packets: [Errno 1] Operation not permitted

Which makes sense.
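As for per-container addresses: I suspect that replacing the static IPAM section with the host-local plugin and passing --ip at run time would do it, but I haven’t tried this yet (the subnet and range values are just examples matching my LAN):

      "ipam": {
        "type": "host-local",
        "ranges": [
          [
            {
              "subnet": "192.168.18.0/24",
              "rangeStart": "192.168.18.4",
              "rangeEnd": "192.168.18.9",
              "gateway": "192.168.18.254"
            }
          ]
        ],
        "routes": [
          { "dst": "0.0.0.0/0" }
        ]
      }

And then start each container with its own address, e.g. podman run --network podman --ip 192.168.18.5 …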