LXC container veth interface gets disabled

Hi all,

I am trying to run an LXC container on my Turris router. I had this running in the past with no problems, but now I can’t get it working.

My configuration of the container is pretty basic. Please see below.

# For additional config options, please look at lxc.container.conf(5)

# Uncomment the following line to support nesting containers:
lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)

# # Some workarounds
# Template to generate fixed MAC address

# Distribution configuration
lxc.arch = armv7l

# Container specific configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.hook.start-host = /usr/share/lxc/hooks/systemd-workaround
lxc.rootfs.path = btrfs:/srv/lxc/ubuntu-lxc/rootfs
lxc.uts.name = ubuntu-lxc

# Network configuration
lxc.net.0.type = veth
lxc.net.0.link = br-lab
lxc.net.0.flags = up
lxc.net.0.name = eth0
lxc.net.0.hwaddr = d2:71:cf:e6:67:c0
#lxc.net.0.ipv4.address = 192.168.1.30/24
#lxc.net.0.ipv4.gateway = 192.168.1.1

Basically, when I enable DHCP on the interface where the veth is supposed to be bridged, the container starts and gets an IP address from the DHCP server. However, shortly afterwards the interface simply gets disabled, and I don’t know why that is happening.

Nov 29 10:07:32 turris dnsmasq-dhcp[4467]: DHCPDISCOVER(br-lab) d2:71:cf:e6:67:c0
Nov 29 10:07:32 turris dnsmasq-dhcp[4467]: DHCPOFFER(br-lab) 192.168.1.243 d2:71:cf:e6:67:c0
Nov 29 10:07:32 turris dnsmasq-dhcp[4467]: DHCPDISCOVER(br-lab) d2:71:cf:e6:67:c0
Nov 29 10:07:32 turris dnsmasq-dhcp[4467]: DHCPOFFER(br-lab) 192.168.1.243 d2:71:cf:e6:67:c0
Nov 29 10:07:32 turris dnsmasq-dhcp[4467]: DHCPREQUEST(br-lab) 192.168.1.243 d2:71:cf:e6:67:c0
Nov 29 10:07:32 turris dnsmasq-dhcp[4467]: DHCPACK(br-lab) 192.168.1.243 d2:71:cf:e6:67:c0 ubuntu-lxc
Nov 29 09:07:33 turris dhcp_host_domain_ng.py: DHCP add new hostname [ubuntu-lxc,192.168.1.243]
Nov 29 09:07:33 turris dhcp_host_domain_ng.py: Refresh kresd leases
Nov 29 10:07:42 turris kernel: [48142.090058] br-lab: port 3(vethirXRFU) entered disabled state


[48126.674218] eth0: renamed from vethpXubjD
[48126.740006] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[48126.740127] IPv6: ADDRCONF(NETDEV_CHANGE): vethirXRFU: link becomes ready
[48126.740219] br-lab: port 3(vethirXRFU) entered blocking state
[48126.740231] br-lab: port 3(vethirXRFU) entered forwarding state
[48142.090058] br-lab: port 3(vethirXRFU) entered disabled state

The behavior is the same when I leave the DHCP server enabled but assign a static IP in the configuration of the container.

If I disable the DHCP server, I can assign a static IP via the configuration and the interface does NOT get disabled. Unfortunately that is not an option, because then I have no DNS resolution.

It seems I can’t set the DNS server from the LXC configuration, nor can I create a persistent resolv.conf myself. The container appears to be using systemd-resolved, which regenerates resolv.conf automatically when it gets a DHCP lease.
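For the DNS part, one workaround I have seen (a sketch only, not something tied to the interface problem): inside the container, /etc/resolv.conf is typically a symlink to systemd-resolved’s stub file, and replacing it with a regular file pins the DNS server. The 192.168.1.1 address below is an assumption, taken from the commented-out gateway in my config.

```
# Inside the container, /etc/resolv.conf is usually a symlink to
# /run/systemd/resolve/stub-resolv.conf. Removing the symlink and
# creating a plain file stops systemd-resolved from managing it:
#
#   rm /etc/resolv.conf
#
# Then /etc/resolv.conf contains just:
# (192.168.1.1 is assumed here, matching the gateway in the config above)
nameserver 192.168.1.1
```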

Does anyone have any suggestions to either fix the interface issue or set the DNS server statically?

Many thanks.

Kind regards,
Martin

Try comparing the settings in LuCI under Network → Interfaces → Devices.

Check whether br-lan and br-lab have the same settings.

The settings are exactly the same on both bridges. The issue is also the same regardless of whether I bridge the veth to br-lan or br-lab.

The only way to keep the interface from being disabled is to disable the DHCP server.

Which Ubuntu are you using? There were some reports here on the forum that newer Ubuntu images try to run “cloud-init” and networking fails because of that. Maybe that’s your case too. You could lxc-attach into the container and check some basic things, like whether there is a link on the interface, or try running dhclient from inside the container.

I am using Ubuntu 22.04.

Thanks a lot. You were right, it was because of cloud-init. I removed cloud-init from the container, and now the interface no longer gets disabled by its failures.
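For anyone hitting the same thing who would rather keep cloud-init installed, a sketch of cloud-init’s documented disable mechanisms (I have not tested these on Turris, so treat the paths as assumptions to verify in your image):

```
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
# Tells cloud-init to leave the container's network configuration alone,
# while keeping the rest of cloud-init functional:
network: {config: disabled}

# Alternatively, an empty marker file disables cloud-init entirely:
#   touch /etc/cloud/cloud-init.disabled
```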

I didn’t think of checking inside the container; I assumed the problem was on the host system, since that is where the interface gets disabled.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.