Help with unreachable LXC Container

I’m looking for help with a problem I’m having on my Turris Omnia 2020 (firmware TurrisOS 5.1.10).
I bought a USB removable disk (a Samsung T7 SSD formatted with ext4) to use as a mount point on the router. I attached it to the rear USB port of the router, then created an LXC container based on Ubuntu Focal and installed dnscrypt-proxy2 + pihole in it.

I assigned the IP to the pihole via a static lease with lease time “infinite”, then I set this IP as a custom DNS server on the WAN interfaces (WAN and LTE) and as a DHCP option on the LAN interface.
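For reference, the equivalent `/etc/config/dhcp` entries on TurrisOS/OpenWrt would look roughly like this (a sketch; `192.168.1.2` is a placeholder IP, and the MAC is the one from the container config further down):

```
# /etc/config/dhcp (sketch; 192.168.1.2 is a placeholder)
config host
        option name 'pihole'
        option mac 'f2:d4:d7:61:79:50'
        option ip '192.168.1.2'
        option leasetime 'infinite'

config dhcp 'lan'
        option interface 'lan'
        # DHCP option 6: advertise the pihole as the DNS server to LAN clients
        list dhcp_option '6,192.168.1.2'
```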

Everything seems to work perfectly: I’m using Cloudflare as the upstream DNS server, DoH is active, and pihole is working fine.

The problem is that after a few hours the IP of the pihole is no longer reachable, so DNS stops working. The LXC container’s status doesn’t change, but to reach the pihole IP again I have to restart the container.
I used the router without any problems all last night; this morning, as soon as I woke up, I had to restart the container because of this problem.
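Until the root cause is found, the restart step could be automated from cron. A hypothetical watchdog sketch (the IP and container name are placeholders, set via `PIHOLE_IP`/`CONTAINER`):

```shell
#!/bin/sh
# Hypothetical workaround (not a fix): restart the LXC container when the
# pihole IP stops answering pings. PIHOLE_IP and CONTAINER are placeholders.
PIHOLE_IP="${PIHOLE_IP:-127.0.0.1}"
CONTAINER="${CONTAINER:-pihole}"

reachable() {
    # two pings, 2-second timeout each; succeeds if at least one reply arrives
    ping -c 2 -W 2 "$1" >/dev/null 2>&1
}

if ! reachable "$PIHOLE_IP"; then
    logger -t pihole-watchdog "$PIHOLE_IP unreachable, restarting $CONTAINER"
    lxc-stop -n "$CONTAINER" -t 30
    lxc-start -n "$CONTAINER"
fi
```

Run it every few minutes from the router’s crontab; it does nothing as long as the pihole answers.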

What am I doing wrong? Does the USB device go into sleep mode even though it is an SSD? The hdd-idle service is disabled.
I removed knot-resolver (kresd) to avoid conflicts; I don’t know if this has anything to do with it.
Can you help me?

dnscrypt-proxy configuration:

server_names = ['cloudflare']
listen_addresses = ['']
max_clients = 250
ipv4_servers = true
ipv6_servers = false
dnscrypt_servers = true
doh_servers = true
require_dnssec = true
require_nolog = true
require_nofilter = true
disabled_server_names = []
force_tcp = false

pihole configuration:

pihole DHCP server disabled

Static lease

Mount point

LXC-Container configuration

# Template used to create this container: /usr/share/lxc/templates/lxc-download
# Parameters passed to the template: --dist Ubuntu --release Focal --arch armv7l --server --no-validate
# For additional config options, please look at lxc.container.conf(5)

# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)

# Some workarounds
# Template to generate fixed MAC address

# Distribution configuration
lxc.arch = armv7l

# Container specific configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.hook.start-host = /usr/share/lxc/hooks/systemd-workaround
lxc.rootfs.path = btrfs:/srv/lxc/pihole/rootfs
lxc.uts.name = pihole
lxc.start.auto = 1
lxc.start.delay = 1

# Network configuration
lxc.net.0.type = veth
lxc.net.0.link = br-lan
lxc.net.0.flags = up
lxc.net.0.name = eth0
lxc.net.0.hwaddr = f2:d4:d7:61:79:50
lxc.net.0.ipv4.address =
lxc.net.0.ipv4.gateway =

dnscrypt-proxy service status

pihole status

not sure if this helps…
ad_dhcp: if you are using pihole as DHCP, keep the router’s DNS service as well; you can have two DNS servers. You can add the pihole ip/hostname to /etc/hosts and luci/networking/hostnames.
ad_static_lease: ensure that the IP you picked for the pihole is not within the reserved pool of your main DHCP server.
ad_ip_change: check the lease and config files for the mac/ip of your pihole; maybe you will find two entries. Some time ago I had two IPs assigned, one from the static lease and a second from DHCP. I had to use this tip:

You need to set the interface to "manual" in the "/etc/network/interfaces" file, or else dhcpcd5 can’t assign the static IP address configured in "/etc/dhcpcd.conf" to the interface.

ad_lxc: config seems fine, including the networking part.
ad_hdd_idle: that should not be an issue.
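To check for the duplicate-entry situation described under ad_ip_change, assuming a dnsmasq-based setup (on OpenWrt/TurrisOS the active leases usually live in /tmp/dhcp.leases), a quick sketch:

```shell
# Sketch: print any MAC address that appears more than once in the dnsmasq
# lease file (/tmp/dhcp.leases is the usual location on OpenWrt/TurrisOS).
LEASES="${LEASES:-/tmp/dhcp.leases}"
# dnsmasq lease format: <expiry> <mac> <ip> <hostname> <client-id>
awk '{print $2}' "$LEASES" 2>/dev/null | sort | uniq -d
```

An empty output means each MAC holds exactly one lease.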

thanks for your reply Maxmilian, I think I’ve solved the problem:
I reinstalled knot-resolver and set a redirect to


then I removed the daily network-interface restart from the crontab (I didn’t even remember setting it):
00 05 * * * /etc/init.d/network restart

I have verified that restarting the LAN interface is the main cause of the problem: each time it happens, the LXC container must also be restarted.
The changes seem to have worked; the IP of the pihole is still reachable and everything works normally.

Let me just add a note about ordering the DNS stacks when filtering. Your approach is apparently:

  • LAN client -> knot-resolver -> pi-hole -> some upstream resolver (cloudflare in your case)

The issue is doing DNSSEC validation (knot-resolver) after filtering has happened (pi-hole), rather than the other way around. It’s much better to filter after the last validator, or inside it: filtering is a modification, and the validators have no way to tell whether it was intentional/authorized, so (if the filtered name is DNSSEC-protected) they will retry repeatedly and eventually return a failure instead of some kind of empty result. For the end client (e.g. a browser) it will look similar, except with worse latency, caching, etc., and you will spend extra resources on the ping-pong of repeated attempts and failures. Therefore, if you desire this combination, I’d instead recommend switching the order to:

  • LAN client -> pi-hole -> knot-resolver (…)

On the internet side you also get more options, e.g. you may turn off forwarding and let knot-resolver ask authoritative servers directly instead of relying on other resolvers (like cloudflare). I think vanilla pi-hole can’t do that. Or you may use forwarding over TLS.
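For the forwarding-over-TLS variant, a minimal kresd policy sketch (the Cloudflare addresses and hostname are the well-known public ones, used here only as an example; where the snippet lives depends on how your kresd configuration is included):

```lua
-- Sketch: forward all queries to Cloudflare over TLS while kresd still
-- validates DNSSEC locally; replace the addresses with your preferred upstream
policy.add(policy.all(policy.TLS_FORWARD({
    {'1.1.1.1', hostname='cloudflare-dns.com'},
    {'1.0.0.1', hostname='cloudflare-dns.com'},
})))
```

Removing the `policy.TLS_FORWARD` rule entirely gives the other option mentioned above: kresd then resolves by querying authoritative servers directly.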


thanks for your suggestion, I’ll give it a try :smiley:

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.