My understanding of Omnia’s policy is that it always directs clients to query Omnia’s own IP. Omnia’s resolver can then be configured, e.g., to forward to some other IPs (with caching and, by default, DNSSEC validation).
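For reference, a minimal sketch of such forwarding in knot-resolver’s Lua config (the file path varies between Turris OS versions, and the upstream IPs here are just examples):

```lua
-- custom kresd config fragment (path may differ, e.g. /etc/kresd/custom.conf)
-- forward all queries to chosen upstreams; caching still happens locally
policy.add(policy.all(policy.FORWARD({'1.1.1.1', '9.9.9.9'})))
```

Clients keep asking Omnia’s IP; only the resolver’s upstream changes.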
I would expect that you simply protect the IP layer and let DNS go the usual way (over the encrypted IP). Of course, one can certainly set things up in other ways…
There is a difference in expectation then. My expectation is to be at liberty (it being my network, with TO there to facilitate it) to use the remote endpoint’s resolver inside the wg tunnel, as opposed to being limited to the local resolver routed over the wg tunnel. DNS settings for the WAN iface are honoured, so why not for the WG iface?
Apparently one cannot, as described in the initial post. Just like the DNS settings for the WAN iface, the DNS settings for the WG iface should be honoured too, not neglected.
That’s my feeling from long-term observation, and it makes sense to me to default to that. I’m really only concerned with things inside knot-resolver, not much around it. Some people have certainly configured Omnia’s DHCP to hand out other IPs for DNS.
If there were an (un)official TO policy of that kind, it would be rather silly to allow DNS settings for the WAN iface but not for the tunnel iface, would it not? Thus it seems more like a bug/oversight that the tunnel DNS is neglected, perhaps not being parsed at all.
Apparently it does not, to me at least, else I would not have opened this thread. IMHO TO is there to facilitate, not to tell me how to run things.
Be that as it may, that is their choice, or their misunderstanding of the settings. But perhaps that is for another thread.
That is kind of you to offer potential workarounds, but I am afraid they are not a solution, since each amounts to everyone using either the local DNS resolver or the remote DNS resolver. The router’s clients, however, are split-routed: some via WAN and some via WG, and each group is to use the respective DNS resolver. Which would be a piece of cake if it were not for this bug…
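As an aside, if the split is known per client (say, by MAC address), dnsmasq’s tag mechanism might hand out different resolvers on the LAN side. A sketch, with made-up MACs and IPs:

```ini
# dnsmasq config fragment — per-client DNS via tags (MACs/IPs are placeholders)
# tag the wg-routed client
dhcp-host=aa:bb:cc:dd:ee:ff,set:viawg
# wg-routed clients get the remote resolver
dhcp-option=tag:viawg,option:dns-server,172.24.120.10
# everyone else gets the local resolver
dhcp-option=tag:!viawg,option:dns-server,192.168.1.1
```

Whether the remote resolver is then reachable depends on it being within the tunnel’s allowed IPs.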
Not sure how WireGuard works, but if it is a separate interface and it uses the default DHCP server (dnsmasq), it should be possible to hand out separate DNS servers by setting them per interface in the dnsmasq config file.
Sure, dhcp-option=wg0,6,172.24.120.10 should do the trick, except it does not. I reckon such a dnsmasq setting does not work on virtual/tunnel interfaces, only physical ones. AFAIK it does not work with the OpenVPN tun either; there the DNS option is to be set within the OpenVPN configuration, as it is with wg.
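For comparison, this is how the DNS option looks when set within the wg configuration itself (wg-quick syntax; keys, addresses, and endpoint are placeholders):

```ini
# wg0.conf (wg-quick) — DNS is set in the tunnel config, not via dnsmasq
[Interface]
PrivateKey = <private key>
Address = 172.24.120.2/24
# resolver at the remote endpoint, used while the tunnel is up
DNS = 172.24.120.10

[Peer]
PublicKey = <peer public key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0
```

On OpenWrt-based systems like Turris this roughly corresponds to `list dns` on the wg interface in /etc/config/network, which is exactly the setting this thread reports as being ignored.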