How to set bandwidth limits and priority per interface/network

Exactly: shaping traffic on an interface affects all traffic traveling over that interface by default. You can design a system where only specific traffic gets shaped, but if the goal is to keep bufferbloat low, that is not that straightforward.

As James Bond in Thunderball said, you can’t win them all.

In my case it’s a bit of a downside, because I use traffic shaping to always guarantee my server the bandwidth it needs, even if someone on the home LAN is downloading with BitTorrent.

Oh well, I guess my GoldenEye: Source LAN party will be a bit slower than it could have been.
At least it won’t have any bloat in those 007 packets!

This is why per-internal-IP-fairness mode was added. It is by no means a perfect solution for all use cases, but it keeps the BitTorrent problem at bay (or rather restricted to the computer running the BitTorrent client) while still allowing full-speed internal traffic over the LAN switch. So local traffic to your server will not be affected, and internet traffic to and from the server will at worst be just as large as the torrent traffic (from one client), since your server will be treated as its own independent internal IP. But I have a hunch that might not be enough bandwidth for the server?

I’m sorry, but I don’t know how to answer your question.
I did not understand your last post very well.
How would per-internal-IP-fairness mode actually work, with some real numbers?

My internet speed is roughly 6 Mbit in upload, and my intention was to reserve 3 of these 6 just for my home server. However, it would be desirable to limit just the internet traffic and not the LAN traffic (the rsync-to-another-computer scenario).

From what you wrote, this per-internal-IP-fairness mode seems to be what I’m looking for, but I don’t know how to activate it.
I’ve made all SQM QoS adjustments using LuCI.

Okay, say you have your game server and BitTorrent (on another computer) running, and both want to use as much upload as possible: each will get one half of your available rate, so 6/2 = 3 Mbps in your example. If a third computer is active, each machine will only get 6/3 = 2 Mbps. Any part of a machine’s share that it does not use is shared among the others.
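
(For the curious: under the hood this host fairness comes from cake’s dual-srchost/dual-dsthost keywords, plus the nat keyword so that cake looks at the internal, pre-NAT addresses. A hand-rolled sketch of roughly what SQM sets up for you; the device names eth2/ifb4eth2 and the rates are placeholders:)

    # Egress: fairness per internal source IP
    tc qdisc replace dev eth2 root cake bandwidth 6mbit nat dual-srchost
    # Ingress: fairness per internal destination IP (SQM redirects incoming
    # traffic to a companion ifb device and shapes it there)
    tc qdisc replace dev ifb4eth2 root cake bandwidth 26mbit nat dual-dsthost ingress
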
Does that help?

Sure, I understand. You are lucky in that doing this for the upload/egress direction is considerably simpler than for the download direction (but still not that simple).

Ah, in the LuCI GUI:

  1. navigate to Network → SQM QoS,
  2. select the “Queue discipline” tab,
  3. check the checkbox labeled “Show and Use Advanced Configuration. Advanced options will only be used as long as this box is checked.”,
  4. check the checkbox labeled “Show and Use Dangerous Configuration. Dangerous options will only be used as long as this box is checked.”,
  5. add nat dual-dsthost ingress to the field called “Advanced option string to pass to the ingress queueing disciplines; no error checking, use very carefully.”, and
  6. add nat dual-srchost to the field called “Advanced option string to pass to the egress queueing disciplines; no error checking, use very carefully.”

That should do the trick. But, as I tried to explain, that is not exactly what you are asking for and might not give you tight enough guarantees; it should, however, keep the one BitTorrent client from stealing all/most bandwidth for itself…
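
For the record, the same can also be done without LuCI by editing /etc/config/sqm and then restarting SQM with /etc/init.d/sqm restart. A sketch of a WAN instance; the section name, interface, and rates here are placeholders to adjust to your link:

    config queue 'wan'
            option enabled '1'
            option interface 'eth2'                   # WAN device
            option download '20000'                   # ingress rate in kbit/s (placeholder)
            option upload '6000'                      # egress rate in kbit/s (placeholder)
            option qdisc 'cake'
            option script 'piece_of_cake.qos'
            option qdisc_advanced '1'                 # the “Advanced Configuration” checkbox
            option qdisc_really_really_advanced '1'   # the “Dangerous Configuration” checkbox
            option iqdisc_opts 'nat dual-dsthost ingress'
            option eqdisc_opts 'nat dual-srchost'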

Hope that helps, if you have questions, just ask.

Your posts are always very illuminating.
However, the subject matter is still pretty much unknown to me.
I’m just a humble web developer who started getting into Linux and the low-level side of computing only in the last couple of years.
Nonetheless, your help is incredibly encouraging, and I promise I’ll get better at it.

If I got everything you said, you are suggesting to keep the egress and ingress limits per interface and to activate per-internal-IP-fairness mode.
How can this mode change the fact that the egress and ingress limits apply both to internet traffic and to LAN traffic (à la rsync from one computer inside my home network to another computer on the same network, maybe even on the same interface)?

I probably forgot the details about your specific set-up (I try to help too many SQM users to keep all the details straight, sorry), but SQM on a WAN interface will leave all internal traffic (LAN to LAN, WiFi to WiFi, LAN to WiFi, WiFi to LAN) alone; it will only affect traffic traversing the internet access link. Now, as I said, that might not be an answer that matches your challenge/set-up; in that case, please be gentle and explain your issue again.

Sorry, I didn’t want to sound rude.
I truly appreciate your help, you are indeed a blessing!

My setup is pretty simple, I think.
I’ve got a modem (the ISP’s), my Turris Omnia, and three different interfaces on it:

  1. a trusted cabled LAN for my desktop and laptop, or similar devices;
  2. a somewhat isolated home server LAN, where my home server lives and which I connect to when I need to do maintenance on it. Its settings are usually stricter and less permissive than the trusted LAN’s;
  3. finally, a completely isolated Wi-Fi LAN for untrusted devices like phones, TVs, and consoles.

My dream would be to limit the amount of internet traffic used by these interfaces so that a certain needed amount is always guaranteed to everyone. E.g. I would like my home server to always have access to 3 Mbit of my total 6 Mbit of upload bandwidth. However, it wouldn’t make sense to limit the LAN traffic of my home server interface to 3 Mbit when rsyncing between two servers on the same interface, or for other possible LAN traffic, even between different interfaces.
However if this is not possible, I’ll live with it.

Thank you again for the support and your precious knowledge.

Oh, no worries, you did not sound rude at all.

That is easily achieved with per-internal-IP fairness in cake. But that will not meet your second requirement of giving your server 50% of your uplink.

And for the egress direction that might also not be that hard; we might be able to get this done in layer_cake.qos with a few modifications to the script.
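
Just to illustrate the idea (this is not what layer_cake.qos does, and everything here, i.e. the device eth2, the server address 192.168.2.10, and the rates, is an assumption for illustration): a guarantee like “the server always gets 3 of 6 Mbit on egress” is classically built with an HTB hierarchy, where each class is guaranteed its rate but may borrow up to the ceiling while the other is idle. On a real SQM setup you would fold this into the script rather than replace cake by hand:

    # Guarantee the server ~3 Mbit of a 6 Mbit uplink on egress; either class
    # may borrow up to the 6 Mbit ceiling while the other is idle.
    tc qdisc add dev eth2 root handle 1: htb default 20
    tc class add dev eth2 parent 1: classid 1:1 htb rate 6mbit ceil 6mbit
    tc class add dev eth2 parent 1:1 classid 1:10 htb rate 3mbit ceil 6mbit  # server share
    tc class add dev eth2 parent 1:1 classid 1:20 htb rate 3mbit ceil 6mbit  # everybody else
    tc qdisc add dev eth2 parent 1:10 fq_codel   # keep each class debloated
    tc qdisc add dev eth2 parent 1:20 fq_codel
    # On a NATed WAN device the internal source address is already rewritten by
    # the time the packet reaches the egress qdisc, so mark the server's packets
    # in the firewall and classify on that mark:
    iptables -t mangle -A FORWARD -s 192.168.2.10 -j MARK --set-mark 10
    tc filter add dev eth2 parent 1: protocol ip prio 1 handle 10 fw flowid 1:10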

But the first step would be to make sure you have per-internal-IP fairness configured and tested. So, have you done that and how do you rate that experience?

I’ve set per-internal-IP-fairness mode on my WAN interface, without removing the limits on all the other interfaces.
I’ve followed the steps you wrote. I’ll test my network during the week.

Was I supposed to activate per-internal-IP-fairness mode only on my WAN interface, or on all of them (trusted LAN, server LAN, Wi-Fi LAN)? Was I supposed to remove the egress and ingress limits on my other interfaces as well when activating this mode?

By the way, happy new year!

Could you please post the output of tc -s qdisc again, that way I can see how many SQM instances are in play.

That would be my thought, given that your internal bandwidth is either 100 or 1000 Mbps, which is much higher than your external upload of 6 Mbps.

That depends on what you want to achieve. Personally, knowing how computationally intensive traffic shapers like SQM’s cake or HTB can be, I would always try to get away with the lowest number of shapers possible. But “possible” very much depends on what you want to achieve ;), so my “lowest possible” might be quite different from yours.

This is the output of tc -s qdisc

qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root
 Sent 57075450 bytes 86029 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 57075450 bytes 86029 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc mq 0: dev eth1 root
 Sent 541381952 bytes 601038 pkt (dropped 0, overlimits 0 requeues 6)
 backlog 0b 0p requeues 6
qdisc fq_codel 0: dev eth1 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 541381952 bytes 601038 pkt (dropped 0, overlimits 0 requeues 6)
 backlog 0b 0p requeues 6
  maxpacket 1514 drop_overlimit 0 new_flow_count 5 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc cake 8031: dev eth2 root refcnt 9 bandwidth 5157Kbit besteffort dual-srchost nat nowash no-ack-filter split-gso rtt 100.0ms noatm overhead 34
 Sent 16898993 bytes 62540 pkt (dropped 0, overlimits 1525 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 9792b of 4Mb
 capacity estimate: 5157Kbit
 min/max network layer size:           28 /    1500
 min/max overhead-adjusted size:       62 /    1534
 average network hdr offset:           14

                  Tin 0
  thresh       5157Kbit
  target          5.0ms
  interval      100.0ms
  pk_delay        618us
  av_delay         44us
  sp_delay          2us
  backlog            0b
  pkts            62540
  bytes        16898993
  way_inds           28
  way_miss          636
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len          1514
  quantum           300

qdisc ingress ffff: dev eth2 parent ffff:fff1 ----------------
 Sent 59599190 bytes 79357 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan2 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan3 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan4 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 8035: dev br-lan root refcnt 2 bandwidth 15Mbit besteffort triple-isolate nonat nowash no-ack-filter split-gso rtt 100.0ms raw overhead 0
 Sent 52285631 bytes 63559 pkt (dropped 4635, overlimits 95722 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 208094b of 4Mb
 capacity estimate: 15Mbit
 min/max network layer size:           42 /    1514
 min/max overhead-adjusted size:       42 /    1514
 average network hdr offset:           14

                  Tin 0
  thresh         15Mbit
  target          5.0ms
  interval      100.0ms
  pk_delay        1.5ms
  av_delay         95us
  sp_delay          2us
  backlog            0b
  pkts            68194
  bytes        55471997
  way_inds           19
  way_miss          566
  way_cols            0
  drops            4635
  marks               0
  ack_drop            0
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len          3028
  quantum           457

qdisc ingress ffff: dev br-lan parent ffff:fff1 ----------------
 Sent 14825495 bytes 52985 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 8039: dev br-larry_lan root refcnt 2 bandwidth 17500Kbit besteffort triple-isolate nonat nowash no-ack-filter split-gso rtt 100.0ms raw overhead 0
 Sent 4394473 bytes 9289 pkt (dropped 4, overlimits 4150 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 45356b of 4Mb
 capacity estimate: 17500Kbit
 min/max network layer size:           42 /    1514
 min/max overhead-adjusted size:       42 /    1514
 average network hdr offset:           14

                  Tin 0
  thresh      17500Kbit
  target          5.0ms
  interval      100.0ms
  pk_delay        357us
  av_delay         45us
  sp_delay          2us
  backlog            0b
  pkts             9293
  bytes         4400235
  way_inds            0
  way_miss          454
  way_cols            0
  drops               4
  marks               0
  ack_drop            0
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len          1514
  quantum           534

qdisc ingress ffff: dev br-larry_lan parent ffff:fff1 ----------------
 Sent 2495083 bytes 9326 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 803d: dev wlan0 root refcnt 5 bandwidth 10Mbit besteffort triple-isolate nonat nowash no-ack-filter split-gso rtt 100.0ms raw overhead 0
 Sent 97989 bytes 303 pkt (dropped 0, overlimits 123 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 8512b of 4Mb
 capacity estimate: 10Mbit
 min/max network layer size:           42 /    1506
 min/max overhead-adjusted size:       42 /    1506
 average network hdr offset:           10

                  Tin 0
  thresh         10Mbit
  target          5.0ms
  interval      100.0ms
  pk_delay        1.9ms
  av_delay        174us
  sp_delay          2us
  backlog            0b
  pkts              303
  bytes           97989
  way_inds            0
  way_miss           15
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            0
  bk_flows            1
  un_flows            0
  max_len          1506
  quantum           305

qdisc ingress ffff: dev wlan0 parent ffff:fff1 ----------------
 Sent 49540 bytes 303 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 8032: dev ifb4eth2 root refcnt 2 bandwidth 26Mbit besteffort dual-dsthost nat wash ingress no-ack-filter split-gso rtt 100.0ms noatm overhead 34
 Sent 60969958 bytes 79348 pkt (dropped 9, overlimits 53283 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 70116b of 4Mb
 capacity estimate: 26Mbit
 min/max network layer size:           46 /    1500
 min/max overhead-adjusted size:       80 /    1534
 average network hdr offset:           14

                  Tin 0
  thresh         26Mbit
  target          5.0ms
  interval      100.0ms
  pk_delay        568us
  av_delay         39us
  sp_delay          4us
  backlog            0b
  pkts            79357
  bytes        60983512
  way_inds           19
  way_miss          755
  way_cols            0
  drops               9
  marks               0
  ack_drop            0
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len          3608
  quantum           793

qdisc cake 8036: dev ifb4br-lan root refcnt 2 bandwidth 3Mbit besteffort triple-isolate nonat wash no-ack-filter split-gso rtt 100.0ms raw overhead 0
 Sent 14186242 bytes 52063 pkt (dropped 922, overlimits 22848 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 89280b of 4Mb
 capacity estimate: 3Mbit
 min/max network layer size:           60 /    1506
 min/max overhead-adjusted size:       60 /    1506
 average network hdr offset:           14

                  Tin 0
  thresh          3Mbit
  target          6.1ms
  interval      101.1ms
  pk_delay        2.3ms
  av_delay        209us
  sp_delay          4us
  backlog            0b
  pkts            52985
  bytes        15567243
  way_inds          114
  way_miss          667
  way_cols            0
  drops             922
  marks               0
  ack_drop            0
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len          1506
  quantum           300

qdisc cake 803a: dev ifb4br-larry_la root refcnt 2 bandwidth 3500Kbit besteffort triple-isolate nonat wash no-ack-filter split-gso rtt 100.0ms raw overhead 0
 Sent 2620007 bytes 9322 pkt (dropped 4, overlimits 3954 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 17856b of 4Mb
 capacity estimate: 3500Kbit
 min/max network layer size:           60 /    1514
 min/max overhead-adjusted size:       60 /    1514
 average network hdr offset:           14

                  Tin 0
  thresh       3500Kbit
  target          5.2ms
  interval      100.2ms
  pk_delay        4.8ms
  av_delay        301us
  sp_delay          4us
  backlog            0b
  pkts             9326
  bytes         2625605
  way_inds            1
  way_miss          519
  way_cols            0
  drops               4
  marks               0
  ack_drop            0
  sp_flows            5
  bk_flows            1
  un_flows            0
  max_len          1514
  quantum           300

qdisc cake 803e: dev ifb4wlan0 root refcnt 2 bandwidth 1750Kbit besteffort triple-isolate nonat wash no-ack-filter split-gso rtt 100.0ms raw overhead 0
 Sent 53782 bytes 303 pkt (dropped 0, overlimits 31 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 5Kb of 4Mb
 capacity estimate: 1750Kbit
 min/max network layer size:           42 /    1392
 min/max overhead-adjusted size:       42 /    1392
 average network hdr offset:           10

                  Tin 0
  thresh       1750Kbit
  target         10.4ms
  interval      105.4ms
  pk_delay        3.1ms
  av_delay         77us
  sp_delay          2us
  backlog            0b
  pkts              303
  bytes           53782
  way_inds            0
  way_miss           18
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            0
  bk_flows            1
  un_flows            0
  max_len          1392
  quantum           300

I trust your opinion. My network has very few devices (4/5 max), so all this could easily be a bit overengineered. If you say it’s better to keep it simple, I happily will.

Right now I have:

  1. WAN interface: ingress limit 26000 kbit/s and egress limit 5157 kbit/s, plus per-internal-IP-fairness mode active;
  2. trusted cabled LAN interface: egress limit 15000 kbit/s and ingress limit 3000 kbit/s;
  3. server LAN interface: egress limit 17500 kbit/s and ingress limit 3500 kbit/s;
  4. untrusted Wi-Fi interface: egress limit 10000 kbit/s and ingress limit 1750 kbit/s.
On all of these interfaces I use the cake queuing discipline and the piece_of_cake.qos setup script. Per-internal-IP-fairness mode is active only on WAN.
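
(In /etc/config/sqm this corresponds to one config queue section per shaped interface; a sketch using the device names from the tc dump above, with arbitrary section labels and the qdisc/script options omitted:)

    config queue 'wan'
            option interface 'eth2'
            option download '26000'
            option upload '5157'

    config queue 'lan'
            option interface 'br-lan'
            option download '3000'
            option upload '15000'

    config queue 'server_lan'
            option interface 'br-larry_lan'
            option download '3500'
            option upload '17500'

    config queue 'wifi'
            option interface 'wlan0'
            option download '1750'
            option upload '10000'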

What do you suggest I remove, what should I leave, and what should I change?

Thanks, but you had better just take this as information from which you distill/create your own policy; I have no real idea about what you want to achieve in your network…

That is a matter of taste. Personally, I find it easier to reason about and make predictions for simpler systems than for more complex ones, so I tend to aim for as simple as acceptable; but, as I said, it is a matter of taste. Now, simplicity is only a secondary goal; the primary goal is still to make the local network operate smoothly. IMHO it is best to start simple and only add layers of complexity if performance without them is not good enough (typically I aim for good enough rather than perfect, but that too is a “matter of taste”).

That sounds like a decent starting point for getting a home network connected to the ISP with little bufferbloat.

If that works for you, just leave it ;). I would certainly try to see whether the individual shapers are actually needed. I can see the desire to throttle the untrusted WiFi, and maybe even the LAN, but I would probably try to remove the SQM instance for the server: since you already limit everything else, that will make sure there is enough capacity left for the server (5157 - 3500 = 1657 kbps)…

BTW, what is the output of brctl show?

This is the output of brctl show:

bridge name	bridge id		STP enabled	interfaces
br-lan		7fff.d858d7011bcd	no		lan0
							lan1
							lan2
br-larry_lan		7fff.d858d7011bcd	no		lan3
							lan4

In the end I decided to follow your suggestions, but with a twist.
I’ve throttled the untrusted Wi-Fi and my server LAN, whereas I left the trusted LAN with no limits.

My reasoning was that using per-internal-IP-fairness mode on WAN (the nat dual-dsthost ingress and nat dual-srchost settings on the WAN interface) prevents the trusted LAN from stealing all the available internet bandwidth. Moreover, since I know how the server will be used, I can calculate the maximum bandwidth for the server LAN without letting that interface hog all the resources.
This way I can give my trusted cabled LAN some slack, in case of that famous rsync-to-another-LAN-computer scenario or maybe a LAN game party.

Every interface still uses the cake queuing discipline with the piece_of_cake.qos setup script to debloat traffic.
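
(A quick sanity check, assuming eth2/ifb4eth2 as in the dump above; the fairness keywords should show up in the qdisc summary:)

    tc qdisc show dev eth2 | grep -o dual-srchost        # egress: per internal source IP
    tc qdisc show dev ifb4eth2 | grep -o dual-dsthost    # ingress: per internal destination IP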

Do you think my setup makes sense?

Typically it is sufficient to use one SQM/cake instance on WAN to debloat a network, but occasionally there are bad actors inside a network that also require special care, typically at places of large speed transitions, like from Gigabit Ethernet to 100 Mbps Ethernet. But if your current setup works as intended, just use it that way… Every network is different, and it is the network’s admin (you) that needs to set the policy; SQM is merely one of the tools to do so :wink:

Thanks for all the help, I’m very happy with my home network.

I noticed that disabling the SQM instance on the limitless trusted LAN has no effect on bufferbloat. So I deleted that SQM instance, because I didn’t need it.
Unfortunately I can’t say the same for the instances where shapers are active.
Both the untrusted Wi-Fi and the server LAN, if I use the default queuing discipline fq_codel and the default setup script simple.qos, drop back to A or B in the bufferbloat rating.
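
(For reference, these are the two options that differ per SQM instance in /etc/config/sqm; just a sketch, not my full config:)

    option qdisc 'cake'                 # instead of the default 'fq_codel'
    option script 'piece_of_cake.qos'   # instead of the default 'simple.qos'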

IMHO that is the best way forward anyway: create a configuration based on first principles, and then see how/if it actually works as intended and whether it is required at all.

Personally, I do not give too much importance to the one-letter bufferbloat classification on dslreports, but rather look at the detailed bufferbloat plots for the three individual tests (by clicking on the links named “Idle”, “Downloading”, and “Uploading” under the three summary bars in the default bufferbloat plot). Not sure whether I posted this already, but have you read https://forum.openwrt.org/t/sqm-qos-recommended-settings-for-the-dslreports-speedtest-bufferbloat-testing/2803 yet?