How to set bandwidth limits and priority per interface/network

I’ve recently purchased the Turris Omnia because I wanted a good, open hardware and Free Software powered router.
First of all, thanks for making such a product.
I decided to get a router, instead of just using my ISP modem, because I wanted to have more control over my home network.
For instance I would like to create three different networks, all isolated and with different download and upload bandwidth limits.
Moreover I would like to be able to set a priority for the different networks, to make sure to always have bandwidth for the most important devices in my house.

To accomplish all this I created three interfaces with LuCI web GUI and I’ve set up different firewall zones for each of them.
So far so good. Now I wanted to set bandwidth limits.
Using Foris I downloaded the suggested community SQM package and used it to set download and upload limits per interface in LuCI.

First question.
Is it normal that in LuCI GUI download and upload limits are inverted?
Download is actually upload limit and vice versa.

Second question.
How do I set a bandwidth priority based on IP or, even better, on interface association?

Third question.
Is the SQM package the right tool for the job?
Or maybe should I install some other package more suited to what I want to accomplish?
I’m willing to use the command line and I use it every day, but I’d rather have some sort of GUI in LuCI or Foris.

I’m very happy with my purchase but, as with all new things, I’m still learning how to use it.

2 Likes

See https://openwrt.org/docs/guide-user/network/traffic-shaping/sqm-details#faq. Direction really is a property of the interface, and for LAN-side interfaces, download and upload are indeed inverted compared to internet down- and upload. More precise terms would be ingress and egress (as these are clearly interface-specific), but then we would need to explain all of this to all SQM users instead of just those advanced users that instantiate SQM on non-WAN interfaces. Does this help?
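As an illustration (a sketch only; the interface name and rates are placeholders): for an SQM instance on a LAN-side interface such as br-guest, the config options keep their interface-relative meaning:

```
config queue 'guest'
	option interface 'br-guest'
	# "download" shapes this interface's ingress: traffic arriving
	# from the guest hosts, i.e. their internet *upload*
	option download '10000'
	# "upload" shapes this interface's egress: traffic sent towards
	# the guest hosts, i.e. their internet *download*
	option upload '45000'
```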

Well, that is not how SQM works by default, which is per-flow fairness, so that all flows will get a fair share of the available capacity. Now it turns out that some applications like bit torrent can recruit large numbers of flows and hence get an “unfair” advantage over low-flow count applications/users. For these situations cake offers isolation modes that (first) share fairly between internal IP-addresses (and optionally secondarily for each internal host also apply per flow fairness). That way no single host can hog more than its fair share of capacity (a share which with just one host active can be 100%). For many home networks that is all that seems needed for acceptable sharing. Sure it does not solve the torrent and web browser on the same host situation, but many users are happy enough to have the torrents running on a different computer than their browser.
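As a hedged sketch of that per-internal-host isolation (this follows the OpenWrt SQM wiki’s recipe; treat the exact option names as something to verify against your SQM version), the relevant bits of /etc/config/sqm would look roughly like:

```
config queue 'wan'
	option interface 'eth1'
	option qdisc 'cake'
	option script 'piece_of_cake.qos'
	option qdisc_advanced '1'
	option qdisc_really_really_advanced '1'
	# ingress/download: share fairly per internal destination host
	option iqdisc_opts 'nat dual-dsthost ingress'
	# egress/upload: share fairly per internal source host
	option eqdisc_opts 'nat dual-srchost'
```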
That said, SQM can also be configured to evaluate a few diffserv markings to steer packets into a small number of different priority tiers (see simple.qos and layer_cake.qos), but the onus is on the user to correctly label packets with the correct DSCPs, which is especially tricky for packets coming from the internet, as SQM’s qdiscs run before iptables.

Yes and no. The default scripts do not seem to offer what you want, but you can easily create your own scripts and just use the existing SQM framework to interface your script with OpenWrt/TOS. Have a look at /usr/lib/sqm. To start, just run:

cp /usr/lib/sqm/simple.qos /usr/lib/sqm/my_simple.qos
cp /usr/lib/sqm/simple.qos.help /usr/lib/sqm/my_simple.qos.help

and edit the two my_simple.* files to your heart’s content. Have a look at /usr/lib/sqm/defaults.sh and /usr/lib/sqm/functions.sh to get a feel for the existing helper functions.

That said, I would recommend you simply try to configure your WAN interface with SQM with say layer_cake.qos and follow the instructions for per-internal-host fairness. That will not be exactly what you want, but should be a decent starting point from which to explore the solution space without foregoing a competent modern AQM configuration until you converge on your optimal settings.

5 Likes

Thanks for your answer.
It was very thorough and informative; now I’ve got a lot to read, study and hopefully understand! :wink:

1 Like

Don’t hesitate to ask questions here in this thread, be it about configuration or the theoretical principles behind SQM.

I tried SQM with my Mox (TurrisOS 5.1.1) just to see the impact. Using the recommended settings the download speed dropped from 52 MBit/s to 8 MBit/s and the upload from 11 MBit/s to 4 MBit/s.
But I got an “A” rating for buffer bloat :laughing:

Adjusting the egress and ingress rates increased the speed, but not even close to the original: the maximum was 28 MBit/s.
Since there was only very little other traffic in my network I was expecting similar up- and download speeds with and without SQM.

Am I missing something?

My settings were:
ingress: 45000
egress: 10000
discipline: cake with script piece_of_cake
link: ATM with 44 bytes of overhead

Peter

Interesting, but maybe better to discuss this in a new dedicated thread?

To try to help I will need to see the output of the following commands issued from the router’s command line:

ifstatus wan
cat /etc/config/sqm
tc -s qdisc
tc -d qdisc

Also it would be helpful to have both a speedtest with SQM enabled and one with SQM disabled. I think the dslreports speedtest is quite decent, or alternatively the results from fast.com with the settings changed to test both directions for >= 30 seconds.

Is it possible to fork a thread?

Anyway, I used the recommended speed test at DSLReports.
Here are the results. You can clearly see the 4 times where I disabled SQM (the first one, the last one and two in the middle to verify that the line is still Ok).

[screenshot: DSLReports speedtest results showing the four runs with SQM disabled]

After enabling SQM again with the above mentioned settings I get attached output of the commands you mention.

Peter

{
	"up": true,
	"pending": false,
	"available": true,
	"autostart": true,
	"dynamic": false,
	"uptime": 1587452,
	"l3_device": "eth0",
	"proto": "dhcp",
	"device": "eth0",
	"metric": 0,
	"dns_metric": 0,
	"delegation": true,
	"ipv4-address": [
		{
			"address": "192.168.2.100",
			"mask": 24
		}
	],
	"ipv6-address": [
		
	],
	"ipv6-prefix": [
		
	],
	"ipv6-prefix-assignment": [
		
	],
	"route": [
		{
			"target": "0.0.0.0",
			"mask": 0,
			"nexthop": "192.168.2.1",
			"source": "192.168.2.100/32"
		}
	],
	"dns-server": [
		"192.168.2.1"
	],
	"dns-search": [
		"speedport.ip"
	],
	"neighbors": [
		
	],
	"inactive": {
		"ipv4-address": [
			
		],
		"ipv6-address": [
			
		],
		"route": [
			
		],
		"dns-server": [
			
		],
		"dns-search": [
			
		],
		"neighbors": [
			
		]
	},
	"data": {
		"leasetime": 1814400
	}
}

config queue 'eth1'
	option interface 'eth1'
	option qdisc_advanced '0'
	option debug_logging '0'
	option verbosity '5'
	option qdisc 'cake'
	option script 'piece_of_cake.qos'
	option linklayer 'atm'
	option overhead '44'
	option enabled '1'
	option download '45000'
	option upload '10000'

qdisc noqueue 0: dev lo root refcnt 2 
qdisc mq 0: dev eth0 root 
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
qdisc cake 8035: dev eth1 root refcnt 9 bandwidth 10Mbit besteffort triple-isolate nonat nowash no-ack-filter split-gso rtt 100.0ms atm overhead 44 
qdisc ingress ffff: dev eth1 parent ffff:fff1 ---------------- 
qdisc noqueue 0: dev lan1 root refcnt 2 
qdisc noqueue 0: dev lan2 root refcnt 2 
qdisc noqueue 0: dev lan3 root refcnt 2 
qdisc noqueue 0: dev lan4 root refcnt 2 
qdisc noqueue 0: dev lan5 root refcnt 2 
qdisc noqueue 0: dev lan6 root refcnt 2 
qdisc noqueue 0: dev lan7 root refcnt 2 
qdisc noqueue 0: dev lan8 root refcnt 2 
qdisc noqueue 0: dev lan9 root refcnt 2 
qdisc noqueue 0: dev lan10 root refcnt 2 
qdisc noqueue 0: dev lan11 root refcnt 2 
qdisc noqueue 0: dev lan12 root refcnt 2 
qdisc noqueue 0: dev lan13 root refcnt 2 
qdisc noqueue 0: dev lan14 root refcnt 2 
qdisc noqueue 0: dev lan15 root refcnt 2 
qdisc noqueue 0: dev lan16 root refcnt 2 
qdisc noqueue 0: dev br-guest root refcnt 2 
qdisc noqueue 0: dev br-lan root refcnt 2 
qdisc noqueue 0: dev br-office root refcnt 2 
qdisc noqueue 0: dev br-printer root refcnt 2 
qdisc noqueue 0: dev br-testing root refcnt 2 
qdisc noqueue 0: dev br-wlan root refcnt 2 
qdisc cake 8036: dev ifb4eth1 root refcnt 2 bandwidth 45Mbit besteffort triple-isolate nonat wash no-ack-filter split-gso rtt 100.0ms atm overhead 44 

qdisc noqueue 0: dev lo root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root 
 Sent 13985429198 bytes 72307043 pkt (dropped 0, overlimits 0 requeues 1) 
 backlog 0b 0p requeues 1
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn 
 Sent 13985429198 bytes 72307043 pkt (dropped 0, overlimits 0 requeues 1) 
 backlog 0b 0p requeues 1
  maxpacket 1466 drop_overlimit 0 new_flow_count 652 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc cake 8035: dev eth1 root refcnt 9 bandwidth 10Mbit besteffort triple-isolate nonat nowash no-ack-filter split-gso rtt 100.0ms atm overhead 44 
 Sent 100177517 bytes 73927 pkt (dropped 3467, overlimits 214537 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 792Kb of 4Mb
 capacity estimate: 10Mbit
 min/max network layer size:           28 /    1500
 min/max overhead-adjusted size:      106 /    1749
 average network hdr offset:           18

                  Tin 0
  thresh         10Mbit
  target          5.0ms
  interval      100.0ms
  pk_delay        7.5ms
  av_delay        4.6ms
  sp_delay         14us
  backlog            0b
  pkts            77394
  bytes       105078902
  way_inds           96
  way_miss          262
  way_cols            0
  drops            3467
  marks               0
  ack_drop            0
  sp_flows            2
  bk_flows            1
  un_flows            0
  max_len          7590
  quantum           305

qdisc ingress ffff: dev eth1 parent ffff:fff1 ---------------- 
 Sent 4355258 bytes 32876 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan1 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan2 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan3 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan4 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan5 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan6 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan7 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan8 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan9 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan10 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan11 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan12 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan13 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan14 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan15 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan16 root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-guest root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-lan root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-office root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-printer root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-testing root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-wlan root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc cake 8036: dev ifb4eth1 root refcnt 2 bandwidth 45Mbit besteffort triple-isolate nonat wash no-ack-filter split-gso rtt 100.0ms atm overhead 44 
 Sent 4813064 bytes 32874 pkt (dropped 2, overlimits 6791 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 96768b of 4Mb
 capacity estimate: 45Mbit
 min/max network layer size:           50 /    1456
 min/max overhead-adjusted size:      106 /    1696
 average network hdr offset:           14

                  Tin 0
  thresh         45Mbit
  target          5.0ms
  interval      100.0ms
  pk_delay        314us
  av_delay         24us
  sp_delay          9us
  backlog            0b
  pkts            32876
  bytes         4815522
  way_inds            0
  way_miss            1
  way_cols            0
  drops               2
  marks               0
  ack_drop            0
  sp_flows            0
  bk_flows            1
  un_flows            0
  max_len          1470
  quantum          1373

:man_facepalming:
Wrong interface. Using eth0 works much better.

But it is still about 10% slower than without SQM. Is this expected?

Peter

Well, the settings for traffic shapers always set the gross rate, while speedtests typically measure net throughput, mostly TCP goodput.
You set the download gross shaper to 45 Mbps, while your speedtest shows that you see ~50 Mbps of goodput.
AND you have configured ATM as link layer, which by itself drags in an overhead of ~9% :wink:

With your download speed I am almost certain that your link does not use ATM/AAL5 (ADSL2+ tops out at ~25 Mbps; in theory ATM can be used with VDSL2 as well, but so far I have not found anybody unlucky enough).

From your shaper setting 45/10 Mbps, with ATM and overhead 44 I expect the following TCP/IPv4 goodput:
45 * ((1500-20-20) / (ceil((1500+44)/48)*53)) = 37.56432247
10 * ((1500-20-20) / (ceil((1500+44)/48)*53)) = 8.34762721555
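To make the arithmetic above reproducible, here is a small sketch that computes the same numbers with awk (available in the router’s busybox shell); the 1500-byte MTU and 40 bytes of TCP/IPv4 headers are the assumptions already used above:

```shell
# expected TCP/IPv4 goodput through an ATM/AAL5 shaper: payload per
# packet divided by the ATM wire cost of that packet, where each
# 48-byte cell costs 53 bytes on the wire
awk 'BEGIN {
  mtu = 1500; hdr = 40; overhead = 44
  cells = int((mtu + overhead + 47) / 48)   # ceil((mtu+overhead)/48)
  eff = (mtu - hdr) / (cells * 53)          # goodput fraction of gross rate
  printf "%.2f Mbps down, %.2f Mbps up\n", 45 * eff, 10 * eff
}'
# → 37.56 Mbps down, 8.35 Mbps up
```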

So, what ISP and what plan are you using, to help us figure out the true link layer (and likely overhead)?

Also, your WAN IP 192.168.2.100 really looks like the SQM device is not directly connected to the internet (as this is a private IP range not intended to be routed, and for carrier-grade NAT typically the 10.* range of IP addresses is used)…

Forgive me if I slightly change topic, but I would like to get back to my initial question.

Is the package luci-app-qos more suitable for my aim?
I must say luci-app-qos doesn’t seem to work if I add download and upload limits per interface.
Perhaps Turris OS uses other packages that disable luci-app-qos?
Or maybe I don’t know how to properly use it.

Many thanks for answering my beginner’s questions so patiently.

Since I’m living in the last house on a long connection line I don’t trust the robustness of my connection too much, and therefore want to have only one company to blame if things do not work at all.
So I’m still connected via Deutsche Telekom’s Speedport with the following parameters:
DSL link Synchronous
Downstream 55248 kbit/s
Upstream 12060 kbit/s

The Mox is directly connected to this DSL-Modem/Router. All other devices (apart from a VoIP gateway) go through the Mox to reach the internet.

What would be the correct link layer for this setup? And does SQM even make sense in this case?

Peter

I last used luci-app-qos like 10 years ago, so I am not too familiar with its capabilities.
But as I said, start by setting up SQM properly on your wan interface, to debloat the internet. Then you can go and split up your network into subnetworks and separate these by VLANs; that way you can instantiate SQM for each of the VLAN-tagged CPU interfaces to the switch, which will effectively give you something close to what you want, except that it will not throttle WiFi-to-internet traffic at all, but will still throttle WiFi-to-VLAN traffic.
On OpenWrt there is also luci-app-nft-qos, which seems close to what you seem to desire, but as I never used that myself, I am not sure whether it is a good fit.

That pretty much is guaranteed to be VDSL2/PTM instead of ADSL/ATM then. Also it means PPPoE.

So, set link-layer to ethernet and set the overhead to 34 (to account for the ethernet, PTM and PPPoE headers your packets will carry over the bottleneck DSL link).
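As a sketch, the resulting /etc/config/sqm queue section could look like this (interface name and rate values are illustrative placeholders, not a verified configuration):

```
config queue 'wan'
	option interface 'eth0'
	option qdisc 'cake'
	option script 'piece_of_cake.qos'
	# VDSL2/PTM with PPPoE: ethernet link layer, 34 bytes per-packet overhead
	option linklayer 'ethernet'
	option overhead '34'
	option download '50000'
	option upload '10000'
	option enabled '1'
```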

A VoIP call takes around 100Kbps per direction, so make sure to leave enough slack in your shaper settings that the number of concurrent phone calls you want to have will fit between the shaper setting and the sync.
Please keep in mind that with VDSL2 there is a 64/65 encoding that eats into your sync (and probably also some G.INP overhead), so I would set the shaper lower than:

55248 * 64/65 = 54398.0307692 kbit/s
12060 * 64/65 = 11874.4615385 kbit/s

IMHO 50000 / 10000 would be my starting point; maybe I would set the download a bit lower, and I would add the ingress keyword to the advanced qdisc options for the ingress qdisc.
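The same 64/65 arithmetic as a quick awk check (sync rates taken from the post above):

```shell
# usable rate after VDSL2's 64/65 encoding, from the reported sync rates
awk 'BEGIN {
  printf "down: %.0f kbit/s\n", 55248 * 64 / 65   # ~54398
  printf "up:   %.0f kbit/s\n", 12060 * 64 / 65   # ~11874
}'
```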

1 Like

Thanks a lot @moeller0.

With these settings I now have very similar up- and download rates with and without SQM. But the bufferbloat rating improved from a “C” to an “A” or even “A+”.
I also added per-host isolation in the hope that my wife can work from home even if my son is at his PC, too :slight_smile:

root@turris:~# tc -d qdisc | grep -i cake
qdisc cake 80e3: dev eth0 root refcnt 9 bandwidth 11Mbit besteffort dual-srchost nat nowash no-ack-filter split-gso rtt 100.0ms noatm overhead 34 
qdisc cake 80e4: dev ifb4eth0 root refcnt 2 bandwidth 53Mbit besteffort dual-dsthost nat wash ingress no-ack-filter split-gso rtt 100.0ms noatm overhead 34 

Peter

2 Likes

Could I convince you to post any outcome of this experiment back to this thread? That is, whether your household’s happiness with the internet improved and whether e.g. video conferencing works concurrently with on-line gaming (and/or, more to the point, game downloading)?

I would love to give some hard numbers here.
I have to see what I can do. In the cases where we had problems it was not possible to locate the problem, whether it was our network, the company’s network or the server itself.

So the first feedback is quite positive. My son has the feeling that some web pages where a lot of content is loaded on demand are now more responsive than before.
Also his download of huge updates didn’t impact at all my web experience.

1 Like

Thanks moeller0 for the patience and the information you provided.
As you wrote, I started by debloating my internet following the OpenWrt docs.
I previously had a BufferBloat rating of B, now I have A+.
And indeed web browsing seems more responsive, but I didn’t do any specific measurements.

I found out that the default settings in an SQM instance didn’t do anything for debloating my interfaces.
As written in the OpenWrt docs, I had to change the queuing discipline for every instance from fq_codel to cake and change the setup script from simple.qos to piece_of_cake.qos as well.
For my wan interface I’ve even added Link Layer Adaptation for VDSL2 and a per-packet overhead of 34 bytes.

Now my new challenge is to understand those setup scripts’ lingo and decide if I really want to do some sort of interface prioritisation.
With my current settings I should get a pretty equal and fair distribution of bandwidth per interface, right?
Thanks to your previous post I was able to set up download and upload limits per interface correctly.
The only thing I did not understand is the exception in this sentence:

Then you can go and split up your network into subnetworks and separate these by VLANs, that way you then can instantiate SQM for each of the VLAN tagged CPU interfaces to the switch, which effectively will give you something close, except that it will not throttle Wi-Fi to internet traffic at all, but still throttle Wi-Fi to VLAN traffic.

Using SQM I was able to successfully limit both download and upload for my Wi-Fi interface.
What traffic is Wi-Fi to internet?
But do not worry, I’m working my way to fully grasp all your written knowledge!

That depends; the default is per-flow fairness, so all flows are treated equally. Say we have a link of 10 speed units (SU) and 10 flows that all want to send as much as possible: each flow will get 1 SU, but with only two flows each flow would get 5 SUs (and a single flow would get 10 SUs). Leftover “speed” from flows that do not use up their share gets evenly distributed among those flows that want more “speed”.
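To make the leftover-redistribution concrete, here is a small max-min fair-share sketch in awk; the four demand values and the 10-SU link are made-up numbers, and real cake enforces this packet by packet rather than in one pass:

```shell
# max-min fair allocation on a 10-SU link for four flows that want
# 1, 2, 8 and 9 SUs respectively (demands sorted ascending):
# each unsatisfied flow gets an equal split of the remaining capacity
awk 'BEGIN {
  n = split("1 2 8 9", want); cap = 10
  for (i = 1; i <= n; i++) {
    share = cap / (n - i + 1)                   # equal split of what is left
    got[i] = (want[i] < share) ? want[i] : share
    cap -= got[i]
  }
  printf "%g %g %g %g\n", got[1], got[2], got[3], got[4]   # → 1 2 3.5 3.5
}'
```

The two small flows get exactly what they asked for, and the two greedy flows split the leftover capacity evenly, just as described above.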

The configuration described in the “making cake sing and dance” section of the detailed SQM page in the OpenWrt wiki will change that, so that first the speed is shared equitably among all active internal IP addresses, and then for each internal IP address all flows are again treated equally (as described above).

Since I am not exactly sure which configuration you use, I would assume one or the other to be true in your case.

Sure, that works, but sometimes people want unlimited speed between WiFi and (some) LAN ports but limited speed between WiFi and some other LAN ports, and if you instantiate SQM on the WiFi interface all WiFi traffic will be affected, that is WiFi to WAN and WiFi to all LAN ports. Which again is not what some people expect.

My Wi-Fi interface has practically no real LAN.
It’s completely isolated and devices can’t speak to each other.
I actually use WiFi just for untrusted or guest devices like TV, phones, game consoles.

However my cabled LAN is a different story.
Let me get this straight: if I set egress and ingress limits, I limit both “download” and “upload” towards the WAN (e.g. internet browsing) and towards other LAN devices as well (e.g. rsyncing files to a home server)?