Stuck trying to get qos-scripts working on the router

I’ve been looking for a QoS package that would allow me to prioritise one device over another on the network (e.g. priority to the Firestick over a mobile phone).

As far as I’m aware, only one QoS package allows you to configure device-specific prioritisation, and that is qos-scripts together with its LuCI counterpart, luci-app-qos.

With my old TP-Link router, I got this package working fine, but for some reason, when I try to run qos-scripts on the Turris, it doesn’t work:

root@turris:~# /etc/init.d/qos start
RTNETLINK answers: Invalid argument
We have an error talking to the kernel
RTNETLINK answers: Invalid argument
We have an error talking to the kernel

Any ideas on how to fix this or on how to prioritise certain devices over others using alternate packages would be highly appreciated!

Install the cake QoS and you shouldn’t have to worry. I have several net streaming devices (including Firesticks, Roku, Apple TV) and have never had any issues, even with multiple devices in use.

I’ve got Cake QoS setup at the moment using the tutorial but for all intents and purposes, there’s no difference.

On my old router, I had this package working and was able to prioritise traffic just fine. Is there any alternative package I can use to prioritise by device?

What exactly is not working as you expect with sqm-scripts and cake? And what kind of internet link do you use (ADSL, VDSL, DOCSIS, FTTH, …) and what bandwidth do you have available? And finally, what is your network topology, i.e. how is the omnia connected to the internet and how do your clients (like your firestick) connect to the omnia?
Could you post the output of the following commands (with your cake configuration active):
cat /etc/config/sqm
tc -s qdisc

Maybe your sqm configuration can be tweaked to work better for your needs…

Best Regards

I appreciate this isn’t a very ‘scientific’ answer, but I can’t help but feel that when I used my old TP-Link LEDE router with qos-scripts, I was able to watch stuff on the Firestick with no buffering, and the QoS also capped the speed on my desktop so that it wouldn’t hog the bandwidth reserved for the Firestick.

By contrast, with SQM/Cake, I’m unable to replicate that same effect where, for example, the PC is throttled to give way to the Firestick.

As for my home internet connection, it’s an ADSL connection with speeds between 12-15 Mbps down and 0.70-0.90 Mbps up.

This is what my /etc/config/sqm looks like:

  config queue 'eth1'
      option interface 'eth1'
      option ingress_ecn 'ECN'
      option egress_ecn 'ECN'
      option itarget 'auto'
      option etarget 'auto'
      option download '15299'
      option upload '1019'
      option debug_logging '0'
      option verbosity '5'
      option script 'piece_of_cake.qos'
      option qdisc_advanced '1'
      option squash_dscp '1'
      option squash_ingress '1'
      option qdisc_really_really_advanced '1'
      option iqdisc_opts 'nat dual-dsthost'
      option eqdisc_opts 'nat dual-srchost'
      option linklayer 'atm'
      option overhead '44'
      option qdisc 'fq_codel'
      option enabled '1'

This is what my tc -s qdisc looks like: https://pastebin.com/v8647fpr

option qdisc 'fq_codel'

Shouldn’t this be cake?


Ah, okay, the pastebin shows that the ATM accounting seems to not actually work correctly.
Could you also post the output of “tc -d qdisc” please?

Please try to add:
option linklayer_adaptation_mechanism 'cake'

and also change
option qdisc 'fq_codel'
to
option qdisc 'cake'
even though this is mostly cosmetic.

The tc -s qdisc output also shows meta packets in use on up- and downstream (max_len in the cake report and maxpacket in the fq_codel reports are significantly larger than 1500 bytes). This will most likely not work too well with your relatively slow uplink (then again, cake should solve this issue automatically).

In theory it should; in reality piece_of_cake hardcodes QDISC=cake at the very beginning, so this is mostly cosmetic. (The cake scripts intend to use more of the unique cake features that will not work with other qdiscs at all, so there is no use in allowing QDISC=fq_codel here.) Still, well-spotted inconsistency :wink:

Best Regards

Here is a copy of my new /etc/config/sqm:

config queue 'eth1'
        option interface 'eth1'
        option ingress_ecn 'ECN'
        option egress_ecn 'ECN'
        option itarget 'auto'
        option etarget 'auto'
        option download '15299'
        option upload '1019'
        option debug_logging '0'
        option verbosity '5'
        option script 'piece_of_cake.qos'
        option qdisc_advanced '1'
        option squash_dscp '1'
        option squash_ingress '1'
        option qdisc_really_really_advanced '1'
        option iqdisc_opts 'nat dual-dsthost'
        option eqdisc_opts 'nat dual-srchost'
        option linklayer 'atm'
        option overhead '44'
        option qdisc 'cake'
        option linklayer_adaptation_mechanism 'cake'
        option enabled '1'

And this is what tc -d qdisc looks like:

qdisc noqueue 0: dev lo root refcnt 2
qdisc mq 0: dev eth0 root
qdisc fq_codel 0: dev eth0 parent :1 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth0 parent :2 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth0 parent :3 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth0 parent :4 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth0 parent :5 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth0 parent :6 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth0 parent :7 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth0 parent :8 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc cake 8054: dev eth1 root refcnt 9 bandwidth 1019Kbit besteffort dual-srchost nat wash rtt 100.0ms atm overhead 44
qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
qdisc mq 0: dev eth2 root
qdisc fq_codel 0: dev eth2 parent :1 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth2 parent :2 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth2 parent :3 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth2 parent :4 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth2 parent :5 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth2 parent :6 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth2 parent :7 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev eth2 parent :8 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc noqueue 0: dev br-lan root refcnt 2
qdisc mq 0: dev wlan0 root
qdisc fq_codel 0: dev wlan0 parent :1 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev wlan0 parent :2 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev wlan0 parent :3 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev wlan0 parent :4 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc noqueue 0: dev wlan1 root refcnt 2
qdisc noqueue 0: dev vethMYE7GA root refcnt 2
qdisc fq_codel 0: dev ifb0 root refcnt 2 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 0: dev tun0 root refcnt 2 limit 1024p flows 1024 quantum 1500 target 5.0ms interval 100.0ms ecn
qdisc cake 8055: dev ifb4eth1 root refcnt 2 bandwidth 15299Kbit besteffort dual-dsthost nat wash rtt 100.0ms atm overhead 44

Even though I’ve set my qdisc to cake, is it normal for there to be so many fq_codel lines when I run the tc command?


Also, I don’t know if this helps, but on my modem, my MTU is currently set at 1500.

Ooops, this looks like your Sky Hub is your primary router, and the omnia is behind the Sky Hub’s NAT. But that should still work for per-internal-IP fairness…

I would guess that there is simply one fq_codel instance per queue per interface, indicating that the eth interfaces have 8 queues…

How sure are you about the 1019 Kbps upload? What does the modem report as synchronization bandwidth for ingress and egress?

This might indicate that you are lucky and that your ISP uses baby-jumbo frames, potentially so that the PPPoE overhead does not eat into the internet-visible MTU. (Does your ISP use PPPoE?) How sure are you about the overhead 44 number?

Best Regards

I should have clarified that earlier on to be honest! The Sky Hub is my modem/router and the Omnia is plugged into the Sky Hub as 192.168.0.2 and configured as the DMZ (thus avoiding double NAT) as per the screenshot in my earlier post.


As far as the Sky Hub is concerned…


According to the Sky Hub, I am connected via PPPoA. As for the overhead of 44, I just copied it over from the cake tutorial to be honest.

Ah, okay, but the little pedantic person inside of me needs to point out that you are still running double NAT unless your ISP assigns you 192.168.0.2 as the WAN IPv4 address (which, given its RFC 1918 “Address Allocation for Private Internets” nature, seems quite unlikely). I would guess you have:
external public IP assigned to SkyHubWAN → 192.168.0.0/24 assigned to SkyHubLAN (assuming no other device is connected to the Sky Hub via cable or WLAN) → 192.168.0.2 assigned to TurrisOmniaWAN → 192.168.1.0/24 assigned to TurrisOmniaLAN/WLAN
Is that correct? If yes, you still have double NAT, even though you most likely will not see any port remapping between SkyHubWAN and TurrisOmniaWAN. But as I said, this should not really affect the per-internal-host-IP fairness.

IngressSynch: 20208
EgressSynch: 1282

IngressShaper: 15299Kbps
EgressShaper: 1019Kbps

Okay, especially with link layer accounting configured these values are very conservative and should work. (Heck, for egress, with the overhead and link layer accounted for correctly, you should be able to specify 1282 Kbps, and you might want to try this.) Now, if your ISP actually uses upstream shapers/policers, that would be a different kettle of fish, but luckily we can sort of test for that: if you hook your computer up directly to the Sky Hub and use no traffic shaping/QoS at all, what speeds can you measure with the dslreports speedtest? (From that we can calculate back to the gross sync bandwidth required for the measured goodput.) Have a look at https://forum.lede-project.org/t/sqm-qos-recommended-settings-for-the-dslreports-speedtest-bufferbloat-testing/2803 for recommendations on how to set up that speedtest, but be advised that with your upstream you will most likely only be able to use 4 upstream streams.
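To put a number on “very conservative”, here is a quick sketch of what fraction of the reported sync bandwidth the current shaper settings actually use (plain awk; the figures are the sync and shaper values quoted in this thread):

```shell
# Sync:   ingress 20208 Kbps, egress 1282 Kbps (reported by the Sky Hub)
# Shaper: ingress 15299 Kbps, egress 1019 Kbps (from /etc/config/sqm)
awk 'BEGIN {
    printf "ingress shaper: %.1f%% of sync\n", 100 * 15299 / 20208
    printf "egress shaper:  %.1f%% of sync\n", 100 * 1019 / 1282
}'
# -> ingress shaper: 75.7% of sync
# -> egress shaper:  79.5% of sync
```

Both shapers sit 20-25% below sync, which is why there is headroom to raise the egress figure toward 1282 Kbps once the link-layer overhead is accounted for correctly.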

Question: with these slightly modified settings how is your perception of shaper fairness (and how exactly are you testing that)?

Ah, 44 is quite conservative and should work to keep latency-under-load increases under control (during epochs of network saturation), at the cost of a slightly larger bandwidth sacrifice than you intended. You could follow the instructions on https://github.com/moeller0/ATM_overhead_detector and try to empirically deduce the applicable overhead on your link.

Best Regards

Spot on description!


The following tests were run with the desktop PC hooked up directly to the Sky Hub with no other tabs open etc:

First run: Download streams = 8, Upload streams = 4

Second run: Download streams = 16, Upload streams = 16


I have run this and the results are as follows:

  According to http://ace-host.stuart.id.au/russell/files/tc/tc-atm/
  10 bytes overhead indicate
  Connection: PPPoA, VC/Mux RFC-2364
  Protocol (bytes): PPP (2), ATM AAL5 SAR (8) : Total 10
  Add the following to both the egress root qdisc:
  A) Assuming the router connects over ethernet to the DSL-modem:
  stab mtu 2048 tsize 128 overhead 10 linklayer atm
  Add the following to both the ingress root qdisc:
  A) Assuming the router connects over ethernet to the DSL-modem:
  stab mtu 2048 tsize 128 overhead 10 linklayer atm
  Elapsed time is 773.995 seconds.
  Done...
  ans = [](0x0)
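The detector’s 10-byte figure matches PPPoA VC/Mux framing (PPP 2 bytes + ATM AAL5 SAR 8 bytes, as listed above). As a sanity check, here is a sketch of the per-packet ATM cell arithmetic for a full-size 1500-byte IP packet with that overhead:

```shell
# ATM carries data in cells of 48 payload bytes plus a 5-byte header
# (53 bytes on the wire per cell); each packet is padded out to a
# whole number of cells.
awk 'BEGIN {
    payload = 1500 + 10                 # IP packet + 10 bytes PPPoA/AAL5 overhead
    cells   = int((payload + 47) / 48)  # ceil(payload / 48)
    printf "cells: %d, bytes on wire: %d\n", cells, cells * 53
}'
# -> cells: 32, bytes on wire: 1696
```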

The following graphs were also generated: ATM Overhead Calc graphs - Album on Imgur

So instead of 44, should I be using a figure of 10?

Okay, taking the max and calculating back to the gross rate (the one needed for sqm-scripts) I get the following (these would be upper limits, not necessarily achievable shaper settings, as especially on ingress the shaper needs to run some 10-15% below the true bottleneck to be effective):
on-the-wire packet size 1510 → ceil(1510/48) = 32 cells : effective size = 32 * 48 = 1536
Ingress:
17170 * (1536/(1500 - 20 - 20)) * (53/48) = 19945.42 Kbps, so this is darn close to the IngressSynch: 20208 (100 - 100 * 19945.42/20208 = 1.3% off)
Egress:
996 * (1536/(1500 - 20 - 20)) * (53/48) = 1157 Kbps, slightly more off the theoretical maximum of EgressSynch: 1282 (100 - 100 * 1157/1282 = 9.8% off)

So it looks like your shaper settings are decent (you might have some leeway to increase the ingress shaper to, say, 18000, as the ATM accounting alone will reduce the effective measurable bandwidth by ~9%)…
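The back-calculation above can be reproduced with a short awk sketch (the 17170/996 Kbps goodput figures are the speedtest maxima used in this thread):

```shell
# Gross ATM rate = goodput * (cell-padded size / TCP payload) * (53/48)
#   1536 = 32 cells * 48 bytes (a 1510-byte on-the-wire packet, padded)
#   1460 = 1500 - 20 (IPv4 header) - 20 (TCP header)
#   53/48 accounts for the 5-byte ATM cell header per 48-byte payload
awk 'BEGIN {
    printf "ingress gross: %.2f Kbps\n", 17170 * (1536 / 1460) * (53 / 48)
    printf "egress gross:  %.2f Kbps\n",   996 * (1536 / 1460) * (53 / 48)
}'
# -> ingress gross: 19945.42 Kbps
# -> egress gross:  1157.00 Kbps
```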

Pretty much so! Looking at the speedtests, your upload seems to be the biggest problem, and there is not much you can do short of switching to either Annex J or VDSL (both of which, I assume, your ISP does not offer at an acceptable price).

Best Regards

With that in mind, my /etc/config/sqm now looks like this:

root@turris:~# cat /etc/config/sqm

config queue 'eth1'
        option interface 'eth1'
        option ingress_ecn 'ECN'
        option egress_ecn 'ECN'
        option itarget 'auto'
        option etarget 'auto'
        option download '19945'
        option upload '1157'
        option debug_logging '0'
        option verbosity '5'
        option script 'piece_of_cake.qos'
        option qdisc_advanced '1'
        option squash_dscp '1'
        option squash_ingress '1'
        option qdisc_really_really_advanced '1'
        option iqdisc_opts 'nat dual-dsthost'
        option eqdisc_opts 'nat dual-srchost'
        option linklayer 'atm'
        option overhead '10'
        option qdisc 'cake'
        option linklayer_adaptation_mechanism 'cake'
        option enabled '1'

I’ll have a look and see how things perform over the next couple of days with this new SQM setup. Thanks for all your help so far!

One point that I’ve been wondering about… how does Qualcomm Fast Path differ from SQM?

Fast path: this makes packet processing fast by side-stepping the kernel’s normal forwarding path, if the fast path module deems it safe to do so -> effectively making your router capable of routing outside of its CPU weight class.

SQM: uses CPU to shape traffic to keep bufferbloat in the ingress and egress directions within acceptable bounds.

In theory both are orthogonal and it should be possible to combine them (even then, sqm’s high CPU cost for traffic shaping will most likely lead to transfer rates well below what “pure” fast path would offer, at least that is my hypothesis). In practice it seems that fast path steps over the entry points that sqm’s shapers need to do their thing, so currently fast path will not work with sqm on the same interface. There seem to be two major community builds for LEDE, gwlim’s and dissent1’s I believe; the former has fast path override sqm, the latter has sqm override fast path. I believe the second option is less surprising, but in any case configuring sqm & fast path (on the same interface?) will make you forfeit one of them.
