How to use the cake queue management system on the Turris Omnia

So DOCSIS networks use a shaper to make sure no user ever exceeds their contracted rate (except temporarily for PowerBoost-like systems, but that is part of the contract). Incidentally, this shaper only takes the full ethernet frame including the frame check sequence into account, see http://www.cablelabs.com/specification/docsis-3-0-mac-and-upper-layer-protocols-interface-specification/:
“C.2.2.7.2 Maximum Sustained Traffic Rate: This parameter is the rate parameter R of a token-bucket-based rate limit for packets. R is expressed in bits per second, and MUST take into account all MAC frame data PDU of the Service Flow from the byte following the MAC header HCS to the end of the CRC, including every PDU in the case of a Concatenated MAC Frame. This parameter is applied after Payload Header Suppression; it does not include the bytes suppressed for PHS. The number of bytes forwarded (in bytes) is limited during any time interval T by Max(T), as described in the expression: Max(T) = T * (R / 8) + B, (1) where the parameter B (in bytes) is the Maximum Traffic Burst Configuration Setting (refer to Annex C.2.2.7.3). NOTE: This parameter does not limit the instantaneous rate of the Service Flow. The specific algorithm for enforcing this parameter is not mandated here. Any implementation which satisfies the above equation is conformant. In particular, the granularity of enforcement and the minimum implemented value of this parameter are vendor specific. The CMTS SHOULD support a granularity of at most 100 kbps. The CM SHOULD support a granularity of at most 100 kbps. NOTE: If this parameter is omitted or set to zero, then there is no explicitly-enforced traffic rate maximum. This field specifies only a bound, not a guarantee that this rate is available.”
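
To make the quoted token-bucket limit a bit more concrete, here is a minimal sketch of expression (1); the rate, burst and interval values below are just made-up examples, not numbers from any real cable plant:

```python
# Sketch of the DOCSIS Maximum Sustained Traffic Rate token bucket, eq. (1):
#   Max(T) = T * (R / 8) + B
# R: Maximum Sustained Traffic Rate in bits/s, B: Maximum Traffic Burst in bytes.
# All numbers below are illustrative assumptions, not values from a real config.

def max_bytes(T_seconds, R_bits_per_s, B_bytes):
    """Upper bound on bytes the service flow may forward in any interval T."""
    return T_seconds * (R_bits_per_s / 8) + B_bytes

R = 100_000_000   # 100 Mbit/s contracted rate (example)
B = 3_000_000     # 3 MB maximum traffic burst (example; PowerBoost-style headroom)

for T in (0.1, 1.0, 10.0):
    limit = max_bytes(T, R, B)
    print(f"T = {T:5.1f} s -> at most {limit / 1e6:6.1f} MB, "
          f"i.e. an average of {limit * 8 / T / 1e6:7.1f} Mbit/s")

# Note how the burst B dominates for short intervals, which is exactly why the
# spec says this parameter does not limit the instantaneous rate of the flow.
```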

So in essence DOCSIS users only need to account for 18 bytes of ethernet overhead (14 bytes of ethernet header plus 4 bytes FCS) on top of each IP packet, in both the ingress and egress directions, under non-congested conditions.
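
As a rough back-of-the-envelope illustration of what counting "IP packet + 18 bytes" means in practice (the shaped rate and the packet sizes below are just examples):

```python
# The DOCSIS shaper counts the Ethernet frame without preamble or inter-frame
# gap: IP packet + 14 bytes Ethernet header + 4 bytes FCS = IP packet + 18 bytes.
# So for a given shaped rate the achievable IP-layer goodput depends on packet size.

OVERHEAD = 18  # bytes counted on top of each IP packet

def ip_goodput(shaped_rate_bps, ip_packet_bytes):
    """IP-layer throughput achievable through a shaper that counts IP + 18 bytes."""
    return shaped_rate_bps * ip_packet_bytes / (ip_packet_bytes + OVERHEAD)

R = 100_000_000  # example: a 100 Mbit/s shaper
for size in (1500, 576, 100, 40):
    print(f"{size:4d}-byte IP packets -> {ip_goodput(R, size) / 1e6:6.2f} Mbit/s goodput "
          f"({100 * size / (size + OVERHEAD):5.1f} % of the shaped rate)")
```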

BUT the bigger problems on DOCSIS networks are: a) this shaping approach is actually insufficient, as it does not really account for the actual DOCSIS wire-size of the packets (smaller packets have a higher DOCSIS cost per byte, and that is not reflected by the shaper), and b) end users typically have no precise information about the actually configured bandwidth of the shaper. On a positive note, many cable ISPs seem to provision higher bandwidths than one would expect from looking at the contracts. On a less positive note, under congestion all bets are off ;). That said, no shaper will work that well under congestion; it is just that the DOCSIS shaper, by not properly accounting for the on-the-wire size, has no real chance of fairly distributing the reduced available bandwidth under congestion (which mostly is a theoretical problem, good DOCSIS ISPs will split their nodes if congestion happens too often, but I digress).
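
To put a (very rough) number on point a): the per-packet DOCSIS framing cost used in the sketch below is purely an assumed placeholder (the real cost depends on MAC headers, upstream grants/minislots, FEC and so on), but it shows why small packets cost the segment more than the frame-counting shaper accounts for:

```python
# Point a) in numbers: a frame-counting shaper lets small packets through at a
# much higher packet rate, but every packet also carries DOCSIS framing that the
# shaper never sees. The per-packet cost below is a made-up placeholder.

DOCSIS_PER_PACKET = 10   # bytes of extra DOCSIS framing per packet, assumption only
R = 100_000_000          # example: 100 Mbit/s shaped rate

for frame in (64, 1518):                        # counted Ethernet frame sizes
    pps = R / 8 / frame                         # packet rate the shaper allows
    unaccounted = pps * DOCSIS_PER_PACKET * 8   # wire bits/s the shaper ignores
    print(f"{frame:4d}-byte frames: {pps:9.0f} pkt/s allowed, "
          f"~{unaccounted / 1e6:5.1f} Mbit/s of DOCSIS framing not accounted for")

# With 64-byte frames the un-accounted framing is ~24x larger than with
# 1518-byte frames, so a flood of small packets eats far more wire time on the
# shared segment than the shaper "thinks".
```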

Back to your observation: on systems with a precisely defined (guaranteed) bandwidth, like VDSL2, it often is possible to reduce or eradicate bufferbloat with the shaper set to 100% of the link rate, IF all overhead is accounted for correctly; with DOCSIS this has not yet been observed. I note that your 28 is actually too high (the real per-packet overhead is 18), and setting the overhead too high effectively lowers the shaper bandwidth, which will almost always help against bufferbloat…
In your case, if you see a step change in the bufferbloat rating when going from overhead 18 to overhead 17, then I would argue that you're on to something :wink:
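
For reference, here is a sketch of how I would turn that into starting shaper settings for cake. The measured rates, the 90% starting margin and the interface name are placeholders you would replace with your own values; the printed line uses cake's bandwidth/overhead/mpu keywords (cake also has a docsis preset that should be equivalent to overhead 18 mpu 64):

```python
# Sketch: derive starting-point cake shaper settings for a DOCSIS link.
# Measured rates, the 90 % margin and the interface name are placeholders;
# overhead 18 / mpu 64 is the DOCSIS per-packet accounting discussed above.

def cake_settings(measured_down_mbps, measured_up_mbps, margin=0.90, wan_if="eth1"):
    down = measured_down_mbps * margin
    up = measured_up_mbps * margin
    print(f"egress:  tc qdisc replace dev {wan_if} root cake "
          f"bandwidth {up:.1f}Mbit overhead 18 mpu 64")
    print(f"ingress (via the IFB device that sqm-scripts sets up): "
          f"shape to {down:.1f}Mbit with the same overhead 18 mpu 64")
    print("then nudge the rates up/down in small steps while watching the "
          "bufferbloat rating")

# Example with made-up speedtest results for a '100/10' DOCSIS contract:
cake_settings(measured_down_mbps=107.0, measured_up_mbps=10.5)
```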

Hope that helps.