Gigabit speed on WAN - PPPOE

Is there any Omnia max-out/hardware torture post that would push the hardware to its limit in a real-world use-case?

We’re starting to see multi-gigabit (1/2/5 Gbps) internet speed offerings. I’d like to see if the hardware can handle routing/NAT between the 2.5G SFP fiber and the Wi-Fi 6 modules. Can it be maxed out out of the box, or are some software changes needed?


Nobody forces you to order these :wink: and in all likelihood, unless you routinely have to transfer large amounts of data in a time-critical fashion, you will barely feel an improvement over 1 or maybe even over 0.5 Gbps… interactive use-cases are typically latency-limited, not throughput-limited…

Perhaps it would be good to see some actual, measured limits of the hardware. There are just too many thoughts and feelings when it comes to speeds of 1 gig and over.


Well, that is what I did a few years ago (using TOS4, IIRC): without PPPoE, but with NAT and firewalling, my Turris Omnia topped out at around 500/500 Mbps with SQM traffic shaping under bidirectionally saturating traffic; tuning the packet steering script, I managed to push this up to 550/550 Mbps. While I have not repeated these tests, I assume that 1 Gbps is still out of reach for the workload that I care about (things tend to slow down over time, not magically increase in efficiency by a factor of two). So, again limited to my use-cases, I am confident that an Omnia will not reach 1 Gbps, let alone 2.5 Gbps, and I base that on cold hard numbers more than thoughts and feelings. But this certainly does not cover other use-cases, and I assume that for lots of folks (not using SQM or something similarly CPU-expensive) 1 Gbps should be well within reach.

Sure, real numbers always help, so go for it :wink: (I am on a 100/40 Mbps link, so I will not be able to offer useful testing data, short of reporting that SQM/PPPoE/NAT/firewalling/Wi-Fi/LAN with Pakon works at “link speed” for me, which will not be useful to extrapolate to 1 Gbps and beyond :wink:).
Mind you, synthetic tests are not that easy if you want to test PPPoE encapsulation as well, as you need to set up your own PPPoE “remote end” somewhere.
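One way to get such a lab “remote end” is rp-pppoe’s `pppoe-server` on a spare Linux box — a rough sketch only; the interface name and addresses below are my own assumptions (RFC 5737 documentation ranges), pick ones matching your lab:

```shell
# Hypothetical lab PPPoE concentrator using rp-pppoe's pppoe-server.
# eth1 is the NIC cabled to the router's WAN port; substitute your own
# interface and address range.
#   -I  listening interface
#   -L  local (server-side) IP
#   -R  first IP handed to a client
#   -N  maximum concurrent sessions
pppoe-server -I eth1 -L -R -N 1
```

With that running, the router’s WAN can be configured for PPPoE against this box and you can blast traffic through the tunnel without involving your ISP.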


Depends on the software version.
TOS 5.x - yes
TOS 6.x - Not for me. Maxes out at about 860 Mbit/s (PPPoE, VLAN).

The thing is that while the Omnia does have two gigabit links from the switch chip to the CPU, the DSA architecture is broken in this regard and there is currently no way (that I know of) to get those two links aggregated. If this has been fixed already, I welcome any and all pointers on how to properly configure the link aggregation.

But even if we had that, we couldn’t achieve more than a gigabit per second (or the numbers @moeller0 mentioned), as a single connection would only be switched across one of the two switch–CPU links and not both.

The only way to route more than a gigabit via the Omnia is to get the 2.5 Gbps metallic SFP and hook it up to a multi-gigabit switch. So that’s what I did for the purpose of this test (and because I also have a multi-gig WAN connection with PPPoE, this fits the OP’s question well).

The set-up

  • Omnia (the original edition with Wi-Fi and 1 GB RAM), TurrisOS 6.2.0 (I should probably upgrade this testing beast someday)
  • WAN, PPPoE with VLAN tagging (VLAN 848), multi-gigabit port on the ONT, service provisioned at ~2100/1100 Mbps, capable of achieving ~2025/1080 Mbps in speedtests; ONT connected to the 2.5 Gbps port of my multi-gig switch at home
  • the 2.5 Gbps module connected to Omnia’s SFP cage; SFP cage configured as eth2; connected to the home switch, using trunk port configured for VLAN 848 and VLAN 9 (test LAN)
  • a home server connected using 10 Gbps (SFP+, fibre) to the home switch; VLAN 9 was used for this test, no other device besides this server and Omnia was on that network
  • a multi-gig switch (Juniper EX2300-24MP)

Without SQM

When I run the speedtest binary on the Omnia against a speedtest server that’s apparently using BBR, this is what I get:

root@turris-omnia:~# ./speedtest -s 30620

   Speedtest by Ookla

      Server: O2 Czechia, a.s. - Prague (id: 30620)
Idle Latency:     8.41 ms   (jitter: 0.03ms, low: 8.35ms, high: 8.46ms)
    Download:  1538.16 Mbps (data used: 837.5 MB)
                 32.87 ms   (jitter: 34.96ms, low: 7.88ms, high: 466.39ms)
      Upload:  1084.60 Mbps (data used: 932.0 MB)
                 19.70 ms   (jitter: 5.57ms, low: 8.59ms, high: 263.95ms)
 Packet Loss:     0.0%

When I run the same from the home server, the speeds are even a bit better:

root@home-server:~# speedtest --ip= -s 30620 2>/dev/null

   Speedtest by Ookla

      Server: O2 Czechia, a.s. - Prague (id: 30620)
Idle Latency:     8.69 ms   (jitter: 0.04ms, low: 8.63ms, high: 8.73ms)
    Download:  1749.59 Mbps (data used: 2.0 GB)
                130.88 ms   (jitter: 57.34ms, low: 7.31ms, high: 954.17ms)
      Upload:  1081.51 Mbps (data used: 968.8 MB)
                 14.73 ms   (jitter: 0.87ms, low: 8.27ms, high: 25.19ms)

With SQM/QoS (Cake)

I attached Cake to the PPPoE interface and enabled piece_of_cake with the default settings and bandwidth limits set to 2000 Mbit down and 1050 Mbit up, slightly below the maximum. The rest of Cake’s settings were kept at their defaults.

root@turris-omnia:~# tc qdisc show dev ifb4pppoe-wan
qdisc cake 800a: root refcnt 2 bandwidth 2Gbit besteffort triple-isolate nonat wash no-ack-filter split-gso rtt 100ms raw overhead 0
root@turris-omnia:~# tc qdisc show dev pppoe-wan
qdisc cake 8009: root refcnt 2 bandwidth 1050Mbit besteffort triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw overhead 0
qdisc ingress ffff: parent ffff:fff1 ----------------
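For reference, the qdiscs above roughly correspond to an SQM configuration like this — a sketch assuming the sqm-scripts/luci-app-sqm packaging; the queue section index and exact option values on your box may differ:

```shell
# Sketch of the SQM settings used for this test.
# sqm-scripts expects rates in kbit/s: 2000000/1050000 = 2000/1050 Mbit/s.
uci set sqm.@queue[0].interface='pppoe-wan'
uci set sqm.@queue[0].qdisc='cake'
uci set sqm.@queue[0].script='piece_of_cake.qos'
uci set sqm.@queue[0].download='2000000'   # ingress shaping limit
uci set sqm.@queue[0].upload='1050000'     # egress shaping limit
uci set sqm.@queue[0].enabled='1'
uci commit sqm
/etc/init.d/sqm restart
```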

speedtest ran from the router itself:

root@turris-omnia:~# ./speedtest -s 30620

   Speedtest by Ookla

      Server: O2 Czechia, a.s. - Prague (id: 30620)
Idle Latency:     8.46 ms   (jitter: 0.03ms, low: 8.38ms, high: 8.50ms)
    Download:   716.04 Mbps (data used: 599.8 MB)
                  8.44 ms   (jitter: 0.44ms, low: 7.81ms, high: 13.65ms)
      Upload:   912.13 Mbps (data used: 587.4 MB)
                 76.58 ms   (jitter: 39.25ms, low: 8.11ms, high: 1110.45ms)

and from the server:

root@home-server:~# speedtest --ip= -s 30620 2>/dev/null

   Speedtest by Ookla

      Server: O2 Czechia, a.s. - Prague (id: 30620)
Idle Latency:     8.66 ms   (jitter: 0.16ms, low: 8.42ms, high: 8.86ms)
    Download:   727.20 Mbps (data used: 799.2 MB)
                  9.03 ms   (jitter: 0.46ms, low: 8.09ms, high: 13.08ms)
      Upload:   979.94 Mbps (data used: 515.7 MB)
                 11.22 ms   (jitter: 1.70ms, low: 8.15ms, high: 20.36ms)

In the downlink direction, you can’t even max out a gigabit with Cake. In the uplink direction, you can, and it seems to be quite okay. One of the cores gets loaded up to 100 %, the other one sits at around 50 %.


Considering how old the Omnia is and what I had to do to get this all working, the result exceeds my expectations. It is certainly capable of switching and routing multi-gig, although not through the integrated LAN switch (there you’ll be throttled at 1 Gbps). It does not max out the 2.5 Gbps SFP, and because I use it to transfer data between the WAN and LAN VLANs, the total combined throughput [up+down] will never exceed 2.5 Gbps.

Please note that any extra process running on the Omnia will significantly degrade its routing capabilities at those speeds (and with PPPoE), so forget about containers, Wi-Fi controllers, NAS, etc. There’s not enough power for that. Also, servers using CUBIC instead of BBR might be more sensitive to any latency added by the routing processes under full load.

(This particular WAN connection does not support IPv6; I will certainly add IPv6 results later, as these might get a bit better thanks to the missing NAT.)

P.S.: If you have considered using the Omnia SFP module with a RouterBOARD RB5009, forget about it NOW. The Omnia SFP module does not properly report the 2.5 Gbps speed to that router (it reports 10 Gbps instead), and your buffers will be heavily bloated, resulting in a severely degraded connection. Trust me, I’ve been there, and eventually ordered a proper MikroTik 1/2.5/5/10 Gbps SFP+ module, which works well.


And because I am a curious person, I also tested the single-connection limits of this set-up. I picked an Ubuntu mirror that I know is capable of saturating the full link when I use my regular router, but on the Omnia the best I could get from it was a 122 MiB/s average speed:

root@home-server:~# wget -O /dev/null --bind-address=
--2023-04-11 23:51:36--
Resolving ( 2001:1488:ffff::63,
Connecting to (|2001:1488:ffff::63|:443... failed: Invalid argument.
Connecting to (||:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4071903232 (3.8G) [application/octet-stream]
Saving to: ‘/dev/null’

/dev/null                                          100%[==================================>]   3.79G   121MB/s    in 32s

2023-04-11 23:52:08 (122 MB/s) - ‘/dev/null’ saved [4071903232/4071903232]

The results are pretty inconsistent, though: more often the average single-connection speed drops to 90–100 MiB/s, and the speed during the transfer varies heavily between 80 and 140 MiB/s. Speedtest apparently benefits from launching four connections in parallel and only provides a limited view into what the router is capable of.

For comparison, this is what the download looks like over the same connection with PPPoE handled by my regular home router (amd64/Ubuntu-based with an Intel i3-13100). Consistent all the way through.

root@home-server:~# wget -O /dev/null --bind-address=
--2023-04-12 00:06:10--
Resolving ( 2001:1488:ffff::63,
Connecting to (|2001:1488:ffff::63|:443... failed: Invalid argument.
Connecting to (||:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4071903232 (3.8G) [application/octet-stream]
Saving to: ‘/dev/null’

/dev/null                                          100%[===================================>]   3.79G   241MB/s    in 16s

2023-04-12 00:06:26 (240 MB/s) - ‘/dev/null’ saved [4071903232/4071903232]

How would you explain that difference in consistency? I wouldn’t think the Omnia CPU is the limit here.

There are some Turris-related processes running on my Omnia (reForis and similar) and the CPU hits 100 % when downloading, so my understanding is that TCP packets are sometimes delayed, which causes TCP window rescaling and download speed changes.

Speedtest and iPerf executed with multiple connections against a non-BBR endpoint achieve ~1.1 Gbps, which is similar to those 122 MB/s when downloading the ISO.


Well, that shows what the router is capable of; expecting single TCP flows to reliably saturate a link is quite optimistic… and the longer the RTT, the more optimistic…

The bigger issue, IMHO, is that even with a handful of flows the router is maxed out with MTU ~1500 packets; repeat the test with the MSS clamped to 150 bytes and weep. ;)


Don’t say it twice… :joy:

root@turris-omnia:~# iptables -I FORWARD -o pppoe-wan -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 150
root@turris-omnia:~# iptables -I FORWARD -i pppoe-wan -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 150
root@home-server:~# speedtest --ip= -s 30620 2>/dev/null

   Speedtest by Ookla

      Server: O2 Czechia, a.s. - Prague (id: 30620)
Idle Latency:     8.86 ms   (jitter: 0.16ms, low: 8.71ms, high: 9.06ms)
    Download:   175.49 Mbps (data used: 163.9 MB)
                 55.10 ms   (jitter: 43.66ms, low: 8.31ms, high: 287.51ms)
      Upload:   208.73 Mbps (data used: 375.1 MB)
                  9.02 ms   (jitter: 0.75ms, low: 7.64ms, high: 15.54ms)
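(Since TOS 6 defaults to firewall4/nftables, the same clamp can also be expressed in nft. A sketch using a dedicated table so the stock fw4 ruleset stays untouched — the table and chain names here are my own invention:)

```shell
# Hedged nftables equivalent of the iptables TCPMSS clamp above.
# A separate table with its own forward hook leaves fw4's rules alone.
nft add table inet mssclamp
nft add chain inet mssclamp forward \
    '{ type filter hook forward priority mangle; policy accept; }'
# Rewrite the MSS option on SYN (but not RST) packets in both directions.
nft add rule inet mssclamp forward oifname "pppoe-wan" \
    'tcp flags & (syn|rst) == syn' tcp option maxseg size set 150
nft add rule inet mssclamp forward iifname "pppoe-wan" \
    'tcp flags & (syn|rst) == syn' tcp option maxseg size set 150
```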

The i3-13100 does handle it better, although apparently PPPoE is one of the major limits here (when downloading, one CPU core hits 100 %; when uploading, the load is spread across all four physical cores and no core goes above 25 %):

root@home-server:~# speedtest --ip= -s 30620 2>/dev/null

   Speedtest by Ookla

      Server: O2 Czechia, a.s. - Prague (id: 30620)
Idle Latency:     8.74 ms   (jitter: 0.12ms, low: 8.58ms, high: 8.84ms)
    Download:   526.76 Mbps (data used: 535.4 MB)
                 97.08 ms   (jitter: 58.90ms, low: 7.65ms, high: 562.13ms)
      Upload:   696.81 Mbps (data used: 1.0 GB)
                 13.22 ms   (jitter: 2.16ms, low: 7.27ms, high: 58.58ms)

At MSS 150 the measurable goodput over IPv4 is:
100 * (150 / (150+20+20+8+4+38)) = 62.5% of the gross rate, while at MTU 1500 → MSS 1452 it is
100 * (1452 / (1452+20+20+8+4+38)) = 94.16% of the gross rate…
so the maximum throughput @ MSS 150 is:

2.5 Gbps ethernet: 2.5 * 0.625 -> 1562 Mbps
1.0 Gbps ethernet: 1.0 * 0.625 ->  625 Mbps


Omnia:
100 * 175.49 / 1562 = 11.23%
100 * 208.73 / 1562 = 13.36%

i3-13100:
100 * 526.76 / 1562 = 33.72%
100 * 696.81 / 1562 = 44.61%
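The percentages above can be reproduced with a tiny helper; the per-packet overhead assumed here (TCP 20 + IPv4 20 + PPPoE 8 + 802.1Q 4 + ethernet framing 38 bytes) is exactly the one used in the formulas:

```shell
# Goodput, as a percentage of the gross link rate, for a given TCP MSS
# over PPPoE with a VLAN tag (IPv4).
goodput_pct() {
    awk -v mss="$1" \
        'BEGIN { printf "%.2f\n", 100 * mss / (mss + 20 + 20 + 8 + 4 + 38) }'
}

goodput_pct 150     # MSS 150  -> 62.50
goodput_pct 1452    # MSS 1452 -> 94.16
```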

But this is a tough test, since most routing work is per-packet, not per-byte, so MSS 150 results in an almost 10× higher packet load on the router…

And yes, PPPoE is not free, but I guess that would be true for any other kind of tunneling as well. This is where DOCSIS ISPs actually do well, as most use DHCP to assign addresses and do not enforce another end-user-visible tunnel, giving the CPE more breathing room.

Now, there is a bit of irony here, as PPPoE was (partly) invented to allow ISPs to continue time-based internet charging as in the analog modem days (which often used PPP), even though most fixed internet access links are now either flat-rate or charged by volume… However, it is not the time-based charging but the generic tunneling that made PPPoE survive, IMHO. (In Germany the incumbent is also forced by the regulator to use PPPoE for the bitstream access contracts with its resellers, but I am sure that could be changed.)

Interesting thought, I would love to see results; however, due to IPv6’s 20-byte larger per-packet header, the NAT saving will first need to overcome a ~100*20/1500 = 1.3% throughput deficit.

Thanks for the additional information on why PPPoE is still used.


PPPoE also allows for easier customer identification and session management and reduces the complexity of the last mile. With PPPoE, you manage one identifier and one session, and maintain one helper to identify customers (assuming you identify customers per port; if you go for username/password validation, it’s even easier, as the last-mile devices can be quite dumb). Once the session is up, you can route a public IPv4 address to it (you can even take a block of 256 addresses and allocate them all to customers spread across the region, as long as their PPPoE sessions all end up on one BRAS), route a whole block of IPv4 addresses to it, or delegate IPv6 prefixes with a clear next-hop definition (the tunnel endpoint)…

With IPoE, i.e. DHCPv4/v6, you have to have smart last-mile devices (identifying customers properly on both protocols by adding the correct DHCP options to the requests), you have to segment your IP allocations better, and sometimes you have to perform nasty ARP hacks like the cable operators do… but the benefits are clear: MTU 1500, no tunnelling (= packets can be processed by all CPU cores), and better suitability for speeds above one gigabit per second. Still, there are ISPs (like Bell Canada) that keep using PPPoE even for 8 Gbps plans.


Very cool, thanks! It would be interesting to see the numbers with wifi6 modules.


I sadly don’t have the Omnia Wi-Fi 6 module, nor do I have any good Wi-Fi 6 client besides my phone or tablet. :smiling_face_with_tear:

BTW, I just checked the same fiber connection (1 Gb fiber) with a different router, in this case an old Synology AC1900.
Easily over 900 down… same PPPoE protocol. The MOX Classic: 620 down.

So, apparently something is not right with the MOX Classic, speed-wise.

Edit: the configuration is pretty basic; Adblock & OpenVPN are the only add-ons.
And the AC1900 has a Broadcom BCM58622 @ 1 GHz chipset, 256 MB RAM and 4 GB flash, so no fancy specs.


This is a dual-core A9, while the MOX sports a multi-core A53… and the A9 really is a better core than the A53 for general-purpose processing… Arm’s power-efficient line really is great at being power-efficient, but the claim that the A53 meets or even beats the previous performance line’s A9 really only holds for carefully selected benchmarks…

It really is that simple: in-order designs like the A53 simply are not as robust as out-of-order designs like the A9, no ifs, no buts… and since Arm’s efficiency line has switched to out-of-order as well, I think that case is settled…

However, the issue here can also be PPPoE acceleration working on one SoC but not the other, rather than general differences between ARM CPU lines :wink:


Maybe the solution is to fabricate a new MOX part with a faster main CPU? I mean, it is a modular thing, so that could solve this WAN speed hiccup?


That would IMHO be something like a MOX 2023 or MOX 2, and I am not sure Turris is considering this yet… after all, we are still waiting for the next Omnia, and the Omnia is a tad older than the MOX…

But, and that might be a stupid question from the n00b side here… is it not possible to just upgrade the CPU board part without becoming incompatible with the rest of the connected parts?

I mean, the advantage of a modular design could be that one can upgrade specific parts. Not sure if that was the basic concept, though.