Gigabit speed on WAN - PPPoE

Hi Everyone,

I could not find this information anywhere: is the Omnia (no WLAN) capable of gigabit speed on WAN over a PPPoE connection?

Thank you!

Gigabit Ethernet traffic (RJ45 jack) means ~945 Mbit/s in the real world. If you go with fibre, the SFP slot is capable of 2500 Mbit/s.
The maximum speed measured on a Turris Omnia with NAT turned on is 975 Mbit/s (for this you have to use fibre, see above). Source: True 1Gbit NAT WAN?


Well, AFAIK the HW can’t get over 2G combined between WAN and LAN, even if CPU speed weren’t limiting you. (And the router itself might theoretically consume 0.5G or more of that.)


Maximum throughput over gigabit Ethernet is an effective 1000 Mbit/s gross rate. How much of that actually delivers usable data really depends on one’s definition of “usable”:

1) gigabit ethernet gross rate:
1000 Mbps
2.a) ethernet payload (no VLAN):
1000 * ((1500) / (1500 + 38)) = 975.29 Mbps
2.b) ethernet payload (single VLAN):
1000 * ((1500) / (1500 + 38 + 4)) = 972.76 Mbps

I am pretty sure this is where the 975 number came from…

3.a) IPv4 (no VLAN):
1000 * ((1500 - 20) / (1500 + 38)) = 962.29 Mbps
3.b) IPv4 (single VLAN):
1000 * ((1500 - 20) / (1500 + 38 + 4)) = 959.79 Mbps
3.c) IPv6 (no VLAN):
1000 * ((1500 - 40) / (1500 + 38)) = 949.28 Mbps
3.d) IPv6 (single VLAN):
1000 * ((1500 - 40) / (1500 + 38 + 4)) = 946.82 Mbps

4.a) IPv4/TCP (no VLAN):
1000 * ((1500 - 20 - 20) / (1500 + 38)) = 949.28 Mbps
4.b) IPv4/TCP (single VLAN):
1000 * ((1500 - 20 - 20) / (1500 + 38 + 4)) = 946.82 Mbps

4.c) IPv6/TCP (no VLAN):
1000 * ((1500 - 40 - 20) / (1500 + 38)) = 936.28 Mbps
4.d) IPv6/TCP (single VLAN):
1000 * ((1500 - 40 - 20) / (1500 + 38 + 4)) = 933.85 Mbps

5.a) IPv4/TCP+TCP timestamps (no VLAN):
1000 * ((1500 - 20 - 20 - 12) / (1500 + 38)) = 941.48 Mbps
5.b) IPv4/TCP+TCP timestamps (single VLAN):
1000 * ((1500 - 20 - 20 - 12) / (1500 + 38 + 4)) = 939.04 Mbps

5.c) IPv6/TCP+TCP timestamps (no VLAN):
1000 * ((1500 - 40 - 20 - 12) / (1500 + 38)) = 928.48 Mbps
5.d) IPv6/TCP+TCP timestamps (single VLAN):
1000 * ((1500 - 40 - 20 - 12) / (1500 + 38 + 4)) = 926.07 Mbps
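All of these numbers come from the same formula: usable payload divided by payload plus per-packet overhead, scaled by the 1000 Mbps gross rate. As a sanity check, a small shell helper (using awk for the floating-point division; header sizes taken from the list above) reproduces a few of the values:

```shell
# eff L2_OVERHEAD HDR_BYTES
# prints 1000 * (1500 - HDR) / (1500 + L2) in Mbps, MTU fixed at 1500
eff() {
    awk -v mtu=1500 -v l2="$1" -v hdr="$2" \
        'BEGIN { printf "%.2f Mbps\n", 1000 * (mtu - hdr) / (mtu + l2) }'
}

eff 38 0    # 2.a) ethernet payload, no VLAN         -> 975.29 Mbps
eff 42 0    # 2.b) ethernet payload, single VLAN     -> 972.76 Mbps
eff 38 40   # 4.a) IPv4/TCP, no VLAN                 -> 949.28 Mbps
eff 42 72   # 5.d) IPv6/TCP+timestamps, single VLAN  -> 926.07 Mbps
```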

Additional encapsulation layers (like a VPN tunnel) can push the payload rate even lower…
Throughput tests typically measure around level 4) or 5) above, depending on whether your end systems are configured to use TCP timestamps (they typically also add a tiny HTTP/HTTPS header that I ignore here).

I think we have two ports to the switch (not sure whether DSA actually supports the second port yet), so 2×1 Gbps, but we can also access the internet via the two radios… So a 2.5 Gbps gross rate is not theoretically impossible.
EDIT: I agree that the omnia is a fine device for 1 Gbps, but 2.5 Gbps, while theoretically possible, probably requires a beefier device.


I have an Omnia on a 600 Mbit/s connection, and that actually runs at 600 Mb/s… the connection is DHCP.
I also have a MOX Classic that does PPPoE with 1 Gb/s fibre over WAN (VLAN), but there the maximum is around 600 Mb/s.

Not sure what the issue is with the MOX; turning the ‘packet steering’ option on gave about 75 Mb/s more…

Ah, radios, right, probably. Either way, I don’t expect Omnia is practically suitable for higher speeds than 1G.


I fully agree, and I should have noted the level of nitpicking in my post above :wink:
With SQM I think my omnia topped out at about 500/500 Mbps in demanding but synthetic tests, I guess without SQM it should be good for 1 Gbps.

A number of ISPs under-promise/over-provision their links. That is especially common for DOCSIS/cable ISPs. But that really only makes the information about the maximum rate incorrect… and it only works if:
a) the provisioned rate is below the maximum the link can actually carry,
b) which is often true for shared media, like cable.

IMHO the MOX arm a53 cores are simply not as punchy as the omnia’s arm a9 cores… I think the mox is clocked lower and a53 was marketed as delivering the performance of the previous generation performance cores (a9/a15) while belonging to the efficiency core class of arm. However, data seems to indicate that that claim is only true under some conditions* and does not hold generally…

*) The a53 cores have a more powerful SIMD engine and some encryption acceleration but generally sport a weaker in-order architecture than the performance cores that are out-of-order… there is a reason why no-one builds performance cores as in-order anymore…


Looks like the MOX CPU is indeed working hard when downloading at around 500 Mb/s.

stupid question maybe, but wasn’t the solution for the boot issues with the mox to lower some CPU settings?

Is there any Omnia max-out/hardware-torture post that would push the hardware to its limits in a real-world use case?

We’re starting to see multi-gigabit (1/2/5 Gbps) internet speed offerings. I’d like to see if the hardware can handle routing/NAT between the 2.5G SFP fibre and the Wi-Fi 6 modules. Can it be maxed out out of the box, or are some software changes needed?


Nobody forces you to order these :wink: and in all likelihood, unless you routinely have to transfer large amounts of data in a time-critical fashion, you will barely feel an improvement over 1 or maybe even over 0.5 Gbps… interactive use cases are typically latency-limited, not throughput-limited…

Perhaps it would be good to see some actual, measured limit of the hardware. There are just too many thoughts and feelings when it comes to speeds of 1 gig and over.


Well, that is what I did a few years ago (using TOS4, IIRC). Without PPPoE but with NAT and firewalling, my Turris Omnia topped out at around 500/500 Mbps with SQM traffic shaping and bidirectionally saturating traffic; by tuning the packet steering script I managed to push this up to 550/550 Mbps. While I have not repeated these tests, I assume that 1 Gbps is still out of reach for the workload that I care about (things tend to slow down over time, not magically increase in efficiency by a factor of two). So, again limited to my use cases, I am confident that an omnia will not reach 1 Gbps, let alone 2.5 Gbps, and I base that on cold hard numbers more than thoughts and feelings. But this certainly does not cover other use cases, and I assume that for lots of folks (not using SQM or something similarly CPU-expensive) 1 Gbps should be well within reach.

Sure, real numbers always help, so go for it :wink: (I am on a 100/40 Mbps link, so I will not be able to offer useful testing data, short of reporting that SQM/PPPoE/NAT/firewalling/Wi-Fi/LAN with PaKON works at “link speed” for me, which will not be useful to extrapolate to 1 Gbps and beyond :wink:).
Mind you, synthetic tests are not that easy if you want to test PPPoE encapsulation as well, as you need to set up your own PPPoE “remote end” somewhere.


Depends on the software version.
TOS 5.x - yes
TOS 6.x - Not for me. Maxes out at about 860 Mbit/s (PPPoE, VLAN).

The thing is that while the Omnia does have two gigabit links from the switch chip to the CPU, the DSA architecture is broken in this respect and there is currently no way (that I know of) to get those two links aggregated. If this has been fixed already, I welcome any and all contributions on how to properly configure the link aggregation.

But even if we had that, we couldn’t achieve more than a gigabit per second for a single connection (or rather the numbers @moeller0 mentioned), as a single connection would only be switched across one of the two switch–CPU links, not both.

The only way to route more than a gigabit via the Omnia is to get the 2.5 Gbps metallic SFP and hook it up to a multi-gigabit switch. So that’s what I did for the purpose of this test (and because I also have a multi-gig WAN connection with PPPoE, this fits the OP’s question well).

The set-up

  • Omnia (the original edition with Wi-Fi and 1 GB RAM), TurrisOS 6.2.0 (I should probably upgrade this testing beast someday)
  • WAN, PPPoE with VLAN tagging (VLAN 848), multi-gigabit port on the ONT, service provisioned at ~2100/1100 Mbps, capable of achieving ~2025/1080 Mbps in speedtests; ONT connected to the 2.5 Gbps port of my multi-gig switch at home
  • the 2.5 Gbps module connected to Omnia’s SFP cage; SFP cage configured as eth2; connected to the home switch, using trunk port configured for VLAN 848 and VLAN 9 (test LAN)
  • a home server connected using 10 Gbps (SFP+, fibre) to the home switch; VLAN 9 was used for this test, no other device besides this server and Omnia was on that network
  • a multi-gig switch (Juniper EX2300-24MP)

Without SQM

When I run the speedtest binary on the Omnia against a speedtest server that’s apparently using BBR, this is what I get:

root@turris-omnia:~# ./speedtest -s 30620

   Speedtest by Ookla

      Server: O2 Czechia, a.s. - Prague (id: 30620)
         ISP: 
Idle Latency:     8.41 ms   (jitter: 0.03ms, low: 8.35ms, high: 8.46ms)
    Download:  1538.16 Mbps (data used: 837.5 MB)
                 32.87 ms   (jitter: 34.96ms, low: 7.88ms, high: 466.39ms)
      Upload:  1084.60 Mbps (data used: 932.0 MB)
                 19.70 ms   (jitter: 5.57ms, low: 8.59ms, high: 263.95ms)
 Packet Loss:     0.0%

When I run the same from the home server, the speeds are even a bit better:

root@home-server:~# speedtest --ip=10.9.8.2 -s 30620 2>/dev/null

   Speedtest by Ookla

      Server: O2 Czechia, a.s. - Prague (id: 30620)
         ISP: 
Idle Latency:     8.69 ms   (jitter: 0.04ms, low: 8.63ms, high: 8.73ms)
    Download:  1749.59 Mbps (data used: 2.0 GB)
                130.88 ms   (jitter: 57.34ms, low: 7.31ms, high: 954.17ms)
      Upload:  1081.51 Mbps (data used: 968.8 MB)
                 14.73 ms   (jitter: 0.87ms, low: 8.27ms, high: 25.19ms)

With SQM/QoS (Cake)

I attached Cake to the PPPoE interface and enabled piece_of_cake with the default settings and bandwidth limits set to 2000 Mbit/s down and 1050 Mbit/s up, slightly below the maximum. The rest of Cake’s settings were kept at their defaults.
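For anyone wanting to replicate this without the SQM package, here is a rough manual equivalent (a sketch on my part, not the exact commands SQM runs): egress shaping goes directly on the PPPoE interface, while ingress shaping needs an IFB device, since qdiscs only shape outgoing traffic.

```shell
# Egress: shape what leaves via the PPPoE link
tc qdisc add dev pppoe-wan root cake bandwidth 1050Mbit besteffort

# Ingress: redirect incoming traffic to an IFB device and shape it there
ip link add ifb4pppoe-wan type ifb
ip link set ifb4pppoe-wan up
tc qdisc add dev pppoe-wan handle ffff: ingress
tc filter add dev pppoe-wan parent ffff: matchall \
    action mirred egress redirect dev ifb4pppoe-wan
tc qdisc add dev ifb4pppoe-wan root cake bandwidth 2Gbit besteffort
```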

root@turris-omnia:~# tc qdisc show dev ifb4pppoe-wan
qdisc cake 800a: root refcnt 2 bandwidth 2Gbit besteffort triple-isolate nonat wash no-ack-filter split-gso rtt 100ms raw overhead 0
root@turris-omnia:~# tc qdisc show dev pppoe-wan
qdisc cake 8009: root refcnt 2 bandwidth 1050Mbit besteffort triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw overhead 0
qdisc ingress ffff: parent ffff:fff1 ----------------

speedtest run from the router itself:

root@turris-omnia:~# ./speedtest -s 30620

   Speedtest by Ookla

      Server: O2 Czechia, a.s. - Prague (id: 30620)
         ISP: 
Idle Latency:     8.46 ms   (jitter: 0.03ms, low: 8.38ms, high: 8.50ms)
    Download:   716.04 Mbps (data used: 599.8 MB)
                  8.44 ms   (jitter: 0.44ms, low: 7.81ms, high: 13.65ms)
      Upload:   912.13 Mbps (data used: 587.4 MB)
                 76.58 ms   (jitter: 39.25ms, low: 8.11ms, high: 1110.45ms)

and from the server:

root@home-server:~# speedtest --ip=10.9.8.2 -s 30620 2>/dev/null

   Speedtest by Ookla

      Server: O2 Czechia, a.s. - Prague (id: 30620)
         ISP: 
Idle Latency:     8.66 ms   (jitter: 0.16ms, low: 8.42ms, high: 8.86ms)
    Download:   727.20 Mbps (data used: 799.2 MB)
                  9.03 ms   (jitter: 0.46ms, low: 8.09ms, high: 13.08ms)
      Upload:   979.94 Mbps (data used: 515.7 MB)
                 11.22 ms   (jitter: 1.70ms, low: 8.15ms, high: 20.36ms)

In the downlink direction, you can’t even max out gigabit with Cake. In the uplink direction, you can, and it seems to be quite okay-ish. One of the cores gets loaded up to 100 %, the other one is used at around 50 %.

Summary

Considering how old the Omnia is and what I had to do to get this all working, the result exceeds my expectations. It is certainly capable of switching and routing multi-gig, although not through the integrated LAN switch (there you’ll be throttled to 1 Gbps). It does not max out the 2.5 Gbps SFP, and because I use it to transfer data between the WAN and LAN VLANs, the total combined throughput [up+down] will never exceed 2.5 Gbps.

Please note that any extra process running on the Omnia will significantly degrade its routing capabilities at those speeds (and with PPPoE), so forget about containers, Wi-Fi controllers, NAS, etc. There’s not enough power for that. Also, servers using CUBIC instead of BBR might be more sensitive to any latency added by the routing processes under full load.

(This particular WAN connection does not support IPv6 yet; I will certainly add IPv6 results once it does, as these might get a bit better thanks to the missing NAT.)

P.S.: If you have considered using the Omnia SFP module with a RouterBOARD RB5009, forget about it NOW. The Omnia SFP module does not properly report the 2.5 Gbps speed to that router (it reports 10 Gbps instead of 2.5 Gbps) and your buffers will be heavily bloated, resulting in a severely degraded connection. Trust me, I’ve been there, and eventually ordered a proper MikroTik 1/2/5/10 Gbps SFP+ module, which works well.


And because I am a curious person, I have also tested the single-connection limits of this set-up. I picked an Ubuntu mirror that I know is capable of saturating the full link when I use my regular router, but on the Omnia the best I could get from it was a 122 MiB/s average speed:

root@home-server:~# wget -O /dev/null --bind-address=10.9.8.2 https://cz.releases.ubuntu.com/22.10/ubuntu-22.10-desktop-amd64.iso
--2023-04-11 23:51:36--  https://cz.releases.ubuntu.com/22.10/ubuntu-22.10-desktop-amd64.iso
Resolving cz.releases.ubuntu.com (cz.releases.ubuntu.com)... 2001:1488:ffff::63, 217.31.202.63
Connecting to cz.releases.ubuntu.com (cz.releases.ubuntu.com)|2001:1488:ffff::63|:443... failed: Invalid argument.
Connecting to cz.releases.ubuntu.com (cz.releases.ubuntu.com)|217.31.202.63|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4071903232 (3.8G) [application/octet-stream]
Saving to: ‘/dev/null’

/dev/null                                          100%[==================================>]   3.79G   121MB/s    in 32s

2023-04-11 23:52:08 (122 MB/s) - ‘/dev/null’ saved [4071903232/4071903232]

The results are pretty inconsistent, though: more often, the average single-connection speed drops to 90-100 MiB/s, and the speed during the transfer varies heavily between 80 and 140 MiB/s. Speedtest apparently benefits from launching four connections in parallel and only provides a limited view of what the router is capable of.
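If someone wants to reproduce the single-flow vs. multi-flow difference without depending on a public mirror, iperf3 makes it easy to see (the server name below is a placeholder; you need an iperf3 endpoint reachable through the router's WAN):

```shell
# Placeholder endpoint; substitute a real iperf3 server on the WAN side
SERVER=iperf.example.net

iperf3 -c "$SERVER" -t 30          # single TCP flow, upload direction
iperf3 -c "$SERVER" -t 30 -R       # single TCP flow, download direction
iperf3 -c "$SERVER" -t 30 -P 4 -R  # four parallel flows, like speedtest-style tools
```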

For comparison, this is how the download looks using the same connection with PPPoE handled by my regular home router (amd64/Ubuntu-based with an Intel i3-13100). Consistent all the way through.

root@home-server:~# wget -O /dev/null --bind-address=192.168.248.2 https://cz.releases.ubuntu.com/22.10/ubuntu-22.10-desktop-amd64.iso
--2023-04-12 00:06:10--  https://cz.releases.ubuntu.com/22.10/ubuntu-22.10-desktop-amd64.iso
Resolving cz.releases.ubuntu.com (cz.releases.ubuntu.com)... 2001:1488:ffff::63, 217.31.202.63
Connecting to cz.releases.ubuntu.com (cz.releases.ubuntu.com)|2001:1488:ffff::63|:443... failed: Invalid argument.
Connecting to cz.releases.ubuntu.com (cz.releases.ubuntu.com)|217.31.202.63|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4071903232 (3.8G) [application/octet-stream]
Saving to: ‘/dev/null’

/dev/null                                          100%[===================================>]   3.79G   241MB/s    in 16s

2023-04-12 00:06:26 (240 MB/s) - ‘/dev/null’ saved [4071903232/4071903232]

How would you explain that difference in consistency? I wouldn’t think the Omnia CPU is the limit here.

There are some Turris-related processes running on my Omnia (reForis and similar) and the CPU hits 100 % when downloading, so my understanding is that the TCP packets are sometimes delayed, which causes TCP window rescaling and download speed changes.

Speedtest and iperf executed with multiple connections against a non-BBR endpoint achieve ~1.1 Gbps, which is similar to those 122 MB/s when downloading the ISO.


Well, that shows what the router is capable of; expecting single TCP flows to reliably saturate a link is quite optimistic… and the longer the RTT, the more optimistic…

The bigger issue, IMHO, is that even with a handful of flows the router is maxed out with MTU ~1500 packets; repeat the test with the MSS clamped to 150 bytes and weep. ;)


Don’t say it twice… :joy:

root@turris-omnia:~# iptables -I FORWARD -o pppoe-wan -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 150
root@turris-omnia:~# iptables -I FORWARD -i pppoe-wan -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 150
root@home-server:~# speedtest --ip=10.9.8.2 -s 30620 2>/dev/null

   Speedtest by Ookla

      Server: O2 Czechia, a.s. - Prague (id: 30620)
         ISP: 
Idle Latency:     8.86 ms   (jitter: 0.16ms, low: 8.71ms, high: 9.06ms)
    Download:   175.49 Mbps (data used: 163.9 MB)
                 55.10 ms   (jitter: 43.66ms, low: 8.31ms, high: 287.51ms)
      Upload:   208.73 Mbps (data used: 375.1 MB)
                  9.02 ms   (jitter: 0.75ms, low: 7.64ms, high: 15.54ms)

The i3-13100 does handle it better, although apparently PPPoE is one of the major limits here (when downloading, one CPU core hits 100 %; when uploading, the load is spread across all four physical cores and no core goes above 25 %):

root@home-server:~# speedtest --ip=192.168.248.2 -s 30620 2>/dev/null

   Speedtest by Ookla

      Server: O2 Czechia, a.s. - Prague (id: 30620)
         ISP:
Idle Latency:     8.74 ms   (jitter: 0.12ms, low: 8.58ms, high: 8.84ms)
    Download:   526.76 Mbps (data used: 535.4 MB)
                 97.08 ms   (jitter: 58.90ms, low: 7.65ms, high: 562.13ms)
      Upload:   696.81 Mbps (data used: 1.0 GB)
                 13.22 ms   (jitter: 2.16ms, low: 7.27ms, high: 58.58ms)