WiFi 6 (ax) adapter

probably boils down to

```
mvebu-pcie soc:pcie: PCI host bridge to bus 0000:00
00:01.0 PCI bridge: Marvell Technology Group Ltd. Device 6820 (rev 04)
00:02.0 PCI bridge: Marvell Technology Group Ltd. Device 6820 (rev 04)
00:03.0 PCI bridge: Marvell Technology Group Ltd. Device 6820 (rev 04)
```

What matters is the bus version (1.1 | 2.0 | 3.0 | 4.0 | 5.0), the number of lanes (x1 | x4 | x8 | x16), and the line coding (overhead).


Theoretical gross bandwidth; the practical net bandwidth is lower, since the line coding of the transfer protocol takes its bite.

| PCIe version | Transfer rate per lane | Coding (overhead) | x1 | x4 | x8 | x16 |
|---|---|---|---|---|---|---|
| 1.x | 2.5 GT/s | 8b/10b (20%) | 250 MB/s | 1 GB/s | 2 GB/s | 4 GB/s |
| 2.x | 5 GT/s | 8b/10b (20%) | 500 MB/s | 2 GB/s | 4 GB/s | 8 GB/s |
| 3.x | 8 GT/s | 128b/130b (<2%) | 0.985 GB/s | 3.938 GB/s | 7.877 GB/s | 15.754 GB/s |
| 4.0 | 16 GT/s | 128b/130b (<2%) | 1.969 GB/s | 7.877 GB/s | 15.754 GB/s | 31.508 GB/s |
| 5.0 | 32 GT/s | 128b/130b (<2%) | 3.938 GB/s | 15.754 GB/s | 31.508 GB/s | 63.015 GB/s |

(The raw bit rate per lane in Gbit/s equals the GT/s figure; the 10 and 20 Gbit/s figures sometimes quoted for 3.x and 4.0 are the 8b/10b-equivalent rates.)
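As a sanity check of the numbers above, here is a minimal Python sketch that derives the net per-link bandwidth from the transfer rate and the coding overhead; the rates and coding ratios are taken straight from the table, the rest is plain arithmetic.

```python
# Net PCIe bandwidth from raw transfer rate, line coding, and lane count.
# Values taken from the table above; this only reproduces the arithmetic.

GENERATIONS = {
    # version: (transfer rate in GT/s, payload bits, total bits of the coding)
    "1.x": (2.5, 8, 10),     # 8b/10b -> 20% overhead
    "2.x": (5.0, 8, 10),     # 8b/10b -> 20% overhead
    "3.x": (8.0, 128, 130),  # 128b/130b -> <2% overhead
    "4.0": (16.0, 128, 130),
    "5.0": (32.0, 128, 130),
}

def net_bandwidth_gbyte_s(version: str, lanes: int) -> float:
    """Net bandwidth in GB/s for a given PCIe version and lane count."""
    gt_s, payload, total = GENERATIONS[version]
    net_gbit_s = gt_s * payload / total  # strip the coding overhead
    return net_gbit_s / 8 * lanes        # 8 bits per byte, scaled by lanes

for version in GENERATIONS:
    row = ", ".join(f"x{n}: {net_bandwidth_gbyte_s(version, n):.3f} GB/s"
                    for n in (1, 4, 8, 16))
    print(f"PCIe {version}: {row}")
```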

Not sure, but it seems the MV6820 provides PCIe 3 in its PHY datapath width mode. The number of lanes is unclear; probably not x16, since that usually applies to GPUs.
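One way to settle this on the box itself: the Linux kernel exposes the negotiated and maximum link speed and width per PCI device in sysfs. A minimal sketch; the device addresses are taken from the bridge listing above, and the attributes may not be exposed on every kernel build.

```python
from pathlib import Path

# Read negotiated and maximum PCIe link parameters from sysfs.
# Device addresses taken from the host bridge listing above.
for dev in ("0000:00:01.0", "0000:00:02.0", "0000:00:03.0"):
    base = Path("/sys/bus/pci/devices") / dev
    try:
        speed = (base / "current_link_speed").read_text().strip()
        width = (base / "current_link_width").read_text().strip()
        max_speed = (base / "max_link_speed").read_text().strip()
        max_width = (base / "max_link_width").read_text().strip()
    except FileNotFoundError:
        print(f"{dev}: link attributes not exposed")
        continue
    print(f"{dev}: running {speed} x{width} (max {max_speed} x{max_width})")
```

The same information also shows up in `lspci -vv` under LnkCap/LnkSta.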


The next bottleneck is then the 1 GbE Ethernet ports: upstream or downstream, that is where the bandwidth gets throttled.
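To put that in numbers, a rough comparison of the links in the chain. Both figures here are assumptions, not measurements: the WiFi 6 PHY rate assumes a 2x2 client at 80 MHz (1201 Mbit/s), and the PCIe figure assumes a 3.0 x1 link per the guess above.

```python
# Rough bottleneck comparison along the path, all in Gbit/s.
# WiFi 6 PHY rate assumes a 2x2 client at 80 MHz -- illustrative only.
# PCIe figure assumes a 3.0 x1 link, per the guess above -- not confirmed.
links = {
    "WiFi 6 PHY (2x2, 80 MHz)": 1.201,
    "PCIe 3.0 x1 (net)": 8 * 128 / 130,  # ~7.877 Gbit/s
    "1 GbE port": 1.0,
}

for name, gbit in sorted(links.items(), key=lambda kv: kv[1]):
    print(f"{name}: {gbit:.3f} Gbit/s")

print("Bottleneck:", min(links, key=links.get))
```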
