Sorry for the delay; yesterday the analyzer was busy with other tasks.
Sorry also for testing with a method that differs from the one speedtest.net uses (it tests uplink and downlink separately, thus doing its best to measure the actual link speed rather than router throughput). I couldn't find a way to make the analyzer properly establish a TCP session (it just sends mid-session TCP packets), so I had to set it to UDP mode to allow the connection tracker to handle the connections. But like speedtest.net, I was sending large packets (1492 bytes) to stay close to the typical TCP case.
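For anyone without a hardware analyzer who wants to reproduce a similar load in software, an iperf3 run along these lines should come close (this is my own sketch, not what was actually used; the host address is a placeholder, and `-l 1464` is chosen so that 1464 bytes of UDP payload plus 8 bytes of UDP header plus 20 bytes of IP header give the same 1492-byte IP packets):

```shell
# receiver side (run on the host behind the other router)
iperf3 -s

# sender side: UDP, ~1492-byte IP packets, both directions at once
# (--bidir requires iperf3 >= 3.7)
iperf3 -c <receiver-address> -u -b 1G -l 1464 --bidir
```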
The arrangement was two hAP ac² units interconnected via their ether3 interfaces. One acted as the PPPoE server, routing between the PPPoE interface running on ether3 and the analyzer connected to ether4; the other acted as the PPPoE client, routing between the PPPoE interface running on ether3 and the analyzer connected to ether4, and doing src-nat on the PPPoE interface (i.e. imitating the typical home setup).
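For reference, a minimal sketch of such a setup in RouterOS terms (pool, profile, and credential names here are illustrative, not the actual test configuration):

```shell
# --- PPPoE server box ---
/ip pool add name=pppoe-pool ranges=10.99.0.2-10.99.0.100
/ppp profile add name=pppoe-srv local-address=10.99.0.1 remote-address=pppoe-pool
/ppp secret add name=test password=test profile=pppoe-srv service=pppoe
/interface pppoe-server server add service-name=test interface=ether3 \
    default-profile=pppoe-srv disabled=no

# --- PPPoE client box (the "home router") ---
/interface pppoe-client add name=pppoe-out1 interface=ether3 \
    user=test password=test add-default-route=yes disabled=no
/ip firewall nat add chain=srcnat out-interface=pppoe-out1 action=masquerade
```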
With fasttracking on at both ends, the throughput was 995 Mbit/s for upload or 995 Mbit/s for download when the traffic in the opposite direction was only there to make the firewall recognize a bi-directional stream and activate fasttracking; in full duplex, the throughput was about 900 Mbit/s per direction. I intentionally configured two bi-directional streams to see how the multi-core architecture would deal with that; well, there was no difference between a single 900 Mbit/s stream per direction and two 450 Mbit/s streams per direction. The CPU load looked as follows:
NAME          CPU  USAGE
ethernet        0     3%
console         0     0%
ssh             0     0%
firewall        0     0%
networking      0     1%
mpls            0     1%
management      0     0%
unclassified    0     0%
cpu0                  5%
ethernet        1  36.5%
console         1   0.5%
firewall        1     2%
networking      1     5%
mpls            1     0%
unclassified    1   0.5%
cpu1               44.5%
ethernet        2    27%
firewall        2     0%
networking      2   3.5%
mpls            2     0%
management      2     0%
unclassified    2   0.5%
cpu2                 31%
ethernet        3    18%
firewall        3     0%
networking      3     2%
mpls            3   0.5%
management      3     0%
unclassified    3     0%
cpu3               20.5%
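"Fasttracking on" refers to the usual pair of forward-chain rules (the same ones RouterOS ships in its default firewall); a sketch:

```shell
# mark established/related connections for fasttracking, then accept them;
# subsequent packets of fasttracked connections bypass most of the firewall
/ip firewall filter add chain=forward action=fasttrack-connection \
    connection-state=established,related
/ip firewall filter add chain=forward action=accept \
    connection-state=established,related
```

"Fasttracking off" simply means the fasttrack-connection rule is disabled, so every packet traverses the full firewall and connection tracking path.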
With fasttracking off at both ends, the full-duplex throughput was about 420 Mbit/s per direction, with one core on each machine running at 95% and another at about 50%:
NAME          CPU  USAGE
ethernet        0     1%
firewall        0   2.5%
networking      0     5%
profiling       0     0%
unclassified    0     0%
cpu0                8.5%
console         1     1%
firewall        1     0%
networking      1   1.5%
management      1     0%
unclassified    1     0%
cpu1                2.5%
ethernet        2  12.5%
firewall        2  18.5%
networking      2    15%
mpls            2     0%
management      2   0.5%
profiling       2     0%
unclassified    2   0.5%
cpu2                 47%
ethernet        3  15.5%
firewall        3    37%
networking      3  36.5%
mpls            3     1%
management      3     0%
profiling       3     0%
unclassified    3     5%
cpu3                 95%
With fasttracking on at the server side and off at the client side, to get as close as possible to a typical setup where the server side is not a $70 plastic box, the full-duplex throughput was above 590 Mbit/s per direction, with one core at the client at 95% and two others at 50%:
NAME          CPU  USAGE
ethernet        0   1.5%
console         0     0%
firewall        0     3%
networking      0   3.5%
mpls            0     0%
logging         0     0%
management      0     0%
profiling       0     0%
unclassified    0     1%
cpu0                  9%
ethernet        1  18.5%
firewall        1  31.5%
networking      1    37%
mpls            1     0%
profiling       1     0%
unclassified    1     8%
cpu1                 95%
ethernet        2     8%
console         2     0%
firewall        2    17%
networking      2  20.5%
mpls            2     0%
management      2   0.5%
profiling       2     0%
unclassified    2     1%
cpu2                 47%
ethernet        3   9.5%
console         3     0%
ssh             3     0%
firewall        3    19%
networking      3    21%
mpls            3     0%
unclassified    3     2%
cpu3               51.5%