I'm testing with iperf on both sides with a packet size of 40 bytes.

Depends on CPU speed and packet size.
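For reference, a small-packet UDP run of the kind described above might look like this with classic iperf; the address and offered rate are made up for illustration, not taken from the thread:

# on the far end
iperf -s -u
# on the near end: UDP, 40-byte payloads, 100 Mbps offered load, 30 seconds
iperf -c 10.11.0.2 -u -l 40 -b 100M -t 30

With 40-byte payloads the router has to forward far more packets per second than at 1500 bytes for the same bitrate, which is why CPU speed dominates the result.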
[admin@751_server] /tool> bandwidth-test direction=both protocol=udp 10.11.0.2
status: running
duration: 32s
tx-current: 82.2Mbps
tx-10-second-average: 86.3Mbps
tx-total-average: 65.3Mbps
rx-current: 97.5Mbps
rx-10-second-average: 87.9Mbps
rx-total-average: 72.9Mbps
lost-packets: 7081
random-data: no
direction: both
tx-size: 1500
rx-size: 1500
[admin@751_server] /tool> bandwidth-test direction=both protocol=tcp tcp-connection-count=4 10.11.0.2
status: running
duration: 14s
tx-current: 54.3Mbps
tx-10-second-average: 54.9Mbps
tx-total-average: 54.6Mbps
rx-current: 54.5Mbps
rx-10-second-average: 55.0Mbps
rx-total-average: 55.1Mbps
random-data: no
direction: both
[admin@751_server] /tool> bandwidth-test direction=transmit protocol=tcp tcp-connection-count=4 10.11.0.2
status: running
duration: 8s
tx-current: 94.1Mbps
tx-10-second-average: 90.8Mbps
tx-total-average: 90.8Mbps
random-data: no
direction: transmit (results for "receive" were quite similar)
[admin@751_server] /tool> bandwidth-test direction=both protocol=udp 10.22.2.2
status: running
duration: 25s
tx-current: 7.5Mbps
tx-10-second-average: 9.0Mbps
tx-total-average: 11.2Mbps
rx-current: 51.5Mbps
rx-10-second-average: 50.0Mbps
rx-total-average: 37.9Mbps
lost-packets: 395
random-data: no
direction: both
tx-size: 1450
rx-size: 1450
[admin@751_server] /tool> bandwidth-test direction=transmit protocol=udp 10.22.2.2
status: running
duration: 15s
tx-current: 55.8Mbps
tx-10-second-average: 37.0Mbps
tx-total-average: 28.5Mbps
random-data: no
direction: transmit
tx-size: 1450
[admin@751_server] /tool> bandwidth-test direction=receive protocol=udp 10.22.2.2
status: running
duration: 15s
rx-current: 59.6Mbps
rx-10-second-average: 58.1Mbps
rx-total-average: 46.3Mbps
lost-packets: 570
random-data: no
direction: receive
rx-size: 1450
[admin@751_server] /tool> bandwidth-test direction=both protocol=tcp tcp-connection-count=4 10.22.2.2
status: running
duration: 10s
tx-current: 18.1Mbps
tx-10-second-average: 18.2Mbps
tx-total-average: 18.2Mbps
rx-current: 18.2Mbps
rx-10-second-average: 18.3Mbps
rx-total-average: 18.3Mbps
random-data: no
direction: both
[admin@751_server] /tool> bandwidth-test direction=transmit protocol=tcp tcp-connection-count=4 10.22.2.2
status: running
duration: 10s
tx-current: 27.9Mbps
tx-10-second-average: 18.7Mbps
tx-total-average: 18.7Mbps
random-data: no
direction: transmit
[admin@751_server] /tool> bandwidth-test direction=receive protocol=tcp tcp-connection-count=4 10.22.2.2
status: running
duration: 10s
rx-current: 35.5Mbps
rx-10-second-average: 33.6Mbps
rx-total-average: 33.6Mbps
random-data: no
direction: receive
[admin@751_server] /tool> bandwidth-test direction=both protocol=udp 10.22.2.2
status: running
duration: 15s
tx-current: 4.3Mbps
tx-10-second-average: 2.4Mbps
tx-total-average: 3.3Mbps
rx-current: 14.8Mbps
rx-10-second-average: 13.8Mbps
rx-total-average: 13.2Mbps
lost-packets: 582
random-data: no
direction: both
tx-size: 1500
rx-size: 1500
[admin@751_server] /tool> bandwidth-test direction=transmit protocol=udp 10.22.2.2
status: running
duration: 8s
tx-current: 17.5Mbps
tx-10-second-average: 15.7Mbps
tx-total-average: 15.7Mbps
random-data: no
direction: transmit
tx-size: 1500
[admin@751_server] /tool> bandwidth-test direction=receive protocol=udp 10.22.2.2
status: running
duration: 9s
rx-current: 16.6Mbps
rx-10-second-average: 14.3Mbps
rx-total-average: 14.3Mbps
lost-packets: 864
random-data: no
direction: receive
rx-size: 1500
[admin@751_server] /tool> bandwidth-test direction=both protocol=tcp tcp-connection-count=4 10.22.2.2
status: running
duration: 10s
tx-current: 5.7Mbps
tx-10-second-average: 5.3Mbps
tx-total-average: 5.3Mbps
rx-current: 5.4Mbps
rx-10-second-average: 5.5Mbps
rx-total-average: 5.5Mbps
random-data: no
direction: both
[admin@751_server] /tool> bandwidth-test direction=transmit protocol=tcp tcp-connection-count=4 10.22.2.2
status: running
duration: 10s
tx-current: 10.6Mbps
tx-10-second-average: 10.7Mbps
tx-total-average: 10.7Mbps
random-data: no
direction: transmit
[admin@751_server] /tool> bandwidth-test direction=receive protocol=tcp tcp-connection-count=4 10.22.2.2
status: running
duration: 10s
rx-current: 11.5Mbps
rx-10-second-average: 11.7Mbps
rx-total-average: 11.7Mbps
random-data: no
direction: receive
20 Mbps to 60 Mbps is a third of the total speed of the connection; if my OVPN connection were that fast I would be very happy!

I have pretty much the same problem with VPN. A bandwidth test between the MikroTiks shows great speed, but when it comes to the speed between two devices behind each MikroTik, the speed drops drastically. Emils from support looked at this problem, did nothing, made a couple of strange suggestions, then promised to look one more time and has now been gone for over a month. Support ignores all follow-up letters; meanwhile we're suffering from low VPN speed and no one can help with even a hint about my config. And that's the other end of a 160-employee company, I think. VPN speed won't go higher than 20 Mbps, while the real speed between the routers is greater than 60 Mbps even with encryption.
[Ticket#2016052266000207]
This is a common problem related to TCP meltdown. You shouldn't use TCP tunnels over long distances or across many hops. Use L2TP or PPTP instead. We're all waiting for ROS7 to bring OVPN UDP support.

This situation does not happen with an IPsec tunnel! The PING response time does not change (or changes very little) whether the tunnel is busy or not.
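If someone wants to try the L2TP suggestion, a minimal RouterOS setup might look like the following; the username, password, and addresses are placeholders, not values from this thread:

# on the server side
/interface l2tp-server server set enabled=yes
/ppp secret add name=vpnuser password=changeme service=l2tp local-address=10.50.0.1 remote-address=10.50.0.2
# on the client side
/interface l2tp-client add name=l2tp-out1 connect-to=203.0.113.1 user=vpnuser password=changeme disabled=no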
I believe that the correct answer was written by DJGlooM.

Can you post your VPN config output? I use PPTP and L2TP and can consistently push 50M+ with no issue.
Does your server have enough bandwidth to handle the tx/rx of your speed test? How far is the server from the location, and what server are you running a speed test to?
Are you running the test via bandwidth test, or a website like speedtest.net?
If you're interested in maximizing throughput, I recommend using L2TP without IPsec. Of course this isn't secure, but it carries the least packet overhead. On that subject, you may want to verify your path MTU to make sure you aren't running the VPN tunnel with an MTU that exceeds what the connection can carry.
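A quick way to sanity-check path MTU from the RouterOS console is a do-not-fragment ping across the link; the address is just an example, and as I understand it the size here includes the IP and ICMP headers:

/ping 10.22.2.2 size=1500 do-not-fragment
# if replies stop coming back, step the size down until they do
/ping 10.22.2.2 size=1400 do-not-fragment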
Today I did some tests, first using a PPTP tunnel and then L2TP; the data transfer speed was great and so was the PING response time. The low-speed problem occurs only with OVPN tunnels.
Unfortunately, RouterOS 6.x supports only TCP for OVPN tunnels, not UDP.
I have 100 Mbit fiber internet access at both companies, which is why I am sure the line is not the problem.
Tomorrow I'll try to set up an IPsec tunnel; I am sure this type of VPN is very fast, because I have already tried it at two other companies.
The only doubt I have is that at the second company only one public IP is available, and I don't know whether you can use the same IP for the IPsec tunnel and for sending non-tunneled traffic out to the internet.
Bah! Tomorrow I'll have the answer.
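For what it's worth, sharing one public IP between an IPsec tunnel and normal internet traffic usually comes down to exempting the site-to-site subnets from masquerade before the general srcnat rule; a sketch with invented subnets:

# accept (skip NAT) for traffic that should enter the IPsec policy
/ip firewall nat add chain=srcnat action=accept src-address=192.168.1.0/24 dst-address=192.168.2.0/24 place-before=0 comment="no NAT for IPsec site-to-site"
# the usual masquerade rule stays below it
/ip firewall nat add chain=srcnat action=masquerade out-interface=ether1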
If you have a public IP on both devices, you can just set up an EoIP tunnel to make a layer 3 tunnel. The most bandwidth I have seen pushed over a VPN tunnel in MikroTik has been over EoIP.

EoIP is a layer 2 tunnel, and EoIP is really GRE, so instead of using it at layer 3 you can use pure GRE tunneling. And yes, GRE is slightly faster than L2TP because the overhead is smaller, and I think GRE packets are routed faster than UDP packets.
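As a concrete illustration of the GRE suggestion, a bare-bones GRE tunnel between two RouterOS boxes could look like this (all addresses invented):

# router A
/interface gre add name=gre-to-b local-address=203.0.113.1 remote-address=203.0.113.2
/ip address add address=10.99.0.1/30 interface=gre-to-b
# router B
/interface gre add name=gre-to-a local-address=203.0.113.2 remote-address=203.0.113.1
/ip address add address=10.99.0.2/30 interface=gre-to-a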
For some reason, my SSTP connection was slow unless I either torched the connection or enabled a queue tree on the interface (even though nothing goes through the queue tree, apparently).

That means you are using "fasttrack" in a situation where it cannot be used.
Are you sure that adding a /queue tree item prevents the packets handled by the queue from being fasttracked? Yes, sniffing does disable fasttracking, and maybe torching does as well, but adding a queue?
You are right: adding a queue tree on an interface (as opposed to a global queue tree) should not disable fasttrack.
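For anyone hitting the same symptom, two things worth trying, sketched against the typical default firewall (the VPN subnet is an assumption, and place-before needs the find to match exactly one rule):

# quick test: disable fasttrack entirely and re-run the speed test
/ip firewall filter set [find action=fasttrack-connection] disabled=yes
# or, more surgically, accept the VPN subnet above the fasttrack rule so queues still see it
/ip firewall filter add chain=forward action=accept src-address=10.50.0.0/24 place-before=[find action=fasttrack-connection] comment="keep VPN traffic out of fasttrack"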
I would also tick the "yes" radio button under the "Only One" option in PPP -> Profile -> Limits. My understanding is that this workaround -- the way that I've got it scripted -- is only going to be compatible with one active connection at a time per user.

Yes, the generated name of the interface differs if a second connection from the same user gets established. It will be <sstp-user-1> etc., so with the way you create the interface and queue names you'd be creating another queue with the same name, which would obviously fail. And if the additional connection went down first, you would remove the queue for the surviving connection, which is also not good.
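On the CLI, the "Only One" option corresponds to the profile's only-one property; the profile name below is a placeholder, and the queue add/remove script discussed above would hang off the same profile's on-up and on-down hooks:

/ppp profile set [find name=sstp-profile] only-one=yes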