BigTrumpet
Frequent Visitor
Topic Author
Posts: 53
Joined: Thu Feb 07, 2008 7:46 pm

Poor performance with GRE tunnel

Thu Jan 11, 2018 8:22 pm

Hi guys,
we have a RB3011 connected to a CCR1036 through a dedicated fiber MPLS connection (capped to 30/30 Mbps).
Latency from RB3011 to CCR is about 3 ms, no jitter, no packet loss.
MTU is 1500.

Between my two routers I have configured a GRE tunnel:
/interface gre
add allow-fast-path=no !keepalive local-address=10.38.16.6 mtu=1500 name=gre-tunnel remote-address=10.30.1.6
When I run a bandwidth test on the plain L3 network, I get about 29 Mbps:
/tool bandwidth-test direction=receive duration=20 interval=1 user=admin password=**** 10.30.1.6
                status: done testing
              duration: 20s
            rx-current: 28.8Mbps
  rx-10-second-average: 28.7Mbps
      rx-total-average: 24.1Mbps
          lost-packets: 260
           random-data: no
             direction: receive
               rx-size: 1500
When I run the same test inside the GRE tunnel, I barely reach 23 Mbps:
/tool bandwidth-test direction=receive duration=20 interval=1 user=admin password=**** 1.2.3.4
                status: done testing
              duration: 21s
            rx-current: 23.1Mbps
  rx-10-second-average: 23.0Mbps
      rx-total-average: 20.3Mbps
          lost-packets: 699
           random-data: no
             direction: receive
               rx-size: 1500
Is such a drop in performance normal? It's about 20% less.
What are your experiences with GRE?

Thank you.
Massimo
 
Paternot
Forum Guru
Posts: 1059
Joined: Thu Jun 02, 2016 4:01 am
Location: Niterói / Brazil

Re: Poor performance with GRE tunnel

Thu Jan 11, 2018 9:10 pm

If the MTU of the link is 1500, then the MTU of the GRE tunnel will be less, due to overhead - 24 bytes smaller. Try again, but with packets of 1476 bytes.

Also, this is an awful lot of lost packets.
 
BigTrumpet
Frequent Visitor
Topic Author
Posts: 53
Joined: Thu Feb 07, 2008 7:46 pm

Re: Poor performance with GRE tunnel

Fri Jan 12, 2018 4:35 pm

Paternot wrote:
If the MTU of the link is 1500, then the MTU of the GRE tunnel will be less, due to overhead - 24 bytes smaller. Try again, but with packets of 1476 bytes.

Also, this is an awful lot of lost packets.
Thank you Paternot.
Indeed, using the option "remote-udp-tx-size=1476", the result is about 29.4 Mbps... much better!
So it must be a fragmentation issue, but now I can't understand why: when I check fragmentation through the tunnel, packets seem to pass unfragmented up to 1500 bytes.

For example:
ping 8.8.8.8 do-not-fragment size=1500
It replies OK, while with a size of 1501 it says:

packet too large and cannot be fragmented

Why does it appear that the GRE tunnel MTU is 1500?
Do I have to manually change the MTU to 1476 in the GRE settings? (It's 1500 now.)

Massimo
 
Paternot
Forum Guru
Posts: 1059
Joined: Thu Jun 02, 2016 4:01 am
Location: Niterói / Brazil

Re: Poor performance with GRE tunnel

Fri Jan 12, 2018 4:48 pm


BigTrumpet wrote:
Thank you Paternot.
Actually, using option "remote-udp-tx-size=1476", the result is about 29.4... much better!
So, it must be a fragmentation issue, but now I can't understand why if I check fragmentation through the tunnel, it seems unfragmented up to 1500 bytes.

For example:
ping 8.8.8.8 do-not-fragment size=1500
It replies OK, while trying with 1501 it says:

packet too large and cannot be fragmented

Why does it appear that GRE tunnel MTU is 1500?
Do I have to change manually MTU size on the GRE settings to 1476? (now it's 1500).

Massimo
Because the traffic inside the tunnel itself is not fragmented - the tunnel is. Like this:

Tunnel MTU is 1500.
Ethernet MTU is 1500 too.
GRE has an overhead of 24 bytes. So, a packet (inside the tunnel) of 1500 bytes results in an outside world packet of 1524 bytes.
Ethernet can only transmit 1500 bytes. So the 1524-byte packet is broken into two: one with 1500 bytes (header and payload) and another with the remaining 24 bytes of payload - plus its own header.

When you cut the MTU of the GRE tunnel to 1476, we have this:

Tunnel MTU is 1476
Ethernet MTU is 1500
GRE has an overhead of 24 bytes. So, a packet (inside the tunnel) of 1476 bytes results in an outside world packet of 1500 bytes.
1500 bytes is the ethernet MTU, so a single packet is sent.
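To make the arithmetic above concrete, here is a small sketch (plain Python, not RouterOS; the 24-byte figure is the 20-byte outer IPv4 header plus the 4-byte GRE header, and the fragment count is simplified in the same way as the description above):

```python
GRE_OVERHEAD = 24   # 20-byte outer IPv4 header + 4-byte GRE header
ETH_MTU = 1500      # link MTU on the path between the routers

def outer_size(inner: int) -> int:
    """Size on the wire of a GRE-encapsulated packet."""
    return inner + GRE_OVERHEAD

def fragments(inner: int) -> int:
    """How many link-layer packets the encapsulated packet needs
    (simplified: ignores the extra IP header on each fragment)."""
    return -(-outer_size(inner) // ETH_MTU)  # ceiling division

print(outer_size(1500), fragments(1500))  # 1524 on the wire -> 2 fragments
print(outer_size(1476), fragments(1476))  # 1500 on the wire -> 1 packet
```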
 
BigTrumpet
Frequent Visitor
Topic Author
Posts: 53
Joined: Thu Feb 07, 2008 7:46 pm

Re: Poor performance with GRE tunnel

Fri Jan 12, 2018 5:13 pm

Good explanation, thanks.
So, apart from setting the GRE MTU to the ethernet MTU minus 24 (1476 in my case), do I also have to configure a mangle rule for MSS clamping?

Like this?
/ip firewall mangle
add action=change-mss chain=postrouting new-mss=1476 out-interface=gre-BoIX protocol=tcp tcp-flags=syn tcp-mss=!0-1476
Massimo
 
Paternot
Forum Guru
Posts: 1059
Joined: Thu Jun 02, 2016 4:01 am
Location: Niterói / Brazil

Re: Poor performance with GRE tunnel

Fri Jan 12, 2018 7:16 pm

BigTrumpet wrote:
Good explanation, thanks.
So, apart from setting GRE MTU to ethernet MTU-24 (1476 in my case), do I have to configure a mangle rule for MSS clamping?

Like this?
/ip firewall mangle
add action=change-mss chain=postrouting new-mss=1476 out-interface=gre-BoIX protocol=tcp tcp-flags=syn tcp-mss=!0-1476
Massimo
Not sure, but I don't think it would be needed. After all, the MTU is already set.
 
16again
Frequent Visitor
Posts: 78
Joined: Fri Dec 29, 2017 12:23 pm

Re: Poor performance with GRE tunnel

Sat Jan 13, 2018 2:00 pm

Remote endpoints aren't aware of the GRE tunnel between them, or of its MTU.
So they'll negotiate too large a packet size at TCP session setup.

The routers will then start fragmenting oversized TCP packets (i.e. most packets).
When the DF (don't-fragment) bit is set, the router will instead send back a "fragmentation needed" ICMP response, and the packet doesn't reach the remote side.

So in short: yes, do use MSS clamping.
Luckily, most traffic is TCP; large UDP packets will still require fragmentation.
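For the record, the usual formula is MSS = tunnel MTU minus 40 bytes (20-byte IP header + 20-byte TCP header), so for a 1476-byte tunnel MTU the clamp value would be 1436 rather than 1476. A sketch of such rules (not verified on this exact setup; it uses the `gre-tunnel` interface name from the first post, and clamps SYNs in both directions - mirror the same on the other router):

```
/ip firewall mangle
add action=change-mss chain=forward out-interface=gre-tunnel protocol=tcp \
    tcp-flags=syn tcp-mss=1437-65535 new-mss=1436
add action=change-mss chain=forward in-interface=gre-tunnel protocol=tcp \
    tcp-flags=syn tcp-mss=1437-65535 new-mss=1436
```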