MetalLB BGP ECMP
Posted: Fri Sep 25, 2020 2:25 pm
by si458
Hi All,
this is going to be a far-fetched question, but I thought I would ask anyway.
I've got BGP working between MetalLB/Kubernetes and our MikroTik router.
HOWEVER, it's currently not load-balancing the way they say it should - traffic only goes to one node instead of all 3.
I've read up about it, and apparently I need to enable ECMP on my router/switch?
https://metallb.universe.tf/concepts/bgp/
The routing table on my MikroTik currently shows the single IP address pointing to all 3 nodes, as expected,
but only one of the routes is actually marked as active,
and they all have the same distance of 200.
Is it possible to get MikroTik to load-balance them with BGP?
If so, how?
Regards
Simon
Re: MetalLB BGP ECMP
Posted: Fri Sep 25, 2020 3:56 pm
by StubArea51
MikroTik will only load balance (ECMP) with iBGP when peering via loopbacks and using an IGP (OSPF or RIP) or static routes.
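A minimal sketch of that pattern on RouterOS v6 (all addresses and the AS number here are illustrative, and the rest of the OSPF setup is omitted):

```
# loopback-style bridge with a /32 on each router
/interface bridge add name=lo0
/ip address add address=10.255.0.1/32 interface=lo0
# advertise the loopback into OSPF so the iBGP next hops resolve recursively
/routing ospf network add network=10.255.0.1/32 area=backbone
# iBGP peering between loopbacks (same AS on both ends)
/routing bgp peer add name=ibgp-peer remote-address=10.255.0.2 remote-as=65530 update-source=lo0
```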
https://wiki.mikrotik.com/wiki/Manual:B ... _with_iBGP
Re: MetalLB BGP ECMP
Posted: Fri Sep 25, 2020 5:03 pm
by si458
Hi IPANetEngineer
I have already tried those examples and they don't work :(
The routes get inserted by MetalLB itself (not manually), and they come in as a separate route per host, NOT as a single route with multiple gateways,
e.g.
192.168.168.179 -> 192.168.168.156 - 200
192.168.168.179 -> 192.168.168.157 - 200
192.168.168.179 -> 192.168.168.158 - 200
whereas the example shows it needs to be
192.168.168.179 -> 192.168.168.156,192.168.168.157,192.168.168.158 - 200
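For reference, the only route shape RouterOS v6 will actually ECMP over is a single entry listing every gateway - e.g. a manually added static route using the addresses above:

```
/ip route add dst-address=192.168.168.179/32 gateway=192.168.168.156,192.168.168.157,192.168.168.158
```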
Regards
Simon
Re: MetalLB BGP ECMP
Posted: Sun Oct 25, 2020 11:25 pm
by pubudeux
Hey Simon,
Did you ever find a solution for this? I'm stuck at the same point as well with Mikrotik BGP + MetalLB.
Re: MetalLB BGP ECMP
Posted: Mon Oct 26, 2020 12:34 pm
by si458
Hi,
sadly no, I didn't.
I ended up using layer 2 mode with internal IP addresses on a small /28 subnet,
then on the MikroTik forwarding from the external IPs to the internal ones using NETMAP.
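For anyone copying this, the NETMAP part is a pair of 1:1 NAT rules along these lines (the public /28 here is a placeholder, and the internal range is assumed to match the /28 above):

```
# map a public /28 onto the internal /28 that MetalLB hands out in layer 2 mode
/ip firewall nat add chain=dstnat dst-address=203.0.113.16/28 action=netmap to-addresses=192.168.168.176/28
# translate replies from the internal range back to the public range
/ip firewall nat add chain=srcnat src-address=192.168.168.176/28 action=netmap to-addresses=203.0.113.16/28
```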
Re: MetalLB BGP ECMP
Posted: Wed Oct 28, 2020 7:48 am
by SillyPosition
I tried to set up the same thing very recently: MetalLB on physical machines in my home, publishing services over the home network.
My first attempt actually worked very simply. I used the AS configured in MikroTik (Routing - BGP - Instances), and I set up a peer using my configured ASN for MetalLB and the node IPs (multiple peers covering the whole k8s cluster, all with the same AS).
They all came up, and I even saw the route appear (in IP - Routes), with distance 20.
Browsing this IP from machines on the network was successful, though the first few attempts were very slow - it took up to 5-6 seconds to load the nginx default page.
But then, without me changing anything in particular, it stopped working.
Everything in MikroTik looks the same and I still see the generated route, but I'm unable to reach the exposed service at all.
I wonder if any of you guys have made progress on this since then?
I much prefer the BGP approach over ARP, since I read failover is much faster with it.
Re: MetalLB BGP ECMP
Posted: Wed Mar 17, 2021 12:28 am
by naviset
It seems it's not possible to use MikroTik for MetalLB load balancing in BGP mode:
viewtopic.php?t=81321
Re: MetalLB BGP ECMP
Posted: Mon May 03, 2021 5:49 pm
by jr0dd
I've been beating my head against the wall with this for the last couple of weekends. BGP establishes a connection, individual routes are created in ip/routes for each device, and everything looks like it's working. BUT I cannot reach anything behind the metallb IPs from my home network devices. I cannot ping them, and they don't show up in an arp-scan. The devices behind metallb can ping devices on my home network, though. In layer 2 mode everything works as expected.
--------
/ip address print
Flags: X - disabled, I - invalid, D - dynamic
# ADDRESS NETWORK INTERFACE
0 10.10.0.1/20 10.10.0.0 lan-bridge
1 D public-ip/24 public-ip ether1
--------
/ip route print
# DST-ADDRESS PREF-SRC GATEWAY DISTANCE
0 ADS 0.0.0.0/0 public-ip 1
1 ADC 10.10.0.0/20 10.10.0.1 lan-bridge 0
2 ADb 10.10.2.1/32 10.10.2.10 20
3 ADb 10.10.2.10/32 10.10.2.10 20
4 ADb 10.10.2.100/32 10.10.2.10 20
5 ADC public-ip/24 public-ip ether1 0
--------
/routing bgp instance print
Flags: * - default, X - disabled
0 *X name="default" as=60000 router-id=0.0.0.0 redistribute-connected=no redistribute-static=no redistribute-rip=no redistribute-ospf=no redistribute-other-bgp=no out-filter="" client-to-client-reflection=yes
ignore-as-path-len=no routing-table=""
1 name="k3s" as=400 router-id=10.10.0.1 redistribute-connected=no redistribute-static=no redistribute-rip=no redistribute-ospf=no redistribute-other-bgp=no out-filter="" client-to-client-reflection=yes
ignore-as-path-len=no routing-table=""
--------
/routing bgp peer print detail
Flags: X - disabled, E - established
0 E name="metallb" instance=k3s remote-address=10.10.2.10 remote-as=500 tcp-md5-key="" nexthop-choice=force-self multihop=no route-reflect=no hold-time=3m ttl=default in-filter="" out-filter="" address-families=ip
default-originate=always remove-private-as=no as-override=no passive=no use-bfd=no
--------
/routing bgp network print
Flags: X - disabled
# NETWORK SYNCHRONIZE
0 10.10.2.0/24 yes
--------
/routing bgp advertisements print
PEER PREFIX NEXTHOP AS-PATH ORIGIN LOCAL-PREF
metallb 0.0.0.0/0 10.10.0.1 igp
--------
metallb-bgp-config
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.10.0.1
      peer-asn: 400
      my-asn: 500
    address-pools:
    - name: mikrotik
      protocol: bgp
      avoid-buggy-ips: true
      addresses:
      - 10.10.2.0/24
--------
metallb-l2-config
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: mikrotik
      protocol: layer2
      avoid-buggy-ips: true
      addresses:
      - 10.10.2.0/24
UPDATE: All I had to do was put metallb on a different subnet to get this working. Now if MikroTik could please support ECMP for these routes, that would be great.
Re: MetalLB BGP ECMP
Posted: Wed Aug 18, 2021 1:44 am
by YYZ
jr0dd wrote:
(config snipped - see the original post above)
UPDATE: All I had to do was put metallb on a different subnet to get this working.
My configuration is also working, but I'm getting 5 seconds of latency when connecting to one of the pods behind MetalLB, as another user mentioned previously. Are you experiencing the same issue?
The only way I found to eliminate this delay is by creating a masquerade NAT rule in the Firewall section, but I can't understand the reason behind it. Any ideas?
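For completeness, the rule I added is roughly this (pool subnet as in the quoted config, so adjust to yours):

```
# masquerade LAN traffic headed for the MetalLB pool so replies come back through the router
/ip firewall nat add chain=srcnat dst-address=10.10.2.0/24 action=masquerade
```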
Re: MetalLB BGP ECMP
Posted: Sat Feb 25, 2023 1:21 am
by eset
jr0dd wrote:
(config snipped - see the original post above)
UPDATE: All I had to do was put metallb on a different subnet to get this working.
YYZ wrote:
The only way I found to eliminate this delay is by creating a masquerade NAT rule in the Firewall section, but I can't understand the reason behind it. Any ideas?
So which of the two ConfigMaps are you actually using? They share the same name and namespace, so applying the second one overwrites the first.
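Since both manifests target metadata.name "config" in metallb-system, a kubectl apply of the second simply replaces the first - the whole MetalLB config has to live in a single ConfigMap. A sketch combining both pools (the pool names and the layer-2 range here are made up):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.10.0.1
      peer-asn: 400
      my-asn: 500
    address-pools:
    - name: mikrotik-bgp
      protocol: bgp
      addresses:
      - 10.10.2.0/24
    - name: mikrotik-l2
      protocol: layer2
      addresses:
      - 10.10.3.0/24
```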