You can configure MetalLB so that the IP address is advertised with layer 2 protocols, the BGP protocol, or both. With layer 2, MetalLB provides a fault-tolerant external IP address. With BGP, MetalLB provides fault-tolerance for the external IP address and load balancing.
MetalLB supports advertising using L2 and BGP for the same set of IP addresses.
MetalLB provides the flexibility to assign address pools to specific BGP peers, which effectively advertises the addresses to a subset of nodes on the network. This allows for more complex configurations, for example, facilitating the isolation of nodes or the segmentation of the network.
The fields for the BGPAdvertisements object are defined in the following table:

Field | Type | Description
---|---|---
metadata.name | string | Specifies the name for the BGP advertisement.
metadata.namespace | string | Specifies the namespace for the BGP advertisement. Specify the same namespace that the MetalLB Operator uses.
spec.aggregationLength | integer | Optional: Specifies the number of bits to include in a 32-bit CIDR mask. To aggregate the routes that the speaker advertises to BGP peers, the mask is applied to the routes for several service IP addresses and the speaker advertises the aggregated route. For example, with an aggregation length of 24, the speaker can aggregate several 10.0.1.x/32 service IP addresses and advertise a single 10.0.1.0/24 route.
spec.aggregationLengthV6 | integer | Optional: Specifies the number of bits to include in a 128-bit CIDR mask. For example, with an aggregation length of 124, the speaker can aggregate several fc00:f853:ccd:e799::x/128 service IP addresses and advertise a single fc00:f853:ccd:e799::0/124 route.
spec.communities | array (string) | Optional: Specifies one or more BGP communities. Each community is specified as two 16-bit values separated by the colon character. Well-known communities must be specified as 16-bit values: NO_EXPORT is 65535:65281, NO_ADVERTISE is 65535:65282, and NO_EXPORT_SUBCONFED is 65535:65283.
spec.localPref | integer | Optional: Specifies the local preference for this advertisement. This BGP attribute applies to BGP sessions within the Autonomous System.
spec.ipAddressPools | array (string) | Optional: The list of IPAddressPools to advertise with this advertisement, selected by name.
spec.ipAddressPoolSelectors | array | Optional: A selector for the IPAddressPools that get advertised with this advertisement. This selects the IPAddressPool based on labels assigned to it instead of the name. If no IPAddressPool is selected by this selector or by the list, the advertisement is applied to all the IPAddressPools.
spec.nodeSelectors | array | Optional: Limits the nodes to announce as next hops for the load balancer IP. When empty, all the nodes are announced as next hops.
spec.peers | array (string) | Optional: Limits the BGP peers that the IPs of the selected pools are advertised to. When empty, the load balancer IP is announced to all the BGP peers that are configured.
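To illustrate the community format described in the table, the following sketch (not part of the MetalLB API) converts a "16-bit:16-bit" community string such as 65535:65282 (NO_ADVERTISE) into the single 32-bit value that is carried in BGP messages:

```python
def community_to_int(community: str) -> int:
    """Convert a BGP community written as two 16-bit halves, such as
    "65535:65282", into its 32-bit numeric value."""
    high, low = (int(part) for part in community.split(":"))
    if not (0 <= high <= 0xFFFF and 0 <= low <= 0xFFFF):
        raise ValueError("each community half must fit in 16 bits")
    return (high << 16) | low

# NO_ADVERTISE (65535:65282) is 0xFFFFFF02, matching RFC 1997.
print(hex(community_to_int("65535:65282")))  # 0xffffff02
```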
Configure MetalLB as follows so that the peer BGP routers receive one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. Because the localPref and communities fields are not specified, the routes are advertised with localPref set to zero and no BGP communities.
Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Create an IP address pool.
Create a file, such as ipaddresspool.yaml, with content like the following example:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: doc-example-bgp-basic
spec:
  addresses:
  - 203.0.113.200/30
  - fc00:f853:ccd:e799::/124
Apply the configuration for the IP address pool:
$ oc apply -f ipaddresspool.yaml
Create a BGP advertisement.
Create a file, such as bgpadvertisement.yaml, with content like the following example:
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: bgpadvertisement-basic
  namespace: metallb-system
spec:
  ipAddressPools:
  - doc-example-bgp-basic
Apply the configuration:
$ oc apply -f bgpadvertisement.yaml
Configure MetalLB as follows so that MetalLB assigns IP addresses to load-balancer services in the ranges between 203.0.113.200 and 203.0.113.203 and between fc00:f853:ccd:e799::0 and fc00:f853:ccd:e799::f.
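A quick check with Python's standard ipaddress module (a sketch for illustration only, using the pool CIDRs from the example) confirms that the two pool CIDRs map exactly to those ranges:

```python
import ipaddress

# 203.0.113.200/30 contains four IPv4 addresses: .200 through .203.
v4 = ipaddress.ip_network("203.0.113.200/30")
print(v4.num_addresses, v4[0], v4[-1])  # 4 203.0.113.200 203.0.113.203

# fc00:f853:ccd:e799::/124 contains sixteen IPv6 addresses: ::0 through ::f.
v6 = ipaddress.ip_network("fc00:f853:ccd:e799::/124")
print(v6.num_addresses, v6[0], v6[-1])  # 16 fc00:f853:ccd:e799:: fc00:f853:ccd:e799::f
```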
To explain the two BGP advertisements, consider an instance when MetalLB assigns the IP address 203.0.113.200 to a service. With that IP address as an example, the speaker advertises two routes to BGP peers:

203.0.113.200/32, with localPref set to 100 and the community set to the numeric value of the NO_ADVERTISE community. This specification indicates to the peer routers that they can use this route, but they should not propagate information about this route to BGP peers.

203.0.113.200/30, which aggregates the load-balancer IP addresses assigned by MetalLB into a single route. MetalLB advertises the aggregated route to BGP peers with the community attribute set to 8000:800. BGP peers propagate the 203.0.113.200/30 route to other BGP peers. When traffic is routed to a node with a speaker, the 203.0.113.200/32 route is used to forward the traffic into the cluster and to a pod that is associated with the service.

As you add more services and MetalLB assigns more load-balancer IP addresses from the pool, peer routers receive one local route, 203.0.113.20x/32, for each service, as well as the 203.0.113.200/30 aggregate route. Each service that you add generates the /30 route, but MetalLB deduplicates the routes to one BGP advertisement before communicating with peer routers.
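The aggregation and deduplication can be sketched with the standard ipaddress module: applying the aggregation length to each /32 host route yields the same /30 supernet, so peers see a single aggregate regardless of the number of services (the addresses below mirror the example):

```python
import ipaddress

# Service IPs that MetalLB might assign from the 203.0.113.200/30 pool.
service_ips = ["203.0.113.200", "203.0.113.201", "203.0.113.202"]

# Apply aggregationLength: 30 to each /32 host route; collecting the
# results in a set deduplicates identical aggregates.
aggregates = {
    ipaddress.ip_network(f"{ip}/32").supernet(new_prefix=30)
    for ip in service_ips
}

# Every host route falls inside one aggregate, so the /30 route is
# advertised once no matter how many services exist.
print(aggregates)  # {IPv4Network('203.0.113.200/30')}
```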
Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Create an IP address pool.
Create a file, such as ipaddresspool.yaml, with content like the following example:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: doc-example-bgp-adv
  labels:
    zone: east
spec:
  addresses:
  - 203.0.113.200/30
  - fc00:f853:ccd:e799::/124
  autoAssign: false
Apply the configuration for the IP address pool:
$ oc apply -f ipaddresspool.yaml
Create a BGP advertisement.
Create a file, such as bgpadvertisement1.yaml, with content like the following example:
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: bgpadvertisement-adv-1
  namespace: metallb-system
spec:
  ipAddressPools:
  - doc-example-bgp-adv
  communities:
  - 65535:65282
  aggregationLength: 32
  localPref: 100
Apply the configuration:
$ oc apply -f bgpadvertisement1.yaml
Create a file, such as bgpadvertisement2.yaml, with content like the following example:
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: bgpadvertisement-adv-2
  namespace: metallb-system
spec:
  ipAddressPools:
  - doc-example-bgp-adv
  communities:
  - 8000:800
  aggregationLength: 30
  aggregationLengthV6: 124
Apply the configuration:
$ oc apply -f bgpadvertisement2.yaml
To advertise an IP address from an IP address pool from a specific set of nodes only, use the .spec.nodeSelector specification in the BGPAdvertisement custom resource. This specification associates a pool of IP addresses with a set of nodes in the cluster. This is useful when you have nodes on different subnets in a cluster and you want to advertise an IP address from an address pool from a specific subnet, for example a public-facing subnet only.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Create an IP address pool by using a custom resource:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: pool1
spec:
  addresses:
  - 4.4.4.100-4.4.4.200
  - 2001:100:4::200-2001:100:4::400
Control which nodes in the cluster the IP address from pool1 advertises from by defining the .spec.nodeSelector value in the BGPAdvertisement custom resource:
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: example
spec:
  ipAddressPools:
  - pool1
  nodeSelector:
  - matchLabels:
      kubernetes.io/hostname: NodeA
  - matchLabels:
      kubernetes.io/hostname: NodeB
In this example, the IP address from pool1 is advertised from NodeA and NodeB only.
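The selection logic can be sketched in Python: each entry in the selector list is a separate matchLabels clause, and, under standard Kubernetes label-selector semantics, a node is chosen when it matches any entry (node names and labels below are hypothetical, mirroring the example):

```python
# Hypothetical cluster nodes and their labels.
nodes = {
    "NodeA": {"kubernetes.io/hostname": "NodeA"},
    "NodeB": {"kubernetes.io/hostname": "NodeB"},
    "NodeC": {"kubernetes.io/hostname": "NodeC"},
}

# The nodeSelector list from the example: two matchLabels entries.
node_selector = [
    {"kubernetes.io/hostname": "NodeA"},
    {"kubernetes.io/hostname": "NodeB"},
]

def selected(labels: dict, selectors: list) -> bool:
    # A node matches when every key/value pair of any one selector
    # entry is present in the node's labels (entries are OR-ed).
    return any(all(labels.get(k) == v for k, v in s.items()) for s in selectors)

announcers = [name for name, labels in nodes.items() if selected(labels, node_selector)]
print(announcers)  # ['NodeA', 'NodeB']
```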
The fields for the l2Advertisements object are defined in the following table:

Field | Type | Description
---|---|---
metadata.name | string | Specifies the name for the L2 advertisement.
metadata.namespace | string | Specifies the namespace for the L2 advertisement. Specify the same namespace that the MetalLB Operator uses.
spec.ipAddressPools | array (string) | Optional: The list of IPAddressPools to advertise with this advertisement, selected by name.
spec.ipAddressPoolSelectors | array | Optional: A selector for the IPAddressPools that get advertised with this advertisement. This selects the IPAddressPool based on labels assigned to it instead of the name. If no IPAddressPool is selected by this selector or by the list, the advertisement is applied to all the IPAddressPools.
spec.nodeSelectors | array | Optional: Limits the nodes to announce as next hops for the load balancer IP. When empty, all the nodes are announced as next hops.
spec.interfaces | array (string) | Optional: The list of interfaces that are used to announce the load balancer IP.
Configure MetalLB as follows so that the IPAddressPool is advertised with the L2 protocol.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Create an IP address pool.
Create a file, such as ipaddresspool.yaml, with content like the following example:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: doc-example-l2
spec:
  addresses:
  - 4.4.4.0/24
  autoAssign: false
Apply the configuration for the IP address pool:
$ oc apply -f ipaddresspool.yaml
Create an L2 advertisement.
Create a file, such as l2advertisement.yaml, with content like the following example:
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - doc-example-l2
Apply the configuration:
$ oc apply -f l2advertisement.yaml
The ipAddressPoolSelectors field in the BGPAdvertisement and L2Advertisement custom resource definitions is used to associate the IPAddressPool with the advertisement based on the label assigned to the IPAddressPool instead of the name itself.
This example shows how to configure MetalLB so that the IPAddressPool is advertised with the L2 protocol by configuring the ipAddressPoolSelectors field.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Create an IP address pool.
Create a file, such as ipaddresspool.yaml, with content like the following example:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: doc-example-l2-label
  labels:
    zone: east
spec:
  addresses:
  - 172.31.249.87/32
Apply the configuration for the IP address pool:
$ oc apply -f ipaddresspool.yaml
Create an L2 advertisement that advertises the IP by using ipAddressPoolSelectors.
Create a file, such as l2advertisement.yaml, with content like the following example:
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement-label
  namespace: metallb-system
spec:
  ipAddressPoolSelectors:
  - matchExpressions:
    - key: zone
      operator: In
      values:
      - east
Apply the configuration:
$ oc apply -f l2advertisement.yaml
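The In operator used above can be sketched as a small matcher: a pool is selected when, for every expression, the value of the pool label named by key appears in values (pool names and labels below are hypothetical, mirroring the example):

```python
# Hypothetical IP address pools and their labels.
pools = {
    "doc-example-l2-label": {"zone": "east"},
    "other-pool": {"zone": "west"},
}

# The matchExpressions list from the example L2Advertisement.
match_expressions = [
    {"key": "zone", "operator": "In", "values": ["east"]},
]

def matches(labels: dict, exprs: list) -> bool:
    # "In" succeeds when the label value for the key is in the values list.
    return all(
        labels.get(expr["key"]) in expr["values"]
        for expr in exprs
        if expr["operator"] == "In"
    )

selected_pools = [name for name, labels in pools.items() if matches(labels, match_expressions)]
print(selected_pools)  # ['doc-example-l2-label']
```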
By default, the IP addresses from an IP address pool that is assigned to a service are advertised from all the network interfaces. The interfaces field in the L2Advertisement custom resource definition is used to restrict the network interfaces that advertise the IP address pool.
This example shows how to configure MetalLB so that the IP address pool is advertised only from the network interfaces listed in the interfaces field of all nodes.
You have installed the OpenShift CLI (oc).
You are logged in as a user with cluster-admin privileges.
Create an IP address pool.
Create a file, such as ipaddresspool.yaml, and enter the configuration details like the following example:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb-system
  name: doc-example-l2
spec:
  addresses:
  - 4.4.4.0/24
  autoAssign: false
Apply the configuration for the IP address pool by running the following command:
$ oc apply -f ipaddresspool.yaml
Create an L2 advertisement that advertises the IP with the interfaces selector.
Create a YAML file, such as l2advertisement.yaml, and enter the configuration details like the following example:
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - doc-example-l2
  interfaces:
  - interfaceA
  - interfaceB
Apply the configuration for the advertisement by running the following command:
$ oc apply -f l2advertisement.yaml
The interface selector does not affect how MetalLB chooses the node to announce a given IP by using L2. The chosen node does not announce the service if the node does not have the selected interface.
From OKD 4.14, the default network behavior is to not allow forwarding of IP packets between network interfaces. Therefore, when MetalLB is configured on a secondary interface, you need to add a machine configuration to enable IP forwarding for only the required interfaces.
OKD clusters upgraded from 4.13 are not affected because a global parameter is set during upgrade to enable global IP forwarding.
To enable IP forwarding for the secondary interface, you have two options:
Enable IP forwarding for a specific interface.
Enable IP forwarding for all interfaces.
Enabling IP forwarding for a specific interface provides more granular control, while enabling it for all interfaces applies a global setting.
Patch the Cluster Network Operator, setting the parameter routingViaHost to true, by running the following command:
$ oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig": {"routingViaHost": true} }}}}' --type=merge
Enable forwarding for a specific secondary interface, such as bridge-net, by creating and applying a MachineConfig CR:
Base64-encode the string that is used to configure network kernel parameters by running the following command on your local machine:
$ echo -e "net.ipv4.conf.bridge-net.forwarding = 1\nnet.ipv6.conf.bridge-net.forwarding = 1\nnet.ipv4.conf.bridge-net.rp_filter = 0\nnet.ipv6.conf.bridge-net.rp_filter = 0" | base64 -w0
bmV0LmlwdjQuY29uZi5icmlkZ2UtbmV0LmZvcndhcmRpbmcgPSAxCm5ldC5pcHY2LmNvbmYuYnJpZGdlLW5ldC5mb3J3YXJkaW5nID0gMQpuZXQuaXB2NC5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMApuZXQuaXB2Ni5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMAo=
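If you prefer not to use echo and base64, the same payload can be produced with a short Python sketch; the interface name bridge-net matches the example, and a round-trip decode verifies the result:

```python
import base64

# The sysctl settings for the example secondary interface "bridge-net",
# one per line, with a trailing newline as produced by echo -e.
sysctls = (
    "net.ipv4.conf.bridge-net.forwarding = 1\n"
    "net.ipv6.conf.bridge-net.forwarding = 1\n"
    "net.ipv4.conf.bridge-net.rp_filter = 0\n"
    "net.ipv6.conf.bridge-net.rp_filter = 0\n"
)

# Encode for use in the MachineConfig data URL.
encoded = base64.b64encode(sysctls.encode()).decode()
print(encoded)

# Round-trip check: decoding must reproduce the original settings.
assert base64.b64decode(encoded).decode() == sysctls
```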
Create the MachineConfig CR to enable IP forwarding for the specified secondary interface named bridge-net.
Save the following YAML in the enable-ip-forward.yaml file:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: <node_role> (1)
  name: 81-enable-global-forwarding
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,bmV0LmlwdjQuY29uZi5icmlkZ2UtbmV0LmZvcndhcmRpbmcgPSAxCm5ldC5pcHY2LmNvbmYuYnJpZGdlLW5ldC5mb3J3YXJkaW5nID0gMQpuZXQuaXB2NC5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMApuZXQuaXB2Ni5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMAo= (2)
          verification: {}
        filesystem: root
        mode: 644
        path: /etc/sysctl.d/enable-global-forwarding.conf
  osImageURL: ""
1 The node role where you want to enable IP forwarding, for example, worker.
2 Populate with the generated base64 string.
Apply the configuration by running the following command:
$ oc apply -f enable-ip-forward.yaml
After you apply the machine config, verify the changes by following this procedure:
Enter into a debug session on the target node by running the following command:
$ oc debug node/<node-name>
This step instantiates a debug pod called <node-name>-debug.
Set /host as the root directory within the debug shell by running the following command:
$ chroot /host
The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths.
Verify that IP forwarding is enabled by running the following command:
$ cat /etc/sysctl.d/enable-global-forwarding.conf
net.ipv4.conf.bridge-net.forwarding = 1
net.ipv6.conf.bridge-net.forwarding = 1
net.ipv4.conf.bridge-net.rp_filter = 0
net.ipv6.conf.bridge-net.rp_filter = 0
The output indicates that IPv4 and IPv6 packet forwarding is enabled on the bridge-net interface.