MetalLB Version
0.14.5
Deployment method
Charts
Main CNI
calico
Kubernetes Version
No response
Cluster Distribution
No response
Describe the bug
Related issue: #1339
here is the service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.0.0.79
  clusterIPs:
  externalTrafficPolicy: Local
  healthCheckNodePort: 30272
  internalTrafficPolicy: Cluster
  ipFamilies:
  ipFamilyPolicy: PreferDualStack
  ports:
  - nodePort: 30122
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 32678
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/instance: nginx-ingress
    app.kubernetes.io/name: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
Because the service is dual stack but MetalLB has no IPv6 configured in any pool, the controller fails to allocate an IPv6 address and stops there.
To Reproduce
Steps:
Create an IPAddressPool pool1 containing only IPv4 addresses.
Create a service with the address-pool annotation set to pool1 and ipFamilyPolicy set to PreferDualStack (see the sketch after these steps).
The expectation is that the service gets an IP address from the IPv4 family only; instead, the service stays stuck pending a load balancer IP.
The same behavior can be seen without the address-pool annotation if all pools contain only IPv4 addresses.
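For reference, a minimal sketch of the configuration described in the steps above, assuming the metallb.io/v1beta1 IPAddressPool CRD and the metallb.universe.tf/address-pool annotation; the pool namespace, address range, and the example service are illustrative:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: pool1
  namespace: metallb-system
spec:
  addresses:
  - 192.0.2.0/24             # IPv4 range only, no IPv6 defined
---
apiVersion: v1
kind: Service
metadata:
  name: example-lb           # hypothetical service for illustration
  annotations:
    metallb.universe.tf/address-pool: pool1
spec:
  type: LoadBalancer
  ipFamilyPolicy: PreferDualStack
  selector:
    app: example
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
With this setup the expectation is an IPv4 assignment from pool1, but as described above the service stays pending.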
Expected Behavior
As mentioned above
Additional Context
If only a single IP family is available in the MetalLB IPAddressPool (no IPv6 addresses defined):
In case of PreferDualStack, the MetalLB controller should allocate an IP from that family.
In case of RequireDualStack, it should not allocate any IP and the status should stay pending.
Currently the MetalLB controller makes no distinction between RequireDualStack and PreferDualStack; both behave as described above for 'RequireDualStack'.
I've read and agree with the following
I've checked all open and closed issues and my request is not there.
I've checked all open and closed pull requests and my request is not there.
I've read and agree with the following
I've checked all open and closed issues and my issue is not there.
This bug is reproducible when deploying MetalLB from the main branch
This sounds doable, but we need to be careful in considering the complexity and the logic.
In case of a dual stack service, what metallb should do is:
look for dual stack pools
if not available, look for single stack pools for the primary family
if not available, look for single stack pools for the secondary family
With the premise that a lot of our logic is tied to the pool the IP comes from, we want to keep the same pool for both addresses.
At that point, we need to choose what to do if:
a new pool that would provide both addresses is available (I'd say don't change the service)
the current pool is extended with the missing family (maybe we can then provide the service with the missing one)
This also needs to happen when the controller restarts.
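As a point of reference, a "dual stack pool" in the sense above is simply an IPAddressPool whose addresses list carries both families; extending an existing IPv4-only pool with the missing family would look like the (illustrative) ranges below:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: pool1
  namespace: metallb-system
spec:
  addresses:
  - 192.0.2.0/24        # existing IPv4 range
  - 2001:db8::/64       # adding an IPv6 range makes the pool dual stack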
That said, I currently don't have the bandwidth to work on this, but I am open to reviewing if somebody wants to take it.