Bgp control plane: add route aggregation feature #37275
base: main
Conversation
Hi @romanspb80. Before jumping into the code review of the implementation, I think we need to agree on the API change. I still think the user should specify the aggregation length instead of dynamically calculating it based on a boolean flag.
In the vast majority of cases there is going to be a single IP returned from getServicePrefix, so with a single /32 or /128 prefix there will be nothing to aggregate.
Have you tested this change on a Kubernetes cluster with a sample LoadBalancer service? Can you provide some example output of how the aggregation will look?
Hi @harsimran-pabla. Thanks for the comment.
In this case the dynamic calculation works:
The dynamically calculated /28 mask is more efficient than setting a /24 mask manually. Also, there is another case, which I noted in my comment. Thanks
Yes, but the common case is that each service gets a single /32 prefix. And if we pick the start and end addresses across the various services which have the aggregate flag set, the resulting aggregated prefix can cover a very large range, as services can have IPs allocated from different LB-IPAM pools.
This approach modifies the LB pool behavior with an aggregate flag. Why not introduce a new BGP advertisement type which advertises the complete LB IPAM pool range? The implementation would be much simpler and would not change the existing service reconcilers. I am trying to understand: if the goal is to reduce /32 prefix announcements, can we achieve it with this simpler mechanism? Existing service announcements can still be configured if the user wants services with eTP=Local to work.
Yes, that makes sense. So instead of dynamic calculation, use the CIDR mask (or all CIDRs from the LB pool) right away?
No, I would still prefer per-pool advertisement. At least we should start with this common case and see later whether it makes sense to improve further upon it. In this approach, the feature is not about prefix aggregation; it is about LB pool advertisement.
There are two approaches to reducing /32 advertisements, and I am not leaning towards one or the other.
I am not yet convinced that we need to jump straight into auto-aggregation; we can look at it in the future. At this stage, I think we need something simpler and deterministic.
Ok, I will go with this approach:
Add route aggregation feature in BGP Control Plane for service advertisements (RFC 4632)
New route aggregation option under the CiliumBGPAdvertisement.
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPAdvertisement
metadata:
  name: bgp-advertisements
  labels:
    advertise: bgp
spec:
  advertisements:
    - advertisementType: "Service"
      service:
        aggregate: true   # <<< new field
Fixes: #36777
Signed-off-by: Roman Ptitcyn <romanspb@yahoo.com>