v1.16 Backports 2025-01-07 #36872
Conversation
Not seeing the patch in this PR though ;)
My commits look good, thanks!
@julianwiedmann sorry my bad.
Force-pushed from fd6cd12 to e6cef56
lgtm, thank you!
/test
[ upstream commit 3a73f24 ]

We have been recently witnessing a few conformance ipsec runs reporting leaked packets. In order to simplify troubleshooting these issues, and figure out whether they are legitimate or flakes, let's additionally print whether the detected packet was encapsulated or not.

Signed-off-by: Marco Iorio <marco.iorio@isovalent.com>
Signed-off-by: viktor-kurchenko <viktor.kurchenko@isovalent.com>
[ upstream commit 40a4df7 ]

We have been recently witnessing a few conformance ipsec runs reporting leaked packets, with some referring to DNS answers from a coredns pod to a CiliumInternalIP. To simplify troubleshooting these issues, and figure out whether they are legitimate or flakes, let's additionally print information about the DNS message itself, so that we can trace down which component performed the request. The output is along the lines of:

[10:27:49:245997] 10.244.1.67:49662 -> 10.244.0.10:53 (proto: 17, encap: 1, ifindex: 43, netns: f0000000)
[10:27:49:246003] Detected DNS message, ID: 17ef, Flags 120, QD: 1, AN: 0, NS: 0, AR: 1, query googlecom
[10:27:49:246315] 10.244.0.10:53 -> 10.244.1.67:49662 (proto: 17, encap: 1, ifindex: 45, netns: f0000000)
[10:27:49:246317] Detected DNS message, ID: 17ef, Flags 8580, QD: 1, AN: 1, NS: 0, AR: 1, query googlecom

Signed-off-by: Marco Iorio <marco.iorio@isovalent.com>
Signed-off-by: viktor-kurchenko <viktor.kurchenko@isovalent.com>
[ upstream commit 483b009 ]

Normally, the script only flags traffic whose source and destination IP addresses belong to the PodCIDR and, when encapsulation is enabled, don't match the CiliumInternalIPs specified as parameters. However, this filter is overridden when the traffic comes from a proxy, so that it gets flagged even in case it is subsequently masqueraded. Let's additionally output whether displayed traffic got actually flagged due to this reason, to simplify troubleshooting possible flakes.

Signed-off-by: Marco Iorio <marco.iorio@isovalent.com>
Signed-off-by: viktor-kurchenko <viktor.kurchenko@isovalent.com>
[ upstream commit b780df6 ]

Let's additionally output the TCP flags in case of leaked traffic, as potentially useful while troubleshooting possible flakes.

Signed-off-by: Marco Iorio <marco.iorio@isovalent.com>
Signed-off-by: viktor-kurchenko <viktor.kurchenko@isovalent.com>
[ upstream commit 055b7a3 ]

In each pod iteration of the functions processConfigWithSinglePort and processConfigWithNamedPorts, bes4 and bes6 need to be cleared. Otherwise, when there is more than one pod (i.e. more than one iteration), bes4 and bes6 will aggregate all of the backends. For example, in the first iteration backend 10.0.2.250:80 is added, then in the second iteration [10.0.2.250:80, 10.0.2.199:80] are added:

10.108.13.48:80 LocalRedirect
  1 => 10.0.2.199:80
  2 => 10.0.2.250:80
  3 => 10.0.2.250:80

Fixes: e7bb8a7 ("k8s/cilium Event handlers and processing logic for LRPs")

Signed-off-by: Zijian Zhang <zijianzhang@bytedance.com>
Signed-off-by: viktor-kurchenko <viktor.kurchenko@isovalent.com>
[ upstream commit a3489f1 ]

Without CORS headers, browsers will prevent calling the hubble ui backend api on another domain.

Signed-off-by: Dmitry Kharitonov <dmitry@isovalent.com>
Signed-off-by: viktor-kurchenko <viktor.kurchenko@isovalent.com>
…parated string

[ upstream commit 5c08f95 ]

Signed-off-by: John Roche <john.roche@swyftx.com.au>
Signed-off-by: viktor-kurchenko <viktor.kurchenko@isovalent.com>
[ upstream commit 550d2f5 ]

Signed-off-by: Alexis La Goutte <alexis.lagoutte@gmail.com>
Signed-off-by: viktor-kurchenko <viktor.kurchenko@isovalent.com>
[ upstream commit cdecbcb ]

GNU make on the host may use --jobserver-style=fifo (default on my machine). It also implies --jobserver-auth=fifo:/tmp/GMfifo$MAKE_PID, an undocumented flag, used internally by make and passed to the child instances of make. This flag appears in $(MAKEFLAGS). The cilium-build target in Documentation/Makefile passes MAKEFLAGS to another make instance, called in a docker image. The problem is that --jobserver-auth passed to make inside docker points to a file that doesn't exist in the container filesystem namespace, and make fails with an error like this:

make: *** internal error: invalid --jobserver-auth string 'fifo:/tmp/GMfifo361142'.  Stop.
make: *** [Makefile:48: cilium-build] Error 2
make: Leaving directory '/home/max/.opt/go/src/github.com/cilium/cilium-snat/Documentation'

Fix this by filtering out --jobserver-auth=... from MAKEFLAGS when passing it to make inside docker.

Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
Signed-off-by: viktor-kurchenko <viktor.kurchenko@isovalent.com>
Force-pushed from e6cef56 to 9365b3a
/test
Looks like connectivity tests consistently fail in the E2E upgrade workflow after downgrade for the kernel:
/test |
Sorry, missed this :/. Nothing obvious - let's drop that backport to unblock, and I'll try it manually. (already discussed with @joamaki)
Force-pushed from 9365b3a to ccc9759
/test
fyi #36988 looks good now. Think it's the missing
Skipped:
Once this PR is merged, a GitHub action will update the labels of these PRs: