
Ingress from Host Network is dropped when using Native-Routing #37321

Open
@gcezaralmeida

Description

Is there an existing issue for this?

  • I have searched the existing issues

Version

v1.16.6 or higher, and lower than v1.17.0

What happened?

When using Native Routing, eBPF, and NetworkPolicies, traffic from hostNetwork (e.g. kube-apiserver) to a pod on the Cilium network is dropped when the two are on different nodes. Same-node traffic works fine. Tunnel modes (VXLAN and Geneve) also work fine.

How can we reproduce the issue?

1 - Enable Native Routing and eBPF
2 - Create a regular Pod
3 - Create a Pod with hostNetwork (make sure both pods land on different nodes; on the same node it works)
4 - Create a NetworkPolicy that allows the connection between both pods but denies all others. In the example I created a NetworkPolicy that only allows connections from inside the namespace. (I also tested ingress from other namespaces; I get the same error.)
5 - Try to establish a connection from the hostNetwork Pod

1 - cilium values.yaml

USER-SUPPLIED VALUES:
autoDirectNodeRoutes: true
bandwidthManager:
  bbr: true
  enabled: true
bgpControlPlane:
  enabled: true
bpf:
  datapathMode: veth
  hostLegacyRouting: false
  masquerade: true
cgroup:
  autoMount:
    enabled: false
  hostRoot: /sys/fs/cgroup
cni:
  exclusive: false
devices: bond0
enableIPv4BIGTCP: true
gatewayAPI:
  enabled: true
hubble:
  enabled: true
  relay:
    enabled: true
ipam:
  mode: kubernetes
ipv4NativeRoutingCIDR: 10.42.0.0/16
k8sServiceHost: 10.123.100.5
k8sServicePort: 6443
kubeProxyReplacement: true
nodePort:
  enabled: true
operator:
  replicas: 2
routingMode: native
securityContext:
  capabilities:
    ciliumAgent:
    - CHOWN
    - KILL
    - NET_ADMIN
    - NET_RAW
    - IPC_LOCK
    - SYS_ADMIN
    - SYS_RESOURCE
    - DAC_OVERRIDE
    - FOWNER
    - SETGID
    - SETUID
    cleanCiliumState:
    - NET_ADMIN
    - SYS_ADMIN
    - SYS_RESOURCE
socketLB:
  enabled: true
  hostNamespaceOnly: true

2,3,4 - Pods and NetPol

apiVersion: v1
kind: Pod
metadata:
  name: busybox-hostnetwork
  labels:
    app: cilium-net-test
spec:
  hostNetwork: true
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - cilium-net-test
          topologyKey: "kubernetes.io/hostname"
  containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "while true; do sleep 3600; done"]
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ciliumnetwork
  labels:
    app: cilium-net-test
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - cilium-net-test
          topologyKey: "kubernetes.io/hostname"
  containers:
    - name: nginx
      image: nginx
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress-in-namespace
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}

5 - Try to establish a connection from pod to pod

kubectl exec -it busybox-hostnetwork -- nc -zv $(kubectl get pods nginx-ciliumnetwork -o jsonpath={.status.podIP}) 80

Hubble shows the traffic being dropped:

Jan 29 03:01:56.343 [kubevirt-lab-01]: 10.123.100.2:42501 (host) -> default/nginx-ciliumnetwork:80 (ID:42737) to-network FORWARDED (TCP Flags: SYN)
Jan 29 03:01:56.344 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Jan 29 03:01:56.344 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) Policy denied DROPPED (TCP Flags: SYN)
Jan 29 03:01:57.359 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Jan 29 03:01:57.359 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) Policy denied DROPPED (TCP Flags: SYN)
Jan 29 03:01:58.383 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Jan 29 03:01:58.383 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) Policy denied DROPPED (TCP Flags: SYN)
Jan 29 03:01:59.407 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Jan 29 03:01:59.407 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) Policy denied DROPPED (TCP Flags: SYN)
Jan 29 03:02:00.431 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Jan 29 03:02:00.431 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) Policy denied DROPPED (TCP Flags: SYN)
Jan 29 03:02:01.459 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Jan 29 03:02:01.459 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) Policy denied DROPPED (TCP Flags: SYN)

One thing I noticed is that Hubble identifies the traffic as coming from kube-apiserver. So I added both NetworkPolicies below to try to allow the connection, but without success.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress-kube-apiserver
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              component: kube-apiserver
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress-nodes
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.123.100.0/24

Cilium Version

❯ cilium version
cilium-cli: v0.16.23 compiled with go1.23.4 on darwin/arm64
cilium image (default): v1.16.5
cilium image (stable): v1.16.6
cilium image (running): 1.16.3

❯ kubectl exec -it -n kube-system daemonsets/cilium -- cilium status
Defaulted container "cilium-agent" out of: cilium-agent, config (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.32 (v1.32.0) [linux/amd64]
Kubernetes APIs:         ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideEnvoyConfig", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumEnvoyConfig", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    True   [bond0    10.123.100.4 10.123.100.5 fe80::b696:91ff:fecd:cd0c (Direct Routing)]
Host firewall:           Disabled
SRv6:                    Disabled
CNI Chaining:            none
CNI Config file:         successfully wrote CNI configuration file to /host/etc/cni/net.d/05-cilium.conflist
Cilium:                  Ok   1.16.6 (v1.16.6-9e9f0989)
NodeMonitor:             Listening for events on 112 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok   
IPAM:                    IPv4: 67/254 allocated from 10.42.0.0/24, 
IPv4 BIG TCP:            Enabled   [131072]
IPv6 BIG TCP:            Disabled
BandwidthManager:        EDT with BPF [BBR] [bond0]
Routing:                 Network: Native   Host: BPF
Attach Mode:             TCX
Device Mode:             veth
Masquerading:            BPF   [bond0]   10.42.0.0/16 [IPv4: Enabled, IPv6: Disabled]
Controller Status:       410/410 healthy
Proxy Status:            OK, ip 10.42.0.128, 0 redirects active on ports 10000-20000, Envoy: external
Global Identity Range:   min 256, max 65535
Hubble:                  Ok              Current/Max Flows: 4095/4095 (100.00%), Flows/s: 408.86   Metrics: Disabled
Encryption:              Disabled        
Cluster health:          3/3 reachable   (2025-01-29T02:58:39Z)
Modules Health:          Stopped(0) Degraded(0) OK(248)

Kernel Version

6.12.9-talos

Kubernetes Version

v1.32.0

Regression

No response

Sysdump

No response

Relevant log output

Hubble flows

Jan 29 03:01:56.343 [kubevirt-lab-01]: 10.123.100.2:42501 (host) -> default/nginx-ciliumnetwork:80 (ID:42737) to-network FORWARDED (TCP Flags: SYN)
Jan 29 03:01:56.344 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Jan 29 03:01:56.344 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) Policy denied DROPPED (TCP Flags: SYN)
Jan 29 03:01:57.359 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Jan 29 03:01:57.359 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) Policy denied DROPPED (TCP Flags: SYN)
Jan 29 03:01:58.383 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Jan 29 03:01:58.383 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) Policy denied DROPPED (TCP Flags: SYN)
Jan 29 03:01:59.407 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Jan 29 03:01:59.407 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) Policy denied DROPPED (TCP Flags: SYN)
Jan 29 03:02:00.431 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Jan 29 03:02:00.431 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) Policy denied DROPPED (TCP Flags: SYN)
Jan 29 03:02:01.459 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Jan 29 03:02:01.459 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) Policy denied DROPPED (TCP Flags: SYN)
Jan 29 03:02:03.471 [kubevirt-lab-01]: 10.123.100.2:42501 (host) -> default/nginx-ciliumnetwork:80 (ID:42737) to-network FORWARDED (TCP Flags: SYN)
Jan 29 03:02:03.471 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Jan 29 03:02:03.471 [kubevirt-lab-02]: 10.123.100.2:42501 (kube-apiserver) <> default/nginx-ciliumnetwork:80 (ID:42737) Policy denied DROPPED (TCP Flags: SYN)

Anything else?

No response

Cilium Users Document

  • Are you a user of Cilium? Please add yourself to the Users doc

Code of Conduct

  • I agree to follow this project's Code of Conduct

Labels

  • kind/bug: This is a bug in the Cilium logic.
  • kind/community-report: This was reported by a user in the Cilium community, e.g. via Slack.
  • needs/triage: This issue requires triaging to establish severity and next steps.
  • sig/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.
