Kube-proxy hides real IP #10921
This is effectively a dup of #3760, I'm going to close this in favor of that. You are correct, though, for what it's worth :) |
Ok, good to know. #3760 looks good. |
@erimatnor I just got the same problem, did you find a workaround for this? thanks. |
Currently there is no fix for the client IP being changed when a service is accessed via a NodePort or LoadBalancer. |
I'm going to second this request to find a fix for this issue. From my understanding NodePort and LoadBalancer both suppress the incoming source IP, and the Ingress approach is HTTP(S) only, so there is no viable fix for any generic TCP service. |
It's on the agenda, and very high on the agenda now. The fix is not easy and it is not going to make the 1.3 release train.
|
No worries. Shout if there's a way I can help. |
Doesn't the ingress module solve this issue? |
Ingress solves it for HTTP only.
|
This is a huge no-go for any apps that require some geodata about their users, and I don't think @thockin is the only one who has to worry about this. @aronchick this is something we have been asking for for a year now, and I'm astonished that this hasn't been high on the priority list for the project. The OpenShift guys fixed this, right @smarterclayton? For some time now, I was under the impression you'd be contributing your solution upstream. |
It's in progress.
|
I don't think a design proposal has been posted yet.
|
@pires I know - we've wanted to get to this for a long time. The issue is that because there was a very clear solution (skipping the k8s load balancer and using your own that had XFF), and there were a bunch of other features that had no workaround, we couldn't get to this in time. Hopefully it will land for 1.4! |
@aronchick yes, there are solutions out there, but I'm glad you guys are looking into it as well. As a user, seeing how powerful Kubernetes is at scaling my app but not at serving it seems like a huge paradox. Can't wait for 1.4 👍 |
This is a great question, and I'd love for this to be implemented soon as well. To describe my use case: my app is not a web app, it's a Minecraft server cluster running on TCP port 25565, so using an HTTP header simply isn't possible. Google Load Balancer, when used with Compute Engine, stands out for this reason (it shows the real user IP regardless of protocol). I hope Kubernetes and Google Container Engine will have this in the near future as well. |
1.4 is out; is there any update on which version the fix will be included in? |
This is beta in 1.5. See https://kubernetes.io/docs/user-guide/load-balancer/ and look for "preservation of Source IP".
|
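For context, a minimal sketch of how the beta feature was enabled at the time, assuming the 1.5-era beta annotation; the service name, labels, and ports here are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: frontend  # hypothetical name
  annotations:
    # beta annotation that preceded the externalTrafficPolicy field
    service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080

With OnlyLocal, kube-proxy only forwards external traffic to endpoints on the node that received it, so no SNAT is applied and the client source IP is preserved.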
How about this feature in the current 1.6? |
Still beta in 1.6
|
Hello everyone. For the last few years I have been working on the following project: https://www.klzii.chat. Right now, during the post-MVP stage, we are facing an issue with getting real users' IPs, which I'm trying to solve. Maybe someone can give me some advice? We are using Google Cloud and Kubernetes+Docker. |
See https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeclusterip and search for "OnlyLocal".
|
@thockin Hey, thanks for the reply, it was very helpful. We managed to solve this issue by updating our Kubernetes nodes to the latest version (1.6). This resolved the problem with getting real users' IPs. Best regards, |
@thockin I am facing a similar issue. I tried the above solution but was not able to get it to work. It would be great to have your input on this, please; we are currently blocked on it. We are running nginx as a NodePort service, listening on ports 80 and 443. There is no external LB involved; the nginx service is the external-facing entity. In this case, the source IP gets NATed as discussed here and the 'X-Forwarded-For' HTTP header gives a local pod IP. This is on K8s 1.5. The snippet of the service yaml file I was using looked something like the one below:
Next, I upgraded to K8s 1.8 and tried setting externalTrafficPolicy to 'Local', as discussed in this and other threads. The service yaml file now looks something like the one below.
However, after setting externalTrafficPolicy to 'Local', I find that the nginx service is no longer exposed on the node IP to receive external traffic. The service can be reached only from within the node/cluster. Is this expected, or am I doing something wrong here? Note that we are not using any external LB. The end goal I am pursuing is to obtain the source IP of the requesting client. EDIT: Please note I am running K8s on bare metal without any external LB. |
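An illustrative sketch (not the commenter's original manifest, which was not captured above) of a NodePort service for nginx with externalTrafficPolicy set to Local, using the field that replaced the beta annotation from 1.7 onward; the name and labels are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: nginx  # hypothetical name
spec:
  type: NodePort
  externalTrafficPolicy: Local  # do not SNAT; keep the client source IP
  selector:
    app: nginx  # assumes the nginx pods carry this label
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443

Note that with Local, a node that has no ready endpoint for the service drops external traffic rather than forwarding it to another node, so the service is only reachable through the IPs of nodes where an nginx pod is actually scheduled; that matches the behavior described in the comment above.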
I am blocked on this. Any inputs would be greatly appreciated. |
any update on this issue? |
push |
For anyone who ends up here: this issue is not solved; type: ClusterIP remains incompatible with preserving the source IP address. Ideally ClusterIP should also support externalTrafficPolicy: Local, because LoadBalancer is not available by default. |
Thanks @smekkley
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer |
Hello!! I am new to Kubernetes and was facing the same issue as reported here. I am using "Kubernetes v1.18.3" and weave-net as the pod-network add-on. One of my applications requires the client address from which the request originated for further processing, but RemoteAddr contains an IP address from the range used by kube-proxy. The service is deployed as type NodePort, and I followed the instructions documented here. Even after that, I still don't see the client IP, but the IP has changed from the kube-proxy subnet to the weave-net subnet. It's mentioned in the comments that the externalTrafficPolicy solution works only for GKE and service type: LoadBalancer; is the behavior still the same in the latest version? Could anyone please help me with how we can preserve the source IP? |
@BharathB23 |
Exposing a frontend service (e.g., a request router) via a NodePort or Kubernetes LoadBalancer service hides the real source IP of clients. This is undesirable for logging purposes, client blacklisting, and affinity to lower-tier services that a proxy connects to (usually achieved by hashing the source IP address and port).
Ideally, kube-proxy should support standard headers, such as X-Forwarded-For or X-Real-IP in the case of HTTP, although the problem, I assume, is that kube-proxy does not operate at layer 7 (and this would not cover other protocols).
It seems the best solution would be to avoid terminating TCP altogether, keeping the original source IP intact.
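For the HTTP(S)-only workaround mentioned in the thread, a minimal sketch of an Ingress resource; most layer-7 ingress controllers (the nginx ingress controller, for example) append the client address to the X-Forwarded-For header before proxying to the backend. The host, service name, and port are hypothetical:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend  # hypothetical name
spec:
  rules:
  - host: example.com  # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend  # hypothetical backend service
            port:
              number: 80

This only helps HTTP(S) traffic; for generic TCP services the source IP still has to be preserved at layer 3/4, which is what the rest of this thread is about.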