LOAS Daemon #2209
Comments
This has a lot of attack surface reduction advantages.
I'm not a huge fan of doing this based on IP address. I'd love to find a better way to direct traffic here. We should weigh a "token vending machine" that mints short-term tokens against an "auth stamping proxy" that will transparently apply auth. I'm a big fan of the former, and it is what we did with GCE: if creds do leak, they are short-term. Another option is to support generic secret distribution with support for rotation. This is obviously related to #2030.
@thockin had an idea on how to move from IPs to something with DNS. Maybe he can remind me what he said. Token vending machine seems good for GCE, when the whole ecosystem uses tokens. Not sure how well it works for a broader ecosystem of services, not all of which use tokens (for example, could the vending machine work with Amazon's
Yeah -- AWS has a way to mint short-lived tokens -- the problem is that each cloud/system will be slightly different. See: http://aws.amazon.com/code/7351543942956566
Not only that, but the tokens will only work for tightly coupled systems in the cloud. So NFS servers, Git repositories behind SSH, Docker registries behind anything, etc. |
@smarterclayton But the proposal is specific to auth for HTTP requests. I don't initially see how this can expand to Kerberos for NFS or SSH public keys for Git access....
My idea was to offer a new type of Service "every node" - rather than
On Fri, Nov 7, 2014 at 5:21 AM, Eric Paris notifications@github.com wrote:
Related: |
I consider the original request resolved by github.com/istio/istio. Thank you to all the Istio project members for an amazing project that works great with Kubernetes, and which closes this 2.5-year-old Kubernetes FR!
LOAS Daemon Proposal
A LOAS (Local Opaque Auth Service) daemon runs on each node.
The loasd stores credentials that are needed by pods on a machine, but it does not let the pods see the credentials they are using. This is the "Opaque" part. It is useful in several ways:
There are a few assumptions:
Examples of services that might use this proxy (to be verified):
Implementation sketch
How credentials get into the LOASD's memory is TBD. Probably it securely communicates with the APIserver, and perhaps also with a separate keystore. However this works, it is a detail hidden from the container API.
"Normal" pod traffic does not use the proxy. Only traffic that goes to certain IP addresses uses the proxy. iptables rules, which are set up for each pod, cause those packets to go to LOASD. Pods learn what destination IP address they should use for a given proxied service from an env var, e.g.
LOASD_PROXY_IP_FOR_K8S_API=xxx.xxx.xxx.xxx
or
LOASD_PROXY_IP_FOR_AMAZON_S3=yyy.yyy.yyy.yyy
These IP addresses are allocated from a special address range. There will be considerable overlap between this code and the K8s Services and Portal code.
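As a sketch of the pod-side view under this scheme (the proxy IP 10.254.0.1 and the URL path below are hypothetical placeholders, not values from the proposal):

```python
import os

# Hypothetical value: in a real pod the kubelet would inject this env var
# with an address drawn from the special proxy range.
os.environ["LOASD_PROXY_IP_FOR_K8S_API"] = "10.254.0.1"

def apiserver_url(path: str) -> str:
    """Build an apiserver URL that routes through the loasd proxy IP.

    The pod sends plain, unauthenticated HTTP to this address; loasd
    rewrites the request in transit, so the pod never sees credentials.
    """
    return f"http://{os.environ['LOASD_PROXY_IP_FOR_K8S_API']}{path}"

print(apiserver_url("/api/v1beta1/pods"))
```

The point of the sketch is that the pod's client code contains no credential handling at all; the only configuration it consumes is the env var.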
The LOASD runs an HTTP server. It identifies which pod is talking to it based on the source IP of the request, and it identifies the ultimate destination of the traffic based on the dest IP of the request. The dest IP is not the ultimate destination; it is one of the special IP addresses mentioned just above. The LOASD checks whether it has a "recipe" installed for that (sourceIP, destIP) pair. If it does, it uses that recipe to rewrite the HTTP request to add authentication to it. Typically, this might mean injecting an Authorization header, replacing a dummy value in an existing Authorization header, or, in the case of Amazon, computing a signature and adding that to the request.
Scope of use
We might be able to make something generic enough that other cluster infrastructure people might want to use it and people would contribute "recipes" for other end points.
However, even if we only ever used this for authenticating Pod to APIserver traffic, I think it still has value.
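The (sourceIP, destIP) recipe lookup described above can be sketched minimally as follows. The IPs, the token value, and the recipe table are hypothetical placeholders; a real loasd would populate the table securely from the apiserver or a keystore.

```python
# Hypothetical recipe table keyed by (source IP, special dest IP). Each
# recipe rewrites a request's headers to carry authentication.
RECIPES = {
    ("10.244.1.5", "10.254.0.1"): lambda headers: {
        **headers,
        "Authorization": "Bearer <short-lived-token>",  # placeholder credential
    },
}

def apply_recipe(src_ip: str, dest_ip: str, headers: dict) -> dict:
    """Find the recipe for this (sourceIP, destIP) pair and rewrite headers.

    In the real daemon a missing recipe would mean refusing (or passing
    through) the request; here we simply raise.
    """
    recipe = RECIPES.get((src_ip, dest_ip))
    if recipe is None:
        raise KeyError(f"no recipe for ({src_ip}, {dest_ip})")
    return recipe(headers)

rewritten = apply_recipe("10.244.1.5", "10.254.0.1", {"Accept": "application/json"})
print(rewritten["Authorization"])
```

Keeping the credential inside the recipe closure mirrors the "Opaque" property: the pod-supplied headers go in, authenticated headers come out, and the secret itself never crosses the pod boundary.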