
Allow secure access to apiserver from Admission Controllers #31491

Merged Sep 22, 2016 (2 commits)

Conversation

@dims (Member) commented Aug 26, 2016

  • Allow options.InsecurePort to be set to 0 to switch off insecure access
  • In NewSelfClient, set the TLSClientConfig to the cert and key files
    if InsecurePort is switched off
  • Mint a bearer token that allows the client(s) created in NewSelfClient
    to talk to the apiserver
  • Add a new authenticator that checks for this specific bearer token

Fixes #13598


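The minting and checking steps described in the PR summary could be sketched roughly as follows (a minimal illustration, not the actual Kubernetes code; the names mintLoopbackToken and loopbackAuthenticator are hypothetical):

```go
package main

import (
	"crypto/rand"
	"crypto/subtle"
	"encoding/hex"
	"fmt"
)

// mintLoopbackToken mints a random bearer token for the in-process client.
func mintLoopbackToken() (string, error) {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

// loopbackAuthenticator accepts exactly one token: the minted loopback token.
type loopbackAuthenticator struct {
	token string
}

// AuthenticateToken compares in constant time to avoid leaking the token.
func (a *loopbackAuthenticator) AuthenticateToken(t string) bool {
	return subtle.ConstantTimeCompare([]byte(a.token), []byte(t)) == 1
}

func main() {
	tok, err := mintLoopbackToken()
	if err != nil {
		panic(err)
	}
	auth := &loopbackAuthenticator{token: tok}
	fmt.Println(auth.AuthenticateToken(tok), auth.AuthenticateToken("bogus")) // true false
}
```

The token never leaves the process, so a plain equality check against a single in-memory string is all the authenticator needs.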

@dims (Member Author) commented Aug 26, 2016

Don't know if this is an acceptable workaround for now, hence the WIP. Tested by modifying hack/local-up-cluster.sh like so: http://paste.openstack.org/show/563759/

@k8s-github-robot k8s-github-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. release-note-label-needed labels Aug 26, 2016
@dims dims force-pushed the fixes-issue-13598 branch 2 times, most recently from 4e3531f to 021a9df Compare August 26, 2016 12:23
@deads2k (Contributor) commented Aug 26, 2016

@kubernetes/sig-auth This is an interesting idea. A few things that leap out at me:

  1. The token isn't valid across the cluster, only local to this API server. That's a little strange, but given the usage I don't know that it bothers me.
  2. Rotation would require bouncing the API server. Again, a little strange, but it doesn't bother me.
  3. It needs some work to make sure that an authorizer could recognize the loopback token (user.Info.Extra will probably need a marker), and it needs an authorizer paired to it.

@dims dims force-pushed the fixes-issue-13598 branch from 021a9df to f1cb1e9 Compare August 26, 2016 19:31
@k8s-github-robot k8s-github-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Aug 26, 2016
clientConfig.BearerToken = uuid.NewRandom().String()
clientConfig.TLSClientConfig = restclient.TLSClientConfig{
    CertFile: s.TLSCertFile,
    KeyFile:  s.TLSPrivateKeyFile,
Contributor:
Private key? Isn't this just authenticating using the token?

Member Author:
Since we switch off the insecure port, we just have the 6443 HTTPS port, which is secure. To talk to this port you need the TLS cert+key.

@liggitt (Member):
No, you need the CA bundle of the server's cert, not a client cert.

Member Author:
@liggitt So TLSClientConfig should have just one field, which is a CAFile? Where do I get that file path from? (Is it a field on ServerRunOptions?)

@liggitt (Member) commented Aug 26, 2016:
Anything that has a kubeconfig (kubelet, kube proxy, controller manager, etc) has it, but I don't think the server knows its own CA bundle currently

Member Author:
Just realized I don't need to set anything, since I am setting the Insecure flag. Duh! Thanks @liggitt and @ericchiang.

-if options.InsecurePort < 1 || options.InsecurePort > 65535 {
-    glog.Fatalf("--insecure-port %v must be between 1 and 65535, inclusive.", options.InsecurePort)
+if options.InsecurePort < 0 || options.InsecurePort > 65535 {
+    glog.Fatalf("--insecure-port %v must be between 0 and 65535, inclusive. 0 for turning off insecure port.", options.InsecurePort)
 }
Member:
Probably need to fail if both ports are turned off?
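The validation being discussed could look something like this (a sketch only; the real check lives in the apiserver options code, and validatePorts is a hypothetical helper name):

```go
package main

import (
	"errors"
	"fmt"
)

// validatePorts mirrors the reviewed check: 0 now means "insecure port off",
// but at least one of the two ports must remain open.
func validatePorts(insecurePort, securePort int) error {
	if insecurePort < 0 || insecurePort > 65535 {
		return fmt.Errorf("--insecure-port %v must be between 0 and 65535, inclusive; 0 turns the insecure port off", insecurePort)
	}
	if insecurePort == 0 && securePort == 0 {
		return errors.New("both the secure and insecure ports cannot be turned off")
	}
	return nil
}

func main() {
	fmt.Println(validatePorts(8080, 0)) // <nil>
	fmt.Println(validatePorts(0, 6443)) // <nil>
	fmt.Println(validatePorts(0, 0))    // non-nil: both ports off
}
```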

@dims dims force-pushed the fixes-issue-13598 branch 5 times, most recently from 0726140 to 3878f70 Compare August 28, 2016 20:46
@dims (Member Author) commented Aug 30, 2016

@kubernetes/sig-auth @deads2k @lavalamp @smarterclayton - If the approach sounds good, I'll add tests and remove the WIP. Please let me know.

@deads2k (Contributor) commented Aug 30, 2016

> @kubernetes/sig-auth @deads2k @lavalamp @smarterclayton - If the approach sounds good, i'll add tests and remove WIP. Please let me know

The idea of a random token only good for a loopback connection to the API server seems reasonable to me. I'm not sure the particulars work for me though. I'm trying to think through how I would expect this to be combined with authorization and I'm not sure that I want this tied to a magic subject name.

There are alternatives that would avoid any interaction with an existing authorizer. Say the username was random (same as the token maybe or prefixed?) and you specified the user.Info.Extra field to have a marker for "my magic authenticator". A separate authorizer could grant access based on that extra field and we'd never have a collision with any other user.

Alternatively, we could lay claim to certain group names. OpenShift did this with system:<name here> patterns on certain groups and we actively manage the permissions associated with those groups. That kind of plan has the advantage of being distributable as an impersonation target.

I don't like the apiserver magic username, I'm ok with the idea of using Extra and another authorizer, but I like the idea of a magic group name best. I'd like to see that settled before we try to move this forward. Comments?
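The "magic group name" idea could be sketched like this (a toy illustration of the proposal, not real Kubernetes code; the group name system:apiserver-loopback is hypothetical, as the actual reserved name would be settled later):

```go
package main

import "fmt"

// userInfo is a stripped-down stand-in for Kubernetes' user.Info.
type userInfo struct {
	Name   string
	Groups []string
}

// loopbackGroup is a hypothetical reserved system: group, in the spirit
// of the OpenShift pattern described above.
const loopbackGroup = "system:apiserver-loopback"

// authorize grants access to members of the reserved group; everyone else
// would fall through to the regular authorizer chain (denied here for brevity).
func authorize(u userInfo) bool {
	for _, g := range u.Groups {
		if g == loopbackGroup {
			return true
		}
	}
	return false
}

func main() {
	self := userInfo{Name: "apiserver", Groups: []string{loopbackGroup}}
	node := userInfo{Name: "node-1", Groups: []string{"system:nodes"}}
	fmt.Println(authorize(self), authorize(node)) // true false
}
```

Because the decision keys off the group rather than a magic username, the permission set can also be granted to an impersonation target without colliding with any real user.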

@smarterclayton (Contributor):
I really do prefer the reserved group name pattern. I think it addresses a
very common problem and is the common solution to that problem.


@dims dims force-pushed the fixes-issue-13598 branch from ebe9831 to 25d4a70 Compare September 20, 2016 14:42
@@ -213,12 +215,14 @@ func (s *ServerRunOptions) NewSelfClient(token string) (clientset.Interface, err
     QPS:   50,
     Burst: 100,
 }
-if s.SecurePort > 0 {
+if s.SecurePort > 0 && len(s.TLSCAFile) > 0 {
Member Author:

@deads2k In theory we could have used s.ClientCAFile, but I think we should allow folks to separate the CA authority for the TLS cert from the one used to validate kubectl clients. So I added another flag for this purpose. Please let me know if this is OK.

Member Author:

Tested by generating certs as described in https://coreos.com/kubernetes/docs/latest/openssl.html and patching hack/local-up-cluster.sh as follows:

diff --git a/hack/local-up-cluster.sh b/hack/local-up-cluster.sh
index e580a3d..398deff 100755
--- a/hack/local-up-cluster.sh
+++ b/hack/local-up-cluster.sh
@@ -310,12 +310,16 @@ function start_apiserver {
       --service-cluster-ip-range="10.0.0.0/24" \
       --cloud-provider="${CLOUD_PROVIDER}" \
       --cloud-config="${CLOUD_CONFIG}" \
+      --tls-cert-file=/home/dims/k8s/apiserver.pem \
+      --tls-private-key-file=/home/dims/k8s/apiserver-key.pem \
+      --tls-ca-file=/home/dims/k8s/ca.pem \
+      --basic-auth-file=/home/dims/k8s/basicauth.csv \
       --cors-allowed-origins="${API_CORS_ALLOWED_ORIGINS}" >"${APISERVER_LOG}" 2>&1 &
     APISERVER_PID=$!

     # Wait for kube-apiserver to come up before launching the rest of the components.
     echo "Waiting for apiserver to come up"
-    kube::util::wait_for_url "http://${API_HOST}:${API_PORT}/api/v1/pods" "apiserver: " 1 ${WAIT_FOR_URL_API_SERVER} || exit 1
+    #kube::util::wait_for_url "http://${API_HOST}:${API_PORT}/api/v1/pods" "apiserver: " 1 ${WAIT_FOR_URL_API_SERVER} || exit 1
 }

 function start_controller_manager {

Contributor:

> @deads2k In theory we could have used s.ClientCAFile, but i think we should allow folks to separate the CA authority for the TLS cert and the one for to validating kubectl clients. So added another flag for this purpose. Please let me know if this is ok.

Yes, a new flag is appropriate.

@dims dims force-pushed the fixes-issue-13598 branch from 6190aef to 364dd1a Compare September 20, 2016 23:01
@dims (Member Author) commented Sep 20, 2016

@k8s-bot kubemark test this

@lavalamp (Member):

drive-by comment: @deads2k says "the token isn't valid across the cluster, only local to this API server. That's a little strange, but given usage I don't know that it bothers me." It actually is a problem if the client goes through a load balancer or something instead of directly to what the apiserver is listening on.

@dims (Member Author) commented Sep 21, 2016

@lavalamp We construct a special in-process client to talk to the ip/port that the same process is listening on, so it does not access anything outside the same process. Hope this helps.
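A hedged sketch of how such an in-process client might pick its target URL (the types and field names below are simplified stand-ins, not the actual ServerRunOptions): prefer the secure port when a CA file is available to verify it, fall back to the insecure port, and fail if neither is usable.

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"strconv"
)

// selfClientOptions is a simplified stand-in for the server's run options.
type selfClientOptions struct {
	SecurePort, InsecurePort int
	BindAddress              string
	TLSCAFile                string
}

// selfClientHost chooses the URL the in-process client will dial.
func selfClientHost(o selfClientOptions) (string, error) {
	switch {
	case o.SecurePort > 0 && len(o.TLSCAFile) > 0:
		return "https://" + net.JoinHostPort(o.BindAddress, strconv.Itoa(o.SecurePort)), nil
	case o.InsecurePort > 0:
		return "http://" + net.JoinHostPort(o.BindAddress, strconv.Itoa(o.InsecurePort)), nil
	default:
		return "", errors.New("unable to set url for apiserver local client")
	}
}

func main() {
	u, _ := selfClientHost(selfClientOptions{InsecurePort: 8080, BindAddress: "127.0.0.1"})
	fmt.Println(u) // http://127.0.0.1:8080
}
```

Since the address comes straight from the server's own bind options, the client never traverses a load balancer, which is what makes the single-process loopback token workable.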

    clientConfig.Host = net.JoinHostPort(s.InsecureBindAddress.String(), strconv.Itoa(s.InsecurePort))
} else {
    return nil, errors.New("Unable to set url for apiserver local client")
Contributor:

Very nice. Exactly what I was thinking.

@deads2k (Contributor) commented Sep 21, 2016

@dims This is a great start, but I think there's more work to do. I'd like to see follow-up issues and pulls to:

  1. Add an EnableLoopbackToken option in the generic API server to dry out the paired code. @sttts @lavalamp since we'll still be building a privileged loopback client config, I think this makes sense as an option.
  2. Extend local-up-cluster.sh to be able to start with the secured port available using this code, plus an option to close the insecure port (yeah, that will be a disaster for now, but we'll need it to continue down this path).
  3. Docs explaining the new flag/behavior. I think you should put this off until the end of 1.5 and we'll see how much we're able to secure overall.

This much is secure and lgtm. Merging what we have now.

@deads2k deads2k added lgtm "Looks good to me", indicates that a PR is ready to be merged. release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed release-note-label-needed labels Sep 21, 2016
@deads2k deads2k added this to the v1.5 milestone Sep 21, 2016
@deads2k deads2k added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Sep 21, 2016
@deads2k (Contributor) commented Sep 21, 2016

This pull is needed to set up secure bootstrapping for RBAC roles. Bumping priority since work is waiting on it.

@dims (Member Author) commented Sep 21, 2016

@deads2k Happy to pitch in with all that stuff over the 1.5 time period. Thanks.

@dims (Member Author) commented Sep 21, 2016

@k8s-bot gce e2e test this

@k8s-ci-robot (Contributor):

Jenkins GCE e2e failed for commit 364dd1a.

The magic incantation to run this job again is @k8s-bot gce e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@dims (Member Author) commented Sep 21, 2016

@k8s-bot gce e2e test this

@k8s-github-robot:

@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]

@k8s-github-robot:

Automatic merge from submit-queue

@deads2k (Contributor) commented Sep 23, 2016

Opened #33375 to track the local-up-cluster changes since it needs to take into account using that secure connection from the controller-manager.

@caesarxuchao (Member):

> Allow options.InsecurePort to be set to 0 to switch off insecure access

@dims Is this implemented? Can you also update the option description here? Thanks.
