Implement controllerRef #24946
Yes - we decided that we won't even try to get it done for v1.3, and I moved on to fixing other blockers. Sorry for not communicating it more widely. It's the first thing on my post-v1.3 list. |
Any updates on this? |
@caesarxuchao can give you a current update. AFAIK it's implemented for ReplicaSet and ReplicationController. I'm not sure about the other ones. |
So controllerRef is the same as an owner reference, right? |
Yes and no - controllerRef is implemented as a field in the OwnerReference object (the controller has to be an owner), and the adoption code makes sure that there's only one owner with the controller flag set. |
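To make that concrete, here is a rough sketch of what a controller reference looks like on a managed object. The pod and ReplicaSet names, UID, and apiVersion below are made up, and the exact serialization can vary by release; this is only an illustration, not output from this issue:

```sh
# Inspect the owner references on a pod managed by a ReplicaSet (names are hypothetical).
$ kubectl get pod my-app-1234 -o yaml
...
  ownerReferences:
  - apiVersion: extensions/v1beta1
    kind: ReplicaSet
    name: my-app
    uid: d9607e19-f88f-11e6-a518-42010a800195
    controller: true
...
```

Per the comment above, at most one entry in ownerReferences may carry `controller: true`; the adoption code enforces that invariant.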
That's right, just those two. There was an issue regarding setting the controllerRef for Deployments, and @krmayankk was interested in doing it. |
@janetkuo Could you assign this to me? I'm planning to pick up the remaining work on this (targeting 1.6):
- Make sure all workload controllers (RC, RS, SS, DS, Deployment) populate controllerRef according to the proposal <https://github.com/kubernetes/community/blob/master/contributors/design-proposals/controller-ref.md>. As mentioned above, some have already been done.
- Update workload controllers to actually use that value (in addition to the label selector) to filter owned objects. I did a manual survey at HEAD <https://github.com/kubernetes/kubernetes/tree/25e836010f15b813caacf9a24850a036e15ed739> and all workload controllers currently fail to come up fully when creating two with identical selectors within one second of each other.
|
Deployments, RC, and RS already use owner references. Overlapping Deployments are handled gracefully by the controller manager; you should see the first one working as expected.
|
Job is another one that is missing IIRC. |
Jobs, DaemonSets, and StatefulSets all still need it. |
CronJobs too |
I think this only works if the Deployments have different timestamps, which have one-second precision. In my tests, I had both Deployments in a single YAML file, so they got created at the same time. The controllerRef proposal aims to make even this case work, which is why I tested it as a baseline to see how much of the proposal has been realized. I hadn't thought about *Jobs since they don't suffer from fighting thanks to selector generation, but you're right that controllerRef should be implemented consistently across everything anyway. So *Job controllers are also within the scope of this tracking issue. |
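As a rough illustration of the scenario being tested here (all names, labels, and the apiVersion are hypothetical, and in extensions/v1beta1 the selector defaults to the pod template labels, so both Deployments end up selecting the same pods):

```sh
# Create two Deployments with identical selectors in a single request,
# so both objects get the same one-second-precision creationTimestamp.
$ kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: overlap-a
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: overlap
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: overlap-b
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: overlap
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
```

Without controllerRef, both Deployment controllers match the same pods by label selector and fight over them; the intent of the proposal is to pin each pod to the single controller that created or adopted it, so this case degrades gracefully.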
The reason we implemented selector resolution was that at that time server-side GC wasn't available (so owner references weren't used) and we needed to isolate any offenders. Also, owner references are not deterministic when applied in an existing cluster for the first time, because you cannot be sure which Deployment is going to adopt the existing set of selected ReplicaSets and which one will end up creating its own. The case you described (creation of two Deployments at the same time) is valid, but it's still not always correct even in the case of owner references (which Deployment is going to adopt any existing ReplicaSets that are selectable by both?). We probably want selector generation. Eventually, I would love to get rid of our selector resolution code once owner references have soaked in.
We want owner references in all controllers for server-side GC.
|
@Kargakis I see what you mean. I confirmed with @lavalamp that the controllerRef proposal is not intended to solve all of these cases (in particular, orphan/adopt with overlapping selectors + non-fungible objects). I will send a PR to update the proposal to explicitly call out some of these non-goals. Thanks. |
Sure, but if you're in this situation when you do the upgrade that turns on the controller references, it means that you currently have controllers actively fighting over pods. Right now, this situation causes controllers to generate enough traffic to destabilize the cluster. So I don't think people will actually get their clusters into this situation and leave them there. It's easy enough to fix manually if pods end up owned by the wrong thing. And as @enisoc says, the primary purpose of this is to keep mistakes with labels from being cluster-destabilizing. And actually I do think that if a user is planning to do an orphan/adoption event, the user really needs to first do a label query and see if their planned adoption is going to find more than it should. They should do a reverse query (what controllers are looking for these labels?), too, but we don't support that yet. So at the moment, a user wanting to do a controlled adoption can just make a new label to make sure nothing else is using it. |
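A minimal sketch of that pre-adoption check; the label keys and values here are hypothetical, not taken from this issue:

```sh
# Forward query: see everything the planned selector would actually match.
$ kubectl get pods -l environment=prod --show-labels

# There is no supported reverse query ("which controllers select these labels?"),
# so if the forward query matches more than intended, apply a fresh label that
# nothing else selects and point the new controller's selector at that instead.
$ kubectl label pods -l environment=prod,track=canary adopted-by=my-new-controller
$ kubectl get pods -l adopted-by=my-new-controller
```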
I've sent a pull request to update the ControllerRef proposal: kubernetes/community#298 |
ControllerRef is now implemented across all workload controllers: |
Automatic merge from submit-queue

Fix kubectl describe for pods with controllerRef

**What this PR does / why we need it**: kubectl describe doesn't take controllerRef into consideration, resulting in confusing output. For example, if we have two ReplicaSets with the same selector, one with 1 replica and the other with 2 replicas, then both ReplicaSets will show 3 running pods.

```sh
$ kubectl describe rs replicaset-2
Name:           replicaset-2
Namespace:      default
Selector:       environment=prod
Labels:         environment=prod
Annotations:    <none>
Replicas:       2 current / 2 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       environment=prod
  Containers:
   created-from-replicaset:
    Image:              nginx
    Port:
    Environment:        <none>
    Mounts:             <none>
  Volumes:              <none>
Events:
  FirstSeen  LastSeen  Count  From                   SubObjectPath  Type    Reason            Message
  ---------  --------  -----  ----                   -------------  ------  ------            -------
  5m         5m        1      replicaset-controller                 Normal  SuccessfulCreate  Created pod: replicaset-2-39szb
  5m         5m        1      replicaset-controller                 Normal  SuccessfulCreate  Created pod: replicaset-2-470jr
```

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

xref #24946

**Special notes for your reviewer**:

**Release note**:

```release-note
Fix kubectl describe for pods with controllerRef
```
Originally discussed in #14961, particularly #14961 (comment), but it seems like it deserves a standalone issue.
More details to follow.
cc/ @bgrant0607 @wojtek-t @mml @jiangyaoguo @gmarek