Bump statefulset to apps/v1 #53415
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: crimsonfaith91. Assign the PR to them by writing
No associated issue. Update pull-request body to add a reference to an issue, or get approval with
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS files:
You can indicate your approval by writing
Force-pushed from 8fcba9a to bb61b1f
/cc @janetkuo @kow3ns I am still investigating the 2 test failures: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/53415/pull-kubernetes-unit/56608/

For the first failure, it fails on this line: https://github.com/kubernetes/kubernetes/blob/master/hack/make-rules/test-cmd-util.sh#L1353

For the second failure, I initially thought the failure was due to no
I am not sure whether we need to add

After spending a whole day debugging the failures, I think I should ask for help from someone with more experience. Not sure whether either of you has bandwidth to chat about it in person.
Force-pushed from fde34bf to bb61b1f
Run a cluster locally to see what resources are supported in each apps/ API version. ControllerRevision does not need to be moved together with the other resources that use it, as they are independent. For example, you can have apps/v1beta1 Deployments using extensions/v1beta1 ReplicaSets.
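One way to check this programmatically rather than eyeballing kubectl output is the client-go discovery client. A minimal sketch, assuming a local cluster reachable through the default kubeconfig (the set of group versions probed here is an assumption for illustration):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a local cluster reachable via the default kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	// List the resources served under each apps group version.
	for _, gv := range []string{"apps/v1beta1", "apps/v1beta2", "apps/v1"} {
		list, err := dc.ServerResourcesForGroupVersion(gv)
		if err != nil {
			fmt.Printf("%s: not served (%v)\n", gv, err)
			continue
		}
		for _, r := range list.APIResources {
			fmt.Printf("%s: %s\n", gv, r.Name)
		}
	}
}
```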
Force-pushed from 2534a5b to c2e9544
Excluding
It looks like kubectl prioritizes
pkg/apis/apps/install/install.go
Outdated
@@ -37,12 +38,13 @@ func Install(groupFactoryRegistry announced.APIGroupFactoryRegistry, registry *r
 	if err := announced.NewGroupMetaFactory(
 		&announced.GroupMetaFactoryArgs{
 			GroupName: apps.GroupName,
-			VersionPreferenceOrder: []string{v1beta1.SchemeGroupVersion.Version, v1beta2.SchemeGroupVersion.Version},
+			VersionPreferenceOrder: []string{v1beta1.SchemeGroupVersion.Version, v1beta2.SchemeGroupVersion.Version, v1.SchemeGroupVersion.Version},
@janetkuo To verify, does it mean v1beta1 or v1 is preferred? It looks to me that the order is correct.
v1beta1 is preferred
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Thanks for verifying!
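For readers following along, a tiny self-contained sketch of the semantics being confirmed here (this is not the Kubernetes registration code itself, just an illustration of the convention): the list is ordered most-preferred first, so keeping v1beta1 at the front keeps it the preferred version.

```go
package main

import "fmt"

func main() {
	// Mirrors the VersionPreferenceOrder in the diff above: the slice is ordered
	// most-preferred first, so with this ordering v1beta1 stays the preferred version.
	versionPreferenceOrder := []string{"v1beta1", "v1beta2", "v1"}

	preferred := versionPreferenceOrder[0]
	fmt.Printf("preferred apps version: %s\n", preferred) // prints v1beta1
}
```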
cmd/kube-apiserver/app/aggregator.go
Outdated
@@ -205,6 +205,7 @@ var apiVersionPriorities = map[schema.GroupVersion]priority{
 	// to my knowledge, nothing below here collides
 	{Group: "apps", Version: "v1beta1"}: {group: 17800, version: 1},
 	{Group: "apps", Version: "v1beta2"}: {group: 17800, version: 1},
should these have equal priority?
It may be due to different API groups having different priorities. The extensions/v1beta1 daemonset works fine and passes the test, but the apps/v1beta1 statefulset does not. Instead, the apps/v1 statefulset is preferred by default, which fails the 2 kubectl tests.
To answer your question, I am not sure about it - I haven't worked on the apiserver/aggregator code before. I lean towards thinking they should have different priorities, though.
I tried to ask how the priority should be set but haven't gotten a response yet: #49391 (comment)
Force-pushed from e7b912e to fb26e6a
@janetkuo I may have found the real reason behind why
Force-pushed from fb26e6a to 35a684f
@liggitt we recently added
However, when this PR bumps StatefulSet to apps/v1 (StatefulSet is already in apps/v1beta1 and apps/v1beta2),
no. what order does it appear in discovery? did you register the "apps/v1" in
The order is
Yes: https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-apiserver/app/aggregator.go#L208
@liggitt The DaemonSet and apps/v1 PR registers the v1 group. The v1beta1 and v1beta2 groups are at the same priority but the v1 group is at a higher priority. Does the ordering matter, and do the priorities have to be unique? If so, I'm curious as to how this works for DaemonSet. Is it because DaemonSet is in exactly one of v1beta1 and v1beta2 while StatefulSet is in all 3?
The version priority is determined by the value of the
That translates to the VersionPriority on the registered APIService:
that registration placed
@liggitt Understood. @crimsonfaith91 you should modify the v1beta2 priority to be higher than the v1beta1 priority and lower than v1.
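As a sketch of what that suggestion could look like in the apiVersionPriorities map from the diff above (the numbers below are illustrative assumptions, not the values that were merged; higher version values sort earlier within a group, so discovery would prefer v1, then v1beta2, then v1beta1). The types are defined locally so the snippet compiles on its own:

```go
package main

import "fmt"

// groupVersion and priority mirror the shapes used in
// cmd/kube-apiserver/app/aggregator.go, redefined here so the sketch is self-contained.
type groupVersion struct{ Group, Version string }
type priority struct{ group, version int32 }

func main() {
	// Illustrative values only: the point is that the version priorities within
	// the apps group are distinct and ordered v1 > v1beta2 > v1beta1.
	apiVersionPriorities := map[groupVersion]priority{
		{Group: "apps", Version: "v1beta1"}: {group: 17800, version: 1},
		{Group: "apps", Version: "v1beta2"}: {group: 17800, version: 9},
		{Group: "apps", Version: "v1"}:      {group: 17800, version: 15},
	}
	for gv, p := range apiVersionPriorities {
		fmt.Printf("%s/%s -> group %d, version %d\n", gv.Group, gv.Version, p.group, p.version)
	}
}
```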
DaemonSet works because
Great!! This is the answer I have been searching for over the past 2 days!
Note that that is the external API priority. The internal storage priority is separately controllable, which lets us advertise one order to API clients while still storing a back-level version in etcd for compatibility with HA API upgrades. That is what the etcd_storage_versions test checks: which version is persisted in etcd (which should always be a version readable by n-1 API servers, for resources that existed in a prior release).
@liggitt we are not switching the storage versions. We still have to verify storage version upgrades.
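For readers unfamiliar with the distinction being made here, this is a purely hypothetical sketch of the kind of assertion such a storage-version test makes; the map contents and the "observed" values are invented for illustration, while the real test reads the persisted objects back from a live etcd:

```go
package main

import "fmt"

func main() {
	// Hypothetical expectations: the group/version each resource should be
	// persisted as in etcd, independent of which versions are served to clients.
	expectedStorageVersion := map[string]string{
		"statefulsets": "apps/v1beta1",       // assumption for illustration only
		"daemonsets":   "extensions/v1beta1", // assumption for illustration only
	}

	// Hypothetical "observed" values, standing in for what a real test would
	// read back from etcd after creating each object.
	persistedAs := map[string]string{
		"statefulsets": "apps/v1beta1",
		"daemonsets":   "extensions/v1beta1",
	}

	for resource, want := range expectedStorageVersion {
		if got := persistedAs[resource]; got != want {
			fmt.Printf("%s persisted as %s, want %s\n", resource, got, want)
		} else {
			fmt.Printf("%s ok (%s)\n", resource, want)
		}
	}
}
```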
@kow3ns Based on our discussion offline, I started looking at the REST client code to fix the kubectl statefulset history test failure. I was able to resolve the incompatibility between
This error makes a lot of sense. The kubectl test was "given"
I plan to convert the statefulset from
That does not make sense to me... if you create an apps/v1 statefulset, it should be accessible as an apps/v1beta1 statefulset as well
also, that error appears to be a failure to reach the API server, not a 404 on a particular API group/version/resource
 	DeprecatedTemplateGeneration = "deprecated.daemonset.template.generation"
 )

 // ScaleSpec describes the attributes of a scale subresource
 type ScaleSpec struct {
@DirectXMan12 @deads2k for propagation of scale into apps/v1... I thought we really didn't want to do this
#49504 indicates that this is still not clear. Currently we have an individual Scale in each group version. If there is a plan to change this in 1.9 and migrate to a different strategy, it is not concrete AFAIK.
the current approach has known problems (HPA must hardcode custom clients for everything it is asked to scale) and scale was added to the apps group without considering how it would actually interact with autoscaling. we should complete the design that accounts for that before propagating those problems into a stable API group.
Agreed that this warrants more discussion. Added a TODO on this PR.
We did consider the auto-scaling interaction when adding Scale to the apps group, as well as the interaction with the eviction controller and pod disruption budgets. In #42752 we decided to add a scale subresource for all objects that made sense. If we are going to pivot on the implementation, that is fine, and importing the scale subresource from another location will be a relatively minor change. If the duck typing approach from #49504 is taken, I still think we should support the scale subresource in the apps group because we have it in extensions/v1beta1 and users of Deployment may have built expectations about it into their code.
We did consider the auto-scaling interaction when adding Scale to the apps group
An approach that required hard-coding HPA to yet another API group's Scale kind seems like a mistake.
In #42752 we decided to add a scale subresource for all objects that made sense.
A scale subresource makes sense. Representing it with a different group/version/kind in every API group that must still have the same schema does not. We can add scale subresources without redefining Scale kinds in every API group.
If the duck typing approach from #49504 is taken...
I strongly hope it is not. We don't use duck-typing for other parts of our API; I don't think we should start promoting it now.
I still think we should support the scale subresource in the apps group because we have it in extensions/v1beta1
The scale subresource is a good idea, but I have yet to see a compelling reason to define different Scale kinds in multiple API groups that are required to have identical schemas for duck typing. That seems like the worst of both worlds... you can't have a generic scale client that can use the same Scale kind against all /scale subresources, yet you don't gain any flexibility in having per-group Scale kinds because of duck typing constraints.
Before merging yet another Scale type (in a long-term supported version, no less), is there a reason not to settle on a common Scale kind?
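For context on what "a generic scale client [using] the same Scale kind against all /scale subresources" could look like, here is a hypothetical sketch. The interface, type names, and in-memory fake below are invented for illustration; they are not the API that was eventually adopted in client-go, only the shape of the idea:

```go
package main

import "fmt"

// Scale is a single, group-agnostic scale representation: the same kind would be
// served by every /scale subresource regardless of the parent object's API group.
// (Hypothetical shape for illustration only.)
type Scale struct {
	Replicas int32
}

// ScaleClient is a hypothetical generic client: one interface that can read and
// update /scale for any scalable resource identified by group/resource/name,
// so HPA would not need a hard-coded client per API group.
type ScaleClient interface {
	GetScale(group, resource, namespace, name string) (*Scale, error)
	UpdateScale(group, resource, namespace, name string, s *Scale) (*Scale, error)
}

// fakeScaleClient is an in-memory stand-in so the sketch runs without a cluster.
type fakeScaleClient struct {
	replicas map[string]int32
}

func key(group, resource, namespace, name string) string {
	return group + "/" + resource + "/" + namespace + "/" + name
}

func (f *fakeScaleClient) GetScale(group, resource, namespace, name string) (*Scale, error) {
	return &Scale{Replicas: f.replicas[key(group, resource, namespace, name)]}, nil
}

func (f *fakeScaleClient) UpdateScale(group, resource, namespace, name string, s *Scale) (*Scale, error) {
	f.replicas[key(group, resource, namespace, name)] = s.Replicas
	return s, nil
}

func main() {
	var c ScaleClient = &fakeScaleClient{replicas: map[string]int32{
		"apps/statefulsets/default/web": 3,
	}}

	// The same client and the same Scale kind work for any scalable resource.
	s, _ := c.GetScale("apps", "statefulsets", "default", "web")
	s.Replicas = 5
	c.UpdateScale("apps", "statefulsets", "default", "web", s)
	fmt.Println("scaled web to", s.Replicas)
}
```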
An approach that required hard-coding HPA to yet another API group's Scale kind seems like a mistake.
Perhaps, but if we decide to replace it with a kind from a different group version rather than using one resource per group, the PR to implement this can, IMO, be handled as a follow-up.
Before merging yet another Scale type (in a long-term supported version, no less), is there a reason not to settle on a common Scale kind?
We want to issue and merge the PRs to promote the workloads API. There will be follow up PRs to add Deployment, to add ReplicaSet, to add Condition types #51594, and to deal with any other outstanding issues. If we choose a different, and definitive, home for the scale sub-resource, I would prefer to deal with it in a follow up PR (which should be trivial) rather than blocking all other development pending resolution of the conversation in #49504.
If we choose a different, and definitive, home for the scale sub-resource I would prefer to deal with it in a follow up PR
We shouldn't merge apps/v1 Scale until this question is settled. It can be pulled out of this PR to unblock it.
@liggitt the kubectl code converts an unversioned client into a versioned client by passing the internal REST client directly to the constructor. This seems to cause the versioned client to call against the v1 endpoint and attempt conversion into a v1beta1 StatefulSet.
Agreed, but the problem that I want to solve involves a different use case. We intended the following priority order:
I can verify the error is not about failure to reach the API server. I tested it with
Ah. That is not the correct way to construct a versioned client, and is a bug that needs to be fixed.
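As a point of reference, a minimal sketch of constructing a properly versioned REST client with client-go; it assumes a recent client-go (the Do(ctx)/WithoutConversion wiring differs in older releases) and the default kubeconfig, and is not the kubectl fix itself:

```go
package main

import (
	"context"
	"fmt"

	appsv1beta1 "k8s.io/api/apps/v1beta1"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig; adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// A versioned client is pinned to one group/version; the server converts
	// on the wire, so no internal-client plumbing leaks into the request path.
	config.APIPath = "/apis"
	config.GroupVersion = &appsv1beta1.SchemeGroupVersion
	config.NegotiatedSerializer = scheme.Codecs.WithoutConversion()

	client, err := rest.RESTClientFor(config)
	if err != nil {
		panic(err)
	}

	result := &appsv1beta1.StatefulSetList{}
	// GET /apis/apps/v1beta1/namespaces/default/statefulsets
	err = client.Get().
		Namespace("default").
		Resource("statefulsets").
		Do(context.TODO()).
		Into(result)
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d statefulsets\n", len(result.Items))
}
```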
@crimsonfaith91 the error you showed me earlier was a casting error inside the client's conversion, and is likely due to the config problem we discussed. As @liggitt points out, the error you've posted above is a transport layer connection error. This is a different problem.
@liggitt that is precisely the conversation @crimsonfaith91 and I had offline :)
Thank you very much for the detailed explanation about
@kow3ns Finally fixed the failing tests in the newest run!
Force-pushed from 539118f to cfe6685
Closing as a dup of #53679.
What this PR does / why we need it:
This PR bumps statefulset to apps/v1.

Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): kubernetes/enhancements#484

Release note:

TODO:
ScaleSpec into apps/v1