
Moving deployment from v1beta1 to v1beta2 inconsistent validation #57606

Closed
@Stono

Description

I have been upgrading some of my Deployments from apps/v1beta1 to apps/v1beta2 on kubernetes 1.8.4 and have noticed differences in yaml validation which mean my upgrades are inconsistent with fresh deployments (i.e. the upgrade works fine, but if we needed to rebuild, it would fail).

Take the following v1beta1 yaml, with no spec.selector.matchLabels; in v1beta1 the selector was defaulted from the template labels if omitted.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx

Observed behaviour
If I simply make a single change to this yaml file to switch it to v1beta2 (which is actually invalid, as spec.selector.matchLabels is required in v1beta2 and later):

apiVersion: apps/v1beta2
kind: Deployment
...

The apply is accepted by the server:

❯ kubectl apply -f test.yaml
deployment "nginx-deployment" configured

However, if I delete that deployment and try to recreate it, it (correctly) errors:

❯ kubectl delete deployment nginx-deployment
deployment "nginx-deployment" deleted

❯ kubectl apply -f test.yaml
The Deployment "nginx-deployment" is invalid:
* spec.selector: Required value
* spec.template.metadata.labels: Invalid value: map[string]string{"app":"nginx"}: `selector` does not match template `labels`

Expected behaviour
I believe a yaml file presented via apply in an upgrade scenario should be validated in the same way it would be on a create.

It feels like the validation for an apply is being done after a computed merge with what is already deployed: in an "upgrade" case the matchLabels were pre-computed, but in a create case they're not.
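
As an aside, the pre-computed selector can be seen on the live object (which I assume is what the apply is merged against) with something like:

❯ kubectl get deployment nginx-deployment -o yaml

which I'd expect to show a selector defaulted from the template labels, roughly:

  selector:
    matchLabels:
      app: nginx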

I hope you can see how this leaves me in a slightly uncomfortable position: I had been presuming that if a yaml file could be applied then it was valid, yet should we ever need to rebuild or do a fresh deploy (for example the first time an app is promoted to a new environment), those yaml files would not work.

At the very least, I feel this validation behaviour should be documented / explained so others don't get caught out!
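
For reference, the manifest I'd expect to be valid for both a fresh create and an apply under v1beta2 adds an explicit selector matching the template labels (a sketch, not yet tested against a cluster):

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx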

Activity

added needs-sig on Dec 24, 2017
Stono (Author) commented on Dec 24, 2017

@kubernetes/sig-api-machinery-bugs

added sig/api-machinery, kind/bug and removed needs-sig on Dec 24, 2017
k8s-ci-robot (Contributor) commented on Dec 24, 2017

@Stono: Reiterating the mentions to trigger a notification:
@kubernetes/sig-api-machinery-bugs

In response to this:

@kubernetes/sig-api-machinery-bugs

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

liggitt (Member) commented on Dec 29, 2017

Using apply to update a resource last applied with a different version does not work properly (see #16543 (comment))

It feels like the validation for an apply is being done after a computed merge with what is already deployed: in an "upgrade" case the matchLabels were pre-computed, but in a create case they're not.

That is correct

liggitt (Member) commented on Dec 29, 2017

The default computation of selectors wreaked havoc with apply, which is one of the main reasons it was removed in v1beta2

This is working as intended. The other issue is open to track the problem with cross-version apply.

caesarxuchao (Member) commented on Jan 4, 2018

added sig/docs and removed kind/bug on Jan 4, 2018
liggitt (Member) commented on Jan 4, 2018

From the server's perspective, apply is just a patch operation. Apply being able to update an existing object does not necessarily mean the same manifest would be valid for creating a new object.
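
Roughly speaking, the update case behaves like a strategic merge patch containing only the fields in the manifest, for example (illustrative only):

❯ kubectl patch deployment nginx-deployment -p '{"spec":{"replicas":2}}'

and the server validates the merged result, which already carries the selector from the original create, rather than the patch body as a standalone object.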

fejta-bot commented on Apr 4, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

added lifecycle/stale on Apr 4, 2018
