init-container definition (non-anno.) persisting after removal from deployment #45627
Comments
Another observation: after creating a sparkly new deployment, the API returns a structure that includes the generated alpha/beta init-container annotations. Whoever spent time making it look like this needs to be stared at balefully for at least a few minutes.
I did some simple digging into the code and found the root cause might be here: SetInitContainersAndStatuses. @smarterclayton could give a detailed explanation.
@erictune re beta annotations. Unfortunately, part of supporting beta annotations is that they must override and take priority over the fields. Otherwise they would break if someone set init annotations. We will drop the annotations in 1.X (I was hoping 1.7, but not sure what guarantee we established, so 1.8 if not 1.7).
kubelets can skew two versions older than the apiserver, so 1.8, probably
/sig apps
That's all very interesting, but that doesn't mitigate init-containers being essentially broken in 1.6 (relative to the 1.6 documentation page). To reiterate: defining the init containers in `.template.spec.initContainers` works on the initial deploy (1.6.x) ... and produces a magically generated annotation structure that does nothing.
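Roughly, for a hypothetical `django-app` deployment (names, image tags, and commands are illustrative, not the original snippet), the spec-level definition in question looks like this:

```yaml
# Hypothetical minimal example; names, image tags, and commands are illustrative.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: django-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: django-app
    spec:
      initContainers:
      - name: migrate
        image: example/django-app:1.2.3   # must match the app container's version
        command: ["python", "manage.py", "migrate"]
      containers:
      - name: django-app
        image: example/django-app:1.2.3
        ports:
        - containerPort: 8000
```

Reading the deployment back then shows the generated `pod.alpha.kubernetes.io/init-containers` / `pod.beta.kubernetes.io/init-containers` annotations described elsewhere in this issue.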
Following the above advice from @gladiatr72, we've tried deleting the init-container annotations prior to new deploys, and thought this might help someone facing the same issue. Our init containers don't change very frequently, but we've just started using them a bit more, so we needed a workaround for these issues on k8s 1.6.4. If we only delete the annotations, they seem to get restored immediately from the GA notation, and at the point in time we wish to clear the init containers they reference the old versions. It seems the only way for us to update init containers on k8s 1.6.4 is to delete both the annotations and the GA notation:
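Roughly, assuming a hypothetical deployment named `django-app` (a sketch of the idea, not the exact commands used):

```sh
# Illustrative only (hypothetical deployment name): drop the generated template
# annotations AND the GA field, then re-apply the spec with the new init containers.
kubectl patch deployment django-app --type=json -p '[
  {"op": "remove", "path": "/spec/template/metadata/annotations/pod.beta.kubernetes.io~1init-containers"},
  {"op": "remove", "path": "/spec/template/metadata/annotations/pod.alpha.kubernetes.io~1init-containers"},
  {"op": "remove", "path": "/spec/template/spec/initContainers"}
]'
kubectl apply -f deployment.yaml
```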
To clarify, deleting only the annotations does not work for us, because afterwards they are immediately regenerated from the GA notation.
The population of the annotations happens during conversion, and can be removed in 1.8 (since the oldest supported kubelets will be at 1.6, which recognizes the API fields).
@liggitt Thanks, understood. Looking forward to it! Hopefully the above can help someone needing a workaround before 1.8.
fixed by #51816
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):
No
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):
initContainers, init-containers, init containers
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
bug
I was pleased to see the graduation of init-containers to GA. Unfortunately, there are two problems with how k8s is dealing with them:
Our use case involves a Django application component. The idea is to use init containers to deal with pre-flight checks and handle any pre-launch processing that needs to happen before the application comes online (typical use as per the docs).
Since these processes use the same image:version, the init containers need to run the same version of the app image as the main container.
Defining the init-containers in `.template.spec.initContainers` works with the initial deployment; however, any change to the deployment spec file within `.template.spec.initContainers` is not properly processed. Any subsequent applications to the deployment will cause the initial init-container definitions to run, but, of course, they're running the wrong version. This problem affects the entire init-container element definition, not just `image`.

Another interesting feature of this bug is that despite not having defined the init-containers in `.template.metadata.annotations`, the running deployment shows that the `.template.spec.initContainers` definitions have had their JSON equivalents packed into both `pod.alpha.kubernetes.io/init-containers` and `pod.beta.kubernetes.io/init-containers` annotations.

NOTE: At this point in my investigation, my original spec file has its init-containers defined only in `template.spec.initContainers`.
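What the API hands back at this point looks roughly like the following sketch (hypothetical names carried over from the example above; the annotation values are JSON-encoded container lists):

```yaml
# Rough sketch of the relevant portion of `kubectl get deployment django-app -o yaml`
# at this point; names and images are hypothetical.
spec:
  template:
    metadata:
      annotations:
        pod.alpha.kubernetes.io/init-containers: '[{"name":"migrate","image":"example/django-app:1.2.3","command":["python","manage.py","migrate"]}]'
        pod.beta.kubernetes.io/init-containers: '[{"name":"migrate","image":"example/django-app:1.2.3","command":["python","manage.py","migrate"]}]'
    spec:
      initContainers:
      - name: migrate
        image: example/django-app:1.2.3
        command: ["python", "manage.py", "migrate"]
```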
At this point I'm thinking the API must be pre-processing `template.spec.initContainers` to create the annotation strings--not a typical GA feature, but whatever... I'll roll with it, right? I decided to convert the init-container defs to JSON and see if the deployment problem goes away.
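Converted, the same hypothetical init container expressed only as a beta annotation looks roughly like this:

```yaml
# Illustrative: the hypothetical init container defined only via the beta
# annotation (1.5-style), with no .template.spec.initContainers in the spec file.
spec:
  template:
    metadata:
      annotations:
        pod.beta.kubernetes.io/init-containers: '[{"name":"migrate","image":"example/django-app:1.2.3","command":["python","manage.py","migrate"]}]'
```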
NOTE: At this point `.template.spec.initContainers` is removed from the spec file.

Defining and subsequently updating the init-containers as beta annotations work just like in 1.5. Functionally, I'm in good shape; however, I ask the API for its view of the deployment spec and `.template.spec.initContainers` is back!

Environment:
Cloud provider or hardware configuration:
gce
OS (e.g. from /etc/os-release):
Debian 8
What happened:
see above
What you expected to happen:
I expected the support of the beta annotation to be the piece that was duct-taped onto the new spot on the API for init-containers rather than the other way around.
How to reproduce it (as minimally and precisely as possible):
1. Create a deployment that uses a versioned image for an init-container, with the definitions under `.template.spec.initContainers[]`.
2. Once the deployment is verified to work, update the spec file's init-container definition to use a different version of the image.
3. After deployment, `kubectl describe deployment thing` and gaze in dismay.
4. Convert the init-container definition to an annotation JSON string. Redeploy. Functionally happy.
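Roughly, assuming the spec-level sketch earlier in this issue lives in `deployment.yaml`:

```sh
# Hypothetical walkthrough of the repro steps above; names and tags are illustrative.
kubectl apply -f deployment.yaml          # init container image: example/django-app:1.2.3
# ...bump the init-container image tag in deployment.yaml to 1.2.4, then:
kubectl apply -f deployment.yaml
kubectl describe deployment django-app    # per the report above, the init container still shows 1.2.3
```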
Anything else we need to know:
It appears part of this pig's lipstick is that when I either view or attempt to directly edit the deployment spec, the annotation is converted back into YAML for my viewing pleasure (with the alpha and beta init-container annotations in the template metadata). If I try to delete said YAML, kube-api doesn't complain; however, another `kubectl get deployment -o yaml` shows the bits that I just deleted!

Needless to say, if someone else comes along to interact with this deployment or its child pods, their first instinct will be to (try to) interact with the YAML structure, not the nightmarish JSON strings.
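That loop looks roughly like this, for the hypothetical `django-app` deployment:

```sh
# Hypothetical deployment name; illustrates the edit/read-back loop described above.
kubectl edit deployment django-app         # delete the pod.*.kubernetes.io/init-containers annotations and save
kubectl get deployment django-app -o yaml  # the deleted annotations are back
```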