kubernetes-e2e-gke-staging: broken test run #30962
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6334/

Multiple broken tests:

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}
Issues about this test specifically: #27295

Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #28283

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26126 #30653

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}
Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}
Issues about this test specifically: #29954

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6337/ Run so broken it didn't make JUnit output!
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6352/

Multiple broken tests:

Failed: [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27703

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

Failed: [k8s.io] Pods should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
Issues about this test specifically: #28071
@cjcullen can you look into this? Looks like auth problems.

We don't have any logs left from the staging jobs. I'll watch for further flakes.
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6363/ Run so broken it didn't make JUnit output!

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6366/ Run so broken it didn't make JUnit output!

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6369/ Run so broken it didn't make JUnit output!

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6370/ Run so broken it didn't make JUnit output!

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6372/ Run so broken it didn't make JUnit output!
[FLAKE-PING] @cjcullen @jlowdermilk This flaky-test issue would love to have more attention.

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6376/ Run so broken it didn't make JUnit output!

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6379/ Run so broken it didn't make JUnit output!
Recent flakes appear to be Jenkins weirdness: clusters are being deleted while they are still being tested. I have @rmmh helping me look into it.
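If anyone else wants to confirm the cluster-deletion theory, one rough option is to watch the staging project's GKE cluster list while a run is in flight and see whether the cluster under test vanishes before the run finishes. A minimal sketch; the project and zone values are placeholders, not the actual job configuration:

```bash
# Hypothetical spot-check: poll the GKE cluster list during an e2e run.
# STAGING_PROJECT and ZONE are placeholders, not the real job config.
STAGING_PROJECT="my-staging-project"
ZONE="us-central1-f"
watch -n 30 "gcloud container clusters list --project=${STAGING_PROJECT} --zone=${ZONE}"
```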
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6384/ Run so broken it didn't make JUnit output!

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6380/ Run so broken it didn't make JUnit output!

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6383/ Run so broken it didn't make JUnit output!
This was tested manually with the small script:

```bash
#!/bin/bash
# test with "timeout -k15 2 ./leak-test.sh"
# observe that the trap properly cleans up the container.
CONTAINER_NAME="leak-$$"
echo "container: $CONTAINER_NAME"
trap "docker stop ${CONTAINER_NAME}" EXIT
docker run --rm --name="${CONTAINER_NAME}" ubuntu sleep 600
trap '' EXIT
```

This should fix flakes associated with leaked containers: kubernetes/kubernetes#30962 and kubernetes/kubernetes#31213
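For reference, a minimal way to exercise the script and verify nothing leaks, assuming it is saved as ./leak-test.sh and Docker is available locally; the `docker ps` step is an extra sanity check, not part of the original script:

```bash
# Kill the script partway through the `docker run` and rely on the EXIT trap
# to stop the container, as described in the comment above.
chmod +x ./leak-test.sh
timeout -k15 2 ./leak-test.sh

# Extra sanity check (not in the original script): after giving the trap a
# moment to run, no "leak-*" container should remain.
sleep 5
docker ps --all --filter "name=leak-" --format '{{.Names}}'   # expect no output
```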
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6387/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26126 #30653

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}
Issues about this test specifically: #29954

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}
Issues about this test specifically: #26168 #27450

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
Issues about this test specifically: #27662 #29820

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}
[FLAKE-PING] @rmmh This flaky-test issue would love to have more attention.

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6386/ Run so broken it didn't make JUnit output!
Closing as a dupe of #31213.
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging/6333/ Run so broken it didn't make JUnit output!