High flake rates on Downward API volume update e2e tests #59813
Description
There have been 178 flakes across the various forms of the Downward API volume update e2e tests in the past week. The flakes span several variations of the same functionality (annotation vs. label update, direct downwardAPI vs. projected volume), all rooted in the same underlying issue.
As far as I can tell, the volume manager is not receiving the updated version of the pod, so the update to the downward API volume file never happens, even after periodic syncs of the pod.
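For context, here is a minimal sketch (not the actual e2e code) of the pattern these tests follow: the pod runs a container that repeatedly cats the file backed by the downward API volume, the test updates a pod label, then polls the container logs until the new value appears. The flake is a timeout in that poll. The namespace, pod name, label key, and expected log format below are illustrative assumptions, and the client calls assume a recent client-go.

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()

	// Illustrative names; the real tests create the pod themselves with a
	// downward API (or projected) volume projecting metadata.labels.
	const ns, podName = "default", "labelsupdate-pod"

	// Update the label that the downward API volume projects into the file.
	pod, err := client.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["mylabel"] = "newvalue"
	if _, err := client.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Poll the container logs until the kubelet rewrites the volume file with
	// the new value. The flakes are timeouts here: per the description, the
	// volume manager never sees the updated pod object, so the file is never
	// rewritten even by the periodic sync.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		raw, err := client.CoreV1().Pods(ns).GetLogs(podName, &corev1.PodLogOptions{}).Do(ctx).Raw()
		if err != nil {
			return false, nil // retry on transient log-fetch errors
		}
		return bytes.Contains(raw, []byte(`mylabel="newvalue"`)), nil
	})
	if err != nil {
		panic(fmt.Errorf("downward API volume file never updated: %w", err))
	}
}
```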
My investigation thus far is here:
https://bugzilla.redhat.com/show_bug.cgi?id=1538216#c2
This has been an ongoing issue for a long time.
Specifically:
https://storage.googleapis.com/k8s-gubernator/triage/index.html?pr=1&sig=storage&test=on%20modification#29d7730339ad87387e21
https://storage.googleapis.com/k8s-gubernator/triage/index.html?pr=1&sig=storage&test=on%20modification#00453201efe289cd34ce
https://storage.googleapis.com/k8s-gubernator/triage/index.html?pr=1&sig=storage&test=on%20modification#f2925bc52d8730d3df2c
https://storage.googleapis.com/k8s-gubernator/triage/index.html?pr=1&sig=storage&test=on%20modification#19fb1c2e8a094bfc157a
Previous incarnations of this issue (all closed):
https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+downward+api+modification+is%3Aclosed
The most recent issues were closed due to inactivity, without resolution:
#44226
#44227
Origin issues:
openshift/origin#17605
openshift/origin#17556
xref https://bugzilla.redhat.com/show_bug.cgi?id=1538216
@derekwaynecarr @dashpole @dchen1107 @vishh @smarterclayton @runcom @yujuhong @saad-ali