vSphere Volume Pod Attachment Times Out #38068
Comments
@BaluDontu is this the same as #37022?
@kerneltime I am also running into this, even after removing the space between the datastore and the path. If I turn up debug on the kubelet to v=2, I see this scrolling over and over: attacher.go:138] Checking VMDK "[datastore] kubevols/test-pv.vmdk" is attached. Also, as a side note, the kubelet is generating and holding a lot of connections to vCenter while this is happening. I am still running the 1.4.7 build from your branch cherry-pick-37413.
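For anyone watching for the same messages, here is a minimal sketch of tailing the kubelet log on a systemd-managed node; the kubelet unit name and the use of journalctl are assumptions, so adjust them for how your kubelet is actually run:

# follow the kubelet log and show only the vSphere attach checks
journalctl -u kubelet -f | grep 'attacher.go'
# kubelet verbosity is raised by adding --v=2 (or higher) to the kubelet flags and restarting the service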
@FeatherKing It looks like there is a mismatch between the real device path and the device path retrieved by the kubelet. Could you please check what the device path is on your node after the volume is attached?
cc @kubernetes/sig-storage
@FeatherKing: I guess you are using an existing Kubernetes cluster where you first tried to create a deployment with volume path [datastore] kubevols/test-pv.vmdk. After that, Kubernetes stalls attempting to check for the volume [datastore] kubevols/test-pv.vmdk. On this existing cluster, which is already stalled, if you now create a new deployment that uses volumePath [datastore]kubevols/test-pv.vmdk (without the space), it still won't work, and you will keep seeing messages like "attempting to mount volume/VerifyControllerAttached for [datastore] kubevols/test-pv.vmdk". That is because Kubernetes still attempts to mount the volume it could not find at all, and because of this all subsequent deployments fail. To resolve this, can you create a new Kubernetes cluster and then use the vSphere volume path [datastore]kubevols/test-pv.vmdk (without the space)? It should then work fine.
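For reference, a minimal sketch of a pod spec using the no-space volumePath form; the datastore and VMDK path come from this thread, while the pod/container names, image, and fsType are illustrative placeholders:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: vmdk-storage
      mountPath: /test-vmdk
  volumes:
  - name: vmdk-storage
    # in-tree vSphere volume; note there is no space between "[datastore]" and the path
    vsphereVolume:
      volumePath: "[datastore]kubevols/test-pv.vmdk"
      fsType: ext4
EOF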
@FeatherKing For
Here are some more logs after restarting all my services and using no space after the datastore name: controller logs at pod creation, node kubelet logs at pod creation, and controller logs a little after creating the pod (the full excerpts are quoted in the reply below).
Could you please double-check whether /dev/disk/by-id/wwn-0x6000c290c97403da891839e9bd75ecac exists on the node or not? Since there are multiple log lines showing Checking VMDK "[datastore]kubevols/test.vmdk" is attached, it seems like it could not verify this device path.
On Tue, Dec 6, 2016 at 1:24 PM, FeatherKing wrote:
Here are some more logs after restarting all my services and using no space after the datastore name.
controller logs at pod creation:
reconciler.go:202] Started AttachVolume for volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" to node "intk8sm2"
operation_executor.go:612] AttachVolume.Attach succeeded for volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" (spec.Name: "vmdk-storage") from node "intk8sm2".
node kubelet at pod creation:
reconciler.go:229] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" (spec.Name: "vmdk-storage") pod "43fe2d93-bbf7-11e6-928f-005056a34c75" (UID: "43fe2d93-bbf7-11e6-928f-005056a34c75")
nestedpendingoperations.go:262] Operation for ""kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk"" failed. No retries permitted until 2016-12-06 21:02:39.892310318 +0000 UTC (durationBeforeRetry 16s). Error: Volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" (spec.Name: "vmdk-storage") pod "43fe2d93-bbf7-11e6-928f-005056a34c75" (UID: "43fe2d93-bbf7-11e6-928f-005056a34c75") has not yet been added to the list of VolumesInUse in the node's volume status.
reconciler.go:229] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" (spec.Name: "vmdk-storage") pod "43fe2d93-bbf7-11e6-928f-005056a34c75" (UID: "43fe2d93-bbf7-11e6-928f-005056a34c75")
operation_executor.go:1172] Controller successfully attached volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" (spec.Name: "vmdk-storage") pod "43fe2d93-bbf7-11e6-928f-005056a34c75" (UID: "43fe2d93-bbf7-11e6-928f-005056a34c75") devicePath: "/dev/disk/by-id/wwn-0x6000c290c97403da891839e9bd75ecac"
reconciler.go:305] MountVolume operation started for volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" (spec.Name: "vmdk-storage") to pod "43fe2d93-bbf7-11e6-928f-005056a34c75" (UID: "43fe2d93-bbf7-11e6-928f-005056a34c75").
operation_executor.go:804] Entering MountVolume.WaitForAttach for volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" (spec.Name: "vmdk-storage") pod "43fe2d93-bbf7-11e6-928f-005056a34c75" (UID: "43fe2d93-bbf7-11e6-928f-005056a34c75") DevicePath: "/dev/disk/by-id/wwn-0x6000c290c97403da891839e9bd75ecac"
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
kubelet.go:1813] Unable to mount volumes for pod "pvpod_default(43fe2d93-bbf7-11e6-928f-005056a34c75)": timeout expired waiting for volumes to attach/mount for pod "pvpod"/"default". list of unattached/unmounted volumes=[vmdk-storage]; skipping
pod_workers.go:184] Error syncing pod 43fe2d93-bbf7-11e6-928f-005056a34c75, skipping: timeout expired waiting for volumes to attach/mount for pod "pvpod"/"default". list of unattached/unmounted volumes=[vmdk-storage]
from my controller logs a little bit after creating the pod:
vsphere.go:894] Failed to create vSphere client. err: Post https://vcenter:443/sdk: net/http: TLS handshake timeout
attacher.go:105] Error checking if volumes ([[datastore]kubevols/test.vmdk]) are attached to current node ("intk8sm2"). err=Post https://vcenter:443/sdk: net/http: TLS handshake timeout
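A minimal sketch of that check, run on the worker node the pod was scheduled on; the WWN is the devicePath reported in the kubelet log above:

# list the by-id symlinks, then look for the specific WWN from the log
ls -l /dev/disk/by-id/
ls -l /dev/disk/by-id/wwn-0x6000c290c97403da891839e9bd75ecac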
@jingxu97 That path doesn't exist; only the CD-ROM for the VM shows up in there. @abrarshivani I did not have disk.EnableUUID set to true. I just set it now and will try again. Should this be in my .vmx automatically?
@FeatherKing Yes. The path won't appear unless you have this property set.
@abrarshivani Setting disk.EnableUUID seemed to clear this up in my test cluster! Thank you!
Closing this issue since it is solved.
Just to confirm, setting Disk.EnableUuid to True in my worker VM's configuration did allow the disk to successfully mount.
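For anyone hitting the same symptom, here is a sketch of enabling that property on a worker VM. The govc invocation and VM name are assumptions; the same setting can be added through the vSphere client or by editing the .vmx while the VM is powered off, and the VM typically needs a power cycle for it to take effect:

# add disk.EnableUUID to the VM's extra configuration with govc (VM name is a placeholder)
govc vm.change -vm k8s-worker-1 -e disk.enableUUID=TRUE
# equivalent .vmx entry:
# disk.EnableUUID = "TRUE"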
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): vSphere, pod, timeout
Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report
Kubernetes version (use kubectl version): Server Version: version.Info{Major:"1", Minor:"5+", GitVersion:"v1.5.0-beta.2+coreos.0", GitCommit:"4fbc7c924d1e09ef018598bd053b596eb9bdd95c", GitTreeState:"clean", BuildDate:"2016-11-29T01:43:08Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
uname -a: Linux k8-w2 4.7.3-coreos-r2
What happened:
I created a Pod with the following spec, using @abrarshivani's example here:
The VMDK specified in the volumePath was successfully attached to the VM that the pod was scheduled on, but the pod becomes stuck in the ContainerCreating state. After 2 minutes, the kubelet on the worker node that the pod was scheduled on logs a timeout error.
What you expected to happen:
I expected the Pod to be created with the VMDK mounted under /test-vmdk inside the pod.
How to reproduce it (as minimally and precisely as possible; the pod-creation step between these two commands is sketched just below):
vmkfstools -c 2G /vmfs/volumes/kstore-k8s-vol/volumes/test.vmdk
kubectl describe pod pvpod
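A sketch of the missing middle step, assuming a pod manifest like the one shown earlier in this thread is saved as pvpod.yaml (the file name is a placeholder):

# create the pod that references the VMDK
kubectl create -f pvpod.yaml
# the pod hangs in ContainerCreating; then inspect the events
kubectl get pod pvpod -w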
Anything else we need to know:
kube-controller-manager log entries;
k8-w2 worker node kubelet logs;
VM hardware with VMDK showing as being attached;