
vSphere Volume Pod Attachment Times Out #38068

Closed
KingJ opened this issue Dec 4, 2016 · 15 comments
Labels
sig/storage Categorizes an issue or PR as relevant to SIG Storage.

Comments

@KingJ

KingJ commented Dec 4, 2016

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): vSphere, pod, timeout


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report

Kubernetes version (use kubectl version): Server Version: version.Info{Major:"1", Minor:"5+", GitVersion:"v1.5.0-beta.2+coreos.0", GitCommit:"4fbc7c924d1e09ef018598bd053b596eb9bdd95c", GitTreeState:"clean", BuildDate:"2016-11-29T01:43:08Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: vSphere Cloud Provider - vCenter Server v6.5.0, ESXi 6.5.0
  • OS (e.g. from /etc/os-release): CoreOS 1185.3.0 (MoreOS)
  • Kernel (e.g. uname -a): Linux k8-w2 4.7.3-coreos-r2
  • Install tools: N/A
  • Others: N/A

What happened:
I created a Pod with the following spec, using @abrarshivani's example here:

apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: vmdk-storage
      mountPath: /test-vmdk
  volumes:
  - name: vmdk-storage
    vsphereVolume:
      volumePath: "[kstore-k8s-vol] volumes/test"
      fsType: ext4

The VMDK specified in the volumePath was successfully attached to the VM that the pod was scheduled on, but the pod became stuck in the ContainerCreating state. After 2 minutes, the kubelet on the worker node logged a timeout error.
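A minimal sketch of how to observe the failure, using the pod name from this issue (on CoreOS the kubelet runs via kubelet-wrapper, so the systemd unit name below is an assumption and may differ per install):

# Watch the pod sit in ContainerCreating (pod name from this issue):
kubectl get pod pvpod -w

# On the worker node, follow the kubelet logs for the timeout error;
# the unit name is an assumption and may differ on your install:
journalctl -u kubelet -f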

What you expected to happen:
I expected the Pod to be created with the VMDK mounted under /test-vmdk inside the pod.

How to reproduce it (as minimally and precisely as possible):

  1. Create a new VMDK using the command vmkfstools -c 2G /vmfs/volumes/kstore-k8s-vol/volumes/test.vmdk
  2. Create a new Pod using the following spec:
apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: vmdk-storage
      mountPath: /test-vmdk
  volumes:
  - name: vmdk-storage
    vsphereVolume:
      volumePath: "[kstore-k8s-vol] volumes/test"
      fsType: ext4
  3. Observe errors by running kubectl describe pod pvpod:
  FirstSeen     LastSeen        Count   From                    SubobjectPath   Type            Reason          Message
  ---------     --------        -----   ----                    -------------   --------        ------          -------
  13m           13m             1       {default-scheduler }                    Normal          Scheduled       Successfully assigned pvpod to k8-w2
  11m           17s             6       {kubelet k8-w2}                         Warning         FailedMount     Unable to mount volumes for pod "pvpod_default(ec696999-ba3d-11e6-a9a4-005056892975)": timeout expired waiting for volumes to attach/mount for pod "pvpod"/"default". list of unattached/unmounted volumes=[vmdk-storage]
  11m           17s             6       {kubelet k8-w2}                         Warning         FailedSync      Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "pvpod"/"default". list of unattached/unmounted volumes=[vmdk-storage]

Anything else we need to know:

kube-controller-manager log entries:

2016-12-04T16:23:00.422463105Z I1204 16:23:00.422244       1 reconciler.go:202] Started AttachVolume for volume "kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test" to node "k8-w2"
2016-12-04T16:23:00.429345287Z W1204 16:23:00.429078       1 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
2016-12-04T16:23:01.491332560Z I1204 16:23:01.491246       1 operation_executor.go:619] AttachVolume.Attach succeeded for volume "kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test" (spec.Name: "vmdk-storage") from node "k8-w2".

k8-w2 worker node kubelet logs:

Dec 04 16:23:00 k8-w2 kubelet-wrapper[946]: E1204 16:23:00.488905     946 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test\"" failed. No retries permitted until 2016-12-04 16:23:00.988885317 +0000 UTC (durationBeforeRetry 500ms). Error: Volume "kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test" (spec.Name: "vmdk-storage") pod "ec696999-ba3d-11e6-a9a4-005056892975" (UID: "ec696999-ba3d-11e6-a9a4-005056892975") has not yet been added to the list of VolumesInUse in the node's volume status.
Dec 04 16:23:00 k8-w2 kubelet-wrapper[946]: I1204 16:23:00.488827     946 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test" (spec.Name: "vmdk-storage") pod "ec696999-ba3d-11e6-a9a4-005056892975" (UID: "ec696999-ba3d-11e6-a9a4-005056892975")
Dec 04 16:23:00 k8-w2 kubelet-wrapper[946]: I1204 16:23:00.490608     946 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/ec696999-ba3d-11e6-a9a4-005056892975-default-token-zcpmt" (spec.Name: "default-token-zcpmt") pod "ec696999-ba3d-11e6-a9a4-005056892975" (UID: "ec696999-ba3d-11e6-a9a4-005056892975")
Dec 04 16:23:00 k8-w2 kubelet-wrapper[946]: I1204 16:23:00.608354     946 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/ec696999-ba3d-11e6-a9a4-005056892975-default-token-zcpmt" (spec.Name: "default-token-zcpmt") pod "ec696999-ba3d-11e6-a9a4-005056892975" (UID: "ec696999-ba3d-11e6-a9a4-005056892975").
Dec 04 16:23:00 k8-w2 kubelet-wrapper[946]: I1204 16:23:00.991910     946 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test" (spec.Name: "vmdk-storage") pod "ec696999-ba3d-11e6-a9a4-005056892975" (UID: "ec696999-ba3d-11e6-a9a4-005056892975")
Dec 04 16:23:00 k8-w2 kubelet-wrapper[946]: E1204 16:23:00.992429     946 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test\"" failed. No retries permitted until 2016-12-04 16:23:01.992407088 +0000 UTC (durationBeforeRetry 1s). Error: Volume "kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test" (spec.Name: "vmdk-storage") pod "ec696999-ba3d-11e6-a9a4-005056892975" (UID: "ec696999-ba3d-11e6-a9a4-005056892975") has not yet been added to the list of VolumesInUse in the node's volume status.
Dec 04 16:23:02 k8-w2 kubelet-wrapper[946]: I1204 16:23:02.000818     946 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test" (spec.Name: "vmdk-storage") pod "ec696999-ba3d-11e6-a9a4-005056892975" (UID: "ec696999-ba3d-11e6-a9a4-005056892975")
Dec 04 16:23:02 k8-w2 kubelet-wrapper[946]: E1204 16:23:02.001589     946 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test\"" failed. No retries permitted until 2016-12-04 16:23:04.00155815 +0000 UTC (durationBeforeRetry 2s). Error: Volume "kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test" (spec.Name: "vmdk-storage") pod "ec696999-ba3d-11e6-a9a4-005056892975" (UID: "ec696999-ba3d-11e6-a9a4-005056892975") has not yet been added to the list of VolumesInUse in the node's volume status.
Dec 04 16:23:04 k8-w2 kubelet-wrapper[946]: I1204 16:23:04.008834     946 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test" (spec.Name: "vmdk-storage") pod "ec696999-ba3d-11e6-a9a4-005056892975" (UID: "ec696999-ba3d-11e6-a9a4-005056892975")
Dec 04 16:23:04 k8-w2 kubelet-wrapper[946]: E1204 16:23:04.009560     946 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test\"" failed. No retries permitted until 2016-12-04 16:23:08.00952202 +0000 UTC (durationBeforeRetry 4s). Error: Volume "kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test" (spec.Name: "vmdk-storage") pod "ec696999-ba3d-11e6-a9a4-005056892975" (UID: "ec696999-ba3d-11e6-a9a4-005056892975") has not yet been added to the list of VolumesInUse in the node's volume status.
Dec 04 16:23:05 k8-w2 kubelet-wrapper[946]: W1204 16:23:05.154304     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:23:08 k8-w2 kubelet-wrapper[946]: I1204 16:23:08.035413     946 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test" (spec.Name: "vmdk-storage") pod "ec696999-ba3d-11e6-a9a4-005056892975" (UID: "ec696999-ba3d-11e6-a9a4-005056892975")
Dec 04 16:23:08 k8-w2 kubelet-wrapper[946]: I1204 16:23:08.040480     946 operation_executor.go:1199] Controller successfully attached volume "kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test" (spec.Name: "vmdk-storage") pod "ec696999-ba3d-11e6-a9a4-005056892975" (UID: "ec696999-ba3d-11e6-a9a4-005056892975") devicePath: "/dev/disk/by-id/wwn-0x6000c29644981da81416d23b00969968"
Dec 04 16:23:08 k8-w2 kubelet-wrapper[946]: I1204 16:23:08.136359     946 operation_executor.go:811] Entering MountVolume.WaitForAttach for volume "kubernetes.io/vsphere-volume/[kstore-k8s-vol] volumes/test" (spec.Name: "vmdk-storage") pod "ec696999-ba3d-11e6-a9a4-005056892975" (UID: "ec696999-ba3d-11e6-a9a4-005056892975") DevicePath: "/dev/disk/by-id/wwn-0x6000c29644981da81416d23b00969968"
Dec 04 16:23:15 k8-w2 kubelet-wrapper[946]: W1204 16:23:15.708296     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:23:26 k8-w2 kubelet-wrapper[946]: W1204 16:23:26.292271     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:23:36 k8-w2 kubelet-wrapper[946]: W1204 16:23:36.833943     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:23:47 k8-w2 kubelet-wrapper[946]: W1204 16:23:47.284526     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:23:57 k8-w2 kubelet-wrapper[946]: W1204 16:23:57.759809     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:24:08 k8-w2 kubelet-wrapper[946]: W1204 16:24:08.260559     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:24:18 k8-w2 kubelet-wrapper[946]: W1204 16:24:18.740360     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:24:29 k8-w2 kubelet-wrapper[946]: W1204 16:24:29.271524     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:24:39 k8-w2 kubelet-wrapper[946]: W1204 16:24:39.857839     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:24:50 k8-w2 kubelet-wrapper[946]: W1204 16:24:50.560441     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:25:00 k8-w2 kubelet-wrapper[946]: E1204 16:25:00.361456     946 kubelet.go:1521] Unable to mount volumes for pod "pvpod_default(ec696999-ba3d-11e6-a9a4-005056892975)": timeout expired waiting for volumes to attach/mount for pod "pvpod"/"default". list of unattached/unmounted volumes=[vmdk-storage]; skipping pod
Dec 04 16:25:00 k8-w2 kubelet-wrapper[946]: E1204 16:25:00.362208     946 pod_workers.go:184] Error syncing pod ec696999-ba3d-11e6-a9a4-005056892975, skipping: timeout expired waiting for volumes to attach/mount for pod "pvpod"/"default". list of unattached/unmounted volumes=[vmdk-storage]
Dec 04 16:25:01 k8-w2 kubelet-wrapper[946]: W1204 16:25:01.037741     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:25:11 k8-w2 kubelet-wrapper[946]: W1204 16:25:11.531951     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:25:12 k8-w2 kubelet-wrapper[946]: I1204 16:25:12.847941     946 operation_executor.go:916] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/ec696999-ba3d-11e6-a9a4-005056892975-default-token-zcpmt" (spec.Name: "default-token-zcpmt") pod "ec696999-ba3d-11e6-a9a4-005056892975" (UID: "ec696999-ba3d-11e6-a9a4-005056892975").
Dec 04 16:25:22 k8-w2 kubelet-wrapper[946]: W1204 16:25:22.065628     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:25:32 k8-w2 kubelet-wrapper[946]: W1204 16:25:32.635166     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:25:43 k8-w2 kubelet-wrapper[946]: W1204 16:25:43.113635     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:25:53 k8-w2 kubelet-wrapper[946]: W1204 16:25:53.599072     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:26:04 k8-w2 kubelet-wrapper[946]: W1204 16:26:04.074564     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:26:14 k8-w2 kubelet-wrapper[946]: W1204 16:26:14.750181     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:26:25 k8-w2 kubelet-wrapper[946]: W1204 16:26:25.399264     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:26:35 k8-w2 kubelet-wrapper[946]: W1204 16:26:35.875540     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated
Dec 04 16:26:46 k8-w2 kubelet-wrapper[946]: W1204 16:26:46.364960     946 vsphere.go:368] Creating new client session since the existing session is not valid or not authenticated

VM hardware view showing the VMDK as attached (screenshot omitted).

@kerneltime

@BaluDontu is this the same as #37022

@FeatherKing

FeatherKing commented Dec 6, 2016

@kerneltime I am also running into this, even after removing the space between the datastore name and the path. If I turn up kubelet logging to v=2, I see this scrolling over and over:

attacher.go:138] Checking VMDK "[datastore] kubevols/test-pv.vmdk" is attached

Also, as a side note, the kubelet is generating and holding a lot of connections to vCenter while this is happening. I am still running 1.4.7 from your branch cherry-pick-37413.

@jingxu97
Contributor

jingxu97 commented Dec 6, 2016

@FeatherKing Looks like there is a mismatch between the real device path and the device path retrieved by the kubelet. Could you please check what the device path is on your node after the volume is attached?
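One way to check is to compare the wwn device path from the kubelet log against what actually exists on the node; a quick sketch (the wwn value below is copied from the log output earlier in this issue and will differ per volume):

# On the worker node, list the disk device paths udev has created:
ls -l /dev/disk/by-id/

# Look specifically for the path the kubelet reported in its logs
# (wwn value taken from the log output above; yours will differ):
ls -l /dev/disk/by-id/wwn-0x6000c29644981da81416d23b00969968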

@jingxu97 jingxu97 added the sig/storage Categorizes an issue or PR as relevant to SIG Storage. label Dec 6, 2016
@jingxu97
Contributor

jingxu97 commented Dec 6, 2016

cc @kubernetes/sig-storage

@rootfs
Contributor

rootfs commented Dec 6, 2016

@abithap

@BaluDontu
Contributor

BaluDontu commented Dec 6, 2016

@FeatherKing: I guess you are using an existing Kubernetes cluster where you first tried to create a deployment with volume path [datastore] kubevols/test-pv.vmdk. After that, Kubernetes stalls attempting to check for the volume [datastore] kubevols/test-pv.vmdk. If, on this already-stalled cluster, you now try to create a new deployment using volumePath [datastore]kubevols/test-pv.vmdk (without the space), it still won't work, and you will see messages about "attempting to mount volume/verifycontrollerattached for [datastore] kubevols/test-pv.vmdk". That is because Kubernetes still attempts to mount the volume it couldn't find at all, and all subsequent deployments fail as a result.

In order to resolve this issue, can you create a new Kubernetes cluster and use the vSphere volume path [datastore]kubevols/test-pv.vmdk (without the space)? After that, it should work fine.
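As a sketch, the pod spec from earlier in this thread with the suggested no-space volumePath form (same example names as above):

# Recreate the pod with no space between the datastore name and the path:
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: vmdk-storage
      mountPath: /test-vmdk
  volumes:
  - name: vmdk-storage
    vsphereVolume:
      volumePath: "[datastore]kubevols/test-pv.vmdk"
      fsType: ext4
EOF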

@abrarshivani
Contributor

@FeatherKing Regarding attacher.go:138] Checking VMDK "[datastore] kubevols/test-pv.vmdk" is attached: is the property disk.enableUUID=TRUE set for all the VMs on which the Kubernetes cluster is launched?

Regarding the side note that the kubelet is generating and holding a lot of connections to vCenter while this is happening: PR #36169 solves this.

@robdaemon

In the case of what I saw with PR #36169, it never successfully attached disks on my production clusters, because it never actually logged in to vSphere. This happened as a result of the changes in #34491.

@FeatherKing

Here are some more logs, after restarting all my services and using no space after the datastore name.

Controller logs at pod creation:
reconciler.go:202] Started AttachVolume for volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" to node "intk8sm2"
operation_executor.go:612] AttachVolume.Attach succeeded for volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" (spec.Name: "vmdk-storage") from node "intk8sm2".

Node kubelet logs at pod creation:
reconciler.go:229] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" (spec.Name: "vmdk-storage") pod "43fe2d93-bbf7-11e6-928f-005056a34c75" (UID: "43fe2d93-bbf7-11e6-928f-005056a34c75")
nestedpendingoperations.go:262] Operation for ""kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk"" failed. No retries permitted until 2016-12-06 21:02:39.892310318 +0000 UTC (durationBeforeRetry 16s). Error: Volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" (spec.Name: "vmdk-storage") pod "43fe2d93-bbf7-11e6-928f-005056a34c75" (UID: "43fe2d93-bbf7-11e6-928f-005056a34c75") has not yet been added to the list of VolumesInUse in the node's volume status.
reconciler.go:229] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" (spec.Name: "vmdk-storage") pod "43fe2d93-bbf7-11e6-928f-005056a34c75" (UID: "43fe2d93-bbf7-11e6-928f-005056a34c75")
operation_executor.go:1172] Controller successfully attached volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" (spec.Name: "vmdk-storage") pod "43fe2d93-bbf7-11e6-928f-005056a34c75" (UID: "43fe2d93-bbf7-11e6-928f-005056a34c75") devicePath: "/dev/disk/by-id/wwn-0x6000c290c97403da891839e9bd75ecac"
reconciler.go:305] MountVolume operation started for volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" (spec.Name: "vmdk-storage") to pod "43fe2d93-bbf7-11e6-928f-005056a34c75" (UID: "43fe2d93-bbf7-11e6-928f-005056a34c75").
operation_executor.go:804] Entering MountVolume.WaitForAttach for volume "kubernetes.io/vsphere-volume/[datastore]kubevols/test.vmdk" (spec.Name: "vmdk-storage") pod "43fe2d93-bbf7-11e6-928f-005056a34c75" (UID: "43fe2d93-bbf7-11e6-928f-005056a34c75") DevicePath: "/dev/disk/by-id/wwn-0x6000c290c97403da891839e9bd75ecac"
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
attacher.go:138] Checking VMDK "[datastore]kubevols/test.vmdk" is attached
kubelet.go:1813] Unable to mount volumes for pod "pvpod_default(43fe2d93-bbf7-11e6-928f-005056a34c75)": timeout expired waiting for volumes to attach/mount for pod "pvpod"/"default". list of unattached/unmounted volumes=[vmdk-storage]; skipping
pod_workers.go:184] Error syncing pod 43fe2d93-bbf7-11e6-928f-005056a34c75, skipping: timeout expired waiting for volumes to attach/mount for pod "pvpod"/"default". list of unattached/unmounted volumes=[vmdk-storage]

From my controller logs, a little after creating the pod:
vsphere.go:894] Failed to create vSphere client. err: Post https://vcenter:443/sdk: net/http: TLS handshake timeout
attacher.go:105] Error checking if volumes ([[datastore]kubevols/test.vmdk]) are attached to current node ("intk8sm2"). err=Post https://vcenter:443/sdk: net/http: TLS handshake timeout

@jingxu97
Contributor

jingxu97 commented Dec 6, 2016 via email

@FeatherKing

@jingxu97 that path doesn't exist; only the CD-ROM for the VM shows up there.

@abrarshivani I did not have disk.EnableUuid set to true. I just set it and will try again. Should this be set in my .vmx automatically?

@abrarshivani
Contributor

@FeatherKing Yes. The path won't appear unless you have this property set.
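For reference, a sketch of setting and verifying this property with govc (assuming govc is installed and configured via GOVC_URL and credentials; the VM name is the worker node from this issue, and the change may require a VM power cycle to take effect):

# Set disk.enableUUID in the VM's ExtraConfig (VM name from this issue):
govc vm.change -vm k8-w2 -e disk.enableUUID=TRUE

# Verify the ExtraConfig entry is present:
govc vm.info -e k8-w2 | grep -i enableuuid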

@FeatherKing

@abrarshivani Setting disk.enableUUID seemed to clear this up in my test cluster, thank you!

@jingxu97
Contributor

jingxu97 commented Dec 7, 2016

Closing this issue since it is solved.

@jingxu97 jingxu97 closed this as completed Dec 7, 2016
@KingJ
Author

KingJ commented Dec 7, 2016

Just to confirm, setting Disk.EnableUuid to True in my worker VM's configuration did allow the disk to successfully mount.
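A sketch for applying the same setting across every worker VM in a cluster (the VM names other than k8-w2 are hypothetical; adjust to your inventory):

# Hypothetical list of worker VMs; k8-w2 is the node from this issue.
for vm in k8-w1 k8-w2 k8-w3; do
  govc vm.change -vm "$vm" -e disk.enableUUID=TRUE
done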
