VSphere cloud provider code refactoring #49164
Conversation
Hi @BaluDontu. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@brendandburns @saad-ali @jingxu97: Can any one of you run the tests? ("ok-to-test")
/ok-to-test
/ok-to-test
Force-pushed from ae8077d to 8f80be8
Force-pushed from 8f80be8 to 9198ec0
I have resolved the conflicts and rebased the branch onto the latest.
Need a fix in vsphere.go -> newVSphere()
```go
// Create context
ctx, cancel := context.WithCancel(context.TODO())
defer cancel()
err = vSphereConn.Connect(ctx)
```
On the worker node we should not connect to vCenter if `vm-name` is specified in the `vsphere.conf` file.
Used the following vsphere.conf file on Kubernetes 1.7.1 and restarted the kubelet. The node was able to register successfully.
```ini
# cat vsphere.conf
[Global]
vm-name = "node1"
```
When the same file is used with this refactored code, the kubelet fails to restart with the following errors:
{"log":"E0721 23:34:39.438996 11722 connection.go:75] Failed to create new client. err: Post https:///sdk: http: no Host in request URL\n","stream":"stderr","time":"2017-07-21T23:34:39.439139608Z"}
{"log":"E0721 23:34:39.439033 11722 connection.go:41] Failed to create govmomi client. err: Post https:///sdk: http: no Host in request URL\n","stream":"stderr","time":"2017-07-21T23:34:39.439156709Z"}
{"log":"E0721 23:34:39.439041 11722 vsphere.go:202] Failed to connect to vSphere\n","stream":"stderr","time":"2017-07-21T23:34:39.439160254Z"}
{"log":"Error: failed to run Kubelet: could not init cloud provider \"vsphere\": Post https:///sdk: http: no Host in request URL\n","stream":"stderr","time":"2017-07-21T23:34:39.439163376Z"}
We should fix this as part of this PR.
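The empty-host URL in those logs follows directly from a worker-only config: with no vCenter server configured, the SDK endpoint is built as https:///sdk, which Go's HTTP client refuses to send. A minimal standalone reproduction with plain net/http (not the actual provider code):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// With only vm-name in vsphere.conf, no vCenter server address is
	// available, so the SDK endpoint URL ends up with an empty host.
	server := "" // hypothetical: what an unset server setting yields
	endpoint := fmt.Sprintf("https://%s/sdk", server)

	// net/http parses the URL but refuses to send a request without a
	// host, producing the same error seen in the kubelet logs above.
	_, err := http.Post(endpoint, "text/xml", nil)
	fmt.Println(err) // Post "https:///sdk": http: no Host in request URL
}
```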
I fixed it in the latest commit. Now we don't connect to vCenter on worker nodes if vm-name is available in the vsphere.conf file.
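A minimal sketch of that guard, with hypothetical names (config, VMName, vcConn) standing in for the PR's actual types, which aren't shown in this thread:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// config and vcConn are illustrative stand-ins, not the real provider types.
type config struct{ VMName, VCenterServer string }

type vcConn struct{ host string }

// Connect fails when no vCenter host is configured, mirroring the
// "no Host in request URL" failure mode from the logs above.
func (c *vcConn) Connect(ctx context.Context) error {
	if c.host == "" {
		return errors.New("Post https:///sdk: http: no Host in request URL")
	}
	return nil // a real implementation would dial vCenter here
}

// newVSphere sketches the fix: on worker nodes, where vm-name is set in
// vsphere.conf, skip the vCenter connection entirely during init.
func newVSphere(cfg config) error {
	if cfg.VMName != "" {
		return nil // worker node: VM name already known, no vCenter call
	}
	ctx, cancel := context.WithCancel(context.TODO())
	defer cancel()
	return (&vcConn{host: cfg.VCenterServer}).Connect(ctx)
}

func main() {
	fmt.Println(newVSphere(config{VMName: "node1"})) // <nil>: connection skipped
	fmt.Println(newVSphere(config{}))                // fails: no host configured
}
```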
```go
if err != nil {
	glog.Errorf("Failed to Profile ID by name: %s. err: %+v", storagePolicyName, err)
	return nil, err
}
```
`Failed to Profile ID by name` => `Failed to get Profile ID by name`
I have addressed it in the latest commit.
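For reference, a sketch of the presumably corrected error path: the log line precedes the return (a log after return would be unreachable) and uses the wording from the review. log.Printf stands in for glog.Errorf here, and profileIDByName is a hypothetical stub for the lookup in the quoted diff:

```go
package main

import (
	"errors"
	"log"
)

// profileIDByName is a stub that always fails so the error path runs;
// the real code queries vCenter's storage policy (pbm) service.
func profileIDByName(name string) (string, error) {
	return "", errors.New("no pbm profile found")
}

func getProfileID(storagePolicyName string) (string, error) {
	id, err := profileIDByName(storagePolicyName)
	if err != nil {
		// Log first, then return; wording per the review comment.
		log.Printf("Failed to get Profile ID by name: %s. err: %+v", storagePolicyName, err)
		return "", err
	}
	return id, nil
}

func main() {
	if _, err := getProfileID("gold-policy"); err != nil {
		log.Printf("getProfileID returned: %v", err)
	}
}
```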
Force-pushed from 9198ec0 to b55543b
Fixed bazel errors and addressed review comments from @divyenpatel.
Force-pushed from f33d537 to 1c00005
/retest
@brendandburns @saad-ali @jingxu97: Can one of you review this PR?
/retest
LGTM
/assign @jingxu97
This PR was internally reviewed by the vSphere Cloud Provider team. Here are the reference PRs: vmware-archive#154
LGTM
/lgtm
@divyenpatel: changing LGTM is restricted to assignees, and only kubernetes org members may be assigned issues. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/approve
@brendandburns @davidopp @jingxu97 Can you please approve this PR? All tests have passed; the PR needs the approval label.
LGTM
/lgtm
@saad-ali we need approval from pkg/volume/vsphere_volume/OWNERS
/approve
/approve no-issue
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: BaluDontu, divyenpatel, jingxu97, kerneltime, luomiao

Associated issue requirement bypassed by: kerneltime

The full list of commands accepted by this bot can be found here.
/test all [submit-queue is verifying that this PR is safe to merge]
Automatic merge from submit-queue
Automatic merge from submit-queue

Attempt to Attach Volume fails for very first time on vSphere in 1.6 release

Before every attach, the vSphere cloud provider checks whether a SCSI controller is present on the VM. If not, it creates one and attaches the disk to that SCSI controller; if one is already present, it uses that controller. The very first time, when no SCSI controller is present on the VM, we create one and retrieve it so the disk can be created on that controller. But in release 1.6, after successfully creating the SCSI controller, we were not assigning it back to the existing "scsicontroller" variable. Because of this, the very first attach of the disk would fail. This problem is not observed on master, as it was taken care of in the vSphere cloud provider refactoring - #49164

@luomiao @divyenpatel @rohitjogvmw

```release-note
vSphere: Fix attach volume failing on the first try.
```
Automatic merge from submit-queue

Attempt to Attach Volume fails for very first time on vSphere in 1.7 release

Before every attach, the vSphere cloud provider checks whether a SCSI controller is present on the VM. If not, it creates one and attaches the disk to that SCSI controller; if one is already present, it uses that controller. The very first time, when no SCSI controller is present on the VM, we create one and retrieve it so the disk can be created on that controller. But in release 1.7, after successfully creating the SCSI controller, we were not assigning it back to the existing "scsicontroller" variable. Because of this, the very first attach of the disk would fail. This problem is not observed on master, as it was taken care of in the vSphere cloud provider refactoring - #49164

@luomiao @divyenpatel @rohitjogvmw

```release-note
vSphere: Fix attach volume failing on the first try.
```
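Since both backports describe the same missed assignment, here is a toy illustration of the bug and the fix, using illustrative names rather than the real cloud-provider types:

```go
package main

import "fmt"

// scsiController and vm are toy models, not the provider's actual types.
type scsiController struct{ key int }

type vm struct{ controller *scsiController }

// createSCSIController creates a controller on the VM and returns it.
func createSCSIController(v *vm) *scsiController {
	v.controller = &scsiController{key: 1000}
	return v.controller
}

func attachDisk(v *vm, c *scsiController) error {
	if c == nil {
		return fmt.Errorf("attach failed: no SCSI controller")
	}
	return nil
}

func main() {
	// Buggy flow (1.6/1.7): the newly created controller is never
	// assigned back, so the very first attach sees a nil controller.
	v := &vm{}
	controller := v.controller // nil on the first attach
	if controller == nil {
		createSCSIController(v) // bug: return value discarded
	}
	fmt.Println(attachDisk(v, controller)) // attach failed: no SCSI controller

	// Fixed flow (as in this refactoring): assign the created
	// controller back before attaching.
	v2 := &vm{}
	controller2 := v2.controller
	if controller2 == nil {
		controller2 = createSCSIController(v2)
	}
	fmt.Println(attachDisk(v2, controller2)) // <nil>
}
```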
This PR tracks the vSphere Cloud Provider code refactoring, which includes the following changes.
@divyenpatel @rohitjogvmw @luomiao