REPLACED BY #6983 - Fix #2741. Add support for alternate Vagrant providers: ... #6879

Closed · posita wants to merge 9 commits
Conversation

@posita (Contributor) commented Apr 15, 2015

UPDATE: This PR is replaced by #6983 and should not be merged.

...VMware Fusion, VMware Workstation, and Parallels. (Tagging #2741.)

I tried to mirror the Vagrant philosophy and do the right thing in most cases without tweaking configuration (unless you really wanted to override something). The fall-back order is:

If any of these are installed properly (including any required Vagrant plugins), KUBERNETES_PROVIDER=vagrant .../cluster/kube-up.sh should just work. 😁 CAVEAT: I do not have a license for VMware Fusion. I'm pretty sure it will function, but I have only tested Parallels.

To override, you can set DEFAULT_VAGRANT_PROVIDER (e.g., if you have both VMware and Parallels installed, but want to use Parallels):

export KUBERNETES_PROVIDER=vagrant
export DEFAULT_VAGRANT_PROVIDER=parallels
.../cluster/kube-up.sh
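
For illustration, here is a minimal Ruby sketch of how a Vagrantfile could honor such an override (hypothetical handling; not necessarily the exact logic in this PR):

# Hypothetical sketch only -- not the exact logic from this PR.
# Hand the requested provider to Vagrant's own default-provider mechanism,
# unless the user has already chosen one via VAGRANT_DEFAULT_PROVIDER.
if ENV['DEFAULT_VAGRANT_PROVIDER'] && ENV['VAGRANT_DEFAULT_PROVIDER'].nil?
  ENV['VAGRANT_DEFAULT_PROVIDER'] = ENV['DEFAULT_VAGRANT_PROVIDER']
end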

You can override the box (by name):

export KUBERNETES_PROVIDER=vagrant
export DEFAULT_VAGRANT_PROVIDER=parallels
export KUBERNETES_BOX_NAME=rickard-von-essen/opscode_fedora-20 # will fetch from atlas
.../cluster/kube-up.sh

And even specify a version:

export KUBERNETES_PROVIDER=vagrant
export DEFAULT_VAGRANT_PROVIDER=parallels
export KUBERNETES_BOX_NAME=rickard-von-essen/opscode_fedora-20 # will fetch from atlas...
export KUBERNETES_BOX_VERSION=0.3.0 # ...with this version
.../cluster/kube-up.sh

Or specify a URL for the box itself, in which case you now must provide your own name:

export KUBERNETES_PROVIDER=vagrant
export DEFAULT_VAGRANT_PROVIDER=parallels
export KUBERNETES_BOX_NAME=rickard-von-essen-fedora20 # will set the name to this value (with version 0)...
export KUBERNETES_BOX_URL=https://atlas.hashicorp.com/rickard-von-essen/boxes/opscode_fedora-20/versions/0.4.0/providers/parallels.box # ...and download the box from here
.../cluster/kube-up.sh

WARNING: This breaks the existing behavior of KUBERNETES_BOX_URL, which is now ignored unless KUBERNETES_BOX_NAME is also set. Previously the name was fixed as fedora20 for every box (default or specified by URL), irrespective of the underlying OS/version. Now, if a default box is used, the name is set internally (currently only kube-fedora20), but users who specify their own box URL must also name it. To avoid clashes or confusion, it is recommended (but not enforced) that the name distinguish the box from the defaults (e.g., posita-fedora21, kickass-custom-centos, etc.).
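
To make the new behavior concrete, here is a minimal Ruby sketch of how the box settings could be resolved inside the Vagrant.configure block (hypothetical structure; not necessarily the exact code in this PR):

# Hypothetical sketch only -- not the exact logic from this PR.
# (Assumed to live inside the Vagrant.configure block.)
if ENV['KUBERNETES_BOX_NAME'] && !ENV['KUBERNETES_BOX_NAME'].empty?
  config.vm.box         = ENV['KUBERNETES_BOX_NAME']
  config.vm.box_url     = ENV['KUBERNETES_BOX_URL']     if ENV['KUBERNETES_BOX_URL']
  config.vm.box_version = ENV['KUBERNETES_BOX_VERSION'] if ENV['KUBERNETES_BOX_VERSION']
else
  # No name given: fall back to the internal default; a bare
  # KUBERNETES_BOX_URL is ignored in this case.
  config.vm.box = 'kube-fedora20'
end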

This PR also introduces the ability to set the master's memory size independently of the minions' (it is included in this PR rather than a separate one because memory configuration is specific to each Vagrant provider):

export KUBERNETES_PROVIDER=vagrant
export KUBERNETES_MASTER_MEMORY=1024
export KUBERNETES_MINION_MEMORY=2048
.../cluster/kube-up.sh

KUBERNETES_MEMORY is maintained for backward compatibility. Each of the following is equivalent to the previous example:

export KUBERNETES_PROVIDER=vagrant
export KUBERNETES_MEMORY=1024 # sets both
export KUBERNETES_MINION_MEMORY=2048 # overrides minion
.../cluster/kube-up.sh

export KUBERNETES_PROVIDER=vagrant
export KUBERNETES_MEMORY=2048 # sets both
export KUBERNETES_MASTER_MEMORY=1024 # overrides master
.../cluster/kube-up.sh
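
As a rough illustration, the fallback can be expressed in a few lines of Ruby in the Vagrantfile (hypothetical variable names and an assumed 1024 MB default; not necessarily the exact code in this PR):

# Hypothetical sketch only -- not the exact logic from this PR.
# KUBERNETES_MEMORY seeds both roles; the per-role variables override it.
default_memory = (ENV['KUBERNETES_MEMORY'] || 1024).to_i  # 1024 assumed for illustration
master_memory  = (ENV['KUBERNETES_MASTER_MEMORY'] || default_memory).to_i
minion_memory  = (ENV['KUBERNETES_MINION_MEMORY'] || default_memory).to_i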

@googlebot commented:

Thanks for your pull request. It looks like this may be your first contribution to a Google open source project, in which case you'll need to sign a Contributor License Agreement (CLA).

📝 Please visit https://cla.developers.google.com/ to sign.

Once you've signed, please reply here (e.g. I signed it!) and we'll verify. Thanks.


  • If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address. Check your existing CLA data and verify that your email is set on your git commits.
  • If you signed the CLA as a corporation, please let us know the company's name.

@posita (Contributor, Author) commented Apr 15, 2015

Once you've signed, please reply here (e.g. I signed it!) and we'll verify. Thanks.

I signed it!

@googlebot commented:

CLAs look good, thanks!

@erictune (Member) commented:

@derekwaynecarr, you touched the Vagrantfile once. Are you a good reviewer, or can you suggest one?

@derekwaynecarr (Member) commented:

I can review tomorrow.

Thanks for the enhancement.

Sent from my iPhone

On Apr 15, 2015, at 7:29 PM, Eric Tune notifications@github.com wrote:

Assigned #6879 to @derekwaynecarr.



@posita (Contributor, Author) commented Apr 16, 2015

Ignore this comment. Original and updated content preserved below.

For some reason, I don't have any visibility into the Shippable failure from my latest commits. I haven't been able to reproduce any failures on my end with 9fca1cf or 50d80de (but I can back them out if necessary). The idea was to be able to support specific versions for boxes hosted (e.g.) via vagrantcloud.com.

UPDATE #1: Scratch that, I was able to re-run the failing Shippable tests directly from my fork/branch (copy of logs). Forgive me (I'm not that familiar with Go, or Shippable for that matter), but I'm not sure I understand the error. It looks like there are a bunch of missing-package failures during integration testing that have nothing to do with my latest check-in?

UPDATE #2: I reverted to 468fb89 in a separate branch (posita/kubernetes@vmware-parallels-support-2741-shippable-test), and that failed (even though the exact same commit passed less than 6 hours prior).

UPDATE #3: I merged posita/kubernetes@vmware-parallels-support-2741-shippable-test with GoogleCloudPlatform/kubernetes@master and it still failed. Then I ran posita/kubernetes@master, and even that failed, so I'm pretty sure it's not me? Either that, or it's something with my own Shippable account configuration and I'm not seeing the same error? 😕

posita added 3 commits April 15, 2015 21:23
…KUBERNETES_{MASTER,MINION}_MEMORY environment variables (with reversion to KUBERNETES_MEMORY).
…rantfile` so that `vagrant ssh minion-N` actually works (possibly related to #4633?). Collapse memory override fallback to more easily allow different default values for master/minion (if that ever becomes desirable).
@derekwaynecarr (Member) commented:

Shippable is flaky; your PR is safe from Shippable.

I am trying to resolve a few other flaky issues with the Vagrant setup right now, stemming from other changes on the system that run more resources in pods, but once that is stabilized I will look to merge this.

@posita (Contributor, Author) commented Apr 17, 2015

It looks like I screwed something up trying to resolve the merge conflicts with master (there shouldn't be so many changes associated with this PR). I'm closing this in favor of #6983 (originally #6978), which is cleaner.

@posita posita closed this Apr 17, 2015
@posita posita deleted the vmware-parallels-support-2741 branch April 17, 2015 14:05
@posita posita changed the title READY FOR REVIEW - Fix #2741. Add support for alternate Vagrant providers: ... REPLACED BY #6978 - Fix #2741. Add support for alternate Vagrant providers: ... Apr 17, 2015
@posita posita changed the title REPLACED BY #6978 - Fix #2741. Add support for alternate Vagrant providers: ... REPLACED BY #6983 - Fix #2741. Add support for alternate Vagrant providers: ... Apr 17, 2015